Unnamed: 0,id,type,created_at,repo,repo_url,action,title,labels,body,index,text_combine,label,text,binary_label
620985,19575120983.0,IssuesEvent,2022-01-04 14:38:56,CDCgov/prime-reportstream,https://api.github.com/repos/CDCgov/prime-reportstream,reopened,IL - OBX-23.1 Org Name truncate to 50 chars,onboarding-ops blocked receiver High Priority support,"Illinois is having a problem with the length of the organization name in OBX 23.1. The maximum character sizing limit for this subfield is 50, and quite often more than 50 characters are being sent.

",1.0,"IL - OBX-23.1 Org Name truncate to 50 chars - Illinois is having a problem with the length of the organization name in OBX 23.1. The maximum character sizing limit for this subfield is 50, and quite often more than 50 characters are being sent.

",0,il obx org name truncate to chars illinois is having a problem with the length of the organization name in obx the maximum character sizing limit for this subfield is and quite often more than characters are being sent ,0
1493,6461248458.0,IssuesEvent,2017-08-16 07:35:07,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Yum can't downgrade packages,affects_2.3 bug_report waiting_on_maintainer,"As of 1.9.1, the yum module can't downgrade packages.
I saw a 77e66cb01e9121e5fcbdc08e87d248c21bdad497 attempting to fix this, but it had to be reverted in d77f1976a6cbf30cd2284817e92a838f7f7e47ef .
(I don't know the story behind the revert, and I couldn't find an open bug or public discussion.. so I'm opening an issue!)
",True,"Yum can't downgrade packages - As of 1.9.1, the yum module can't downgrade packages.
I saw a 77e66cb01e9121e5fcbdc08e87d248c21bdad497 attempting to fix this, but it had to be reverted in d77f1976a6cbf30cd2284817e92a838f7f7e47ef .
(I don't know the story behind the revert, and I couldn't find an open bug or public discussion.. so I'm opening an issue!)
",1,yum can t downgrade packages as of the yum module can t downgrade packages i saw a attempting to fix this but it had to be reverted in i don t know the story behind the revert and i couldn t find an open bug or public discussion so i m opening an issue ,1
67793,28048486038.0,IssuesEvent,2023-03-29 02:21:00,MicrosoftDocs/live-share,https://api.github.com/repos/MicrosoftDocs/live-share,closed,[VS][C++] Error list not working,bug client: vs area: language services needs-repro,Error list on the guest in a Live Share session is not synced with host's error list.,1.0,[VS][C++] Error list not working - Error list on the guest in a Live Share session is not synced with host's error list.,0, error list not working error list on the guest in a live share session is not synced with host s error list ,0
683072,23367631212.0,IssuesEvent,2022-08-10 16:43:53,ASSETS-Conference/assets2022,https://api.github.com/repos/ASSETS-Conference/assets2022,closed,Adding table of content for long pages,medium priority,Potentially using https://tscanlin.github.io/tocbot/ for the long pages. Look into it and check if it is accessible. ,1.0,Adding table of content for long pages - Potentially using https://tscanlin.github.io/tocbot/ for the long pages. Look into it and check if it is accessible. ,0,adding table of content for long pages potentially using for the long pages look into it and check if it is accessible ,0
5219,26479630348.0,IssuesEvent,2023-01-17 13:49:16,OpenRefine/OpenRefine,https://api.github.com/repos/OpenRefine/OpenRefine,closed,Simplify the CI/CD workflows to use the builtin Maven cache feature in setup-java action that now supports it,bug maintainability CI/CD,"We can now remove the actions/cache steps and replace them with a config setting under `setup-java` action. It automatically uses actions/cache already under the hood and will create the fileHash for the Maven `**/pom.xml` automatically.
Documention: https://github.com/actions/setup-java/blob/main/README.md#caching-packages-dependencies
Usage such as:
```
steps:
- uses: actions/checkout@v3
- uses: actions/setup-java@v3
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Build with Maven
run: mvn -B package --file pom.xml
```
### To Reproduce
Steps to reproduce the behavior:
1. Run a PR workflow.
### Current Results
Extra steps not need for `actions/cache` separately.
### Expected Behavior
cleaner and less verbose build/test output in logs.
",True,"Simplify the CI/CD workflows to use the builtin Maven cache feature in setup-java action that now supports it - We can now remove the actions/cache steps and replace them with a config setting under `setup-java` action. It automatically uses actions/cache already under the hood and will create the fileHash for the Maven `**/pom.xml` automatically.
Documention: https://github.com/actions/setup-java/blob/main/README.md#caching-packages-dependencies
Usage such as:
```
steps:
- uses: actions/checkout@v3
- uses: actions/setup-java@v3
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Build with Maven
run: mvn -B package --file pom.xml
```
### To Reproduce
Steps to reproduce the behavior:
1. Run a PR workflow.
### Current Results
Extra steps not need for `actions/cache` separately.
### Expected Behavior
cleaner and less verbose build/test output in logs.
",1,simplify the ci cd workflows to use the builtin maven cache feature in setup java action that now supports it we can now remove the actions cache steps and replace them with a config setting under setup java action it automatically uses actions cache already under the hood and will create the filehash for the maven pom xml automatically documention usage such as steps uses actions checkout uses actions setup java with distribution temurin java version cache maven name build with maven run mvn b package file pom xml to reproduce steps to reproduce the behavior run a pr workflow current results extra steps not need for actions cache separately expected behavior cleaner and less verbose build test output in logs ,1
458,3636010678.0,IssuesEvent,2016-02-12 00:24:04,antigenomics/vdjdb-db,https://api.github.com/repos/antigenomics/vdjdb-db,closed,"Combine extra columns to a single ""comment"" column",maintainance,"- Covert table of additional columns to a single column with JSON data. For example
tissue | cell type
--------|-----------
``spleen`` | ``cd8``
changes to
comment|
--------|
``{ ""tissue"":""spleen"", ""cell type"":""cd8"" }``|
- Do it automatically upon database assembly.",True,"Combine extra columns to a single ""comment"" column - - Covert table of additional columns to a single column with JSON data. For example
tissue | cell type
--------|-----------
``spleen`` | ``cd8``
changes to
comment|
--------|
``{ ""tissue"":""spleen"", ""cell type"":""cd8"" }``|
- Do it automatically upon database assembly.",1,combine extra columns to a single comment column covert table of additional columns to a single column with json data for example tissue cell type spleen changes to comment tissue spleen cell type do it automatically upon database assembly ,1
17607,4174961541.0,IssuesEvent,2016-06-21 15:29:06,telerik/kendo-ui-core,https://api.github.com/repos/telerik/kendo-ui-core,opened,Document limited touch gesture support when multiple Grid features rely on it,Documentation,1043467 (Drawer + virtual Grid),1.0,Document limited touch gesture support when multiple Grid features rely on it - 1043467 (Drawer + virtual Grid),0,document limited touch gesture support when multiple grid features rely on it drawer virtual grid ,0
45435,12799854314.0,IssuesEvent,2020-07-02 16:02:57,snowplow/snowplow-android-tracker,https://api.github.com/repos/snowplow/snowplow-android-tracker,closed,Fix importing of kotlin on gradle,priority:medium status:completed type:defect,"This project is written 100% in Java, however the SDK ships with a dependency on [the Kotlin stdlib](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L85) and [Kotlin Android extensions](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L7). Kotlin was added in [this PR](https://github.com/snowplow/snowplow-android-tracker/pull/358), but seems unrelated?
Also as an aside it would be great if this library added nullability annotations to make Kotlin interoperability nicer! I can open up a separate issue for this if you'd prefer.",1.0,"Fix importing of kotlin on gradle - This project is written 100% in Java, however the SDK ships with a dependency on [the Kotlin stdlib](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L85) and [Kotlin Android extensions](https://github.com/snowplow/snowplow-android-tracker/blob/master/snowplow-tracker/build.gradle#L7). Kotlin was added in [this PR](https://github.com/snowplow/snowplow-android-tracker/pull/358), but seems unrelated?
Also as an aside it would be great if this library added nullability annotations to make Kotlin interoperability nicer! I can open up a separate issue for this if you'd prefer.",0,fix importing of kotlin on gradle this project is written in java however the sdk ships with a dependency on and kotlin was added in but seems unrelated also as an aside it would be great if this library added nullability annotations to make kotlin interoperability nicer i can open up a separate issue for this if you d prefer ,0
15153,5071987968.0,IssuesEvent,2016-12-26 18:04:53,exercism/xjava,https://api.github.com/repos/exercism/xjava,closed,raindrops: make test failures easier to troubleshoot,code good first patch,"[`raindrops`](https://github.com/exercism/xjava/blob/master/exercises/raindrops/src/test/java/RaindropsTest.java) uses the JUnit [`Parameterized`](https://github.com/junit-team/junit4/wiki/parameterized-tests) test runner. This was done to make the test more compact (and once you learn how the mechanism works, easier to read).
However, when a test fails, the error message does not indicate which value failed. This makes it really difficult to know why the test failed.
Test failures should clearly indicate what failed.
**To Do:**
- [x] ensure that this exercise is using JUnit 4.12 or later
- [ ] add a format string to the `@Parameters` annotation.
_(ref: #147)_
",1.0,"raindrops: make test failures easier to troubleshoot - [`raindrops`](https://github.com/exercism/xjava/blob/master/exercises/raindrops/src/test/java/RaindropsTest.java) uses the JUnit [`Parameterized`](https://github.com/junit-team/junit4/wiki/parameterized-tests) test runner. This was done to make the test more compact (and once you learn how the mechanism works, easier to read).
However, when a test fails, the error message does not indicate which value failed. This makes it really difficult to know why the test failed.
Test failures should clearly indicate what failed.
**To Do:**
- [x] ensure that this exercise is using JUnit 4.12 or later
- [ ] add a format string to the `@Parameters` annotation.
_(ref: #147)_
",0,raindrops make test failures easier to troubleshoot uses the junit test runner this was done to make the test more compact and once you learn how the mechanism works easier to read however when a test fails the error message does not indicate which value failed this makes it really difficult to know why the test failed test failures should clearly indicate what failed to do ensure that this exercise is using junit or later add a format string to the parameters annotation ref ,0
162666,12685190237.0,IssuesEvent,2020-06-20 02:42:24,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: no volatility for cast sqlsmith/setup=empty/setting=default failed,C-test-failure O-roachtest O-robot branch-master release-blocker,"[(roachtest).sqlsmith/setup=empty/setting=default failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2013639&tab=buildLog) on [master@9eea0be192499fa7d29b5eca5ed14173982b25f0](https://github.com/cockroachdb/cockroach/commits/9eea0be192499fa7d29b5eca5ed14173982b25f0):
```
)
),
(('41 years 10 mons 278 days 15:41:21.695483':::INTERVAL, '':::STRING)),
(('37 years 127 days 10:37:18.966288':::INTERVAL, e'\x17>!`o':::STRING)),
(('62 years 777 days 20:47:52.335967':::INTERVAL, e'O.\x04\x1a\x18\x11I':::STRING))
)
AS tab_165 (col_284)
RIGHT JOIN (VALUES ('06:08:31.712812':::TIME)) AS tab_166 (col_285)
FULL JOIN (
VALUES
('01:13:19.05808+13:29:00':::TIMETZ),
('06:13:39.013972+14:52:00':::TIMETZ),
('01:10:09.06268-05:06:00':::TIMETZ),
('22:21:23.701203+10:28:00':::TIMETZ),
(NULL),
(
COALESCE(
'15:24:24.193463+01:58:00':::TIMETZ,
'06:47:25.508601+10:50:00':::TIMETZ
)
)
)
AS tab_167 (col_286) ON true ON NULL,
(
VALUES
(
COALESCE(3565973807:::OID, 3506951745:::OID),
ARRAY['-45 years -3 mons -734 days -22:27:05.203906':::INTERVAL,'57 years 9 mons 464 days 02:58:02.027105':::INTERVAL,'-71 years -1 mons -219 days -15:52:43.39243':::INTERVAL,'290 years':::INTERVAL,'46 years 11 mons 740 days 05:39:54.14459':::INTERVAL,'12 years 438 days 11:51:30.856446':::INTERVAL]
),
(
259206075:::OID,
(NULL::INTERVAL[] || ARRAY['-29 years -6 mons -142 days -16:12:31.810018':::INTERVAL,'58 years 11 mons 529 days 06:07:51.173242':::INTERVAL,'52 years 930 days 17:33:56.673499':::INTERVAL,'1 day':::INTERVAL]::INTERVAL[])::INTERVAL[]
),
(
55508376:::OID,
ARRAY['10 years 3 mons 845 days 16:45:18.378939':::INTERVAL,'67 years 5 mons 761 days 23:18:58.917765':::INTERVAL]
),
(1736996154:::OID, ARRAY[]:::INTERVAL[])
)
AS tab_168 (col_287, col_288)
LIMIT
1:::INT8
)
)
)
AS tab_169 (col_290, col_291)
WHERE
false
LIMIT
39:::INT8;
```
More
Artifacts: [/sqlsmith/setup=empty/setting=default](https://teamcity.cockroachdb.com/viewLog.html?buildId=2013639&tab=artifacts#/sqlsmith/setup=empty/setting=default)
Related:
- #47541 roachtest: sqlsmith/setup=empty/setting=default failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dempty%2Fsetting%3Ddefault.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",2.0,"roachtest: no volatility for cast sqlsmith/setup=empty/setting=default failed - [(roachtest).sqlsmith/setup=empty/setting=default failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2013639&tab=buildLog) on [master@9eea0be192499fa7d29b5eca5ed14173982b25f0](https://github.com/cockroachdb/cockroach/commits/9eea0be192499fa7d29b5eca5ed14173982b25f0):
```
)
),
(('41 years 10 mons 278 days 15:41:21.695483':::INTERVAL, '':::STRING)),
(('37 years 127 days 10:37:18.966288':::INTERVAL, e'\x17>!`o':::STRING)),
(('62 years 777 days 20:47:52.335967':::INTERVAL, e'O.\x04\x1a\x18\x11I':::STRING))
)
AS tab_165 (col_284)
RIGHT JOIN (VALUES ('06:08:31.712812':::TIME)) AS tab_166 (col_285)
FULL JOIN (
VALUES
('01:13:19.05808+13:29:00':::TIMETZ),
('06:13:39.013972+14:52:00':::TIMETZ),
('01:10:09.06268-05:06:00':::TIMETZ),
('22:21:23.701203+10:28:00':::TIMETZ),
(NULL),
(
COALESCE(
'15:24:24.193463+01:58:00':::TIMETZ,
'06:47:25.508601+10:50:00':::TIMETZ
)
)
)
AS tab_167 (col_286) ON true ON NULL,
(
VALUES
(
COALESCE(3565973807:::OID, 3506951745:::OID),
ARRAY['-45 years -3 mons -734 days -22:27:05.203906':::INTERVAL,'57 years 9 mons 464 days 02:58:02.027105':::INTERVAL,'-71 years -1 mons -219 days -15:52:43.39243':::INTERVAL,'290 years':::INTERVAL,'46 years 11 mons 740 days 05:39:54.14459':::INTERVAL,'12 years 438 days 11:51:30.856446':::INTERVAL]
),
(
259206075:::OID,
(NULL::INTERVAL[] || ARRAY['-29 years -6 mons -142 days -16:12:31.810018':::INTERVAL,'58 years 11 mons 529 days 06:07:51.173242':::INTERVAL,'52 years 930 days 17:33:56.673499':::INTERVAL,'1 day':::INTERVAL]::INTERVAL[])::INTERVAL[]
),
(
55508376:::OID,
ARRAY['10 years 3 mons 845 days 16:45:18.378939':::INTERVAL,'67 years 5 mons 761 days 23:18:58.917765':::INTERVAL]
),
(1736996154:::OID, ARRAY[]:::INTERVAL[])
)
AS tab_168 (col_287, col_288)
LIMIT
1:::INT8
)
)
)
AS tab_169 (col_290, col_291)
WHERE
false
LIMIT
39:::INT8;
```
More
Artifacts: [/sqlsmith/setup=empty/setting=default](https://teamcity.cockroachdb.com/viewLog.html?buildId=2013639&tab=artifacts#/sqlsmith/setup=empty/setting=default)
Related:
- #47541 roachtest: sqlsmith/setup=empty/setting=default failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dempty%2Fsetting%3Ddefault.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",0,roachtest no volatility for cast sqlsmith setup empty setting default failed on years mons days interval string years days interval e o string years days interval e o string as tab col right join values time as tab col full join values timetz timetz timetz timetz null coalesce timetz timetz as tab col on true on null values coalesce oid oid array oid null interval array interval interval oid array oid array interval as tab col col limit as tab col col where false limit more artifacts related roachtest sqlsmith setup empty setting default failed powered by ,0
5707,8367726522.0,IssuesEvent,2018-10-04 13:05:10,SAEONData/ckanext-metadata,https://api.github.com/repos/SAEONData/ckanext-metadata,closed,Metadata record uniqueness,requirement,"An attempt to create a metadata record with the same DOI and download link as an existing one, should be processed as an update to that existing record.
A match on DOI but not on download link, or vice versa, should be interpreted as an error.",1.0,"Metadata record uniqueness - An attempt to create a metadata record with the same DOI and download link as an existing one, should be processed as an update to that existing record.
A match on DOI but not on download link, or vice versa, should be interpreted as an error.",0,metadata record uniqueness an attempt to create a metadata record with the same doi and download link as an existing one should be processed as an update to that existing record a match on doi but not on download link or vice versa should be interpreted as an error ,0
1671,6574093737.0,IssuesEvent,2017-09-11 11:27:28,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_network: unable to deal with network IDs,affects_2.2 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `docker_network`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/schwarz/code/infrastructure/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Debian GNU/Linux
##### SUMMARY
`docker` allows addressing networks by ID. Ansible should do the same. For the sake of consistency with the `docker` CLI and other modules.
##### STEPS TO REPRODUCE
``` sh
$ docker network create foo
f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93
$ ansible -m docker_network -a 'name=f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93 state=absent' localhost
```
##### EXPECTED RESULTS
The output should be the same as from `ansible -m docker_network -a 'name=foo state=absent' localhost`.
```
localhost | SUCCESS => {
""actions"": [
""Removed network f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93""
],
""changed"": true
}
```
##### ACTUAL RESULTS
Instead no network is deleted.
```
localhost | SUCCESS => {
""actions"": [],
""changed"": false
}
```
",True,"docker_network: unable to deal with network IDs - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `docker_network`
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/schwarz/code/infrastructure/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Debian GNU/Linux
##### SUMMARY
`docker` allows addressing networks by ID. Ansible should do the same. For the sake of consistency with the `docker` CLI and other modules.
##### STEPS TO REPRODUCE
``` sh
$ docker network create foo
f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93
$ ansible -m docker_network -a 'name=f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93 state=absent' localhost
```
##### EXPECTED RESULTS
The output should be the same as from `ansible -m docker_network -a 'name=foo state=absent' localhost`.
```
localhost | SUCCESS => {
""actions"": [
""Removed network f22618292b2d841267b21a1fe9629ecbe2f4b4262d9baf080491f53846465f93""
],
""changed"": true
}
```
##### ACTUAL RESULTS
Instead no network is deleted.
```
localhost | SUCCESS => {
""actions"": [],
""changed"": false
}
```
",1,docker network unable to deal with network ids issue type bug report component name docker network ansible version ansible config file home schwarz code infrastructure ansible cfg configured module search path default w o overrides configuration n a os environment debian gnu linux summary docker allows addressing networks by id ansible should do the same for the sake of consistency with the docker cli and other modules steps to reproduce sh docker network create foo ansible m docker network a name state absent localhost expected results the output should be the same as from ansible m docker network a name foo state absent localhost localhost success actions removed network changed true actual results instead no network is deleted localhost success actions changed false ,1
38315,5173562276.0,IssuesEvent,2017-01-18 16:22:47,ngageoint/hootenanny-ui,https://api.github.com/repos/ngageoint/hootenanny-ui,closed,Issue with Cookie Cutter Conflation,Category: Test Identified During Regression Test Status: Ready for Test Type: Bug,"Attempted the following method:
**Cookie Cutter & Horizontal**
For this example we’ll need to create two custom translations, one for the DC Street Centerline Data* described in and a second simple translation to ensure that the OSM highway data for DC maintains the correct osm tags.
Ingest DC Street datasets using the recently created custom translation files. Note the different Translation Schema files used to import each dataset.
- district_of_columbia_highway.zip
- Street_Centerline_Light.shp
Return to Map and select the Street Centerlines Light as the Reference Dataset, dc highway osm as the Secondary dataset.
Click ‘Conflate’
Change the value for Type to Cookie Cutter & Horizontal.
Hit Conflate. Note the conflation time will vary depending on the specs of the machine. This example took about 10-15 min to run locally.
50+ reviews should appear. ----> Instead of launching into review mode, it kicks back an error (will be happy to include the log upon request due to length)
Hypothesized it could be any issue with the data, but I attempted the above cookie cut conflation on Hoot NOME with zero issues. Would somebody mind trying to replicate the results on hoot release? ",3.0,"Issue with Cookie Cutter Conflation - Attempted the following method:
**Cookie Cutter & Horizontal**
For this example we’ll need to create two custom translations, one for the DC Street Centerline Data* described in and a second simple translation to ensure that the OSM highway data for DC maintains the correct osm tags.
Ingest DC Street datasets using the recently created custom translation files. Note the different Translation Schema files used to import each dataset.
- district_of_columbia_highway.zip
- Street_Centerline_Light.shp
Return to Map and select the Street Centerlines Light as the Reference Dataset, dc highway osm as the Secondary dataset.
Click ‘Conflate’
Change the value for Type to Cookie Cutter & Horizontal.
Hit Conflate. Note the conflation time will vary depending on the specs of the machine. This example took about 10-15 min to run locally.
50+ reviews should appear. ----> Instead of launching into review mode, it kicks back an error (will be happy to include the log upon request due to length)
Hypothesized it could be any issue with the data, but I attempted the above cookie cut conflation on Hoot NOME with zero issues. Would somebody mind trying to replicate the results on hoot release? ",0,issue with cookie cutter conflation attempted the following method cookie cutter horizontal for this example we’ll need to create two custom translations one for the dc street centerline data described in and a second simple translation to ensure that the osm highway data for dc maintains the correct osm tags ingest dc street datasets using the recently created custom translation files note the different translation schema files used to import each dataset district of columbia highway zip street centerline light shp return to map and select the street centerlines light as the reference dataset dc highway osm as the secondary dataset click ‘conflate’ change the value for type to cookie cutter horizontal hit conflate note the conflation time will vary depending on the specs of the machine this example took about min to run locally reviews should appear instead of launching into review mode it kicks back an error will be happy to include the log upon request due to length hypothesized it could be any issue with the data but i attempted the above cookie cut conflation on hoot nome with zero issues would somebody mind trying to replicate the results on hoot release ,0
1513,6543951398.0,IssuesEvent,2017-09-03 09:02:53,cucumber/aruba,https://api.github.com/repos/cucumber/aruba,closed,Improve branch-model,needs-feedback/by-maintainer,"## Summary
Improve branch-model to make contributing easier.
## Expected Behavior
Taken from [here](https://gitter.im/cucumber/aruba?at=56fd558f76b6f9de194cefef).
it's mostly bugfixes and new features (like Docker), so there shouldn't be any conflicts. If that's an issue, it's best to switch master to 1.x, branch off 0.14 for backporting, and just focus on releasing 1.x ASAP. It's no problem to then work on 2.x a month from now - no one will complain because of ""too many major releases"". Aruba is complex, so learning is expected. And if learning is reflected in ""multiple major releases"", that's just an indication of how much learning (or rework) was needed, nothing else. I'm guessing it's actually more useful to just skip to 2.x (master - without releasing), branch off 1.x as unreleased as ""work in progress"" (too keep the branches). E.g. I'm (as a contributor) only interested in master and not backward compatibility or deprecations - so it makes no sense to add an artificial ""burden"" on master. You can always work on a ""transition release"" - but only if there's a genuine need in the community. Master branch should be optimized for quickly accepting PRs. (If PRs don't have priority over other work, contributing feels very discouraging - as if there's a ton of ""bureaucracy""). It doesn't make sense to expect contributors to make backports of their own fixes (that they don't need themselves).
## Current Behavior
It takes a long time to release a major release.
## Possible Solution
Release faster.
",True,"Improve branch-model - ## Summary
Improve branch-model to make contributing easier.
## Expected Behavior
Taken from [here](https://gitter.im/cucumber/aruba?at=56fd558f76b6f9de194cefef).
it's mostly bugfixes and new features (like Docker), so there shouldn't be any conflicts. If that's an issue, it's best to switch master to 1.x, branch off 0.14 for backporting, and just focus on releasing 1.x ASAP. It's no problem to then work on 2.x a month from now - no one will complain because of ""too many major releases"". Aruba is complex, so learning is expected. And if learning is reflected in ""multiple major releases"", that's just an indication of how much learning (or rework) was needed, nothing else. I'm guessing it's actually more useful to just skip to 2.x (master - without releasing), branch off 1.x as unreleased as ""work in progress"" (too keep the branches). E.g. I'm (as a contributor) only interested in master and not backward compatibility or deprecations - so it makes no sense to add an artificial ""burden"" on master. You can always work on a ""transition release"" - but only if there's a genuine need in the community. Master branch should be optimized for quickly accepting PRs. (If PRs don't have priority over other work, contributing feels very discouraging - as if there's a ton of ""bureaucracy""). It doesn't make sense to expect contributors to make backports of their own fixes (that they don't need themselves).
## Current Behavior
It takes a long time to release a major release.
## Possible Solution
Release faster.
",1,improve branch model summary improve branch model to make contributing easier expected behavior taken from it s mostly bugfixes and new features like docker so there shouldn t be any conflicts if that s an issue it s best to switch master to x branch off for backporting and just focus on releasing x asap it s no problem to then work on x a month from now no one will complain because of too many major releases aruba is complex so learning is expected and if learning is reflected in multiple major releases that s just an indication of how much learning or rework was needed nothing else i m guessing it s actually more useful to just skip to x master without releasing branch off x as unreleased as work in progress too keep the branches e g i m as a contributor only interested in master and not backward compatibility or deprecations so it makes no sense to add an artificial burden on master you can always work on a transition release but only if there s a genuine need in the community master branch should be optimized for quickly accepting prs if prs don t have priority over other work contributing feels very discouraging as if there s a ton of bureaucracy it doesn t make sense to expect contributors to make backports of their own fixes that they don t need themselves current behavior it takes a long time to release a major release possible solution release faster ,1
41568,16813157032.0,IssuesEvent,2021-06-17 02:18:45,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Is there a log that contains Unauthorized access to the service bus,Pri2 cxp product-question service-bus-messaging/svc triaged,"We have a problem that when connection made with Azure Service Bus from VS2019, with Azure Authentication Account set, that on machine A it works fine and on machine B it doesn't work. Exactly the same code and same settings.
On the machine that doesn't work we receive the following exception:
`Unauthorized access. 'Listen,Manage,SubscriptionRuleRead' claim(s) are required to perform this operation. Resource: 'sb://sbn-xxx-dev-001.servicebus.windows.net/xxxtopic/subscriptions/xxxsubscription/$management'.`
`Microsoft.Azure.ServiceBus.UnauthorizedException: Unauthorized access. 'Listen' claim(s) are required to perform this operation. Resource: 'sb://sbn-xxx-dev-001.servicebus.windows.net/xxxtopic/subscriptions/xxxsubscription'`
Where can I track which account/username/SAS/etc is used to contact the service bus?
Code
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 23735779-3536-079f-9d5c-1b5780924dfc
* Version Independent ID: 9f358dfa-8bfc-38cd-4bf7-b04f0acad5b7
* Content: [Azure Service Bus diagnostics logs - Azure Service Bus](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-diagnostic-logs)
* Content Source: [articles/service-bus-messaging/service-bus-diagnostic-logs.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/service-bus-messaging/service-bus-diagnostic-logs.md)
* Service: **service-bus-messaging**
* GitHub Login: @spelluru
* Microsoft Alias: **spelluru**",1.0,"Is there a log that contains Unauthorized access to the service bus - We have a problem that when connection made with Azure Service Bus from VS2019, with Azure Authentication Account set, that on machine A it works fine and on machine B it doesn't work. Exactly the same code and same settings.
On the machine that doesn't work we receive the following exception:
`Unauthorized access. 'Listen,Manage,SubscriptionRuleRead' claim(s) are required to perform this operation. Resource: 'sb://sbn-xxx-dev-001.servicebus.windows.net/xxxtopic/subscriptions/xxxsubscription/$management'.`
`Microsoft.Azure.ServiceBus.UnauthorizedException: Unauthorized access. 'Listen' claim(s) are required to perform this operation. Resource: 'sb://sbn-xxx-dev-001.servicebus.windows.net/xxxtopic/subscriptions/xxxsubscription'`
Where can I track which account/username/SAS/etc is used to contact the service bus?
Code
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 23735779-3536-079f-9d5c-1b5780924dfc
* Version Independent ID: 9f358dfa-8bfc-38cd-4bf7-b04f0acad5b7
* Content: [Azure Service Bus diagnostics logs - Azure Service Bus](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-diagnostic-logs)
* Content Source: [articles/service-bus-messaging/service-bus-diagnostic-logs.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/service-bus-messaging/service-bus-diagnostic-logs.md)
* Service: **service-bus-messaging**
* GitHub Login: @spelluru
* Microsoft Alias: **spelluru**",0,is there a log that contains unauthorized access to the service bus we have a problem that when connection made with azure service bus from with azure authentication account set that on machine a it works fine and on machine b it doesn t work exactly the same code and same settings on the machine that doesn t work we receive the following exception unauthorized access listen manage subscriptionruleread claim s are required to perform this operation resource sb sbn xxx dev servicebus windows net xxxtopic subscriptions xxxsubscription management microsoft azure servicebus unauthorizedexception unauthorized access listen claim s are required to perform this operation resource sb sbn xxx dev servicebus windows net xxxtopic subscriptions xxxsubscription where can i track which account username sas etc is used to contact the service bus code document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service bus messaging github login spelluru microsoft alias spelluru ,0
283843,21334860577.0,IssuesEvent,2022-04-18 13:25:42,jcubic/jquery.terminal,https://api.github.com/repos/jcubic/jquery.terminal,reopened,Multi-word commands,question documentation,"### I have question related to jQuery Terminal
How do you make commands multiple words? I'm trying to make a joke ""sudo rm -rf"" command but when I try it, it only says ""command sudo not fount""",1.0,"Multi-word commands - ### I have question related to jQuery Terminal
How do you make commands multiple words? I'm trying to make a joke ""sudo rm -rf"" command but when I try it, it only says ""command sudo not fount""",0,multi word commands i have question related to jquery terminal how do you make commands multiple words i m trying to make a joke sudo rm rf command but when i try it it only says command sudo not fount ,0
309478,23296933918.0,IssuesEvent,2022-08-06 18:25:08,openscientia/terraform-provider-atlassian,https://api.github.com/repos/openscientia/terraform-provider-atlassian,closed,Update YAML frontmatter in `atlassian_jira_issue_type_scheme` markdown template,documentation enhancement jira/issuetypeschemes,"### Terraform CLI and Provider Versions
v0.1.0
### New or Affected Resource(s)
- atlassian_jira_issue_type_scheme
### Use Cases or Problem Statement
The [markdown template](https://github.com/openscientia/terraform-provider-atlassian/blob/main/templates/resources/jira_issue_type_scheme.md.tmpl) used to generate documentation for `atlassian_jira_issue_type_scheme` resources contains an incorrect `description` in the YAML frontmatter.
```yaml
description: |-
Manages {{ .Type }}.
---
```
### Proposal
The `description` in the YAML frontmatter should be as follows:
```yaml
description: |-
Manages {{ .Name }}.
---
```
### How much impact is this issue causing?
Low
### Additional Information
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct",1.0,"Update YAML frontmatter in `atlassian_jira_issue_type_scheme` markdown template - ### Terraform CLI and Provider Versions
v0.1.0
### New or Affected Resource(s)
- atlassian_jira_issue_type_scheme
### Use Cases or Problem Statement
The [markdown template](https://github.com/openscientia/terraform-provider-atlassian/blob/main/templates/resources/jira_issue_type_scheme.md.tmpl) used to generate documentation for `atlassian_jira_issue_type_scheme` resources contains an incorrect `description` in the YAML frontmatter.
```yaml
description: |-
Manages {{ .Type }}.
---
```
### Proposal
The `description` in the YAML frontmatter should be as follows:
```yaml
description: |-
Manages {{ .Name }}.
---
```
### How much impact is this issue causing?
Low
### Additional Information
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct",0,update yaml frontmatter in atlassian jira issue type scheme markdown template terraform cli and provider versions new or affected resource s atlassian jira issue type scheme use cases or problem statement the used to generate documentation for atlassian jira issue type scheme resources contains an incorrect description in the yaml frontmatter yaml description manages type proposal the description in the yaml frontmatter should be as follows yaml description manages name how much impact is this issue causing low additional information no response code of conduct i agree to follow this project s code of conduct,0
2717,9548733665.0,IssuesEvent,2019-05-02 06:50:01,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,reopened,Test classes are in same namespace as Type under Test,Area: analyzer Area: maintainability feature,Its a good practice to place the tests for a type in the exact same namespace as the type's namespace.,True,Test classes are in same namespace as Type under Test - Its a good practice to place the tests for a type in the exact same namespace as the type's namespace.,1,test classes are in same namespace as type under test its a good practice to place the tests for a type in the exact same namespace as the type s namespace ,1
670,4212800213.0,IssuesEvent,2016-06-29 17:14:26,duckduckgo/zeroclickinfo-goodies,https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies,closed,Tips IA maintainer request,Maintainer Input Requested,"There has been several maintainer timeouts for the Tips IA. If @mattlehning does not have any objections I would like to takeover as maintainer. Thanks!
https://duck.co/ia/view/tips
cc @moollaza @edgesince84 @zekiel ",True,"Tips IA maintainer request - There has been several maintainer timeouts for the Tips IA. If @mattlehning does not have any objections I would like to takeover as maintainer. Thanks!
https://duck.co/ia/view/tips
cc @moollaza @edgesince84 @zekiel ",1,tips ia maintainer request there has been several maintainer timeouts for the tips ia if mattlehning does not have any objections i would like to takeover as maintainer thanks cc moollaza zekiel ,1
536327,15707591943.0,IssuesEvent,2021-03-26 19:07:33,itslupus/gamersnet,https://api.github.com/repos/itslupus/gamersnet,closed,Game based search,high priority user story,"**Description**:
As a user, I want to be able to search for posts to find players that are playing my game
**Acceptance Criteria**:
Search posts by name of games
**Dev Tasks**:
[Backend endpoint to fetch posts](https://github.com/itslupus/gamersnet/issues/15)
[Frontend UI to select games](https://github.com/itslupus/gamersnet/issues/16)
[Frontend UI to display posts by games](https://github.com/itslupus/gamersnet/issues/17)
**Story Points (1 - 5**): 3",1.0,"Game based search - **Description**:
As a user, I want to be able to search for posts to find players that are playing my game
**Acceptance Criteria**:
Search posts by name of games
**Dev Tasks**:
[Backend endpoint to fetch posts](https://github.com/itslupus/gamersnet/issues/15)
[Frontend UI to select games](https://github.com/itslupus/gamersnet/issues/16)
[Frontend UI to display posts by games](https://github.com/itslupus/gamersnet/issues/17)
**Story Points (1 - 5**): 3",0,game based search description as a user i want to be able to search for posts to find players that are playing my game acceptance criteria search posts by name of games dev tasks story points ,0
1063,4889233518.0,IssuesEvent,2016-11-18 09:31:24,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Wrong variable resolution when multiple parents role call the same child role,affects_2.2 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
None specific
##### OS / ENVIRONMENT
CentOS Linux release 7.2.1511 (Core) - Ansible host and server
##### SUMMARY
When multiple parents includes the same child role but override some variables, the variable resolution is wrong.
##### STEPS TO REPRODUCE
```
./playbook.yml
---
- hosts: localhost
gather_facts: no
roles:
- parent1
- parent2
./roles/child/defaults/main.yml
---
childTestValue: ChildRoleDefault
./roles/child/tasks/main.yml
- debug: var=childTestValue
./roles/parent1/defaults/main.yml
---
parentTestValue: Parent1RoleDefault
./roles/parent1/tasks/main.yml
- debug: var=parentTestValue
- include_role:
name: child
vars:
childTestValue: ""{{ parentTestValue }}""
./roles/parent2/defaults/main.yml
---
parentTestValue: Parent2RoleDefault
./roles/parent2/tasks/main.yml
- debug: var=parentTestValue
- include_role:
name: child
vars:
childTestValue: ""{{ parentTestValue }}""
```
##### EXPECTED RESULTS
I was expecting the child role to be called twice with 2 differents value of childTestValue. Once With Parent1RoleDefault and an other with Parent2RoleDefault
##### ACTUAL RESULTS
Instead, the child role is called twice with the same value for childTestValue
```
[root@localhost debug]# ansible-playbook -vvvv /vagrant/ansible/debug/playbook.yml
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: playbook.yml *********************************************************
1 plays in /vagrant/ansible/debug/playbook.yml
PLAY [localhost] ***************************************************************
TASK [parent1 : debug] *********************************************************
task path: /vagrant/ansible/debug/roles/parent1/tasks/main.yml:1
ok: [localhost] => {
""parentTestValue"": ""Parent1RoleDefault""
}
TASK [child : debug] ***********************************************************
task path: /vagrant/ansible/debug/roles/child/tasks/main.yml:1
ok: [localhost] => {
""childTestValue"": ""Parent2RoleDefault""
}
TASK [parent2 : debug] *********************************************************
task path: /vagrant/ansible/debug/roles/parent2/tasks/main.yml:1
ok: [localhost] => {
""parentTestValue"": ""Parent2RoleDefault""
}
TASK [child : debug] ***********************************************************
task path: /vagrant/ansible/debug/roles/child/tasks/main.yml:1
ok: [localhost] => {
""childTestValue"": ""Parent2RoleDefault""
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0
```
",True,"Wrong variable resolution when multiple parents role call the same child role -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
None specific
##### OS / ENVIRONMENT
CentOS Linux release 7.2.1511 (Core) - Ansible host and server
##### SUMMARY
When multiple parents includes the same child role but override some variables, the variable resolution is wrong.
##### STEPS TO REPRODUCE
```
./playbook.yml
---
- hosts: localhost
gather_facts: no
roles:
- parent1
- parent2
./roles/child/defaults/main.yml
---
childTestValue: ChildRoleDefault
./roles/child/tasks/main.yml
- debug: var=childTestValue
./roles/parent1/defaults/main.yml
---
parentTestValue: Parent1RoleDefault
./roles/parent1/tasks/main.yml
- debug: var=parentTestValue
- include_role:
name: child
vars:
childTestValue: ""{{ parentTestValue }}""
./roles/parent2/defaults/main.yml
---
parentTestValue: Parent2RoleDefault
./roles/parent2/tasks/main.yml
- debug: var=parentTestValue
- include_role:
name: child
vars:
childTestValue: ""{{ parentTestValue }}""
```
##### EXPECTED RESULTS
I was expecting the child role to be called twice with 2 differents value of childTestValue. Once With Parent1RoleDefault and an other with Parent2RoleDefault
##### ACTUAL RESULTS
Instead, the child role is called twice with the same value for childTestValue
```
[root@localhost debug]# ansible-playbook -vvvv /vagrant/ansible/debug/playbook.yml
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: playbook.yml *********************************************************
1 plays in /vagrant/ansible/debug/playbook.yml
PLAY [localhost] ***************************************************************
TASK [parent1 : debug] *********************************************************
task path: /vagrant/ansible/debug/roles/parent1/tasks/main.yml:1
ok: [localhost] => {
""parentTestValue"": ""Parent1RoleDefault""
}
TASK [child : debug] ***********************************************************
task path: /vagrant/ansible/debug/roles/child/tasks/main.yml:1
ok: [localhost] => {
""childTestValue"": ""Parent2RoleDefault""
}
TASK [parent2 : debug] *********************************************************
task path: /vagrant/ansible/debug/roles/parent2/tasks/main.yml:1
ok: [localhost] => {
""parentTestValue"": ""Parent2RoleDefault""
}
TASK [child : debug] ***********************************************************
task path: /vagrant/ansible/debug/roles/child/tasks/main.yml:1
ok: [localhost] => {
""childTestValue"": ""Parent2RoleDefault""
}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0
```
",1,wrong variable resolution when multiple parents role call the same child role issue type bug report component name include role ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables none specific os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific centos linux release core ansible host and server summary when multiple parents includes the same child role but override some variables the variable resolution is wrong steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used playbook yml hosts localhost gather facts no roles roles child defaults main yml childtestvalue childroledefault roles child tasks main yml debug var childtestvalue roles defaults main yml parenttestvalue roles tasks main yml debug var parenttestvalue include role name child vars childtestvalue parenttestvalue roles defaults main yml parenttestvalue roles tasks main yml debug var parenttestvalue include role name child vars childtestvalue parenttestvalue expected results i was expecting the child role to be called twice with differents value of childtestvalue once with and an other with actual results instead the child role is called twice with the same value for childtestvalue ansible playbook vvvv vagrant ansible debug playbook yml using etc ansible ansible cfg as config file provided hosts list is empty only localhost is available loading callback plugin default of type stdout from usr lib site packages ansible plugins callback init pyc playbook playbook yml plays in vagrant ansible debug playbook yml play task task path vagrant ansible debug roles tasks main yml ok parenttestvalue task task path vagrant ansible debug roles child tasks main yml ok childtestvalue task task path vagrant ansible debug roles tasks main yml ok parenttestvalue task task path vagrant ansible debug roles child tasks main yml ok childtestvalue play recap localhost ok changed unreachable failed ,1
941,4662885819.0,IssuesEvent,2016-10-05 06:56:20,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service not creating docker image,affects_2.3 bug_report cloud docker in progress waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_service
##### ANSIBLE VERSION
```
ansible 2.3.0 (devel d09f57fb3a) last updated 2016/10/04 08:46:08 (GMT +000)
lib/ansible/modules/core: (devel 0ee774ff15) last updated 2016/10/04 09:19:29 (GMT +000)
lib/ansible/modules/extras: (devel 5cc72c3f06) last updated 2016/10/04 09:20:09 (GMT +000)
config file = /etc/ansible/roles/myrole/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
docker_service module doesn't create a docker image from the docker-compose file, it gives an error and crashes. The error was: compose.service.NoSuchImageError: Image 'myproject_web' not found.
However, if a docker image is present, it will proceed without error.
On the other hand, what is expected from docker-compose (and also docker_service) is it will create an image if one isn't there. Especially if the --build flag is present. During testing, ""docker-compose up"" does this successfully, whether an image exists or not.
##### STEPS TO REPRODUCE
```
- name: start with docker-compose
docker_service:
project_src: ""{{ deploy_dir }}""
build: yes
state: present
```
check the current docker images:
```
docker images
```
make a backup copy of the relevant image, if desired, and then delete the image.
```
docker tag image_name image_name.bck
docker rmi image_name
```
then run ansible-playbook.
##### EXPECTED RESULTS
TASK [djangoapp_docker : start with docker-compose] ****************************
changed: [appserver]
PLAY RECAP *********************************************************************
appserver : ok=4 changed=1 unreachable=0 failed=0
##### ACTUAL RESULTS
```
TASK [djangoapp_docker : start with docker-compose] ****************************
task path: /etc/ansible/roles/myrole/roles/djangoapp_docker/tasks/main.yml:9
Using module file /opt/github/ansible/lib/ansible/modules/core/cloud/docker/docker_service.py
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425 `"" && echo ansible-tmp-1475573103.01-30232119310425=""` echo $HOME/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425 `"" ) && sleep 0'
PUT /tmp/tmpai39S1 TO /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/docker_service.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/ /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/docker_service.py && sleep 0'
EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/docker_service.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/"" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 929, in
main()
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 924, in main
result = ContainerManager(client).exec_module()
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 575, in exec_module
result = self.cmd_up()
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 630, in cmd_up
result.update(self.cmd_build())
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 773, in cmd_build
image = service.image()
File ""/usr/local/lib/python2.7/dist-packages/compose/service.py"", line 316, in image
raise NoSuchImageError(""Image '{}' not found"".format(self.image_name))
compose.service.NoSuchImageError: Image 'myproject_web' not found
fatal: [appserver]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_name"": ""docker_service""
},
""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 929, in \n main()\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 924, in main\n result = ContainerManager(client).exec_module()\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 575, in exec_module\n result = self.cmd_up()\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 630, in cmd_up\n result.update(self.cmd_build())\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 773, in cmd_build\n image = service.image()\n File \""/usr/local/lib/python2.7/dist-packages/compose/service.py\"", line 316, in image\n raise NoSuchImageError(\""Image '{}' not found\"".format(self.image_name))\ncompose.service.NoSuchImageError: Image 'myrole_web' not found\n"",
""module_stdout"": """",
""msg"": ""MODULE FAILURE""
}
to retry, use: --limit @/etc/ansible/roles/myrole/deploy_docker.retry
```
",True,"docker_service not creating docker image -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_service
##### ANSIBLE VERSION
```
ansible 2.3.0 (devel d09f57fb3a) last updated 2016/10/04 08:46:08 (GMT +000)
lib/ansible/modules/core: (devel 0ee774ff15) last updated 2016/10/04 09:19:29 (GMT +000)
lib/ansible/modules/extras: (devel 5cc72c3f06) last updated 2016/10/04 09:20:09 (GMT +000)
config file = /etc/ansible/roles/myrole/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
docker_service module doesn't create a docker image from the docker-compose file, it gives an error and crashes. The error was: compose.service.NoSuchImageError: Image 'myproject_web' not found.
However, if a docker image is present, it will proceed without error.
On the other hand, what is expected from docker-compose (and also docker_service) is it will create an image if one isn't there. Especially if the --build flag is present. During testing, ""docker-compose up"" does this successfully, whether an image exists or not.
##### STEPS TO REPRODUCE
```
- name: start with docker-compose
docker_service:
project_src: ""{{ deploy_dir }}""
build: yes
state: present
```
check the current docker images:
```
docker images
```
make a backup copy of the relevant image, if desired, and then delete the image.
```
docker tag image_name image_name.bck
docker rmi image_name
```
then run ansible-playbook.
##### EXPECTED RESULTS
TASK [djangoapp_docker : start with docker-compose] ****************************
changed: [appserver]
PLAY RECAP *********************************************************************
appserver : ok=4 changed=1 unreachable=0 failed=0
##### ACTUAL RESULTS
```
TASK [djangoapp_docker : start with docker-compose] ****************************
task path: /etc/ansible/roles/myrole/roles/djangoapp_docker/tasks/main.yml:9
Using module file /opt/github/ansible/lib/ansible/modules/core/cloud/docker/docker_service.py
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425 `"" && echo ansible-tmp-1475573103.01-30232119310425=""` echo $HOME/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425 `"" ) && sleep 0'
PUT /tmp/tmpai39S1 TO /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/docker_service.py
EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/ /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/docker_service.py && sleep 0'
EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/docker_service.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1475573103.01-30232119310425/"" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 929, in
main()
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 924, in main
result = ContainerManager(client).exec_module()
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 575, in exec_module
result = self.cmd_up()
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 630, in cmd_up
result.update(self.cmd_build())
File ""/tmp/ansible_zG7JHD/ansible_module_docker_service.py"", line 773, in cmd_build
image = service.image()
File ""/usr/local/lib/python2.7/dist-packages/compose/service.py"", line 316, in image
raise NoSuchImageError(""Image '{}' not found"".format(self.image_name))
compose.service.NoSuchImageError: Image 'myproject_web' not found
fatal: [appserver]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_name"": ""docker_service""
},
""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 929, in \n main()\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 924, in main\n result = ContainerManager(client).exec_module()\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 575, in exec_module\n result = self.cmd_up()\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 630, in cmd_up\n result.update(self.cmd_build())\n File \""/tmp/ansible_zG7JHD/ansible_module_docker_service.py\"", line 773, in cmd_build\n image = service.image()\n File \""/usr/local/lib/python2.7/dist-packages/compose/service.py\"", line 316, in image\n raise NoSuchImageError(\""Image '{}' not found\"".format(self.image_name))\ncompose.service.NoSuchImageError: Image 'myrole_web' not found\n"",
""module_stdout"": """",
""msg"": ""MODULE FAILURE""
}
to retry, use: --limit @/etc/ansible/roles/myrole/deploy_docker.retry
```
",1,docker service not creating docker image issue type bug report component name docker service ansible version ansible devel last updated gmt lib ansible modules core devel last updated gmt lib ansible modules extras devel last updated gmt config file etc ansible roles myrole ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary docker service module doesn t create a docker image from the docker compose file it gives an error and crashes the error was compose service nosuchimageerror image myproject web not found however if a docker image is present it will proceed without error on the other hand what is expected from docker compose and also docker service is it will create an image if one isn t there especially if the build flag is present during testing docker compose up does this successfully whether an image exists or not steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name start with docker compose docker service project src deploy dir build yes state present check the current docker images docker images make a backup copy of the relevant image if desired and then delete the image docker tag image name image name bck docker rmi image name then run ansible playbook expected results task changed play recap appserver ok changed unreachable failed actual results task task path etc ansible roles myrole roles djangoapp docker tasks main yml using module file opt github ansible lib ansible modules core cloud docker docker service py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp docker service py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp docker service py sleep exec bin sh c usr bin python root ansible tmp ansible tmp docker service py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module docker service py line in main file tmp ansible ansible module docker service py line in main result containermanager client exec module file tmp ansible ansible module docker service py line in exec module result self cmd up file tmp ansible ansible module docker service py line in cmd up result update self cmd build file tmp ansible ansible module docker service py line in cmd build image service image file usr local lib dist packages compose service py line in image raise nosuchimageerror image not found format self image name compose service nosuchimageerror image myproject web not found fatal failed changed false failed true invocation module name docker service module stderr traceback most recent call last n file tmp ansible ansible module docker service py line in n main n file tmp ansible ansible module docker service py line in main n result containermanager client exec module n file tmp ansible ansible module docker service py line in exec module n result self cmd up n file tmp ansible ansible module docker service py line in cmd up n result update self cmd build n file tmp ansible ansible module docker service py line in cmd build n image service image n file usr local lib dist packages compose service py line in image n raise nosuchimageerror image not found format self image name ncompose service nosuchimageerror image myrole web not found n module stdout msg module failure to retry use limit etc ansible roles myrole deploy docker retry ,1
3248,12371555415.0,IssuesEvent,2020-05-18 18:46:34,cloud-gov/product,https://api.github.com/repos/cloud-gov/product,closed,"As an operator, I want to remove cg-dashboard (5/18)",contractor-3-maintainability operations,"A bug in stratos is currently requiring us to continue to run cg-dashboard: https://github.com/cloudfoundry/stratos/issues/4103
Once this issue is fixed, deployed and validated in cg, we should remove cg-dashboard.
## Acceptance Criteria
* [x] GIVEN The stratos bug is fixed
AND stratos is updated in production with the bug fix
WHEN a user accesses dashboard-deprecated.fr.cloud.gov
AND looks in the docs for user management information
THEN they are redirected to stratos
AND the docs match stratos
---
## Security considerations
be sure the stratos fix meets our needs. For example, a user should not be able to see and search for all users in a system. Instead, they should have to enter an exact username when setting a role.
## Implementation sketch
* [x] Remove CircleCI access to the cg-dashboard repo (remove the webhook)
* [x] Archive the cg-dashboard repo
* [x] Update docs and remove references to dashboard-deprecated
* [x] Add a redirect for the legacy dashboard to the current Stratos dashboard
* [x] Validate Stratos dashboard docs on docs.cloud.gov show the correct procedure to manage users",True,"As an operator, I want to remove cg-dashboard (5/18) - A bug in stratos is currently requiring us to continue to run cg-dashboard: https://github.com/cloudfoundry/stratos/issues/4103
Once this issue is fixed, deployed and validated in cg, we should remove cg-dashboard.
## Acceptance Criteria
* [x] GIVEN The stratos bug is fixed
AND stratos is updated in production with the bug fix
WHEN a user accesses dashboard-deprecated.fr.cloud.gov
AND looks in the docs for user management information
THEN they are redirected to stratos
AND the docs match stratos
---
## Security considerations
be sure the stratos fix meets our needs. For example, a user should not be able to see and search for all users in a system. Instead, they should have to enter an exact username when setting a role.
## Implementation sketch
* [x] Remove CircleCI access to the cg-dashboard repo (remove the webhook)
* [x] Archive the cg-dashboard repo
* [x] Update docs and remove references to dashboard-deprecated
* [x] Add a redirect for the legacy dashboard to the current Stratos dashboard
* [x] Validate Stratos dashboard docs on docs.cloud.gov show the correct procedure to manage users",1,as an operator i want to remove cg dashboard a bug in stratos is currently requiring us to continue to run cg dashboard once this issue is fixed deployed and validated in cg we should remove cg dashboard acceptance criteria given the stratos bug is fixed and stratos is updated in production with the bug fix when a user accesses dashboard deprecated fr cloud gov and looks in the docs for user management information then they are redirected to stratos and the docs match stratos security considerations be sure the stratos fix meets our needs for example a user should not be able to see and search for all users in a system instead they should have to enter an exact username when setting a role implementation sketch remove circleci access to the cg dashboard repo remove the webhook archive the cg dashboard repo update docs and remove references to dashboard deprecated add a redirect for the legacy dashboard to the current stratos dashboard validate stratos dashboard docs on docs cloud gov show the correct procedure to manage users,1
110807,9477936354.0,IssuesEvent,2019-04-19 20:32:26,cerner/terra-core,https://api.github.com/repos/cerner/terra-core,closed,Improve icon visual regression test coverage,Orion Reviewed icon intermediate issue testing,"# Feature Request
## Description
Currently, our icon visual regression coverage is very minimal. We should expand it to better capture the full icon set. We've had a couple bugs slip through related to how the SVGs have been formatted that we can catch if we set up visual regression tests for the entire icon set.",1.0,"Improve icon visual regression test coverage - # Feature Request
## Description
Currently, our icon visual regression coverage is very minimal. We should expand it to better capture the full icon set. We've had a couple bugs slip through related to how the SVGs have been formatted that we can catch if we set up visual regression tests for the entire icon set.",0,improve icon visual regression test coverage feature request description currently our icon visual regression coverage is very minimal we should expand it to better capture the full icon set we ve had a couple bugs slip through related to how the svgs have been formatted that we can catch if we set up visual regression tests for the entire icon set ,0
199047,6980255775.0,IssuesEvent,2017-12-13 00:39:48,kubernetes-incubator/cri-containerd,https://api.github.com/repos/kubernetes-incubator/cri-containerd,closed,Add containerd/cri-containerd monitor.,priority/P2,"In our current kube-up.sh integration, we don't have a [`kube-docker-monitor.service`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/node.yaml#L42) like health monitor for cri-containerd/containerd.
We should add one.",1.0,"Add containerd/cri-containerd monitor. - In our current kube-up.sh integration, we don't have a [`kube-docker-monitor.service`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/node.yaml#L42) like health monitor for cri-containerd/containerd.
We should add one.",0,add containerd cri containerd monitor in our current kube up sh integration we don t have a like health monitor for cri containerd containerd we should add one ,0
178199,14663632931.0,IssuesEvent,2020-12-29 10:08:05,vchain-us/vcn,https://api.github.com/repos/vchain-us/vcn,closed,vcn documentation (LC use cases),documentation,"We should add instructions for using the CI/CD plugin in the Markdown file at path `client/pages/resources/plugins/cicd.md`in the `ledger-compliance` repository.
Here is a sample page with different kind of formatting: https://raw.githubusercontent.com/vchain-us/ledger-compliance/9412f038089e495702c0e3f677117143380ca298/client/pages/help/index.md?token=AAZFDZP2PCRYM2MD4G3E4IK7ZZWNM",1.0,"vcn documentation (LC use cases) - We should add instructions for using the CI/CD plugin in the Markdown file at path `client/pages/resources/plugins/cicd.md`in the `ledger-compliance` repository.
Here is a sample page with different kind of formatting: https://raw.githubusercontent.com/vchain-us/ledger-compliance/9412f038089e495702c0e3f677117143380ca298/client/pages/help/index.md?token=AAZFDZP2PCRYM2MD4G3E4IK7ZZWNM",0,vcn documentation lc use cases we should add instructions for using the ci cd plugin in the markdown file at path client pages resources plugins cicd md in the ledger compliance repository here is a sample page with different kind of formatting ,0
714562,24566453789.0,IssuesEvent,2022-10-13 03:49:17,AY2223S1-CS2113-T17-1/tp,https://api.github.com/repos/AY2223S1-CS2113-T17-1/tp,closed,"[List] As an AOM, I can view the details of a passenger ",type.Story priority.High,"so that I am able to have an overview of the passenger list in terminal 1.
Vignesh: Class creation (With accompanying methods)
Ivan: implement in main class
Due Date: 11th Oct 2022 (Tuesday)",1.0,"[List] As an AOM, I can view the details of a passenger - so that I am able to have an overview of the passenger list in terminal 1.
Vignesh: Class creation (With accompanying methods)
Ivan: implement in main class
Due Date: 11th Oct 2022 (Tuesday)",0, as an aom i can view the details of a passenger so that i am able to have an overview of the passenger list in terminal vignesh class creation with accompanying methods ivan implement in main class due date oct tuesday ,0
3674,15036029509.0,IssuesEvent,2021-02-02 14:48:38,IITIDIDX597/sp_2021_team1,https://api.github.com/repos/IITIDIDX597/sp_2021_team1,opened,Tagging articles for better search,Epic: 5 Maintaining the system Story Week 3,"**Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments.
### **Story Details:**
As an: administrator
I want: to be able to tag the article with various labels according to the topic
So that: it's easier for people to search
",True,"Tagging articles for better search - **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. The learning platform will be routinely updated with S Lab's own research advancements, as well as outside discoveries and best practices developed for rehabilitation treatments.
### **Story Details:**
As an: administrator
I want: to be able to tag the article with various labels according to the topic
So that: it's easier for people to search
",1,tagging articles for better search project goal s lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way while at the same time foster deeper learning experiences in order to deliver better abilitylab patient care hill statement individual clinicians can reference relevant continuously evolving information for their patient s therapy needs to self manage their approach patient care plan development in a single platform sub hill statements the learning platform will be routinely updated with s lab s own research advancements as well as outside discoveries and best practices developed for rehabilitation treatments story details as an administrator i want to be able to tag the article with various labels according to the topic so that it s easier for people to search ,1
1316,5639943133.0,IssuesEvent,2017-04-06 15:21:45,github/hubot-scripts,https://api.github.com/repos/github/hubot-scripts,closed,Conversation seems not working,needs-maintainer,"Hey,
I've installed the conversation script and i think something is broken.
I don't really know if this script is making hubot more chatty but i don't know how to test it.
",True,"Conversation seems not working - Hey,
I've installed the conversation script and i think something is broken.
I don't really know if this script is making hubot more chatty but i don't know how to test it.
",1,conversation seems not working hey i ve installed the conversation script and i think something is broken i don t really know if this script is making hubot more chatty but i don t know how to test it ,1
9702,6973973274.0,IssuesEvent,2017-12-11 22:29:43,grpc/grpc,https://api.github.com/repos/grpc/grpc,closed,Performance benchmarking : allow multi-phase warmup ,area/performance/benchmarking,"A language like Java needs a multi-phase warmup with a quiet period in between to enable the JIT to work. This will require changes to both the driver and client.
",True,"Performance benchmarking : allow multi-phase warmup - A language like Java needs a multi-phase warmup with a quiet period in between to enable the JIT to work. This will require changes to both the driver and client.
",0,performance benchmarking allow multi phase warmup a language like java needs a multi phase warmup with a quiet period in between to enable the jit to work this will require changes to both the driver and client ,0
21915,11424548328.0,IssuesEvent,2020-02-03 17:59:09,tensorflow/tensorflow,https://api.github.com/repos/tensorflow/tensorflow,closed,Keras RNN training speed significantly slower with eager execution/control flow v2,TF 2.0 comp:keras type:performance,"**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.0/7.6
- GPU model and memory: GTX 980 Ti
**Describe the current behavior**
Enabling eager execution or control flow v2 causes RNN training speed to decrease significantly.
**Describe the expected behavior**
Enabling eager mode or control flow v2 should not affect the training time (or improve it, ideally).
**Code to reproduce the issue**
``` python
import tensorflow as tf
import numpy as np
import timeit
use_eager = False
use_v2 = False
if not use_eager:
tf.compat.v1.disable_eager_execution()
if not use_v2:
tf.compat.v1.disable_control_flow_v2()
n_steps = 1000
n_input = 100
n_hidden = 1000
batch_size = 64
inputs = tf.keras.Input((n_steps, n_input))
outputs = tf.keras.layers.SimpleRNN(units=n_hidden, return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(units=n_input)(outputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.optimizers.SGD(0.1), loss=""mse"")
x = np.ones((batch_size, n_steps, n_input))
y = np.ones((batch_size, n_steps, n_input))
# warmup
model.fit(x, y, epochs=1)
start = timeit.default_timer()
model.fit(x, y, epochs=10)
print(""Execution time:"", timeit.default_timer() - start)
```
**Other info / logs**
On my machine the results look like:
- use_eager=False, use_v2=False: 5.90s
- use_eager=False, use_v2=True: 8.08s
- use_eager=True, use_v2=False: 9.81s
- use_eager=True, use_v2=True: 10.10s
So, overall a >60% increase in training time comparing no eager and no v2 to the current defaults.
",True,"Keras RNN training speed significantly slower with eager execution/control flow v2 - **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.0/7.6
- GPU model and memory: GTX 980 Ti
**Describe the current behavior**
Enabling eager execution or control flow v2 causes RNN training speed to decrease significantly.
**Describe the expected behavior**
Enabling eager mode or control flow v2 should not affect the training time (or improve it, ideally).
**Code to reproduce the issue**
``` python
import tensorflow as tf
import numpy as np
import timeit
use_eager = False
use_v2 = False
if not use_eager:
tf.compat.v1.disable_eager_execution()
if not use_v2:
tf.compat.v1.disable_control_flow_v2()
n_steps = 1000
n_input = 100
n_hidden = 1000
batch_size = 64
inputs = tf.keras.Input((n_steps, n_input))
outputs = tf.keras.layers.SimpleRNN(units=n_hidden, return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(units=n_input)(outputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.optimizers.SGD(0.1), loss=""mse"")
x = np.ones((batch_size, n_steps, n_input))
y = np.ones((batch_size, n_steps, n_input))
# warmup
model.fit(x, y, epochs=1)
start = timeit.default_timer()
model.fit(x, y, epochs=10)
print(""Execution time:"", timeit.default_timer() - start)
```
**Other info / logs**
On my machine the results look like:
- use_eager=False, use_v2=False: 5.90s
- use_eager=False, use_v2=True: 8.08s
- use_eager=True, use_v2=False: 9.81s
- use_eager=True, use_v2=True: 10.10s
So, overall a >60% increase in training time comparing no eager and no v2 to the current defaults.
",0,keras rnn training speed significantly slower with eager execution control flow system information have i written custom code as opposed to using a stock example script provided in tensorflow yes os platform and distribution e g linux ubuntu windows mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device n a tensorflow installed from source or binary source tensorflow version use command below python version bazel version if compiling from source n a gcc compiler version if compiling from source n a cuda cudnn version gpu model and memory gtx ti describe the current behavior enabling eager execution or control flow causes rnn training speed to decrease significantly describe the expected behavior enabling eager mode or control flow should not affect the training time or improve it ideally code to reproduce the issue python import tensorflow as tf import numpy as np import timeit use eager false use false if not use eager tf compat disable eager execution if not use tf compat disable control flow n steps n input n hidden batch size inputs tf keras input n steps n input outputs tf keras layers simplernn units n hidden return sequences true inputs outputs tf keras layers dense units n input outputs model tf keras model inputs inputs outputs outputs model compile optimizer tf optimizers sgd loss mse x np ones batch size n steps n input y np ones batch size n steps n input warmup model fit x y epochs start timeit default timer model fit x y epochs print execution time timeit default timer start other info logs on my machine the results look like use eager false use false use eager false use true use eager true use false use eager true use true so overall a increase in training time comparing no eager and no to the current defaults ,0
352361,10540897890.0,IssuesEvent,2019-10-02 09:28:42,UniversityOfHelsinkiCS/fuksilaiterekisteri,https://api.github.com/repos/UniversityOfHelsinkiCS/fuksilaiterekisteri,closed,season over -update,enhancement high priority management,"- [x] stop new regs
- [x] update ineligibility-page
- [x] send email lists of ready and wants to Pekka
",1.0,"season over -update - - [x] stop new regs
- [x] update ineligibility-page
- [x] send email lists of ready and wants to Pekka
",0,season over update stop new regs update ineligibility page send email lists of ready and wants to pekka ,0
5028,25801862753.0,IssuesEvent,2022-12-11 03:28:39,deislabs/spiderlightning,https://api.github.com/repos/deislabs/spiderlightning,opened,fix caching on azure,🐛 bug 🚧 maintainer issue,"**Description of the bug**
Our caching has been working somewhat intermittently.
I've ran our pipelines after an empty commit to see our caching take place. On the same agent, we got:
- a cache hit:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=a1c8d2d5-f3e0-5740-cb25-d150119fd493
- a cache miss:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=5cfd1f4a-b154-515d-6662-392410763baa
That said, most of them result in cache misses. Checking the caching post job, I see:
It should say:
I'm not too sure what's causing this issue. The keys are fine, and the path is correct — I've tried multiple configurations, and even changing to CacheBeta.
**To Reproduce**
n/a
**Additional context**
n/a",True,"fix caching on azure - **Description of the bug**
Our caching has been working somewhat intermittently.
I've ran our pipelines after an empty commit to see our caching take place. On the same agent, we got:
- a cache hit:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=a1c8d2d5-f3e0-5740-cb25-d150119fd493
- a cache miss:
https://dev.azure.com/spiderlightning/slight/_build/results?buildId=357&view=logs&j=70fcc8e8-cc68-58a0-49dd-bf3991baaf6b&t=5cfd1f4a-b154-515d-6662-392410763baa
That said, most of them result in cache misses. Checking the caching post job, I see:
It should say:
I'm not too sure what's causing this issue. The keys are fine, and the path is correct — I've tried multiple configurations, and even changing to CacheBeta.
**To Reproduce**
n/a
**Additional context**
n/a",1,fix caching on azure description of the bug our caching has been working somewhat intermittently i ve ran our pipelines after an empty commit to see our caching take place on the same agent we got a cache hit a cache miss that said most of them result in cache misses checking the caching post job i see img width alt image src it should say img width alt image src i m not too sure what s causing this issue the keys are fine and the path is correct — i ve tried multiple configurations and even changing to cachebeta to reproduce n a additional context n a,1
20478,10521091223.0,IssuesEvent,2019-09-30 04:28:08,scxbush/bushnodegoat,https://api.github.com/repos/scxbush/bushnodegoat,opened,CVE-2019-10746 (High) detected in mixin-deep-1.3.1.tgz,security vulnerability,"## CVE-2019-10746 - High Severity Vulnerability
Vulnerable Library - mixin-deep-1.3.1.tgz
Deeply mix the properties of objects into the first object. Like merge-deep, but doesn't clone.
mixin-deep is vulnerable to Prototype Pollution in versions before 1.3.2 and version 2.0.0. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
mixin-deep is vulnerable to Prototype Pollution in versions before 1.3.2 and version 2.0.0. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
",0,cve high detected in mixin deep tgz cve high severity vulnerability vulnerable library mixin deep tgz deeply mix the properties of objects into the first object like merge deep but doesn t clone library home page a href path to dependency file tmp ws scm bushnodegoat package json path to vulnerable library tmp ws scm bushnodegoat node modules mixin deep package json dependency hierarchy grunt cli tgz root library liftoff tgz findup sync tgz micromatch tgz snapdragon tgz base tgz x mixin deep tgz vulnerable library found in head commit a href vulnerability details mixin deep is vulnerable to prototype pollution in versions before and version the function mixin deep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0
3476,13385780971.0,IssuesEvent,2020-09-02 13:54:47,DynamoRIO/dynamorio,https://api.github.com/repos/DynamoRIO/dynamorio,closed,remove guard page counts from heap and cache runtime option values,Maintainability Type-Feature,"For heap and cache unit sizes specified in runtime options, we take guard pages
out of the requested size: so asking for 64K gives only 56K of usable space for
4K pages. The original logic was tuned for Windows without -vm_reserve where
the OS allocation granularity matters and we don't want to waste space.
On UNIX, however, with the new 4K (or page size) vmm blocks, and with
-vm_reserve covering most allocations at least for smaller applications, the OS
granularity is less important: xref #2597.
Having the guards included makes it difficult to tune the default sizes based on
actual usage (and even more so when guard pages are sometimes turned off). This
isssue covers making the heap and cache sizes like the cache sizes where the
guard pages are added on top of the requested size. (In #2592 I removed the
debug-build STACK_GUARD_PAGE which was removing a page from the given stack size
to make a guard page: now it matches release where what you ask for is the
usable size you get, for stacks.)
",True,"remove guard page counts from heap and cache runtime option values - For heap and cache unit sizes specified in runtime options, we take guard pages
out of the requested size: so asking for 64K gives only 56K of usable space for
4K pages. The original logic was tuned for Windows without -vm_reserve where
the OS allocation granularity matters and we don't want to waste space.
On UNIX, however, with the new 4K (or page size) vmm blocks, and with
-vm_reserve covering most allocations at least for smaller applications, the OS
granularity is less important: xref #2597.
Having the guards included makes it difficult to tune the default sizes based on
actual usage (and even more so when guard pages are sometimes turned off). This
isssue covers making the heap and cache sizes like the cache sizes where the
guard pages are added on top of the requested size. (In #2592 I removed the
debug-build STACK_GUARD_PAGE which was removing a page from the given stack size
to make a guard page: now it matches release where what you ask for is the
usable size you get, for stacks.)
",1,remove guard page counts from heap and cache runtime option values for heap and cache unit sizes specified in runtime options we take guard pages out of the requested size so asking for gives only of usable space for pages the original logic was tuned for windows without vm reserve where the os allocation granularity matters and we don t want to waste space on unix however with the new or page size vmm blocks and with vm reserve covering most allocations at least for smaller applications the os granularity is less important xref having the guards included makes it difficult to tune the default sizes based on actual usage and even more so when guard pages are sometimes turned off this isssue covers making the heap and cache sizes like the cache sizes where the guard pages are added on top of the requested size in i removed the debug build stack guard page which was removing a page from the given stack size to make a guard page now it matches release where what you ask for is the usable size you get for stacks ,1
1759,6574985638.0,IssuesEvent,2017-09-11 14:41:48,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,TypeError: create_load_balancer() got an unexpected keyword argument 'complex_listeners',affects_2.1 aws bug_report cloud waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_elb_lb
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default
##### OS / ENVIRONMENT
centos 6.2
##### SUMMARY
complex listeners error when adding the listeners parameter in the ec2_elb_lb module from example listen on the module website
##### STEPS TO REPRODUCE
run the playbook to create an aws ELB.
```
tasks:
- name: Test building a staging ELB for purposes of learning ansible ELB creation
ec2_elb_lb:
region: us-east-1
name: test-ansible
state: present
listeners:
- protocol: http
instance_port: 80
load_balancer_port: 80
zones: us-east-1a
```
##### EXPECTED RESULTS
Expect ELB to be created in AWS with the referenced values
##### ACTUAL RESULTS
Playbook fails and has the follow command output.
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 1342, in
main()
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 1326, in main
elb_man.ensure_ok()
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 410, in _do_op
return op(*args, **kwargs)
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 474, in ensure_ok
self._create_elb()
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 701, in _create_elb
scheme=self.scheme)
TypeError: create_load_balancer() got an unexpected keyword argument 'complex_listeners'
fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""ec2_elb_lb""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 1342, in \n main()\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 1326, in main\n elb_man.ensure_ok()\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 410, in _do_op\n return op(*args, **kwargs)\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 474, in ensure_ok\n self._create_elb()\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 701, in _create_elb\n scheme=self.scheme)\nTypeError: create_load_balancer() got an unexpected keyword argument 'complex_listeners'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false}
```
",True,"TypeError: create_load_balancer() got an unexpected keyword argument 'complex_listeners' -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_elb_lb
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default
##### OS / ENVIRONMENT
centos 6.2
##### SUMMARY
complex listeners error when adding the listeners parameter in the ec2_elb_lb module from example listen on the module website
##### STEPS TO REPRODUCE
run the playbook to create an aws ELB.
```
tasks:
- name: Test building a staging ELB for purposes of learning ansible ELB creation
ec2_elb_lb:
region: us-east-1
name: test-ansible
state: present
listeners:
- protocol: http
instance_port: 80
load_balancer_port: 80
zones: us-east-1a
```
##### EXPECTED RESULTS
Expect ELB to be created in AWS with the referenced values
##### ACTUAL RESULTS
Playbook fails and has the follow command output.
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 1342, in
main()
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 1326, in main
elb_man.ensure_ok()
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 410, in _do_op
return op(*args, **kwargs)
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 474, in ensure_ok
self._create_elb()
File ""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py"", line 701, in _create_elb
scheme=self.scheme)
TypeError: create_load_balancer() got an unexpected keyword argument 'complex_listeners'
fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_name"": ""ec2_elb_lb""}, ""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 1342, in \n main()\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 1326, in main\n elb_man.ensure_ok()\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 410, in _do_op\n return op(*args, **kwargs)\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 474, in ensure_ok\n self._create_elb()\n File \""/tmp/ansible_3AL7SI/ansible_module_ec2_elb_lb.py\"", line 701, in _create_elb\n scheme=self.scheme)\nTypeError: create_load_balancer() got an unexpected keyword argument 'complex_listeners'\n"", ""module_stdout"": """", ""msg"": ""MODULE FAILURE"", ""parsed"": false}
```
",1,typeerror create load balancer got an unexpected keyword argument complex listeners issue type bug report component name elb lb ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration default os environment centos summary complex listeners error when adding the listeners parameter in the elb lb module from example listen on the module website steps to reproduce run the playbook to create an aws elb tasks name test building a staging elb for purposes of learning ansible elb creation elb lb region us east name test ansible state present listeners protocol http instance port load balancer port zones us east expected results expect elb to be created in aws with the referenced values actual results playbook fails and has the follow command output an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module elb lb py line in main file tmp ansible ansible module elb lb py line in main elb man ensure ok file tmp ansible ansible module elb lb py line in do op return op args kwargs file tmp ansible ansible module elb lb py line in ensure ok self create elb file tmp ansible ansible module elb lb py line in create elb scheme self scheme typeerror create load balancer got an unexpected keyword argument complex listeners fatal failed changed false failed true invocation module name elb lb module stderr traceback most recent call last n file tmp ansible ansible module elb lb py line in n main n file tmp ansible ansible module elb lb py line in main n elb man ensure ok n file tmp ansible ansible module elb lb py line in do op n return op args kwargs n file tmp ansible ansible module elb lb py line in ensure ok n self create elb n file tmp ansible ansible module elb lb py line in create elb n scheme self scheme ntypeerror create load balancer got an unexpected keyword argument complex listeners n module stdout msg module failure parsed false ,1
3620,14630561247.0,IssuesEvent,2020-12-23 17:58:04,umn-asr/sessions_data_service,https://api.github.com/repos/umn-asr/sessions_data_service,opened,Update to supported version of Rails,maintainability rails EOL sessions,We're currently running Rails 4.2 which is no longer supported.,True,Update to supported version of Rails - We're currently running Rails 4.2 which is no longer supported.,1,update to supported version of rails we re currently running rails which is no longer supported ,1
15591,8969357769.0,IssuesEvent,2019-01-29 10:35:39,rust-lang/rust,https://api.github.com/repos/rust-lang/rust,opened,Tracking Issue for making incremental compilation the default for Release Builds,A-incr-comp C-tracking-issue I-compiletime T-cargo T-compiler T-core WG-compiler-performance,"Since incremental compilation supports being used in conjunction with ThinLTO the runtime performance of incrementally built artifacts is (presumably) roughly on par with non-incrementally built code. At the same time, building things incrementally often is significantly faster (([1.4-5x](https://github.com/rust-lang/rust/pull/56678#issuecomment-446606215) according to perf.rlo). As a consequence it might be a good idea to make Cargo default to incremental compilation for release builds.
Possible caveats that need to be resolved:
- [ ] The initial build is slightly slower with incremental compilation, usually around 10%. We need to decide if this is a worthwhile tradeoff. For `debug` and `check` builds everybody seems to be fine with this already.
- [ ] Some crates, like `style-servo`, are always slower to compile with incr. comp., even if there is just a small change. In the case of `style-servo` that is 62 seconds versus 64-69 seconds on perf.rlo. It is unlikely that this would improve before we make incr. comp. the default. We need to decide if this is a justifiable price to pay for improvements in other projects.
- [ ] Even if incremental compilation becomes the default, one can still always opt out of it via the `CARGO_INCREMENTAL` flag or a local Cargo config. However, this might not be common knowledge, the same as it isn't common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit.
- [ ] It still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation (see below).
## Data on runtime performance of incrementally compiled release artifacts
Apart from anectodal evidence that runtime performance is ""roughly the same"" there have been two attempts to measure this in a more reliable way:
1. PR #56678 did an experiment where we compiled the compiler itself incrementally and then tested how the compiler's runtime performance was affected by this. The results are twofold:
1. In general performance drops by **1-2%** ([compare results](https://perf.rust-lang.org/compare.html?start=3a3121337122637fa11f0e5d42aec67551e8c125&end=26f96e5eea2d6d088fd20ebc14dc90bdf123e4a1) for `clean` builds)
2. For two of the small test cases (`helloworld`, `unify-linearly`) performance drops by 30%. It is known that these test cases are very sensitive to LLVM making the right inlining decisions, which we already saw when switching from single-CGU to non-incremental ThinLTO. This is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with `#[inline]`.
2. For a limited period of time we made incremental compilation the default in Cargo (https://github.com/rust-lang/cargo/pull/6564) in order to see how this affected measurements on [lolbench.rs](https://lolbench.rs). It is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression (#57947). The initial data looks promising: only a handful of the ~600 benchmarks showed performance losses (see https://lolbench.rs/#nightly-2019-01-27). But we need further investigation on how reliable the results are. We might also want to re-run the experiment since the regression can easily be avoided.
One more experiment we should do is compiling Firefox because it is a large Rust codebase with an excellent benchmarking infrastructure (cc @nnethercote).
cc @rust-lang/core @rust-lang/cargo @rust-lang/compiler
",True,"Tracking Issue for making incremental compilation the default for Release Builds - Since incremental compilation supports being used in conjunction with ThinLTO the runtime performance of incrementally built artifacts is (presumably) roughly on par with non-incrementally built code. At the same time, building things incrementally often is significantly faster (([1.4-5x](https://github.com/rust-lang/rust/pull/56678#issuecomment-446606215) according to perf.rlo). As a consequence it might be a good idea to make Cargo default to incremental compilation for release builds.
Possible caveats that need to be resolved:
- [ ] The initial build is slightly slower with incremental compilation, usually around 10%. We need to decide if this is a worthwhile tradeoff. For `debug` and `check` builds everybody seems to be fine with this already.
- [ ] Some crates, like `style-servo`, are always slower to compile with incr. comp., even if there is just a small change. In the case of `style-servo` that is 62 seconds versus 64-69 seconds on perf.rlo. It is unlikely that this would improve before we make incr. comp. the default. We need to decide if this is a justifiable price to pay for improvements in other projects.
- [ ] Even if incremental compilation becomes the default, one can still always opt out of it via the `CARGO_INCREMENTAL` flag or a local Cargo config. However, this might not be common knowledge, the same as it isn't common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit.
- [ ] It still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation (see below).
## Data on runtime performance of incrementally compiled release artifacts
Apart from anectodal evidence that runtime performance is ""roughly the same"" there have been two attempts to measure this in a more reliable way:
1. PR #56678 did an experiment where we compiled the compiler itself incrementally and then tested how the compiler's runtime performance was affected by this. The results are twofold:
1. In general performance drops by **1-2%** ([compare results](https://perf.rust-lang.org/compare.html?start=3a3121337122637fa11f0e5d42aec67551e8c125&end=26f96e5eea2d6d088fd20ebc14dc90bdf123e4a1) for `clean` builds)
2. For two of the small test cases (`helloworld`, `unify-linearly`) performance drops by 30%. It is known that these test cases are very sensitive to LLVM making the right inlining decisions, which we already saw when switching from single-CGU to non-incremental ThinLTO. This is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with `#[inline]`.
2. For a limited period of time we made incremental compilation the default in Cargo (https://github.com/rust-lang/cargo/pull/6564) in order to see how this affected measurements on [lolbench.rs](https://lolbench.rs). It is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression (#57947). The initial data looks promising: only a handful of the ~600 benchmarks showed performance losses (see https://lolbench.rs/#nightly-2019-01-27). But we need further investigation on how reliable the results are. We might also want to re-run the experiment since the regression can easily be avoided.
One more experiment we should do is compiling Firefox because it is a large Rust codebase with an excellent benchmarking infrastructure (cc @nnethercote).
cc @rust-lang/core @rust-lang/cargo @rust-lang/compiler
",0,tracking issue for making incremental compilation the default for release builds since incremental compilation supports being used in conjunction with thinlto the runtime performance of incrementally built artifacts is presumably roughly on par with non incrementally built code at the same time building things incrementally often is significantly faster according to perf rlo as a consequence it might be a good idea to make cargo default to incremental compilation for release builds possible caveats that need to be resolved the initial build is slightly slower with incremental compilation usually around we need to decide if this is a worthwhile tradeoff for debug and check builds everybody seems to be fine with this already some crates like style servo are always slower to compile with incr comp even if there is just a small change in the case of style servo that is seconds versus seconds on perf rlo it is unlikely that this would improve before we make incr comp the default we need to decide if this is a justifiable price to pay for improvements in other projects even if incremental compilation becomes the default one can still always opt out of it via the cargo incremental flag or a local cargo config however this might not be common knowledge the same as it isn t common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit it still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation see below data on runtime performance of incrementally compiled release artifacts apart from anectodal evidence that runtime performance is roughly the same there have been two attempts to measure this in a more reliable way pr did an experiment where we compiled the compiler itself incrementally and then tested how the compiler s runtime performance was affected by this the results are twofold in general performance drops by for clean builds for two of the small test cases helloworld unify linearly performance drops by it is known that these test cases are very sensitive to llvm making the right inlining decisions which we already saw when switching from single cgu to non incremental thinlto this is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with for a limited period of time we made incremental compilation the default in cargo in order to see how this affected measurements on it is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression the initial data looks promising only a handful of the benchmarks showed performance losses see but we need further investigation on how reliable the results are we might also want to re run the experiment since the regression can easily be avoided one more experiment we should do is compiling firefox because it is a large rust codebase with an excellent benchmarking infrastructure cc nnethercote cc rust lang core rust lang cargo rust lang compiler ,0
33698,7198579508.0,IssuesEvent,2018-02-05 13:21:31,ShaikASK/Testing,https://api.github.com/repos/ShaikASK/Testing,opened,"Folders::Incorrect error message is displayed ""Document name already exist"" upon not entering document name ",Defect P2 SumFive Team,"Steps To Replicate :
1. Launch the url :
2. Login as HR Admin user
3. Navigate to 'Settings' menu
4. Select 'Folders' menu >> click on + symbol to add a new folder
5. Go to 'Documents' section displayed at the right side pane >> click on ""Upload""
6. popup windiow is displayed with option ""dropfiles here or click to upload"" >> click on it
7. Browser a PDF document and without entering any document name click on save button
Experienced Behavior : Observed that incorrect error message is displayed ""Document name already exist""
Expected Behavior :Ensure that insted of displaying invalid error message application should prompt the user to enter ""Document name "" which was left empty",1.0,"Folders::Incorrect error message is displayed ""Document name already exist"" upon not entering document name - Steps To Replicate :
1. Launch the url :
2. Login as HR Admin user
3. Navigate to 'Settings' menu
4. Select 'Folders' menu >> click on + symbol to add a new folder
5. Go to 'Documents' section displayed at the right side pane >> click on ""Upload""
6. popup windiow is displayed with option ""dropfiles here or click to upload"" >> click on it
7. Browser a PDF document and without entering any document name click on save button
Experienced Behavior : Observed that incorrect error message is displayed ""Document name already exist""
Expected Behavior :Ensure that insted of displaying invalid error message application should prompt the user to enter ""Document name "" which was left empty",0,folders incorrect error message is displayed document name already exist upon not entering document name steps to replicate launch the url login as hr admin user navigate to settings menu select folders menu click on symbol to add a new folder go to documents section displayed at the right side pane click on upload popup windiow is displayed with option dropfiles here or click to upload click on it browser a pdf document and without entering any document name click on save button experienced behavior observed that incorrect error message is displayed document name already exist expected behavior ensure that insted of displaying invalid error message application should prompt the user to enter document name which was left empty,0
4903,25187081690.0,IssuesEvent,2022-11-11 19:11:18,amyjko/faculty,https://api.github.com/repos/amyjko/faculty,opened,Check for broken links at compile time,maintainability,"We might be able to do this via tests, at least for internal site links.",True,"Check for broken links at compile time - We might be able to do this via tests, at least for internal site links.",1,check for broken links at compile time we might be able to do this via tests at least for internal site links ,1
4805,24758163411.0,IssuesEvent,2022-10-21 20:02:06,MDAnalysis/membrane-curvature,https://api.github.com/repos/MDAnalysis/membrane-curvature,opened,Modernize setup to comply with PEP518 ,Maintainability,"Although still functional, installation with `setup.py` is deprecated. According to [PEP518](https://peps.python.org/pep-0518/#file-format):
> The build system dependencies will be stored in a file named pyproject.toml that is written in the TOML format [[6]](https://peps.python.org/pep-0518/#toml).
Additionally, there are two files that can be deleted in the root directory: [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml)
To fix this issue:
- [ ] Add `pyproject.toml`.
- [ ] If necessary, modify `setup.cfg`.
- [ ] Remove [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml) in root.
",True,"Modernize setup to comply with PEP518 - Although still functional, installation with `setup.py` is deprecated. According to [PEP518](https://peps.python.org/pep-0518/#file-format):
> The build system dependencies will be stored in a file named pyproject.toml that is written in the TOML format [[6]](https://peps.python.org/pep-0518/#toml).
Additionally, there are two files that can be deleted in the root directory: [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml)
To fix this issue:
- [ ] Add `pyproject.toml`.
- [ ] If necessary, modify `setup.cfg`.
- [ ] Remove [.lgtm.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/.lgtm.yml) and [_config.yml](https://github.com/MDAnalysis/membrane-curvature/blob/main/_config.yml) in root.
",1,modernize setup to comply with although still functional installation with setup py is deprecated according to the build system dependencies will be stored in a file named pyproject toml that is written in the toml format additionally there are two files that can be deleted in the root directory and to fix this issue add pyproject toml if necessary modify setup cfg remove and in root ,1
3664,14964448034.0,IssuesEvent,2021-01-27 11:59:55,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,closed,Assert should be preceded and followed by a blank line,Area: analyzer Area: maintainability feature,"A call to `Assert` should be preceded by a blank line if the preceding line contains a call to something that is no `Assert`.
The reason is ease of reading (spotting asserts with ease).
Following should report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```
While following should **not** report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```",True,"Assert should be preceded and followed by a blank line - A call to `Assert` should be preceded by a blank line if the preceding line contains a call to something that is no `Assert`.
The reason is ease of reading (spotting asserts with ease).
Following should report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```
While following should **not** report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```",1,assert should be preceded and followed by a blank line a call to assert should be preceded by a blank line if the preceding line contains a call to something that is no assert the reason is ease of reading spotting asserts with ease following should report a violation c var x var y something assert that x is equalto assert that y is equalto something while following should not report a violation c var x var y something assert that x is equalto assert that y is equalto something ,1
3487,13608073600.0,IssuesEvent,2020-09-23 01:18:35,amyjko/faculty,https://api.github.com/repos/amyjko/faculty,closed,Publications: Make author names symbolic,maintainability,"This will have a few benefits:
* A single author identity that I can use to quickly change names
* Consistent name rendering (e.g., no inconsistent initials)
* Links to more author details.",True,"Publications: Make author names symbolic - This will have a few benefits:
* A single author identity that I can use to quickly change names
* Consistent name rendering (e.g., no inconsistent initials)
* Links to more author details.",1,publications make author names symbolic this will have a few benefits a single author identity that i can use to quickly change names consistent name rendering e g no inconsistent initials links to more author details ,1
3798,16328869621.0,IssuesEvent,2021-05-12 06:30:58,NixOS/nixpkgs,https://api.github.com/repos/NixOS/nixpkgs,opened,ldgallery-viewer: build fails on Darwin,0.kind: bug 11.by: package-maintainer 6.topic: darwin 6.topic: nodejs,"The build of the [ldgallery-viewer package] seems to fail on Darwin.
This seems to be due to a missing native dependency only for that platform.
[ldgallery-viewer package]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/graphics/ldgallery/viewer/default.nix
[Build log on Hydra](https://nix-cache.s3.amazonaws.com/log/76jdsjmfnnddvi5l1qwmfkr2547d5pd1-node_ldgallery-viewer-2.0.0.drv):
```
trying to install from sub 'node_module' directory, skipping Git hooks installation
...............] \ : info lifecycle yorkie@2.0.0~install: yorkie@2.0.0[0m
> ejs@2.7.4 postinstall /nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/ejs
> node ./postinstall.js
Thank you for installing EJS: built with the Jake JavaScript build tool (https://jakejs.com/)
...............] \ : info lifecycle ejs@2.7.4~postinstall: ejs@2.7.4[0m
> fsevents@1.2.12 install /nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents
> node-gyp rebuild
No receipt for 'com.apple.pkg.CLTools_Executables' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLILeo' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLI' found at '/'.
make: Entering directory '/nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents/build'
SOLINK_MODULE(target) Release/.node
CXX(target) Release/obj.target/fse/fsevents.o
../fsevents.cc:10:10: fatal error: 'CoreServices/CoreServices.h' file not found
#include ""CoreServices/CoreServices.h""
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [fse.target.mk:127: Release/obj.target/fse/fsevents.o] Error 1
make: Leaving directory '/nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/nix/store/a23q52llhd17vyp7n9rd256jm9xljm5g-nodejs-12.22.1/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:314:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
gyp ERR! System Darwin 17.7.0
gyp ERR! command ""/nix/store/a23q52llhd17vyp7n9rd256jm9xljm5g-nodejs-12.22.1/bin/node"" ""/nix/store/a23q52llhd17vyp7n9rd256jm9xljm5g-nodejs-12.22.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js"" ""rebuild""
gyp ERR! cwd /nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents
gyp ERR! node -v v12.22.1
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! fsevents@1.2.12 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the fsevents@1.2.12 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /private/tmp/nix-build-node_ldgallery-viewer-2.0.0.drv-0/.npm/_logs/2021-05-09T16_23_02_686Z-debug.log
builder for '/nix/store/76jdsjmfnnddvi5l1qwmfkr2547d5pd1-node_ldgallery-viewer-2.0.0.drv' failed with exit code 1
```
I believe that adding `CoreServices` to the build inputs of the native Node dependencies could solve the issue.
However, I don't have access to any Darwin machine to test that.
Something like:
```patch
diff --git a/pkgs/tools/graphics/ldgallery/viewer/default.nix b/pkgs/tools/graphics/ldgallery/viewer/default.nix
index 9559120069f..3730c9a2dec 100644
--- a/pkgs/tools/graphics/ldgallery/viewer/default.nix
+++ b/pkgs/tools/graphics/ldgallery/viewer/default.nix
@@ -1,4 +1,8 @@
-{ lib, stdenv, fetchFromGitHub, pkgs, nodejs-12_x, pandoc }:
+{ lib, stdenv, fetchFromGitHub, pkgs, nodejs-12_x, pandoc,
+
+ # Darwin-specific
+ CoreServices
+}:
with lib;
@@ -24,6 +28,7 @@ let
nodePkg = nodePackages.package.override {
src = ""${sourcePkg}/viewer"";
postInstall = ""npm run build"";
+ buildInputs = lib.optionals stdenv.isDarwin [ CoreServices ];
};
in
```",True,"ldgallery-viewer: build fails on Darwin - The build of the [ldgallery-viewer package] seems to fail on Darwin.
This seems to be due to a missing native dependency only for that platform.
[ldgallery-viewer package]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/graphics/ldgallery/viewer/default.nix
[Build log on Hydra](https://nix-cache.s3.amazonaws.com/log/76jdsjmfnnddvi5l1qwmfkr2547d5pd1-node_ldgallery-viewer-2.0.0.drv):
```
trying to install from sub 'node_module' directory, skipping Git hooks installation
...............] \ : info lifecycle yorkie@2.0.0~install: yorkie@2.0.0[0m
> ejs@2.7.4 postinstall /nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/ejs
> node ./postinstall.js
Thank you for installing EJS: built with the Jake JavaScript build tool (https://jakejs.com/)
...............] \ : info lifecycle ejs@2.7.4~postinstall: ejs@2.7.4[0m
> fsevents@1.2.12 install /nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents
> node-gyp rebuild
No receipt for 'com.apple.pkg.CLTools_Executables' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLILeo' found at '/'.
No receipt for 'com.apple.pkg.DeveloperToolsCLI' found at '/'.
make: Entering directory '/nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents/build'
SOLINK_MODULE(target) Release/.node
CXX(target) Release/obj.target/fse/fsevents.o
../fsevents.cc:10:10: fatal error: 'CoreServices/CoreServices.h' file not found
#include ""CoreServices/CoreServices.h""
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [fse.target.mk:127: Release/obj.target/fse/fsevents.o] Error 1
make: Leaving directory '/nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/nix/store/a23q52llhd17vyp7n9rd256jm9xljm5g-nodejs-12.22.1/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:314:20)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:276:12)
gyp ERR! System Darwin 17.7.0
gyp ERR! command ""/nix/store/a23q52llhd17vyp7n9rd256jm9xljm5g-nodejs-12.22.1/bin/node"" ""/nix/store/a23q52llhd17vyp7n9rd256jm9xljm5g-nodejs-12.22.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js"" ""rebuild""
gyp ERR! cwd /nix/store/rfhp2asjaf6m6qhk713dkl59wihm8mpz-node_ldgallery-viewer-2.0.0/lib/node_modules/ldgallery-viewer/node_modules/fsevents
gyp ERR! node -v v12.22.1
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! fsevents@1.2.12 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the fsevents@1.2.12 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /private/tmp/nix-build-node_ldgallery-viewer-2.0.0.drv-0/.npm/_logs/2021-05-09T16_23_02_686Z-debug.log
builder for '/nix/store/76jdsjmfnnddvi5l1qwmfkr2547d5pd1-node_ldgallery-viewer-2.0.0.drv' failed with exit code 1
```
I believe that adding `CoreServices` to the build inputs of the native Node dependencies could solve the issue.
However, I don't have access to any Darwin machine to test that.
Something like:
```patch
diff --git a/pkgs/tools/graphics/ldgallery/viewer/default.nix b/pkgs/tools/graphics/ldgallery/viewer/default.nix
index 9559120069f..3730c9a2dec 100644
--- a/pkgs/tools/graphics/ldgallery/viewer/default.nix
+++ b/pkgs/tools/graphics/ldgallery/viewer/default.nix
@@ -1,4 +1,8 @@
-{ lib, stdenv, fetchFromGitHub, pkgs, nodejs-12_x, pandoc }:
+{ lib, stdenv, fetchFromGitHub, pkgs, nodejs-12_x, pandoc,
+
+ # Darwin-specific
+ CoreServices
+}:
with lib;
@@ -24,6 +28,7 @@ let
nodePkg = nodePackages.package.override {
src = ""${sourcePkg}/viewer"";
postInstall = ""npm run build"";
+ buildInputs = lib.optionals stdenv.isDarwin [ CoreServices ];
};
in
```",1,ldgallery viewer build fails on darwin the build of the seems to fail on darwin this seems to be due to a missing native dependency only for that platform trying to install from sub node module directory skipping git hooks installation info lifecycle yorkie install yorkie ejs postinstall nix store node ldgallery viewer lib node modules ldgallery viewer node modules ejs node postinstall js thank you for installing ejs built with the jake javascript build tool info lifecycle ejs postinstall ejs fsevents install nix store node ldgallery viewer lib node modules ldgallery viewer node modules fsevents node gyp rebuild no receipt for com apple pkg cltools executables found at no receipt for com apple pkg developertoolsclileo found at no receipt for com apple pkg developertoolscli found at make entering directory nix store node ldgallery viewer lib node modules ldgallery viewer node modules fsevents build solink module target release node cxx target release obj target fse fsevents o fsevents cc fatal error coreservices coreservices h file not found include coreservices coreservices h error generated make error make leaving directory nix store node ldgallery viewer lib node modules ldgallery viewer node modules fsevents build gyp err build error gyp err stack error make failed with exit code gyp err stack at childprocess onexit nix store nodejs lib node modules npm node modules node gyp lib build js gyp err stack at childprocess emit events js gyp err stack at process childprocess handle onexit internal child process js gyp err system darwin gyp err command nix store nodejs bin node nix store nodejs lib node modules npm node modules node gyp bin node gyp js rebuild gyp err cwd nix store node ldgallery viewer lib node modules ldgallery viewer node modules fsevents gyp err node v gyp err node gyp v gyp err not ok npm err code elifecycle npm err errno npm err fsevents install node gyp rebuild npm err exit status npm err npm err failed at the fsevents install script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err private tmp nix build node ldgallery viewer drv npm logs debug log builder for nix store node ldgallery viewer drv failed with exit code i believe that adding coreservices to the build inputs of the native node dependencies could solve the issue however i don t have access to any darwin machine to test that something like patch diff git a pkgs tools graphics ldgallery viewer default nix b pkgs tools graphics ldgallery viewer default nix index a pkgs tools graphics ldgallery viewer default nix b pkgs tools graphics ldgallery viewer default nix lib stdenv fetchfromgithub pkgs nodejs x pandoc lib stdenv fetchfromgithub pkgs nodejs x pandoc darwin specific coreservices with lib let nodepkg nodepackages package override src sourcepkg viewer postinstall npm run build buildinputs lib optionals stdenv isdarwin in ,1
3451,13215591691.0,IssuesEvent,2020-08-17 00:12:37,ansible/ansible,https://api.github.com/repos/ansible/ansible,closed,Timeout configuration for rax_dns_record,affects_2.1 bot_closed cloud collection collection:community.general feature module needs_collection_redirect needs_maintainer needs_triage support:community,"From @enekofb on 2016-08-26T13:57:58Z
##### ISSUE TYPE
- Feature
##### COMPONENT NAME
rax_dns_record
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing changed
##### OS / ENVIRONMENT
mac osx 10.9.5
##### SUMMARY
Trying to add a dns record through ansible, rax_dns_record hits pyrax default dns timeout resulting in
""msg"": ""The API call to '/domains/4789666/records' did not complete after 5 seconds.""
So even if 5 seconds should be enough in the majority of the cases, it would be great to be able to set a new timeout.
Copied from original issue: ansible/ansible-modules-core#4558
",True,"Timeout configuration for rax_dns_record - From @enekofb on 2016-08-26T13:57:58Z
##### ISSUE TYPE
- Feature
##### COMPONENT NAME
rax_dns_record
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing changed
##### OS / ENVIRONMENT
mac osx 10.9.5
##### SUMMARY
Trying to add a dns record through ansible, rax_dns_record hits pyrax default dns timeout resulting in
""msg"": ""The API call to '/domains/4789666/records' did not complete after 5 seconds.""
So even if 5 seconds should be enough in the majority of the cases, it would be great to be able to set a new timeout.
Copied from original issue: ansible/ansible-modules-core#4558
",1,timeout configuration for rax dns record from enekofb on issue type feature component name rax dns record ansible version ansible config file configured module search path default w o overrides configuration nothing changed os environment mac osx summary trying to add a dns record through ansible rax dns record hits pyrax default dns timeout resulting in msg the api call to domains records did not complete after seconds so even if seconds should be enough in the majority of the cases it would be great to be able to set a new timeout copied from original issue ansible ansible modules core ,1
332689,24347920600.0,IssuesEvent,2022-10-02 15:14:54,ICEI-PUC-Minas-PMV-ADS/pmv-ads-2022-2-e1-proj-web-t7-planejamento-orcamentario,https://api.github.com/repos/ICEI-PUC-Minas-PMV-ADS/pmv-ads-2022-2-e1-proj-web-t7-planejamento-orcamentario,reopened,Contextualizar o projeto,documentation," documentação de contexto é um texto descritivo com a visão geral do projeto abordado, que inclui o contexto, o problema, os objetivos, a justificativa e o público-alvo do projeto.",1.0,"Contextualizar o projeto - documentação de contexto é um texto descritivo com a visão geral do projeto abordado, que inclui o contexto, o problema, os objetivos, a justificativa e o público-alvo do projeto.",0,contextualizar o projeto documentação de contexto é um texto descritivo com a visão geral do projeto abordado que inclui o contexto o problema os objetivos a justificativa e o público alvo do projeto ,0
5365,26987887854.0,IssuesEvent,2023-02-09 17:26:05,pulp/pulp-oci-images,https://api.github.com/repos/pulp/pulp-oci-images,opened,Deduplicate the CI jobs for single-process pulp vs single-process galaxy images,Triage-Needed Maintainability,"These 2 sections are so similar, that they should be differentiated via variables.",True,"Deduplicate the CI jobs for single-process pulp vs single-process galaxy images - These 2 sections are so similar, that they should be differentiated via variables.",1,deduplicate the ci jobs for single process pulp vs single process galaxy images these sections are so similar that they should be differentiated via variables ,1
40427,5216419962.0,IssuesEvent,2017-01-26 10:16:14,AeroScripts/QuestieDev,https://api.github.com/repos/AeroScripts/QuestieDev,closed,Error message on login v3.69,by design hotfix resolved,"Here is a screenshot of the error i get.

I get this every time i log into my character or every time i /reload.
This is a new character, only first quest is accepted and i haven't completed it yet... I click Yes and after reload it's there again! Never goes away...
The first time i launch the addon after deleting the SavedVariables i don't get the message.
",1.0,"Error message on login v3.69 - Here is a screenshot of the error i get.

I get this every time i log into my character or every time i /reload.
This is a new character, only first quest is accepted and i haven't completed it yet... I click Yes and after reload it's there again! Never goes away...
The first time i launch the addon after deleting the SavedVariables i don't get the message.
",0,error message on login here is a screenshot of the error i get i get this every time i log into my character or every time i reload this is a new character only first quest is accepted and i haven t completed it yet i click yes and after reload it s there again never goes away the first time i launch the addon after deleting the savedvariables i don t get the message ,0
5050,25874266099.0,IssuesEvent,2022-12-14 06:26:11,microsoft/DirectXTex,https://api.github.com/repos/microsoft/DirectXTex,closed,Add Spectre mitigation support to NuGet and vcpkg,maintainence,"For **NuGet**, add an alternative library built with `/p:SpectreMitigation=Spectre` for Desktop (not supported by UWP).
For **vcpkg**, add a ``spectre`` feature that sets a CMake build option ``ENABLE_SPECTRE_MITIGATION=ON``.
",True,"Add Spectre mitigation support to NuGet and vcpkg - For **NuGet**, add an alternative library built with `/p:SpectreMitigation=Spectre` for Desktop (not supported by UWP).
For **vcpkg**, add a ``spectre`` feature that sets a CMake build option ``ENABLE_SPECTRE_MITIGATION=ON``.
",1,add spectre mitigation support to nuget and vcpkg for nuget add an alternative library built with p spectremitigation spectre for desktop not supported by uwp for vcpkg add a spectre feature that sets a cmake build option enable spectre mitigation on ,1
24026,2665517131.0,IssuesEvent,2015-03-20 21:03:35,iFixit/iFixitAndroid,https://api.github.com/repos/iFixit/iFixitAndroid,closed,Input validation: Use TextView.setError() for errors,low priority r-All someday,Any input validation should use [`TextView.setError()`](http://developer.android.com/reference/android/widget/TextView.html#setError%28java.lang.CharSequence%29) to display errors. The only place I can think of that doesn't currently do this is login and register. This should simplify the error displaying code and make it look much better too.,1.0,Input validation: Use TextView.setError() for errors - Any input validation should use [`TextView.setError()`](http://developer.android.com/reference/android/widget/TextView.html#setError%28java.lang.CharSequence%29) to display errors. The only place I can think of that doesn't currently do this is login and register. This should simplify the error displaying code and make it look much better too.,0,input validation use textview seterror for errors any input validation should use to display errors the only place i can think of that doesn t currently do this is login and register this should simplify the error displaying code and make it look much better too ,0
5042,25841564190.0,IssuesEvent,2022-12-13 01:03:08,ElasticPerch/websocket,https://api.github.com/repos/ElasticPerch/websocket,opened,"Want to implement rfc7692, but writer side can not be implemented",enhancement waiting on new maintainer feature request,"From websocket created by [smith-30](https://github.com/smith-30): gorilla/websocket#339
Hi,
I'd like to use the context-takeover mechanism defined in rfc7692.
I forked and developed it and I could implement the reader side.
This [branch] (https://github.com/smith-30/websocket/tree/feature/upgrade_writer) is the newest.
However, I am in trouble because I can not implement the writer.
__Implementation I'm thinking__
Attempting to implement context-takeover by attaching flateWriteWrapper to Conn struct.
In flateWriteWrapper, attach flat.Writer called with flat.NewWriterDict.
https://github.com/smith-30/websocket/blob/ee46f8548a106a02264f711a1838887fd3cf58cf/conn.go#L518-L536
I do not want to make flateWriter every time I make a call.
Because performance is very poor.
Not using Pool is because GC may clean it without permission.
Avoid the window of flateWriter disappearing and inconsistency with reader side.
However, in my implementation I can not initialize the truncWriter passed to flateWriter.
In the second execution, truncWriter has state and fails in compress processing.
I'd like to reset the state of truncWriter after compression
Is there a good way to do it?
I am sorry for my poor English.
It would be extremely helpful If you review my implementation..
",True,"Want to implement rfc7692, but writer side can not be implemented - From websocket created by [smith-30](https://github.com/smith-30): gorilla/websocket#339
Hi,
I'd like to use the context-takeover mechanism defined in rfc7692.
I forked and developed it and I could implement the reader side.
This [branch] (https://github.com/smith-30/websocket/tree/feature/upgrade_writer) is the newest.
However, I am in trouble because I can not implement the writer.
__Implementation I'm thinking__
Attempting to implement context-takeover by attaching flateWriteWrapper to Conn struct.
In flateWriteWrapper, attach flat.Writer called with flat.NewWriterDict.
https://github.com/smith-30/websocket/blob/ee46f8548a106a02264f711a1838887fd3cf58cf/conn.go#L518-L536
I do not want to make flateWriter every time I make a call.
Because performance is very poor.
Not using Pool is because GC may clean it without permission.
Avoid the window of flateWriter disappearing and inconsistency with reader side.
However, in my implementation I can not initialize the truncWriter passed to flateWriter.
In the second execution, truncWriter has state and fails in compress processing.
I'd like to reset the state of truncWriter after compression
Is there a good way to do it?
I am sorry for my poor English.
It would be extremely helpful If you review my implementation..
",1,want to implement but writer side can not be implemented from websocket created by gorilla websocket hi i d like to use the context takeover mechanism defined in i forked and developed it and i could implement the reader side this is the newest however i am in trouble because i can not implement the writer implementation i m thinking attempting to implement context takeover by attaching flatewritewrapper to conn struct in flatewritewrapper attach flat writer called with flat newwriterdict i do not want to make flatewriter every time i make a call because performance is very poor not using pool is because gc may clean it without permission avoid the window of flatewriter disappearing and inconsistency with reader side however in my implementation i can not initialize the truncwriter passed to flatewriter in the second execution truncwriter has state and fails in compress processing i d like to reset the state of truncwriter after compression is there a good way to do it i am sorry for my poor english it would be extremely helpful if you review my implementation ,1
669533,22629795325.0,IssuesEvent,2022-06-30 13:48:14,catjacks38/FCC-GAN,https://api.github.com/repos/catjacks38/FCC-GAN,opened,implement a way to use training images of any set size,low priority,"this sounds like a lot of effort to do, but maybe at somepoint i will implement this... or maybe someone else can. i would prefer the latter cause im kinda lazy. easier to merge a PR than actually make it",1.0,"implement a way to use training images of any set size - this sounds like a lot of effort to do, but maybe at somepoint i will implement this... or maybe someone else can. i would prefer the latter cause im kinda lazy. easier to merge a PR than actually make it",0,implement a way to use training images of any set size this sounds like a lot of effort to do but maybe at somepoint i will implement this or maybe someone else can i would prefer the latter cause im kinda lazy easier to merge a pr than actually make it,0
36137,17467866825.0,IssuesEvent,2021-08-06 19:48:27,flutter/flutter,https://api.github.com/repos/flutter/flutter,opened,Consider allowing raster cache entries to survive 1+ frames without usage,engine severe: performance P4,"Currently the raster cache clears all entries that have not been used at the end of the frame. In the case of SVGs, this can lead to repeated rendering jank in scenarios like a scrolling list or tabbar view where the same picture (pending flutter_svg fix) is continually re-rasterized as a user interacts with the application.
For especially complex pictures (https://github.com/flutter/flutter/issues/87826), we should support some method of keeping the cache entry alive past one frame. Some ideas:
* We could tune a threshold for lack of access. Probably flaky, risk optimizing for benchmarks.
* We could tie the entry to the lifetime of the engine Picture object. If the picture isn't disposed, then the framework must be keeping it alive intentionally
* We could provide a new API that returned some sort of raster handle that the framework could manage",True,"Consider allowing raster cache entries to survive 1+ frames without usage - Currently the raster cache clears all entries that have not been used at the end of the frame. In the case of SVGs, this can lead to repeated rendering jank in scenarios like a scrolling list or tabbar view where the same picture (pending flutter_svg fix) is continually re-rasterized as a user interacts with the application.
For especially complex pictures (https://github.com/flutter/flutter/issues/87826), we should support some method of keeping the cache entry alive past one frame. Some ideas:
* We could tune a threshold for lack of access. Probably flaky, risk optimizing for benchmarks.
* We could tie the entry to the lifetime of the engine Picture object. If the picture isn't disposed, then the framework must be keeping it alive intentionally
* We could provide a new API that returned some sort of raster handle that the framework could manage",0,consider allowing raster cache entries to survive frames without usage currently the raster cache clears all entries that have not been used at the end of the frame in the case of svgs this can lead to repeated rendering jank in scenarios like a scrolling list or tabbar view where the same picture pending flutter svg fix is continually re rasterized as a user interacts with the application for especially complex pictures we should support some method of keeping the cache entry alive past one frame some ideas we could tune a threshold for lack of access probably flaky risk optimizing for benchmarks we could tie the entry to the lifetime of the engine picture object if the picture isn t disposed then the framework must be keeping it alive intentionally we could provide a new api that returned some sort of raster handle that the framework could manage,0
4407,22634276877.0,IssuesEvent,2022-06-30 17:18:32,tethysplatform/tethys,https://api.github.com/repos/tethysplatform/tethys,closed,Tethys Developer Version Installation 'staticfiles' is not a registered tag library,maintain dependencies,"I have installed the version the development version using miniconda with the following command :
```bash
conda create -n tethys -c tethysplatform/label/dev -c tethysplatform -c conda-forge tethys-platform
tethys gen portal_config
tethys db configure
```
This installation comes with python 3.10.4 However, when I start the Tethys with the command `tethys manage start` I get the following error:
```bash
(tethys) [gio@gio tethys]$ tethys manage start
Loading Tethys Extensions...
Loading Tethys Apps...
Loading Tethys Extensions...
Loading Tethys Apps...
Performing system checks...
System check identified no issues (0 silenced).
April 20, 2022 - 21:01:14
Django version 3.2.12, using settings 'tethys_portal.settings'
Starting ASGI/Channels version 3.0.4 development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
ERROR:django.request:Internal Server Error: /
Traceback (most recent call last):
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py"", line 1037, in find_library
return parser.libraries[name]
KeyError: 'staticfiles'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py"", line 451, in thread_handler
raise exc_info[1]
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/exception.py"", line 38, in inner
response = await get_response(request)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/base.py"", line 233, in _get_response_async
response = await wrapped_callback(request, *callback_args, **callback_kwargs)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py"", line 414, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/asyncio/tasks.py"", line 408, in wait_for
return await fut
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/current_thread_executor.py"", line 22, in run
result = self.fn(*self.args, **self.kwargs)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py"", line 455, in thread_handler
return func(*args, **kwargs)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/tethys_portal/views/home.py"", line 29, in home
return render(request, template, {""ENABLE_OPEN_SIGNUP"": settings.ENABLE_OPEN_SIGNUP,
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/shortcuts.py"", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader.py"", line 62, in render_to_string
return template.render(context, request)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/backends/django.py"", line 61, in render
return self.template.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 170, in render
return self._render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 162, in _render
return self.nodelist.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 938, in render
bit = node.render_annotated(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 905, in render_annotated
return self.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py"", line 150, in render
return compiled_parent._render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 162, in _render
return self.nodelist.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 938, in render
bit = node.render_annotated(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 905, in render_annotated
return self.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py"", line 62, in render
result = block.nodelist.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 938, in render
bit = node.render_annotated(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 905, in render_annotated
return self.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py"", line 183, in render
template = context.template.engine.select_template(template_name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py"", line 174, in select_template
return self.get_template(template_name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py"", line 143, in get_template
template, origin = self.find_template(template_name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py"", line 125, in find_template
template = loader.get_template(name, skip=skip)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loaders/base.py"", line 29, in get_template
return Template(
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 155, in __init__
self.nodelist = self.compile_nodelist()
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 193, in compile_nodelist
return parser.parse()
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 478, in parse
raise self.error(token, e)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 476, in parse
compiled_result = compile_func(self, token)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py"", line 1088, in load
lib = find_library(parser, name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py"", line 1039, in find_library
raise TemplateSyntaxError(
django.template.exceptions.TemplateSyntaxError: 'staticfiles' is not a registered tag library. Must be one of:
admin_list
admin_modify
admin_urls
analytical
cache
chartbeat
clickmap
clicky
crazy_egg
django_bootstrap5
facebook_pixel
gauges
google_analytics
google_analytics_js
gosquared
gravatar
guardian_tags
hotjar
hubspot
humanize
i18n
intercom
kiss_insights
kiss_metrics
l10n
log
mixpanel
olark
optimizely
performable
piwik
rating_mailru
recaptcha2
rest_framework
session_security_tags
site_settings
snapengage
spring_metrics
static
tags
terms_tags
tethys_gizmos
tethys_services
tz
uservoice
woopra
yandex_metrica
ERROR:django.channels.server:HTTP GET / 500 [0.07, 127.0.0.1:38068]
```
I am able to fix this by editing the file at In template /home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/session_security/templates/session_security/all.html, error at line containing `{% load static from staticfiles %}`
```html
{% comment %}
This demonstrates how to setup session security client side stuff on your own.
It provides sensible defaults so you could start with just::
{% include 'session_security/all.html' %}
{% endcomment %}
{% load session_security_tags %}
{% load i18n l10n %}
{% load static from staticfiles %}
{# If the user is not authenticated then there is no session to secure ! #}
{% if request.user.is_authenticated %}
{# The modal dialog stylesheet, it's pretty light so it should be easy to hack #}
{# Include the template that actually contains the modal dialog #}
{% include 'session_security/dialog.html' %}
```
I would like to know if there is another fix besides editing the file directly ",True,"Tethys Developer Version Installation 'staticfiles' is not a registered tag library - I have installed the version the development version using miniconda with the following command :
```bash
conda create -n tethys -c tethysplatform/label/dev -c tethysplatform -c conda-forge tethys-platform
tethys gen portal_config
tethys db configure
```
This installation comes with python 3.10.4 However, when I start the Tethys with the command `tethys manage start` I get the following error:
```bash
(tethys) [gio@gio tethys]$ tethys manage start
Loading Tethys Extensions...
Loading Tethys Apps...
Loading Tethys Extensions...
Loading Tethys Apps...
Performing system checks...
System check identified no issues (0 silenced).
April 20, 2022 - 21:01:14
Django version 3.2.12, using settings 'tethys_portal.settings'
Starting ASGI/Channels version 3.0.4 development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
ERROR:django.request:Internal Server Error: /
Traceback (most recent call last):
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py"", line 1037, in find_library
return parser.libraries[name]
KeyError: 'staticfiles'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py"", line 451, in thread_handler
raise exc_info[1]
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/exception.py"", line 38, in inner
response = await get_response(request)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/base.py"", line 233, in _get_response_async
response = await wrapped_callback(request, *callback_args, **callback_kwargs)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py"", line 414, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/asyncio/tasks.py"", line 408, in wait_for
return await fut
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/current_thread_executor.py"", line 22, in run
result = self.fn(*self.args, **self.kwargs)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py"", line 455, in thread_handler
return func(*args, **kwargs)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/tethys_portal/views/home.py"", line 29, in home
return render(request, template, {""ENABLE_OPEN_SIGNUP"": settings.ENABLE_OPEN_SIGNUP,
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/shortcuts.py"", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader.py"", line 62, in render_to_string
return template.render(context, request)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/backends/django.py"", line 61, in render
return self.template.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 170, in render
return self._render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 162, in _render
return self.nodelist.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 938, in render
bit = node.render_annotated(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 905, in render_annotated
return self.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py"", line 150, in render
return compiled_parent._render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 162, in _render
return self.nodelist.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 938, in render
bit = node.render_annotated(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 905, in render_annotated
return self.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py"", line 62, in render
result = block.nodelist.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 938, in render
bit = node.render_annotated(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 905, in render_annotated
return self.render(context)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py"", line 183, in render
template = context.template.engine.select_template(template_name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py"", line 174, in select_template
return self.get_template(template_name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py"", line 143, in get_template
template, origin = self.find_template(template_name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py"", line 125, in find_template
template = loader.get_template(name, skip=skip)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loaders/base.py"", line 29, in get_template
return Template(
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 155, in __init__
self.nodelist = self.compile_nodelist()
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 193, in compile_nodelist
return parser.parse()
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 478, in parse
raise self.error(token, e)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py"", line 476, in parse
compiled_result = compile_func(self, token)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py"", line 1088, in load
lib = find_library(parser, name)
File ""/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py"", line 1039, in find_library
raise TemplateSyntaxError(
django.template.exceptions.TemplateSyntaxError: 'staticfiles' is not a registered tag library. Must be one of:
admin_list
admin_modify
admin_urls
analytical
cache
chartbeat
clickmap
clicky
crazy_egg
django_bootstrap5
facebook_pixel
gauges
google_analytics
google_analytics_js
gosquared
gravatar
guardian_tags
hotjar
hubspot
humanize
i18n
intercom
kiss_insights
kiss_metrics
l10n
log
mixpanel
olark
optimizely
performable
piwik
rating_mailru
recaptcha2
rest_framework
session_security_tags
site_settings
snapengage
spring_metrics
static
tags
terms_tags
tethys_gizmos
tethys_services
tz
uservoice
woopra
yandex_metrica
ERROR:django.channels.server:HTTP GET / 500 [0.07, 127.0.0.1:38068]
```
I am able to fix this by editing the file at In template /home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/session_security/templates/session_security/all.html, error at line containing `{% load static from staticfiles %}`
```html
{% comment %}
This demonstrates how to setup session security client side stuff on your own.
It provides sensible defaults so you could start with just::
{% include 'session_security/all.html' %}
{% endcomment %}
{% load session_security_tags %}
{% load i18n l10n %}
{% load static from staticfiles %}
{# If the user is not authenticated then there is no session to secure ! #}
{% if request.user.is_authenticated %}
{# The modal dialog stylesheet, it's pretty light so it should be easy to hack #}
{# Include the template that actually contains the modal dialog #}
{% include 'session_security/dialog.html' %}
```
I would like to know if there is another fix besides editing the file directly ",1,tethys developer version installation staticfiles is not a registered tag library i have installed the version the development version using miniconda with the following command bash conda create n tethys c tethysplatform label dev c tethysplatform c conda forge tethys platform tethys gen portal config tethys db configure this installation comes with python however when i start the tethys with the command tethys manage start i get the following error bash tethys tethys manage start loading tethys extensions loading tethys apps loading tethys extensions loading tethys apps performing system checks system check identified no issues silenced april django version using settings tethys portal settings starting asgi channels version development server at quit the server with control c error django request internal server error traceback most recent call last file home gio envs tethys lib site packages django template defaulttags py line in find library return parser libraries keyerror staticfiles during handling of the above exception another exception occurred traceback most recent call last file home gio envs tethys lib site packages asgiref sync py line in thread handler raise exc info file home gio envs tethys lib site packages django core handlers exception py line in inner response await get response request file home gio envs tethys lib site packages django core handlers base py line in get response async response await wrapped callback request callback args callback kwargs file home gio envs tethys lib site packages asgiref sync py line in call ret await asyncio wait for future timeout none file home gio envs tethys lib asyncio tasks py line in wait for return await fut file home gio envs tethys lib site packages asgiref current thread executor py line in run result self fn self args self kwargs file home gio envs tethys lib site packages asgiref sync py line in thread handler return func args kwargs file home gio envs tethys lib site packages tethys portal views home py line in home return render request template enable open signup settings enable open signup file home gio envs tethys lib site packages django shortcuts py line in render content loader render to string template name context request using using file home gio envs tethys lib site packages django template loader py line in render to string return template render context request file home gio envs tethys lib site packages django template backends django py line in render return self template render context file home gio envs tethys lib site packages django template base py line in render return self render context file home gio envs tethys lib site packages django template base py line in render return self nodelist render context file home gio envs tethys lib site packages django template base py line in render bit node render annotated context file home gio envs tethys lib site packages django template base py line in render annotated return self render context file home gio envs tethys lib site packages django template loader tags py line in render return compiled parent render context file home gio envs tethys lib site packages django template base py line in render return self nodelist render context file home gio envs tethys lib site packages django template base py line in render bit node render annotated context file home gio envs tethys lib site packages django template base py line in render annotated return self render context file home gio envs tethys lib site packages django template loader tags py line in render result block nodelist render context file home gio envs tethys lib site packages django template base py line in render bit node render annotated context file home gio envs tethys lib site packages django template base py line in render annotated return self render context file home gio envs tethys lib site packages django template loader tags py line in render template context template engine select template template name file home gio envs tethys lib site packages django template engine py line in select template return self get template template name file home gio envs tethys lib site packages django template engine py line in get template template origin self find template template name file home gio envs tethys lib site packages django template engine py line in find template template loader get template name skip skip file home gio envs tethys lib site packages django template loaders base py line in get template return template file home gio envs tethys lib site packages django template base py line in init self nodelist self compile nodelist file home gio envs tethys lib site packages django template base py line in compile nodelist return parser parse file home gio envs tethys lib site packages django template base py line in parse raise self error token e file home gio envs tethys lib site packages django template base py line in parse compiled result compile func self token file home gio envs tethys lib site packages django template defaulttags py line in load lib find library parser name file home gio envs tethys lib site packages django template defaulttags py line in find library raise templatesyntaxerror django template exceptions templatesyntaxerror staticfiles is not a registered tag library must be one of admin list admin modify admin urls analytical cache chartbeat clickmap clicky crazy egg django facebook pixel gauges google analytics google analytics js gosquared gravatar guardian tags hotjar hubspot humanize intercom kiss insights kiss metrics log mixpanel olark optimizely performable piwik rating mailru rest framework session security tags site settings snapengage spring metrics static tags terms tags tethys gizmos tethys services tz uservoice woopra yandex metrica error django channels server http get i am able to fix this by editing the file at in template home gio envs tethys lib site packages session security templates session security all html error at line containing load static from staticfiles html comment this demonstrates how to setup session security client side stuff on your own it provides sensible defaults so you could start with just include session security all html endcomment load session security tags load load static from staticfiles if the user is not authenticated then there is no session to secure if request user is authenticated the modal dialog stylesheet it s pretty light so it should be easy to hack include the template that actually contains the modal dialog include session security dialog html i would like to know if there is another fix besides editing the file directly ,1
583761,17397948604.0,IssuesEvent,2021-08-02 15:34:42,thespacedoctor/sherlock,https://api.github.com/repos/thespacedoctor/sherlock,closed,SDSS name conversion error,priority: 1 type: bug,"Getting some skipped annotations because the code falls over when it tries to convert SDSS objects into names.
https://star.pst.qub.ac.uk/sne/atlas4/candidate/1122646551085257700/
```
* 08:35:34 - ERROR: /usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/astrocalc/coords/unit_conversion.py:dec_decimal_to_sexegesimal:443 > DEC must be between -90 - 90 degrees
Traceback (most recent call last):
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/bin/sherlock"", line 8, in
sys.exit(main())
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/cl_utils.py"", line 208, in main
classifier.classify()
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/transient_classifier.py"", line 493, in classify
self.updatePeakMags)
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/transient_classifier.py"", line 1545, in update_classification_annotations_and_summaries
match=row, updatePeakMagnitudes=updatePeakMagnitudes)
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/transient_classifier.py"", line 1948, in generate_match_annotation
betterName = ""SDSS J"" + ra[0:9] + dec[0:9]
TypeError: 'int' object is not subscriptable
```",1.0,"SDSS name conversion error - Getting some skipped annotations because the code falls over when it tries to convert SDSS objects into names.
https://star.pst.qub.ac.uk/sne/atlas4/candidate/1122646551085257700/
```
* 08:35:34 - ERROR: /usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/astrocalc/coords/unit_conversion.py:dec_decimal_to_sexegesimal:443 > DEC must be between -90 - 90 degrees
Traceback (most recent call last):
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/bin/sherlock"", line 8, in
sys.exit(main())
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/cl_utils.py"", line 208, in main
classifier.classify()
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/transient_classifier.py"", line 493, in classify
self.updatePeakMags)
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/transient_classifier.py"", line 1545, in update_classification_annotations_and_summaries
match=row, updatePeakMagnitudes=updatePeakMagnitudes)
File ""/usr/local/swtools/python/atls/anaconda3/envs/sherlock37/lib/python3.7/site-packages/sherlock/transient_classifier.py"", line 1948, in generate_match_annotation
betterName = ""SDSS J"" + ra[0:9] + dec[0:9]
TypeError: 'int' object is not subscriptable
```",0,sdss name conversion error getting some skipped annotations because the code falls over when it tries to convert sdss objects into names error usr local swtools python atls envs lib site packages astrocalc coords unit conversion py dec decimal to sexegesimal dec must be between degrees traceback most recent call last file usr local swtools python atls envs bin sherlock line in sys exit main file usr local swtools python atls envs lib site packages sherlock cl utils py line in main classifier classify file usr local swtools python atls envs lib site packages sherlock transient classifier py line in classify self updatepeakmags file usr local swtools python atls envs lib site packages sherlock transient classifier py line in update classification annotations and summaries match row updatepeakmagnitudes updatepeakmagnitudes file usr local swtools python atls envs lib site packages sherlock transient classifier py line in generate match annotation bettername sdss j ra dec typeerror int object is not subscriptable ,0
1323,5672330721.0,IssuesEvent,2017-04-12 00:57:05,duckduckgo/zeroclickinfo-spice,https://api.github.com/repos/duckduckgo/zeroclickinfo-spice,closed,Public Holidays: under-triggering?,Maintainer Input Requested," Should this IA cover queries such as ""when is labor day 2016?""
---
IA Page: http://duck.co/ia/view/public_holidays
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati
",True,"Public Holidays: under-triggering? - Should this IA cover queries such as ""when is labor day 2016?""
---
IA Page: http://duck.co/ia/view/public_holidays
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati
",1,public holidays under triggering should this ia cover queries such as when is labor day ia page sekhavati ,1
4065,19024914685.0,IssuesEvent,2021-11-24 01:24:54,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,"sam sync goes into ""running incremental build"" loop",blocked/more-info-needed area/sync maintainer/need-followup,"### Description:
After I successfully sync with sam sync, as soon as I do anything in VSCode (even before saving any changes), it will begin going into this ""running incremental build"" loop and continue to cycle:

### Steps to reproduce:
Run `sam sync --stack-name --watch --profile --region us-east-1`
### Observed result:
Answer ""y"" when asked if you want to proceed, then watch Cloudformation initiate deployment, then says ""Infra sync completed"". After that, it sits idle, as it should, but as soon as I click into the template.yml, it starts looping.
### Expected result:
It should remain idle until I save something.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows 10
2. `sam --version`: 1.35.0
3. AWS region: us-east-1
`Add --debug flag to command you are running`
Did this and it is constantly spamming the following (note StageName and LayerArn are Parameters):
2021-11-04 13:41:29,579 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,599 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,599 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,618 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,637 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,637 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,656 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,674 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,674 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,693 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,712 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,712 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,731 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,749 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,749 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,767 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,788 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,789 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,865 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,883 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,884 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,906 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,925 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,925 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,943 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,968 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,968 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,987 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,004 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,005 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,025 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,044 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,044 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,065 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,084 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,085 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,105 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,124 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,125 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,145 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,166 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,166 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,186 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,205 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,205 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,223 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,245 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,245 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,263 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,283 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,283 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,303 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,322 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,322 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,343 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,361 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,361 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,379 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,397 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,397 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,418 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,437 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,438 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,457 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,478 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,479 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,497 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,515 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,516 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,538 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,558 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,558 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,577 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,596 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,596 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,615 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,637 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,637 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,655 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,674 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,675 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,693 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,714 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,715 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,737 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,754 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,755 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,773 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,791 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,792 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,814 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,843 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,844 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,863 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,885 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,885 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,908 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,926 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,927 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,946 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,964 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,965 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,983 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,001 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,001 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,019 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,038 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,039 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,057 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,076 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,077 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,095 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,114 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,114 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,135 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,155 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,155 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,174 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,192 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,193 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,213 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,233 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,234 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,253 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,271 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,272 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,290 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,311 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,311 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,331 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,352 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,353 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,371 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,390 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,390 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,409 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,428 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,428 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,447 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,465 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,466 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,485 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,502 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,503 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,522 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,542 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,543 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,561 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,580 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,580 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,599 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,618 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,618 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,639 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,657 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,658 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,679 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,700 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,701 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,720 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,751 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,752 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,775 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,795 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,796 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,821 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,840 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,841 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,861 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,878 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,878 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,897 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,916 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,917 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,943 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,965 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,966 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,985 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,003 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,003 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,022 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,041 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,041 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,061 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,082 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,083 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,104 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,126 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,127 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,152 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,172 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,173 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,193 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,212 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,212 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,231 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,248 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,249 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,267 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,290 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,290 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}",True,"sam sync goes into ""running incremental build"" loop - ### Description:
After I successfully sync with sam sync, as soon as I do anything in VSCode (even before saving any changes), it will begin going into this ""running incremental build"" loop and continue to cycle:

### Steps to reproduce:
Run `sam sync --stack-name --watch --profile --region us-east-1`
### Observed result:
Answer ""y"" when asked if you want to proceed, then watch Cloudformation initiate deployment, then says ""Infra sync completed"". After that, it sits idle, as it should, but as soon as I click into the template.yml, it starts looping.
### Expected result:
It should remain idle until I save something.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows 10
2. `sam --version`: 1.35.0
3. AWS region: us-east-1
`Add --debug flag to command you are running`
Did this and it is constantly spamming the following (note StageName and LayerArn are Parameters):
2021-11-04 13:41:29,579 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,599 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,599 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,618 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,637 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,637 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,656 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,674 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,674 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,693 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,712 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,712 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,731 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,749 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,749 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,767 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,788 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,789 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,865 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,883 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,884 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,906 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,925 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,925 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,943 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,968 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:29,968 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:29,987 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,004 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,005 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,025 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,044 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,044 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,065 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,084 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,085 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,105 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,124 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,125 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,145 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,166 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,166 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,186 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,205 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,205 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,223 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,245 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,245 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,263 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,283 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,283 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,303 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,322 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,322 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,343 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,361 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,361 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,379 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,397 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,397 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,418 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,437 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,438 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,457 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,478 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,479 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,497 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,515 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,516 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,538 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,558 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,558 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,577 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,596 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,596 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,615 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,637 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,637 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,655 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,674 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,675 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,693 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,714 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,715 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,737 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,754 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,755 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,773 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,791 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,792 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,814 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,843 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,844 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,863 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,885 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,885 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,908 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,926 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,927 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,946 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,964 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:30,965 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:30,983 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,001 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,001 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,019 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,038 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,039 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,057 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,076 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,077 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,095 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,114 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,114 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,135 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,155 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,155 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,174 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,192 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,193 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,213 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,233 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,234 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,253 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,271 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,272 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,290 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,311 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,311 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,331 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,352 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,353 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,371 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,390 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,390 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,409 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,428 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,428 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,447 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,465 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,466 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,485 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,502 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,503 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,522 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,542 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,543 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,561 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,580 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,580 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,599 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,618 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,618 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,639 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,657 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,658 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,679 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,700 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,701 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,720 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,751 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,752 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,775 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,795 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,796 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,821 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,840 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,841 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,861 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,878 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,878 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,897 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,916 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,917 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,943 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,965 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:31,966 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:31,985 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,003 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,003 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,022 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,041 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,041 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,061 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,082 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,083 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,104 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,126 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,127 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,152 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,172 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,173 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,193 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,212 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,212 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,231 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,248 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,249 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,267 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}
2021-11-04 13:41:32,290 | Found the same SyncFlow in queue. Skip adding.
2021-11-04 13:41:32,290 | Collected default values for parameters: {'StageName': 'v1', 'LayerArn': 'v1'}",1,sam sync goes into running incremental build loop description after i successfully sync with sam sync as soon as i do anything in vscode even before saving any changes it will begin going into this running incremental build loop and continue to cycle steps to reproduce run sam sync stack name watch profile region us east observed result answer y when asked if you want to proceed then watch cloudformation initiate deployment then says infra sync completed after that it sits idle as it should but as soon as i click into the template yml it starts looping expected result it should remain idle until i save something additional environment details ex windows mac amazon linux etc os windows sam version aws region us east add debug flag to command you are running did this and it is constantly spamming the following note stagename and layerarn are parameters collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn collected default values for parameters stagename layerarn found the same syncflow in queue skip adding collected default values for parameters stagename layerarn ,1
581192,17287866846.0,IssuesEvent,2021-07-24 04:27:42,CryptoBlades/cryptoblades,https://api.github.com/repos/CryptoBlades/cryptoblades,closed,Combat Screen - Responsive UI for desktop and mobile,priority-medium type-frontend,"Enemy Cards are now overflowing on a 1920x1080 Screen Resolution


",1.0,"Combat Screen - Responsive UI for desktop and mobile - Enemy Cards are now overflowing on a 1920x1080 Screen Resolution


",0,combat screen responsive ui for desktop and mobile enemy cards are now overflowing on a screen resolution ,0
2073,7024832173.0,IssuesEvent,2017-12-23 00:17:34,tgstation/tgstation,https://api.github.com/repos/tgstation/tgstation,closed,Floor code needs to be refactored,Maintainability/Hinders improvements Not a bug,"The recent refactoring of map files to use different floor types is causing predictable bugs and a new refactoring of floor code is necessary to fix things once and for all.
",True,"Floor code needs to be refactored - The recent refactoring of map files to use different floor types is causing predictable bugs and a new refactoring of floor code is necessary to fix things once and for all.
",1,floor code needs to be refactored the recent refactoring of map files to use different floor types is causing predictable bugs and a new refactoring of floor code is necessary to fix things once and for all ,1
1139,4998879085.0,IssuesEvent,2016-12-09 21:20:09,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2: group_id doesn't seem to accept a list of security groups as documented,affects_2.1 aws bug_report cloud waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
Ansible doesn't accept a list of security groups when launching an instance, while it's [documented](https://docs.ansible.com/ansible/ec2_module.html) that it does: ""security group id (or list of ids) to use with the instance""
##### STEPS TO REPRODUCE
Execute the following task:
```
- name: ""Launch proxy instance: {{ owner }}_i_{{ env }}_dmz_2""
ec2:
region: ""{{ region }}""
image: ""{{ ami_id }}""
count_tag:
Name: ""{{ owner }}_i_{{ env }}_dmz_2""
exact_count: 1
#wait: yes
instance_type: ""t2.micro""
key_name: ""{{ ssh_key_name}}""
# TODO
group_id:
- ""{{ preprod_sg_ssh.group_id }}""
- ""{{ preprod_sg_proxy.group_id }}""
vpc_subnet_id: ""{{ preprod_subnet_dmz_2 }}""
zone: ""{{ az2 }}""
instance_tags:
Name: ""{{ owner }}_i_{{ env }}_dmz_2""
Env: ""{{ owner }}_{{ env }}""
Tier: ""{{ owner }}_{{ env }}_dmz""
register: preprod_i_dmz_2 # preprod_i_dmz_2.tagged_instances[0].id
```
##### EXPECTED RESULTS
Expected that the two SGs specified would be assigned to the instance.
##### ACTUAL RESULTS
None of the two SGs were assigned to the instance. The instance had the default SG assigned.
",True,"ec2: group_id doesn't seem to accept a list of security groups as documented -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
Ansible doesn't accept a list of security groups when launching an instance, while it's [documented](https://docs.ansible.com/ansible/ec2_module.html) that it does: ""security group id (or list of ids) to use with the instance""
##### STEPS TO REPRODUCE
Execute the following task:
```
- name: ""Launch proxy instance: {{ owner }}_i_{{ env }}_dmz_2""
ec2:
region: ""{{ region }}""
image: ""{{ ami_id }}""
count_tag:
Name: ""{{ owner }}_i_{{ env }}_dmz_2""
exact_count: 1
#wait: yes
instance_type: ""t2.micro""
key_name: ""{{ ssh_key_name}}""
# TODO
group_id:
- ""{{ preprod_sg_ssh.group_id }}""
- ""{{ preprod_sg_proxy.group_id }}""
vpc_subnet_id: ""{{ preprod_subnet_dmz_2 }}""
zone: ""{{ az2 }}""
instance_tags:
Name: ""{{ owner }}_i_{{ env }}_dmz_2""
Env: ""{{ owner }}_{{ env }}""
Tier: ""{{ owner }}_{{ env }}_dmz""
register: preprod_i_dmz_2 # preprod_i_dmz_2.tagged_instances[0].id
```
##### EXPECTED RESULTS
Expected that the two SGs specified would be assigned to the instance.
##### ACTUAL RESULTS
None of the two SGs were assigned to the instance. The instance had the default SG assigned.
",1, group id doesn t seem to accept a list of security groups as documented issue type bug report component name ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary ansible doesn t accept a list of security groups when launching an instance while it s that it does security group id or list of ids to use with the instance steps to reproduce execute the following task name launch proxy instance owner i env dmz region region image ami id count tag name owner i env dmz exact count wait yes instance type micro key name ssh key name todo group id preprod sg ssh group id preprod sg proxy group id vpc subnet id preprod subnet dmz zone instance tags name owner i env dmz env owner env tier owner env dmz register preprod i dmz preprod i dmz tagged instances id expected results expected that the two sgs specified would be assigned to the instance actual results none of the two sgs were assigned to the instance the instance had the default sg assigned ,1
69473,14988725638.0,IssuesEvent,2021-01-29 01:58:19,MValle21/oathkeeper,https://api.github.com/repos/MValle21/oathkeeper,opened,"CVE-2019-0205 (High) detected in github.com/uber/jaeger-client-go/thrift-fe3fa553c313b32f58cc684a59a4d48f03e07df9, github.com/uber/jaeger-client-go-fe3fa553c313b32f58cc684a59a4d48f03e07df9",security vulnerability,"## CVE-2019-0205 - High Severity Vulnerability
Vulnerable Libraries - github.com/uber/jaeger-client-go/thrift-fe3fa553c313b32f58cc684a59a4d48f03e07df9, github.com/uber/jaeger-client-go-fe3fa553c313b32f58cc684a59a4d48f03e07df9
In Apache Thrift all versions up to and including 0.12.0, a server or client may run into an endless loop when feed with specific input data. Because the issue had already been partially fixed in version 0.11.0, depending on the installed version it affects only certain language bindings.
In Apache Thrift all versions up to and including 0.12.0, a server or client may run into an endless loop when feed with specific input data. Because the issue had already been partially fixed in version 0.11.0, depending on the installed version it affects only certain language bindings.
TensorFlow is an Open Source Machine Learning Framework. In versions prior to 2.11.1 a malicious invalid input crashes a tensorflow model (Check Failed) and can be used to trigger a denial of service attack. A proof of concept can be constructed with the `Convolution3DTranspose` function. This Convolution3DTranspose layer is a very common API in modern neural networks. The ML models containing such vulnerable components could be deployed in ML applications or as cloud services. This failure could be potentially used to trigger a denial of service attack on ML cloud services. An attacker must have privilege to provide input to a `Convolution3DTranspose` call. This issue has been patched and users are advised to upgrade to version 2.11.1. There are no known workarounds for this vulnerability.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2023-25661 (Medium) detected in tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2023-25661 - Medium Severity Vulnerability
Vulnerable Library - tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl
TensorFlow is an open source machine learning framework for everyone.
TensorFlow is an Open Source Machine Learning Framework. In versions prior to 2.11.1 a malicious invalid input crashes a tensorflow model (Check Failed) and can be used to trigger a denial of service attack. A proof of concept can be constructed with the `Convolution3DTranspose` function. This Convolution3DTranspose layer is a very common API in modern neural networks. The ML models containing such vulnerable components could be deployed in ML applications or as cloud services. This failure could be potentially used to trigger a denial of service attack on ML cloud services. An attacker must have privilege to provide input to a `Convolution3DTranspose` call. This issue has been patched and users are advised to upgrade to version 2.11.1. There are no known workarounds for this vulnerability.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file jupyter tensorflow cpu requirements txt path to vulnerable library jupyter tensorflow cpu requirements txt dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch master vulnerability details tensorflow is an open source machine learning framework in versions prior to a malicious invalid input crashes a tensorflow model check failed and can be used to trigger a denial of service attack a proof of concept can be constructed with the function this layer is a very common api in modern neural networks the ml models containing such vulnerable components could be deployed in ml applications or as cloud services this failure could be potentially used to trigger a denial of service attack on ml cloud services an attacker must have privilege to provide input to a call this issue has been patched and users are advised to upgrade to version there are no known workarounds for this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
3991,18449940147.0,IssuesEvent,2021-10-15 09:15:25,tgstation/tgstation,https://api.github.com/repos/tgstation/tgstation,closed,"Drowsiness Needs to be ""automatically"" capped",Maintainability/Hinders improvements Good First Issue,"ANYTIME you are having to touch drowsiness, it requires a clamp so usually the code is
`Mob.drowsiness = max(mob.drowsiness + effect, 0)`
This is because negative values will cause semi-permanent drowsiness, and has resulted in a non-zero amount of bugs (#61396 for example).
What i would like is to have a proc that handles this on the mob similar to other values such as sleepiness and damage.
The code then will likely look like
`Mob.AdjustDrowsiness(effect)`
This will need to be changed for any part of the code that touches drowsiness.",True,"Drowsiness Needs to be ""automatically"" capped - ANYTIME you are having to touch drowsiness, it requires a clamp so usually the code is
`Mob.drowsiness = max(mob.drowsiness + effect, 0)`
This is because negative values will cause semi-permanent drowsiness, and has resulted in a non-zero amount of bugs (#61396 for example).
What i would like is to have a proc that handles this on the mob similar to other values such as sleepiness and damage.
The code then will likely look like
`Mob.AdjustDrowsiness(effect)`
This will need to be changed for any part of the code that touches drowsiness.",1,drowsiness needs to be automatically capped anytime you are having to touch drowsiness it requires a clamp so usually the code is mob drowsiness max mob drowsiness effect this is because negative values will cause semi permanent drowsiness and has resulted in a non zero amount of bugs for example what i would like is to have a proc that handles this on the mob similar to other values such as sleepiness and damage the code then will likely look like mob adjustdrowsiness effect this will need to be changed for any part of the code that touches drowsiness ,1
762591,26724662918.0,IssuesEvent,2023-01-29 15:12:25,azerothcore/azerothcore-wotlk,https://api.github.com/repos/azerothcore/azerothcore-wotlk,closed,Giant Yeti are pickpocketable,Confirmed 30-39 Priority-Low Loot Good first issue,"https://github.com/chromiecraft/chromiecraft/issues/4910
### What client do you play on?
enUS
### Faction
Both
### Content Phase:
30-39
### Current Behaviour
Giant Yeti in Alterac Mountains can be pickpocketed
https://user-images.githubusercontent.com/11332559/215287067-c2e34b48-1239-4a01-a26d-25d61ea8d5a0.mp4
### Expected Blizzlike Behaviour
They should not be able to be pickpocketed
### Source
Wrath Classic
https://user-images.githubusercontent.com/11332559/215287077-462808df-f33b-467e-87ee-8ae366831644.mp4
wowhead page with no pickpocket loot section
https://www.wowhead.com/wotlk/npc=2251/giant-yeti
### Steps to reproduce the problem
.learn 1784
.learn 921
.tele alteracmountains
### Extra Notes
https://wowgaming.altervista.org/aowow/?npc=2251
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/3fee40be7dac90ca99f73e6ae809b18ed7135ef6
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna](https://github.com/azerothcore/mod-eluna)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-progression-system](https://github.com/azerothcore/mod-progression-system)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion)
- [mod-transmog](https://github.com/azerothcore/mod-transmog)
- [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp)
- [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses)
- [mod-zone-difficulty](https://github.com/azerothcore/mod-zone-difficulty)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc)
- [lua-event-scripts](https://github.com/55Honey/Acore_eventScripts)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
",1.0,"Giant Yeti are pickpocketable - https://github.com/chromiecraft/chromiecraft/issues/4910
### What client do you play on?
enUS
### Faction
Both
### Content Phase:
30-39
### Current Behaviour
Giant Yeti in Alterac Mountains can be pickpocketed
https://user-images.githubusercontent.com/11332559/215287067-c2e34b48-1239-4a01-a26d-25d61ea8d5a0.mp4
### Expected Blizzlike Behaviour
They should not be able to be pickpocketed
### Source
Wrath Classic
https://user-images.githubusercontent.com/11332559/215287077-462808df-f33b-467e-87ee-8ae366831644.mp4
wowhead page with no pickpocket loot section
https://www.wowhead.com/wotlk/npc=2251/giant-yeti
### Steps to reproduce the problem
.learn 1784
.learn 921
.tele alteracmountains
### Extra Notes
https://wowgaming.altervista.org/aowow/?npc=2251
### AC rev. hash/commit
https://github.com/chromiecraft/azerothcore-wotlk/commit/3fee40be7dac90ca99f73e6ae809b18ed7135ef6
### Operating system
Ubuntu 20.04
### Modules
- [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot)
- [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward)
- [mod-cfbg](https://github.com/azerothcore/mod-cfbg)
- [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter)
- [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp)
- [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch)
- [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings)
- [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset)
- [mod-eluna](https://github.com/azerothcore/mod-eluna)
- [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker)
- [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena)
- [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg)
- [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check)
- [mod-progression-system](https://github.com/azerothcore/mod-progression-system)
- [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles)
- [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer)
- [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache)
- [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion)
- [mod-transmog](https://github.com/azerothcore/mod-transmog)
- [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp)
- [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses)
- [mod-zone-difficulty](https://github.com/azerothcore/mod-zone-difficulty)
- [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy)
- [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc)
- [lua-event-scripts](https://github.com/55Honey/Acore_eventScripts)
- [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward)
- [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend)
- [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind)
- [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements)
- [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck)
### Customizations
None
### Server
ChromieCraft
",0,giant yeti are pickpocketable what client do you play on enus faction both content phase current behaviour giant yeti in alterac mountains can be pickpocketed expected blizzlike behaviour they should not be able to be pickpocketed source wrath classic wowhead page with no pickpocket loot section steps to reproduce the problem learn learn tele alteracmountains extra notes ac rev hash commit operating system ubuntu modules customizations none server chromiecraft ,0
2860,10270778543.0,IssuesEvent,2019-08-23 12:34:51,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,reopened,"Do not use .Equals, ==, !=, <=, <, >=, > in assertions",Area: analyzer Area: maintainability feature,"If assertions such as `Assert.That(...)` contain operators such as `==`, `!=`, `<=`, `<`, `>=`, `>` or the `Equals()` method, then those methods test for booleans.
Hence it is hard to understand when the test fails with an assertion that e.g. `true` was expected but `false` was received.
The test situation would be much easier to understand if the test would immediately state what was expected (e.g `5` was expected but `12` was received).",True,"Do not use .Equals, ==, !=, <=, <, >=, > in assertions - If assertions such as `Assert.That(...)` contain operators such as `==`, `!=`, `<=`, `<`, `>=`, `>` or the `Equals()` method, then those methods test for booleans.
Hence it is hard to understand when the test fails with an assertion that e.g. `true` was expected but `false` was received.
The test situation would be much easier to understand if the test would immediately state what was expected (e.g `5` was expected but `12` was received).",1,do not use equals in assertions if assertions such as assert that contain operators such as or the equals method then those methods test for booleans hence it is hard to understand when the test fails with an assertion that e g true was expected but false was received the test situation would be much easier to understand if the test would immediately state what was expected e g was expected but was received ,1
2960,10616895996.0,IssuesEvent,2019-10-12 15:11:24,arcticicestudio/snowsaw,https://api.github.com/repos/arcticicestudio/snowsaw,opened,Development dependency global installation workaround ,context-workflow scope-compatibility scope-maintainability type-improvement,"The workaround implemented in #82 (PR #85) works, but due to the explicitly disabled _module_ mode it is not possible to define pinned dependency versions but only using the normal `go get` behavior to build the repositories default branch.
A better workaround is to run the `go get` command for development & build dependencies/packages outside of the project's root directory. Therefore the `go.mod` file is not in scope for the `go get` command and is therefore not updated. In order to use pinned versions the `GO1111MODULE=on` environment variable must be explicitly set when running the `go get` command.
See https://github.com/golang/go/issues/30515 for more details and proposed solutions that might be added to Go's build tools in future versions.",True,"Development dependency global installation workaround - The workaround implemented in #82 (PR #85) works, but due to the explicitly disabled _module_ mode it is not possible to define pinned dependency versions but only using the normal `go get` behavior to build the repositories default branch.
A better workaround is to run the `go get` command for development & build dependencies/packages outside of the project's root directory. Therefore the `go.mod` file is not in scope for the `go get` command and is therefore not updated. In order to use pinned versions the `GO1111MODULE=on` environment variable must be explicitly set when running the `go get` command.
See https://github.com/golang/go/issues/30515 for more details and proposed solutions that might be added to Go's build tools in future versions.",1,development dependency global installation workaround the workaround implemented in pr works but due to the explicitly disabled module mode it is not possible to define pinned dependency versions but only using the normal go get behavior to build the repositories default branch a better workaround is to run the go get command for development build dependencies packages outside of the project s root directory therefore the go mod file is not in scope for the go get command and is therefore not updated in order to use pinned versions the on environment variable must be explicitly set when running the go get command see for more details and proposed solutions that might be added to go s build tools in future versions ,1
2259,7934525672.0,IssuesEvent,2018-07-08 20:16:19,chocolatey/chocolatey-package-requests,https://api.github.com/repos/chocolatey/chocolatey-package-requests,closed,RFM - Centbrowser,Status: Available For Maintainer(s),"Looking for someone to take over this package https://chocolatey.org/packages/CentBrowser as I no longer want to deal with it anymore.. Just getting too annoying having to deal with their crap tier CDN that they use..
Until someone picks this package up, it's going to sit dormant in my deprecated folder https://github.com/JourneyOver/chocolatey-packages/tree/master/deprecated/centbrowser.",True,"RFM - Centbrowser - Looking for someone to take over this package https://chocolatey.org/packages/CentBrowser as I no longer want to deal with it anymore.. Just getting too annoying having to deal with their crap tier CDN that they use..
Until someone picks this package up, it's going to sit dormant in my deprecated folder https://github.com/JourneyOver/chocolatey-packages/tree/master/deprecated/centbrowser.",1,rfm centbrowser looking for someone to take over this package as i no longer want to deal with it anymore just getting too annoying having to deal with their crap tier cdn that they use until someone picks this package up it s going to sit dormant in my deprecated folder ,1
3042,11277788696.0,IssuesEvent,2020-01-15 04:13:00,ansible/ansible,https://api.github.com/repos/ansible/ansible,closed,terraform plan does not provide output for 2.9.0,affects_2.9 bug cloud has_pr module needs_maintainer needs_triage support:community,"##### SUMMARY
Running terraform plan from ansible 2.9.0 does not provide expected output in stdout. This may be similar to issue #46589.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform
##### ANSIBLE VERSION
```
# ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
```
# ansible-config dump --only-changed
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_silent
```
##### OS / ENVIRONMENT
- Ubuntu 18.04.3 LTS
```
# uname -a
Linux 57f42cc031d6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# terraform --version
Terraform v0.12.13
# pip freeze
ansible==2.9.0
asn1crypto==0.24.0
cryptography==2.1.4
enum34==1.1.6
httplib2==0.9.2
idna==2.6
ipaddress==1.0.17
Jinja2==2.10
keyring==10.6.0
keyrings.alt==3.0
MarkupSafe==1.0
paramiko==2.0.0
pyasn1==0.4.2
pycrypto==2.6.1
pygobject==3.26.1
pyxdg==0.25
PyYAML==3.12
SecretStorage==2.3.1
six==1.11.0
# pip3 freeze
asn1crypto==0.24.0
awscli==1.16.272
boto==2.49.0
boto3==1.10.8
botocore==1.13.8
chardet==3.0.4
colorama==0.4.1
cryptography==2.1.4
docutils==0.15.2
idna==2.6
jmespath==0.9.4
keyring==10.6.0
keyrings.alt==3.0
pyasn1==0.4.7
pycrypto==2.6.1
pygobject==3.26.1
python-apt==1.6.4
python-dateutil==2.8.1
python-debian==0.1.32
pyxdg==0.25
PyYAML==5.1.2
rsa==3.4.2
s3transfer==0.2.1
SecretStorage==2.3.1
six==1.11.0
unattended-upgrades==0.1
urllib3==1.25.6
virtualenv==15.1.0
```
##### STEPS TO REPRODUCE
Run terraform plan from ansible
```
############
# Run terraform plan
- name: Run terraform plan for VPC resources
terraform:
state: planned
project_path: ""{{ fileTerraformWorkingPath }}""
plan_file: ""{{ fileTerraformWorkingPath }}/plan.tfplan""
force_init: yes
backend_config:
region: ""{{ nameAWSRegion }}""
register: vpc_tf_stack
############
# Print information about the base VPC
- name: Display everything with terraform
debug:
var: vpc_tf_stack
- name: Display all terraform output for VPC
debug:
var: vpc_tf_stack.stdout
```
##### EXPECTED RESULTS
Terraform plan stdout is shown.
##### ACTUAL RESULTS
Nothing is shown in stdout.
```
TASK [planTerraform : Display everything with terraform] *************************************************************************************************************************************
ok: [127.0.0.1] => {
""vpc_tf_stack"": {
""changed"": false,
""command"": ""/usr/bin/terraform plan -input=false -no-color -detailed-exitcode -out /tmp/terraform-20191104141619238338988/plan.tfplan /tmp/terraform-20191104141619238338988/plan.tfplan"",
""failed"": false,
""outputs"": {
""idAdminHostedZone"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""Z23423423423423423423""
},
""idExternalSecurityGroup"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""sg-12312312312312312""
},
""idIGW"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""igw-12312312312312312""
},
""idLocalHostedZone"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""Z12312312312312312312""
},
""idPublicRouteTable"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""rtb-12312312312312312""
},
""idVPC"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""vpc-12312312312312312""
}
},
""state"": ""planned"",
""stderr"": """",
""stderr_lines"": [],
""stdout"": """",
""stdout_lines"": [],
""workspace"": ""default""
}
}
TASK [planTerraform : Display all terraform output for VPC] **********************************************************************************************************************************
ok: [127.0.0.1] => {
""vpc_tf_stack.stdout"": """"
}
```
",True,"terraform plan does not provide output for 2.9.0 - ##### SUMMARY
Running terraform plan from ansible 2.9.0 does not provide expected output in stdout. This may be similar to issue #46589.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform
##### ANSIBLE VERSION
```
# ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
```
# ansible-config dump --only-changed
ALLOW_WORLD_READABLE_TMPFILES(/etc/ansible/ansible.cfg) = True
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = debug
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_silent
```
##### OS / ENVIRONMENT
- Ubuntu 18.04.3 LTS
```
# uname -a
Linux 57f42cc031d6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
# terraform --version
Terraform v0.12.13
# pip freeze
ansible==2.9.0
asn1crypto==0.24.0
cryptography==2.1.4
enum34==1.1.6
httplib2==0.9.2
idna==2.6
ipaddress==1.0.17
Jinja2==2.10
keyring==10.6.0
keyrings.alt==3.0
MarkupSafe==1.0
paramiko==2.0.0
pyasn1==0.4.2
pycrypto==2.6.1
pygobject==3.26.1
pyxdg==0.25
PyYAML==3.12
SecretStorage==2.3.1
six==1.11.0
# pip3 freeze
asn1crypto==0.24.0
awscli==1.16.272
boto==2.49.0
boto3==1.10.8
botocore==1.13.8
chardet==3.0.4
colorama==0.4.1
cryptography==2.1.4
docutils==0.15.2
idna==2.6
jmespath==0.9.4
keyring==10.6.0
keyrings.alt==3.0
pyasn1==0.4.7
pycrypto==2.6.1
pygobject==3.26.1
python-apt==1.6.4
python-dateutil==2.8.1
python-debian==0.1.32
pyxdg==0.25
PyYAML==5.1.2
rsa==3.4.2
s3transfer==0.2.1
SecretStorage==2.3.1
six==1.11.0
unattended-upgrades==0.1
urllib3==1.25.6
virtualenv==15.1.0
```
##### STEPS TO REPRODUCE
Run terraform plan from ansible
```
############
# Run terraform plan
- name: Run terraform plan for VPC resources
terraform:
state: planned
project_path: ""{{ fileTerraformWorkingPath }}""
plan_file: ""{{ fileTerraformWorkingPath }}/plan.tfplan""
force_init: yes
backend_config:
region: ""{{ nameAWSRegion }}""
register: vpc_tf_stack
############
# Print information about the base VPC
- name: Display everything with terraform
debug:
var: vpc_tf_stack
- name: Display all terraform output for VPC
debug:
var: vpc_tf_stack.stdout
```
##### EXPECTED RESULTS
Terraform plan stdout is shown.
##### ACTUAL RESULTS
Nothing is shown in stdout.
```
TASK [planTerraform : Display everything with terraform] *************************************************************************************************************************************
ok: [127.0.0.1] => {
""vpc_tf_stack"": {
""changed"": false,
""command"": ""/usr/bin/terraform plan -input=false -no-color -detailed-exitcode -out /tmp/terraform-20191104141619238338988/plan.tfplan /tmp/terraform-20191104141619238338988/plan.tfplan"",
""failed"": false,
""outputs"": {
""idAdminHostedZone"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""Z23423423423423423423""
},
""idExternalSecurityGroup"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""sg-12312312312312312""
},
""idIGW"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""igw-12312312312312312""
},
""idLocalHostedZone"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""Z12312312312312312312""
},
""idPublicRouteTable"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""rtb-12312312312312312""
},
""idVPC"": {
""sensitive"": false,
""type"": ""string"",
""value"": ""vpc-12312312312312312""
}
},
""state"": ""planned"",
""stderr"": """",
""stderr_lines"": [],
""stdout"": """",
""stdout_lines"": [],
""workspace"": ""default""
}
}
TASK [planTerraform : Display all terraform output for VPC] **********************************************************************************************************************************
ok: [127.0.0.1] => {
""vpc_tf_stack.stdout"": """"
}
```
",1,terraform plan does not provide output for summary running terraform plan from ansible does not provide expected output in stdout this may be similar to issue issue type bug report component name terraform ansible version ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible python version default oct configuration ansible config dump only changed allow world readable tmpfiles etc ansible ansible cfg true ansible nocows etc ansible ansible cfg true default gathering etc ansible ansible cfg explicit default stdout callback etc ansible ansible cfg debug host key checking etc ansible ansible cfg false interpreter python etc ansible ansible cfg auto silent os environment ubuntu lts uname a linux generic ubuntu smp tue oct utc gnu linux terraform version terraform pip freeze ansible cryptography idna ipaddress keyring keyrings alt markupsafe paramiko pycrypto pygobject pyxdg pyyaml secretstorage six freeze awscli boto botocore chardet colorama cryptography docutils idna jmespath keyring keyrings alt pycrypto pygobject python apt python dateutil python debian pyxdg pyyaml rsa secretstorage six unattended upgrades virtualenv steps to reproduce run terraform plan from ansible run terraform plan name run terraform plan for vpc resources terraform state planned project path fileterraformworkingpath plan file fileterraformworkingpath plan tfplan force init yes backend config region nameawsregion register vpc tf stack print information about the base vpc name display everything with terraform debug var vpc tf stack name display all terraform output for vpc debug var vpc tf stack stdout expected results terraform plan stdout is shown actual results nothing is shown in stdout task ok vpc tf stack changed false command usr bin terraform plan input false no color detailed exitcode out tmp terraform plan tfplan tmp terraform plan tfplan failed false outputs idadminhostedzone sensitive false type string value idexternalsecuritygroup sensitive false type string value sg idigw sensitive false type string value igw idlocalhostedzone sensitive false type string value idpublicroutetable sensitive false type string value rtb idvpc sensitive false type string value vpc state planned stderr stderr lines stdout stdout lines workspace default task ok vpc tf stack stdout ,1
1924,6588331931.0,IssuesEvent,2017-09-14 02:25:07,tomchentw/react-google-maps,https://api.github.com/repos/tomchentw/react-google-maps,closed,Considering switching to airbnb styleguide,Maintainers_please_review,"https://github.com/airbnb/javascript
Right now we extend `react-app`'s eslint configuration with a few additions such as quotes, jsx-quotes and comma dangle. It's great that we have a styleguide in place but it would be helpful for future contributors to take on a canonical and idiomatic way of writing modern javascript.
This issue is more of a discussion piece before I do a quick refactor. Input appreciated 👍
For reference, our styles:
https://github.com/tomchentw/react-google-maps/blob/master/.eslintrc
``` javascript
{
""extends"": ""react-app"",
""rules"": {
// Possible Errors
""comma-dangle"": [""error"", ""always-multiline""],
// Stylistic Issues
""jsx-quotes"": [""error"", ""prefer-double""],
""quotes"": [""error"", ""backtick""]
}
}
```
",True,"Considering switching to airbnb styleguide - https://github.com/airbnb/javascript
Right now we extend `react-app`'s eslint configuration with a few additions such as quotes, jsx-quotes and comma dangle. It's great that we have a styleguide in place but it would be helpful for future contributors to take on a canonical and idiomatic way of writing modern javascript.
This issue is more of a discussion piece before I do a quick refactor. Input appreciated 👍
For reference, our styles:
https://github.com/tomchentw/react-google-maps/blob/master/.eslintrc
``` javascript
{
""extends"": ""react-app"",
""rules"": {
// Possible Errors
""comma-dangle"": [""error"", ""always-multiline""],
// Stylistic Issues
""jsx-quotes"": [""error"", ""prefer-double""],
""quotes"": [""error"", ""backtick""]
}
}
```
",1,considering switching to airbnb styleguide right now we extend react app s eslint configuration with a few additions such as quotes jsx quotes and comma dangle it s great that we have a styleguide in place but it would be helpful for future contributors to take on a canonical and idiomatic way of writing modern javascript this issue is more of a discussion piece before i do a quick refactor input appreciated 👍 for reference our styles javascript extends react app rules possible errors comma dangle stylistic issues jsx quotes quotes ,1
218156,16960086212.0,IssuesEvent,2021-06-29 01:41:09,anhdtqwerty/thpt,https://api.github.com/repos/anhdtqwerty/thpt,closed,Major | Quản lý Bộ môn | Thêm Bộ môn | Thêm và hiển thị thành công bộ môn mới bị trùng ,dev-done test-verified,"Thêm Bộ môn mới
Step:
1. Click ""Thêm Bộ môn""
2. Nhập bộ môn có tên bị trùng hoặc có chứa dấu space vị trí đầu/ cuối
3. Bấm ""Lưu""
Actual:
Thêm và hiển thị thành công bộ môn mới có tên bị trùng hoặc chứa space đầu/ cuối
Expect:
Thêm mới không thành công
Thông báo ""Bộ môn đã tồn tại""
",1.0,"Major | Quản lý Bộ môn | Thêm Bộ môn | Thêm và hiển thị thành công bộ môn mới bị trùng - Thêm Bộ môn mới
Step:
1. Click ""Thêm Bộ môn""
2. Nhập bộ môn có tên bị trùng hoặc có chứa dấu space vị trí đầu/ cuối
3. Bấm ""Lưu""
Actual:
Thêm và hiển thị thành công bộ môn mới có tên bị trùng hoặc chứa space đầu/ cuối
Expect:
Thêm mới không thành công
Thông báo ""Bộ môn đã tồn tại""
",0,major quản lý bộ môn thêm bộ môn thêm và hiển thị thành công bộ môn mới bị trùng thêm bộ môn mới step click thêm bộ môn nhập bộ môn có tên bị trùng hoặc có chứa dấu space vị trí đầu cuối bấm lưu actual thêm và hiển thị thành công bộ môn mới có tên bị trùng hoặc chứa space đầu cuối expect thêm mới không thành công thông báo bộ môn đã tồn tại img width alt src img width alt src img width alt src ,0
796261,28104209152.0,IssuesEvent,2023-03-30 22:19:34,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,"CONFIG_ROM_START_OFFSET change to be added to cause west flash reset fails, in NXP RT11xx platforms",bug priority: low platform: NXP,"**Describe the bug**
for RT1170 use below command to download the image, debug reset does not work, but hardwarereset works.
`
west build -b mimxrt1170_evk_cm7
west flash --runner jlink --tool-opt='-SelectEmuBySN 000725371294' -- '--device=MIMXRT1176xxxA_M7' '--reset-after-load'
`
the serial console has no output at all, press reset key works. but this blocks twister testing. bisect found below commit introduce such problem
`
commit 44628735b870b2806bbea47477c3300bb624ad31 (refs/bisect/bad)
Author: Daniel Leung
Date: Mon Feb 13 13:31:26 2023 -0800
linker: rom_start_offset: add to address instead of set
The CONFIG_ROM_START_OFFSET is supposed to be added to
the current when linking, instead of having the current
address set to it. So fix that.
Not sure why it worked up to this point, but llvm/clang/lld
complained that it could not move location counter backward.
Signed-off-by: Daniel Leung
diff --git a/arch/common/rom_start_offset.ld b/arch/common/rom_start_offset.ld
index 2e82f30d71..8546391614 100644
--- a/arch/common/rom_start_offset.ld
+++ b/arch/common/rom_start_offset.ld
@@ -4,5 +4,5 @@
* SPDX-License-Identifier: Apache-2.0
*/
-. = CONFIG_ROM_START_OFFSET;
+. += CONFIG_ROM_START_OFFSET;
. = ALIGN(4);
`
**Expected behavior**
west flash can works
**Impact**
twister testing
**Logs and console output**
whent this happen no console output
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used: zephyr-v3.3.0-530-g1751c8f0f5
this impacts mimxrt1170_evk_cm7 and mimxrt1160_evk_cm7",1.0,"CONFIG_ROM_START_OFFSET change to be added to cause west flash reset fails, in NXP RT11xx platforms - **Describe the bug**
for RT1170 use below command to download the image, debug reset does not work, but hardwarereset works.
`
west build -b mimxrt1170_evk_cm7
west flash --runner jlink --tool-opt='-SelectEmuBySN 000725371294' -- '--device=MIMXRT1176xxxA_M7' '--reset-after-load'
`
the serial console has no output at all, press reset key works. but this blocks twister testing. bisect found below commit introduce such problem
`
commit 44628735b870b2806bbea47477c3300bb624ad31 (refs/bisect/bad)
Author: Daniel Leung
Date: Mon Feb 13 13:31:26 2023 -0800
linker: rom_start_offset: add to address instead of set
The CONFIG_ROM_START_OFFSET is supposed to be added to
the current when linking, instead of having the current
address set to it. So fix that.
Not sure why it worked up to this point, but llvm/clang/lld
complained that it could not move location counter backward.
Signed-off-by: Daniel Leung
diff --git a/arch/common/rom_start_offset.ld b/arch/common/rom_start_offset.ld
index 2e82f30d71..8546391614 100644
--- a/arch/common/rom_start_offset.ld
+++ b/arch/common/rom_start_offset.ld
@@ -4,5 +4,5 @@
* SPDX-License-Identifier: Apache-2.0
*/
-. = CONFIG_ROM_START_OFFSET;
+. += CONFIG_ROM_START_OFFSET;
. = ALIGN(4);
`
**Expected behavior**
west flash can works
**Impact**
twister testing
**Logs and console output**
whent this happen no console output
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used: zephyr-v3.3.0-530-g1751c8f0f5
this impacts mimxrt1170_evk_cm7 and mimxrt1160_evk_cm7",0,config rom start offset change to be added to cause west flash reset fails in nxp platforms describe the bug for use below command to download the image debug reset does not work but hardwarereset works west build b evk west flash runner jlink tool opt selectemubysn device reset after load the serial console has no output at all press reset key works but this blocks twister testing bisect found below commit introduce such problem commit refs bisect bad author daniel leung date mon feb linker rom start offset add to address instead of set the config rom start offset is supposed to be added to the current when linking instead of having the current address set to it so fix that not sure why it worked up to this point but llvm clang lld complained that it could not move location counter backward signed off by daniel leung diff git a arch common rom start offset ld b arch common rom start offset ld index a arch common rom start offset ld b arch common rom start offset ld spdx license identifier apache config rom start offset config rom start offset align expected behavior west flash can works impact twister testing logs and console output whent this happen no console output environment please complete the following information os e g linux macos windows toolchain e g zephyr sdk commit sha or version used zephyr this impacts evk and evk ,0
152868,5871404869.0,IssuesEvent,2017-05-15 08:37:44,PX4/Firmware,https://api.github.com/repos/PX4/Firmware,closed,Losing GPS does not trigger fail safe,bug priority-critical,"On latest master, failsafe when losing GPS does not work. A few test cases:
* When I unclick `use GPS` for the EKF2_AID_MASK (in air), the only thing appearing is `WARN [navigator] global position timeout` after a few seconds but nothing happens. The quad just starts to drift.
* When I stop sending GPS (in air), nothing happens. The quad just starts to drift.
* When the home position was set but has no GPS anymore (on ground), I can still takeoff. Then the quad drifts
I guess the EKF reports local position as still valid?",1.0,"Losing GPS does not trigger fail safe - On latest master, failsafe when losing GPS does not work. A few test cases:
* When I unclick `use GPS` for the EKF2_AID_MASK (in air), the only thing appearing is `WARN [navigator] global position timeout` after a few seconds but nothing happens. The quad just starts to drift.
* When I stop sending GPS (in air), nothing happens. The quad just starts to drift.
* When the home position was set but has no GPS anymore (on ground), I can still takeoff. Then the quad drifts
I guess the EKF reports local position as still valid?",0,losing gps does not trigger fail safe on latest master failsafe when losing gps does not work a few test cases when i unclick use gps for the aid mask in air the only thing appearing is warn global position timeout after a few seconds but nothing happens the quad just starts to drift when i stop sending gps in air nothing happens the quad just starts to drift when the home position was set but has no gps anymore on ground i can still takeoff then the quad drifts i guess the ekf reports local position as still valid ,0
3703,15112294480.0,IssuesEvent,2021-02-08 21:38:30,backdrop-ops/contrib,https://api.github.com/repos/backdrop-ops/contrib,opened,Application to join: [larsdesigns],Maintainer application,"Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [ ] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [ ] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [ ] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
Node Noindex
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/node_noindex/issues/3197373
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/larsdesigns/backdrop-contrib-node_noindex
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**If you have chosen option #3 above, do you agree to undergo this same maintainer application process again, should you decide to contribute code in the future?**
YES
",True,"Application to join: [larsdesigns] - Hello and welcome to the contrib application process! We're happy to have you :)
## Please note these 3 requirements for new contrib projects:
- [ ] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [ ] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [ ] If porting a Drupal 7 project, Maintain the Git history from Drupal.
## Please provide the following information:
**The name of your module, theme, or layout**
Node Noindex
**(Optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/node_noindex/issues/3197373
**Post a link to your new Backdrop project under your own GitHub account (option #1)**
https://github.com/larsdesigns/backdrop-contrib-node_noindex
**If you have chosen option #2 or #1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
**If you have chosen option #3 above, do you agree to undergo this same maintainer application process again, should you decide to contribute code in the future?**
YES
",1,application to join hello and welcome to the contrib application process we re happy to have you please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal please provide the following information the name of your module theme or layout node noindex optional post a link here to an issue in the drupal org queue notifying the drupal maintainers that you are working on a backdrop port of their project post a link to your new backdrop project under your own github account option if you have chosen option or above do you agree to the yes if you have chosen option above do you agree to undergo this same maintainer application process again should you decide to contribute code in the future yes ,1
64664,6916760544.0,IssuesEvent,2017-11-29 04:38:34,brave/browser-laptop,https://api.github.com/repos/brave/browser-laptop,reopened,Claim token button should be hidden if a wallet is recovered with ugp token,0.19.x bug feature/ledger initiative/bat-payments QA/test-plan-specified release-notes/exclude,"
### Description
Claim token button should be hidden if a wallet is recovered with ugp token
### Steps to Reproduce
1. Clean install 0.19.96
2. Create wallet and claim ugp tokens
3. Backup wallet and clear browser profile
4. Create a new browser profile
5. Enable wallet and recover the wallet from step 3
6. Claim free token button is still shown, clicking on the button shows promotion not available
**Actual result:**
```
>>> {""statusCode"":422,""error"":""Unprocessable Entity"",""message"":""promotion already in use""}
Problem claiming promotion Error: HTTP response 422 for PUT /v1/grants/51245323-b45b---ee0584e183de
```
**Expected result:**
If wallet already contains ugp tokens, claim token button should not be shown
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
Brave | 0.19.96
-- | --
rev | 9d72944
Muon | 4.5.16
libchromiumcontent | 62.0.3202.94
V8 | 6.2.414.42
Node.js | 7.9.0
Update Channel | Release
OS Platform | Microsoft Windows
OS Release | 10.0.15063
OS Architecture | x64
**Reproducible on current live release:**
N/A
### Additional Information
Confirmed by @LaurenWags on macOS",1.0,"Claim token button should be hidden if a wallet is recovered with ugp token -
### Description
Claim token button should be hidden if a wallet is recovered with ugp token
### Steps to Reproduce
1. Clean install 0.19.96
2. Create wallet and claim ugp tokens
3. Backup wallet and clear browser profile
4. Create a new browser profile
5. Enable wallet and recover the wallet from step 3
6. Claim free token button is still shown, clicking on the button shows promotion not available
**Actual result:**
```
>>> {""statusCode"":422,""error"":""Unprocessable Entity"",""message"":""promotion already in use""}
Problem claiming promotion Error: HTTP response 422 for PUT /v1/grants/51245323-b45b---ee0584e183de
```
**Expected result:**
If wallet already contains ugp tokens, claim token button should not be shown
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
Brave | 0.19.96
-- | --
rev | 9d72944
Muon | 4.5.16
libchromiumcontent | 62.0.3202.94
V8 | 6.2.414.42
Node.js | 7.9.0
Update Channel | Release
OS Platform | Microsoft Windows
OS Release | 10.0.15063
OS Architecture | x64
**Reproducible on current live release:**
N/A
### Additional Information
Confirmed by @LaurenWags on macOS",0,claim token button should be hidden if a wallet is recovered with ugp token test plan description claim token button should be hidden if a wallet is recovered with ugp token steps to reproduce clean install create wallet and claim ugp tokens backup wallet and clear browser profile create a new browser profile enable wallet and recover the wallet from step claim free token button is still shown clicking on the button shows promotion not available actual result statuscode error unprocessable entity message promotion already in use problem claiming promotion error http response for put grants expected result if wallet already contains ugp tokens claim token button should not be shown reproduces how often brave version about brave info brave rev muon libchromiumcontent node js update channel release os platform microsoft windows os release os architecture reproducible on current live release n a additional information confirmed by laurenwags on macos,0
546577,16015107088.0,IssuesEvent,2021-04-20 15:08:16,publiclab/plots2,https://api.github.com/repos/publiclab/plots2,closed,Image upload failing,bug help wanted high-priority,"Image upload fails with this error in the body section ""Section 4"" of `/post` route
 this is on the https://publiclab.org/, was able to replicate it on https://unstable.publiclab.org and locally
Note: Image upload is working fine on ""Section 2"" of `/post`
Template: https://github.com/publiclab/plots2/blob/main/app/views/editor/rich.html.erb
",1.0,"Image upload failing - Image upload fails with this error in the body section ""Section 4"" of `/post` route
 this is on the https://publiclab.org/, was able to replicate it on https://unstable.publiclab.org and locally
Note: Image upload is working fine on ""Section 2"" of `/post`
Template: https://github.com/publiclab/plots2/blob/main/app/views/editor/rich.html.erb
",0,image upload failing image upload fails with this error in the body section section of post route this is on the was able to replicate it on and locally note image upload is working fine on section of post template ,0
386619,11447918717.0,IssuesEvent,2020-02-06 01:24:05,servicemesher/istio-official-translation,https://api.github.com/repos/servicemesher/istio-official-translation,closed,/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md,finished lang/zh priority/P0 sync/update version/1.5,"Source File: [/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md](https://github.com/istio/istio.io/tree/master/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md)
Diff:
~~~diff
diff --git a/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md b/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md
index 81c25bb27..722456a51 100644
--- a/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md
+++ b/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md
@@ -17,37 +17,20 @@ Then you configure a gateway to provide ingress access to the service via host `
## Generate client and server certificates and keys
-1. Clone the repository:
+For this task you can use your favorite tool to generate certificates and keys. The commands below use
+[openssl](https://man.openbsd.org/openssl.1)
- {{< text bash >}}
- $ git clone https://github.com/nicholasjackson/mtls-go-example
- {{< /text >}}
-
-1. Change directory to the cloned repository:
-
- {{< text bash >}}
- $ pushd mtls-go-example
- {{< /text >}}
-
-1. Generate the certificates for `nginx.example.com`.
- Use any password with the following command:
+1. Create a root certificate and private key to sign the certificate for your services:
{{< text bash >}}
- $ ./generate.sh nginx.example.com
+ $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
{{< /text >}}
- When prompted, select `y` for all the questions.
-
-1. Move the certificates into the `nginx.example.com` directory:
+1. Create a certificate and a private key for `nginx.example.com`:
{{< text bash >}}
- $ mkdir ../nginx.example.com && mv 1_root 2_intermediate 3_application 4_client ../nginx.example.com
- {{< /text >}}
-
-1. Return to the root directory:
-
- {{< text bash >}}
- $ popd
+ $ openssl req -out nginx.example.com.csr -newkey rsa:2048 -nodes -keyout nginx.example.com.key -subj ""/CN=nginx.example.com/O=some organization""
+ $ openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in nginx.example.com.csr -out nginx.example.com.crt
{{< /text >}}
## Deploy an NGINX server
@@ -56,7 +39,7 @@ Then you configure a gateway to provide ingress access to the service via host `
certificate.
{{< text bash >}}
- $ kubectl create secret tls nginx-server-certs --key nginx.example.com/3_application/private/nginx.example.com.key.pem --cert nginx.example.com/3_application/certs/nginx.example.com.cert.pem
+ $ kubectl create secret tls nginx-server-certs --key nginx.example.com.key --cert nginx.example.com.crt
{{< /text >}}
1. Create a configuration file for the NGINX server:
@@ -162,10 +145,10 @@ to hold the configuration of the NGINX server:
server certificate activation date OK
certificate public key: RSA
certificate version: #3
- subject: C=US,ST=Denial,L=Springfield,O=Dis,CN=nginx.example.com
+ subject: CN=nginx.example.com; O=some organization
start date: Wed, 15 Aug 2018 07:29:07 GMT
expire date: Sun, 25 Aug 2019 07:29:07 GMT
- issuer: C=US,ST=Denial,O=Dis,CN=nginx.example.com
+ issuer: O=example Inc.; CN=example.com
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
@@ -242,13 +225,12 @@ to hold the configuration of the NGINX server:
it is successfully verified (_SSL certificate verify ok_ is printed).
{{< text bash >}}
- $ curl -v --resolve nginx.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert nginx.example.com/2_intermediate/certs/ca-chain.cert.pem https://nginx.example.com:$SECURE_INGRESS_PORT
+ $ curl -v --resolve nginx.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert example.com.crt https://nginx.example.com:$SECURE_INGRESS_PORT
Server certificate:
- subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=nginx.example.com
- start date: Aug 15 07:29:07 2018 GMT
- expire date: Aug 25 07:29:07 2019 GMT
- common name: nginx.example.com (matched)
- issuer: C=US; ST=Denial; O=Dis; CN=nginx.example.com
+ subject: CN=nginx.example.com; O=some organization
+ start date: Wed, 15 Aug 2018 07:29:07 GMT
+ expire date: Sun, 25 Aug 2019 07:29:07 GMT
+ issuer: O=example Inc.; CN=example.com
SSL certificate verify ok.
< HTTP/1.1 200 OK
@@ -272,14 +254,14 @@ to hold the configuration of the NGINX server:
$ kubectl delete virtualservice nginx
{{< /text >}}
-1. Delete the directory containing the certificates and the repository used to generate them:
+1. Delete the certificates and keys:
{{< text bash >}}
- $ rm -rf nginx.example.com mtls-go-example
+ $ rm example.com.crt example.com.key nginx.example.com.crt nginx.example.com.key nginx.example.com.csr
{{< /text >}}
1. Delete the generated configuration files used in this example:
{{< text bash >}}
- $ rm -f ./nginx.conf
+ $ rm ./nginx.conf
{{< /text >}}
~~~",1.0,"/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md - Source File: [/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md](https://github.com/istio/istio.io/tree/master/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md)
Diff:
~~~diff
diff --git a/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md b/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md
index 81c25bb27..722456a51 100644
--- a/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md
+++ b/content/en/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/index.md
@@ -17,37 +17,20 @@ Then you configure a gateway to provide ingress access to the service via host `
## Generate client and server certificates and keys
-1. Clone the repository:
+For this task you can use your favorite tool to generate certificates and keys. The commands below use
+[openssl](https://man.openbsd.org/openssl.1)
- {{< text bash >}}
- $ git clone https://github.com/nicholasjackson/mtls-go-example
- {{< /text >}}
-
-1. Change directory to the cloned repository:
-
- {{< text bash >}}
- $ pushd mtls-go-example
- {{< /text >}}
-
-1. Generate the certificates for `nginx.example.com`.
- Use any password with the following command:
+1. Create a root certificate and private key to sign the certificate for your services:
{{< text bash >}}
- $ ./generate.sh nginx.example.com
+ $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
{{< /text >}}
- When prompted, select `y` for all the questions.
-
-1. Move the certificates into the `nginx.example.com` directory:
+1. Create a certificate and a private key for `nginx.example.com`:
{{< text bash >}}
- $ mkdir ../nginx.example.com && mv 1_root 2_intermediate 3_application 4_client ../nginx.example.com
- {{< /text >}}
-
-1. Return to the root directory:
-
- {{< text bash >}}
- $ popd
+ $ openssl req -out nginx.example.com.csr -newkey rsa:2048 -nodes -keyout nginx.example.com.key -subj ""/CN=nginx.example.com/O=some organization""
+ $ openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in nginx.example.com.csr -out nginx.example.com.crt
{{< /text >}}
## Deploy an NGINX server
@@ -56,7 +39,7 @@ Then you configure a gateway to provide ingress access to the service via host `
certificate.
{{< text bash >}}
- $ kubectl create secret tls nginx-server-certs --key nginx.example.com/3_application/private/nginx.example.com.key.pem --cert nginx.example.com/3_application/certs/nginx.example.com.cert.pem
+ $ kubectl create secret tls nginx-server-certs --key nginx.example.com.key --cert nginx.example.com.crt
{{< /text >}}
1. Create a configuration file for the NGINX server:
@@ -162,10 +145,10 @@ to hold the configuration of the NGINX server:
server certificate activation date OK
certificate public key: RSA
certificate version: #3
- subject: C=US,ST=Denial,L=Springfield,O=Dis,CN=nginx.example.com
+ subject: CN=nginx.example.com; O=some organization
start date: Wed, 15 Aug 2018 07:29:07 GMT
expire date: Sun, 25 Aug 2019 07:29:07 GMT
- issuer: C=US,ST=Denial,O=Dis,CN=nginx.example.com
+ issuer: O=example Inc.; CN=example.com
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
@@ -242,13 +225,12 @@ to hold the configuration of the NGINX server:
it is successfully verified (_SSL certificate verify ok_ is printed).
{{< text bash >}}
- $ curl -v --resolve nginx.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert nginx.example.com/2_intermediate/certs/ca-chain.cert.pem https://nginx.example.com:$SECURE_INGRESS_PORT
+ $ curl -v --resolve nginx.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert example.com.crt https://nginx.example.com:$SECURE_INGRESS_PORT
Server certificate:
- subject: C=US; ST=Denial; L=Springfield; O=Dis; CN=nginx.example.com
- start date: Aug 15 07:29:07 2018 GMT
- expire date: Aug 25 07:29:07 2019 GMT
- common name: nginx.example.com (matched)
- issuer: C=US; ST=Denial; O=Dis; CN=nginx.example.com
+ subject: CN=nginx.example.com; O=some organization
+ start date: Wed, 15 Aug 2018 07:29:07 GMT
+ expire date: Sun, 25 Aug 2019 07:29:07 GMT
+ issuer: O=example Inc.; CN=example.com
SSL certificate verify ok.
< HTTP/1.1 200 OK
@@ -272,14 +254,14 @@ to hold the configuration of the NGINX server:
$ kubectl delete virtualservice nginx
{{< /text >}}
-1. Delete the directory containing the certificates and the repository used to generate them:
+1. Delete the certificates and keys:
{{< text bash >}}
- $ rm -rf nginx.example.com mtls-go-example
+ $ rm example.com.crt example.com.key nginx.example.com.crt nginx.example.com.key nginx.example.com.csr
{{< /text >}}
1. Delete the generated configuration files used in this example:
{{< text bash >}}
- $ rm -f ./nginx.conf
+ $ rm ./nginx.conf
{{< /text >}}
~~~",0, docs tasks traffic management ingress ingress sni passthrough index md source file diff diff diff git a content en docs tasks traffic management ingress ingress sni passthrough index md b content en docs tasks traffic management ingress ingress sni passthrough index md index a content en docs tasks traffic management ingress ingress sni passthrough index md b content en docs tasks traffic management ingress ingress sni passthrough index md then you configure a gateway to provide ingress access to the service via host generate client and server certificates and keys clone the repository for this task you can use your favorite tool to generate certificates and keys the commands below use git clone change directory to the cloned repository pushd mtls go example generate the certificates for nginx example com use any password with the following command create a root certificate and private key to sign the certificate for your services generate sh nginx example com openssl req nodes days newkey rsa subj o example inc cn example com keyout example com key out example com crt when prompted select y for all the questions move the certificates into the nginx example com directory create a certificate and a private key for nginx example com mkdir nginx example com mv root intermediate application client nginx example com return to the root directory popd openssl req out nginx example com csr newkey rsa nodes keyout nginx example com key subj cn nginx example com o some organization openssl req days ca example com crt cakey example com key set serial in nginx example com csr out nginx example com crt deploy an nginx server then you configure a gateway to provide ingress access to the service via host certificate kubectl create secret tls nginx server certs key nginx example com application private nginx example com key pem cert nginx example com application certs nginx example com cert pem kubectl create secret tls nginx server certs key nginx example com key cert nginx example com crt create a configuration file for the nginx server to hold the configuration of the nginx server server certificate activation date ok certificate public key rsa certificate version subject c us st denial l springfield o dis cn nginx example com subject cn nginx example com o some organization start date wed aug gmt expire date sun aug gmt issuer c us st denial o dis cn nginx example com issuer o example inc cn example com get http user agent curl to hold the configuration of the nginx server it is successfully verified ssl certificate verify ok is printed curl v resolve nginx example com secure ingress port ingress host cacert nginx example com intermediate certs ca chain cert pem curl v resolve nginx example com secure ingress port ingress host cacert example com crt server certificate subject c us st denial l springfield o dis cn nginx example com start date aug gmt expire date aug gmt common name nginx example com matched issuer c us st denial o dis cn nginx example com subject cn nginx example com o some organization start date wed aug gmt expire date sun aug gmt issuer o example inc cn example com ssl certificate verify ok http ok to hold the configuration of the nginx server kubectl delete virtualservice nginx delete the directory containing the certificates and the repository used to generate them delete the certificates and keys rm rf nginx example com mtls go example rm example com crt example com key nginx example com crt nginx example com key nginx example com csr delete the generated configuration files used in this example rm f nginx conf rm nginx conf ,0
2191,7735704514.0,IssuesEvent,2018-05-27 18:02:30,Chromeroni/Hera-Chatbot,https://api.github.com/repos/Chromeroni/Hera-Chatbot,closed,Store log-files in a cloud,maintainability,"Currently, log files are stored localy on the execution environment. This prevents easy access to the log files for all developers.
**Change**
Implement an interface to the google drive API, so that the daily created log files can automatically be stored in the developer cloud.
**Prerequisite**
Issue #16 & #27 ",True,"Store log-files in a cloud - Currently, log files are stored localy on the execution environment. This prevents easy access to the log files for all developers.
**Change**
Implement an interface to the google drive API, so that the daily created log files can automatically be stored in the developer cloud.
**Prerequisite**
Issue #16 & #27 ",1,store log files in a cloud currently log files are stored localy on the execution environment this prevents easy access to the log files for all developers change implement an interface to the google drive api so that the daily created log files can automatically be stored in the developer cloud prerequisite issue ,1
563043,16675255453.0,IssuesEvent,2021-06-07 15:26:40,cdr/code-server,https://api.github.com/repos/cdr/code-server,closed,Python file is not running - Code-server: 3.10.1,bug extension high-priority waiting-for-info,"
## OS/Web Information
- Web Browser: Edge
- Local OS: Linux - Ubuntu 20.04 LTS
- Remote OS: Ubuntu 20.04 LTS
- Remote Architecture: VM
- `code-server --version`: 3.10.1 421237f499079cf88d68c02163b70e2b476bbb0d Latest
## Steps to Reproduce
1. Run a python file in terminal
## Expected
It should run my python file.
## Actual
- The terminal is not opening!
- Throwing two errors:
- command : `'python.execlnTerminal'` not found
- command : `'python.execlnTerminal-icon'` not found
## Logs
[backend.log](https://github.com/cdr/code-server/files/6513351/newfile.txt)
## Screenshot

## Notes
This issue can be reproduced in VS Code: Yes
",1.0,"Python file is not running - Code-server: 3.10.1 -
## OS/Web Information
- Web Browser: Edge
- Local OS: Linux - Ubuntu 20.04 LTS
- Remote OS: Ubuntu 20.04 LTS
- Remote Architecture: VM
- `code-server --version`: 3.10.1 421237f499079cf88d68c02163b70e2b476bbb0d Latest
## Steps to Reproduce
1. Run a python file in terminal
## Expected
It should run my python file.
## Actual
- The terminal is not opening!
- Throwing two errors:
- command : `'python.execlnTerminal'` not found
- command : `'python.execlnTerminal-icon'` not found
## Logs
[backend.log](https://github.com/cdr/code-server/files/6513351/newfile.txt)
## Screenshot

## Notes
This issue can be reproduced in VS Code: Yes
",0,python file is not running code server hi there 👋 thanks for reporting a bug please search for existing issues before filing as they may contain additional information about the problem and descriptions of workarounds provide as much information as you can so that we can reproduce the issue otherwise we may not be able to help diagnose the problem and may close the issue as unreproducible or incomplete for visual defects please include screenshots to help us understand the issue os web information web browser edge local os linux ubuntu lts remote os ubuntu lts remote architecture vm code server version latest steps to reproduce run a python file in terminal expected it should run my python file actual the terminal is not opening throwing two errors command python execlnterminal not found command python execlnterminal icon not found logs first run code server with at least debug logging or trace to be really thorough by setting the log flag or the log level environment variable vvv and verbose are aliases for log trace for example code server log debug once this is done replicate the issue you re having then collect logging information from the following places the most recent files from local share code server coder logs the browser console the browser network tab additionally collecting core dumps you may need to enable them first if code server crashes can be helpful screenshot notes if you can reproduce the issue on vanilla vs code please file the issue at the vs code repository instead this issue can be reproduced in vs code yes ,0
2668,9126328653.0,IssuesEvent,2019-02-24 20:43:41,DynamoRIO/drmemory,https://api.github.com/repos/DynamoRIO/drmemory,closed,add end-user support for updating syscall #'s from pdb's,Hotlist-Release Maintainability OpSys-Windows Type-Feature,"The goal is to future-proof Dr. Memory: make it more adaptive to avoid requiring manual updates to fix breakages on each new Windows change. Xref #1826.
The plan is:
- Detect unknown version by looking at particular syscall #'s (as we can't rely on PEB versions anymore): xref https://github.com/DynamoRIO/dynamorio/issues/1598
- Create utility that downloads pdb's for the core dll's, does sthg like what winsysnums does, and comes up with new syscall numbers. We should be able to automate everything except for the usercall stuff.
- Can we launch the helper process from our online client? Even if so, we'll need to cache the results, so we could ask the user to run the utility standalone?
- Cache the results and load them in.
Things can still break if the syscall wrappers change (xref https://github.com/DynamoRIO/dynamorio/issues/1854) or other things besides numbers change, but this would be an improvement and could help future-proof Dr. Memory.
",True,"add end-user support for updating syscall #'s from pdb's - The goal is to future-proof Dr. Memory: make it more adaptive to avoid requiring manual updates to fix breakages on each new Windows change. Xref #1826.
The plan is:
- Detect unknown version by looking at particular syscall #'s (as we can't rely on PEB versions anymore): xref https://github.com/DynamoRIO/dynamorio/issues/1598
- Create utility that downloads pdb's for the core dll's, does sthg like what winsysnums does, and comes up with new syscall numbers. We should be able to automate everything except for the usercall stuff.
- Can we launch the helper process from our online client? Even if so, we'll need to cache the results, so we could ask the user to run the utility standalone?
- Cache the results and load them in.
Things can still break if the syscall wrappers change (xref https://github.com/DynamoRIO/dynamorio/issues/1854) or other things besides numbers change, but this would be an improvement and could help future-proof Dr. Memory.
",1,add end user support for updating syscall s from pdb s the goal is to future proof dr memory make it more adaptive to avoid requiring manual updates to fix breakages on each new windows change xref the plan is detect unknown version by looking at particular syscall s as we can t rely on peb versions anymore xref create utility that downloads pdb s for the core dll s does sthg like what winsysnums does and comes up with new syscall numbers we should be able to automate everything except for the usercall stuff can we launch the helper process from our online client even if so we ll need to cache the results so we could ask the user to run the utility standalone cache the results and load them in things can still break if the syscall wrappers change xref or other things besides numbers change but this would be an improvement and could help future proof dr memory ,1
244173,26369270746.0,IssuesEvent,2023-01-11 19:11:31,JoshRMendDemo/SourceFileMatching-Demo,https://api.github.com/repos/JoshRMendDemo/SourceFileMatching-Demo,opened,zip4j-1.3.2.jar: 2 vulnerabilities (highest severity is: 6.5),security vulnerability," Vulnerable Library - zip4j-1.3.2.jar
zip4j before 1.3.3 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as 'Zip-Slip'.
zip4j up to v2.10.0 can throw various uncaught exceptions while parsing a specially crafted ZIP file, which could result in an application crash. This could be used to mount a denial of service attack against services that use zip4j library.
zip4j before 1.3.3 is vulnerable to directory traversal, allowing attackers to write to arbitrary files via a ../ (dot dot slash) in a Zip archive entry that is mishandled during extraction. This vulnerability is also known as 'Zip-Slip'.
zip4j up to v2.10.0 can throw various uncaught exceptions while parsing a specially crafted ZIP file, which could result in an application crash. This could be used to mount a denial of service attack against services that use zip4j library.
:rescue_worker_helmet: Automatic Remediation is available for this issue
***
:rescue_worker_helmet: Automatic Remediation is available for this issue.
",0, jar vulnerabilities highest severity is vulnerable library jar an open source java library to handle zip files library home page a href path to dependency file vendor aws sdk cpp code generation generator pom xml path to vulnerable library home wss scanner repository net lingala jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in version remediation available medium jar direct medium jar direct details cve vulnerable library jar an open source java library to handle zip files library home page a href path to dependency file vendor aws sdk cpp code generation generator pom xml path to vulnerable library home wss scanner repository net lingala jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details before is vulnerable to directory traversal allowing attackers to write to arbitrary files via a dot dot slash in a zip archive entry that is mishandled during extraction this vulnerability is also known as zip slip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library jar an open source java library to handle zip files library home page a href path to dependency file vendor aws sdk cpp code generation generator pom xml path to vulnerable library home wss scanner repository net lingala jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details up to can throw various uncaught exceptions while parsing a specially crafted zip file which could result in an application crash this could be used to mount a denial of service attack against services that use library publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue ,0
1477,6404174472.0,IssuesEvent,2017-08-07 01:23:43,caskroom/homebrew-cask,https://api.github.com/repos/caskroom/homebrew-cask,closed,microsoft-office uninstall does not remove .app files from /Applications,awaiting maintainer feedback,"#### General troubleshooting steps
- [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [X] None of the templates was appropriate for my issue, or I’m not sure.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
#### Description of issue
`brew cask zap microsoft-office` should get rid of the apps, but the apps (microsoft-office is a suite) remain on /Applications/.
#### Output of your command with `--verbose --debug`
```
[I] gtklocker@schwarz ~> brew cask zap microsoft-office --verbose --debug
==> Zapping Cask microsoft-office
==> Implied ""brew cask uninstall microsoft-office""
==> Un-installing artifacts
==> Determining which artifacts are present in Cask microsoft-office
==> 3 artifact/s defined
#
#
#
==> Un-installing artifact of class Hbc::Artifact::Uninstall
==> Running uninstall process for microsoft-office; your password may be necessary
==> Removing launchctl service com.microsoft.autoupdate.helpertool
==> Executing: [""/bin/launchctl"", ""list"", ""com.microsoft.autoupdate.helpertool""]
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/bin/launchctl"", ""list"", ""com.microsoft.autoupdate.helpertool""]
Password:
==> Removing launchctl service com.microsoft.office.licensing.helper
==> Executing: [""/bin/launchctl"", ""list"", ""com.microsoft.office.licensing.helper""]
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/bin/launchctl"", ""list"", ""com.microsoft.office.licensing.helper""]
==> Removing launchctl service com.microsoft.office.licensingV2.helper
==> Executing: [""/bin/launchctl"", ""list"", ""com.microsoft.office.licensingV2.helper""]
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/bin/launchctl"", ""list"", ""com.microsoft.office.licensingV2.helper""]
==> Uninstalling packages:
==> Executing: [""/usr/sbin/pkgutil"", ""--pkgs=com.microsoft.package.*""]
==> Executing: [""/usr/sbin/pkgutil"", ""--pkgs=com.microsoft.pkg.licensing""]
==> Dispatching zap stanza
==> Running zap process for microsoft-office; your password may be necessary
==> Removing files:
~/Library/Application Scripts/com.microsoft.Excel
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Office365ServiceV2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Outlook
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Powerpoint
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Word
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.errorreporting
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.onenote.mac
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/com.microsoft.excel.sfl
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/com.microsoft.powerpoint.sfl
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/com.microsoft.word.sfl
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/Microsoft/uls/com.microsoft.autoupdate.fba
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/Microsoft/uls/com.microsoft.autoupdate2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/com.microsoft.autoupdate.fba
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/com.microsoft.autoupdate2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Excel
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Office365ServiceV2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Outlook
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Powerpoint
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Word
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.errorreporting
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.onenote.mac
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Cookies/com.microsoft.autoupdate.fba.binarycookies
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Cookies/com.microsoft.autoupdate2.binarycookies
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Group Containers/UBF8T346G9.Office
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Group Containers/UBF8T346G9.OfficeOsfWebHost
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Group Containers/UBF8T346G9.ms
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.Excel.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.Powerpoint.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.Word.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.autoupdate.fba.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.autoupdate2.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Saved Application State/com.microsoft.autoupdate2.savedState
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Saved Application State/com.microsoft.office.setupassistant.savedState
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
==> Removing directories if empty:
~/Library/Caches/Microsoft/uls
~/Library/Caches/Microsoft
==> Removing all staged versions of Cask 'microsoft-office'
==> Purging all staged versions of Cask microsoft-office
[I] gtklocker@schwarz ~> ls -d1 /Applications/Microsoft\ *
/Applications/Microsoft Excel.app
/Applications/Microsoft OneNote.app
/Applications/Microsoft Outlook.app
/Applications/Microsoft PowerPoint.app
/Applications/Microsoft Word.app
```
#### Output of `brew cask doctor`
```
[I] gtklocker@schwarz ~> brew cask doctor
==> Homebrew-Cask Version
Homebrew-Cask 1.3.0-39-gf57a172
caskroom/homebrew-cask (git revision ebc89; last commit 2017-08-07)
==> Homebrew-Cask Install Location
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (47 files, 3.5GB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3676 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-fonts (1107 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (160 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG=""en_GB.UTF-8""
PATH=""/usr/local/bin:/usr/local/opt/fzf/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/Homebrew/Library/Homebrew/shims/scm""
SHELL=""/usr/local/bin/fish""
```
",True,"microsoft-office uninstall does not remove .app files from /Applications - #### General troubleshooting steps
- [X] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue.
- [X] None of the templates was appropriate for my issue, or I’m not sure.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
#### Description of issue
`brew cask zap microsoft-office` should get rid of the apps, but the apps (microsoft-office is a suite) remain on /Applications/.
#### Output of your command with `--verbose --debug`
```
[I] gtklocker@schwarz ~> brew cask zap microsoft-office --verbose --debug
==> Zapping Cask microsoft-office
==> Implied ""brew cask uninstall microsoft-office""
==> Un-installing artifacts
==> Determining which artifacts are present in Cask microsoft-office
==> 3 artifact/s defined
#
#
#
==> Un-installing artifact of class Hbc::Artifact::Uninstall
==> Running uninstall process for microsoft-office; your password may be necessary
==> Removing launchctl service com.microsoft.autoupdate.helpertool
==> Executing: [""/bin/launchctl"", ""list"", ""com.microsoft.autoupdate.helpertool""]
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/bin/launchctl"", ""list"", ""com.microsoft.autoupdate.helpertool""]
Password:
==> Removing launchctl service com.microsoft.office.licensing.helper
==> Executing: [""/bin/launchctl"", ""list"", ""com.microsoft.office.licensing.helper""]
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/bin/launchctl"", ""list"", ""com.microsoft.office.licensing.helper""]
==> Removing launchctl service com.microsoft.office.licensingV2.helper
==> Executing: [""/bin/launchctl"", ""list"", ""com.microsoft.office.licensingV2.helper""]
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/bin/launchctl"", ""list"", ""com.microsoft.office.licensingV2.helper""]
==> Uninstalling packages:
==> Executing: [""/usr/sbin/pkgutil"", ""--pkgs=com.microsoft.package.*""]
==> Executing: [""/usr/sbin/pkgutil"", ""--pkgs=com.microsoft.pkg.licensing""]
==> Dispatching zap stanza
==> Running zap process for microsoft-office; your password may be necessary
==> Removing files:
~/Library/Application Scripts/com.microsoft.Excel
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Office365ServiceV2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Outlook
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Powerpoint
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.Word
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.errorreporting
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Scripts/com.microsoft.onenote.mac
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/com.microsoft.excel.sfl
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/com.microsoft.powerpoint.sfl
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/com.microsoft.word.sfl
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/Microsoft/uls/com.microsoft.autoupdate.fba
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/Microsoft/uls/com.microsoft.autoupdate2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/com.microsoft.autoupdate.fba
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Caches/com.microsoft.autoupdate2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Excel
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Office365ServiceV2
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Outlook
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Powerpoint
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.Word
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.errorreporting
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Containers/com.microsoft.onenote.mac
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Cookies/com.microsoft.autoupdate.fba.binarycookies
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Cookies/com.microsoft.autoupdate2.binarycookies
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Group Containers/UBF8T346G9.Office
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Group Containers/UBF8T346G9.OfficeOsfWebHost
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Group Containers/UBF8T346G9.ms
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.Excel.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.Powerpoint.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.Word.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.autoupdate.fba.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Preferences/com.microsoft.autoupdate2.plist
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Saved Application State/com.microsoft.autoupdate2.savedState
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
~/Library/Saved Application State/com.microsoft.office.setupassistant.savedState
==> Executing: [""/usr/bin/sudo"", ""-E"", ""--"", ""/usr/bin/xargs"", ""-0"", ""--"", ""/bin/rm"", ""-r"", ""-f"", ""--""]
==> Removing directories if empty:
~/Library/Caches/Microsoft/uls
~/Library/Caches/Microsoft
==> Removing all staged versions of Cask 'microsoft-office'
==> Purging all staged versions of Cask microsoft-office
[I] gtklocker@schwarz ~> ls -d1 /Applications/Microsoft\ *
/Applications/Microsoft Excel.app
/Applications/Microsoft OneNote.app
/Applications/Microsoft Outlook.app
/Applications/Microsoft PowerPoint.app
/Applications/Microsoft Word.app
```
#### Output of `brew cask doctor`
```
[I] gtklocker@schwarz ~> brew cask doctor
==> Homebrew-Cask Version
Homebrew-Cask 1.3.0-39-gf57a172
caskroom/homebrew-cask (git revision ebc89; last commit 2017-08-07)
==> Homebrew-Cask Install Location
==> Homebrew-Cask Staging Location
/usr/local/Caskroom
==> Homebrew-Cask Cached Downloads
~/Library/Caches/Homebrew/Cask (47 files, 3.5GB)
==> Homebrew-Cask Taps:
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3676 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-fonts (1107 casks)
/usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (160 casks)
==> Contents of $LOAD_PATH
/usr/local/Homebrew/Library/Homebrew/cask/lib
/usr/local/Homebrew/Library/Homebrew
/Library/Ruby/Site/2.0.0
/Library/Ruby/Site/2.0.0/x86_64-darwin16
/Library/Ruby/Site/2.0.0/universal-darwin16
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin16
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin16
==> Environment Variables
LANG=""en_GB.UTF-8""
PATH=""/usr/local/bin:/usr/local/opt/fzf/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/Homebrew/Library/Homebrew/shims/scm""
SHELL=""/usr/local/bin/fish""
```
",1,microsoft office uninstall does not remove app files from applications general troubleshooting steps i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or i’m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue brew cask zap microsoft office should get rid of the apps but the apps microsoft office is a suite remain on applications output of your command with verbose debug gtklocker schwarz brew cask zap microsoft office verbose debug zapping cask microsoft office implied brew cask uninstall microsoft office un installing artifacts determining which artifacts are present in cask microsoft office artifact s defined un installing artifact of class hbc artifact uninstall running uninstall process for microsoft office your password may be necessary removing launchctl service com microsoft autoupdate helpertool executing executing password removing launchctl service com microsoft office licensing helper executing executing removing launchctl service com microsoft office helper executing executing uninstalling packages executing executing dispatching zap stanza running zap process for microsoft office your password may be necessary removing files library application scripts com microsoft excel executing library application scripts com microsoft executing library application scripts com microsoft outlook executing library application scripts com microsoft powerpoint executing library application scripts com microsoft word executing library application scripts com microsoft errorreporting executing library application scripts com microsoft onenote mac executing library application support com apple sharedfilelist com apple lssharedfilelist applicationrecentdocuments com microsoft excel sfl executing library application support com apple sharedfilelist com apple lssharedfilelist applicationrecentdocuments com microsoft powerpoint sfl executing library application support com apple sharedfilelist com apple lssharedfilelist applicationrecentdocuments com microsoft word sfl executing library caches microsoft uls com microsoft autoupdate fba executing library caches microsoft uls com microsoft executing library caches com microsoft autoupdate fba executing library caches com microsoft executing library containers com microsoft excel executing library containers com microsoft executing library containers com microsoft outlook executing library containers com microsoft powerpoint executing library containers com microsoft word executing library containers com microsoft errorreporting executing library containers com microsoft onenote mac executing library cookies com microsoft autoupdate fba binarycookies executing library cookies com microsoft binarycookies executing library group containers office executing library group containers officeosfwebhost executing library group containers ms executing library preferences com microsoft excel plist executing library preferences com microsoft powerpoint plist executing library preferences com microsoft word plist executing library preferences com microsoft autoupdate fba plist executing library preferences com microsoft plist executing library saved application state com microsoft savedstate executing library saved application state com microsoft office setupassistant savedstate executing removing directories if empty library caches microsoft uls library caches microsoft removing all staged versions of cask microsoft office purging all staged versions of cask microsoft office gtklocker schwarz ls applications microsoft applications microsoft excel app applications microsoft onenote app applications microsoft outlook app applications microsoft powerpoint app applications microsoft word app output of brew cask doctor gtklocker schwarz brew cask doctor homebrew cask version homebrew cask caskroom homebrew cask git revision last commit homebrew cask install location homebrew cask staging location usr local caskroom homebrew cask cached downloads library caches homebrew cask files homebrew cask taps usr local homebrew library taps caskroom homebrew cask casks usr local homebrew library taps caskroom homebrew fonts casks usr local homebrew library taps caskroom homebrew versions casks contents of load path usr local homebrew library homebrew cask lib usr local homebrew library homebrew library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal environment variables lang en gb utf path usr local bin usr local opt fzf bin usr bin bin usr sbin sbin opt bin usr local homebrew library homebrew shims scm shell usr local bin fish ,1
2520,8655460291.0,IssuesEvent,2018-11-27 16:00:34,codestation/qcma,https://api.github.com/repos/codestation/qcma,closed,QCMA Sometimes Requires a Restart,unmaintained,"I am using Mac OS X 10.13.5 and quite frequently (especially on the PSTV) I have to restart QCMA before my PSTV will connect properly.
Also I have to select quit twice for the app to fully quit and then sometimes crashes.
Both PS Vita-1000 and PSTV on Enso 3.65 spoofed to 3.68.",True,"QCMA Sometimes Requires a Restart - I am using Mac OS X 10.13.5 and quite frequently (especially on the PSTV) I have to restart QCMA before my PSTV will connect properly.
Also I have to select quit twice for the app to fully quit and then sometimes crashes.
Both PS Vita-1000 and PSTV on Enso 3.65 spoofed to 3.68.",1,qcma sometimes requires a restart i am using mac os x and quite frequently especially on the pstv i have to restart qcma before my pstv will connect properly also i have to select quit twice for the app to fully quit and then sometimes crashes both ps vita and pstv on enso spoofed to ,1
5793,30693785838.0,IssuesEvent,2023-07-26 16:58:13,PyCQA/flake8-bugbear,https://api.github.com/repos/PyCQA/flake8-bugbear,closed,Stop using `python setup.py bdist_wheel/sdist`,bug help wanted terrible_maintainer,"Lets move to pypa/build in the upload to PyPI action.
```
python setup.py bdist_wheel
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
config = read_configuration(filepath, True, ignore_option_errors, dist)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying bugbear.py -> build/lib
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer, pypa/build or
other standards-based tools.
installing to build/bdist.linux-x86_64/wheel
running install
See https://blog.ganssle.io/articles/[20](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:21)[21](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:22)/10/setup-py-deprecated.html for details.
running install_lib
********************************************************************************
!!
```",True,"Stop using `python setup.py bdist_wheel/sdist` - Lets move to pypa/build in the upload to PyPI action.
```
python setup.py bdist_wheel
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/config/pyprojecttoml.py:66: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*.
config = read_configuration(filepath, True, ignore_option_errors, dist)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying bugbear.py -> build/lib
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer, pypa/build or
other standards-based tools.
installing to build/bdist.linux-x86_64/wheel
running install
See https://blog.ganssle.io/articles/[20](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:21)[21](https://github.com/PyCQA/flake8-bugbear/actions/runs/5179489298/jobs/9332438714#step:5:22)/10/setup-py-deprecated.html for details.
running install_lib
********************************************************************************
!!
```",1,stop using python setup py bdist wheel sdist lets move to pypa build in the upload to pypi action python setup py bdist wheel opt hostedtoolcache python lib site packages setuptools config pyprojecttoml py betaconfiguration support for in pyproject toml is still beta config read configuration filepath true ignore option errors dist running bdist wheel running build running build py creating build creating build lib copying bugbear py build lib opt hostedtoolcache python lib site packages setuptools distutils cmd py setuptoolsdeprecationwarning setup py install is deprecated please avoid running setup py directly instead use pypa build pypa installer pypa build or other standards based tools installing to build bdist linux wheel running install see for details running install lib ,1
1237,5268229308.0,IssuesEvent,2017-02-05 08:46:44,viktorradnai/flightgear-ask21,https://api.github.com/repos/viktorradnai/flightgear-ask21,opened,Rework Aircraft Rating?,enhancement maintainability question,"In the -set file as well as on the wiki, we still have the following rating
**_FDM_**: 2: FDM tuned for cruise configuration.
**_Systems_**: 2: Working electrical system, fuel feed cockpit controls, stable autopilot
**_Cockpit_**: 2: 2D panel in 3D cockpit, or incomplete 3D panel
**_Model_**: 3: Accurate 3D model with animated control surfaces, gear detailing (retraction, rotation), prop
In my opinion this is not quite correct, e.g. **Cockpit** should IMO be improved to a 4 as we have
"" 3D panel and accurately modelled 3D cockpit, plain texturing. Hotspots for majority of controls""
**Systems** should be corrected to 4 (note here that the ASK21 has very few systems to model). I have added a limit system which I will push after tooltips are solved. Later we can also add a bit more code to this so that e.g. the spoilers stuck/break if operated above 250km/h or that the wings brake under too heave load. Also, what I've thought of is to add fake spoilers to the wing connected to a rain property decreasing lift and increasing drag when flying in rain :D
**Model** could be increased to 4: Accurate 3D model with animated control surfaces, gear, prop, livery support (if applicable). or maybe even 5 as we already have shader effects etc. but I think for a 5 we should improve the exterior model a bit further.
About the **FDM** I'm not sure how realistic it is.
",True,"Rework Aircraft Rating? - In the -set file as well as on the wiki, we still have the following rating
**_FDM_**: 2: FDM tuned for cruise configuration.
**_Systems_**: 2: Working electrical system, fuel feed cockpit controls, stable autopilot
**_Cockpit_**: 2: 2D panel in 3D cockpit, or incomplete 3D panel
**_Model_**: 3: Accurate 3D model with animated control surfaces, gear detailing (retraction, rotation), prop
In my opinion this is not quite correct, e.g. **Cockpit** should IMO be improved to a 4 as we have
"" 3D panel and accurately modelled 3D cockpit, plain texturing. Hotspots for majority of controls""
**Systems** should be corrected to 4 (note here that the ASK21 has very few systems to model). I have added a limit system which I will push after tooltips are solved. Later we can also add a bit more code to this so that e.g. the spoilers stuck/break if operated above 250km/h or that the wings brake under too heave load. Also, what I've thought of is to add fake spoilers to the wing connected to a rain property decreasing lift and increasing drag when flying in rain :D
**Model** could be increased to 4: Accurate 3D model with animated control surfaces, gear, prop, livery support (if applicable). or maybe even 5 as we already have shader effects etc. but I think for a 5 we should improve the exterior model a bit further.
About the **FDM** I'm not sure how realistic it is.
",1,rework aircraft rating in the set file as well as on the wiki we still have the following rating fdm fdm tuned for cruise configuration systems working electrical system fuel feed cockpit controls stable autopilot cockpit panel in cockpit or incomplete panel model accurate model with animated control surfaces gear detailing retraction rotation prop in my opinion this is not quite correct e g cockpit should imo be improved to a as we have panel and accurately modelled cockpit plain texturing hotspots for majority of controls systems should be corrected to note here that the has very few systems to model i have added a limit system which i will push after tooltips are solved later we can also add a bit more code to this so that e g the spoilers stuck break if operated above h or that the wings brake under too heave load also what i ve thought of is to add fake spoilers to the wing connected to a rain property decreasing lift and increasing drag when flying in rain d model could be increased to accurate model with animated control surfaces gear prop livery support if applicable or maybe even as we already have shader effects etc but i think for a we should improve the exterior model a bit further about the fdm i m not sure how realistic it is ,1
3726,15440696773.0,IssuesEvent,2021-03-08 04:03:45,i-am-gizm0/VHL-Improvements,https://api.github.com/repos/i-am-gizm0/VHL-Improvements,closed,Move CSS to its own file to inject,maintainence,"This extension was originally a Tampermonkey script, so the CSS was injected within the script. CRX can inject CSS separately, which will clean up the source a bit.",True,"Move CSS to its own file to inject - This extension was originally a Tampermonkey script, so the CSS was injected within the script. CRX can inject CSS separately, which will clean up the source a bit.",1,move css to its own file to inject this extension was originally a tampermonkey script so the css was injected within the script crx can inject css separately which will clean up the source a bit ,1
104789,4221174733.0,IssuesEvent,2016-07-01 03:24:30,smartchicago/chicago-early-learning,https://api.github.com/repos/smartchicago/chicago-early-learning,opened,Staging: All sites are being displayed as both community-based and CPS-based,bug High Priority Hold for Phase 1 Launch,"This is a high priority items that needs to be fixed before launch. All of the sites are being tagged as both CPS- and community-based.
Taking a quick look at the map also confirms this:
Also, see the center's map info box:
",1.0,"Staging: All sites are being displayed as both community-based and CPS-based - This is a high priority items that needs to be fixed before launch. All of the sites are being tagged as both CPS- and community-based.
Taking a quick look at the map also confirms this:
Also, see the center's map info box:
",0,staging all sites are being displayed as both community based and cps based this is a high priority items that needs to be fixed before launch all of the sites are being tagged as both cps and community based taking a quick look at the map also confirms this img width alt screen shot at am src also see the center s map info box img width alt screen shot at am src ,0
138206,20372764474.0,IssuesEvent,2022-02-21 12:53:31,WordPress/gutenberg,https://api.github.com/repos/WordPress/gutenberg,closed,Template part transforms,Needs Design Feedback [Type] Discussion [Block] Template Part,"I think it may be worth discussing and exploring the transform options for template parts. Currently the transform menu includes options to transform template parts in to Columns or Group blocks:
However, neither of this options work. Selecting one just kicks you back to the wp-admin Dashboard 🐛
Should it be possible to transform template parts into other, non-template-part blocks at all? This seems like a potentially dangerous operation that might be better left to the ""Detach blocks from template part"" option in the ellipsis menu:
---
One transformation that should be possible in one way or another is switching template part variations. In previous issues the following design has been posited as an option to do this:
---
Closely related – in #28737 we are exploring how patterns that are contextually relevant to the selected template might be exposed via the transform menu.
---
Finally, in https://github.com/WordPress/gutenberg/pull/27397#issuecomment-783315207 @mtias questioned whether it should be possible to quickly/easily wrap a template part inside another block. As @jasmussen mentioned, this could be useful for things like Sidebar template parts. My personal feeling is that this could possibly be better handled by the aforementioned patterns flow, as it is not a transform in the traditional sense, but it is worth discussing.
### Tentative action plan
- [ ] Remove Group/Columns transform options on Template Part block – #29296
- [x] Add Template Part Switching
- [ ] Provide a way to view block patterns that are contextually relevant to the selected Template Part – #28737
- [ ] Potentially create an affordance for template parts to be wrapped in other blocks
",1.0,"Template part transforms - I think it may be worth discussing and exploring the transform options for template parts. Currently the transform menu includes options to transform template parts in to Columns or Group blocks:
However, neither of this options work. Selecting one just kicks you back to the wp-admin Dashboard 🐛
Should it be possible to transform template parts into other, non-template-part blocks at all? This seems like a potentially dangerous operation that might be better left to the ""Detach blocks from template part"" option in the ellipsis menu:
---
One transformation that should be possible in one way or another is switching template part variations. In previous issues the following design has been posited as an option to do this:
---
Closely related – in #28737 we are exploring how patterns that are contextually relevant to the selected template might be exposed via the transform menu.
---
Finally, in https://github.com/WordPress/gutenberg/pull/27397#issuecomment-783315207 @mtias questioned whether it should be possible to quickly/easily wrap a template part inside another block. As @jasmussen mentioned, this could be useful for things like Sidebar template parts. My personal feeling is that this could possibly be better handled by the aforementioned patterns flow, as it is not a transform in the traditional sense, but it is worth discussing.
### Tentative action plan
- [ ] Remove Group/Columns transform options on Template Part block – #29296
- [x] Add Template Part Switching
- [ ] Provide a way to view block patterns that are contextually relevant to the selected Template Part – #28737
- [ ] Potentially create an affordance for template parts to be wrapped in other blocks
",0,template part transforms i think it may be worth discussing and exploring the transform options for template parts currently the transform menu includes options to transform template parts in to columns or group blocks img width alt screenshot at src however neither of this options work selecting one just kicks you back to the wp admin dashboard 🐛 should it be possible to transform template parts into other non template part blocks at all this seems like a potentially dangerous operation that might be better left to the detach blocks from template part option in the ellipsis menu img width alt screenshot at src one transformation that should be possible in one way or another is switching template part variations in previous issues the following design has been posited as an option to do this img src closely related – in we are exploring how patterns that are contextually relevant to the selected template might be exposed via the transform menu finally in mtias questioned whether it should be possible to quickly easily wrap a template part inside another block as jasmussen mentioned this could be useful for things like sidebar template parts my personal feeling is that this could possibly be better handled by the aforementioned patterns flow as it is not a transform in the traditional sense but it is worth discussing tentative action plan remove group columns transform options on template part block – add template part switching provide a way to view block patterns that are contextually relevant to the selected template part – potentially create an affordance for template parts to be wrapped in other blocks ,0
251865,18977314437.0,IssuesEvent,2021-11-20 07:44:10,corona-warn-app/cwa-wishlist,https://api.github.com/repos/corona-warn-app/cwa-wishlist,closed,Vaccination certificate from another country,documentation enhancement,"
## What is missing
I couldn’t find if the app supports loading certificates generated in other EU countries, and if it’s possible to mix them up. For example, if one gets the first two doses in Germany and the booster in Italy or Spain.
## Why should it be included
Considering that we will probably get one dose every 6 to 12 months, more and more people will have a mix situation and would like to make sure that the app supports it.
## Where should it be included
Not sure.",1.0,"Vaccination certificate from another country -
## What is missing
I couldn’t find if the app supports loading certificates generated in other EU countries, and if it’s possible to mix them up. For example, if one gets the first two doses in Germany and the booster in Italy or Spain.
## Why should it be included
Considering that we will probably get one dose every 6 to 12 months, more and more people will have a mix situation and would like to make sure that the app supports it.
## Where should it be included
Not sure.",0,vaccination certificate from another country thanks for pointing us to missing information 🙌 ❤️ before opening a new issue please make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead to browse existing issues by category please see these overview issues specifically please check if your suggestion has already been raised here what is missing i couldn’t find if the app supports loading certificates generated in other eu countries and if it’s possible to mix them up for example if one gets the first two doses in germany and the booster in italy or spain why should it be included considering that we will probably get one dose every to months more and more people will have a mix situation and would like to make sure that the app supports it where should it be included not sure ,0
111331,4468583076.0,IssuesEvent,2016-08-25 09:54:11,NSusoev/eval-service-quality,https://api.github.com/repos/NSusoev/eval-service-quality,closed,Исправить ошибки в алгоритме расчета агрегированных оценок,area: back-end priority: high type: bug,"2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : COLLECTION AFTER SORTING = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0]]
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH GENERATED WEIGHTS[ LinguisticTerm[id = 5, name = очень высокое, weight = 0.008] ] = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0]]
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : SUM WEIGHT = 3.032
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.008
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.0026385225
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 8.702251E-4
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 2.8701354E-4
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 1.0
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.3298153
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.108778134
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH NORMALIZED WEIGHTS = [LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 1, name = очень низкое, weight = 0.03587669], LinguisticTerm[id = 1, name = очень низкое, weight = 0.03587669], LinguisticTerm[id = 1, name = очень низкое, weight = 0.03587669]]
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : SUM WEIGHT = 0.07203737
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 9.466145E-5
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.0013140604
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.018241372
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.03587669
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.4980289
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH NORMALIZED WEIGHTS = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.25322098], LinguisticTerm[id = 5, name = очень высокое, weight = 0.25322098], LinguisticTerm[id = 5, name = очень высокое, weight = 0.25322098], LinguisticTerm[id = 1, name = очень низкое, weight = 6.91348], LinguisticTerm[id = 1, name = очень низкое, weight = 6.91348]]
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : SUM WEIGHT = 7.419922
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.25322098
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.034127176
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 6.91348
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH NORMALIZED WEIGHTS = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.0045993985], LinguisticTerm[id = 5, name = очень высокое, weight = 0.0045993985], LinguisticTerm[id = 1, name = очень низкое, weight = 0.93174565]]
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT",1.0,"Исправить ошибки в алгоритме расчета агрегированных оценок - 2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : COLLECTION AFTER SORTING = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0]]
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH GENERATED WEIGHTS[ LinguisticTerm[id = 5, name = очень высокое, weight = 0.008] ] = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 5, name = очень высокое, weight = 0.008], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0], LinguisticTerm[id = 1, name = очень низкое, weight = 1.0]]
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : SUM WEIGHT = 3.032
2016-07-28 00:52:29.216 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.008
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.0026385225
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 8.702251E-4
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 2.8701354E-4
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 1.0
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.3298153
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.108778134
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH NORMALIZED WEIGHTS = [LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 5, name = очень высокое, weight = 9.466145E-5], LinguisticTerm[id = 1, name = очень низкое, weight = 0.03587669], LinguisticTerm[id = 1, name = очень низкое, weight = 0.03587669], LinguisticTerm[id = 1, name = очень низкое, weight = 0.03587669]]
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : SUM WEIGHT = 0.07203737
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 9.466145E-5
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.0013140604
2016-07-28 00:52:29.217 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.018241372
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.03587669
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.4980289
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH NORMALIZED WEIGHTS = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.25322098], LinguisticTerm[id = 5, name = очень высокое, weight = 0.25322098], LinguisticTerm[id = 5, name = очень высокое, weight = 0.25322098], LinguisticTerm[id = 1, name = очень низкое, weight = 6.91348], LinguisticTerm[id = 1, name = очень низкое, weight = 6.91348]]
2016-07-28 00:52:29.218 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : ENTER
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : SUM WEIGHT = 7.419922
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.25322098
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 0.034127176
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : WEIGHT = 6.91348
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : IMPORTANCE MARKS WITH NORMALIZED WEIGHTS = [LinguisticTerm[id = 5, name = очень высокое, weight = 0.0045993985], LinguisticTerm[id = 5, name = очень высокое, weight = 0.0045993985], LinguisticTerm[id = 1, name = очень низкое, weight = 0.93174565]]
2016-07-28 00:52:29.220 DEBUG 52174 --- [nio-8080-exec-1] esq.core.service.ESQCalculator : EXIT",0,исправить ошибки в алгоритме расчета агрегированных оценок debug esq core service esqcalculator collection after sorting linguisticterm linguisticterm linguisticterm linguisticterm linguisticterm linguisticterm debug esq core service esqcalculator enter debug esq core service esqcalculator importance marks with generated weights linguisticterm linguisticterm linguisticterm linguisticterm linguisticterm linguisticterm debug esq core service esqcalculator exit debug esq core service esqcalculator enter debug esq core service esqcalculator enter debug esq core service esqcalculator sum weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator importance marks with normalized weights linguisticterm linguisticterm linguisticterm linguisticterm linguisticterm linguisticterm debug esq core service esqcalculator exit debug esq core service esqcalculator enter debug esq core service esqcalculator enter debug esq core service esqcalculator sum weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator importance marks with normalized weights linguisticterm linguisticterm linguisticterm linguisticterm debug esq core service esqcalculator exit debug esq core service esqcalculator enter debug esq core service esqcalculator enter debug esq core service esqcalculator sum weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator weight debug esq core service esqcalculator importance marks with normalized weights linguisticterm linguisticterm debug esq core service esqcalculator exit,0
1858,6577407751.0,IssuesEvent,2017-09-12 00:42:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,mount using nfs4 and state=mounted triggers error on already mounted directory,affects_2.1 bug_report waiting_on_maintainer,"I didn't find exactly this problem reported, only other bug reports related to nfs mounts.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
mount module
##### ANSIBLE VERSION
```
ansible 2.1.0
config file = /home/alex/repos/infra/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Standard.
##### OS / ENVIRONMENT
CentOS 7 -> CentOS 7
Ubunt 15.10/16.04 -> CentOS 7
CentOS 7 -> Ubuntu 15.10/16.04
In this particular case, only CentOS 7 -> CentOS 7
##### SUMMARY
NFS mount point with state=mounted triggers error if already mounted.
##### STEPS TO REPRODUCE
Put a NFS mount stanza in a playbook and run the playbook when the mount is already mounted:
```
- name: configure fstab for alpha
action: mount name=/srv/foo/alpha src=fileserver:/mdarchive/alpha fstype=nfs4 opts=rw,hard,tcp,intr,nolock,rsize=1048576,wsize=1048576,_netdev state=mounted
```
Run the playbook, get an error:
```
TASK [configure fstab for alpha] ***********************************************
fatal: [cluster-node01]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n""}
```
##### EXPECTED RESULTS
I expect the documented behaviour:
http://docs.ansible.com/ansible/mount_module.html
`If mounted or unmounted, the device will be actively mounted or unmounted as needed and appropriately configured in fstab.`
Obviously, if a mount is already mounted, mounting it again is _not_ needed and triggers the error and further execution of the playbook.
##### ACTUAL RESULTS
```
TASK [configure fstab for alpha] ***********************************************
fatal: [cluster-node01]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n""}
```
",True,"mount using nfs4 and state=mounted triggers error on already mounted directory - I didn't find exactly this problem reported, only other bug reports related to nfs mounts.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
mount module
##### ANSIBLE VERSION
```
ansible 2.1.0
config file = /home/alex/repos/infra/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Standard.
##### OS / ENVIRONMENT
CentOS 7 -> CentOS 7
Ubunt 15.10/16.04 -> CentOS 7
CentOS 7 -> Ubuntu 15.10/16.04
In this particular case, only CentOS 7 -> CentOS 7
##### SUMMARY
NFS mount point with state=mounted triggers error if already mounted.
##### STEPS TO REPRODUCE
Put a NFS mount stanza in a playbook and run the playbook when the mount is already mounted:
```
- name: configure fstab for alpha
action: mount name=/srv/foo/alpha src=fileserver:/mdarchive/alpha fstype=nfs4 opts=rw,hard,tcp,intr,nolock,rsize=1048576,wsize=1048576,_netdev state=mounted
```
Run the playbook, get an error:
```
TASK [configure fstab for alpha] ***********************************************
fatal: [cluster-node01]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n""}
```
##### EXPECTED RESULTS
I expect the documented behaviour:
http://docs.ansible.com/ansible/mount_module.html
`If mounted or unmounted, the device will be actively mounted or unmounted as needed and appropriately configured in fstab.`
Obviously, if a mount is already mounted, mounting it again is _not_ needed and triggers the error and further execution of the playbook.
##### ACTUAL RESULTS
```
TASK [configure fstab for alpha] ***********************************************
fatal: [cluster-node01]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Error mounting /srv/foo/alpha: mount.nfs4: /srv/foo/alpha is busy or already mounted\n""}
```
",1,mount using and state mounted triggers error on already mounted directory i didn t find exactly this problem reported only other bug reports related to nfs mounts issue type bug report component name mount module ansible version ansible config file home alex repos infra ansible ansible cfg configured module search path default w o overrides configuration standard os environment centos centos ubunt centos centos ubuntu in this particular case only centos centos summary nfs mount point with state mounted triggers error if already mounted steps to reproduce put a nfs mount stanza in a playbook and run the playbook when the mount is already mounted name configure fstab for alpha action mount name srv foo alpha src fileserver mdarchive alpha fstype opts rw hard tcp intr nolock rsize wsize netdev state mounted run the playbook get an error task fatal failed changed false failed true msg error mounting srv foo alpha mount srv foo alpha is busy or already mounted n expected results i expect the documented behaviour if mounted or unmounted the device will be actively mounted or unmounted as needed and appropriately configured in fstab obviously if a mount is already mounted mounting it again is not needed and triggers the error and further execution of the playbook actual results task fatal failed changed false failed true msg error mounting srv foo alpha mount srv foo alpha is busy or already mounted n ,1
411,3471089496.0,IssuesEvent,2015-12-23 13:13:18,espeak-ng/espeak-ng,https://api.github.com/repos/espeak-ng/espeak-ng,closed,Reformat the code with a consistent style.,in-progress maintainability,"The code should be reformatted to:
1. Use consistent indentation;
2. Use a space after `if`, `return`, etc.;
3. Use `return x` instead of `return(x)`.
Other style improvements should be applied to reflect modern C practices.",True,"Reformat the code with a consistent style. - The code should be reformatted to:
1. Use consistent indentation;
2. Use a space after `if`, `return`, etc.;
3. Use `return x` instead of `return(x)`.
Other style improvements should be applied to reflect modern C practices.",1,reformat the code with a consistent style the code should be reformatted to use consistent indentation use a space after if return etc use return x instead of return x other style improvements should be applied to reflect modern c practices ,1
189816,6802055157.0,IssuesEvent,2017-11-02 18:49:33,NREL/OpenStudio-BuildStock,https://api.github.com/repos/NREL/OpenStudio-BuildStock,opened,Use WattTime for hourly emissions and primary energy estimates,priority low,"- [ ] Average and marginal
- [ ] CO2e and other criteria pollutants (NOx, SOx, methane, etc.)
- [ ] primary energy (account for different heat rates aka efficiencies for different types of gas plants—peaker vs CCT, etc., e.g., 6,000 vs 11,000)
cc @joseph-robertson @rHorsey ",1.0,"Use WattTime for hourly emissions and primary energy estimates - - [ ] Average and marginal
- [ ] CO2e and other criteria pollutants (NOx, SOx, methane, etc.)
- [ ] primary energy (account for different heat rates aka efficiencies for different types of gas plants—peaker vs CCT, etc., e.g., 6,000 vs 11,000)
cc @joseph-robertson @rHorsey ",0,use watttime for hourly emissions and primary energy estimates average and marginal and other criteria pollutants nox sox methane etc primary energy account for different heat rates aka efficiencies for different types of gas plants—peaker vs cct etc e g vs cc joseph robertson rhorsey ,0
803573,29183438414.0,IssuesEvent,2023-05-19 13:41:28,aleksbobic/csx,https://api.github.com/repos/aleksbobic/csx,opened,Show delete and expand in advanced search,enhancement priority:medium Complexity:medium,Delete and expand actions should be visible also in advanced search,1.0,Show delete and expand in advanced search - Delete and expand actions should be visible also in advanced search,0,show delete and expand in advanced search delete and expand actions should be visible also in advanced search,0
244284,26375066496.0,IssuesEvent,2023-01-12 01:15:21,Watemlifts/NextSimpleStarter,https://api.github.com/repos/Watemlifts/NextSimpleStarter,opened,"WS-2023-0004 (Medium) detected in jszip-3.7.1.tgz, jszip-3.4.0.tgz",security vulnerability,"## WS-2023-0004 - Medium Severity Vulnerability
Vulnerable Libraries - jszip-3.7.1.tgz, jszip-3.4.0.tgz
jszip-3.7.1.tgz
Create, read and edit .zip files with JavaScript http://stuartk.com/jszip
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2023-01-04
Fix Resolution (jszip): 3.8.0
Direct dependency fix Resolution (snyk): 1.519.0
Fix Resolution (jszip): 3.6.0
Direct dependency fix Resolution (snyk): 1.667.0
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2023-0004 (Medium) detected in jszip-3.7.1.tgz, jszip-3.4.0.tgz - ## WS-2023-0004 - Medium Severity Vulnerability
Vulnerable Libraries - jszip-3.7.1.tgz, jszip-3.4.0.tgz
jszip-3.7.1.tgz
Create, read and edit .zip files with JavaScript http://stuartk.com/jszip
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2023-01-04
Fix Resolution (jszip): 3.8.0
Direct dependency fix Resolution (snyk): 1.519.0
Fix Resolution (jszip): 3.6.0
Direct dependency fix Resolution (snyk): 1.667.0
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in jszip tgz jszip tgz ws medium severity vulnerability vulnerable libraries jszip tgz jszip tgz jszip tgz create read and edit zip files with javascript library home page a href path to dependency file package json path to vulnerable library node modules jszip package json dependency hierarchy snyk tgz root library snyk mvn plugin tgz java call graph builder tgz x jszip tgz vulnerable library jszip tgz create read and edit zip files with javascript library home page a href path to dependency file package json path to vulnerable library node modules snyk nuget plugin node modules jszip package json dependency hierarchy snyk tgz root library snyk nuget plugin tgz x jszip tgz vulnerable library found in head commit a href found in base branch master vulnerability details jszip before does not sanitize filenames when files are loaded with loadasync which makes the library vunerable to zip slip attack publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution jszip direct dependency fix resolution snyk fix resolution jszip direct dependency fix resolution snyk step up your open source security game with mend ,0
507969,14685929661.0,IssuesEvent,2021-01-01 12:10:33,naev/naev,https://api.github.com/repos/naev/naev,opened,[proposal] generate and pull some assets from naev-artwork-production repo,Priority-High Type-Enhancement,"Given that using binary files bloats the repo, the following proposal was discussed on discord.
1. Have naev-artwork generate some files to naev-artwork-production using CI
2. Have naev-artwork-production be a submodule of naev repository
3. Use physfs to look for files in the naev-artwork-production submodule when run from source
4. Install the naev-artwork-production submodule files alongside the regular ones
This will generate automatically production files from source and use them, avoiding bloat in the main repo. Issues that might need to be addressed include making it all run smoothly and ensuring nothing dies when a history rewrite is done to the naev-artwork-production repo (since it will likely bloat heavily).
I consider this to be fairly important given the amount of new images that are needed/planned with the VN framework.",1.0,"[proposal] generate and pull some assets from naev-artwork-production repo - Given that using binary files bloats the repo, the following proposal was discussed on discord.
1. Have naev-artwork generate some files to naev-artwork-production using CI
2. Have naev-artwork-production be a submodule of naev repository
3. Use physfs to look for files in the naev-artwork-production submodule when run from source
4. Install the naev-artwork-production submodule files alongside the regular ones
This will generate automatically production files from source and use them, avoiding bloat in the main repo. Issues that might need to be addressed include making it all run smoothly and ensuring nothing dies when a history rewrite is done to the naev-artwork-production repo (since it will likely bloat heavily).
I consider this to be fairly important given the amount of new images that are needed/planned with the VN framework.",0, generate and pull some assets from naev artwork production repo given that using binary files bloats the repo the following proposal was discussed on discord have naev artwork generate some files to naev artwork production using ci have naev artwork production be a submodule of naev repository use physfs to look for files in the naev artwork production submodule when run from source install the naev artwork production submodule files alongside the regular ones this will generate automatically production files from source and use them avoiding bloat in the main repo issues that might need to be addressed include making it all run smoothly and ensuring nothing dies when a history rewrite is done to the naev artwork production repo since it will likely bloat heavily i consider this to be fairly important given the amount of new images that are needed planned with the vn framework ,0
32686,8921078995.0,IssuesEvent,2019-01-21 09:08:32,neovim/neovim,https://api.github.com/repos/neovim/neovim,closed,"(clang >= 6.0 bug) Annoying warnings for isnan(), fpclassify(), et al",blocked:external build help wanted,"- `nvim --version`: ef4feab0e75be
- Vim (version: 8.0.1565) behaves differently? No warnings, but I didn't check if the same functions were used
- Operating system/version: arch linux
- Terminal name/version: pangoterm
- `$TERM`: xterm
### Steps to reproduce using `nvim -u NORC`
```
rm -rf build && CMAKE_EXTRA_FLAGS=""-DCMAKE_C_COMPILER=clang -DCLANG_ASAN_UBSAN=1"" make -j4
```
### Actual behaviour
```
[197/284] Building C object src/nvim/CMakeFiles/nvim.dir/eval/encode.c.o
In file included from ../src/nvim/eval/encode.c:455:
../src/nvim/eval/typval_encode.c.h:330:7: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wconv
ersion]
TYPVAL_ENCODE_CONV_FLOAT(tv, tv->vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:455:
../src/nvim/eval/typval_encode.c.h:491:13: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wcon
version]
TYPVAL_ENCODE_CONV_FLOAT(tv, val_di->di_tv.vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:493:
../src/nvim/eval/typval_encode.c.h:330:7: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wconv
ersion]
TYPVAL_ENCODE_CONV_FLOAT(tv, tv->vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:493:
../src/nvim/eval/typval_encode.c.h:491:13: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wcon
version]
TYPVAL_ENCODE_CONV_FLOAT(tv, val_di->di_tv.vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:762:
../src/nvim/eval/typval_encode.c.h:330:7: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wconv
ersion]
TYPVAL_ENCODE_CONV_FLOAT(tv, tv->vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:534:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:762:
../src/nvim/eval/typval_encode.c.h:491:13: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wcon
version]
TYPVAL_ENCODE_CONV_FLOAT(tv, val_di->di_tv.vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:534:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
6 warnings generated.
[261/284] Building C object src/nvim/CMakeFiles/nvim.dir/strings.c.o
../src/nvim/strings.c:1223:23: warning: implicit conversion loses floating-point precision: 'double' to 'float' [-Wconversion]
if (isinf((double)f)
~~~~~~^~~~~~~~~~
/usr/include/math.h:472:46: note: expanded from macro 'isinf'
# define isinf(x) __MATH_TG ((x), __isinf, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
../src/nvim/strings.c:1230:30: warning: implicit conversion loses floating-point precision: 'double' to 'float' [-Wconversion]
} else if (isnan(f)) {
~~~~~~^~
/usr/include/math.h:455:46: note: expanded from macro 'isnan'
# define isnan(x) __MATH_TG ((x), __isnan, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
2 warnings generated.
```
### Expected behaviour
No warnings. `maths.h` uses gcc magic to avoid this warning when compiling with gcc 4.4+, which doesn't work with clang (unless C11 is active, but using a different C version than the project's chosen one is probably a bad idea).
Anyone thinking of a better workaround than moving float code to a special `-Wno-conversion` c file?
",1.0,"(clang >= 6.0 bug) Annoying warnings for isnan(), fpclassify(), et al - - `nvim --version`: ef4feab0e75be
- Vim (version: 8.0.1565) behaves differently? No warnings, but I didn't check if the same functions were used
- Operating system/version: arch linux
- Terminal name/version: pangoterm
- `$TERM`: xterm
### Steps to reproduce using `nvim -u NORC`
```
rm -rf build && CMAKE_EXTRA_FLAGS=""-DCMAKE_C_COMPILER=clang -DCLANG_ASAN_UBSAN=1"" make -j4
```
### Actual behaviour
```
[197/284] Building C object src/nvim/CMakeFiles/nvim.dir/eval/encode.c.o
In file included from ../src/nvim/eval/encode.c:455:
../src/nvim/eval/typval_encode.c.h:330:7: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wconv
ersion]
TYPVAL_ENCODE_CONV_FLOAT(tv, tv->vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:455:
../src/nvim/eval/typval_encode.c.h:491:13: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wcon
version]
TYPVAL_ENCODE_CONV_FLOAT(tv, val_di->di_tv.vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:493:
../src/nvim/eval/typval_encode.c.h:330:7: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wconv
ersion]
TYPVAL_ENCODE_CONV_FLOAT(tv, tv->vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:493:
../src/nvim/eval/typval_encode.c.h:491:13: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wcon
version]
TYPVAL_ENCODE_CONV_FLOAT(tv, val_di->di_tv.vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:330:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:762:
../src/nvim/eval/typval_encode.c.h:330:7: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wconv
ersion]
TYPVAL_ENCODE_CONV_FLOAT(tv, tv->vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:534:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
In file included from ../src/nvim/eval/encode.c:762:
../src/nvim/eval/typval_encode.c.h:491:13: warning: implicit conversion loses floating-point precision: 'const float_T' (aka 'const double') to 'float' [-Wcon
version]
TYPVAL_ENCODE_CONV_FLOAT(tv, val_di->di_tv.vval.v_float);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/nvim/eval/encode.c:534:26: note: expanded from macro 'TYPVAL_ENCODE_CONV_FLOAT'
switch (fpclassify(flt_)) { \
~~~~~~~~~~~^~~~~
/usr/include/math.h:415:56: note: expanded from macro 'fpclassify'
# define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
6 warnings generated.
[261/284] Building C object src/nvim/CMakeFiles/nvim.dir/strings.c.o
../src/nvim/strings.c:1223:23: warning: implicit conversion loses floating-point precision: 'double' to 'float' [-Wconversion]
if (isinf((double)f)
~~~~~~^~~~~~~~~~
/usr/include/math.h:472:46: note: expanded from macro 'isinf'
# define isinf(x) __MATH_TG ((x), __isinf, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
../src/nvim/strings.c:1230:30: warning: implicit conversion loses floating-point precision: 'double' to 'float' [-Wconversion]
} else if (isnan(f)) {
~~~~~~^~
/usr/include/math.h:455:46: note: expanded from macro 'isnan'
# define isnan(x) __MATH_TG ((x), __isnan, (x))
~~~~~~~~~~~~~~~~~~~~~~~~~~^~~
/usr/include/math.h:370:16: note: expanded from macro '__MATH_TG'
? FUNC ## f ARGS \
~~~~~~~~~ ^~~~
2 warnings generated.
```
### Expected behaviour
No warnings. `maths.h` uses gcc magic to avoid this warning when compiling with gcc 4.4+, which doesn't work with clang (unless C11 is active, but using a different C version than the project's chosen one is probably a bad idea).
Anyone thinking of a better workaround than moving float code to a special `-Wno-conversion` c file?
",0, clang bug annoying warnings for isnan fpclassify et al nvim version vim version behaves differently no warnings but i didn t check if the same functions were used operating system version arch linux terminal name version pangoterm term xterm steps to reproduce using nvim u norc rm rf build cmake extra flags dcmake c compiler clang dclang asan ubsan make actual behaviour building c object src nvim cmakefiles nvim dir eval encode c o in file included from src nvim eval encode c src nvim eval typval encode c h warning implicit conversion loses floating point precision const float t aka const double to float wconv ersion typval encode conv float tv tv vval v float src nvim eval encode c note expanded from macro typval encode conv float switch fpclassify flt usr include math h note expanded from macro fpclassify define fpclassify x math tg x fpclassify x usr include math h note expanded from macro math tg func f args in file included from src nvim eval encode c src nvim eval typval encode c h warning implicit conversion loses floating point precision const float t aka const double to float wcon version typval encode conv float tv val di di tv vval v float src nvim eval encode c note expanded from macro typval encode conv float switch fpclassify flt usr include math h note expanded from macro fpclassify define fpclassify x math tg x fpclassify x usr include math h note expanded from macro math tg func f args in file included from src nvim eval encode c src nvim eval typval encode c h warning implicit conversion loses floating point precision const float t aka const double to float wconv ersion typval encode conv float tv tv vval v float src nvim eval encode c note expanded from macro typval encode conv float switch fpclassify flt usr include math h note expanded from macro fpclassify define fpclassify x math tg x fpclassify x usr include math h note expanded from macro math tg func f args in file included from src nvim eval encode c src nvim eval typval encode c h warning implicit conversion loses floating point precision const float t aka const double to float wcon version typval encode conv float tv val di di tv vval v float src nvim eval encode c note expanded from macro typval encode conv float switch fpclassify flt usr include math h note expanded from macro fpclassify define fpclassify x math tg x fpclassify x usr include math h note expanded from macro math tg func f args in file included from src nvim eval encode c src nvim eval typval encode c h warning implicit conversion loses floating point precision const float t aka const double to float wconv ersion typval encode conv float tv tv vval v float src nvim eval encode c note expanded from macro typval encode conv float switch fpclassify flt usr include math h note expanded from macro fpclassify define fpclassify x math tg x fpclassify x usr include math h note expanded from macro math tg func f args in file included from src nvim eval encode c src nvim eval typval encode c h warning implicit conversion loses floating point precision const float t aka const double to float wcon version typval encode conv float tv val di di tv vval v float src nvim eval encode c note expanded from macro typval encode conv float switch fpclassify flt usr include math h note expanded from macro fpclassify define fpclassify x math tg x fpclassify x usr include math h note expanded from macro math tg func f args warnings generated building c object src nvim cmakefiles nvim dir strings c o src nvim strings c warning implicit conversion loses floating point precision double to float if isinf double f usr include math h note expanded from macro isinf define isinf x math tg x isinf x usr include math h note expanded from macro math tg func f args src nvim strings c warning implicit conversion loses floating point precision double to float else if isnan f usr include math h note expanded from macro isnan define isnan x math tg x isnan x usr include math h note expanded from macro math tg func f args warnings generated expected behaviour no warnings maths h uses gcc magic to avoid this warning when compiling with gcc which doesn t work with clang unless is active but using a different c version than the project s chosen one is probably a bad idea anyone thinking of a better workaround than moving float code to a special wno conversion c file ,0
417115,12155912626.0,IssuesEvent,2020-04-25 15:09:49,Scifabric/pybossa,https://api.github.com/repos/Scifabric/pybossa,closed,Error in rebuilding the database,priority.medium,"Rebuilding the database produces an error as it cannot drop tables which have some foreign key relation in some other table.
Instead of using DROP TABLE, DROP CASCADE should be used.
`python cli.py db_rebuild` produces this error.",1.0,"Error in rebuilding the database - Rebuilding the database produces an error as it cannot drop tables which have some foreign key relation in some other table.
Instead of using DROP TABLE, DROP CASCADE should be used.
`python cli.py db_rebuild` produces this error.",0,error in rebuilding the database rebuilding the database produces an error as it cannot drop tables which have some foreign key relation in some other table instead of using drop table drop cascade should be used python cli py db rebuild produces this error ,0
762863,26733790401.0,IssuesEvent,2023-01-30 07:48:38,googleapis/google-cloud-ruby,https://api.github.com/repos/googleapis/google-cloud-ruby,closed,PubSub subscriber not processing messages after starting sometimes,type: bug api: pubsub priority: p2 :rotating_light:,"We have a very similar problem to #8415 but our case is more specific in that our worker (subscriber) does not process a single message upon starting by autoscaling (both time-based and load-based) sometimes. Most of the times, the worker starts and processes messages without any problem but once every few days or week, the worker starts and processes no messages at all, so much so that we have a name for it now - zombie worker.
Below are the logs and metrics captured for a recent zombie worker occurrence.
Oldest unacked message age:

Undelivered messages number:

Expired ack deadlines count:

GRPC warnings in logs:
```
W, [2021-11-11T06:15:25.531965 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:15:25.532056 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636611325.531511420"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:15:25.532766 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:15:25.532830 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636611325.531513935"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:15:25.533273 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:15:25.534701 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-11T06:15:25.535033 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:15:25.535089 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-11T06:30:25.532826 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:30:25.532899 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636612225.532517910"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:30:25.533315 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:30:25.533533 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-11T06:30:25.535103 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:30:25.535164 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636612225.534486605"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:30:25.536260 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:30:25.536758 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
```
At the time of the occurrence, there were 31 other workers running and all are completing jobs except the zombie worker:

We are not sure if this is a client (subscriber) issue or server issue but we are very sure it is not caused by our application because after killing the zombie worker, the messages are re-delivered to another (normal) worker and it has no problem processing the messages.
During another occurrence, we managed to captured the thread status of the zombie worker’s process too:

#### Environment details
- OS: GKE ([Linux](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#cos-variants))
- Ruby version: 2.5.0
- Gem name and version: google-cloud-pubsub (2.6.1)
#### Steps to reproduce
1. Use local copy of google-cloud-pubsub, remove `@inventory.add response.received_messages` and `register_callback rec_msg` in [stream.rb](https://github.com/googleapis/google-cloud-ruby/blob/main/google-cloud-pubsub/lib/google/cloud/pubsub/subscriber/stream.rb).
2. Start only 1 worker (subscriber) that subscribe to a topic
3. Publish batch of 80 messages to same topic
4. Wait for 15 minutes
The reason I removed these 2 lines of codes is our logs show no jobs completed and no errors thrown, and our metrics show no messages added to inventory so they were never executed anyway. And indeed, with these steps, I am able to reproduce locally the oldest unacked message age and undelivered messages metrics as well as the GRPC warnings in the logs:
```
W, [2021-11-22T15:23:51.262995 #98511] WARN -- : bidi: read-loop failed
W, [2021-11-22T15:23:51.263064 #98511] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1637565831.262731000"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/Users/steve/.rbenv/versions/2.5.8/bin/rake:1:in `each'
W, [2021-11-22T15:23:51.263597 #98511] WARN -- : bidi: read-loop failed
W, [2021-11-22T15:23:51.264036 #98511] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1637565831.263365000"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/Users/steve/.rbenv/versions/2.5.8/bin/rake:1:in `each'
W, [2021-11-22T15:23:51.264693 #98511] WARN -- : bidi-write-loop: send close failed
W, [2021-11-22T15:23:51.265109 #98511] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-22T15:23:51.264293 #98511] WARN -- : bidi-write-loop: send close failed
W, [2021-11-22T15:23:51.265388 #98511] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
```
If I do not remove `@inventory.add response.received_messages`, GRPC::Unavailable warnings will be thrown sporadically for the [empty request made every 30 seconds](https://github.com/googleapis/google-cloud-ruby/blob/main/google-cloud-pubsub/lib/google/cloud/pubsub/subscriber/stream.rb#L67-L72). If I do not remove both lines of codes, no GRPC warnings are thrown and there is no problem at all, even when no messages are published.
However, I am not able to reproduce the unusual expired ack deadlines metric, which leads us to believe the issue might be on the server side or the GRPC communication in between. Is that possible?
#### Code example
These are the codes I use to publish batch of 80 messages to the topic for testing:
```ruby
messages = []
80.times do |n|
message = {}
message['id'] = ""#{Time.now.to_i}#{n}""
message['content'] = Faker::Lorem.paragraph(10)
messages << message.to_json
end
pubsub = Google::Cloud::PubSub.new(
project_id: 'PROJECT_ID',
credentials: ""/path/to/keyfile.json""
)
topic = pubsub.topic 'TOPIC_NAME'
topic.publish do |batch_publisher|
messages.each do |message|
batch_publisher.publish message
end
end
```
Our worker's subscriber is operated similarly to the [example provided](https://github.com/googleapis/google-cloud-ruby/tree/main/google-cloud-pubsub#example) but with the following configuration:
```ruby
configuration = {
deadline: 10,
streams: 2,
inventory: {
max_outstanding_messages: 80,
max_total_lease_duration: 20
},
threads: { callback: 8, push: 4 }
}
pubsub.subscription('TOPIC_NAME', skip_lookup: true).listen configuration do |received_message|
process received_message
end
```",1.0,"PubSub subscriber not processing messages after starting sometimes - We have a very similar problem to #8415 but our case is more specific in that our worker (subscriber) does not process a single message upon starting by autoscaling (both time-based and load-based) sometimes. Most of the times, the worker starts and processes messages without any problem but once every few days or week, the worker starts and processes no messages at all, so much so that we have a name for it now - zombie worker.
Below are the logs and metrics captured for a recent zombie worker occurrence.
Oldest unacked message age:

Undelivered messages number:

Expired ack deadlines count:

GRPC warnings in logs:
```
W, [2021-11-11T06:15:25.531965 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:15:25.532056 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636611325.531511420"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:15:25.532766 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:15:25.532830 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636611325.531513935"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:15:25.533273 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:15:25.534701 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-11T06:15:25.535033 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:15:25.535089 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-11T06:30:25.532826 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:30:25.532899 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636612225.532517910"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:30:25.533315 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:30:25.533533 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-11T06:30:25.535103 #1] WARN -- : bidi: read-loop failed
W, [2021-11-11T06:30:25.535164 #1] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1636612225.534486605"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/bin/rake:1:in `each'
W, [2021-11-11T06:30:25.536260 #1] WARN -- : bidi-write-loop: send close failed
W, [2021-11-11T06:30:25.536758 #1] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/var/www/app/rails_shared/bundle/ruby/2.5.0/gems/grpc-1.36.0-x86_64-linux/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
```
At the time of the occurrence, there were 31 other workers running and all are completing jobs except the zombie worker:

We are not sure if this is a client (subscriber) issue or server issue but we are very sure it is not caused by our application because after killing the zombie worker, the messages are re-delivered to another (normal) worker and it has no problem processing the messages.
During another occurrence, we managed to captured the thread status of the zombie worker’s process too:

#### Environment details
- OS: GKE ([Linux](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#cos-variants))
- Ruby version: 2.5.0
- Gem name and version: google-cloud-pubsub (2.6.1)
#### Steps to reproduce
1. Use local copy of google-cloud-pubsub, remove `@inventory.add response.received_messages` and `register_callback rec_msg` in [stream.rb](https://github.com/googleapis/google-cloud-ruby/blob/main/google-cloud-pubsub/lib/google/cloud/pubsub/subscriber/stream.rb).
2. Start only 1 worker (subscriber) that subscribe to a topic
3. Publish batch of 80 messages to same topic
4. Wait for 15 minutes
The reason I removed these 2 lines of codes is our logs show no jobs completed and no errors thrown, and our metrics show no messages added to inventory so they were never executed anyway. And indeed, with these steps, I am able to reproduce locally the oldest unacked message age and undelivered messages metrics as well as the GRPC warnings in the logs:
```
W, [2021-11-22T15:23:51.262995 #98511] WARN -- : bidi: read-loop failed
W, [2021-11-22T15:23:51.263064 #98511] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1637565831.262731000"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/Users/steve/.rbenv/versions/2.5.8/bin/rake:1:in `each'
W, [2021-11-22T15:23:51.263597 #98511] WARN -- : bidi: read-loop failed
W, [2021-11-22T15:23:51.264036 #98511] WARN -- : 4:Deadline Exceeded. debug_error_string:{""created"":""@1637565831.263365000"",""description"":""Deadline Exceeded"",""file"":""src/core/ext/filters/deadline/deadline_filter.cc"",""file_line"":81,""grpc_status"":4} (GRPC::DeadlineExceeded)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/active_call.rb:29:in `check_status'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:209:in `block in read_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:195:in `read_loop'
/Users/steve/.rbenv/versions/2.5.8/bin/rake:1:in `each'
W, [2021-11-22T15:23:51.264693 #98511] WARN -- : bidi-write-loop: send close failed
W, [2021-11-22T15:23:51.265109 #98511] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
W, [2021-11-22T15:23:51.264293 #98511] WARN -- : bidi-write-loop: send close failed
W, [2021-11-22T15:23:51.265388 #98511] WARN -- : call#run_batch failed somehow (GRPC::Core::CallError)
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `run_batch'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:165:in `write_loop'
/Users/steve/.rbenv/versions/2.5.8/lib/ruby/gems/2.5.0/gems/grpc-1.36.0-universal-darwin/src/ruby/lib/grpc/generic/bidi_call.rb:75:in `block in run_on_client'
```
If I do not remove `@inventory.add response.received_messages`, GRPC::Unavailable warnings will be thrown sporadically for the [empty request made every 30 seconds](https://github.com/googleapis/google-cloud-ruby/blob/main/google-cloud-pubsub/lib/google/cloud/pubsub/subscriber/stream.rb#L67-L72). If I do not remove both lines of codes, no GRPC warnings are thrown and there is no problem at all, even when no messages are published.
However, I am not able to reproduce the unusual expired ack deadlines metric, which leads us to believe the issue might be on the server side or the GRPC communication in between. Is that possible?
#### Code example
These are the codes I use to publish batch of 80 messages to the topic for testing:
```ruby
messages = []
80.times do |n|
message = {}
message['id'] = ""#{Time.now.to_i}#{n}""
message['content'] = Faker::Lorem.paragraph(10)
messages << message.to_json
end
pubsub = Google::Cloud::PubSub.new(
project_id: 'PROJECT_ID',
credentials: ""/path/to/keyfile.json""
)
topic = pubsub.topic 'TOPIC_NAME'
topic.publish do |batch_publisher|
messages.each do |message|
batch_publisher.publish message
end
end
```
Our worker's subscriber is operated similarly to the [example provided](https://github.com/googleapis/google-cloud-ruby/tree/main/google-cloud-pubsub#example) but with the following configuration:
```ruby
configuration = {
deadline: 10,
streams: 2,
inventory: {
max_outstanding_messages: 80,
max_total_lease_duration: 20
},
threads: { callback: 8, push: 4 }
}
pubsub.subscription('TOPIC_NAME', skip_lookup: true).listen configuration do |received_message|
process received_message
end
```",0,pubsub subscriber not processing messages after starting sometimes we have a very similar problem to but our case is more specific in that our worker subscriber does not process a single message upon starting by autoscaling both time based and load based sometimes most of the times the worker starts and processes messages without any problem but once every few days or week the worker starts and processes no messages at all so much so that we have a name for it now zombie worker below are the logs and metrics captured for a recent zombie worker occurrence oldest unacked message age undelivered messages number expired ack deadlines count grpc warnings in logs w warn bidi read loop failed w warn deadline exceeded debug error string created description deadline exceeded file src core ext filters deadline deadline filter cc file line grpc status grpc deadlineexceeded var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic active call rb in check status var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in read loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in read loop var www app rails shared bundle ruby bin rake in each w warn bidi read loop failed w warn deadline exceeded debug error string created description deadline exceeded file src core ext filters deadline deadline filter cc file line grpc status grpc deadlineexceeded var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic active call rb in check status var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in read loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in read loop var www app rails shared bundle ruby bin rake in each w warn bidi write loop send close failed w warn call run batch failed somehow grpc core callerror var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in run batch var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in write loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in run on client w warn bidi write loop send close failed w warn call run batch failed somehow grpc core callerror var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in run batch var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in write loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in run on client w warn bidi read loop failed w warn deadline exceeded debug error string created description deadline exceeded file src core ext filters deadline deadline filter cc file line grpc status grpc deadlineexceeded var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic active call rb in check status var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in read loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in read loop var www app rails shared bundle ruby bin rake in each w warn bidi write loop send close failed w warn call run batch failed somehow grpc core callerror var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in run batch var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in write loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in run on client w warn bidi read loop failed w warn deadline exceeded debug error string created description deadline exceeded file src core ext filters deadline deadline filter cc file line grpc status grpc deadlineexceeded var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic active call rb in check status var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in read loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in read loop var www app rails shared bundle ruby bin rake in each w warn bidi write loop send close failed w warn call run batch failed somehow grpc core callerror var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in run batch var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in write loop var www app rails shared bundle ruby gems grpc linux src ruby lib grpc generic bidi call rb in block in run on client at the time of the occurrence there were other workers running and all are completing jobs except the zombie worker we are not sure if this is a client subscriber issue or server issue but we are very sure it is not caused by our application because after killing the zombie worker the messages are re delivered to another normal worker and it has no problem processing the messages during another occurrence we managed to captured the thread status of the zombie worker’s process too environment details os gke ruby version gem name and version google cloud pubsub steps to reproduce use local copy of google cloud pubsub remove inventory add response received messages and register callback rec msg in start only worker subscriber that subscribe to a topic publish batch of messages to same topic wait for minutes the reason i removed these lines of codes is our logs show no jobs completed and no errors thrown and our metrics show no messages added to inventory so they were never executed anyway and indeed with these steps i am able to reproduce locally the oldest unacked message age and undelivered messages metrics as well as the grpc warnings in the logs w warn bidi read loop failed w warn deadline exceeded debug error string created description deadline exceeded file src core ext filters deadline deadline filter cc file line grpc status grpc deadlineexceeded users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic active call rb in check status users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in block in read loop users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in loop users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in read loop users steve rbenv versions bin rake in each w warn bidi read loop failed w warn deadline exceeded debug error string created description deadline exceeded file src core ext filters deadline deadline filter cc file line grpc status grpc deadlineexceeded users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic active call rb in check status users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in block in read loop users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in loop users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in read loop users steve rbenv versions bin rake in each w warn bidi write loop send close failed w warn call run batch failed somehow grpc core callerror users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in run batch users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in write loop users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in block in run on client w warn bidi write loop send close failed w warn call run batch failed somehow grpc core callerror users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in run batch users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in write loop users steve rbenv versions lib ruby gems gems grpc universal darwin src ruby lib grpc generic bidi call rb in block in run on client if i do not remove inventory add response received messages grpc unavailable warnings will be thrown sporadically for the if i do not remove both lines of codes no grpc warnings are thrown and there is no problem at all even when no messages are published however i am not able to reproduce the unusual expired ack deadlines metric which leads us to believe the issue might be on the server side or the grpc communication in between is that possible code example these are the codes i use to publish batch of messages to the topic for testing ruby messages times do n message message time now to i n message faker lorem paragraph messages message to json end pubsub google cloud pubsub new project id project id credentials path to keyfile json topic pubsub topic topic name topic publish do batch publisher messages each do message batch publisher publish message end end our worker s subscriber is operated similarly to the but with the following configuration ruby configuration deadline streams inventory max outstanding messages max total lease duration threads callback push pubsub subscription topic name skip lookup true listen configuration do received message process received message end ,0
125324,4955541748.0,IssuesEvent,2016-12-01 20:43:14,WalkingMachine/sara_commun,https://api.github.com/repos/WalkingMachine/sara_commun,closed,Firefox bug d'affichage ,bug Priority : LOW,"
Il semble avoir une case vide qui apparaît quand il y a un sous-titre donné a un membre. ",1.0,"Firefox bug d'affichage - 
Il semble avoir une case vide qui apparaît quand il y a un sous-titre donné a un membre. ",0,firefox bug d affichage il semble avoir une case vide qui apparaît quand il y a un sous titre donné a un membre ,0
345,3222670266.0,IssuesEvent,2015-10-09 03:22:18,Homebrew/homebrew,https://api.github.com/repos/Homebrew/homebrew,closed,Promote z3 from homebrew-science,maintainer feedback,"Right now the Z3 SMT solver is only available through [homebrew-science](https://github.com/Homebrew/homebrew-science/blob/master/z3.rb). Other solvers like [CVC4](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cvc4.rb) are available on mainline homebrew, and SMT solvers generally are being used more as dependencies for [other software](http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/) and [programming languages](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cryptol.rb). It would be nice to have more than just CVC4 to choose from without tapping homebrew-science.
It looks like this was proposed a couple years ago in #16188 and #21509, but at the time it was more difficult to build and was not available under an open source license. Hopefully there are no longer any blockers to including it in the main repo.",True,"Promote z3 from homebrew-science - Right now the Z3 SMT solver is only available through [homebrew-science](https://github.com/Homebrew/homebrew-science/blob/master/z3.rb). Other solvers like [CVC4](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cvc4.rb) are available on mainline homebrew, and SMT solvers generally are being used more as dependencies for [other software](http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/) and [programming languages](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cryptol.rb). It would be nice to have more than just CVC4 to choose from without tapping homebrew-science.
It looks like this was proposed a couple years ago in #16188 and #21509, but at the time it was more difficult to build and was not available under an open source license. Hopefully there are no longer any blockers to including it in the main repo.",1,promote from homebrew science right now the smt solver is only available through other solvers like are available on mainline homebrew and smt solvers generally are being used more as dependencies for and it would be nice to have more than just to choose from without tapping homebrew science it looks like this was proposed a couple years ago in and but at the time it was more difficult to build and was not available under an open source license hopefully there are no longer any blockers to including it in the main repo ,1
563120,16676305260.0,IssuesEvent,2021-06-07 16:36:50,epam/Indigo,https://api.github.com/repos/epam/Indigo,closed,core: investigate possibility and replace SharedPtr with std::shared_ptr,Core Enhancement High priority,"Currently we have our own implementation of shared-owner smart pointer called SharedPtr (`core/indigo-core/common/base_cpp/shared_ptr.h`).
We need to:
- [ ] investigate possibility of replacing it with `std::shared_ptr`
- [ ] replace it's usages with `std::shared_ptr` with using `std::make_shared` where possible
- [ ] remove `SharedPtr` from the codebase",1.0,"core: investigate possibility and replace SharedPtr with std::shared_ptr - Currently we have our own implementation of shared-owner smart pointer called SharedPtr (`core/indigo-core/common/base_cpp/shared_ptr.h`).
We need to:
- [ ] investigate possibility of replacing it with `std::shared_ptr`
- [ ] replace it's usages with `std::shared_ptr` with using `std::make_shared` where possible
- [ ] remove `SharedPtr` from the codebase",0,core investigate possibility and replace sharedptr with std shared ptr currently we have our own implementation of shared owner smart pointer called sharedptr core indigo core common base cpp shared ptr h we need to investigate possibility of replacing it with std shared ptr replace it s usages with std shared ptr with using std make shared where possible remove sharedptr from the codebase,0
881,4543466320.0,IssuesEvent,2016-09-10 04:55:31,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,newly created containers always have default network even with purge_networks,affects_2.2 bug_report cloud docker in progress waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container.py
##### ANSIBLE VERSION
```
root@nuid:/usr/local/nubeva/config# ansible --version
ansible 2.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
default
##### OS / ENVIRONMENT
docker for mac
##### SUMMARY
When creating a completely new container instance with purge_networks set to ""yes"", the container is created with the default bridge. The second time that the same playbook is run, the playbook updates the container and removes the default bridge.
##### STEPS TO REPRODUCE
With the following playbook snippet:
```
- name: test container
docker_container:
name: test_ubuntu
image: ""{{ registry }}/ubuntu:latest""
detach: True
networks:
- name: trafficentry
- name: trafficexit
purge_networks: yes
```
Run it once on a blank system (or after docker rm -f test_ubuntu). You will see the container is created, but inspecting it shows it is connected to three networks, bridge, trafficentry and trafficexit.
Run it a second time and you will see that the container is now connected to just the trafficentry and trafficexit networks as expected.
##### EXPECTED RESULTS
Newly created containers, not just updated containers, should not have the default network connected when purge_networks is configured.
##### ACTUAL RESULTS
After initial run:
```
docker inspect test_ubuntu
""NetworkSettings"": {
""Bridge"": """",
""SandboxID"": ""bef4d5559dfcf8bc98756caea2d3a47c906718f5c0ea3475ea7e4ab70f68c3ea"",
""HairpinMode"": false,
""LinkLocalIPv6Address"": """",
""LinkLocalIPv6PrefixLen"": 0,
""Ports"": {},
""SandboxKey"": ""/var/run/docker/netns/bef4d5559dfc"",
""SecondaryIPAddresses"": null,
""SecondaryIPv6Addresses"": null,
""EndpointID"": ""c93709aa1119526627933435e47352230ecaa73445ddf0556d8ef1ae6211d351"",
""Gateway"": ""172.18.0.1"",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""IPAddress"": ""172.18.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""MacAddress"": ""02:42:ac:12:00:02"",
""Networks"": {
""bridge"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": null,
""NetworkID"": ""239e364714b16413115d71582c96e05e78b7d135572e393f4fbf3c7c14052511"",
""EndpointID"": ""c93709aa1119526627933435e47352230ecaa73445ddf0556d8ef1ae6211d351"",
""Gateway"": ""172.18.0.1"",
""IPAddress"": ""172.18.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:12:00:02""
},
""trafficentry"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""c87e984a0a94aa11408d6f709e1cc8d35e402323e1de216b2a7958aeb7b7cc53"",
""EndpointID"": ""abc3a2634ba561229f9050b05714b2ecc76b78442ddec48d76fd998d05ca659b"",
""Gateway"": ""172.20.0.1"",
""IPAddress"": ""172.20.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:14:00:02""
},
""trafficexit"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""25c50de46ae3a63ddc36d71e8c24a789052dd0e6bf9487d70f310c00f5688c38"",
""EndpointID"": ""6c7d18d74f66a8e58c4c46c3f0845fb4c8cad30a7eaac14386b1ca7f1159b660"",
""Gateway"": ""172.23.0.1"",
""IPAddress"": ""172.23.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:17:00:02""
}
}
```
After second run:
```
docker inspect test_ubuntu
""Networks"": {
""trafficentry"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""c87e984a0a94aa11408d6f709e1cc8d35e402323e1de216b2a7958aeb7b7cc53"",
""EndpointID"": ""abc3a2634ba561229f9050b05714b2ecc76b78442ddec48d76fd998d05ca659b"",
""Gateway"": ""172.20.0.1"",
""IPAddress"": ""172.20.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:14:00:02""
},
""trafficexit"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""25c50de46ae3a63ddc36d71e8c24a789052dd0e6bf9487d70f310c00f5688c38"",
""EndpointID"": ""6c7d18d74f66a8e58c4c46c3f0845fb4c8cad30a7eaac14386b1ca7f1159b660"",
""Gateway"": ""172.23.0.1"",
""IPAddress"": ""172.23.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:17:00:02""
}
}
```
",True,"newly created containers always have default network even with purge_networks - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container.py
##### ANSIBLE VERSION
```
root@nuid:/usr/local/nubeva/config# ansible --version
ansible 2.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
default
##### OS / ENVIRONMENT
docker for mac
##### SUMMARY
When creating a completely new container instance with purge_networks set to ""yes"", the container is created with the default bridge. The second time that the same playbook is run, the playbook updates the container and removes the default bridge.
##### STEPS TO REPRODUCE
With the following playbook snippet:
```
- name: test container
docker_container:
name: test_ubuntu
image: ""{{ registry }}/ubuntu:latest""
detach: True
networks:
- name: trafficentry
- name: trafficexit
purge_networks: yes
```
Run it once on a blank system (or after docker rm -f test_ubuntu). You will see the container is created, but inspecting it shows it is connected to three networks, bridge, trafficentry and trafficexit.
Run it a second time and you will see that the container is now connected to just the trafficentry and trafficexit networks as expected.
##### EXPECTED RESULTS
Newly created containers, not just updated containers, should not have the default network connected when purge_networks is configured.
##### ACTUAL RESULTS
After initial run:
```
docker inspect test_ubuntu
""NetworkSettings"": {
""Bridge"": """",
""SandboxID"": ""bef4d5559dfcf8bc98756caea2d3a47c906718f5c0ea3475ea7e4ab70f68c3ea"",
""HairpinMode"": false,
""LinkLocalIPv6Address"": """",
""LinkLocalIPv6PrefixLen"": 0,
""Ports"": {},
""SandboxKey"": ""/var/run/docker/netns/bef4d5559dfc"",
""SecondaryIPAddresses"": null,
""SecondaryIPv6Addresses"": null,
""EndpointID"": ""c93709aa1119526627933435e47352230ecaa73445ddf0556d8ef1ae6211d351"",
""Gateway"": ""172.18.0.1"",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""IPAddress"": ""172.18.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""MacAddress"": ""02:42:ac:12:00:02"",
""Networks"": {
""bridge"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": null,
""NetworkID"": ""239e364714b16413115d71582c96e05e78b7d135572e393f4fbf3c7c14052511"",
""EndpointID"": ""c93709aa1119526627933435e47352230ecaa73445ddf0556d8ef1ae6211d351"",
""Gateway"": ""172.18.0.1"",
""IPAddress"": ""172.18.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:12:00:02""
},
""trafficentry"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""c87e984a0a94aa11408d6f709e1cc8d35e402323e1de216b2a7958aeb7b7cc53"",
""EndpointID"": ""abc3a2634ba561229f9050b05714b2ecc76b78442ddec48d76fd998d05ca659b"",
""Gateway"": ""172.20.0.1"",
""IPAddress"": ""172.20.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:14:00:02""
},
""trafficexit"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""25c50de46ae3a63ddc36d71e8c24a789052dd0e6bf9487d70f310c00f5688c38"",
""EndpointID"": ""6c7d18d74f66a8e58c4c46c3f0845fb4c8cad30a7eaac14386b1ca7f1159b660"",
""Gateway"": ""172.23.0.1"",
""IPAddress"": ""172.23.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:17:00:02""
}
}
```
After second run:
```
docker inspect test_ubuntu
""Networks"": {
""trafficentry"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""c87e984a0a94aa11408d6f709e1cc8d35e402323e1de216b2a7958aeb7b7cc53"",
""EndpointID"": ""abc3a2634ba561229f9050b05714b2ecc76b78442ddec48d76fd998d05ca659b"",
""Gateway"": ""172.20.0.1"",
""IPAddress"": ""172.20.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:14:00:02""
},
""trafficexit"": {
""IPAMConfig"": null,
""Links"": null,
""Aliases"": [
""540bab56ef98""
],
""NetworkID"": ""25c50de46ae3a63ddc36d71e8c24a789052dd0e6bf9487d70f310c00f5688c38"",
""EndpointID"": ""6c7d18d74f66a8e58c4c46c3f0845fb4c8cad30a7eaac14386b1ca7f1159b660"",
""Gateway"": ""172.23.0.1"",
""IPAddress"": ""172.23.0.2"",
""IPPrefixLen"": 16,
""IPv6Gateway"": """",
""GlobalIPv6Address"": """",
""GlobalIPv6PrefixLen"": 0,
""MacAddress"": ""02:42:ac:17:00:02""
}
}
```
",1,newly created containers always have default network even with purge networks issue type bug report component name docker container py ansible version root nuid usr local nubeva config ansible version ansible config file configured module search path default w o overrides configuration default os environment docker for mac summary when creating a completely new container instance with purge networks set to yes the container is created with the default bridge the second time that the same playbook is run the playbook updates the container and removes the default bridge steps to reproduce with the following playbook snippet name test container docker container name test ubuntu image registry ubuntu latest detach true networks name trafficentry name trafficexit purge networks yes run it once on a blank system or after docker rm f test ubuntu you will see the container is created but inspecting it shows it is connected to three networks bridge trafficentry and trafficexit run it a second time and you will see that the container is now connected to just the trafficentry and trafficexit networks as expected expected results newly created containers not just updated containers should not have the default network connected when purge networks is configured actual results after initial run docker inspect test ubuntu networksettings bridge sandboxid hairpinmode false ports sandboxkey var run docker netns secondaryipaddresses null null endpointid gateway ipaddress ipprefixlen macaddress ac networks bridge ipamconfig null links null aliases null networkid endpointid gateway ipaddress ipprefixlen macaddress ac trafficentry ipamconfig null links null aliases networkid endpointid gateway ipaddress ipprefixlen macaddress ac trafficexit ipamconfig null links null aliases networkid endpointid gateway ipaddress ipprefixlen macaddress ac after second run docker inspect test ubuntu networks trafficentry ipamconfig null links null aliases networkid endpointid gateway ipaddress ipprefixlen macaddress ac trafficexit ipamconfig null links null aliases networkid endpointid gateway ipaddress ipprefixlen macaddress ac ,1
969,4708287645.0,IssuesEvent,2016-10-13 22:56:04,Particular/NServiceBus.AzureServiceBus,https://api.github.com/repos/Particular/NServiceBus.AzureServiceBus,closed,V7 RTM,Tag: Maintainer Prio,"## Items to complete
- ~~Change package author name -> use updated NugetPackager https://github.com/Particular/V6Launch/issues/4~~ not needed
- ~~[Performance issues with ConnectionString..ctor(String)](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/332)~~ not critical, can wait
- [x] [Delivery count to respect Immediate retries and user settings](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/308)
- [x] [Disable prefetching by default with transport transaction None](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/340)
- [x] [MessageReceiverNotifier should not invoke recoverability when receiving](https://github.com/Particular/NServiceBus.AzureServiceBus/pull/349)
- [x] [Outbox doesn't work with ASB transport](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/352)
- [x] Create release notes (general ones, similar to the [Core ones with milestones](https://github.com/Particular/V6Launch/issues/75#issuecomment-251098093))
- [x] Update [V6Launch status list](https://github.com/Particular/V6Launch/issues/4)",True,"V7 RTM - ## Items to complete
- ~~Change package author name -> use updated NugetPackager https://github.com/Particular/V6Launch/issues/4~~ not needed
- ~~[Performance issues with ConnectionString..ctor(String)](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/332)~~ not critical, can wait
- [x] [Delivery count to respect Immediate retries and user settings](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/308)
- [x] [Disable prefetching by default with transport transaction None](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/340)
- [x] [MessageReceiverNotifier should not invoke recoverability when receiving](https://github.com/Particular/NServiceBus.AzureServiceBus/pull/349)
- [x] [Outbox doesn't work with ASB transport](https://github.com/Particular/NServiceBus.AzureServiceBus/issues/352)
- [x] Create release notes (general ones, similar to the [Core ones with milestones](https://github.com/Particular/V6Launch/issues/75#issuecomment-251098093))
- [x] Update [V6Launch status list](https://github.com/Particular/V6Launch/issues/4)",1, rtm items to complete change package author name use updated nugetpackager not needed not critical can wait create release notes general ones similar to the update ,1
535670,15696217195.0,IssuesEvent,2021-03-26 01:29:08,pxblue/icons,https://api.github.com/repos/pxblue/icons,opened,Update peer dependencies for React progress icons,enhancement high-priority,"#### Describe the desired behavior
Update peer dependencies to allow for React 17
#### Describe the current behavior
Limited to React 16
#### Is this request related to a current issue?
No
#### Additional Context
",1.0,"Update peer dependencies for React progress icons - #### Describe the desired behavior
Update peer dependencies to allow for React 17
#### Describe the current behavior
Limited to React 16
#### Is this request related to a current issue?
No
#### Additional Context
",0,update peer dependencies for react progress icons describe the desired behavior update peer dependencies to allow for react describe the current behavior limited to react is this request related to a current issue no additional context ,0
5736,30324706119.0,IssuesEvent,2023-07-10 22:28:28,professor-greebie/SENG8080-2-field_project,https://api.github.com/repos/professor-greebie/SENG8080-2-field_project,closed,Docker Image for Hadoop for Data Storage ,Data Storage and Maintainance,Will create a docker-compose file that will generate a container for Hadoop. It will contain the files which will be for further stages of projects.,True,Docker Image for Hadoop for Data Storage - Will create a docker-compose file that will generate a container for Hadoop. It will contain the files which will be for further stages of projects.,1,docker image for hadoop for data storage will create a docker compose file that will generate a container for hadoop it will contain the files which will be for further stages of projects ,1
359,3298189639.0,IssuesEvent,2015-11-02 13:21:53,Homebrew/homebrew,https://api.github.com/repos/Homebrew/homebrew,opened,Possible way to handle sandbox issues for Postgres's plugins,help wanted maintainer feedback sandbox upstream issue,"As we can seen in https://github.com/Homebrew/homebrew/pull/41962 and many others PR, all of Postgre's plugins are broken under sandbox. Moreover, this means all of them are broken during `upgrade/unlink/link/switch` etc.
Considering the amount of plugins for Postgres, vending all of them will soon become unscalable. However, until it's fixed/supported by upstream (See https://github.com/Homebrew/homebrew/issues/10247), Postgres is inherently hostile to Homebrew-style sandboxing where several components are symlinked into a common prefix.
Since there isn't any perfect solution, we may will just accept some hacking middle ground. AFAIK, NixOS handles this by copying all of binaries directly to common prefix, hence breaking its symlink sandbox as well. We may take some similar approach:
* Compile Postgres as usual.
* Copy all of binaries in `prefix/bin` to `prefix/libexec/bin-backup`.
* Hard link binaries `prefix/libexec/bin-backup` to `HOMEBREW_PREFIX/bin` during `post_install`.
Clearly, it's still breaking our symlink system. But at least, it can work under sandbox.
Any objection/suggestion/commments? OR should we just vendor all of them inside one mega formula?
cc @mikemcquaid @DomT4 ",True,"Possible way to handle sandbox issues for Postgres's plugins - As we can seen in https://github.com/Homebrew/homebrew/pull/41962 and many others PR, all of Postgre's plugins are broken under sandbox. Moreover, this means all of them are broken during `upgrade/unlink/link/switch` etc.
Considering the amount of plugins for Postgres, vending all of them will soon become unscalable. However, until it's fixed/supported by upstream (See https://github.com/Homebrew/homebrew/issues/10247), Postgres is inherently hostile to Homebrew-style sandboxing where several components are symlinked into a common prefix.
Since there isn't any perfect solution, we may will just accept some hacking middle ground. AFAIK, NixOS handles this by copying all of binaries directly to common prefix, hence breaking its symlink sandbox as well. We may take some similar approach:
* Compile Postgres as usual.
* Copy all of binaries in `prefix/bin` to `prefix/libexec/bin-backup`.
* Hard link binaries `prefix/libexec/bin-backup` to `HOMEBREW_PREFIX/bin` during `post_install`.
Clearly, it's still breaking our symlink system. But at least, it can work under sandbox.
Any objection/suggestion/commments? OR should we just vendor all of them inside one mega formula?
cc @mikemcquaid @DomT4 ",1,possible way to handle sandbox issues for postgres s plugins as we can seen in and many others pr all of postgre s plugins are broken under sandbox moreover this means all of them are broken during upgrade unlink link switch etc considering the amount of plugins for postgres vending all of them will soon become unscalable however until it s fixed supported by upstream see postgres is inherently hostile to homebrew style sandboxing where several components are symlinked into a common prefix since there isn t any perfect solution we may will just accept some hacking middle ground afaik nixos handles this by copying all of binaries directly to common prefix hence breaking its symlink sandbox as well we may take some similar approach compile postgres as usual copy all of binaries in prefix bin to prefix libexec bin backup hard link binaries prefix libexec bin backup to homebrew prefix bin during post install clearly it s still breaking our symlink system but at least it can work under sandbox any objection suggestion commments or should we just vendor all of them inside one mega formula cc mikemcquaid ,1
880,4543305563.0,IssuesEvent,2016-09-10 02:33:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_container removes published ports when used for restart,affects_2.1 bug_report cloud docker in progress waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu Trusty on EC2
##### SUMMARY
docker_container alters the ports when used to restart a docker contaienr
##### STEPS TO REPRODUCE
```
I wanted the equivalent of the below using docker_container
- name: restart containers
shell: ""docker stop {{ item }}; docker start {{ item }}""
with_items: ""{{ docker_names.stdout_lines }} ""
this is my docker_container create task
- name: install lucee5 docker instance 1
docker_container:
docker_api_version: ""{{ dockerapi_version }}""
etc_hosts: ""{{ docker_extra_hosts }}""
name: ""{{ site_prefix }}{{ docker_port }}""
image: ""{{ site_docker_image }}""
state: started
restart_policy: on-failure
restart_retries: 5
pull: yes
published_ports:
- ""{{ docker_port }}:8888""
this is my restart that fails
- name: restart container
docker_container:
api_version: ""{{ dockerapi_version }}""
pull: no
name: ""{{ item }}""
image: ""{{ site_docker_image }}""
state: started
restart: yes
with_items: ""{{ docker_names.stdout_lines }} ""
```
##### EXPECTED RESULTS
The restarted container should of had the same published port i.e.
0.0.0.0:8815->8888/tcp
the problem is
after the creation of the container it has port of
0.0.0.0:8815->8888/tcp
but after the restart this is removed and hence 8815 on host is no longer mapped to 8888 on container
(ports output is now 8009/tcp, 8080/tcp, 8888/tcp and it should be 8009/tcp, 8080/tcp, 0.0.0.0:8815->8888/tcp)
using the shell variant it works as expected.
##### ACTUAL RESULTS
It all works
```
TASK [restart containers] ****************************************************** task path: /var/lib/awx/projects/_5__idg_bitbucket_ansible/aws-update-site-code.yml:22 <172.31.38.101> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.31.38.101> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_11VzYn/cp/ansible-ssh-%h-%p-%r 172.31.38.101 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734 `"" && echo ansible-tmp-1472130784.34-220794800462734=""` echo $HOME/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734 `"" ) && sleep 0'""'""'' <172.31.38.101> PUT /tmp/tmptHnFY8 TO /home/ubuntu/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734/docker_container <172.31.38.101> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_11VzYn/cp/ansible-ssh-%h-%p-%r '[172.31.38.101]' <172.31.38.101> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.31.38.101> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_11VzYn/cp/ansible-ssh-%h-%p-%r -tt 172.31.38.101 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-vbxgltaucyynhgzltqurmkpjrmohhgwe; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734/docker_container; rm -rf ""/home/ubuntu/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' changed: [172.31.38.101] => (item=pca8815) => {""ansible_facts"": {""ansible_docker_container"": {""AppArmorProfile"": """", ""Args"": [""run""], ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""catalina.sh"", ""run""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/tomcat/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"", ""LANG=C.UTF-8"", ""JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre"", ""JAVA_VERSION=8u72"", ""JAVA_DEBIAN_VERSION=8u72-b15-1~bpo8+1"", ""CA_CERTIFICATES_JAVA_VERSION=20140324"", ""CATALINA_HOME=/usr/local/tomcat"", ""TOMCAT_MAJOR=8"", ""TOMCAT_VERSION=8.0.32"", ""TOMCAT_TGZ_URL=https://www.apache.org/dist/tomcat/tomcat-8/v8.0.32/bin/apache-tomcat-8.0.32.tar.gz"", ""LUCEE_JARS_URL=http://snapshot.lucee.org/rest/update/provider/loader/5.0.0.228-SNAPSHOT"", ""LUCEE_JAVA_OPTS=-Xms256m -Xmx512m"", ""TERM=xterm""], ""ExposedPorts"": {""8009/tcp"": {}, ""8080/tcp"": {}, ""8888/tcp"": {}}, ""Hostname"": ""bf0ab3a35447"", ""Image"": ""idguk/lucee5:0.3"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": ""/opt""}, ""Created"": ""2016-08-25T13:13:01.27748365Z"", ""Driver"": ""devicemapper"", ""ExecIDs"": null, ""GraphDriver"": {""Data"": {""DeviceId"": ""44"", ""DeviceName"": ""docker-202:1-271495-f641f21a4059babc8e8f2461e3aee8f762836486114f375e34ebe5d7b856add9"", ""DeviceSize"": ""10737418240""}, ""Name"": ""devicemapper""}, ""HostConfig"": {""AutoRemove"": false, ""Binds"": [], ""BlkioDeviceReadBps"": null, ""BlkioDeviceReadIOps"": null, ""BlkioDeviceWriteBps"": null, ""BlkioDeviceWriteIOps"": null, ""BlkioWeight"": 0, ""BlkioWeightDevice"": null, ""CapAdd"": null, ""CapDrop"": null, ""Cgroup"": """", ""CgroupParent"": """", ""ConsoleSize"": [0, 0], ""ContainerIDFile"": """", ""CpuCount"": 0, ""CpuPercent"": 0, ""CpuPeriod"": 0, ""CpuQuota"": 0, ""CpuShares"": 0, ""CpusetCpus"": """", ""CpusetMems"": """", ""Devices"": null, ""DiskQuota"": 0, ""Dns"": null, ""DnsOptions"": null, ""DnsSearch"": null, ""ExtraHosts"": null, ""GroupAdd"": null, ""IOMaximumBandwidth"": 0, ""IOMaximumIOps"": 0, ""IpcMode"": """", ""Isolation"": """", ""KernelMemory"": 0, ""Links"": null, ""LogConfig"": {""Config"": {}, ""Type"": ""json-file""}, ""Memory"": 0, ""MemoryReservation"": 0, ""MemorySwap"": 0, ""MemorySwappiness"": -1, ""NetworkMode"": ""default"", ""OomKillDisable"": false, ""OomScoreAdj"": 0, ""PidMode"": """", ""PidsLimit"": 0, ""PortBindings"": null, ""Privileged"": false, ""PublishAllPorts"": false, ""ReadonlyRootfs"": false, ""RestartPolicy"": {""MaximumRetryCount"": 0, ""Name"": """"}, ""Runtime"": ""runc"", ""SecurityOpt"": null, ""ShmSize"": 67108864, ""UTSMode"": """", ""Ulimits"": null, ""UsernsMode"": """", ""VolumeDriver"": """", ""VolumesFrom"": null}, ""HostnamePath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/hostname"", ""HostsPath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/hosts"", ""Id"": ""bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479"", ""Image"": ""sha256:c7a9c7f14b1ad1ca1a5036b21ed99418e4a9bcd2c35e83f61fbd58f664486d50"", ""LogPath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479-json.log"", ""MountLabel"": """", ""Mounts"": [], ""Name"": ""/pca8815"", ""NetworkSettings"": {""Bridge"": """", ""EndpointID"": ""e4fcd43fc87ab7378f73a6ee973aefe0c8b16327dde75e30653a6b8a9668dd42"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""HairpinMode"": false, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""LinkLocalIPv6Address"": """", ""LinkLocalIPv6PrefixLen"": 0, ""MacAddress"": ""02:42:ac:11:00:02"", ""Networks"": {""bridge"": {""Aliases"": null, ""EndpointID"": ""e4fcd43fc87ab7378f73a6ee973aefe0c8b16327dde75e30653a6b8a9668dd42"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""IPAMConfig"": null, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""Links"": null, ""MacAddress"": ""02:42:ac:11:00:02"", ""NetworkID"": ""e23b204c3ead343cd4b5dbf1c9bd8963b0d5765515122e5078b84064010e0ec4""}}, ""Ports"": {""8009/tcp"": null, ""8080/tcp"": null, ""8888/tcp"": null}, ""SandboxID"": ""37ce504c47cc80c70c4d9bda33667c371f2fd8898c0f4d60b3d2d0a0e7eea634"", ""SandboxKey"": ""/var/run/docker/netns/37ce504c47cc"", ""SecondaryIPAddresses"": null, ""SecondaryIPv6Addresses"": null}, ""Path"": ""catalina.sh"", ""ProcessLabel"": """", ""ResolvConfPath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/resolv.conf"", ""RestartCount"": 0, ""State"": {""Dead"": false, ""Error"": """", ""ExitCode"": 0, ""FinishedAt"": ""0001-01-01T00:00:00Z"", ""OOMKilled"": false, ""Paused"": false, ""Pid"": 7181, ""Restarting"": false, ""Running"": true, ""StartedAt"": ""2016-08-25T13:13:01.565460684Z"", ""Status"": ""running""}}}, ""changed"": true, ""invocation"": {""module_args"": {""api_version"": ""1.21"", ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": null, ""image"": ""idguk/lucee5:0.3"", ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""json-file"", ""log_options"": null, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""pca8815"", ""network_mode"": null, ""networks"": null, ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": false, ""purge_networks"": null, ""read_only"": false, ""recreate"": false, ""restart"": true, ""restart_policy"": null, ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""started"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": null, ""volumes_from"": null}, ""module_name"": ""docker_container""}, ""item"": ""pca8815""}
```
",True,"docker_container removes published ports when used for restart -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu Trusty on EC2
##### SUMMARY
docker_container alters the ports when used to restart a docker contaienr
##### STEPS TO REPRODUCE
```
I wanted the equivalent of the below using docker_container
- name: restart containers
shell: ""docker stop {{ item }}; docker start {{ item }}""
with_items: ""{{ docker_names.stdout_lines }} ""
this is my docker_container create task
- name: install lucee5 docker instance 1
docker_container:
docker_api_version: ""{{ dockerapi_version }}""
etc_hosts: ""{{ docker_extra_hosts }}""
name: ""{{ site_prefix }}{{ docker_port }}""
image: ""{{ site_docker_image }}""
state: started
restart_policy: on-failure
restart_retries: 5
pull: yes
published_ports:
- ""{{ docker_port }}:8888""
this is my restart that fails
- name: restart container
docker_container:
api_version: ""{{ dockerapi_version }}""
pull: no
name: ""{{ item }}""
image: ""{{ site_docker_image }}""
state: started
restart: yes
with_items: ""{{ docker_names.stdout_lines }} ""
```
##### EXPECTED RESULTS
The restarted container should of had the same published port i.e.
0.0.0.0:8815->8888/tcp
the problem is
after the creation of the container it has port of
0.0.0.0:8815->8888/tcp
but after the restart this is removed and hence 8815 on host is no longer mapped to 8888 on container
(ports output is now 8009/tcp, 8080/tcp, 8888/tcp and it should be 8009/tcp, 8080/tcp, 0.0.0.0:8815->8888/tcp)
using the shell variant it works as expected.
##### ACTUAL RESULTS
It all works
```
TASK [restart containers] ****************************************************** task path: /var/lib/awx/projects/_5__idg_bitbucket_ansible/aws-update-site-code.yml:22 <172.31.38.101> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.31.38.101> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_11VzYn/cp/ansible-ssh-%h-%p-%r 172.31.38.101 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734 `"" && echo ansible-tmp-1472130784.34-220794800462734=""` echo $HOME/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734 `"" ) && sleep 0'""'""'' <172.31.38.101> PUT /tmp/tmptHnFY8 TO /home/ubuntu/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734/docker_container <172.31.38.101> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_11VzYn/cp/ansible-ssh-%h-%p-%r '[172.31.38.101]' <172.31.38.101> ESTABLISH SSH CONNECTION FOR USER: ubuntu <172.31.38.101> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/tmp/ansible_tower_11VzYn/cp/ansible-ssh-%h-%p-%r -tt 172.31.38.101 '/bin/sh -c '""'""'sudo -H -S -n -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-vbxgltaucyynhgzltqurmkpjrmohhgwe; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734/docker_container; rm -rf ""/home/ubuntu/.ansible/tmp/ansible-tmp-1472130784.34-220794800462734/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""'' changed: [172.31.38.101] => (item=pca8815) => {""ansible_facts"": {""ansible_docker_container"": {""AppArmorProfile"": """", ""Args"": [""run""], ""Config"": {""AttachStderr"": false, ""AttachStdin"": false, ""AttachStdout"": false, ""Cmd"": [""catalina.sh"", ""run""], ""Domainname"": """", ""Entrypoint"": null, ""Env"": [""PATH=/usr/local/tomcat/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"", ""LANG=C.UTF-8"", ""JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre"", ""JAVA_VERSION=8u72"", ""JAVA_DEBIAN_VERSION=8u72-b15-1~bpo8+1"", ""CA_CERTIFICATES_JAVA_VERSION=20140324"", ""CATALINA_HOME=/usr/local/tomcat"", ""TOMCAT_MAJOR=8"", ""TOMCAT_VERSION=8.0.32"", ""TOMCAT_TGZ_URL=https://www.apache.org/dist/tomcat/tomcat-8/v8.0.32/bin/apache-tomcat-8.0.32.tar.gz"", ""LUCEE_JARS_URL=http://snapshot.lucee.org/rest/update/provider/loader/5.0.0.228-SNAPSHOT"", ""LUCEE_JAVA_OPTS=-Xms256m -Xmx512m"", ""TERM=xterm""], ""ExposedPorts"": {""8009/tcp"": {}, ""8080/tcp"": {}, ""8888/tcp"": {}}, ""Hostname"": ""bf0ab3a35447"", ""Image"": ""idguk/lucee5:0.3"", ""Labels"": {}, ""OnBuild"": null, ""OpenStdin"": false, ""StdinOnce"": false, ""Tty"": false, ""User"": """", ""Volumes"": null, ""WorkingDir"": ""/opt""}, ""Created"": ""2016-08-25T13:13:01.27748365Z"", ""Driver"": ""devicemapper"", ""ExecIDs"": null, ""GraphDriver"": {""Data"": {""DeviceId"": ""44"", ""DeviceName"": ""docker-202:1-271495-f641f21a4059babc8e8f2461e3aee8f762836486114f375e34ebe5d7b856add9"", ""DeviceSize"": ""10737418240""}, ""Name"": ""devicemapper""}, ""HostConfig"": {""AutoRemove"": false, ""Binds"": [], ""BlkioDeviceReadBps"": null, ""BlkioDeviceReadIOps"": null, ""BlkioDeviceWriteBps"": null, ""BlkioDeviceWriteIOps"": null, ""BlkioWeight"": 0, ""BlkioWeightDevice"": null, ""CapAdd"": null, ""CapDrop"": null, ""Cgroup"": """", ""CgroupParent"": """", ""ConsoleSize"": [0, 0], ""ContainerIDFile"": """", ""CpuCount"": 0, ""CpuPercent"": 0, ""CpuPeriod"": 0, ""CpuQuota"": 0, ""CpuShares"": 0, ""CpusetCpus"": """", ""CpusetMems"": """", ""Devices"": null, ""DiskQuota"": 0, ""Dns"": null, ""DnsOptions"": null, ""DnsSearch"": null, ""ExtraHosts"": null, ""GroupAdd"": null, ""IOMaximumBandwidth"": 0, ""IOMaximumIOps"": 0, ""IpcMode"": """", ""Isolation"": """", ""KernelMemory"": 0, ""Links"": null, ""LogConfig"": {""Config"": {}, ""Type"": ""json-file""}, ""Memory"": 0, ""MemoryReservation"": 0, ""MemorySwap"": 0, ""MemorySwappiness"": -1, ""NetworkMode"": ""default"", ""OomKillDisable"": false, ""OomScoreAdj"": 0, ""PidMode"": """", ""PidsLimit"": 0, ""PortBindings"": null, ""Privileged"": false, ""PublishAllPorts"": false, ""ReadonlyRootfs"": false, ""RestartPolicy"": {""MaximumRetryCount"": 0, ""Name"": """"}, ""Runtime"": ""runc"", ""SecurityOpt"": null, ""ShmSize"": 67108864, ""UTSMode"": """", ""Ulimits"": null, ""UsernsMode"": """", ""VolumeDriver"": """", ""VolumesFrom"": null}, ""HostnamePath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/hostname"", ""HostsPath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/hosts"", ""Id"": ""bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479"", ""Image"": ""sha256:c7a9c7f14b1ad1ca1a5036b21ed99418e4a9bcd2c35e83f61fbd58f664486d50"", ""LogPath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479-json.log"", ""MountLabel"": """", ""Mounts"": [], ""Name"": ""/pca8815"", ""NetworkSettings"": {""Bridge"": """", ""EndpointID"": ""e4fcd43fc87ab7378f73a6ee973aefe0c8b16327dde75e30653a6b8a9668dd42"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""HairpinMode"": false, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""LinkLocalIPv6Address"": """", ""LinkLocalIPv6PrefixLen"": 0, ""MacAddress"": ""02:42:ac:11:00:02"", ""Networks"": {""bridge"": {""Aliases"": null, ""EndpointID"": ""e4fcd43fc87ab7378f73a6ee973aefe0c8b16327dde75e30653a6b8a9668dd42"", ""Gateway"": ""172.17.0.1"", ""GlobalIPv6Address"": """", ""GlobalIPv6PrefixLen"": 0, ""IPAMConfig"": null, ""IPAddress"": ""172.17.0.2"", ""IPPrefixLen"": 16, ""IPv6Gateway"": """", ""Links"": null, ""MacAddress"": ""02:42:ac:11:00:02"", ""NetworkID"": ""e23b204c3ead343cd4b5dbf1c9bd8963b0d5765515122e5078b84064010e0ec4""}}, ""Ports"": {""8009/tcp"": null, ""8080/tcp"": null, ""8888/tcp"": null}, ""SandboxID"": ""37ce504c47cc80c70c4d9bda33667c371f2fd8898c0f4d60b3d2d0a0e7eea634"", ""SandboxKey"": ""/var/run/docker/netns/37ce504c47cc"", ""SecondaryIPAddresses"": null, ""SecondaryIPv6Addresses"": null}, ""Path"": ""catalina.sh"", ""ProcessLabel"": """", ""ResolvConfPath"": ""/var/lib/docker/containers/bf0ab3a354470979677e4b0cb4513372cc7f4351e5232a47d1d6c6f19f8bd479/resolv.conf"", ""RestartCount"": 0, ""State"": {""Dead"": false, ""Error"": """", ""ExitCode"": 0, ""FinishedAt"": ""0001-01-01T00:00:00Z"", ""OOMKilled"": false, ""Paused"": false, ""Pid"": 7181, ""Restarting"": false, ""Running"": true, ""StartedAt"": ""2016-08-25T13:13:01.565460684Z"", ""Status"": ""running""}}}, ""changed"": true, ""invocation"": {""module_args"": {""api_version"": ""1.21"", ""blkio_weight"": null, ""cacert_path"": null, ""capabilities"": null, ""cert_path"": null, ""command"": null, ""cpu_period"": null, ""cpu_quota"": null, ""cpu_shares"": null, ""cpuset_cpus"": null, ""cpuset_mems"": null, ""debug"": false, ""detach"": true, ""devices"": null, ""dns_opts"": null, ""dns_search_domains"": null, ""dns_servers"": null, ""docker_host"": null, ""entrypoint"": null, ""env"": null, ""env_file"": null, ""etc_hosts"": null, ""exposed_ports"": null, ""filter_logger"": false, ""force_kill"": false, ""groups"": null, ""hostname"": null, ""image"": ""idguk/lucee5:0.3"", ""interactive"": false, ""ipc_mode"": null, ""keep_volumes"": true, ""kernel_memory"": null, ""key_path"": null, ""kill_signal"": null, ""labels"": null, ""links"": null, ""log_driver"": ""json-file"", ""log_options"": null, ""mac_address"": null, ""memory"": ""0"", ""memory_reservation"": null, ""memory_swap"": null, ""memory_swappiness"": null, ""name"": ""pca8815"", ""network_mode"": null, ""networks"": null, ""oom_killer"": null, ""paused"": false, ""pid_mode"": null, ""privileged"": false, ""published_ports"": null, ""pull"": false, ""purge_networks"": null, ""read_only"": false, ""recreate"": false, ""restart"": true, ""restart_policy"": null, ""restart_retries"": 0, ""security_opts"": null, ""shm_size"": null, ""ssl_version"": null, ""state"": ""started"", ""stop_signal"": null, ""stop_timeout"": null, ""timeout"": null, ""tls"": null, ""tls_hostname"": null, ""tls_verify"": null, ""trust_image_content"": false, ""tty"": false, ""ulimits"": null, ""user"": null, ""uts"": null, ""volume_driver"": null, ""volumes"": null, ""volumes_from"": null}, ""module_name"": ""docker_container""}, ""item"": ""pca8815""}
```
",1,docker container removes published ports when used for restart issue type bug report component name docker container ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu trusty on summary docker container alters the ports when used to restart a docker contaienr steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i wanted the equivalent of the below using docker container name restart containers shell docker stop item docker start item with items docker names stdout lines this is my docker container create task name install docker instance docker container docker api version dockerapi version etc hosts docker extra hosts name site prefix docker port image site docker image state started restart policy on failure restart retries pull yes published ports docker port this is my restart that fails name restart container docker container api version dockerapi version pull no name item image site docker image state started restart yes with items docker names stdout lines expected results the restarted container should of had the same published port i e tcp the problem is after the creation of the container it has port of tcp but after the restart this is removed and hence on host is no longer mapped to on container ports output is now tcp tcp tcp and it should be tcp tcp tcp using the shell variant it works as expected actual results it all works task task path var lib awx projects idg bitbucket ansible aws update site code yml establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ubuntu ansible tmp ansible tmp docker container ssh exec sftp b c o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r establish ssh connection for user ubuntu ssh exec ssh c q o controlmaster auto o controlpersist o stricthostkeychecking no o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ubuntu o connecttimeout o controlpath tmp ansible tower cp ansible ssh h p r tt bin sh c sudo h s n u root bin sh c echo become success vbxgltaucyynhgzltqurmkpjrmohhgwe lang en us utf lc all en us utf lc messages en us utf usr bin python home ubuntu ansible tmp ansible tmp docker container rm rf home ubuntu ansible tmp ansible tmp dev null sleep changed item ansible facts ansible docker container apparmorprofile args config attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env exposedports tcp tcp tcp hostname image idguk labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir opt created driver devicemapper execids null graphdriver data deviceid devicename docker devicesize name devicemapper hostconfig autoremove false binds blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice null capadd null capdrop null cgroup cgroupparent consolesize containeridfile cpucount cpupercent cpuperiod cpuquota cpushares cpusetcpus cpusetmems devices null diskquota dns null dnsoptions null dnssearch null extrahosts null groupadd null iomaximumbandwidth iomaximumiops ipcmode isolation kernelmemory links null logconfig config type json file memory memoryreservation memoryswap memoryswappiness networkmode default oomkilldisable false oomscoreadj pidmode pidslimit portbindings null privileged false publishallports false readonlyrootfs false restartpolicy maximumretrycount name runtime runc securityopt null shmsize utsmode ulimits null usernsmode volumedriver volumesfrom null hostnamepath var lib docker containers hostname hostspath var lib docker containers hosts id image logpath var lib docker containers json log mountlabel mounts name networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress ac networks bridge aliases null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress ac networkid ports tcp null tcp null tcp null sandboxid sandboxkey var run docker netns secondaryipaddresses null null path catalina sh processlabel resolvconfpath var lib docker containers resolv conf restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running true startedat status running changed true invocation module args api version blkio weight null cacert path null capabilities null cert path null command null cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns servers null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null hostname null image idguk interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver json file log options null mac address null memory memory reservation null memory swap null memory swappiness null name network mode null networks null oom killer null paused false pid mode null privileged false published ports null pull false purge networks null read only false recreate false restart true restart policy null restart retries security opts null shm size null ssl version null state started stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes null volumes from null module name docker container item ,1
567,4044282700.0,IssuesEvent,2016-05-21 07:19:06,duckduckgo/zeroclickinfo-spice,https://api.github.com/repos/duckduckgo/zeroclickinfo-spice,closed,"WordMap: Needs to ensure remainder is only 1 word, or remove ""similar to"" trigger",Low-Hanging Fruit Maintainer Input Requested Relevancy Triggering,"This instant answer triggers on any query that starts with ""similar to"" and often returns irrelevant results for queries that aren't searching for words/terms.
We need to reduce the IA to only working on single word queries or tighten the triggering to queries that more clearly indicate they're looking for similar _words_.
e.g. https://duckduckgo.com/?q=similar+to+invisible+fence+collar&ia=answer
------
IA Page: http://duck.co/ia/view/word_map
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @twinword",True,"WordMap: Needs to ensure remainder is only 1 word, or remove ""similar to"" trigger - This instant answer triggers on any query that starts with ""similar to"" and often returns irrelevant results for queries that aren't searching for words/terms.
We need to reduce the IA to only working on single word queries or tighten the triggering to queries that more clearly indicate they're looking for similar _words_.
e.g. https://duckduckgo.com/?q=similar+to+invisible+fence+collar&ia=answer
------
IA Page: http://duck.co/ia/view/word_map
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @twinword",1,wordmap needs to ensure remainder is only word or remove similar to trigger this instant answer triggers on any query that starts with similar to and often returns irrelevant results for queries that aren t searching for words terms we need to reduce the ia to only working on single word queries or tighten the triggering to queries that more clearly indicate they re looking for similar words e g ia page twinword,1
6290,5348571835.0,IssuesEvent,2017-02-18 06:32:06,zsh-users/zsh-autosuggestions,https://api.github.com/repos/zsh-users/zsh-autosuggestions,closed,Slow speed of cd,performance pull-request-welcome,"After initialization of zsh-autosuggestions the speed of change dir in Midnight commander(regular movements in mc) dramaticaly falls down. I use oh-my-zsh zsh version - 5.2. Is there way to solve it?
",True,"Slow speed of cd - After initialization of zsh-autosuggestions the speed of change dir in Midnight commander(regular movements in mc) dramaticaly falls down. I use oh-my-zsh zsh version - 5.2. Is there way to solve it?
",0,slow speed of cd after initialization of zsh autosuggestions the speed of change dir in midnight commander regular movements in mc dramaticaly falls down i use oh my zsh zsh version is there way to solve it ,0
2533,8657431119.0,IssuesEvent,2018-11-27 21:18:31,Kapeli/Dash-User-Contributions,https://api.github.com/repos/Kapeli/Dash-User-Contributions,closed,Flux Docset maintainer needed,needs maintainer,"I can no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at https://github.com/epitaphmike/flux-dash. If this is something you are interested in helping with please reach out. Thank you.
",True,"Flux Docset maintainer needed - I can no longer have time to maintain this docset and I am looking for additional contributors to assist. My repo is located at https://github.com/epitaphmike/flux-dash. If this is something you are interested in helping with please reach out. Thank you.
",1,flux docset maintainer needed i can no longer have time to maintain this docset and i am looking for additional contributors to assist my repo is located at if this is something you are interested in helping with please reach out thank you ,1
933,4644111853.0,IssuesEvent,2016-09-30 15:25:17,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Will ec2_asg support ""default_cooldown"" and ""termination_policies"" options?",affects_2.0 aws cloud feature_idea waiting_on_maintainer,"When trying to use both ""termination_policies"" and ""default_cooldown"" it fails as there is no support for them even though they are listed in the ASG_ATTRIBUTES string and also supported by boto downstream. Will these get added to the module?",True,"Will ec2_asg support ""default_cooldown"" and ""termination_policies"" options? - When trying to use both ""termination_policies"" and ""default_cooldown"" it fails as there is no support for them even though they are listed in the ASG_ATTRIBUTES string and also supported by boto downstream. Will these get added to the module?",1,will asg support default cooldown and termination policies options when trying to use both termination policies and default cooldown it fails as there is no support for them even though they are listed in the asg attributes string and also supported by boto downstream will these get added to the module ,1
1808,6575944250.0,IssuesEvent,2017-09-11 17:55:52,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,lineinfile documentation should clarify whether state=present will add a line if regexp has no match,affects_2.1 docs_report waiting_on_maintainer,"##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lineinfile module
##### ANSIBLE VERSION
2.1.1.0
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
For lineinfile module, when I run a task like this:
```
- lineinfile: line=""example"" regexp=""exampl.*"" state=present dest=somefile.txt
```
when file somefile.txt does not contain the regex:
```
other
text
```
The documentation doesn't make it clear whether `state=present` will ensure that the line is added even if the regex does not exist (adding it to the end) or if it will only be added if there is a match. It seems to me that ansible _used to_ add it regardless, but as of ansible 2.1.1.0 (if not prior), it is not doing that. I can't tell from the docs whether a bug was fixed or introduced.
##### STEPS TO REPRODUCE
Read ""regexp"" and ""state"" sections in doc:
http://docs.ansible.com/ansible/lineinfile_module.html
",True,"lineinfile documentation should clarify whether state=present will add a line if regexp has no match - ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lineinfile module
##### ANSIBLE VERSION
2.1.1.0
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
For lineinfile module, when I run a task like this:
```
- lineinfile: line=""example"" regexp=""exampl.*"" state=present dest=somefile.txt
```
when file somefile.txt does not contain the regex:
```
other
text
```
The documentation doesn't make it clear whether `state=present` will ensure that the line is added even if the regex does not exist (adding it to the end) or if it will only be added if there is a match. It seems to me that ansible _used to_ add it regardless, but as of ansible 2.1.1.0 (if not prior), it is not doing that. I can't tell from the docs whether a bug was fixed or introduced.
##### STEPS TO REPRODUCE
Read ""regexp"" and ""state"" sections in doc:
http://docs.ansible.com/ansible/lineinfile_module.html
",1,lineinfile documentation should clarify whether state present will add a line if regexp has no match issue type documentation report component name lineinfile module ansible version configuration n a os environment n a summary for lineinfile module when i run a task like this lineinfile line example regexp exampl state present dest somefile txt when file somefile txt does not contain the regex other text the documentation doesn t make it clear whether state present will ensure that the line is added even if the regex does not exist adding it to the end or if it will only be added if there is a match it seems to me that ansible used to add it regardless but as of ansible if not prior it is not doing that i can t tell from the docs whether a bug was fixed or introduced steps to reproduce read regexp and state sections in doc ,1
121678,26012688244.0,IssuesEvent,2022-12-21 04:25:22,UnitTestBot/UTBotJava,https://api.github.com/repos/UnitTestBot/UTBotJava,closed,Useless assignment of enum values to corresponding static fields in generated tests,bug codegen engine release tailings,"**Description**
Generated tests explicitly assign enum values to corresponding static fields, which is useless as the set of enum values is fixed, and no new instances can be created.
**To Reproduce**
Generate a test suite for the `Coin.reverse()` method in the following code.
```java
public enum Coin {
HEADS,
TAILS;
public Coin reverse() {
return this == HEADS ? TAILS : HEADS;
}
}
```
**Expected behavior**
Generated tests should not contain assignments of enum values to corresponding static fields.
**Actual behavior**
Generated tests explicitly assign enum values to corresponding static fields using reflection.
Note: this behavior depends on the enum support that is not in `main` yet (PR #611). When checking on `main`, the behavior may differ.
**Visual proofs (screenshots, logs, images)**
Generated tests:
```java
///region SUCCESSFUL EXECUTIONS for method reverse()
/**
*
*/
@Test
//@org.junit.jupiter.api.DisplayName(""reverse: this == HEADS : True -> return this == HEADS ? TAILS : HEADS"")
public void testReverse_EqualsHEADS() throws ClassNotFoundException, IllegalAccessException, NoSuchFieldException {
Coin prevHEADS = Coin.HEADS;
Coin prevTAILS = Coin.TAILS;
try {
Coin heads = Coin.HEADS;
Class coinClazz = Class.forName(""enums.Coin"");
setStaticField(coinClazz, ""HEADS"", heads);
Coin tails = Coin.TAILS;
setStaticField(coinClazz, ""TAILS"", tails);
Coin actual = heads.reverse();
assertEquals(tails, actual);
} finally {
setStaticField(Coin.class, ""HEADS"", prevHEADS);
setStaticField(Coin.class, ""TAILS"", prevTAILS);
}
}
///endregion
```
**Environment**
This behavior does not depend on any specific test environment.
**Additional context**
Assignment of specific values to static fields (usually using reflection) and restoring old values after the test is a common trait of UTBot-generated tests. Generally this behavior is necessary as the codegen should guarantee that the initial state of each object matches the expected initial state induced by the symbolic engine. On the other hand, often these assignment are useless. For example, final static fields of JDK classes are correctly initialized by the JDK code, and there is no sensible way to assign something else to these fields without risking to break things. Unfortunately, it seems that the correct and non-redundant object initialization is hard.
Maybe it might be done for some special cases like enum value initialization, as we can be sure that each corresponding static field already has the correct value at the start of the test.",1.0,"Useless assignment of enum values to corresponding static fields in generated tests - **Description**
Generated tests explicitly assign enum values to corresponding static fields, which is useless as the set of enum values is fixed, and no new instances can be created.
**To Reproduce**
Generate a test suite for the `Coin.reverse()` method in the following code.
```java
public enum Coin {
HEADS,
TAILS;
public Coin reverse() {
return this == HEADS ? TAILS : HEADS;
}
}
```
**Expected behavior**
Generated tests should not contain assignments of enum values to corresponding static fields.
**Actual behavior**
Generated tests explicitly assign enum values to corresponding static fields using reflection.
Note: this behavior depends on the enum support that is not in `main` yet (PR #611). When checking on `main`, the behavior may differ.
**Visual proofs (screenshots, logs, images)**
Generated tests:
```java
///region SUCCESSFUL EXECUTIONS for method reverse()
/**
*
*/
@Test
//@org.junit.jupiter.api.DisplayName(""reverse: this == HEADS : True -> return this == HEADS ? TAILS : HEADS"")
public void testReverse_EqualsHEADS() throws ClassNotFoundException, IllegalAccessException, NoSuchFieldException {
Coin prevHEADS = Coin.HEADS;
Coin prevTAILS = Coin.TAILS;
try {
Coin heads = Coin.HEADS;
Class coinClazz = Class.forName(""enums.Coin"");
setStaticField(coinClazz, ""HEADS"", heads);
Coin tails = Coin.TAILS;
setStaticField(coinClazz, ""TAILS"", tails);
Coin actual = heads.reverse();
assertEquals(tails, actual);
} finally {
setStaticField(Coin.class, ""HEADS"", prevHEADS);
setStaticField(Coin.class, ""TAILS"", prevTAILS);
}
}
///endregion
```
**Environment**
This behavior does not depend on any specific test environment.
**Additional context**
Assignment of specific values to static fields (usually using reflection) and restoring old values after the test is a common trait of UTBot-generated tests. Generally this behavior is necessary as the codegen should guarantee that the initial state of each object matches the expected initial state induced by the symbolic engine. On the other hand, often these assignment are useless. For example, final static fields of JDK classes are correctly initialized by the JDK code, and there is no sensible way to assign something else to these fields without risking to break things. Unfortunately, it seems that the correct and non-redundant object initialization is hard.
Maybe it might be done for some special cases like enum value initialization, as we can be sure that each corresponding static field already has the correct value at the start of the test.",0,useless assignment of enum values to corresponding static fields in generated tests description generated tests explicitly assign enum values to corresponding static fields which is useless as the set of enum values is fixed and no new instances can be created to reproduce generate a test suite for the coin reverse method in the following code java public enum coin heads tails public coin reverse return this heads tails heads expected behavior generated tests should not contain assignments of enum values to corresponding static fields actual behavior generated tests explicitly assign enum values to corresponding static fields using reflection note this behavior depends on the enum support that is not in main yet pr when checking on main the behavior may differ visual proofs screenshots logs images generated tests java region successful executions for method reverse test executes conditions code this heads false returns from code return this heads tails heads test org junit jupiter api displayname reverse this heads false return this heads tails heads public void testreverse notequalsheads throws classnotfoundexception illegalaccessexception nosuchfieldexception coin prevheads coin heads try coin heads coin heads class coinclazz class forname enums coin setstaticfield coinclazz heads heads coin coin coin tails coin actual coin reverse assertequals heads actual finally setstaticfield coin class heads prevheads test executes conditions code this heads true returns from code return this heads tails heads test org junit jupiter api displayname reverse this heads true return this heads tails heads public void testreverse equalsheads throws classnotfoundexception illegalaccessexception nosuchfieldexception coin prevheads coin heads coin prevtails coin tails try coin heads coin heads class coinclazz class forname enums coin setstaticfield coinclazz heads heads coin tails coin tails setstaticfield coinclazz tails tails coin actual heads reverse assertequals tails actual finally setstaticfield coin class heads prevheads setstaticfield coin class tails prevtails endregion environment this behavior does not depend on any specific test environment additional context assignment of specific values to static fields usually using reflection and restoring old values after the test is a common trait of utbot generated tests generally this behavior is necessary as the codegen should guarantee that the initial state of each object matches the expected initial state induced by the symbolic engine on the other hand often these assignment are useless for example final static fields of jdk classes are correctly initialized by the jdk code and there is no sensible way to assign something else to these fields without risking to break things unfortunately it seems that the correct and non redundant object initialization is hard maybe it might be done for some special cases like enum value initialization as we can be sure that each corresponding static field already has the correct value at the start of the test ,0
5019,25776063601.0,IssuesEvent,2022-12-09 12:05:41,maticnetwork/matic-docs,https://api.github.com/repos/maticnetwork/matic-docs,closed,trying deploy full node on google cloud but below errors ,help wanted question T2: Maintain,"Welcome to Cloud Shell! Type ""help"" to get started.
Your Cloud Platform project in this session is set to polygon-346517.
Use “gcloud config set project [PROJECT_ID]” to change to a different project.
ramesh_ram0341@cloudshell:~ (polygon-346517)$ export POLYGON_NETWORK=mainnet
export POLYGON_NODETYPE=fullnode
export POLYGON_BOOTSTRAP_MODE=snapshot
export POLYGON_RPC_PORT=8747
export GCP_NETWORK_TAG=polygon
export EXTRA_VAR=""bor_branch=v0.2.14 heimdall_branch=v0.2.8 network_version=mainnet-v1 node_type=sentry/sentry heimdall_network=${POLYGON_NETWORK}""
gcloud compute firewall-rules create ""polygon-p2p"" --allow=tcp:26656,tcp:30303,udp:30303 --description=""polygon p2p"" --target-tags=${GCP_NETWORK_TAG}
gcloud compute firewall-rules create ""polygon-rpc"" --allow=tcp:${POLYGON_RPC_PORT} --description=""polygon rpc"" --target-tags=${GCP_NETWORK_TAG}
export INSTANCE_NAME=polygon-0
export INSTANCE_TYPE=e2-standard-8
export BOR_EXT_DISK_SIZE=1024
{POLYGON_NETWORK}' -m '${POLYGON_NODETYPE}' -s '${POLYGON_BOOTSTRAP_MODE}' -p '${POLYGON_RPC_PORT}' -e \""'${EXTRA_VAR}'\""; bash""'.sh | bash -s -- -n '$
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- The resource 'projects/polygon-346517/global/firewalls/polygon-p2p' already exists
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- The resource 'projects/polygon-346517/global/firewalls/polygon-rpc' already exists
ERROR: (gcloud.compute.instances.create) unrecognized arguments:
heimdall_branch=v0.2.8
network_version=mainnet-v1 (did you mean '--network-tier'?)
node_type=sentry/sentry
heimdall_network=mainnet\""; bash""
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS",True,"trying deploy full node on google cloud but below errors - Welcome to Cloud Shell! Type ""help"" to get started.
Your Cloud Platform project in this session is set to polygon-346517.
Use “gcloud config set project [PROJECT_ID]” to change to a different project.
ramesh_ram0341@cloudshell:~ (polygon-346517)$ export POLYGON_NETWORK=mainnet
export POLYGON_NODETYPE=fullnode
export POLYGON_BOOTSTRAP_MODE=snapshot
export POLYGON_RPC_PORT=8747
export GCP_NETWORK_TAG=polygon
export EXTRA_VAR=""bor_branch=v0.2.14 heimdall_branch=v0.2.8 network_version=mainnet-v1 node_type=sentry/sentry heimdall_network=${POLYGON_NETWORK}""
gcloud compute firewall-rules create ""polygon-p2p"" --allow=tcp:26656,tcp:30303,udp:30303 --description=""polygon p2p"" --target-tags=${GCP_NETWORK_TAG}
gcloud compute firewall-rules create ""polygon-rpc"" --allow=tcp:${POLYGON_RPC_PORT} --description=""polygon rpc"" --target-tags=${GCP_NETWORK_TAG}
export INSTANCE_NAME=polygon-0
export INSTANCE_TYPE=e2-standard-8
export BOR_EXT_DISK_SIZE=1024
{POLYGON_NETWORK}' -m '${POLYGON_NODETYPE}' -s '${POLYGON_BOOTSTRAP_MODE}' -p '${POLYGON_RPC_PORT}' -e \""'${EXTRA_VAR}'\""; bash""'.sh | bash -s -- -n '$
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- The resource 'projects/polygon-346517/global/firewalls/polygon-p2p' already exists
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- The resource 'projects/polygon-346517/global/firewalls/polygon-rpc' already exists
ERROR: (gcloud.compute.instances.create) unrecognized arguments:
heimdall_branch=v0.2.8
network_version=mainnet-v1 (did you mean '--network-tier'?)
node_type=sentry/sentry
heimdall_network=mainnet\""; bash""
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS",1,trying deploy full node on google cloud but below errors welcome to cloud shell type help to get started your cloud platform project in this session is set to polygon use “gcloud config set project ” to change to a different project ramesh cloudshell polygon export polygon network mainnet export polygon nodetype fullnode export polygon bootstrap mode snapshot export polygon rpc port export gcp network tag polygon export extra var bor branch heimdall branch network version mainnet node type sentry sentry heimdall network polygon network gcloud compute firewall rules create polygon allow tcp tcp udp description polygon target tags gcp network tag gcloud compute firewall rules create polygon rpc allow tcp polygon rpc port description polygon rpc target tags gcp network tag export instance name polygon export instance type standard export bor ext disk size polygon network m polygon nodetype s polygon bootstrap mode p polygon rpc port e extra var bash sh bash s n creating firewall failed error gcloud compute firewall rules create could not fetch resource the resource projects polygon global firewalls polygon already exists creating firewall failed error gcloud compute firewall rules create could not fetch resource the resource projects polygon global firewalls polygon rpc already exists error gcloud compute instances create unrecognized arguments heimdall branch network version mainnet did you mean network tier node type sentry sentry heimdall network mainnet bash to search the help text of gcloud commands run gcloud help search terms,1
225516,7482163269.0,IssuesEvent,2018-04-04 23:42:06,ngxs/store,https://api.github.com/repos/ngxs/store,closed,Select Raw Value,domain:core priority:1 type:feature,"In some cases, you need the ability to select a raw value from the store.
In the use case below, we have a store that is backed by localstorage that contains the jwt token, we need the ability to get the RAW value from the token to pass in the request object.
```
@Injectable()
export class JWTInterceptor implements HttpInterceptor {
intercept(req: HttpRequest, next: HttpHandler): Observable> {
// NEED TO GET RAW TOKEN VALUE FROM STORE
req = req.clone({
setHeaders: {
Authorization: `Bearer ${token}`
}
});
return next.handle(req);
}
}
```
For this API, I'm thinking about having it only on the the store instance, so it might look like:
```
store.selectValue(v => v.token);
```
Open to suggestions for API though.",1.0,"Select Raw Value - In some cases, you need the ability to select a raw value from the store.
In the use case below, we have a store that is backed by localstorage that contains the jwt token, we need the ability to get the RAW value from the token to pass in the request object.
```
@Injectable()
export class JWTInterceptor implements HttpInterceptor {
intercept(req: HttpRequest, next: HttpHandler): Observable> {
// NEED TO GET RAW TOKEN VALUE FROM STORE
req = req.clone({
setHeaders: {
Authorization: `Bearer ${token}`
}
});
return next.handle(req);
}
}
```
For this API, I'm thinking about having it only on the the store instance, so it might look like:
```
store.selectValue(v => v.token);
```
Open to suggestions for API though.",0,select raw value in some cases you need the ability to select a raw value from the store in the use case below we have a store that is backed by localstorage that contains the jwt token we need the ability to get the raw value from the token to pass in the request object injectable export class jwtinterceptor implements httpinterceptor intercept req httprequest next httphandler observable need to get raw token value from store req req clone setheaders authorization bearer token return next handle req for this api i m thinking about having it only on the the store instance so it might look like store selectvalue v v token open to suggestions for api though ,0
3276,2832369702.0,IssuesEvent,2015-05-25 07:24:45,HGustavs/LenaSYS,https://api.github.com/repos/HGustavs/LenaSYS,closed,Preview option dissapeared after merge.,CodeViewer highPriority,"There is no longer a preview option to choose from in the ""Kind"".
The css for the preview also seems to be missing...",1.0,"Preview option dissapeared after merge. - There is no longer a preview option to choose from in the ""Kind"".
The css for the preview also seems to be missing...",0,preview option dissapeared after merge there is no longer a preview option to choose from in the kind the css for the preview also seems to be missing ,0
387611,11463550521.0,IssuesEvent,2020-02-07 16:13:08,storybookjs/storybook,https://api.github.com/repos/storybookjs/storybook,closed,Link to brandImage not working,bug has workaround high priority theming ui,"**Describe the bug**
During upgrade from 5.2.8 to 5.3.3 my brandImage stopped working.
I have a custom theme, using a static assets folder for the image
```
./storybook
- ./public
- logo.svg
```
Previously I had me theme defined like this:
```
import { create } from '@storybook/theming';
import logo from './public/logo.svg';
export default create({
base: 'light',
brandImage: logo,
brandTitle: 'Custom - Storybook'
});
```
After updating to 5.3.3 I've moved my theming to manager.js, like so
```
import { addons } from '@storybook/addons';
import { create } from '@storybook/theming/create';
import logo from './public/logo.svg';
const theme = create({
base: 'light',
brandImage: `/${logo}`,
brandTitle: 'Custom - Storybook'
});
addons.setConfig({
panelPosition: 'bottom',
theme
});
```
But the logo.svg does not show up when I start storybook using `start-storybook -p 6006 -s ./.storybook/public`.
If I however do a static build via `build-storybook -s ./.storybook/public`, the logo shows up correctly.
Webserver fetches the logo from `/media/static/logo.svg` in both cases. But it seems the local webserver started when starting storybook locally does not correctly allow fetching images from this folder.
**System:**
Environment Info:
System:
OS: macOS 10.15.2
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Binaries:
Node: 13.6.0 - ~/.nvm/versions/node/v13.6.0/bin/node
Yarn: 1.19.1 - /usr/local/bin/yarn
npm: 6.13.4 - ~/.nvm/versions/node/v13.6.0/bin/npm
Browsers:
Chrome: 79.0.3945.117
Safari: 13.0.4
npmPackages:
@storybook/addon-a11y: ^5.3.3 => 5.3.3
@storybook/addon-actions: ^5.3.3 => 5.3.3
@storybook/addon-docs: ^5.3.3 => 5.3.3
@storybook/addon-knobs: ^5.3.3 => 5.3.3
@storybook/addon-links: ^5.3.3 => 5.3.3
@storybook/addon-notes: ^5.3.3 => 5.3.3
@storybook/addon-viewport: ^5.3.3 => 5.3.3
@storybook/addons: ^5.3.3 => 5.3.3
@storybook/angular: ^5.3.3 => 5.3.3
",1.0,"Link to brandImage not working - **Describe the bug**
During upgrade from 5.2.8 to 5.3.3 my brandImage stopped working.
I have a custom theme, using a static assets folder for the image
```
./storybook
- ./public
- logo.svg
```
Previously I had me theme defined like this:
```
import { create } from '@storybook/theming';
import logo from './public/logo.svg';
export default create({
base: 'light',
brandImage: logo,
brandTitle: 'Custom - Storybook'
});
```
After updating to 5.3.3 I've moved my theming to manager.js, like so
```
import { addons } from '@storybook/addons';
import { create } from '@storybook/theming/create';
import logo from './public/logo.svg';
const theme = create({
base: 'light',
brandImage: `/${logo}`,
brandTitle: 'Custom - Storybook'
});
addons.setConfig({
panelPosition: 'bottom',
theme
});
```
But the logo.svg does not show up when I start storybook using `start-storybook -p 6006 -s ./.storybook/public`.
If I however do a static build via `build-storybook -s ./.storybook/public`, the logo shows up correctly.
Webserver fetches the logo from `/media/static/logo.svg` in both cases. But it seems the local webserver started when starting storybook locally does not correctly allow fetching images from this folder.
**System:**
Environment Info:
System:
OS: macOS 10.15.2
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Binaries:
Node: 13.6.0 - ~/.nvm/versions/node/v13.6.0/bin/node
Yarn: 1.19.1 - /usr/local/bin/yarn
npm: 6.13.4 - ~/.nvm/versions/node/v13.6.0/bin/npm
Browsers:
Chrome: 79.0.3945.117
Safari: 13.0.4
npmPackages:
@storybook/addon-a11y: ^5.3.3 => 5.3.3
@storybook/addon-actions: ^5.3.3 => 5.3.3
@storybook/addon-docs: ^5.3.3 => 5.3.3
@storybook/addon-knobs: ^5.3.3 => 5.3.3
@storybook/addon-links: ^5.3.3 => 5.3.3
@storybook/addon-notes: ^5.3.3 => 5.3.3
@storybook/addon-viewport: ^5.3.3 => 5.3.3
@storybook/addons: ^5.3.3 => 5.3.3
@storybook/angular: ^5.3.3 => 5.3.3
",0,link to brandimage not working describe the bug during upgrade from to my brandimage stopped working i have a custom theme using a static assets folder for the image storybook public logo svg previously i had me theme defined like this import create from storybook theming import logo from public logo svg export default create base light brandimage logo brandtitle custom storybook after updating to i ve moved my theming to manager js like so import addons from storybook addons import create from storybook theming create import logo from public logo svg const theme create base light brandimage logo brandtitle custom storybook addons setconfig panelposition bottom theme but the logo svg does not show up when i start storybook using start storybook p s storybook public if i however do a static build via build storybook s storybook public the logo shows up correctly webserver fetches the logo from media static logo svg in both cases but it seems the local webserver started when starting storybook locally does not correctly allow fetching images from this folder system environment info system os macos cpu intel r core tm cpu binaries node nvm versions node bin node yarn usr local bin yarn npm nvm versions node bin npm browsers chrome safari npmpackages storybook addon storybook addon actions storybook addon docs storybook addon knobs storybook addon links storybook addon notes storybook addon viewport storybook addons storybook angular ,0
19944,14766587323.0,IssuesEvent,2021-01-10 01:08:13,NCAR/VAPOR,https://api.github.com/repos/NCAR/VAPOR,reopened,Disagreement between TF widget and Colorbar,High Usability,"In this case, I created a slice renderer for `dbz` for Lee Orf's tornado dataset with the default TF and added a colorbar.

",True,"Disagreement between TF widget and Colorbar - In this case, I created a slice renderer for `dbz` for Lee Orf's tornado dataset with the default TF and added a colorbar.

",0,disagreement between tf widget and colorbar in this case i created a slice renderer for dbz for lee orf s tornado dataset with the default tf and added a colorbar ,0
2134,7333017996.0,IssuesEvent,2018-03-05 18:02:22,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,closed,Exceptions in catch blocks should be named 'ex',Area: analyzer Area: maintainability feature in progress,Exceptions that are caught and handled in catch clauses should be named `ex`.,True,Exceptions in catch blocks should be named 'ex' - Exceptions that are caught and handled in catch clauses should be named `ex`.,1,exceptions in catch blocks should be named ex exceptions that are caught and handled in catch clauses should be named ex ,1
1025,4819391596.0,IssuesEvent,2016-11-04 19:06:32,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,haproxy module failure after upgrade to ansible 2.2.0,affects_2.2 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- haproxy module
##### ANSIBLE VERSION
```
2.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu 16.04
##### SUMMARY
After upgrade to ansible 2.2.0, haprox module execution failed. But the weight has been changed though.
##### STEPS TO REPRODUCE
```yml
- name: ""Set {{ backend }}/{{ host }} Weight to {{ weight }}""
haproxy:
state: enabled
backend: ""{{ backend }}""
host: ""{{ host }}""
weight: ""{{ weight }}""
socket: ""{{ socket }}""
delegate_to: ""{{ item }}""
with_items: ""{{ groups.upay }}""
```
##### EXPECTED RESULTS
Weight changed and no failure message.
##### ACTUAL RESULTS
I've formated the json output message.
```json
""failed: [s1-payment-001 -> s1-upay-001] (item=s1-upay-001) =>""
{
""failed"": true,
""item"": ""s1-upay-001"",
""module_stderr"": ""Shared connection to s1-upay-001 closed.
"",
""module_stdout"": ""Traceback (most recent call last):
File \""/tmp/ansible_gz8hxE/ansible_module_haproxy.py\"", line 350, in
main()
File \""/tmp/ansible_gz8hxE/ansible_module_haproxy.py\"", line 345, in main
ansible_haproxy.act()
File \""/tmp/ansible_gz8hxE/ansible_module_haproxy.py\"", line 317, in act
self.module.exit_json(**self.command_results)
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1799, in exit_json
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in remove_values
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 399, in remove_values
TypeError: Value of unknown type: ,
"",
""msg"": ""MODULE FAILURE""
}
```
",True,"haproxy module failure after upgrade to ansible 2.2.0 - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- haproxy module
##### ANSIBLE VERSION
```
2.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Ubuntu 16.04
##### SUMMARY
After upgrade to ansible 2.2.0, haprox module execution failed. But the weight has been changed though.
##### STEPS TO REPRODUCE
```yml
- name: ""Set {{ backend }}/{{ host }} Weight to {{ weight }}""
haproxy:
state: enabled
backend: ""{{ backend }}""
host: ""{{ host }}""
weight: ""{{ weight }}""
socket: ""{{ socket }}""
delegate_to: ""{{ item }}""
with_items: ""{{ groups.upay }}""
```
##### EXPECTED RESULTS
Weight changed and no failure message.
##### ACTUAL RESULTS
I've formated the json output message.
```json
""failed: [s1-payment-001 -> s1-upay-001] (item=s1-upay-001) =>""
{
""failed"": true,
""item"": ""s1-upay-001"",
""module_stderr"": ""Shared connection to s1-upay-001 closed.
"",
""module_stdout"": ""Traceback (most recent call last):
File \""/tmp/ansible_gz8hxE/ansible_module_haproxy.py\"", line 350, in
main()
File \""/tmp/ansible_gz8hxE/ansible_module_haproxy.py\"", line 345, in main
ansible_haproxy.act()
File \""/tmp/ansible_gz8hxE/ansible_module_haproxy.py\"", line 317, in act
self.module.exit_json(**self.command_results)
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 1799, in exit_json
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in remove_values
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 388, in
File \""/tmp/ansible_gz8hxE/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 399, in remove_values
TypeError: Value of unknown type: ,
"",
""msg"": ""MODULE FAILURE""
}
```
",1,haproxy module failure after upgrade to ansible issue type bug report component name haproxy module ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary after upgrade to ansible haprox module execution failed but the weight has been changed though steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used yml name set backend host weight to weight haproxy state enabled backend backend host host weight weight socket socket delegate to item with items groups upay expected results weight changed and no failure message actual results i ve formated the json output message json failed item upay failed true item upay module stderr shared connection to upay closed module stdout traceback most recent call last file tmp ansible ansible module haproxy py line in main file tmp ansible ansible module haproxy py line in main ansible haproxy act file tmp ansible ansible module haproxy py line in act self module exit json self command results file tmp ansible ansible modlib zip ansible module utils basic py line in exit json file tmp ansible ansible modlib zip ansible module utils basic py line in remove values file tmp ansible ansible modlib zip ansible module utils basic py line in file tmp ansible ansible modlib zip ansible module utils basic py line in remove values typeerror value of unknown type msg module failure ,1
1397,6025336647.0,IssuesEvent,2017-06-08 08:26:04,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,apt: package marking,affects_2.3 feature_idea waiting_on_maintainer,"Please add options to apt module, which would make possible to arbitrary mark packages:
Considering https://github.com/ansible/ansible-modules-core/issues/19 that would be two boolean options:
auto: true/false, default: n/a, (suppose you want to make sure a list of installed packages is marked as manual)
hold: true/false, default: n/a (suppose you do not want to specify version, but still want to hold package)
",True,"apt: package marking - Please add options to apt module, which would make possible to arbitrary mark packages:
Considering https://github.com/ansible/ansible-modules-core/issues/19 that would be two boolean options:
auto: true/false, default: n/a, (suppose you want to make sure a list of installed packages is marked as manual)
hold: true/false, default: n/a (suppose you do not want to specify version, but still want to hold package)
",1,apt package marking please add options to apt module which would make possible to arbitrary mark packages considering that would be two boolean options auto true false default n a suppose you want to make sure a list of installed packages is marked as manual hold true false default n a suppose you do not want to specify version but still want to hold package ,1
332802,24350175508.0,IssuesEvent,2022-10-02 21:02:06,ovejerojose/Full_Stack,https://api.github.com/repos/ovejerojose/Full_Stack,reopened,#US09,documentation,"- Como: Personal administrativo
Quiero: Que en la confirmación de la reserva tanto al cliente como al personal administrativo se le brinde un número identificador o ID
",1.0,"#US09 - - Como: Personal administrativo
Quiero: Que en la confirmación de la reserva tanto al cliente como al personal administrativo se le brinde un número identificador o ID
",0, como personal administrativo quiero que en la confirmación de la reserva tanto al cliente como al personal administrativo se le brinde un número identificador o id ,0
54513,3068767551.0,IssuesEvent,2015-08-18 17:09:36,loklak/loklak_webclient,https://api.github.com/repos/loklak/loklak_webclient,closed,Implement a thought through choice of media items for wall,Feature Priority 1 - High Twitter Wall - Aneesh,"Currently there are options that do not make sense, e.g. the user can choose ""only images"", but can also choose ""show videos"". There are choices that exclude each other. Please implement the following functionality and UI.
* [x] add a section below the left hand side area ""What do you want to show on the wall?"" (as described here https://github.com/loklak/loklak_webclient/issues/330) with the title ""Which media do you want to show on the wall""
* [x] add button sliders for the following
* [x] Show images Yes - No - Only
* [x] Show video Yes - No - Only
* [ ] Show audio Yes - No - Only (Is this already an implemented option?)
* [x] If the user chooses at one point ""Only"", all the other options should change their color to grey and become unavailable",1.0,"Implement a thought through choice of media items for wall - Currently there are options that do not make sense, e.g. the user can choose ""only images"", but can also choose ""show videos"". There are choices that exclude each other. Please implement the following functionality and UI.
* [x] add a section below the left hand side area ""What do you want to show on the wall?"" (as described here https://github.com/loklak/loklak_webclient/issues/330) with the title ""Which media do you want to show on the wall""
* [x] add button sliders for the following
* [x] Show images Yes - No - Only
* [x] Show video Yes - No - Only
* [ ] Show audio Yes - No - Only (Is this already an implemented option?)
* [x] If the user chooses at one point ""Only"", all the other options should change their color to grey and become unavailable",0,implement a thought through choice of media items for wall currently there are options that do not make sense e g the user can choose only images but can also choose show videos there are choices that exclude each other please implement the following functionality and ui add a section below the left hand side area what do you want to show on the wall as described here with the title which media do you want to show on the wall add button sliders for the following show images yes no only show video yes no only show audio yes no only is this already an implemented option if the user chooses at one point only all the other options should change their color to grey and become unavailable,0
66140,20016489873.0,IssuesEvent,2022-02-01 12:37:00,primefaces/primefaces,https://api.github.com/repos/primefaces/primefaces,closed,DataTable: Dynamically rendered columns are not filterable/sortable,defect,"**Describe the defect**
In our application we have cases where columns are added/removed dynamically by switching the ""rendered""-attribute of each column. The column gets added but sorting and filtering is not working. This worked in previous versions of Primefaces. You can see the issue in the showcase ""DataTable - Dynamic Columns"" when adding a new column ""representative"". I have also added a reproducer which showcases this problem in smaller size.
**Reproducer**
[primefaces-test.zip](https://github.com/primefaces/primefaces/files/7977071/primefaces-test.zip)
https://www.primefaces.org/showcase/ui/data/datatable/columns.xhtml?jfwid=87ffd
**Environment:**
- PF Version: _11.0.0_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Start Reproducer
2. Test Sorting/Filtering functionality on the initial column -> works
3. Press Button ""Show Column""
4. Test Sorting/Filtering functionality on the 2nd column -> does not work
**Expected behavior**
Same behaviour as in Primefaces 10 (working).
**Further Info**
I am quite sure that the underlying problem is similar to #8159, that being, that the sortByAsMap/filterByAsMap values are not updated and therefore the newly added column having nothing to go by when sorting/filtering. I have opened a PR #8358 which I believe contains the fix to the problem.
#8159 was closed with the suggestion to reset the values of sortByAsMap/filterByAsMap to null, which **would also work** for this reproducer and our application. However, in my opinion this is might not be the best solution, since every application using this feature has to be migrated in order to work. If we decide to use this approach, we should leave a hint in the migration guide and update the showcase for PF11, where it is currently used.
The fix I was proposing in my PR adds two lines which were previously there in PF10 and got lost in transition to PF11. Adding those back fixes the problem. With that solution it wouldn't be necessary to update the showcase as well. But I am open for other ideas or arguments against my approach.
**Excerpt from my comment under #8159**
After some time debugging I noticed that the functions isColumnSortable() / isColumnFilterable() from UITable are always called from within the encodeColumnHeader function of the DataTableRenderer. In PF10 it was here, where the sortByAsMap property was newly set.
For some reason though a line was dropped from PF10 to PF11 which resets the value. Please see below:
Primefaces 10
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
SortMeta s = SortMeta.of(context, getVar(), column);
if (s == null) {
return false;
}
// unlikely to happen, in case columns change between two ajax requests
sortBy.put(s.getColumnKey(), s);
setSortByAsMap(sortBy);
return true;
}
Primefaces 11
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
// lazy init - happens in cases where the column is initially not rendered
SortMeta s = SortMeta.of(context, getVar(), column);
if (s != null) {
sortBy.put(s.getColumnKey(), s);
}
// setSortByAsMap(sortBy); is missing here
return s != null;
}
Although isColumnFilterable looks different to isColumnSortable in PF10, in PF11 they almost look identical. Adding a setFilterByAsMap(filterBy) at the same position as above fixed the filtering for me as well.
",1.0,"DataTable: Dynamically rendered columns are not filterable/sortable - **Describe the defect**
In our application we have cases where columns are added/removed dynamically by switching the ""rendered""-attribute of each column. The column gets added but sorting and filtering is not working. This worked in previous versions of Primefaces. You can see the issue in the showcase ""DataTable - Dynamic Columns"" when adding a new column ""representative"". I have also added a reproducer which showcases this problem in smaller size.
**Reproducer**
[primefaces-test.zip](https://github.com/primefaces/primefaces/files/7977071/primefaces-test.zip)
https://www.primefaces.org/showcase/ui/data/datatable/columns.xhtml?jfwid=87ffd
**Environment:**
- PF Version: _11.0.0_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Start Reproducer
2. Test Sorting/Filtering functionality on the initial column -> works
3. Press Button ""Show Column""
4. Test Sorting/Filtering functionality on the 2nd column -> does not work
**Expected behavior**
Same behaviour as in Primefaces 10 (working).
**Further Info**
I am quite sure that the underlying problem is similar to #8159, that being, that the sortByAsMap/filterByAsMap values are not updated and therefore the newly added column having nothing to go by when sorting/filtering. I have opened a PR #8358 which I believe contains the fix to the problem.
#8159 was closed with the suggestion to reset the values of sortByAsMap/filterByAsMap to null, which **would also work** for this reproducer and our application. However, in my opinion this is might not be the best solution, since every application using this feature has to be migrated in order to work. If we decide to use this approach, we should leave a hint in the migration guide and update the showcase for PF11, where it is currently used.
The fix I was proposing in my PR adds two lines which were previously there in PF10 and got lost in transition to PF11. Adding those back fixes the problem. With that solution it wouldn't be necessary to update the showcase as well. But I am open for other ideas or arguments against my approach.
**Excerpt from my comment under #8159**
After some time debugging I noticed that the functions isColumnSortable() / isColumnFilterable() from UITable are always called from within the encodeColumnHeader function of the DataTableRenderer. In PF10 it was here, where the sortByAsMap property was newly set.
For some reason though a line was dropped from PF10 to PF11 which resets the value. Please see below:
Primefaces 10
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
SortMeta s = SortMeta.of(context, getVar(), column);
if (s == null) {
return false;
}
// unlikely to happen, in case columns change between two ajax requests
sortBy.put(s.getColumnKey(), s);
setSortByAsMap(sortBy);
return true;
}
Primefaces 11
default boolean isColumnSortable(FacesContext context, UIColumn column) {
Map sortBy = getSortByAsMap();
if (sortBy.containsKey(column.getColumnKey())) {
return true;
}
// lazy init - happens in cases where the column is initially not rendered
SortMeta s = SortMeta.of(context, getVar(), column);
if (s != null) {
sortBy.put(s.getColumnKey(), s);
}
// setSortByAsMap(sortBy); is missing here
return s != null;
}
Although isColumnFilterable looks different to isColumnSortable in PF10, in PF11 they almost look identical. Adding a setFilterByAsMap(filterBy) at the same position as above fixed the filtering for me as well.
",0,datatable dynamically rendered columns are not filterable sortable describe the defect in our application we have cases where columns are added removed dynamically by switching the rendered attribute of each column the column gets added but sorting and filtering is not working this worked in previous versions of primefaces you can see the issue in the showcase datatable dynamic columns when adding a new column representative i have also added a reproducer which showcases this problem in smaller size reproducer environment pf version affected browsers all to reproduce steps to reproduce the behavior start reproducer test sorting filtering functionality on the initial column works press button show column test sorting filtering functionality on the column does not work expected behavior same behaviour as in primefaces working further info i am quite sure that the underlying problem is similar to that being that the sortbyasmap filterbyasmap values are not updated and therefore the newly added column having nothing to go by when sorting filtering i have opened a pr which i believe contains the fix to the problem was closed with the suggestion to reset the values of sortbyasmap filterbyasmap to null which would also work for this reproducer and our application however in my opinion this is might not be the best solution since every application using this feature has to be migrated in order to work if we decide to use this approach we should leave a hint in the migration guide and update the showcase for where it is currently used the fix i was proposing in my pr adds two lines which were previously there in and got lost in transition to adding those back fixes the problem with that solution it wouldn t be necessary to update the showcase as well but i am open for other ideas or arguments against my approach excerpt from my comment under after some time debugging i noticed that the functions iscolumnsortable iscolumnfilterable from uitable are always called from within the encodecolumnheader function of the datatablerenderer in it was here where the sortbyasmap property was newly set for some reason though a line was dropped from to which resets the value please see below primefaces default boolean iscolumnsortable facescontext context uicolumn column map sortby getsortbyasmap if sortby containskey column getcolumnkey return true sortmeta s sortmeta of context getvar column if s null return false unlikely to happen in case columns change between two ajax requests sortby put s getcolumnkey s setsortbyasmap sortby return true primefaces default boolean iscolumnsortable facescontext context uicolumn column map sortby getsortbyasmap if sortby containskey column getcolumnkey return true lazy init happens in cases where the column is initially not rendered sortmeta s sortmeta of context getvar column if s null sortby put s getcolumnkey s setsortbyasmap sortby is missing here return s null although iscolumnfilterable looks different to iscolumnsortable in in they almost look identical adding a setfilterbyasmap filterby at the same position as above fixed the filtering for me as well ,0
2875,10280800344.0,IssuesEvent,2019-08-26 06:45:56,KazDragon/terminalpp,https://api.github.com/repos/KazDragon/terminalpp,closed,Inconsistent use of std::size_t in string,Compatibility Maintainability,"terminalpp::string uses std::size_t for the return type of size(), for the size constructor and for the UDS functions. It then uses string::size_type for other functions.
It should be size_type for all of these.",True,"Inconsistent use of std::size_t in string - terminalpp::string uses std::size_t for the return type of size(), for the size constructor and for the UDS functions. It then uses string::size_type for other functions.
It should be size_type for all of these.",1,inconsistent use of std size t in string terminalpp string uses std size t for the return type of size for the size constructor and for the uds functions it then uses string size type for other functions it should be size type for all of these ,1
2051,6952510854.0,IssuesEvent,2017-12-06 17:41:37,OpenRefine/OpenRefine,https://api.github.com/repos/OpenRefine/OpenRefine,closed,Weblate unable to push new translations,maintainability,"Now that the master branch is protected, Weblate fails to push new translations. One possible solution would be to change the weblate user to administrator, but that's not really ideal… Any better ideas?
Also, it's currently failing to merge translation files. I will look into solving that.",True,"Weblate unable to push new translations - Now that the master branch is protected, Weblate fails to push new translations. One possible solution would be to change the weblate user to administrator, but that's not really ideal… Any better ideas?
Also, it's currently failing to merge translation files. I will look into solving that.",1,weblate unable to push new translations now that the master branch is protected weblate fails to push new translations one possible solution would be to change the weblate user to administrator but that s not really ideal… any better ideas also it s currently failing to merge translation files i will look into solving that ,1
617,4111174116.0,IssuesEvent,2016-06-07 04:10:24,Particular/ServiceControl,https://api.github.com/repos/Particular/ServiceControl,closed,SCMU instance action buttons shouldn't show dropshadow under tooltips,Tag: Installer Tag: Maintainer Prio Type: Bug,"SCMU instance action buttons shouldn't show dropshadow under tooltips:

CC // @distantcam @gbiellem ",True,"SCMU instance action buttons shouldn't show dropshadow under tooltips - SCMU instance action buttons shouldn't show dropshadow under tooltips:

CC // @distantcam @gbiellem ",1,scmu instance action buttons shouldn t show dropshadow under tooltips scmu instance action buttons shouldn t show dropshadow under tooltips cc distantcam gbiellem ,1
3169,12226756390.0,IssuesEvent,2020-05-03 12:24:34,gfleetwood/asteres,https://api.github.com/repos/gfleetwood/asteres,opened,Zac-HD/escape-from-automanual-testing (182345998),Python maintain,"https://github.com/Zac-HD/escape-from-automanual-testing
A three-hour tutorial on property-based testing with https://hypothesis.works",True,"Zac-HD/escape-from-automanual-testing (182345998) - https://github.com/Zac-HD/escape-from-automanual-testing
A three-hour tutorial on property-based testing with https://hypothesis.works",1,zac hd escape from automanual testing a three hour tutorial on property based testing with ,1
5936,6102990535.0,IssuesEvent,2017-06-20 17:43:31,brave/browser-laptop,https://api.github.com/repos/brave/browser-laptop,closed,noscript allowing selective sites once doesn't invalidate the exceptions,bug feature/shields info-needed security,"tested on master
1. disable scripts globally and go to https://jsfiddle.net/
2. click noscript icon. unselect all except jsfiddle.net and hit 'allow once'
3. close tab then open jsfiddle.net again
4. click noscript icon. it appears jsfiddle.net is still allowed.",True,"noscript allowing selective sites once doesn't invalidate the exceptions - tested on master
1. disable scripts globally and go to https://jsfiddle.net/
2. click noscript icon. unselect all except jsfiddle.net and hit 'allow once'
3. close tab then open jsfiddle.net again
4. click noscript icon. it appears jsfiddle.net is still allowed.",0,noscript allowing selective sites once doesn t invalidate the exceptions tested on master disable scripts globally and go to click noscript icon unselect all except jsfiddle net and hit allow once close tab then open jsfiddle net again click noscript icon it appears jsfiddle net is still allowed ,0
739,4347759681.0,IssuesEvent,2016-07-29 20:43:51,gogits/gogs,https://api.github.com/repos/gogits/gogs,closed,No more Linux i386 builds?,kind/deployment status/assigned to maintainer,"Since v0.9.46, there are no Linux i386 binaries anymore.
I couldn't find any explanation of why this support has been dropped.
I'm using it on an i386 Synology NAS, and I'd rather grab an official version than go through the cross-compilation process, if possible.
Any reason why these images were not generated?
I'm not asking for a build, really, just the reason of the drop.",True,"No more Linux i386 builds? - Since v0.9.46, there are no Linux i386 binaries anymore.
I couldn't find any explanation of why this support has been dropped.
I'm using it on an i386 Synology NAS, and I'd rather grab an official version than go through the cross-compilation process, if possible.
Any reason why these images were not generated?
I'm not asking for a build, really, just the reason of the drop.",1,no more linux builds since there are no linux binaries anymore i couldn t find any explanation of why this support has been dropped i m using it on an synology nas and i d rather grab an official version than go through the cross compilation process if possible any reason why these images were not generated i m not asking for a build really just the reason of the drop ,1
425484,12341042572.0,IssuesEvent,2020-05-14 21:05:36,huridocs/uwazi,https://api.github.com/repos/huridocs/uwazi,closed,Fix intermitent tests for CSV Export,Bug Priority: High Status: Sprint,"- [x] Use supertest to test res.download and not route return
- [x] Check validation (specially `search` which is currently an array)
- [x] Test properly the file unlink, this probably means mocking the file generation naming. Please ensure that the name can not be duplicated if tests are run in paralell
- [x] Test for correct passing of the user to the search function (perhaps @daneryl can help set this up quicker)
- [x] test that the exporter got called correctly with the desired params, even if its reply is mocked. A good approach is to tailor the reply according to the arguments passed, in that way you are testing, with the response, the arguments passed.
- [x] Look if there is merit to create a separate file for the `api/export/` route?
",1.0,"Fix intermitent tests for CSV Export - - [x] Use supertest to test res.download and not route return
- [x] Check validation (specially `search` which is currently an array)
- [x] Test properly the file unlink, this probably means mocking the file generation naming. Please ensure that the name can not be duplicated if tests are run in paralell
- [x] Test for correct passing of the user to the search function (perhaps @daneryl can help set this up quicker)
- [x] test that the exporter got called correctly with the desired params, even if its reply is mocked. A good approach is to tailor the reply according to the arguments passed, in that way you are testing, with the response, the arguments passed.
- [x] Look if there is merit to create a separate file for the `api/export/` route?
",0,fix intermitent tests for csv export use supertest to test res download and not route return check validation specially search which is currently an array test properly the file unlink this probably means mocking the file generation naming please ensure that the name can not be duplicated if tests are run in paralell test for correct passing of the user to the search function perhaps daneryl can help set this up quicker test that the exporter got called correctly with the desired params even if its reply is mocked a good approach is to tailor the reply according to the arguments passed in that way you are testing with the response the arguments passed look if there is merit to create a separate file for the api export route ,0
66361,16599367810.0,IssuesEvent,2021-06-01 17:10:12,elastic/elasticsearch,https://api.github.com/repos/elastic/elasticsearch,closed,Gradle plugins used by external users are separated from buildSrc project,:Delivery/Build Team:Delivery,"We have a number of plugins that are leveraged by external users to build and test their own Elasticsearch plugins. Right now these reside in `buildSrc` since we use them in our own build. In order to expose these plugins we effectively publish the `buildSrc` JAR by defining a `build-tools` project whose project directory points to `buildSrc`. This causes us all kinds of headaches:
1. We build `buildSrc` twice, once implicitly as the normal `buildSrc` gradle project and again as `:build-tools`.
2. The duplicate existance of the `buildSrc` project causes all kinds of issues with IDE integration and makes the use of Kotlin DSL impossible.
3. Since we publish everything in `buildSrc` we include all sorts of internal stuff not intended for use outside of the Elasticsearch project.
4. The intermingling of this logic makes testing the external builds of `buildSrc` in isolation very hard. To account for this we have `BuildParams.isInternalBuild()` all over the place.
The only way out of this mess is to _physically_ move the code we want to publish so it's completely isolated in a way where it can be built, packaged, published and tested on its own. That said, our build relies on this stuff, so we need to be able to _consume_ these core plugins in our own build as well. There's a little bit of a chicken/egg situation here but I think we can leverage composite builds to get around this. As part of this work we'll want to ensure a few things:
1. Improve documentation for plugin authors. Essentially [this](https://www.elastic.co/guide/en/elasticsearch/plugins/7.12/plugin-authors.html) is all that exists. For the most part we rely on our [examples](https://github.com/elastic/elasticsearch/tree/master/plugins/examples) to act as documentation.
2. Start testing the example plugins properly again.
3. Provide some guidance/docs on testing. We want folks to use the test clusters plugin if they want, but we should give folks _some_ idea of how to actually author REST tests.",1.0,"Gradle plugins used by external users are separated from buildSrc project - We have a number of plugins that are leveraged by external users to build and test their own Elasticsearch plugins. Right now these reside in `buildSrc` since we use them in our own build. In order to expose these plugins we effectively publish the `buildSrc` JAR by defining a `build-tools` project whose project directory points to `buildSrc`. This causes us all kinds of headaches:
1. We build `buildSrc` twice, once implicitly as the normal `buildSrc` gradle project and again as `:build-tools`.
2. The duplicate existance of the `buildSrc` project causes all kinds of issues with IDE integration and makes the use of Kotlin DSL impossible.
3. Since we publish everything in `buildSrc` we include all sorts of internal stuff not intended for use outside of the Elasticsearch project.
4. The intermingling of this logic makes testing the external builds of `buildSrc` in isolation very hard. To account for this we have `BuildParams.isInternalBuild()` all over the place.
The only way out of this mess is to _physically_ move the code we want to publish so it's completely isolated in a way where it can be built, packaged, published and tested on its own. That said, our build relies on this stuff, so we need to be able to _consume_ these core plugins in our own build as well. There's a little bit of a chicken/egg situation here but I think we can leverage composite builds to get around this. As part of this work we'll want to ensure a few things:
1. Improve documentation for plugin authors. Essentially [this](https://www.elastic.co/guide/en/elasticsearch/plugins/7.12/plugin-authors.html) is all that exists. For the most part we rely on our [examples](https://github.com/elastic/elasticsearch/tree/master/plugins/examples) to act as documentation.
2. Start testing the example plugins properly again.
3. Provide some guidance/docs on testing. We want folks to use the test clusters plugin if they want, but we should give folks _some_ idea of how to actually author REST tests.",0,gradle plugins used by external users are separated from buildsrc project we have a number of plugins that are leveraged by external users to build and test their own elasticsearch plugins right now these reside in buildsrc since we use them in our own build in order to expose these plugins we effectively publish the buildsrc jar by defining a build tools project whose project directory points to buildsrc this causes us all kinds of headaches we build buildsrc twice once implicitly as the normal buildsrc gradle project and again as build tools the duplicate existance of the buildsrc project causes all kinds of issues with ide integration and makes the use of kotlin dsl impossible since we publish everything in buildsrc we include all sorts of internal stuff not intended for use outside of the elasticsearch project the intermingling of this logic makes testing the external builds of buildsrc in isolation very hard to account for this we have buildparams isinternalbuild all over the place the only way out of this mess is to physically move the code we want to publish so it s completely isolated in a way where it can be built packaged published and tested on its own that said our build relies on this stuff so we need to be able to consume these core plugins in our own build as well there s a little bit of a chicken egg situation here but i think we can leverage composite builds to get around this as part of this work we ll want to ensure a few things improve documentation for plugin authors essentially is all that exists for the most part we rely on our to act as documentation start testing the example plugins properly again provide some guidance docs on testing we want folks to use the test clusters plugin if they want but we should give folks some idea of how to actually author rest tests ,0
5363,26982805397.0,IssuesEvent,2023-02-09 14:16:07,precice/precice,https://api.github.com/repos/precice/precice,reopened,Discussion: Restructuring the Integration Tests,maintainability,"As our codebase and the application areas getting bigger, our integration tests are also getting bigger, which is really nice. However, currently, it is getting dirtier and dirtier both in terms of configuration files and the longer `SerialTests.cpp` and `ParallelTests.cpp` files.
**Describe the solution you propose.**
As the first step, I suggest to move the integration tests into a separate folder, such as `src/precice/tests/config-files`. While doing so, we can fix a naming scheme for the configuration files as well. In addition, we can even remove duplicated configuration files and group them into seperate subfolders even.
As the second step, which would be more involved, we can try to seperate the integration tests into seperate files under the same directory. It would be not as trivial as the unit tests due to the nature of the integration tests but it should be doable.
**Describe alternatives you've considered**
Do not touch at all, everyone already hates implementing tests.
**Additional context**
We can check other projects about how they are approaching this topic.
",True,"Discussion: Restructuring the Integration Tests - As our codebase and the application areas getting bigger, our integration tests are also getting bigger, which is really nice. However, currently, it is getting dirtier and dirtier both in terms of configuration files and the longer `SerialTests.cpp` and `ParallelTests.cpp` files.
**Describe the solution you propose.**
As the first step, I suggest to move the integration tests into a separate folder, such as `src/precice/tests/config-files`. While doing so, we can fix a naming scheme for the configuration files as well. In addition, we can even remove duplicated configuration files and group them into seperate subfolders even.
As the second step, which would be more involved, we can try to seperate the integration tests into seperate files under the same directory. It would be not as trivial as the unit tests due to the nature of the integration tests but it should be doable.
**Describe alternatives you've considered**
Do not touch at all, everyone already hates implementing tests.
**Additional context**
We can check other projects about how they are approaching this topic.
",1,discussion restructuring the integration tests as our codebase and the application areas getting bigger our integration tests are also getting bigger which is really nice however currently it is getting dirtier and dirtier both in terms of configuration files and the longer serialtests cpp and paralleltests cpp files describe the solution you propose as the first step i suggest to move the integration tests into a separate folder such as src precice tests config files while doing so we can fix a naming scheme for the configuration files as well in addition we can even remove duplicated configuration files and group them into seperate subfolders even as the second step which would be more involved we can try to seperate the integration tests into seperate files under the same directory it would be not as trivial as the unit tests due to the nature of the integration tests but it should be doable describe alternatives you ve considered do not touch at all everyone already hates implementing tests additional context we can check other projects about how they are approaching this topic ,1
3285,12541383596.0,IssuesEvent,2020-06-05 12:14:47,laminas/laminas-servicemanager,https://api.github.com/repos/laminas/laminas-servicemanager,closed,remove redundant isset(s),Awaiting Maintainer Response Enhancement,"when we are also testing with `! empty()` I believe we can skip `isset()` checks, as the execution speed is nearly the same (but it will double for positive isset)
---
Originally posted by @pine3ree at https://github.com/zendframework/zend-servicemanager/pull/262",True,"remove redundant isset(s) - when we are also testing with `! empty()` I believe we can skip `isset()` checks, as the execution speed is nearly the same (but it will double for positive isset)
---
Originally posted by @pine3ree at https://github.com/zendframework/zend-servicemanager/pull/262",1,remove redundant isset s when we are also testing with empty i believe we can skip isset checks as the execution speed is nearly the same but it will double for positive isset originally posted by at ,1
58120,16342528793.0,IssuesEvent,2021-05-13 00:31:03,darshan-hpc/darshan,https://api.github.com/repos/darshan-hpc/darshan,closed,MPIIO_F_WRITE_START_TIMESTAMP counter incorrect,defect,"In GitLab by @carns on Oct 5, 2016, 09:08
Reported by William Yoo. This counter doesn't match the POSIX level write start timestamp or system-level instrumentation for a VPIC benchmark that does collective writes via HDF5.",1.0,"MPIIO_F_WRITE_START_TIMESTAMP counter incorrect - In GitLab by @carns on Oct 5, 2016, 09:08
Reported by William Yoo. This counter doesn't match the POSIX level write start timestamp or system-level instrumentation for a VPIC benchmark that does collective writes via HDF5.",0,mpiio f write start timestamp counter incorrect in gitlab by carns on oct reported by william yoo this counter doesn t match the posix level write start timestamp or system level instrumentation for a vpic benchmark that does collective writes via ,0
2236,7875840510.0,IssuesEvent,2018-06-25 21:52:00,react-navigation/react-navigation,https://api.github.com/repos/react-navigation/react-navigation,closed,Constructor called twice when navigate with both routeName and key specified,needs response from maintainer,"
### Current Behavior
the constructor called twice when nav.
**i tried specify routeName only, and it works well.**
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.1
| react-native | 0.52.0
| node | 10.0.0
| npm or yarn | 5.6.0
### my route stack construct
```js
rootStack = createSwitchNavigator({
authStack,
mainStack
})
mainStack = createStackNavigator({
tabStack,
...others
})
tabStack = createBottomTabNavigator({
home,
user
})
home = createStackNavigator({
homeView
})
user = createStackNavigator({
userView
})
```
- HomeView
```jsx
...
navigation.navigate({
routeName: 'UserView',
key: 'UserView',
params: { initialPage: 2 },
});
...
```
- UserView
```jsx
...
constructor(props) {
super(props);
console.log('------>I\'m constructor');
}
...
```
there is a button in homeView, when it clicked ,i want to nav from homeView to userView with some params.",True,"Constructor called twice when navigate with both routeName and key specified -
### Current Behavior
the constructor called twice when nav.
**i tried specify routeName only, and it works well.**
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.1
| react-native | 0.52.0
| node | 10.0.0
| npm or yarn | 5.6.0
### my route stack construct
```js
rootStack = createSwitchNavigator({
authStack,
mainStack
})
mainStack = createStackNavigator({
tabStack,
...others
})
tabStack = createBottomTabNavigator({
home,
user
})
home = createStackNavigator({
homeView
})
user = createStackNavigator({
userView
})
```
- HomeView
```jsx
...
navigation.navigate({
routeName: 'UserView',
key: 'UserView',
params: { initialPage: 2 },
});
...
```
- UserView
```jsx
...
constructor(props) {
super(props);
console.log('------>I\'m constructor');
}
...
```
there is a button in homeView, when it clicked ,i want to nav from homeView to userView with some params.",1,constructor called twice when navigate with both routename and key specified current behavior the constructor called twice when nav i tried specify routename only and it works well your environment software version react navigation react native node npm or yarn my route stack construct js rootstack createswitchnavigator authstack mainstack mainstack createstacknavigator tabstack others tabstack createbottomtabnavigator home user home createstacknavigator homeview user createstacknavigator userview homeview jsx navigation navigate routename userview key userview params initialpage userview jsx constructor props super props console log i m constructor there is a button in homeview when it clicked i want to nav from homeview to userview with some params ,1
1619,6572644493.0,IssuesEvent,2017-09-11 04:01:41,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,filesystem does not support multiple devices (on btrfs f.e.),affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`filesystem` module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /home/pasha/Projects/Ansible.cfg/ansible.cfg
configured module search path = ['modules/']
```
##### CONFIGURATION
Does not have sence
##### OS / ENVIRONMENT
Fedora repos
##### SUMMARY
Task:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/mapper/centos-home' opts='--label srv'
```
work as expected, but:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/sda3 /dev/sdb' opts='-d single --label srv'
```
Produce error: **Device /dev/sda3 /dev/sdb not found.**
##### EXPECTED RESULTS
`Btrfs` (and some other like `zfs` too) allow create filesystems across multiple devices - https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
So, it seams reasonable make `dev` parameter the list type or just allow pass any string.
",True,"filesystem does not support multiple devices (on btrfs f.e.) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`filesystem` module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /home/pasha/Projects/Ansible.cfg/ansible.cfg
configured module search path = ['modules/']
```
##### CONFIGURATION
Does not have sence
##### OS / ENVIRONMENT
Fedora repos
##### SUMMARY
Task:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/mapper/centos-home' opts='--label srv'
```
work as expected, but:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/sda3 /dev/sdb' opts='-d single --label srv'
```
Produce error: **Device /dev/sda3 /dev/sdb not found.**
##### EXPECTED RESULTS
`Btrfs` (and some other like `zfs` too) allow create filesystems across multiple devices - https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
So, it seams reasonable make `dev` parameter the list type or just allow pass any string.
",1,filesystem does not support multiple devices on btrfs f e issue type bug report component name filesystem module ansible version ansible config file home pasha projects ansible cfg ansible cfg configured module search path configuration does not have sence os environment fedora repos summary task name create btrfs filesystem filesystem fstype btrfs dev dev mapper centos home opts label srv work as expected but name create btrfs filesystem filesystem fstype btrfs dev dev dev sdb opts d single label srv produce error device dev dev sdb not found expected results btrfs and some other like zfs too allow create filesystems across multiple devices so it seams reasonable make dev parameter the list type or just allow pass any string ,1
3631,14680375590.0,IssuesEvent,2020-12-31 09:53:09,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,closed,Log statements should be preceded and followed by a blank line,Area: analyzer Area: maintainability feature,"A call to a log method should be followed by a blank line if the following line contains a call to something that is no log method.
The reason is ease of reading.
Following should report a violation:
```c#
Log.Debug(""Initializing"");
var x = 42;
var y = ""something"";
var z = Guid.NewGuid();
```
While following should **not** report a violation:
```c#
Log.Debug(""Initializing"");
var x = 42;
var y = ""something"";
var z = Guid.NewGuid();
```",True,"Log statements should be preceded and followed by a blank line - A call to a log method should be followed by a blank line if the following line contains a call to something that is no log method.
The reason is ease of reading.
Following should report a violation:
```c#
Log.Debug(""Initializing"");
var x = 42;
var y = ""something"";
var z = Guid.NewGuid();
```
While following should **not** report a violation:
```c#
Log.Debug(""Initializing"");
var x = 42;
var y = ""something"";
var z = Guid.NewGuid();
```",1,log statements should be preceded and followed by a blank line a call to a log method should be followed by a blank line if the following line contains a call to something that is no log method the reason is ease of reading following should report a violation c log debug initializing var x var y something var z guid newguid while following should not report a violation c log debug initializing var x var y something var z guid newguid ,1
1032,4827588341.0,IssuesEvent,2016-11-07 14:05:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cloudformation module fails when state:absent and stack does not exist,affects_2.0 aws bug_report cloud waiting_on_maintainer,"##### Issue Type:
- Bug Report
##### Plugin Name:
cloudformation
##### Ansible Version:
```
$ ansible --version
ansible 2.0.1.0
config file = /Users/dcarr/.ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
None
##### Environment:
N/A; Mac OS X 10.10.5
##### Summary:
I have a playbook that deletes a CloudFormation stack. If I run it when the stack is already absent, I expect it to succeed without error, noting that no changes were needed. What I actually see is that it fails with an error message:
```
fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Stack with id STACKNAME does not exist""}
```
##### Steps To Reproduce:
```
---
- name: delete stack play
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: delete stack task
cloudformation:
stack_name: ""STACKNAME""
state: ""absent""
region: ""us-east-1""
```
##### Expected Results:
Success with no changes
##### Actual Results:
```
$ ansible-playbook bug.yaml -vvvv
Using /Users/dcarr/.ansible.cfg as config file
Loaded callback default of type stdout, v2.0
1 plays in bug.yaml
PLAY [delete stack play] *******************************************************
TASK [delete stack task] *******************************************************
task path: /private/var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/bug.s12v4I2i/bug.yaml:7
ESTABLISH LOCAL CONNECTION FOR USER: dcarr
127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `"" )'
127.0.0.1 PUT /var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/tmp8h9eEU TO /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation
127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation; rm -rf ""/Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/"" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""disable_rollback"": false, ""ec2_url"": null, ""notification_arns"": null, ""profile"": null, ""region"": ""us-east-1"", ""security_token"": null, ""stack_name"": ""STACKNAME"", ""stack_policy"": null, ""state"": ""absent"", ""tags"": null, ""template"": null, ""template_format"": ""json"", ""template_parameters"": {}, ""template_url"": null, ""validate_certs"": true}, ""module_name"": ""cloudformation""}, ""msg"": ""Stack with id STACKNAME does not exist""}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @bug.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
",True,"cloudformation module fails when state:absent and stack does not exist - ##### Issue Type:
- Bug Report
##### Plugin Name:
cloudformation
##### Ansible Version:
```
$ ansible --version
ansible 2.0.1.0
config file = /Users/dcarr/.ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
None
##### Environment:
N/A; Mac OS X 10.10.5
##### Summary:
I have a playbook that deletes a CloudFormation stack. If I run it when the stack is already absent, I expect it to succeed without error, noting that no changes were needed. What I actually see is that it fails with an error message:
```
fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""Stack with id STACKNAME does not exist""}
```
##### Steps To Reproduce:
```
---
- name: delete stack play
hosts: localhost
connection: local
gather_facts: false
tasks:
- name: delete stack task
cloudformation:
stack_name: ""STACKNAME""
state: ""absent""
region: ""us-east-1""
```
##### Expected Results:
Success with no changes
##### Actual Results:
```
$ ansible-playbook bug.yaml -vvvv
Using /Users/dcarr/.ansible.cfg as config file
Loaded callback default of type stdout, v2.0
1 plays in bug.yaml
PLAY [delete stack play] *******************************************************
TASK [delete stack task] *******************************************************
task path: /private/var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/bug.s12v4I2i/bug.yaml:7
ESTABLISH LOCAL CONNECTION FOR USER: dcarr
127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760 `"" )'
127.0.0.1 PUT /var/folders/2c/qd7lcfcs5tsctw7tmvyd61v00000gn/T/tmp8h9eEU TO /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation
127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/cloudformation; rm -rf ""/Users/dcarr/.ansible/tmp/ansible-tmp-1458161277.5-250244469102760/"" > /dev/null 2>&1'
fatal: [localhost]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""aws_access_key"": null, ""aws_secret_key"": null, ""disable_rollback"": false, ""ec2_url"": null, ""notification_arns"": null, ""profile"": null, ""region"": ""us-east-1"", ""security_token"": null, ""stack_name"": ""STACKNAME"", ""stack_policy"": null, ""state"": ""absent"", ""tags"": null, ""template"": null, ""template_format"": ""json"", ""template_parameters"": {}, ""template_url"": null, ""validate_certs"": true}, ""module_name"": ""cloudformation""}, ""msg"": ""Stack with id STACKNAME does not exist""}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @bug.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
",1,cloudformation module fails when state absent and stack does not exist issue type bug report plugin name cloudformation ansible version ansible version ansible config file users dcarr ansible cfg configured module search path default w o overrides ansible configuration none environment n a mac os x summary i have a playbook that deletes a cloudformation stack if i run it when the stack is already absent i expect it to succeed without error noting that no changes were needed what i actually see is that it fails with an error message fatal failed changed false failed true msg stack with id stackname does not exist steps to reproduce name delete stack play hosts localhost connection local gather facts false tasks name delete stack task cloudformation stack name stackname state absent region us east expected results success with no changes actual results ansible playbook bug yaml vvvv using users dcarr ansible cfg as config file loaded callback default of type stdout plays in bug yaml play task task path private var folders t bug bug yaml establish local connection for user dcarr exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders t to users dcarr ansible tmp ansible tmp cloudformation exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users dcarr ansible tmp ansible tmp cloudformation rm rf users dcarr ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module args aws access key null aws secret key null disable rollback false url null notification arns null profile null region us east security token null stack name stackname stack policy null state absent tags null template null template format json template parameters template url null validate certs true module name cloudformation msg stack with id stackname does not exist no more hosts left to retry use limit bug retry play recap localhost ok changed unreachable failed ,1
2045,6894652324.0,IssuesEvent,2017-11-23 10:47:46,dgets/DANT2a,https://api.github.com/repos/dgets/DANT2a,closed,Switch to console window debugging or learn debugger,enhancement maintainability,"Things are getting a little too complicated for a `MessageBox` to take care of it easily. In lieu of not being able to handle unit testing or TDD just yet, we should really open up a console window for debugging stats here. Unless, of course, learning watch points & variable watches in the debugger doesn't prove to be too terrible.",True,"Switch to console window debugging or learn debugger - Things are getting a little too complicated for a `MessageBox` to take care of it easily. In lieu of not being able to handle unit testing or TDD just yet, we should really open up a console window for debugging stats here. Unless, of course, learning watch points & variable watches in the debugger doesn't prove to be too terrible.",1,switch to console window debugging or learn debugger things are getting a little too complicated for a messagebox to take care of it easily in lieu of not being able to handle unit testing or tdd just yet we should really open up a console window for debugging stats here unless of course learning watch points variable watches in the debugger doesn t prove to be too terrible ,1
1579,6572341810.0,IssuesEvent,2017-09-11 01:32:55,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,AWS ec2_vpc_route_table.py Unable to append a route to an existing route table.,affects_2.2 aws cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_vpc_route_table.py
##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
There is no way to modify an existing AWS route table without building it completely form scratch, the only options are Create or Delete. This feature has been implemented in a different fork below, could this be implemented or is there a reason why this would not be acceptable?
https://github.com/preo/ansible-modules-core/pull/2/files
##### STEPS TO REPRODUCE
",True,"AWS ec2_vpc_route_table.py Unable to append a route to an existing route table. - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_vpc_route_table.py
##### ANSIBLE VERSION
```
ansible --version
ansible 2.2.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
There is no way to modify an existing AWS route table without building it completely form scratch, the only options are Create or Delete. This feature has been implemented in a different fork below, could this be implemented or is there a reason why this would not be acceptable?
https://github.com/preo/ansible-modules-core/pull/2/files
##### STEPS TO REPRODUCE
",1,aws vpc route table py unable to append a route to an existing route table issue type feature idea component name vpc route table py ansible version ansible version ansible configuration aws module os environment aws summary there is no way to modify an existing aws route table without building it completely form scratch the only options are create or delete this feature has been implemented in a different fork below could this be implemented or is there a reason why this would not be acceptable steps to reproduce n a ,1
169240,13131095540.0,IssuesEvent,2020-08-06 16:24:26,HEPCloud/decisionengine,https://api.github.com/repos/HEPCloud/decisionengine,closed,log_level issues in new decisionengine 1.2.0-1,fixed_in_rc prj_testing,"I am trying to test the new functionality which was implemented for issue !84.
The logger section of the /etc/decisionengine/decision_engine.conf is below
'logger' : {'log_file': '/var/log/decisionengine/decision_engine_log',
'max_file_size': 200*1000000,
'max_backup_count': 6,
'log_level': ""DEBUG"",
},
But although I am running seven channels I do not see any DEBUG entries in any of the logs.
Is any further configuration necessary?
Also what is the syntax to set the log level on a channel by channel basis.
This is set up on fermicloud117.fnal.gov right now, I can give root login if needed.
Steve Timm
",1.0,"log_level issues in new decisionengine 1.2.0-1 - I am trying to test the new functionality which was implemented for issue !84.
The logger section of the /etc/decisionengine/decision_engine.conf is below
'logger' : {'log_file': '/var/log/decisionengine/decision_engine_log',
'max_file_size': 200*1000000,
'max_backup_count': 6,
'log_level': ""DEBUG"",
},
But although I am running seven channels I do not see any DEBUG entries in any of the logs.
Is any further configuration necessary?
Also what is the syntax to set the log level on a channel by channel basis.
This is set up on fermicloud117.fnal.gov right now, I can give root login if needed.
Steve Timm
",0,log level issues in new decisionengine i am trying to test the new functionality which was implemented for issue the logger section of the etc decisionengine decision engine conf is below logger log file var log decisionengine decision engine log max file size max backup count log level debug but although i am running seven channels i do not see any debug entries in any of the logs is any further configuration necessary also what is the syntax to set the log level on a channel by channel basis this is set up on fnal gov right now i can give root login if needed steve timm ,0
2751,9828363468.0,IssuesEvent,2019-06-15 10:51:25,chocolatey-community/chocolatey-package-requests,https://api.github.com/repos/chocolatey-community/chocolatey-package-requests,closed,RFM - Scilab,Status: Available For Maintainer(s),"
## I DON'T Want To Become The Maintainer
- [x] I have followed the Package Triage Process and I do NOT want to become maintainer of the package;
- [x] There is no existing open maintainer request for this package;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/SciLab
Package source URL: https://www.scilab.org/download/6.0.2
Date the maintainer was contacted: two months ago
How the maintainer was contacted: via [this ](http://disq.us/p/20r3btn) Disqus comment and [this ](https://github.com/dtgm/chocolatey-packages/issues/452) GitHub issue
## Other:
According to [this comment](http://disq.us/p/229jlbn), this formula should be auto-updatable. However, it doesn't seem to be the case. ",True,"RFM - Scilab -
## I DON'T Want To Become The Maintainer
- [x] I have followed the Package Triage Process and I do NOT want to become maintainer of the package;
- [x] There is no existing open maintainer request for this package;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/SciLab
Package source URL: https://www.scilab.org/download/6.0.2
Date the maintainer was contacted: two months ago
How the maintainer was contacted: via [this ](http://disq.us/p/20r3btn) Disqus comment and [this ](https://github.com/dtgm/chocolatey-packages/issues/452) GitHub issue
## Other:
According to [this comment](http://disq.us/p/229jlbn), this formula should be auto-updatable. However, it doesn't seem to be the case. ",1,rfm scilab if you want to request a new maintainer for a package that you do not maintain please ensure you have followed the package triage process specifically you have contacted the maintainer using the contact maintainer link on the package page if you have followed the package triage process above and want to request to become the maintainer of a package that you do not maintain please go to the package page and click the contact site admins link and complete the details if you have followed the package triage process above and do not want to request to become the maintainer of a package that you do not maintain please continue please ensure the issue title starts with rfm for example rfm adobe reader please ensure you have the package url from before continuing note keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the chocolatey community please remove all comments once you have read them current maintainer i am the maintainer of the package and wish to pass it to someone else i don t want to become the maintainer i have followed the package triage process and i do not want to become maintainer of the package there is no existing open maintainer request for this package checklist issue title starts with rfm existing package details package url package source url date the maintainer was contacted two months ago how the maintainer was contacted via disqus comment and github issue other according to this formula should be auto updatable however it doesn t seem to be the case ,1
435870,12542463196.0,IssuesEvent,2020-06-05 14:05:13,jenkins-x/jx,https://api.github.com/repos/jenkins-x/jx,closed,pre installed builders not found,area/jenkins kind/bug lifecycle/rotten priority/important-longterm,"### Summary
pre installed builders not found
### Steps to reproduce the behavior
### Expected behavior
cannot find maven-nodejs
### Actual behavior
use maven-nodejs to build
```
...
java.io.IOException: container [maven-nodejs] does not exist in pod [maven-nodejs-1hhf7]
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.waitUntilPodContainersAreReady(ContainerExecDecorator.java:479)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:275)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:269)
at hudson.Launcher$ProcStarter.start(Launcher.java:455)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:194)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:317)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:286)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:179)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
at sun.reflect.GeneratedMethodAccessor4217.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:160)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:157)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:158)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:162)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:132)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:132)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
at WorkflowScript.run(WorkflowScript:36)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:84)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
at sun.reflect.GeneratedMethodAccessor267.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:186)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:370)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:93)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:282)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:270)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:66)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
```
### Jx version
2.0.1094
The output of `jx version` is:
```
NAME VERSION
jx 2.0.1094
jenkins x platform 2.0.1599
Kubernetes cluster v1.15.5
kubectl v1.15.5
helm client Client: v2.14.3+g0e7f3b6
git 2.24.0
Operating System Ubuntu 16.04.6 LTS
```
```
pipeline {
agent {
label ""jenkins-maven-nodejs""
}
environment {
ORG = 'xxx'
APP_NAME = 'xxx-api'
CHARTMUSEUM_CREDS = credentials('jenkins-x-chartmuseum')
DOCKER_REGISTRY_ORG = 'xxx'
}
stage('Build QA') {
when {
branch 'develop'
}
steps {
container('maven-nodejs') {
// ensure we're not on a detached head
sh ""git checkout develop""
sh ""git config --global credential.helper store""
sh ""jx step git credentials""
sh ""echo Path:""
sh ""pwd""
sh ""mvn clean deploy""
}
}
}
}
post {
always {
cleanWs()
}
}
}
```
### Jenkins type
- [ ] Serverless Jenkins X Pipelines (Tekton + Prow)
- [x] Classic Jenkins
### Kubernetes cluster
on premises k8s cluster
### Operating system / Environment
Ubuntu 16.04.6 LTS
",1.0,"pre installed builders not found - ### Summary
pre installed builders not found
### Steps to reproduce the behavior
### Expected behavior
cannot find maven-nodejs
### Actual behavior
use maven-nodejs to build
```
...
java.io.IOException: container [maven-nodejs] does not exist in pod [maven-nodejs-1hhf7]
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.waitUntilPodContainersAreReady(ContainerExecDecorator.java:479)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:275)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:269)
at hudson.Launcher$ProcStarter.start(Launcher.java:455)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:194)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:317)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:286)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:179)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
at sun.reflect.GeneratedMethodAccessor4217.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:160)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:157)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:158)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:162)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:132)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:132)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
at WorkflowScript.run(WorkflowScript:36)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:84)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
at sun.reflect.GeneratedMethodAccessor267.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:186)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:370)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:93)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:282)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:270)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:66)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
```
### Jx version
2.0.1094
The output of `jx version` is:
```
NAME VERSION
jx 2.0.1094
jenkins x platform 2.0.1599
Kubernetes cluster v1.15.5
kubectl v1.15.5
helm client Client: v2.14.3+g0e7f3b6
git 2.24.0
Operating System Ubuntu 16.04.6 LTS
```
```
pipeline {
agent {
label ""jenkins-maven-nodejs""
}
environment {
ORG = 'xxx'
APP_NAME = 'xxx-api'
CHARTMUSEUM_CREDS = credentials('jenkins-x-chartmuseum')
DOCKER_REGISTRY_ORG = 'xxx'
}
stage('Build QA') {
when {
branch 'develop'
}
steps {
container('maven-nodejs') {
// ensure we're not on a detached head
sh ""git checkout develop""
sh ""git config --global credential.helper store""
sh ""jx step git credentials""
sh ""echo Path:""
sh ""pwd""
sh ""mvn clean deploy""
}
}
}
}
post {
always {
cleanWs()
}
}
}
```
### Jenkins type
- [ ] Serverless Jenkins X Pipelines (Tekton + Prow)
- [x] Classic Jenkins
### Kubernetes cluster
on premises k8s cluster
### Operating system / Environment
Ubuntu 16.04.6 LTS
",0,pre installed builders not found summary pre installed builders not found steps to reproduce the behavior expected behavior cannot find maven nodejs actual behavior use maven nodejs to build java io ioexception container does not exist in pod at org csanchez jenkins plugins kubernetes pipeline containerexecdecorator waituntilpodcontainersareready containerexecdecorator java at org csanchez jenkins plugins kubernetes pipeline containerexecdecorator dolaunch containerexecdecorator java at org csanchez jenkins plugins kubernetes pipeline containerexecdecorator launch containerexecdecorator java at hudson launcher procstarter start launcher java at org jenkinsci plugins durabletask bourneshellscript launchwithcookie bourneshellscript java at org jenkinsci plugins durabletask filemonitoringtask launch filemonitoringtask java at org jenkinsci plugins workflow steps durable task durabletaskstep execution start durabletaskstep java at org jenkinsci plugins workflow cps dsl invokestep dsl java at org jenkinsci plugins workflow cps dsl invokemethod dsl java at org jenkinsci plugins workflow cps cpsscript invokemethod cpsscript java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at groovy lang metamethod domethodinvoke metamethod java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang metaclassimpl invokemethod metaclassimpl java at org codehaus groovy runtime callsite pogometaclasssite call pogometaclasssite java at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org kohsuke groovy sandbox impl checker call checker java at org kohsuke groovy sandbox groovyinterceptor onmethodcall groovyinterceptor java at org jenkinsci plugins scriptsecurity sandbox groovy sandboxinterceptor onmethodcall sandboxinterceptor java at org kohsuke groovy sandbox impl checker call checker java at org kohsuke groovy sandbox impl checker checkedcall checker java at org kohsuke groovy sandbox impl checker checkedcall checker java at org kohsuke groovy sandbox impl checker checkedcall checker java at com cloudbees groovy cps sandbox sandboxinvoker methodcall sandboxinvoker java at workflowscript run workflowscript at cps transform native method at com cloudbees groovy cps impl continuationgroup methodcall continuationgroup java at com cloudbees groovy cps impl functioncallblock continuationimpl dispatchorarg functioncallblock java at com cloudbees groovy cps impl functioncallblock continuationimpl fixarg functioncallblock java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com cloudbees groovy cps impl continuationptr continuationimpl receive continuationptr java at com cloudbees groovy cps impl constantblock eval constantblock java at com cloudbees groovy cps next step next java at com cloudbees groovy cps continuable call continuable java at com cloudbees groovy cps continuable call continuable java at org codehaus groovy runtime groovycategorysupport threadcategoryinfo use groovycategorysupport java at org codehaus groovy runtime groovycategorysupport use groovycategorysupport java at com cloudbees groovy cps continuable continuable java at org jenkinsci plugins workflow cps sandboxcontinuable access sandboxcontinuable java at org jenkinsci plugins workflow cps sandboxcontinuable sandboxcontinuable java at org jenkinsci plugins workflow cps cpsthread runnextchunk cpsthread java at org jenkinsci plugins workflow cps cpsthreadgroup run cpsthreadgroup java at org jenkinsci plugins workflow cps cpsthreadgroup access cpsthreadgroup java at org jenkinsci plugins workflow cps cpsthreadgroup call cpsthreadgroup java at org jenkinsci plugins workflow cps cpsthreadgroup call cpsthreadgroup java at org jenkinsci plugins workflow cps cpsvmexecutorservice call cpsvmexecutorservice java at java util concurrent futuretask run futuretask java at hudson remoting singlelaneexecutorservice run singlelaneexecutorservice java at jenkins util contextresettingexecutorservice run contextresettingexecutorservice java at jenkins security impersonatingexecutorservice run impersonatingexecutorservice java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java finished failure jx version the output of jx version is name version jx jenkins x platform kubernetes cluster kubectl helm client client git operating system ubuntu lts pipeline agent label jenkins maven nodejs environment org xxx app name xxx api chartmuseum creds credentials jenkins x chartmuseum docker registry org xxx stage build qa when branch develop steps container maven nodejs ensure we re not on a detached head sh git checkout develop sh git config global credential helper store sh jx step git credentials sh echo path sh pwd sh mvn clean deploy post always cleanws jenkins type select which installation type are you using serverless jenkins x pipelines tekton prow classic jenkins kubernetes cluster on premises cluster operating system environment ubuntu lts ,0
320650,27448607027.0,IssuesEvent,2023-03-02 15:51:06,PluginBugs/Issues-ItemsAdder,https://api.github.com/repos/PluginBugs/Issues-ItemsAdder,closed,Same `custom model data` with different `material type` cannot merge if using custom json `overrides` definition,Bug Need testing,"### Terms
- [X] I'm using the very latest version of ItemsAdder and its dependencies.
- [X] I am sure this is a bug and it is not caused by a misconfiguration or by another plugin.
- [X] I already searched on this [Github page](https://github.com/PluginBugs/Issues-ItemsAdder/issues) to check if the same issue was already reported.
- [X] I already searched on the [plugin wiki](https://itemsadder.devs.beer/) to know if a solution is already known.
- [X] I already searched on the [forums](https://forum.devs.beer/) to check if anyone already has a solution for this.
### Discord tag (optional)
Nailm#9364
### What happened?
IA cannot correctly merge the item model overrides if:
1. a model is defined in the IA config, with custom model id being X and material being Y
2. then I manually define a model override by writing a json file, with custom model id also being X but for a different material other than Y
Ideally, IA should be able to merge the files because the material is different, despite the custom model id is the same.
### Steps to reproduce the issue
I have defined a custom furniture like this (the file is at `plugins/ItemsAdder/contents/furnitures/configs/anniversary/1st_year_cake.yml`):
```yaml
info:
namespace: furnitures
items:
1st_year_cake:
display_name: 1st Year Cake
resource:
material: PAPER
generate: false
model_id: 40000
model_path: anniversary/1st_year_cake
behaviours:
furniture:
entity: item_frame
gravity: false
small: false
solid: true
fixed_rotation: false
placeable_on:
floor: true
ceiling: false
walls: false
```
**Note that the `material` is `PAPER` and the `model_id` is 40000**
Now if I were to define a custom model override by manually writing a jsosn file: the used model and custom model data are identical to the one defined in IA config above but it's for **`leather_horse_armor` other than `paper`**. I put it in the path `plugins/ItemsAdder/contents/_colorable/resourcepack/minecraft/models/item/leather_horse_armor.json`:
```json
{
""parent"": ""item/generated"",
""textures"": {
""layer0"": ""item/leather_horse_armor""
},
""overrides"": [
{
""predicate"": {
""custom_model_data"": 40000
},
""model"": ""furnitures:anniversary/1st_year_cake""
}
]
}
```
Then, run `iazip`
IA would throw a warning:
> [23:24:12 WARN]: [!] CustomModelData 40000 for item 'leather_horse_armor' already used by ItemsAdder custom item 'furnitures:1st_year_cake'. Skipped.
As a result, the `overrides` that are manually written by me is not included in the output pack.
### Server version
Current: git-Purpur-1920 (MC: 1.19.3)*
Previous: git-Purpur-1919 (MC: 1.19.3)
### ItemsAdder Version
ItemsAdder version 3.3.1
### ProtocolLib Version
ProtocolLib version 5.0.0-SNAPSHOT-b612
### LoneLibs Version
LoneLibs version 1.0.23
### LightAPI Version (optional)
_No response_
### LibsDisguises Version (optional)
_No response_
### FULL server log
_No response_
### Error (optional)
_No response_
### Problematic items yml configuration file (optional)
_No response_
### Other files, you can drag and drop them here to upload. (optional)
My ItemsAdder `config.yml`: https://pastes.dev/uo04MGQkkr
### Screenshots/Videos (you can drag and drop files or paste links)
_No response_",1.0,"Same `custom model data` with different `material type` cannot merge if using custom json `overrides` definition - ### Terms
- [X] I'm using the very latest version of ItemsAdder and its dependencies.
- [X] I am sure this is a bug and it is not caused by a misconfiguration or by another plugin.
- [X] I already searched on this [Github page](https://github.com/PluginBugs/Issues-ItemsAdder/issues) to check if the same issue was already reported.
- [X] I already searched on the [plugin wiki](https://itemsadder.devs.beer/) to know if a solution is already known.
- [X] I already searched on the [forums](https://forum.devs.beer/) to check if anyone already has a solution for this.
### Discord tag (optional)
Nailm#9364
### What happened?
IA cannot correctly merge the item model overrides if:
1. a model is defined in the IA config, with custom model id being X and material being Y
2. then I manually define a model override by writing a json file, with custom model id also being X but for a different material other than Y
Ideally, IA should be able to merge the files because the material is different, despite the custom model id is the same.
### Steps to reproduce the issue
I have defined a custom furniture like this (the file is at `plugins/ItemsAdder/contents/furnitures/configs/anniversary/1st_year_cake.yml`):
```yaml
info:
namespace: furnitures
items:
1st_year_cake:
display_name: 1st Year Cake
resource:
material: PAPER
generate: false
model_id: 40000
model_path: anniversary/1st_year_cake
behaviours:
furniture:
entity: item_frame
gravity: false
small: false
solid: true
fixed_rotation: false
placeable_on:
floor: true
ceiling: false
walls: false
```
**Note that the `material` is `PAPER` and the `model_id` is 40000**
Now if I were to define a custom model override by manually writing a jsosn file: the used model and custom model data are identical to the one defined in IA config above but it's for **`leather_horse_armor` other than `paper`**. I put it in the path `plugins/ItemsAdder/contents/_colorable/resourcepack/minecraft/models/item/leather_horse_armor.json`:
```json
{
""parent"": ""item/generated"",
""textures"": {
""layer0"": ""item/leather_horse_armor""
},
""overrides"": [
{
""predicate"": {
""custom_model_data"": 40000
},
""model"": ""furnitures:anniversary/1st_year_cake""
}
]
}
```
Then, run `iazip`
IA would throw a warning:
> [23:24:12 WARN]: [!] CustomModelData 40000 for item 'leather_horse_armor' already used by ItemsAdder custom item 'furnitures:1st_year_cake'. Skipped.
As a result, the `overrides` that are manually written by me is not included in the output pack.
### Server version
Current: git-Purpur-1920 (MC: 1.19.3)*
Previous: git-Purpur-1919 (MC: 1.19.3)
### ItemsAdder Version
ItemsAdder version 3.3.1
### ProtocolLib Version
ProtocolLib version 5.0.0-SNAPSHOT-b612
### LoneLibs Version
LoneLibs version 1.0.23
### LightAPI Version (optional)
_No response_
### LibsDisguises Version (optional)
_No response_
### FULL server log
_No response_
### Error (optional)
_No response_
### Problematic items yml configuration file (optional)
_No response_
### Other files, you can drag and drop them here to upload. (optional)
My ItemsAdder `config.yml`: https://pastes.dev/uo04MGQkkr
### Screenshots/Videos (you can drag and drop files or paste links)
_No response_",0,same custom model data with different material type cannot merge if using custom json overrides definition terms i m using the very latest version of itemsadder and its dependencies i am sure this is a bug and it is not caused by a misconfiguration or by another plugin i already searched on this to check if the same issue was already reported i already searched on the to know if a solution is already known i already searched on the to check if anyone already has a solution for this discord tag optional nailm what happened ia cannot correctly merge the item model overrides if a model is defined in the ia config with custom model id being x and material being y then i manually define a model override by writing a json file with custom model id also being x but for a different material other than y ideally ia should be able to merge the files because the material is different despite the custom model id is the same steps to reproduce the issue i have defined a custom furniture like this the file is at plugins itemsadder contents furnitures configs anniversary year cake yml yaml info namespace furnitures items year cake display name year cake resource material paper generate false model id model path anniversary year cake behaviours furniture entity item frame gravity false small false solid true fixed rotation false placeable on floor true ceiling false walls false note that the material is paper and the model id is now if i were to define a custom model override by manually writing a jsosn file the used model and custom model data are identical to the one defined in ia config above but it s for leather horse armor other than paper i put it in the path plugins itemsadder contents colorable resourcepack minecraft models item leather horse armor json json parent item generated textures item leather horse armor overrides predicate custom model data model furnitures anniversary year cake then run iazip ia would throw a warning custommodeldata for item leather horse armor already used by itemsadder custom item furnitures year cake skipped as a result the overrides that are manually written by me is not included in the output pack server version current git purpur mc previous git purpur mc itemsadder version itemsadder version protocollib version protocollib version snapshot lonelibs version lonelibs version lightapi version optional no response libsdisguises version optional no response full server log no response error optional no response problematic items yml configuration file optional no response other files you can drag and drop them here to upload optional my itemsadder config yml screenshots videos you can drag and drop files or paste links no response ,0
3589,14480916217.0,IssuesEvent,2020-12-10 11:52:04,grey-software/org,https://api.github.com/repos/grey-software/org,opened,🥅 Initiative: Create a dashboard for open source organizations,Domain: User Experience Role: Maintainer Role: Product Owner,"### Motivation 🏁
As the technical lead for an open-source organization, I have found managing multiple software repositories and informing myself of all the events occurring throughout the various platforms I'm on.
At the moment, if I'd like an overview of the analytics, discussions, for all my repositories, I have to click through multiple web pages and parse the valuable information myself.
If I get an insight from a high-level look at the repositories and I want to create an issue, I'll have to once again navigate to the repo's page and create the issue.
### Initiative Overview 👁️🗨️
I propose creating a dashboard to help open-source organization teams get relevant information and act quickly.
**Implementation Details 🛠️ **
Here are some early ideas I have:
- I should be able to view the project boards, analytics, issues, and PRs for all/pinned repositories
- I should be able to create an issue without having to make multiple clicks
- I should be able to view a feed of the community discussions
- I should have relevant notifications enter my feed",True,"🥅 Initiative: Create a dashboard for open source organizations - ### Motivation 🏁
As the technical lead for an open-source organization, I have found managing multiple software repositories and informing myself of all the events occurring throughout the various platforms I'm on.
At the moment, if I'd like an overview of the analytics, discussions, for all my repositories, I have to click through multiple web pages and parse the valuable information myself.
If I get an insight from a high-level look at the repositories and I want to create an issue, I'll have to once again navigate to the repo's page and create the issue.
### Initiative Overview 👁️🗨️
I propose creating a dashboard to help open-source organization teams get relevant information and act quickly.
**Implementation Details 🛠️ **
Here are some early ideas I have:
- I should be able to view the project boards, analytics, issues, and PRs for all/pinned repositories
- I should be able to create an issue without having to make multiple clicks
- I should be able to view a feed of the community discussions
- I should have relevant notifications enter my feed",1,🥅 initiative create a dashboard for open source organizations motivation 🏁 a clear and concise motivation for this initiative how will this help execute the vision of the org as the technical lead for an open source organization i have found managing multiple software repositories and informing myself of all the events occurring throughout the various platforms i m on at the moment if i d like an overview of the analytics discussions for all my repositories i have to click through multiple web pages and parse the valuable information myself if i get an insight from a high level look at the repositories and i want to create an issue i ll have to once again navigate to the repo s page and create the issue initiative overview 👁️🗨️ a clear and concise description of what the initiative is i propose creating a dashboard to help open source organization teams get relevant information and act quickly implementation details 🛠️ here are some early ideas i have i should be able to view the project boards analytics issues and prs for all pinned repositories i should be able to create an issue without having to make multiple clicks i should be able to view a feed of the community discussions i should have relevant notifications enter my feed,1
46259,13055880015.0,IssuesEvent,2020-07-30 03:00:31,icecube-trac/tix2,https://api.github.com/repos/icecube-trac/tix2,opened,test ticket (Trac #869),Incomplete Migration Migrated from Trac cmake defect,"Migrated from https://code.icecube.wisc.edu/ticket/869
```json
{
""status"": ""closed"",
""changetime"": ""2015-02-12T06:31:40"",
""description"": """",
""reporter"": ""nega"",
""cc"": """",
""resolution"": ""invalid"",
""_ts"": ""1423722700498868"",
""component"": ""cmake"",
""summary"": ""test ticket"",
""priority"": ""normal"",
""keywords"": """",
""time"": ""2015-02-11T23:11:24"",
""milestone"": """",
""owner"": ""nega"",
""type"": ""defect""
}
```
",1.0,"test ticket (Trac #869) - Migrated from https://code.icecube.wisc.edu/ticket/869
```json
{
""status"": ""closed"",
""changetime"": ""2015-02-12T06:31:40"",
""description"": """",
""reporter"": ""nega"",
""cc"": """",
""resolution"": ""invalid"",
""_ts"": ""1423722700498868"",
""component"": ""cmake"",
""summary"": ""test ticket"",
""priority"": ""normal"",
""keywords"": """",
""time"": ""2015-02-11T23:11:24"",
""milestone"": """",
""owner"": ""nega"",
""type"": ""defect""
}
```
",0,test ticket trac migrated from json status closed changetime description reporter nega cc resolution invalid ts component cmake summary test ticket priority normal keywords time milestone owner nega type defect ,0
1633,6572657467.0,IssuesEvent,2017-09-11 04:08:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,stat module doesn't return lnk_source when follow=yes,affects_2.0 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
stat
##### ANSIBLE VERSION
```
ansible 2.0.1.0 (detached HEAD bb6cadefa2) last updated 2016/04/13 12:36:28 (GMT -700)
lib/ansible/modules/core: (detached HEAD 262e2a3302) last updated 2016/04/13 12:36:28 (GMT -700)
lib/ansible/modules/extras: (detached HEAD e0be11da08) last updated 2016/04/13 12:36:28 (GMT -700)
config file = /ansible.cfg
configured module search path = /usr/share/ansible:playbooks/library
```
##### CONFIGURATION
Irrelevant
##### OS / ENVIRONMENT
Host: Fedora 23, kernel 4.4.6-300.fc23.x86_64, python 2.7.11
Target: CentOS 6.7, kernel 2.6.32-573.7.1.el6.x86_64, python 2.6.6
##### SUMMARY
The stat module fails to return the lnk_source attribute as part of its registered output. When trying to retrieve the target path and target attributes of a symlink, one must use stat twice - the first time with follow=no to retrieve lnk_source, and a second time with follow=yes to retrieve the remaining attributes. In both follow settings, 'path' is returned as the symlink path being inspected.
##### STEPS TO REPRODUCE
```
- stat: path=""/some/symlink""
follow=yes
register: stat_result
- debug: var=stat_result.stat
```
##### EXPECTED RESULTS
```
""stat"": {
""atime"": 1460493437.5765483,
""ctime"": 1459867813.2843106,
""dev"": 2064,
""exists"": true,
""gid"": 1003,
""gr_name"": ""nobody"",
""inode"": 2107891,
""isblk"": false,
""ischr"": false,
""isdir"": true,
""isfifo"": false,
""isgid"": false,
""islnk"": false,
""isreg"": false,
""issock"": false,
""isuid"": false,
""lnk_source"": ""/target/file"",
""mode"": ""0755"",
""mtime"": 1459867813.2843106,
""nlink"": 17,
""path"": ""/some/symlink"",
""pw_name"": ""nobody"",
""rgrp"": true,
""roth"": true,
""rusr"": true,
""size"": 4096,
""uid"": 1003,
""wgrp"": false,
""woth"": false,
""wusr"": true,
""xgrp"": true,
""xoth"": true,
""xusr"": true
}
```
##### ACTUAL RESULTS
```
""stat"": {
""atime"": 1460493437.5765483,
""ctime"": 1459867813.2843106,
""dev"": 2064,
""exists"": true,
""gid"": 1003,
""gr_name"": ""nobody"",
""inode"": 2107891,
""isblk"": false,
""ischr"": false,
""isdir"": true,
""isfifo"": false,
""isgid"": false,
""islnk"": false,
""isreg"": false,
""issock"": false,
""isuid"": false,
""mode"": ""0755"",
""mtime"": 1459867813.2843106,
""nlink"": 17,
""path"": ""/some/symlink"",
""pw_name"": ""nobody"",
""rgrp"": true,
""roth"": true,
""rusr"": true,
""size"": 4096,
""uid"": 1003,
""wgrp"": false,
""woth"": false,
""wusr"": true,
""xgrp"": true,
""xoth"": true,
""xusr"": true
}
```
",True,"stat module doesn't return lnk_source when follow=yes - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
stat
##### ANSIBLE VERSION
```
ansible 2.0.1.0 (detached HEAD bb6cadefa2) last updated 2016/04/13 12:36:28 (GMT -700)
lib/ansible/modules/core: (detached HEAD 262e2a3302) last updated 2016/04/13 12:36:28 (GMT -700)
lib/ansible/modules/extras: (detached HEAD e0be11da08) last updated 2016/04/13 12:36:28 (GMT -700)
config file = /ansible.cfg
configured module search path = /usr/share/ansible:playbooks/library
```
##### CONFIGURATION
Irrelevant
##### OS / ENVIRONMENT
Host: Fedora 23, kernel 4.4.6-300.fc23.x86_64, python 2.7.11
Target: CentOS 6.7, kernel 2.6.32-573.7.1.el6.x86_64, python 2.6.6
##### SUMMARY
The stat module fails to return the lnk_source attribute as part of its registered output. When trying to retrieve the target path and target attributes of a symlink, one must use stat twice - the first time with follow=no to retrieve lnk_source, and a second time with follow=yes to retrieve the remaining attributes. In both follow settings, 'path' is returned as the symlink path being inspected.
##### STEPS TO REPRODUCE
```
- stat: path=""/some/symlink""
follow=yes
register: stat_result
- debug: var=stat_result.stat
```
##### EXPECTED RESULTS
```
""stat"": {
""atime"": 1460493437.5765483,
""ctime"": 1459867813.2843106,
""dev"": 2064,
""exists"": true,
""gid"": 1003,
""gr_name"": ""nobody"",
""inode"": 2107891,
""isblk"": false,
""ischr"": false,
""isdir"": true,
""isfifo"": false,
""isgid"": false,
""islnk"": false,
""isreg"": false,
""issock"": false,
""isuid"": false,
""lnk_source"": ""/target/file"",
""mode"": ""0755"",
""mtime"": 1459867813.2843106,
""nlink"": 17,
""path"": ""/some/symlink"",
""pw_name"": ""nobody"",
""rgrp"": true,
""roth"": true,
""rusr"": true,
""size"": 4096,
""uid"": 1003,
""wgrp"": false,
""woth"": false,
""wusr"": true,
""xgrp"": true,
""xoth"": true,
""xusr"": true
}
```
##### ACTUAL RESULTS
```
""stat"": {
""atime"": 1460493437.5765483,
""ctime"": 1459867813.2843106,
""dev"": 2064,
""exists"": true,
""gid"": 1003,
""gr_name"": ""nobody"",
""inode"": 2107891,
""isblk"": false,
""ischr"": false,
""isdir"": true,
""isfifo"": false,
""isgid"": false,
""islnk"": false,
""isreg"": false,
""issock"": false,
""isuid"": false,
""mode"": ""0755"",
""mtime"": 1459867813.2843106,
""nlink"": 17,
""path"": ""/some/symlink"",
""pw_name"": ""nobody"",
""rgrp"": true,
""roth"": true,
""rusr"": true,
""size"": 4096,
""uid"": 1003,
""wgrp"": false,
""woth"": false,
""wusr"": true,
""xgrp"": true,
""xoth"": true,
""xusr"": true
}
```
",1,stat module doesn t return lnk source when follow yes issue type bug report component name stat ansible version ansible detached head last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file ansible cfg configured module search path usr share ansible playbooks library configuration irrelevant os environment host fedora kernel python target centos kernel python summary the stat module fails to return the lnk source attribute as part of its registered output when trying to retrieve the target path and target attributes of a symlink one must use stat twice the first time with follow no to retrieve lnk source and a second time with follow yes to retrieve the remaining attributes in both follow settings path is returned as the symlink path being inspected steps to reproduce stat path some symlink follow yes register stat result debug var stat result stat expected results stat atime ctime dev exists true gid gr name nobody inode isblk false ischr false isdir true isfifo false isgid false islnk false isreg false issock false isuid false lnk source target file mode mtime nlink path some symlink pw name nobody rgrp true roth true rusr true size uid wgrp false woth false wusr true xgrp true xoth true xusr true actual results stat atime ctime dev exists true gid gr name nobody inode isblk false ischr false isdir true isfifo false isgid false islnk false isreg false issock false isuid false mode mtime nlink path some symlink pw name nobody rgrp true roth true rusr true size uid wgrp false woth false wusr true xgrp true xoth true xusr true ,1
104956,13147938716.0,IssuesEvent,2020-08-08 18:26:52,mrkirkmorgan/song-builder,https://api.github.com/repos/mrkirkmorgan/song-builder,opened,User Authentication Process,design rollout,This ticket covers creating an authentication process by which users can log into their account and access their information/music. ,1.0,User Authentication Process - This ticket covers creating an authentication process by which users can log into their account and access their information/music. ,0,user authentication process this ticket covers creating an authentication process by which users can log into their account and access their information music ,0
137557,12758986155.0,IssuesEvent,2020-06-29 04:19:47,ocaml/ocaml,https://api.github.com/repos/ocaml/ocaml,closed,[> {typexpr} as 't ] yields a syntax error with yacc syntax,Stale bug documentation,"**Original bug ID:** 3957
**Reporter:** alexbaretta
**Status:** acknowledged (set by @damiendoligez on 2006-03-29T14:34:19Z)
**Resolution:** open
**Priority:** normal
**Severity:** minor
**Category:** documentation
**Related to:** #3835
## Bug description
root@alex:~# ledit ocaml
Objective Caml version 3.09.1+dev5 (2005-12-05)
# type 'a foo = [ `Foo of 'a ];;
type 'a foo = [ `Foo of 'a ]
# let x : [> 'x foo as 'x] option = None;;
Syntax error
# #load ""camlp4o.cma"";;
Camlp4 Parsing version 3.09.1+dev5 (2005-12-05)
# let x : [> 'x foo as 'x] option = None;;
val x : [> ('a foo as 'a) foo ] option = None
## Additional information
The problem is hardly significant, as the camlp4 based parsers easily allow to overcome it.
",1.0,"[> {typexpr} as 't ] yields a syntax error with yacc syntax - **Original bug ID:** 3957
**Reporter:** alexbaretta
**Status:** acknowledged (set by @damiendoligez on 2006-03-29T14:34:19Z)
**Resolution:** open
**Priority:** normal
**Severity:** minor
**Category:** documentation
**Related to:** #3835
## Bug description
root@alex:~# ledit ocaml
Objective Caml version 3.09.1+dev5 (2005-12-05)
# type 'a foo = [ `Foo of 'a ];;
type 'a foo = [ `Foo of 'a ]
# let x : [> 'x foo as 'x] option = None;;
Syntax error
# #load ""camlp4o.cma"";;
Camlp4 Parsing version 3.09.1+dev5 (2005-12-05)
# let x : [> 'x foo as 'x] option = None;;
val x : [> ('a foo as 'a) foo ] option = None
## Additional information
The problem is hardly significant, as the camlp4 based parsers easily allow to overcome it.
",0, yields a syntax error with yacc syntax original bug id reporter alexbaretta status acknowledged set by damiendoligez on resolution open priority normal severity minor category documentation related to bug description root alex ledit ocaml objective caml version type a foo type a foo let x option none syntax error load cma parsing version let x option none val x option none additional information the problem is hardly significant as the based parsers easily allow to overcome it ,0
3709,15188224725.0,IssuesEvent,2021-02-15 14:50:15,carbon-design-system/carbon,https://api.github.com/repos/carbon-design-system/carbon,closed,Tag component with css variables,status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 type: enhancement 💡,"Hi!
As I see, this example not working: https://www.carbondesignsystem.com/components/tag/code
If I change the theme, the colors don't react, because the css custom variable is undefined and always the fallback is active.
We are facing the same problem, because if we add this to our root scss, it's don't contains the vars for tag:
```scss
@include carbon--theme($carbon--theme--g100, true);
```
Is this intentional? Or is there a mixin/function to get this variables?
For the quick fix, I made a small mixin:
```scss
@mixin get-variables($globalTheme, $tokens) {
@each $key, $options in $tokens {
$values: map-get($options, 'values');
@each $valueObject in $values {
$theme: map-get($valueObject, 'theme');
$value: map-get($valueObject, 'value');
@if $theme == $globalTheme {
@include custom-property($key, $value);
}
}
}
}
```
Before this we include the tokens of tag (https://github.com/carbon-design-system/carbon/blob/master/packages/components/src/components/tag/_tokens.scss), and we include our mixin like this:
```scss
.root-element {
@include get-variables($carbon--theme--g100, $tag-colors);
}
```
And it's will generate:
```css
.root-element {
--cds-tag-background-red: #ffd7d9;
--cds-tag-color-red: #750e13;
--cds-tag-hover-red: #ffb3b8;
...
}
```
What do you think about it?",True,"Tag component with css variables - Hi!
As I see, this example not working: https://www.carbondesignsystem.com/components/tag/code
If I change the theme, the colors don't react, because the css custom variable is undefined and always the fallback is active.
We are facing the same problem, because if we add this to our root scss, it's don't contains the vars for tag:
```scss
@include carbon--theme($carbon--theme--g100, true);
```
Is this intentional? Or is there a mixin/function to get this variables?
For the quick fix, I made a small mixin:
```scss
@mixin get-variables($globalTheme, $tokens) {
@each $key, $options in $tokens {
$values: map-get($options, 'values');
@each $valueObject in $values {
$theme: map-get($valueObject, 'theme');
$value: map-get($valueObject, 'value');
@if $theme == $globalTheme {
@include custom-property($key, $value);
}
}
}
}
```
Before this we include the tokens of tag (https://github.com/carbon-design-system/carbon/blob/master/packages/components/src/components/tag/_tokens.scss), and we include our mixin like this:
```scss
.root-element {
@include get-variables($carbon--theme--g100, $tag-colors);
}
```
And it's will generate:
```css
.root-element {
--cds-tag-background-red: #ffd7d9;
--cds-tag-color-red: #750e13;
--cds-tag-hover-red: #ffb3b8;
...
}
```
What do you think about it?",1,tag component with css variables hi as i see this example not working if i change the theme the colors don t react because the css custom variable is undefined and always the fallback is active we are facing the same problem because if we add this to our root scss it s don t contains the vars for tag scss include carbon theme carbon theme true is this intentional or is there a mixin function to get this variables for the quick fix i made a small mixin scss mixin get variables globaltheme tokens each key options in tokens values map get options values each valueobject in values theme map get valueobject theme value map get valueobject value if theme globaltheme include custom property key value before this we include the tokens of tag and we include our mixin like this scss root element include get variables carbon theme tag colors and it s will generate css root element cds tag background red cds tag color red cds tag hover red what do you think about it ,1
221579,17359049267.0,IssuesEvent,2021-07-29 17:52:41,nasa/cFE,https://api.github.com/repos/nasa/cFE,closed,Hard coded time print format checks fail when non-default epoch is used,unit-test,"**Is your feature request related to a problem? Please describe.**
Epoch is configurable:
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/cmake/sample_defs/sample_mission_cfg.h#L186-L190
Time unit tests hard-code checks that are impacted by epoch configuration, and fail when it's changed (example):
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/modules/time/ut-coverage/time_UT.c#L398-L424
**Describe the solution you'd like**
Update tests to work with configured epoch. Either adjust for configured epoch or test the actual values (not print time).
**Describe alternatives you've considered**
None
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC, @excaliburtb
",1.0,"Hard coded time print format checks fail when non-default epoch is used - **Is your feature request related to a problem? Please describe.**
Epoch is configurable:
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/cmake/sample_defs/sample_mission_cfg.h#L186-L190
Time unit tests hard-code checks that are impacted by epoch configuration, and fail when it's changed (example):
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/modules/time/ut-coverage/time_UT.c#L398-L424
**Describe the solution you'd like**
Update tests to work with configured epoch. Either adjust for configured epoch or test the actual values (not print time).
**Describe alternatives you've considered**
None
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC, @excaliburtb
",0,hard coded time print format checks fail when non default epoch is used is your feature request related to a problem please describe epoch is configurable time unit tests hard code checks that are impacted by epoch configuration and fail when it s changed example describe the solution you d like update tests to work with configured epoch either adjust for configured epoch or test the actual values not print time describe alternatives you ve considered none additional context none requester info jacob hageman nasa gsfc excaliburtb ,0
42658,22758955863.0,IssuesEvent,2022-07-07 19:11:59,scylladb/scylla,https://api.github.com/repos/scylladb/scylla,closed,Schema change statements are slow due to memtable flush latency,performance,"_Installation details_
Scylla version (or git commit hash): any
Executing DDL statements takes significantly more time on Scylla than on Cassandra. For instance, `drop keyspace` takes about a second on an idle S\* server. I traced that down to latency of the flush of schema tables. The `create keyspace` statement is noticeably faster than `drop keyspace` because it flushes much fewer tables.
It looks like the latency comes mainly from a large number of `fdatasync` calls which we execute sequentially during schema tables flush (I counted 77 calls). When I disable them, `drop keyspace` time drops down to about 100ms. Maybe some of them could be avoided or parallelized.
Here's a detailed trace during `drop keyspace`:
```
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Taking the merge lock
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Took the merge lock
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Reading old schema
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Applying schema changes
TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_keyspaces key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_columnfamilies key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_columns key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_triggers key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_usertypes key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.IndexInfo key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] schema_tables - Flushing {9f5c6374-d485-3229-9a0a-5094af9ad1e3, b0f22357-4458-3cdb-9631-c43e59ce3676, 0359bc71-7123-3ee1-9a4a-b9dfb11fc125, 296e9c04-9bec-3085-827d-c17d3df2122a, 3aa75225-4f82-350b-8d5c-430fa221fa0a, 45f5b360-24bc-3f83-a363-1034ea4fa697}
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of IndexInfo.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db
DEBUG 2016-07-15 17:41:00,020 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-TOC.txt.tmp
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_keyspaces.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_triggers.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columns.system, partitions: 2, occupancy: 31.28%, 81992 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_usertypes.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columnfamilies.system, partitions: 2, occupancy: 14.61%, 38312 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db
DEBUG 2016-07-15 17:41:00,021 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=149
TRACE 2016-07-15 17:41:00,022 seastar - running fdatasync() from 0 id=149
TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=150
TRACE 2016-07-15 17:41:00,066 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,066 seastar - running fdatasync() from 0 id=150
TRACE 2016-07-15 17:41:00,066 [shard 0] seastar - flush done, id=149
TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=150
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=151
TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=151
TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=151
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=152
TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=152
TRACE 2016-07-15 17:41:00,078 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,078 [shard 0] seastar - flush done, id=152
TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: end of stream
TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: end of stream
TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=153
TRACE 2016-07-15 17:41:00,085 seastar - running fdatasync() from 0 id=153
TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=154
TRACE 2016-07-15 17:41:00,113 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,113 seastar - running fdatasync() from 0 id=154
TRACE 2016-07-15 17:41:00,113 [shard 0] seastar - flush done, id=153
TRACE 2016-07-15 17:41:00,125 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,126 [shard 0] seastar - flush done, id=154
TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=155
TRACE 2016-07-15 17:41:00,130 seastar - running fdatasync() from 0 id=155
TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=156
TRACE 2016-07-15 17:41:00,142 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,142 seastar - running fdatasync() from 0 id=156
TRACE 2016-07-15 17:41:00,142 [shard 0] seastar - flush done, id=155
TRACE 2016-07-15 17:41:00,156 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,156 [shard 0] seastar - flush done, id=156
DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Digest.sha1
DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Digest.sha1
TRACE 2016-07-15 17:41:00,163 [shard 0] seastar - starting flush, id=157
TRACE 2016-07-15 17:41:00,163 seastar - running fdatasync() from 0 id=157
TRACE 2016-07-15 17:41:00,164 [shard 0] seastar - starting flush, id=158
TRACE 2016-07-15 17:41:00,192 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,192 seastar - running fdatasync() from 0 id=158
TRACE 2016-07-15 17:41:00,192 [shard 0] seastar - flush done, id=157
TRACE 2016-07-15 17:41:00,207 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,207 [shard 0] seastar - flush done, id=158
DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-CRC.db
DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-CRC.db
TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=159
TRACE 2016-07-15 17:41:00,213 seastar - running fdatasync() from 0 id=159
TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=160
TRACE 2016-07-15 17:41:00,239 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,239 seastar - running fdatasync() from 0 id=160
TRACE 2016-07-15 17:41:00,239 [shard 0] seastar - flush done, id=159
TRACE 2016-07-15 17:41:00,244 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,244 [shard 0] seastar - flush done, id=160
TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Summary.db
TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Summary.db
TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=161
TRACE 2016-07-15 17:41:00,248 seastar - running fdatasync() from 0 id=161
TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=162
TRACE 2016-07-15 17:41:00,273 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,273 seastar - running fdatasync() from 0 id=162
TRACE 2016-07-15 17:41:00,273 [shard 0] seastar - flush done, id=161
TRACE 2016-07-15 17:41:00,286 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,286 [shard 0] seastar - flush done, id=162
DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Filter.db
DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Filter.db
TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=163
TRACE 2016-07-15 17:41:00,291 seastar - running fdatasync() from 0 id=163
TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=164
TRACE 2016-07-15 17:41:00,317 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,317 seastar - running fdatasync() from 0 id=164
TRACE 2016-07-15 17:41:00,317 [shard 0] seastar - flush done, id=163
TRACE 2016-07-15 17:41:00,328 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,328 [shard 0] seastar - flush done, id=164
DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Statistics.db
DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Statistics.db
TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=165
TRACE 2016-07-15 17:41:00,334 seastar - running fdatasync() from 0 id=165
TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=166
TRACE 2016-07-15 17:41:00,359 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,359 seastar - running fdatasync() from 0 id=166
TRACE 2016-07-15 17:41:00,359 [shard 0] seastar - flush done, id=165
TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=166
TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: sealing
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=167
TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=167
TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: sealing
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=167
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=168
TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=168
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=169
TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 id=169
TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=168
TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=169
TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - starting flush, id=170
TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 id=170
DEBUG 2016-07-15 17:41:00,386 [shard 0] sstable - SSTable with generation 182 of system.schema_keyspaces was sealed successfully.
TRACE 2016-07-15 17:41:00,386 [shard 0] database - Written. Opening the sstable...
TRACE 2016-07-15 17:41:00,395 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,395 [shard 0] seastar - flush done, id=170
DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - SSTable with generation 82 of system.IndexInfo was sealed successfully.
TRACE 2016-07-15 17:41:00,396 [shard 0] database - Written. Opening the sstable...
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db done
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db replaced
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db
DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-TOC.txt.tmp
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db done
DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db replaced
DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db
DEBUG 2016-07-15 17:41:00,397 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,397 [shard 0] seastar - starting flush, id=171
TRACE 2016-07-15 17:41:00,397 seastar - running fdatasync() from 0 id=171
TRACE 2016-07-15 17:41:00,398 [shard 0] seastar - starting flush, id=172
TRACE 2016-07-15 17:41:00,415 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,415 seastar - running fdatasync() from 0 id=172
TRACE 2016-07-15 17:41:00,415 [shard 0] seastar - flush done, id=171
TRACE 2016-07-15 17:41:00,424 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - flush done, id=172
TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - starting flush, id=173
TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=173
TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=173
TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - starting flush, id=174
TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=174
TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=174
TRACE 2016-07-15 17:41:00,425 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: end of stream
TRACE 2016-07-15 17:41:00,426 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: end of stream
TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=175
TRACE 2016-07-15 17:41:00,431 seastar - running fdatasync() from 0 id=175
TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=176
TRACE 2016-07-15 17:41:00,456 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,456 seastar - running fdatasync() from 0 id=176
TRACE 2016-07-15 17:41:00,456 [shard 0] seastar - flush done, id=175
TRACE 2016-07-15 17:41:00,464 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,464 [shard 0] seastar - flush done, id=176
TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=177
TRACE 2016-07-15 17:41:00,471 seastar - running fdatasync() from 0 id=177
TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=178
TRACE 2016-07-15 17:41:00,490 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,490 seastar - running fdatasync() from 0 id=178
TRACE 2016-07-15 17:41:00,490 [shard 0] seastar - flush done, id=177
TRACE 2016-07-15 17:41:00,497 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,497 [shard 0] seastar - flush done, id=178
DEBUG 2016-07-15 17:41:00,497 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Digest.sha1
DEBUG 2016-07-15 17:41:00,498 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Digest.sha1
TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - starting flush, id=179
TRACE 2016-07-15 17:41:00,500 seastar - running fdatasync() from 0 id=179
TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - starting flush, id=180
TRACE 2016-07-15 17:41:00,528 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,528 seastar - running fdatasync() from 0 id=180
TRACE 2016-07-15 17:41:00,528 [shard 0] seastar - flush done, id=179
TRACE 2016-07-15 17:41:00,540 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,540 [shard 0] seastar - flush done, id=180
DEBUG 2016-07-15 17:41:00,540 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-CRC.db
DEBUG 2016-07-15 17:41:00,541 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-CRC.db
TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=181
TRACE 2016-07-15 17:41:00,547 seastar - running fdatasync() from 0 id=181
TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=182
TRACE 2016-07-15 17:41:00,565 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,565 seastar - running fdatasync() from 0 id=182
TRACE 2016-07-15 17:41:00,565 [shard 0] seastar - flush done, id=181
TRACE 2016-07-15 17:41:00,575 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,575 [shard 0] seastar - flush done, id=182
TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Summary.db
TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Summary.db
TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=183
TRACE 2016-07-15 17:41:00,581 seastar - running fdatasync() from 0 id=183
TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=184
TRACE 2016-07-15 17:41:00,607 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,607 seastar - running fdatasync() from 0 id=184
TRACE 2016-07-15 17:41:00,607 [shard 0] seastar - flush done, id=183
TRACE 2016-07-15 17:41:00,616 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,616 [shard 0] seastar - flush done, id=184
DEBUG 2016-07-15 17:41:00,616 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Filter.db
DEBUG 2016-07-15 17:41:00,617 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Filter.db
TRACE 2016-07-15 17:41:00,624 [shard 0] seastar - starting flush, id=185
TRACE 2016-07-15 17:41:00,625 seastar - running fdatasync() from 0 id=185
TRACE 2016-07-15 17:41:00,625 [shard 0] seastar - starting flush, id=186
TRACE 2016-07-15 17:41:00,657 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,657 seastar - running fdatasync() from 0 id=186
TRACE 2016-07-15 17:41:00,657 [shard 0] seastar - flush done, id=185
TRACE 2016-07-15 17:41:00,673 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,673 [shard 0] seastar - flush done, id=186
DEBUG 2016-07-15 17:41:00,673 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Statistics.db
DEBUG 2016-07-15 17:41:00,674 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Statistics.db
TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=187
TRACE 2016-07-15 17:41:00,680 seastar - running fdatasync() from 0 id=187
TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=188
TRACE 2016-07-15 17:41:00,703 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,704 seastar - running fdatasync() from 0 id=188
TRACE 2016-07-15 17:41:00,704 [shard 0] seastar - flush done, id=187
TRACE 2016-07-15 17:41:00,712 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,712 [shard 0] seastar - flush done, id=188
TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: sealing
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=189
TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=189
TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: sealing
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=189
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=190
TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=190
TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=190
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=191
TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=191
TRACE 2016-07-15 17:41:00,728 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - flush done, id=191
TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - starting flush, id=192
TRACE 2016-07-15 17:41:00,728 seastar - running fdatasync() from 0 id=192
TRACE 2016-07-15 17:41:00,749 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,749 [shard 0] seastar - flush done, id=192
DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 84 of system.schema_triggers was sealed successfully.
TRACE 2016-07-15 17:41:00,749 [shard 0] database - Written. Opening the sstable...
DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 95 of system.schema_columns was sealed successfully.
TRACE 2016-07-15 17:41:00,750 [shard 0] database - Written. Opening the sstable...
DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db done
INFO 2016-07-15 17:41:00,750 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_tr
iggers-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db:level=0, ]
DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db replaced
DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db
DEBUG 2016-07-15 17:41:00,750 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-TOC.txt.tmp
DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db done
DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db replaced
DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db
DEBUG 2016-07-15 17:41:00,751 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=193
TRACE 2016-07-15 17:41:00,753 seastar - running fdatasync() from 0 id=193
TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=194
DEBUG 2016-07-15 17:41:00,753 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,772 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,772 seastar - running fdatasync() from 0 id=194
TRACE 2016-07-15 17:41:00,772 [shard 0] seastar - flush done, id=193
TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - flush done, id=194
TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - starting flush, id=195
TRACE 2016-07-15 17:41:00,786 seastar - running fdatasync() from 0 id=195
TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=195
TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - starting flush, id=196
TRACE 2016-07-15 17:41:00,787 seastar - running fdatasync() from 0 id=196
TRACE 2016-07-15 17:41:00,787 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=196
TRACE 2016-07-15 17:41:00,787 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: end of stream
TRACE 2016-07-15 17:41:00,788 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: end of stream
TRACE 2016-07-15 17:41:00,806 [shard 0] seastar - starting flush, id=197
TRACE 2016-07-15 17:41:00,809 seastar - running fdatasync() from 0 id=197
TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=198
TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=199
TRACE 2016-07-15 17:41:00,867 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,867 seastar - running fdatasync() from 0 id=198
TRACE 2016-07-15 17:41:00,867 [shard 0] seastar - flush done, id=197
TRACE 2016-07-15 17:41:00,873 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,873 seastar - running fdatasync() from 0 id=199
TRACE 2016-07-15 17:41:00,873 [shard 0] seastar - flush done, id=198
TRACE 2016-07-15 17:41:00,884 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=199
TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - starting flush, id=200
TRACE 2016-07-15 17:41:00,885 seastar - running fdatasync() from 0 id=200
TRACE 2016-07-15 17:41:00,885 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=200
TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: end of stream
TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=201
TRACE 2016-07-15 17:41:00,887 seastar - running fdatasync() from 0 id=201
TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=202
TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=203
TRACE 2016-07-15 17:41:00,920 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,920 seastar - running fdatasync() from 0 id=202
TRACE 2016-07-15 17:41:00,920 [shard 0] seastar - flush done, id=201
TRACE 2016-07-15 17:41:00,947 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,947 seastar - running fdatasync() from 0 id=203
TRACE 2016-07-15 17:41:00,956 [shard 0] seastar - flush done, id=202
TRACE 2016-07-15 17:41:00,963 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,963 [shard 0] seastar - flush done, id=203
DEBUG 2016-07-15 17:41:00,963 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Digest.sha1
DEBUG 2016-07-15 17:41:00,964 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Digest.sha1
TRACE 2016-07-15 17:41:00,964 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=204
TRACE 2016-07-15 17:41:00,968 seastar - running fdatasync() from 0 id=204
TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=205
TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=206
TRACE 2016-07-15 17:41:00,986 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,986 seastar - running fdatasync() from 0 id=205
TRACE 2016-07-15 17:41:00,986 [shard 0] seastar - flush done, id=204
TRACE 2016-07-15 17:41:01,002 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,002 seastar - running fdatasync() from 0 id=206
TRACE 2016-07-15 17:41:01,006 [shard 0] seastar - flush done, id=205
TRACE 2016-07-15 17:41:01,007 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,007 [shard 0] seastar - flush done, id=206
DEBUG 2016-07-15 17:41:01,007 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-CRC.db
DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Digest.sha1
DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-CRC.db
TRACE 2016-07-15 17:41:01,009 [shard 0] seastar - starting flush, id=207
TRACE 2016-07-15 17:41:01,016 [shard 0] seastar - starting flush, id=208
TRACE 2016-07-15 17:41:01,022 seastar - running fdatasync() from 0 id=207
TRACE 2016-07-15 17:41:01,022 [shard 0] seastar - starting flush, id=209
TRACE 2016-07-15 17:41:01,047 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,047 seastar - running fdatasync() from 0 id=208
TRACE 2016-07-15 17:41:01,056 [shard 0] seastar - flush done, id=207
TRACE 2016-07-15 17:41:01,062 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,062 seastar - running fdatasync() from 0 id=209
TRACE 2016-07-15 17:41:01,062 [shard 0] seastar - flush done, id=208
TRACE 2016-07-15 17:41:01,076 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,076 [shard 0] seastar - flush done, id=209
DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-CRC.db
TRACE 2016-07-15 17:41:01,076 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Summary.db
TRACE 2016-07-15 17:41:01,077 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:01,077 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Summary.db
TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=210
TRACE 2016-07-15 17:41:01,086 seastar - running fdatasync() from 0 id=210
TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=211
TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=212
TRACE 2016-07-15 17:41:01,118 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,118 seastar - running fdatasync() from 0 id=211
TRACE 2016-07-15 17:41:01,118 [shard 0] seastar - flush done, id=210
TRACE 2016-07-15 17:41:01,125 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,125 seastar - running fdatasync() from 0 id=212
TRACE 2016-07-15 17:41:01,127 [shard 0] seastar - flush done, id=211
TRACE 2016-07-15 17:41:01,137 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,137 [shard 0] seastar - flush done, id=212
TRACE 2016-07-15 17:41:01,137 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Summary.db
DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Filter.db
DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Filter.db
TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=213
TRACE 2016-07-15 17:41:01,143 seastar - running fdatasync() from 0 id=213
TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=214
TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=215
TRACE 2016-07-15 17:41:01,175 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,175 seastar - running fdatasync() from 0 id=214
TRACE 2016-07-15 17:41:01,175 [shard 0] seastar - flush done, id=213
TRACE 2016-07-15 17:41:01,184 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,184 seastar - running fdatasync() from 0 id=215
TRACE 2016-07-15 17:41:01,188 [shard 0] seastar - flush done, id=214
TRACE 2016-07-15 17:41:01,194 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,194 [shard 0] seastar - flush done, id=215
DEBUG 2016-07-15 17:41:01,194 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Filter.db
DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Statistics.db
DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Statistics.db
TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=216
TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=217
TRACE 2016-07-15 17:41:01,201 seastar - running fdatasync() from 0 id=216
TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=218
TRACE 2016-07-15 17:41:01,235 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,235 seastar - running fdatasync() from 0 id=217
TRACE 2016-07-15 17:41:01,238 [shard 0] seastar - flush done, id=216
TRACE 2016-07-15 17:41:01,243 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,243 seastar - running fdatasync() from 0 id=218
TRACE 2016-07-15 17:41:01,243 [shard 0] seastar - flush done, id=217
TRACE 2016-07-15 17:41:01,255 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,255 [shard 0] seastar - flush done, id=218
DEBUG 2016-07-15 17:41:01,255 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Statistics.db
TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: sealing
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=219
TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=219
TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: sealing
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=219
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=220
TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=220
TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=220
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=221
TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=221
TRACE 2016-07-15 17:41:01,280 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - flush done, id=221
TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - starting flush, id=222
TRACE 2016-07-15 17:41:01,281 seastar - running fdatasync() from 0 id=222
TRACE 2016-07-15 17:41:01,281 [shard 0] seastar - starting flush, id=223
TRACE 2016-07-15 17:41:01,293 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,294 seastar - running fdatasync() from 0 id=223
TRACE 2016-07-15 17:41:01,294 [shard 0] seastar - flush done, id=222
DEBUG 2016-07-15 17:41:01,294 [shard 0] sstable - SSTable with generation 84 of system.schema_usertypes was sealed successfully.
TRACE 2016-07-15 17:41:01,294 [shard 0] database - Written. Opening the sstable...
TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=223
DEBUG 2016-07-15 17:41:01,310 [shard 0] sstable - SSTable with generation 95 of system.schema_columnfamilies was sealed successfully.
TRACE 2016-07-15 17:41:01,310 [shard 0] database - Written. Opening the sstable...
TRACE 2016-07-15 17:41:01,310 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: sealing
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=224
TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=224
TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=224
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=225
TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=225
TRACE 2016-07-15 17:41:01,324 [shard 0] query_processor - execute_internal: ""INSERT INTO system.peers (peer, schema_version) VALUES (?, ?)"" (127.0.0.3, 67d1e0b4-d995-38fa-9e92-075d046a09fe)
TRACE 2016-07-15 17:41:01,324 [shard 0] database - apply {system.peers key {key: pk{00047f000003}, token:-4598924402677416620} data {mutation_partition: {tombstone: none} () static {row: } clustered {rows_entry: ckp{} {deletable_row: {row_marker 1468597261324000 0 0} {tombstone: none} {row: {column: 6 01000537ae72144e
e067d1e0b4d99538fa9e92075d046a09fe}}}}}}
DEBUG 2016-07-15 17:41:01,325 [shard 0] migration_manager - Submitting migration task for 127.0.0.3
TRACE 2016-07-15 17:41:01,327 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,327 [shard 0] seastar - flush done, id=225
DEBUG 2016-07-15 17:41:01,327 [shard 0] sstable - SSTable with generation 85 of system.schema_triggers was sealed successfully.
DEBUG 2016-07-15 17:41:01,327 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db done
INFO 2016-07-15 17:41:01,327 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema
_usertypes-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db:level=
0, ]
DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db replaced
DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db done
DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db replaced
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Reading new schema
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging keyspaces
INFO 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspace testxyz
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging tables
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging types
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspaces
TRACE 2016-07-15 17:41:01,329 [shard 0] schema_tables - Schema merged
```
",True,"Schema change statements are slow due to memtable flush latency - _Installation details_
Scylla version (or git commit hash): any
Executing DDL statements takes significantly more time on Scylla than on Cassandra. For instance, `drop keyspace` takes about a second on an idle S\* server. I traced that down to latency of the flush of schema tables. The `create keyspace` statement is noticeably faster than `drop keyspace` because it flushes much fewer tables.
It looks like the latency comes mainly from a large number of `fdatasync` calls which we execute sequentially during schema tables flush (I counted 77 calls). When I disable them, `drop keyspace` time drops down to about 100ms. Maybe some of them could be avoided or parallelized.
Here's a detailed trace during `drop keyspace`:
```
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Taking the merge lock
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Took the merge lock
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Reading old schema
TRACE 2016-07-15 17:41:00,019 [shard 0] schema_tables - Applying schema changes
TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_keyspaces key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,019 [shard 0] database - apply {system.schema_columnfamilies key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_columns key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_triggers key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.schema_usertypes key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] database - apply {system.IndexInfo key {key: pk{00077465737478797a}, token:9106523439940282999} data {mutation_partition: {tombstone: timestamp=1468597260019000, deletion_time=1468597260} () static {row: } clustered }}
TRACE 2016-07-15 17:41:00,020 [shard 0] schema_tables - Flushing {9f5c6374-d485-3229-9a0a-5094af9ad1e3, b0f22357-4458-3cdb-9631-c43e59ce3676, 0359bc71-7123-3ee1-9a4a-b9dfb11fc125, 296e9c04-9bec-3085-827d-c17d3df2122a, 3aa75225-4f82-350b-8d5c-430fa221fa0a, 45f5b360-24bc-3f83-a363-1034ea4fa697}
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of IndexInfo.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db
DEBUG 2016-07-15 17:41:00,020 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-TOC.txt.tmp
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_keyspaces.system, partitions: 1, occupancy: 0.14%, 376 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_triggers.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columns.system, partitions: 2, occupancy: 31.28%, 81992 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_usertypes.system, partitions: 2, occupancy: 0.29%, 752 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Sealing active memtable of schema_columnfamilies.system, partitions: 2, occupancy: 14.61%, 38312 / 262144 [B]
DEBUG 2016-07-15 17:41:00,020 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db
DEBUG 2016-07-15 17:41:00,021 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=149
TRACE 2016-07-15 17:41:00,022 seastar - running fdatasync() from 0 id=149
TRACE 2016-07-15 17:41:00,022 [shard 0] seastar - starting flush, id=150
TRACE 2016-07-15 17:41:00,066 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,066 seastar - running fdatasync() from 0 id=150
TRACE 2016-07-15 17:41:00,066 [shard 0] seastar - flush done, id=149
TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=150
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=151
TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=151
TRACE 2016-07-15 17:41:00,077 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - flush done, id=151
TRACE 2016-07-15 17:41:00,077 [shard 0] seastar - starting flush, id=152
TRACE 2016-07-15 17:41:00,077 seastar - running fdatasync() from 0 id=152
TRACE 2016-07-15 17:41:00,078 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,078 [shard 0] seastar - flush done, id=152
TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: end of stream
TRACE 2016-07-15 17:41:00,078 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: end of stream
TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=153
TRACE 2016-07-15 17:41:00,085 seastar - running fdatasync() from 0 id=153
TRACE 2016-07-15 17:41:00,085 [shard 0] seastar - starting flush, id=154
TRACE 2016-07-15 17:41:00,113 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,113 seastar - running fdatasync() from 0 id=154
TRACE 2016-07-15 17:41:00,113 [shard 0] seastar - flush done, id=153
TRACE 2016-07-15 17:41:00,125 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,126 [shard 0] seastar - flush done, id=154
TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,126 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=155
TRACE 2016-07-15 17:41:00,130 seastar - running fdatasync() from 0 id=155
TRACE 2016-07-15 17:41:00,130 [shard 0] seastar - starting flush, id=156
TRACE 2016-07-15 17:41:00,142 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,142 seastar - running fdatasync() from 0 id=156
TRACE 2016-07-15 17:41:00,142 [shard 0] seastar - flush done, id=155
TRACE 2016-07-15 17:41:00,156 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,156 [shard 0] seastar - flush done, id=156
DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Digest.sha1
DEBUG 2016-07-15 17:41:00,156 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Digest.sha1
TRACE 2016-07-15 17:41:00,163 [shard 0] seastar - starting flush, id=157
TRACE 2016-07-15 17:41:00,163 seastar - running fdatasync() from 0 id=157
TRACE 2016-07-15 17:41:00,164 [shard 0] seastar - starting flush, id=158
TRACE 2016-07-15 17:41:00,192 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,192 seastar - running fdatasync() from 0 id=158
TRACE 2016-07-15 17:41:00,192 [shard 0] seastar - flush done, id=157
TRACE 2016-07-15 17:41:00,207 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,207 [shard 0] seastar - flush done, id=158
DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-CRC.db
DEBUG 2016-07-15 17:41:00,207 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-CRC.db
TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=159
TRACE 2016-07-15 17:41:00,213 seastar - running fdatasync() from 0 id=159
TRACE 2016-07-15 17:41:00,213 [shard 0] seastar - starting flush, id=160
TRACE 2016-07-15 17:41:00,239 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,239 seastar - running fdatasync() from 0 id=160
TRACE 2016-07-15 17:41:00,239 [shard 0] seastar - flush done, id=159
TRACE 2016-07-15 17:41:00,244 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,244 [shard 0] seastar - flush done, id=160
TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Summary.db
TRACE 2016-07-15 17:41:00,244 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,244 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Summary.db
TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=161
TRACE 2016-07-15 17:41:00,248 seastar - running fdatasync() from 0 id=161
TRACE 2016-07-15 17:41:00,248 [shard 0] seastar - starting flush, id=162
TRACE 2016-07-15 17:41:00,273 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,273 seastar - running fdatasync() from 0 id=162
TRACE 2016-07-15 17:41:00,273 [shard 0] seastar - flush done, id=161
TRACE 2016-07-15 17:41:00,286 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,286 [shard 0] seastar - flush done, id=162
DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Filter.db
DEBUG 2016-07-15 17:41:00,286 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Filter.db
TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=163
TRACE 2016-07-15 17:41:00,291 seastar - running fdatasync() from 0 id=163
TRACE 2016-07-15 17:41:00,291 [shard 0] seastar - starting flush, id=164
TRACE 2016-07-15 17:41:00,317 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,317 seastar - running fdatasync() from 0 id=164
TRACE 2016-07-15 17:41:00,317 [shard 0] seastar - flush done, id=163
TRACE 2016-07-15 17:41:00,328 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,328 [shard 0] seastar - flush done, id=164
DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Statistics.db
DEBUG 2016-07-15 17:41:00,329 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Statistics.db
TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=165
TRACE 2016-07-15 17:41:00,334 seastar - running fdatasync() from 0 id=165
TRACE 2016-07-15 17:41:00,334 [shard 0] seastar - starting flush, id=166
TRACE 2016-07-15 17:41:00,359 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,359 seastar - running fdatasync() from 0 id=166
TRACE 2016-07-15 17:41:00,359 [shard 0] seastar - flush done, id=165
TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=166
TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db: sealing
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=167
TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=167
TRACE 2016-07-15 17:41:00,367 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,367 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db: sealing
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - flush done, id=167
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=168
TRACE 2016-07-15 17:41:00,367 seastar - running fdatasync() from 0 id=168
TRACE 2016-07-15 17:41:00,367 [shard 0] seastar - starting flush, id=169
TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 id=169
TRACE 2016-07-15 17:41:00,386 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=168
TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - flush done, id=169
TRACE 2016-07-15 17:41:00,386 [shard 0] seastar - starting flush, id=170
TRACE 2016-07-15 17:41:00,386 seastar - running fdatasync() from 0 id=170
DEBUG 2016-07-15 17:41:00,386 [shard 0] sstable - SSTable with generation 182 of system.schema_keyspaces was sealed successfully.
TRACE 2016-07-15 17:41:00,386 [shard 0] database - Written. Opening the sstable...
TRACE 2016-07-15 17:41:00,395 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,395 [shard 0] seastar - flush done, id=170
DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - SSTable with generation 82 of system.IndexInfo was sealed successfully.
TRACE 2016-07-15 17:41:00,396 [shard 0] database - Written. Opening the sstable...
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db done
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-182-Data.db replaced
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db
DEBUG 2016-07-15 17:41:00,396 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-TOC.txt.tmp
DEBUG 2016-07-15 17:41:00,396 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db done
DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/system-IndexInfo-ka-82-Data.db replaced
DEBUG 2016-07-15 17:41:00,397 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db
DEBUG 2016-07-15 17:41:00,397 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,397 [shard 0] seastar - starting flush, id=171
TRACE 2016-07-15 17:41:00,397 seastar - running fdatasync() from 0 id=171
TRACE 2016-07-15 17:41:00,398 [shard 0] seastar - starting flush, id=172
TRACE 2016-07-15 17:41:00,415 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,415 seastar - running fdatasync() from 0 id=172
TRACE 2016-07-15 17:41:00,415 [shard 0] seastar - flush done, id=171
TRACE 2016-07-15 17:41:00,424 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - flush done, id=172
TRACE 2016-07-15 17:41:00,424 [shard 0] seastar - starting flush, id=173
TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=173
TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=173
TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - starting flush, id=174
TRACE 2016-07-15 17:41:00,425 seastar - running fdatasync() from 0 id=174
TRACE 2016-07-15 17:41:00,425 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,425 [shard 0] seastar - flush done, id=174
TRACE 2016-07-15 17:41:00,425 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: end of stream
TRACE 2016-07-15 17:41:00,426 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: end of stream
TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=175
TRACE 2016-07-15 17:41:00,431 seastar - running fdatasync() from 0 id=175
TRACE 2016-07-15 17:41:00,431 [shard 0] seastar - starting flush, id=176
TRACE 2016-07-15 17:41:00,456 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,456 seastar - running fdatasync() from 0 id=176
TRACE 2016-07-15 17:41:00,456 [shard 0] seastar - flush done, id=175
TRACE 2016-07-15 17:41:00,464 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,464 [shard 0] seastar - flush done, id=176
TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,464 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=177
TRACE 2016-07-15 17:41:00,471 seastar - running fdatasync() from 0 id=177
TRACE 2016-07-15 17:41:00,471 [shard 0] seastar - starting flush, id=178
TRACE 2016-07-15 17:41:00,490 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,490 seastar - running fdatasync() from 0 id=178
TRACE 2016-07-15 17:41:00,490 [shard 0] seastar - flush done, id=177
TRACE 2016-07-15 17:41:00,497 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,497 [shard 0] seastar - flush done, id=178
DEBUG 2016-07-15 17:41:00,497 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Digest.sha1
DEBUG 2016-07-15 17:41:00,498 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Digest.sha1
TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - starting flush, id=179
TRACE 2016-07-15 17:41:00,500 seastar - running fdatasync() from 0 id=179
TRACE 2016-07-15 17:41:00,500 [shard 0] seastar - starting flush, id=180
TRACE 2016-07-15 17:41:00,528 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,528 seastar - running fdatasync() from 0 id=180
TRACE 2016-07-15 17:41:00,528 [shard 0] seastar - flush done, id=179
TRACE 2016-07-15 17:41:00,540 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,540 [shard 0] seastar - flush done, id=180
DEBUG 2016-07-15 17:41:00,540 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-CRC.db
DEBUG 2016-07-15 17:41:00,541 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-CRC.db
TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=181
TRACE 2016-07-15 17:41:00,547 seastar - running fdatasync() from 0 id=181
TRACE 2016-07-15 17:41:00,547 [shard 0] seastar - starting flush, id=182
TRACE 2016-07-15 17:41:00,565 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,565 seastar - running fdatasync() from 0 id=182
TRACE 2016-07-15 17:41:00,565 [shard 0] seastar - flush done, id=181
TRACE 2016-07-15 17:41:00,575 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,575 [shard 0] seastar - flush done, id=182
TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Summary.db
TRACE 2016-07-15 17:41:00,575 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:00,575 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Summary.db
TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=183
TRACE 2016-07-15 17:41:00,581 seastar - running fdatasync() from 0 id=183
TRACE 2016-07-15 17:41:00,581 [shard 0] seastar - starting flush, id=184
TRACE 2016-07-15 17:41:00,607 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,607 seastar - running fdatasync() from 0 id=184
TRACE 2016-07-15 17:41:00,607 [shard 0] seastar - flush done, id=183
TRACE 2016-07-15 17:41:00,616 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,616 [shard 0] seastar - flush done, id=184
DEBUG 2016-07-15 17:41:00,616 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Filter.db
DEBUG 2016-07-15 17:41:00,617 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Filter.db
TRACE 2016-07-15 17:41:00,624 [shard 0] seastar - starting flush, id=185
TRACE 2016-07-15 17:41:00,625 seastar - running fdatasync() from 0 id=185
TRACE 2016-07-15 17:41:00,625 [shard 0] seastar - starting flush, id=186
TRACE 2016-07-15 17:41:00,657 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,657 seastar - running fdatasync() from 0 id=186
TRACE 2016-07-15 17:41:00,657 [shard 0] seastar - flush done, id=185
TRACE 2016-07-15 17:41:00,673 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,673 [shard 0] seastar - flush done, id=186
DEBUG 2016-07-15 17:41:00,673 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Statistics.db
DEBUG 2016-07-15 17:41:00,674 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Statistics.db
TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=187
TRACE 2016-07-15 17:41:00,680 seastar - running fdatasync() from 0 id=187
TRACE 2016-07-15 17:41:00,680 [shard 0] seastar - starting flush, id=188
TRACE 2016-07-15 17:41:00,703 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,704 seastar - running fdatasync() from 0 id=188
TRACE 2016-07-15 17:41:00,704 [shard 0] seastar - flush done, id=187
TRACE 2016-07-15 17:41:00,712 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,712 [shard 0] seastar - flush done, id=188
TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db: sealing
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=189
TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=189
TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,713 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db: sealing
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=189
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=190
TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=190
TRACE 2016-07-15 17:41:00,713 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - flush done, id=190
TRACE 2016-07-15 17:41:00,713 [shard 0] seastar - starting flush, id=191
TRACE 2016-07-15 17:41:00,713 seastar - running fdatasync() from 0 id=191
TRACE 2016-07-15 17:41:00,728 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - flush done, id=191
TRACE 2016-07-15 17:41:00,728 [shard 0] seastar - starting flush, id=192
TRACE 2016-07-15 17:41:00,728 seastar - running fdatasync() from 0 id=192
TRACE 2016-07-15 17:41:00,749 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,749 [shard 0] seastar - flush done, id=192
DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 84 of system.schema_triggers was sealed successfully.
TRACE 2016-07-15 17:41:00,749 [shard 0] database - Written. Opening the sstable...
DEBUG 2016-07-15 17:41:00,749 [shard 0] sstable - SSTable with generation 95 of system.schema_columns was sealed successfully.
TRACE 2016-07-15 17:41:00,750 [shard 0] database - Written. Opening the sstable...
DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db done
INFO 2016-07-15 17:41:00,750 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_tr
iggers-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db:level=0, ]
DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-84-Data.db replaced
DEBUG 2016-07-15 17:41:00,750 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db
DEBUG 2016-07-15 17:41:00,750 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-TOC.txt.tmp
DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db done
DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columns-296e9c049bec3085827dc17d3df2122a/system-schema_columns-ka-95-Data.db replaced
DEBUG 2016-07-15 17:41:00,751 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db
DEBUG 2016-07-15 17:41:00,751 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=193
TRACE 2016-07-15 17:41:00,753 seastar - running fdatasync() from 0 id=193
TRACE 2016-07-15 17:41:00,753 [shard 0] seastar - starting flush, id=194
DEBUG 2016-07-15 17:41:00,753 [shard 0] sstable - Writing TOC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-TOC.txt.tmp
TRACE 2016-07-15 17:41:00,772 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,772 seastar - running fdatasync() from 0 id=194
TRACE 2016-07-15 17:41:00,772 [shard 0] seastar - flush done, id=193
TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - flush done, id=194
TRACE 2016-07-15 17:41:00,786 [shard 0] seastar - starting flush, id=195
TRACE 2016-07-15 17:41:00,786 seastar - running fdatasync() from 0 id=195
TRACE 2016-07-15 17:41:00,786 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=195
TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - starting flush, id=196
TRACE 2016-07-15 17:41:00,787 seastar - running fdatasync() from 0 id=196
TRACE 2016-07-15 17:41:00,787 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,787 [shard 0] seastar - flush done, id=196
TRACE 2016-07-15 17:41:00,787 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: end of stream
TRACE 2016-07-15 17:41:00,788 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: end of stream
TRACE 2016-07-15 17:41:00,806 [shard 0] seastar - starting flush, id=197
TRACE 2016-07-15 17:41:00,809 seastar - running fdatasync() from 0 id=197
TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=198
TRACE 2016-07-15 17:41:00,809 [shard 0] seastar - starting flush, id=199
TRACE 2016-07-15 17:41:00,867 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,867 seastar - running fdatasync() from 0 id=198
TRACE 2016-07-15 17:41:00,867 [shard 0] seastar - flush done, id=197
TRACE 2016-07-15 17:41:00,873 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,873 seastar - running fdatasync() from 0 id=199
TRACE 2016-07-15 17:41:00,873 [shard 0] seastar - flush done, id=198
TRACE 2016-07-15 17:41:00,884 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=199
TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - starting flush, id=200
TRACE 2016-07-15 17:41:00,885 seastar - running fdatasync() from 0 id=200
TRACE 2016-07-15 17:41:00,885 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,885 [shard 0] seastar - flush done, id=200
TRACE 2016-07-15 17:41:00,885 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: end of stream
TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=201
TRACE 2016-07-15 17:41:00,887 seastar - running fdatasync() from 0 id=201
TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=202
TRACE 2016-07-15 17:41:00,887 [shard 0] seastar - starting flush, id=203
TRACE 2016-07-15 17:41:00,920 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,920 seastar - running fdatasync() from 0 id=202
TRACE 2016-07-15 17:41:00,920 [shard 0] seastar - flush done, id=201
TRACE 2016-07-15 17:41:00,947 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,947 seastar - running fdatasync() from 0 id=203
TRACE 2016-07-15 17:41:00,956 [shard 0] seastar - flush done, id=202
TRACE 2016-07-15 17:41:00,963 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,963 [shard 0] seastar - flush done, id=203
DEBUG 2016-07-15 17:41:00,963 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Digest.sha1
DEBUG 2016-07-15 17:41:00,964 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Digest.sha1
TRACE 2016-07-15 17:41:00,964 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after consume_end_of_stream()
TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=204
TRACE 2016-07-15 17:41:00,968 seastar - running fdatasync() from 0 id=204
TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=205
TRACE 2016-07-15 17:41:00,968 [shard 0] seastar - starting flush, id=206
TRACE 2016-07-15 17:41:00,986 seastar - fdatasync() done
TRACE 2016-07-15 17:41:00,986 seastar - running fdatasync() from 0 id=205
TRACE 2016-07-15 17:41:00,986 [shard 0] seastar - flush done, id=204
TRACE 2016-07-15 17:41:01,002 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,002 seastar - running fdatasync() from 0 id=206
TRACE 2016-07-15 17:41:01,006 [shard 0] seastar - flush done, id=205
TRACE 2016-07-15 17:41:01,007 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,007 [shard 0] seastar - flush done, id=206
DEBUG 2016-07-15 17:41:01,007 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-CRC.db
DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing Digest file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Digest.sha1
DEBUG 2016-07-15 17:41:01,008 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-CRC.db
TRACE 2016-07-15 17:41:01,009 [shard 0] seastar - starting flush, id=207
TRACE 2016-07-15 17:41:01,016 [shard 0] seastar - starting flush, id=208
TRACE 2016-07-15 17:41:01,022 seastar - running fdatasync() from 0 id=207
TRACE 2016-07-15 17:41:01,022 [shard 0] seastar - starting flush, id=209
TRACE 2016-07-15 17:41:01,047 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,047 seastar - running fdatasync() from 0 id=208
TRACE 2016-07-15 17:41:01,056 [shard 0] seastar - flush done, id=207
TRACE 2016-07-15 17:41:01,062 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,062 seastar - running fdatasync() from 0 id=209
TRACE 2016-07-15 17:41:01,062 [shard 0] seastar - flush done, id=208
TRACE 2016-07-15 17:41:01,076 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,076 [shard 0] seastar - flush done, id=209
DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing CRC file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-CRC.db
TRACE 2016-07-15 17:41:01,076 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:01,076 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Summary.db
TRACE 2016-07-15 17:41:01,077 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:01,077 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Summary.db
TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=210
TRACE 2016-07-15 17:41:01,086 seastar - running fdatasync() from 0 id=210
TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=211
TRACE 2016-07-15 17:41:01,086 [shard 0] seastar - starting flush, id=212
TRACE 2016-07-15 17:41:01,118 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,118 seastar - running fdatasync() from 0 id=211
TRACE 2016-07-15 17:41:01,118 [shard 0] seastar - flush done, id=210
TRACE 2016-07-15 17:41:01,125 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,125 seastar - running fdatasync() from 0 id=212
TRACE 2016-07-15 17:41:01,127 [shard 0] seastar - flush done, id=211
TRACE 2016-07-15 17:41:01,137 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,137 [shard 0] seastar - flush done, id=212
TRACE 2016-07-15 17:41:01,137 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: after finish_file_writer()
DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Summary.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Summary.db
DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Filter.db
DEBUG 2016-07-15 17:41:01,137 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Filter.db
TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=213
TRACE 2016-07-15 17:41:01,143 seastar - running fdatasync() from 0 id=213
TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=214
TRACE 2016-07-15 17:41:01,143 [shard 0] seastar - starting flush, id=215
TRACE 2016-07-15 17:41:01,175 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,175 seastar - running fdatasync() from 0 id=214
TRACE 2016-07-15 17:41:01,175 [shard 0] seastar - flush done, id=213
TRACE 2016-07-15 17:41:01,184 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,184 seastar - running fdatasync() from 0 id=215
TRACE 2016-07-15 17:41:01,188 [shard 0] seastar - flush done, id=214
TRACE 2016-07-15 17:41:01,194 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,194 [shard 0] seastar - flush done, id=215
DEBUG 2016-07-15 17:41:01,194 [shard 0] sstable - Writing Filter.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Filter.db
DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Statistics.db
DEBUG 2016-07-15 17:41:01,195 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Statistics.db
TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=216
TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=217
TRACE 2016-07-15 17:41:01,201 seastar - running fdatasync() from 0 id=216
TRACE 2016-07-15 17:41:01,201 [shard 0] seastar - starting flush, id=218
TRACE 2016-07-15 17:41:01,235 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,235 seastar - running fdatasync() from 0 id=217
TRACE 2016-07-15 17:41:01,238 [shard 0] seastar - flush done, id=216
TRACE 2016-07-15 17:41:01,243 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,243 seastar - running fdatasync() from 0 id=218
TRACE 2016-07-15 17:41:01,243 [shard 0] seastar - flush done, id=217
TRACE 2016-07-15 17:41:01,255 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,255 [shard 0] seastar - flush done, id=218
DEBUG 2016-07-15 17:41:01,255 [shard 0] sstable - Writing Statistics.db file /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Statistics.db
TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db: sealing
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=219
TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=219
TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,256 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db: sealing
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=219
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=220
TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=220
TRACE 2016-07-15 17:41:01,256 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - flush done, id=220
TRACE 2016-07-15 17:41:01,256 [shard 0] seastar - starting flush, id=221
TRACE 2016-07-15 17:41:01,256 seastar - running fdatasync() from 0 id=221
TRACE 2016-07-15 17:41:01,280 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - flush done, id=221
TRACE 2016-07-15 17:41:01,280 [shard 0] seastar - starting flush, id=222
TRACE 2016-07-15 17:41:01,281 seastar - running fdatasync() from 0 id=222
TRACE 2016-07-15 17:41:01,281 [shard 0] seastar - starting flush, id=223
TRACE 2016-07-15 17:41:01,293 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,294 seastar - running fdatasync() from 0 id=223
TRACE 2016-07-15 17:41:01,294 [shard 0] seastar - flush done, id=222
DEBUG 2016-07-15 17:41:01,294 [shard 0] sstable - SSTable with generation 84 of system.schema_usertypes was sealed successfully.
TRACE 2016-07-15 17:41:01,294 [shard 0] database - Written. Opening the sstable...
TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=223
DEBUG 2016-07-15 17:41:01,310 [shard 0] sstable - SSTable with generation 95 of system.schema_columnfamilies was sealed successfully.
TRACE 2016-07-15 17:41:01,310 [shard 0] database - Written. Opening the sstable...
TRACE 2016-07-15 17:41:01,310 [shard 0] sstable - /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_triggers-0359bc7171233ee19a4ab9dfb11fc125/system-schema_triggers-ka-85-Data.db: sealing
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=224
TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=224
TRACE 2016-07-15 17:41:01,310 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - flush done, id=224
TRACE 2016-07-15 17:41:01,310 [shard 0] seastar - starting flush, id=225
TRACE 2016-07-15 17:41:01,310 seastar - running fdatasync() from 0 id=225
TRACE 2016-07-15 17:41:01,324 [shard 0] query_processor - execute_internal: ""INSERT INTO system.peers (peer, schema_version) VALUES (?, ?)"" (127.0.0.3, 67d1e0b4-d995-38fa-9e92-075d046a09fe)
TRACE 2016-07-15 17:41:01,324 [shard 0] database - apply {system.peers key {key: pk{00047f000003}, token:-4598924402677416620} data {mutation_partition: {tombstone: none} () static {row: } clustered {rows_entry: ckp{} {deletable_row: {row_marker 1468597261324000 0 0} {tombstone: none} {row: {column: 6 01000537ae72144e
e067d1e0b4d99538fa9e92075d046a09fe}}}}}}
DEBUG 2016-07-15 17:41:01,325 [shard 0] migration_manager - Submitting migration task for 127.0.0.3
TRACE 2016-07-15 17:41:01,327 seastar - fdatasync() done
TRACE 2016-07-15 17:41:01,327 [shard 0] seastar - flush done, id=225
DEBUG 2016-07-15 17:41:01,327 [shard 0] sstable - SSTable with generation 85 of system.schema_triggers was sealed successfully.
DEBUG 2016-07-15 17:41:01,327 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db done
INFO 2016-07-15 17:41:01,327 [shard 0] compaction - Compacting [/home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-81-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema
_usertypes-ka-82-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-83-Data.db:level=0, /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db:level=
0, ]
DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/system-schema_usertypes-ka-84-Data.db replaced
DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Flushing to /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db done
DEBUG 2016-07-15 17:41:01,328 [shard 0] database - Memtable for /home/tgrabiec/.ccm/scylla-3/node1/data/system/schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697/system-schema_columnfamilies-ka-95-Data.db replaced
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Reading new schema
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging keyspaces
INFO 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspace testxyz
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging tables
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Merging types
TRACE 2016-07-15 17:41:01,328 [shard 0] schema_tables - Dropping keyspaces
TRACE 2016-07-15 17:41:01,329 [shard 0] schema_tables - Schema merged
```
",0,schema change statements are slow due to memtable flush latency installation details scylla version or git commit hash any executing ddl statements takes significantly more time on scylla than on cassandra for instance drop keyspace takes about a second on an idle s server i traced that down to latency of the flush of schema tables the create keyspace statement is noticeably faster than drop keyspace because it flushes much fewer tables it looks like the latency comes mainly from a large number of fdatasync calls which we execute sequentially during schema tables flush i counted calls when i disable them drop keyspace time drops down to about maybe some of them could be avoided or parallelized here s a detailed trace during drop keyspace trace schema tables taking the merge lock trace schema tables took the merge lock trace schema tables reading old schema trace schema tables applying schema changes trace database apply system schema keyspaces key key pk token data mutation partition tombstone timestamp deletion time static row clustered trace database apply system schema columnfamilies key key pk token data mutation partition tombstone timestamp deletion time static row clustered trace database apply system schema columns key key pk token data mutation partition tombstone timestamp deletion time static row clustered trace database apply system schema triggers key key pk token data mutation partition tombstone timestamp deletion time static row clustered trace database apply system schema usertypes key key pk token data mutation partition tombstone timestamp deletion time static row clustered trace database apply system indexinfo key key pk token data mutation partition tombstone timestamp deletion time static row clustered trace schema tables flushing debug database sealing active memtable of indexinfo system partitions occupancy debug database flushing to home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db debug sstable writing toc file home tgrabiec ccm scylla data system indexinfo system indexinfo ka toc txt tmp debug database sealing active memtable of schema keyspaces system partitions occupancy debug database sealing active memtable of schema triggers system partitions occupancy debug database sealing active memtable of schema columns system partitions occupancy debug database sealing active memtable of schema usertypes system partitions occupancy debug database sealing active memtable of schema columnfamilies system partitions occupancy debug database flushing to home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka toc txt tmp trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db end of stream trace sstable home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db after consume end of stream trace sstable home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db after consume end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing digest file home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka digest debug sstable writing digest file home tgrabiec ccm scylla data system indexinfo system indexinfo ka digest trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing crc file home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka crc db debug sstable writing crc file home tgrabiec ccm scylla data system indexinfo system indexinfo ka crc db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka summary db trace sstable home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system indexinfo system indexinfo ka summary db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing filter db file home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka filter db debug sstable writing filter db file home tgrabiec ccm scylla data system indexinfo system indexinfo ka filter db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing statistics db file home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka statistics db debug sstable writing statistics db file home tgrabiec ccm scylla data system indexinfo system indexinfo ka statistics db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db sealing trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace sstable home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db sealing trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id debug sstable sstable with generation of system schema keyspaces was sealed successfully trace database written opening the sstable trace seastar fdatasync done trace seastar flush done id debug sstable sstable with generation of system indexinfo was sealed successfully trace database written opening the sstable debug database flushing to home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db done debug database memtable for home tgrabiec ccm scylla data system schema keyspaces system schema keyspaces ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema triggers system schema triggers ka toc txt tmp debug database flushing to home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db done debug database memtable for home tgrabiec ccm scylla data system indexinfo system indexinfo ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema columns system schema columns ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema columns system schema columns ka toc txt tmp trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db end of stream trace sstable home tgrabiec ccm scylla data system schema columns system schema columns ka data db end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db after consume end of stream trace sstable home tgrabiec ccm scylla data system schema columns system schema columns ka data db after consume end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing digest file home tgrabiec ccm scylla data system schema triggers system schema triggers ka digest debug sstable writing digest file home tgrabiec ccm scylla data system schema columns system schema columns ka digest trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing crc file home tgrabiec ccm scylla data system schema triggers system schema triggers ka crc db debug sstable writing crc file home tgrabiec ccm scylla data system schema columns system schema columns ka crc db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka summary db trace sstable home tgrabiec ccm scylla data system schema columns system schema columns ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema columns system schema columns ka summary db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing filter db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka filter db debug sstable writing filter db file home tgrabiec ccm scylla data system schema columns system schema columns ka filter db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing statistics db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka statistics db debug sstable writing statistics db file home tgrabiec ccm scylla data system schema columns system schema columns ka statistics db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db sealing trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace sstable home tgrabiec ccm scylla data system schema columns system schema columns ka data db sealing trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id debug sstable sstable with generation of system schema triggers was sealed successfully trace database written opening the sstable debug sstable sstable with generation of system schema columns was sealed successfully trace database written opening the sstable debug database flushing to home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db done info compaction compacting home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level home tgrabiec ccm scylla data system schema triggers system schema tr iggers ka data db level home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db level debug database memtable for home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka toc txt tmp debug database flushing to home tgrabiec ccm scylla data system schema columns system schema columns ka data db done debug database memtable for home tgrabiec ccm scylla data system schema columns system schema columns ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db debug sstable writing toc file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka toc txt tmp trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id debug sstable writing toc file home tgrabiec ccm scylla data system schema triggers system schema triggers ka toc txt tmp trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db end of stream trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db after consume end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db after consume end of stream trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing digest file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka digest debug sstable writing digest file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka digest trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db after consume end of stream trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing crc file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka crc db debug sstable writing digest file home tgrabiec ccm scylla data system schema triggers system schema triggers ka digest debug sstable writing crc file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka crc db trace seastar starting flush id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing crc file home tgrabiec ccm scylla data system schema triggers system schema triggers ka crc db trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka summary db trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka summary db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db after finish file writer debug sstable writing summary db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka summary db debug sstable writing filter db file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka filter db debug sstable writing filter db file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka filter db trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing filter db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka filter db debug sstable writing statistics db file home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka statistics db debug sstable writing statistics db file home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka statistics db trace seastar starting flush id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id trace seastar fdatasync done trace seastar flush done id debug sstable writing statistics db file home tgrabiec ccm scylla data system schema triggers system schema triggers ka statistics db trace sstable home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db sealing trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace sstable home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db sealing trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace seastar starting flush id trace seastar fdatasync done trace seastar running fdatasync from id trace seastar flush done id debug sstable sstable with generation of system schema usertypes was sealed successfully trace database written opening the sstable trace seastar fdatasync done trace seastar flush done id debug sstable sstable with generation of system schema columnfamilies was sealed successfully trace database written opening the sstable trace sstable home tgrabiec ccm scylla data system schema triggers system schema triggers ka data db sealing trace seastar starting flush id trace seastar running fdatasync from id trace seastar fdatasync done trace seastar flush done id trace seastar starting flush id trace seastar running fdatasync from id trace query processor execute internal insert into system peers peer schema version values trace database apply system peers key key pk token data mutation partition tombstone none static row clustered rows entry ckp deletable row row marker tombstone none row column debug migration manager submitting migration task for trace seastar fdatasync done trace seastar flush done id debug sstable sstable with generation of system schema triggers was sealed successfully debug database flushing to home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db done info compaction compacting home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db level debug database memtable for home tgrabiec ccm scylla data system schema usertypes system schema usertypes ka data db replaced debug database flushing to home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db done debug database memtable for home tgrabiec ccm scylla data system schema columnfamilies system schema columnfamilies ka data db replaced trace schema tables reading new schema trace schema tables merging keyspaces info schema tables dropping keyspace testxyz trace schema tables merging tables trace schema tables merging types trace schema tables dropping keyspaces trace schema tables schema merged ,0
3517,13779975156.0,IssuesEvent,2020-10-08 14:23:27,exercism/python,https://api.github.com/repos/exercism/python,closed,CI: disable Travis-CI in favor of Github Actions,maintainer action required,"GitHub Actions has matured to the point that Travis is now obsolete in this repository.
@exercism/python Can one of you take care of this please?",True,"CI: disable Travis-CI in favor of Github Actions - GitHub Actions has matured to the point that Travis is now obsolete in this repository.
@exercism/python Can one of you take care of this please?",1,ci disable travis ci in favor of github actions github actions has matured to the point that travis is now obsolete in this repository exercism python can one of you take care of this please ,1
4787,24628453134.0,IssuesEvent,2022-10-16 20:17:20,centerofci/mathesar,https://api.github.com/repos/centerofci/mathesar,opened,RecursionError in records endpoint,type: bug work: backend status: ready restricted: maintainers,"## Description
I've been getting this error for a few tables in my environment.
* These tables have been created using the 'Create new table' button, not using file import.
* These tables have no rows.
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py"", line 47, in inner
response = get_response(request)
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py"", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File ""/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py"", line 54, in wrapped_view
return view_func(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py"", line 125, in view
return self.dispatch(request, *args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 509, in dispatch
response = self.handle_exception(exc)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 466, in handle_exception
response = exception_handler(exc, context)
File ""/code/mathesar/exception_handlers.py"", line 55, in mathesar_exception_handler
raise exc
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 506, in dispatch
response = handler(request, *args, **kwargs)
File ""/code/mathesar/api/db/viewsets/records.py"", line 67, in list
records = paginator.paginate_queryset(
File ""/code/mathesar/api/pagination.py"", line 82, in paginate_queryset
preview_metadata, preview_columns = get_preview_info(table.id)
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 70, in get_preview_info
fk_constraints = [
File ""/code/mathesar/utils/preview.py"", line 73, in
if table_constraint.type == ConstraintType.FOREIGN_KEY.value
File ""/code/mathesar/models/base.py"", line 774, in type
return constraint_utils.get_constraint_type_from_char(self._constraint_record['contype'])
File ""/code/mathesar/models/base.py"", line 766, in _constraint_record
return get_constraint_record_from_oid(self.oid, engine)
File ""/code/db/constraints/operations/select.py"", line 33, in get_constraint_record_from_oid
pg_constraint = get_pg_catalog_table(""pg_constraint"", engine, metadata=metadata)
File ""/code/db/utils.py"", line 92, in warning_ignored_func
return f(*args, **kwargs)
File ""/code/db/utils.py"", line 99, in get_pg_catalog_table
return sqlalchemy.Table(table_name, metadata, autoload_with=engine, schema='pg_catalog')
File """", line 2, in __new__
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py"", line 298, in warned
return fn(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 600, in __new__
metadata._remove_table(name, schema)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 70, in __exit__
compat.raise_(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 595, in __new__
table._init(name, metadata, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 670, in _init
self._autoload(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 705, in _autoload
conn_insp.reflect_table(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 774, in reflect_table
for col_d in self.get_columns(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 497, in get_columns
col_defs = self.dialect.get_columns(
File """", line 2, in get_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3585, in get_columns
table_oid = self.get_table_oid(
File """", line 2, in get_table_oid
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3462, in get_table_oid
c = connection.execute(s, dict(table_name=table_name, schema=schema))
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py"", line 280, in execute
return self._execute_20(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py"", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1451, in _execute_clauseelement
ret = self._execute_context(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1813, in _execute_context
self._handle_dbapi_exception(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1998, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1786, in _execute_context
result = context._setup_result_proxy()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1406, in _setup_result_proxy
result = self._setup_dml_or_text_result()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1494, in _setup_dml_or_text_result
result = _cursor.CursorResult(self, strategy, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1253, in __init__
metadata = self._init_metadata(context, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1310, in _init_metadata
metadata = metadata._adapt_to_context(context)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 136, in _adapt_to_context
invoked_statement._exported_columns_iterator()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 126, in _exported_columns_iterator
return iter(self.exported_columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 2870, in exported_columns
return self.selected_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 6354, in selected_columns
return ColumnCollection(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1128, in __init__
self._initial_populate(columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1131, in _initial_populate
self._populate_separate_keys(iter_)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1227, in _populate_separate_keys
self._colset.update(c for k, c in self._collection)
Exception Type: RecursionError at /api/db/v0/tables/2/records/
Exception Value: maximum recursion depth exceeded
```
I'm not sure about the cause and it's occuring consistently for me but unable to reproduce it on staging.",True,"RecursionError in records endpoint - ## Description
I've been getting this error for a few tables in my environment.
* These tables have been created using the 'Create new table' button, not using file import.
* These tables have no rows.
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py"", line 47, in inner
response = get_response(request)
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py"", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File ""/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py"", line 54, in wrapped_view
return view_func(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py"", line 125, in view
return self.dispatch(request, *args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 509, in dispatch
response = self.handle_exception(exc)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 466, in handle_exception
response = exception_handler(exc, context)
File ""/code/mathesar/exception_handlers.py"", line 55, in mathesar_exception_handler
raise exc
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 506, in dispatch
response = handler(request, *args, **kwargs)
File ""/code/mathesar/api/db/viewsets/records.py"", line 67, in list
records = paginator.paginate_queryset(
File ""/code/mathesar/api/pagination.py"", line 82, in paginate_queryset
preview_metadata, preview_columns = get_preview_info(table.id)
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 70, in get_preview_info
fk_constraints = [
File ""/code/mathesar/utils/preview.py"", line 73, in
if table_constraint.type == ConstraintType.FOREIGN_KEY.value
File ""/code/mathesar/models/base.py"", line 774, in type
return constraint_utils.get_constraint_type_from_char(self._constraint_record['contype'])
File ""/code/mathesar/models/base.py"", line 766, in _constraint_record
return get_constraint_record_from_oid(self.oid, engine)
File ""/code/db/constraints/operations/select.py"", line 33, in get_constraint_record_from_oid
pg_constraint = get_pg_catalog_table(""pg_constraint"", engine, metadata=metadata)
File ""/code/db/utils.py"", line 92, in warning_ignored_func
return f(*args, **kwargs)
File ""/code/db/utils.py"", line 99, in get_pg_catalog_table
return sqlalchemy.Table(table_name, metadata, autoload_with=engine, schema='pg_catalog')
File """", line 2, in __new__
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py"", line 298, in warned
return fn(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 600, in __new__
metadata._remove_table(name, schema)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 70, in __exit__
compat.raise_(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 595, in __new__
table._init(name, metadata, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 670, in _init
self._autoload(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 705, in _autoload
conn_insp.reflect_table(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 774, in reflect_table
for col_d in self.get_columns(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 497, in get_columns
col_defs = self.dialect.get_columns(
File """", line 2, in get_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3585, in get_columns
table_oid = self.get_table_oid(
File """", line 2, in get_table_oid
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3462, in get_table_oid
c = connection.execute(s, dict(table_name=table_name, schema=schema))
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py"", line 280, in execute
return self._execute_20(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py"", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1451, in _execute_clauseelement
ret = self._execute_context(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1813, in _execute_context
self._handle_dbapi_exception(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1998, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1786, in _execute_context
result = context._setup_result_proxy()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1406, in _setup_result_proxy
result = self._setup_dml_or_text_result()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1494, in _setup_dml_or_text_result
result = _cursor.CursorResult(self, strategy, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1253, in __init__
metadata = self._init_metadata(context, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1310, in _init_metadata
metadata = metadata._adapt_to_context(context)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 136, in _adapt_to_context
invoked_statement._exported_columns_iterator()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 126, in _exported_columns_iterator
return iter(self.exported_columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 2870, in exported_columns
return self.selected_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 6354, in selected_columns
return ColumnCollection(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1128, in __init__
self._initial_populate(columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1131, in _initial_populate
self._populate_separate_keys(iter_)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1227, in _populate_separate_keys
self._colset.update(c for k, c in self._collection)
Exception Type: RecursionError at /api/db/v0/tables/2/records/
Exception Value: maximum recursion depth exceeded
```
I'm not sure about the cause and it's occuring consistently for me but unable to reproduce it on staging.",1,recursionerror in records endpoint description i ve been getting this error for a few tables in my environment these tables have been created using the create new table button not using file import these tables have no rows environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file code mathesar api db viewsets records py line in list records paginator paginate queryset file code mathesar api pagination py line in paginate queryset preview metadata preview columns get preview info table id file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info fk constraints file code mathesar utils preview py line in if table constraint type constrainttype foreign key value file code mathesar models base py line in type return constraint utils get constraint type from char self constraint record file code mathesar models base py line in constraint record return get constraint record from oid self oid engine file code db constraints operations select py line in get constraint record from oid pg constraint get pg catalog table pg constraint engine metadata metadata file code db utils py line in warning ignored func return f args kwargs file code db utils py line in get pg catalog table return sqlalchemy table table name metadata autoload with engine schema pg catalog file line in new file usr local lib site packages sqlalchemy util deprecations py line in warned return fn args kwargs file usr local lib site packages sqlalchemy sql schema py line in new metadata remove table name schema file usr local lib site packages sqlalchemy util langhelpers py line in exit compat raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy sql schema py line in new table init name metadata args kw file usr local lib site packages sqlalchemy sql schema py line in init self autoload file usr local lib site packages sqlalchemy sql schema py line in autoload conn insp reflect table file usr local lib site packages sqlalchemy engine reflection py line in reflect table for col d in self get columns file usr local lib site packages sqlalchemy engine reflection py line in get columns col defs self dialect get columns file line in get columns file usr local lib site packages sqlalchemy engine reflection py line in cache ret fn self con args kw file usr local lib site packages sqlalchemy dialects postgresql base py line in get columns table oid self get table oid file line in get table oid file usr local lib site packages sqlalchemy engine reflection py line in cache ret fn self con args kw file usr local lib site packages sqlalchemy dialects postgresql base py line in get table oid c connection execute s dict table name table name schema schema file usr local lib site packages sqlalchemy future engine py line in execute return self execute file usr local lib site packages sqlalchemy engine base py line in execute return meth self args kwargs execution options file usr local lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement file usr local lib site packages sqlalchemy engine base py line in execute clauseelement ret self execute context file usr local lib site packages sqlalchemy engine base py line in execute context self handle dbapi exception file usr local lib site packages sqlalchemy engine base py line in handle dbapi exception util raise exc info with traceback exc info file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy engine base py line in execute context result context setup result proxy file usr local lib site packages sqlalchemy engine default py line in setup result proxy result self setup dml or text result file usr local lib site packages sqlalchemy engine default py line in setup dml or text result result cursor cursorresult self strategy cursor description file usr local lib site packages sqlalchemy engine cursor py line in init metadata self init metadata context cursor description file usr local lib site packages sqlalchemy engine cursor py line in init metadata metadata metadata adapt to context context file usr local lib site packages sqlalchemy engine cursor py line in adapt to context invoked statement exported columns iterator file usr local lib site packages sqlalchemy sql selectable py line in exported columns iterator return iter self exported columns file usr local lib site packages sqlalchemy sql selectable py line in exported columns return self selected columns file usr local lib site packages sqlalchemy util langhelpers py line in get obj dict result self fget obj file usr local lib site packages sqlalchemy sql selectable py line in selected columns return columncollection file usr local lib site packages sqlalchemy sql base py line in init self initial populate columns file usr local lib site packages sqlalchemy sql base py line in initial populate self populate separate keys iter file usr local lib site packages sqlalchemy sql base py line in populate separate keys self colset update c for k c in self collection exception type recursionerror at api db tables records exception value maximum recursion depth exceeded i m not sure about the cause and it s occuring consistently for me but unable to reproduce it on staging ,1
412281,12037643103.0,IssuesEvent,2020-04-13 22:20:12,department-of-veterans-affairs/caseflow,https://api.github.com/repos/department-of-veterans-affairs/caseflow,closed,Hearing Details layout updates,Priority: Medium Product: caseflow-hearings Stakeholder: BVA Team: Tango 💃,"### Description
Update the layout and information hierarchy of the Hearing Details page.
### Designs
Figma: https://www.figma.com/file/V87TZArfdurCGJiEjQ73ES/Virtual-Hearings?node-id=109%3A17309
### Acceptance criteria
- Hearing Details fields are re-arranged into the following sections separated by divider lines:
- section with `VLJ`, `Hearing Coordinator`, `Hearing Room` fields
- section with `Hearing Type` field
- section with Virtual Hearing Details (h3) heading; `VLJ Virtual Hearing Link`, `Veteran Email for Notifications`, and `POA/Representative Email for Notifications` fields; and the Email Notification History accordion/section
- section with `Waive 90 Day Evidence Hold` field
- section with `Notes` field
- The following Transcription Details headings are changed to h3:
- Transcription Problem
- Transcription Request
### Background/context/resources
This work has been broken out from Display more info about sent Virtual Hearings emails (#13370), which was based on usability testing with Hearing Coordinators (#12960)",1.0,"Hearing Details layout updates - ### Description
Update the layout and information hierarchy of the Hearing Details page.
### Designs
Figma: https://www.figma.com/file/V87TZArfdurCGJiEjQ73ES/Virtual-Hearings?node-id=109%3A17309
### Acceptance criteria
- Hearing Details fields are re-arranged into the following sections separated by divider lines:
- section with `VLJ`, `Hearing Coordinator`, `Hearing Room` fields
- section with `Hearing Type` field
- section with Virtual Hearing Details (h3) heading; `VLJ Virtual Hearing Link`, `Veteran Email for Notifications`, and `POA/Representative Email for Notifications` fields; and the Email Notification History accordion/section
- section with `Waive 90 Day Evidence Hold` field
- section with `Notes` field
- The following Transcription Details headings are changed to h3:
- Transcription Problem
- Transcription Request
### Background/context/resources
This work has been broken out from Display more info about sent Virtual Hearings emails (#13370), which was based on usability testing with Hearing Coordinators (#12960)",0,hearing details layout updates description update the layout and information hierarchy of the hearing details page designs figma acceptance criteria hearing details fields are re arranged into the following sections separated by divider lines section with vlj hearing coordinator hearing room fields section with hearing type field section with virtual hearing details heading vlj virtual hearing link veteran email for notifications and poa representative email for notifications fields and the email notification history accordion section section with waive day evidence hold field section with notes field the following transcription details headings are changed to transcription problem transcription request background context resources this work has been broken out from display more info about sent virtual hearings emails which was based on usability testing with hearing coordinators ,0
245592,20779671017.0,IssuesEvent,2022-03-16 13:45:33,vaop/vaop,https://api.github.com/repos/vaop/vaop,opened,[Admin][Administrators] Unit Testing,testing,Unit testing needs to be added to the `AdminCrudController` covering both admin operations as well as RBAC/Policy/Permission enforcement.,1.0,[Admin][Administrators] Unit Testing - Unit testing needs to be added to the `AdminCrudController` covering both admin operations as well as RBAC/Policy/Permission enforcement.,0, unit testing unit testing needs to be added to the admincrudcontroller covering both admin operations as well as rbac policy permission enforcement ,0
2261,7937339432.0,IssuesEvent,2018-07-09 12:35:16,DynamoRIO/drmemory,https://api.github.com/repos/DynamoRIO/drmemory,opened,use clang-format for automated formatting,Maintainability Type-Feature,"For DR we've gone to full clang-format: https://github.com/DynamoRIO/dynamorio/issues/2876
This issue covers doing the same for Dr. Memory.",True,"use clang-format for automated formatting - For DR we've gone to full clang-format: https://github.com/DynamoRIO/dynamorio/issues/2876
This issue covers doing the same for Dr. Memory.",1,use clang format for automated formatting for dr we ve gone to full clang format this issue covers doing the same for dr memory ,1
1269,5375433003.0,IssuesEvent,2017-02-23 04:46:23,wojno/movie_manager,https://api.github.com/repos/wojno/movie_manager,opened,"As an authenticated user, I want to edit a movie in my collection",Maintain Collection,"As an `authenticated user`, I want to edit a movie in my `collection` so that when I change my mind about a `rating` or make a mistake in the `format selected` it can be `adjusted`",True,"As an authenticated user, I want to edit a movie in my collection - As an `authenticated user`, I want to edit a movie in my `collection` so that when I change my mind about a `rating` or make a mistake in the `format selected` it can be `adjusted`",1,as an authenticated user i want to edit a movie in my collection as an authenticated user i want to edit a movie in my collection so that when i change my mind about a rating or make a mistake in the format selected it can be adjusted ,1
72875,31769573477.0,IssuesEvent,2023-09-12 10:53:30,gauravrs18/issue_onboarding,https://api.github.com/repos/gauravrs18/issue_onboarding,closed,"dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component",CX-account-services,"dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component",1.0,"dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component - dev-angular-style-account-services-new-connection-component-connect-component
-consumer-details-component
-application-component
-payment-component",0,dev angular style account services new connection component connect component consumer details component application component payment component dev angular style account services new connection component connect component consumer details component application component payment component,0
291336,25138559274.0,IssuesEvent,2022-11-09 20:50:20,istio/ztunnel,https://api.github.com/repos/istio/ztunnel,opened,istio/istio is tested with new zTunnel,area/testing P0 size/TBD,The istio/istio repo should have a blocking prow job that runs our ambient integration tests using the new zTunnel. This may replace or temporarily run alongside a job that tests the original Envoy implementation.,1.0,istio/istio is tested with new zTunnel - The istio/istio repo should have a blocking prow job that runs our ambient integration tests using the new zTunnel. This may replace or temporarily run alongside a job that tests the original Envoy implementation.,0,istio istio is tested with new ztunnel the istio istio repo should have a blocking prow job that runs our ambient integration tests using the new ztunnel this may replace or temporarily run alongside a job that tests the original envoy implementation ,0
66114,6989196901.0,IssuesEvent,2017-12-14 15:28:24,edenlabllc/ehealth.api,https://api.github.com/repos/edenlabllc/ehealth.api,closed,Implement password expiration & rotation policy,BE epic/Auth kind/task project/CR status/test,"Also, implement password expiration period: enforce password change every `PASSWORD_EXPIRATION_DAYS` (env variable) days.
if password is expired - do not allow user to login untill new password is set, expiry all the refresh tokens
do not allow to use 3 previously used passwords - save history
- [x] update .erd
- [x] update Mithrill apiary with new error codes
related to #1556 ",1.0,"Implement password expiration & rotation policy - Also, implement password expiration period: enforce password change every `PASSWORD_EXPIRATION_DAYS` (env variable) days.
if password is expired - do not allow user to login untill new password is set, expiry all the refresh tokens
do not allow to use 3 previously used passwords - save history
- [x] update .erd
- [x] update Mithrill apiary with new error codes
related to #1556 ",0,implement password expiration rotation policy also implement password expiration period enforce password change every password expiration days env variable days if password is expired do not allow user to login untill new password is set expiry all the refresh tokens do not allow to use previously used passwords save history update erd update mithrill apiary with new error codes related to ,0
3745,15764579442.0,IssuesEvent,2021-03-31 13:22:28,arcticicestudio/styleguide-javascript,https://api.github.com/repos/arcticicestudio/styleguide-javascript,opened,Update Node package dependencies & GitHub Action versions,context-workflow scope-compatibility scope-maintainability scope-quality scope-stability type-task,"In #32 all ESLint packages and dependencies have been updated to the latest version.
This issue updates all repository development packages and GitHub Actions to the latest versions and adapts to the changes:
- **Update to ESLint v7** — bump package version from [`v6.2.0` to `v7.23.0`][gh-eslint/eslint-comp-v6.2.0_v7.23.0]. See #32 and the [official v7 migration guide][esl-docs-guides-mig_v7] for more details.
- **Remove `--ext` option for ESLint tasks** — as of ESLint v7, [files matched by `overrides[].files` are now linted by default][esl-docs-guides-mig_v7#override_file_match] which makes it obsolete to explicitly define file extensions like `*.js`.
- [del-cli][gh-sindresorhus/del-cli] — Bump minimum version from [`v2.0.0` to `v3.0.1`][gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1].
- [eslint-config-prettier][gh-prettier/eslint-config-prettier] — Bump version from [`v6.1.0` to `v8.1.0`][gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0].
- [eslint-plugin-prettier][gh-prettier/eslint-plugin-prettier] — Bump minimum version from [`v3.1.0` to `v3.3.1`][gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1].
- [eslint-plugin-import][gh-benmosher/eslint-plugin-import] — Bump minimum version from [`v2.18.2` to `v2.22.1`][gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1].
- [husky][gh-typicode/husky] — Bump minimum version from [`v3.0.4` to `v6.0.0`][gh-typicode/husky-comp-v3.0.4_v6.0.0]. This also includes some breaking changes that require migrations. Run the official migration CLI to automatically migrate from v4 to v6: `npx husky-init && npm exec -- github:typicode/husky-4-to-6 --remove-v4-config`
- [lint-staged][gh-okonet/lint-staged] — Bump minimum version from [`v9.2.3` to `v10.5.4`][gh-okonet/lint-staged-comp-v9.2.3_v10.5.4].
- [prettier][gh-prettier/prettier] — Bump minimum version from [`v1.18.2` to `v2.2.1`][gh-prettier/prettier-comp-v1.18.2_v2.2.1].
- [remark-cli][gh-remarkjs/remark] — Bump minimum version from [`v7.0.0` to `v9.0.0`][gh-remarkjs/remark-comp-v7.0.0_v9.0.0].
[esl-docs-guides-mig_v7]: https://eslint.org/docs/user-guide/migrating-to-7.0.0
[esl-docs-guides-mig_v7#override_file_match]: https://eslint.org/docs/user-guide/migrating-to-7.0.0#lint-files-matched-by-overridesfiles-by-default
[gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1]: https://github.com/benmosher/eslint-plugin-import/compare/v2.18.2...v2.22.1
[gh-benmosher/eslint-plugin-import]: https://github.com/benmosher/eslint-plugin-import
[gh-eslint/eslint-comp-v6.2.0_v7.23.0]: https://github.com/eslint/eslint/compare/v6.2.0....v7.23.0
[gh-okonet/lint-staged-comp-v9.2.3_v10.5.4]: https://github.com/typicode/husky/compare/v9.2.3...v10.5.4
[gh-okonet/lint-staged]: https://github.com/okonet/lint-staged
[gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0]: https://github.com/prettier/eslint-config-prettier/compare/v6.1.0...v8.1.0
[gh-prettier/eslint-config-prettier]: https://github.com/prettier/eslint-config-prettier
[gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1]: https://github.com/prettier/eslint-plugin-prettier/compare/v3.1.0...v3.3.1
[gh-prettier/eslint-plugin-prettier]: https://github.com/prettier/eslint-plugin-prettier
[gh-prettier/prettier-comp-v1.18.2_v2.2.1]: https://github.com/typicode/husky/compare/v1.18.2...v2.2.1
[gh-prettier/prettier]: https://github.com/prettier/prettier
[gh-remarkjs/remark-comp-v7.0.0_v9.0.0]: https://github.com/typicode/husky/compare/v7.0.0...v9.0.0
[gh-remarkjs/remark]: https://github.com/remarkjs/remark/releases
[gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1]: https://github.com/sindresorhus/del-cli/compare/v2.0.0...v3.0.1
[gh-sindresorhus/del-cli]: https://github.com/sindresorhus/del-cli
[gh-typicode/husky-comp-v3.0.4_v6.0.0]: https://github.com/typicode/husky/compare/v3.0.4...v6.0.0
[gh-typicode/husky]: https://github.com/typicode/husky
",True,"Update Node package dependencies & GitHub Action versions - In #32 all ESLint packages and dependencies have been updated to the latest version.
This issue updates all repository development packages and GitHub Actions to the latest versions and adapts to the changes:
- **Update to ESLint v7** — bump package version from [`v6.2.0` to `v7.23.0`][gh-eslint/eslint-comp-v6.2.0_v7.23.0]. See #32 and the [official v7 migration guide][esl-docs-guides-mig_v7] for more details.
- **Remove `--ext` option for ESLint tasks** — as of ESLint v7, [files matched by `overrides[].files` are now linted by default][esl-docs-guides-mig_v7#override_file_match] which makes it obsolete to explicitly define file extensions like `*.js`.
- [del-cli][gh-sindresorhus/del-cli] — Bump minimum version from [`v2.0.0` to `v3.0.1`][gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1].
- [eslint-config-prettier][gh-prettier/eslint-config-prettier] — Bump version from [`v6.1.0` to `v8.1.0`][gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0].
- [eslint-plugin-prettier][gh-prettier/eslint-plugin-prettier] — Bump minimum version from [`v3.1.0` to `v3.3.1`][gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1].
- [eslint-plugin-import][gh-benmosher/eslint-plugin-import] — Bump minimum version from [`v2.18.2` to `v2.22.1`][gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1].
- [husky][gh-typicode/husky] — Bump minimum version from [`v3.0.4` to `v6.0.0`][gh-typicode/husky-comp-v3.0.4_v6.0.0]. This also includes some breaking changes that require migrations. Run the official migration CLI to automatically migrate from v4 to v6: `npx husky-init && npm exec -- github:typicode/husky-4-to-6 --remove-v4-config`
- [lint-staged][gh-okonet/lint-staged] — Bump minimum version from [`v9.2.3` to `v10.5.4`][gh-okonet/lint-staged-comp-v9.2.3_v10.5.4].
- [prettier][gh-prettier/prettier] — Bump minimum version from [`v1.18.2` to `v2.2.1`][gh-prettier/prettier-comp-v1.18.2_v2.2.1].
- [remark-cli][gh-remarkjs/remark] — Bump minimum version from [`v7.0.0` to `v9.0.0`][gh-remarkjs/remark-comp-v7.0.0_v9.0.0].
[esl-docs-guides-mig_v7]: https://eslint.org/docs/user-guide/migrating-to-7.0.0
[esl-docs-guides-mig_v7#override_file_match]: https://eslint.org/docs/user-guide/migrating-to-7.0.0#lint-files-matched-by-overridesfiles-by-default
[gh-benmosher/eslint-plugin-import-comp-v2.18.2_v2.22.1]: https://github.com/benmosher/eslint-plugin-import/compare/v2.18.2...v2.22.1
[gh-benmosher/eslint-plugin-import]: https://github.com/benmosher/eslint-plugin-import
[gh-eslint/eslint-comp-v6.2.0_v7.23.0]: https://github.com/eslint/eslint/compare/v6.2.0....v7.23.0
[gh-okonet/lint-staged-comp-v9.2.3_v10.5.4]: https://github.com/typicode/husky/compare/v9.2.3...v10.5.4
[gh-okonet/lint-staged]: https://github.com/okonet/lint-staged
[gh-prettier/eslint-config-prettier-comp-v6.1.0_v8.1.0]: https://github.com/prettier/eslint-config-prettier/compare/v6.1.0...v8.1.0
[gh-prettier/eslint-config-prettier]: https://github.com/prettier/eslint-config-prettier
[gh-prettier/eslint-plugin-prettier-comp-v3.1.0_v3.3.1]: https://github.com/prettier/eslint-plugin-prettier/compare/v3.1.0...v3.3.1
[gh-prettier/eslint-plugin-prettier]: https://github.com/prettier/eslint-plugin-prettier
[gh-prettier/prettier-comp-v1.18.2_v2.2.1]: https://github.com/typicode/husky/compare/v1.18.2...v2.2.1
[gh-prettier/prettier]: https://github.com/prettier/prettier
[gh-remarkjs/remark-comp-v7.0.0_v9.0.0]: https://github.com/typicode/husky/compare/v7.0.0...v9.0.0
[gh-remarkjs/remark]: https://github.com/remarkjs/remark/releases
[gh-sindresorhus/del-cli-comp-v2.0.0_v3.0.1]: https://github.com/sindresorhus/del-cli/compare/v2.0.0...v3.0.1
[gh-sindresorhus/del-cli]: https://github.com/sindresorhus/del-cli
[gh-typicode/husky-comp-v3.0.4_v6.0.0]: https://github.com/typicode/husky/compare/v3.0.4...v6.0.0
[gh-typicode/husky]: https://github.com/typicode/husky
",1,update node package dependencies github action versions in all eslint packages and dependencies have been updated to the latest version this issue updates all repository development packages and github actions to the latest versions and adapts to the changes update to eslint — bump package version from see and the for more details remove ext option for eslint tasks — as of eslint files are now linted by default which makes it obsolete to explicitly define file extensions like js — bump minimum version from — bump version from — bump minimum version from — bump minimum version from — bump minimum version from this also includes some breaking changes that require migrations run the official migration cli to automatically migrate from to npx husky init npm exec github typicode husky to remove config — bump minimum version from — bump minimum version from — bump minimum version from ,1
3529,13906845488.0,IssuesEvent,2020-10-20 11:50:20,grey-software/LinkedIn-Focus,https://api.github.com/repos/grey-software/LinkedIn-Focus,opened,🚀 Feature Request: Add the donate buttons to README.md,Domain: User Experience Role: Maintainer Type: Enhancement,"### Problem Overview 👁️🗨️
Users should be able to donate/sponsor to Grey Software via the donate buttons on README.md for LinkedIn-Focus.
### What would you like? 🧰
Add the three donate buttons (PayPal, GitHub Sponsors and open-collective) to README.md for LinkedIn-Focus. The button style should be exactly like the one that can be found on the 'Call to Donate' box when the Linked-In Focus extension is used.
The image below shows the GitHub sponsors and the PayPal button. You would also need to add the open-collective donate button along with these two to README.md.

### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
Here is a linked issue https://github.com/grey-software/LinkedIn-Focus/issues/35.
Here is a linked PR https://github.com/grey-software/LinkedIn-Focus/pull/27.
",True,"🚀 Feature Request: Add the donate buttons to README.md - ### Problem Overview 👁️🗨️
Users should be able to donate/sponsor to Grey Software via the donate buttons on README.md for LinkedIn-Focus.
### What would you like? 🧰
Add the three donate buttons (PayPal, GitHub Sponsors and open-collective) to README.md for LinkedIn-Focus. The button style should be exactly like the one that can be found on the 'Call to Donate' box when the Linked-In Focus extension is used.
The image below shows the GitHub sponsors and the PayPal button. You would also need to add the open-collective donate button along with these two to README.md.

### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
Here is a linked issue https://github.com/grey-software/LinkedIn-Focus/issues/35.
Here is a linked PR https://github.com/grey-software/LinkedIn-Focus/pull/27.
",1,🚀 feature request add the donate buttons to readme md problem overview 👁️🗨️ users should be able to donate sponsor to grey software via the donate buttons on readme md for linkedin focus what would you like 🧰 add the three donate buttons paypal github sponsors and open collective to readme md for linkedin focus the button style should be exactly like the one that can be found on the call to donate box when the linked in focus extension is used the image below shows the github sponsors and the paypal button you would also need to add the open collective donate button along with these two to readme md what alternatives have you considered 🔍 n a additional details ℹ️ here is a linked issue here is a linked pr ,1
5753,30491346221.0,IssuesEvent,2023-07-18 07:55:38,jupyter-naas/awesome-notebooks,https://api.github.com/repos/jupyter-naas/awesome-notebooks,closed,Mixpanel - Get Profile Event Activity,templates maintainer," This notebook returns the activity feed for specified users. It is usefull for organizations to track user activity and get insights from it.
",True,"Mixpanel - Get Profile Event Activity - This notebook returns the activity feed for specified users. It is usefull for organizations to track user activity and get insights from it.
",1,mixpanel get profile event activity this notebook returns the activity feed for specified users it is usefull for organizations to track user activity and get insights from it ,1
179634,13892235463.0,IssuesEvent,2020-10-19 11:54:38,CSOIreland/PxStat,https://api.github.com/repos/CSOIreland/PxStat,closed,[BUG] Invalid daily time range displayed in pill when Search or listing page used,bug fixed released tested,"**Describe the bug**
The incorrect daily time range is displayed for a table when the listing page or search option is used
**To Reproduce**
Searched for table CBM03 or selected from BERD Region listing
**Expected behavior**
Correct date should appear in the pill
**Screenshots**
Correct details taken form Last Updated page

Incorrect details taken form search or listing page option

",1.0,"[BUG] Invalid daily time range displayed in pill when Search or listing page used - **Describe the bug**
The incorrect daily time range is displayed for a table when the listing page or search option is used
**To Reproduce**
Searched for table CBM03 or selected from BERD Region listing
**Expected behavior**
Correct date should appear in the pill
**Screenshots**
Correct details taken form Last Updated page

Incorrect details taken form search or listing page option

",0, invalid daily time range displayed in pill when search or listing page used describe the bug the incorrect daily time range is displayed for a table when the listing page or search option is used to reproduce searched for table or selected from berd region listing expected behavior correct date should appear in the pill screenshots correct details taken form last updated page incorrect details taken form search or listing page option ,0
3493,13634716556.0,IssuesEvent,2020-09-25 00:41:46,amyjko/faculty,https://api.github.com/repos/amyjko/faculty,closed,Extract paper author name resolution,maintainability,It's currently in the `paper.js` rendering but should be in `model.js`,True,Extract paper author name resolution - It's currently in the `paper.js` rendering but should be in `model.js`,1,extract paper author name resolution it s currently in the paper js rendering but should be in model js ,1
1465,6363153396.0,IssuesEvent,2017-07-31 16:24:27,duckduckgo/zeroclickinfo-goodies,https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies,closed,Conversions: Add support for transfer rate conversions,Category: Highest Impact Tasks Maintainer Approved Status: Work In Progress Topic: Conversions,"**convert 50 Mbps to Kbps**
[https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps](https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps)
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft",True,"Conversions: Add support for transfer rate conversions - **convert 50 Mbps to Kbps**
[https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps](https://duckduckgo.com/?q=convert%2050%20Mbps%20to%20Kbps)
------
IA Page: http://duck.co/ia/view/conversions
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @mintsoft",1,conversions add support for transfer rate conversions convert mbps to kbps ia page mintsoft,1
16,2515195346.0,IssuesEvent,2015-01-15 17:01:06,simplesamlphp/simplesamlphp,https://api.github.com/repos/simplesamlphp/simplesamlphp,opened,Extract the oauth module out of the repository,enhancement low maintainability,"It should get its own repository and allow installation through composer. For other modules depending on this one, add a composer dependency on the module.",True,"Extract the oauth module out of the repository - It should get its own repository and allow installation through composer. For other modules depending on this one, add a composer dependency on the module.",1,extract the oauth module out of the repository it should get its own repository and allow installation through composer for other modules depending on this one add a composer dependency on the module ,1
296267,22293444181.0,IssuesEvent,2022-06-12 17:57:12,xam1002/TFG_Deteccion_Parkinson,https://api.github.com/repos/xam1002/TFG_Deteccion_Parkinson,closed,Trabajos relacionados,documentation,"Te dejo algunos artículos/proyectos relacionados con visión artificial y Parkinson. La mayoría se centran en identificarlo a partir de fotos de dibujos o escritura, pero son trabajos que deberían ir a la parte de ""Trabajos Relacionados"":
- https://link.springer.com/chapter/10.1007/978-981-16-2937-2_15
- https://www.bradford.ac.uk/dhez/projects/parkinsons-vision/
- https://pyimagesearch.com/2019/04/29/detecting-parkinsons-disease-with-opencv-computer-vision-and-the-spiral-wave-test/
- https://pubmed.ncbi.nlm.nih.gov/27686705/
De hecho este trabajo publicado el año pasado hacen exactamente lo que nosotros:
- https://link.springer.com/chapter/10.1007/978-3-030-87094-2_38",1.0,"Trabajos relacionados - Te dejo algunos artículos/proyectos relacionados con visión artificial y Parkinson. La mayoría se centran en identificarlo a partir de fotos de dibujos o escritura, pero son trabajos que deberían ir a la parte de ""Trabajos Relacionados"":
- https://link.springer.com/chapter/10.1007/978-981-16-2937-2_15
- https://www.bradford.ac.uk/dhez/projects/parkinsons-vision/
- https://pyimagesearch.com/2019/04/29/detecting-parkinsons-disease-with-opencv-computer-vision-and-the-spiral-wave-test/
- https://pubmed.ncbi.nlm.nih.gov/27686705/
De hecho este trabajo publicado el año pasado hacen exactamente lo que nosotros:
- https://link.springer.com/chapter/10.1007/978-3-030-87094-2_38",0,trabajos relacionados te dejo algunos artículos proyectos relacionados con visión artificial y parkinson la mayoría se centran en identificarlo a partir de fotos de dibujos o escritura pero son trabajos que deberían ir a la parte de trabajos relacionados de hecho este trabajo publicado el año pasado hacen exactamente lo que nosotros ,0
3940,17766870824.0,IssuesEvent,2021-08-30 08:40:11,DLR-RM/rl-baselines3-zoo,https://api.github.com/repos/DLR-RM/rl-baselines3-zoo,closed,"""not iterable"" TypeError due to exp_manager.py's is_atari()/is_bullet()/is_robotics_env() assuming entry_point of type str",bug Maintainers on vacation,"**Describe the bug**
In OpenAI Gym's [`register` function](https://github.com/openai/gym/blob/master/gym/envs/__init__.py), the keyword argument `entry_point` accepts a value of type `str` as well as a `gym.Env` subclass.
If a custom environment has been registered using the latter option (as a class), `exp_manager.py` produces a `TypeError: 'type' object is not iterable` when trying to check whether the string `""AtariEnv""` is contained in `gym.envs.registry.env_specs[env_id].entry_point`.
**Code example**
```python
# myexample.py
# ...assuming that some gym env class MyExample exists...
from gym.envs.registration import register
register(
id=""MyExample-v0"",
entry_point=MyExample,
)
```
```python
# utils/import_envs.py
import myexample
```
(as [recommended here](https://github.com/DLR-RM/rl-baselines3-zoo#custom-environment))
```sh
> python train.py --env MyExample-v0
========== MyExample-v0 ==========
Seed: 1193172183
EnvSpec(MyExample-v0)
Traceback (most recent call last):
File ""train.py"", line 181, in
no_optim_plots=args.no_optim_plots,
File ""/Users/asschude/Documents/PhD/code/rl-baselines3-zoo/utils/exp_manager.py"", line 116, in __init__
self._is_atari = self.is_atari(env_id)
File ""/Users/asschude/Documents/PhD/code/rl-baselines3-zoo/utils/exp_manager.py"", line 426, in is_atari
return ""AtariEnv"" in gym.envs.registry.env_specs[env_id].entry_point
TypeError: argument of type 'type' is not iterable
```
To restore full compatibility with gym's `register`, I suggest simply changing the check to `""AtariEnv"" in str(...)`, which will be using the class `__str__`/`__repr__` representation. Same for the other two checks.
I came across this when implementing a custom environment.
**System Info**
Describe the characteristic of your environment:
* cloned SB3 from GitHub (https://github.com/DLR-RM/rl-baselines3-zoo/commit/4f97b7348ccddf387462de8c14d39b1e49bf9d99)
* Python 3.7.7
* torch==1.9.0
* gym==0.18.3
",True,"""not iterable"" TypeError due to exp_manager.py's is_atari()/is_bullet()/is_robotics_env() assuming entry_point of type str - **Describe the bug**
In OpenAI Gym's [`register` function](https://github.com/openai/gym/blob/master/gym/envs/__init__.py), the keyword argument `entry_point` accepts a value of type `str` as well as a `gym.Env` subclass.
If a custom environment has been registered using the latter option (as a class), `exp_manager.py` produces a `TypeError: 'type' object is not iterable` when trying to check whether the string `""AtariEnv""` is contained in `gym.envs.registry.env_specs[env_id].entry_point`.
**Code example**
```python
# myexample.py
# ...assuming that some gym env class MyExample exists...
from gym.envs.registration import register
register(
id=""MyExample-v0"",
entry_point=MyExample,
)
```
```python
# utils/import_envs.py
import myexample
```
(as [recommended here](https://github.com/DLR-RM/rl-baselines3-zoo#custom-environment))
```sh
> python train.py --env MyExample-v0
========== MyExample-v0 ==========
Seed: 1193172183
EnvSpec(MyExample-v0)
Traceback (most recent call last):
File ""train.py"", line 181, in
no_optim_plots=args.no_optim_plots,
File ""/Users/asschude/Documents/PhD/code/rl-baselines3-zoo/utils/exp_manager.py"", line 116, in __init__
self._is_atari = self.is_atari(env_id)
File ""/Users/asschude/Documents/PhD/code/rl-baselines3-zoo/utils/exp_manager.py"", line 426, in is_atari
return ""AtariEnv"" in gym.envs.registry.env_specs[env_id].entry_point
TypeError: argument of type 'type' is not iterable
```
To restore full compatibility with gym's `register`, I suggest simply changing the check to `""AtariEnv"" in str(...)`, which will be using the class `__str__`/`__repr__` representation. Same for the other two checks.
I came across this when implementing a custom environment.
**System Info**
Describe the characteristic of your environment:
* cloned SB3 from GitHub (https://github.com/DLR-RM/rl-baselines3-zoo/commit/4f97b7348ccddf387462de8c14d39b1e49bf9d99)
* Python 3.7.7
* torch==1.9.0
* gym==0.18.3
",1, not iterable typeerror due to exp manager py s is atari is bullet is robotics env assuming entry point of type str describe the bug in openai gym s the keyword argument entry point accepts a value of type str as well as a gym env subclass if a custom environment has been registered using the latter option as a class exp manager py produces a typeerror type object is not iterable when trying to check whether the string atarienv is contained in gym envs registry env specs entry point code example python myexample py assuming that some gym env class myexample exists from gym envs registration import register register id myexample entry point myexample python utils import envs py import myexample as sh python train py env myexample myexample seed envspec myexample traceback most recent call last file train py line in no optim plots args no optim plots file users asschude documents phd code rl zoo utils exp manager py line in init self is atari self is atari env id file users asschude documents phd code rl zoo utils exp manager py line in is atari return atarienv in gym envs registry env specs entry point typeerror argument of type type is not iterable to restore full compatibility with gym s register i suggest simply changing the check to atarienv in str which will be using the class str repr representation same for the other two checks i came across this when implementing a custom environment system info describe the characteristic of your environment cloned from github python torch gym ,1
2049,6902062327.0,IssuesEvent,2017-11-25 16:00:35,NucleusPowered/Nucleus,https://api.github.com/repos/NucleusPowered/Nucleus,closed,CommandSpy not outputting anything in-game,bug for-maintainence-release,"[nucleus-info-20171116-180753](https://github.com/NucleusPowered/Nucleus/files/1479462/nucleus-info-20171116-180753.txt)
After the latest update (**Nucleus-1.1.7-LTS-S5.1-MC1.10.2**) for Sponge, CommandSpy has stopped outputting anything to my Staff members. /commandspy says it enables/disables it, yet nothing changes. Console command logging still works as configured in my settings.
No errors occur in the console/chat.
SocialSpy still works as expected.
**Settings**
```
# Allows users with permission to see commands that other players are executing in real time
command-spy=ENABLED
```
```
command-spy {
# The blacklist (or whitelist if filter-is-whitelist is true) to use when determining which commands to spy on.
command-filter=[]
# If true, command-filter acts as a whitelist of commands to spy on, else, it functions as a blacklist.
filter-is-whitelist=false
# The prefix to use when displaying the player's command.
prefix=""&8[&c!&8]&c{{name}} : ""
}
```
",True,"CommandSpy not outputting anything in-game - [nucleus-info-20171116-180753](https://github.com/NucleusPowered/Nucleus/files/1479462/nucleus-info-20171116-180753.txt)
After the latest update (**Nucleus-1.1.7-LTS-S5.1-MC1.10.2**) for Sponge, CommandSpy has stopped outputting anything to my Staff members. /commandspy says it enables/disables it, yet nothing changes. Console command logging still works as configured in my settings.
No errors occur in the console/chat.
SocialSpy still works as expected.
**Settings**
```
# Allows users with permission to see commands that other players are executing in real time
command-spy=ENABLED
```
```
command-spy {
# The blacklist (or whitelist if filter-is-whitelist is true) to use when determining which commands to spy on.
command-filter=[]
# If true, command-filter acts as a whitelist of commands to spy on, else, it functions as a blacklist.
filter-is-whitelist=false
# The prefix to use when displaying the player's command.
prefix=""&8[&c!&8]&c{{name}} : ""
}
```
",1,commandspy not outputting anything in game after the latest update nucleus lts for sponge commandspy has stopped outputting anything to my staff members commandspy says it enables disables it yet nothing changes console command logging still works as configured in my settings no errors occur in the console chat socialspy still works as expected settings allows users with permission to see commands that other players are executing in real time command spy enabled command spy the blacklist or whitelist if filter is whitelist is true to use when determining which commands to spy on command filter if true command filter acts as a whitelist of commands to spy on else it functions as a blacklist filter is whitelist false the prefix to use when displaying the player s command prefix c name ,1
1046,4859904450.0,IssuesEvent,2016-11-13 21:42:49,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,stat: readable and writeable are missing,affects_2.1 docs_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
stat
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = xxx/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Debian 8
##### SUMMARY
In the stat dict the keys ""readable"" and ""writeable"" are missing. These keys are mentioned in the documentation [http://docs.ansible.com/ansible/stat_module.html](url).
##### STEPS TO REPRODUCE
```
- stat: path=/etc
name: check /etc
register: path
- fail: msg=""/etc is not there""
when: not path.stat.exists
- fail: msg=""/etc is not readable""
when: not path.stat.readable
```
##### EXPECTED RESULTS
correct evaluation of path.stat.readable
##### ACTUAL RESULTS
```
fatal: [xxx]: FAILED! => {""failed"": true, ""msg"": ""The conditional check 'not path.stat.readable' failed. The error was: error while evaluating conditional (not path.stat.readable): 'dict object' has no attribute 'readable'\n\nThe error appears to have been in 'xxx/playbook.yml': line x, column x, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n when: not path.stat.exists\n - fail: msg=\""/etc is not readable\""\n ^ here\n""}
```
",True,"stat: readable and writeable are missing - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
stat
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = xxx/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Debian 8
##### SUMMARY
In the stat dict the keys ""readable"" and ""writeable"" are missing. These keys are mentioned in the documentation [http://docs.ansible.com/ansible/stat_module.html](url).
##### STEPS TO REPRODUCE
```
- stat: path=/etc
name: check /etc
register: path
- fail: msg=""/etc is not there""
when: not path.stat.exists
- fail: msg=""/etc is not readable""
when: not path.stat.readable
```
##### EXPECTED RESULTS
correct evaluation of path.stat.readable
##### ACTUAL RESULTS
```
fatal: [xxx]: FAILED! => {""failed"": true, ""msg"": ""The conditional check 'not path.stat.readable' failed. The error was: error while evaluating conditional (not path.stat.readable): 'dict object' has no attribute 'readable'\n\nThe error appears to have been in 'xxx/playbook.yml': line x, column x, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n when: not path.stat.exists\n - fail: msg=\""/etc is not readable\""\n ^ here\n""}
```
",1,stat readable and writeable are missing issue type bug report component name stat ansible version ansible config file xxx ansible cfg configured module search path default w o overrides os environment debian summary in the stat dict the keys readable and writeable are missing these keys are mentioned in the documentation url steps to reproduce stat path etc name check etc register path fail msg etc is not there when not path stat exists fail msg etc is not readable when not path stat readable expected results correct evaluation of path stat readable actual results fatal failed failed true msg the conditional check not path stat readable failed the error was error while evaluating conditional not path stat readable dict object has no attribute readable n nthe error appears to have been in xxx playbook yml line x column x but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n when not path stat exists n fail msg etc is not readable n here n ,1
811925,30306221175.0,IssuesEvent,2023-07-10 09:39:15,jpmorganchase/salt-ds,https://api.github.com/repos/jpmorganchase/salt-ds,closed,Salt ag grid theme doesn't style filter panel,type: bug 🪲 community priority: medium 😠,"### Package name(s)
AG Grid Theme (@salt-ds/ag-grid-theme)
### Package version(s)
""@salt-ds/ag-grid-theme"": ""1.1.6""
### Description
Out of box ag grid filter panel doesn't match overall styling
- Spacing within the panel are tighter than before
- Buttons looks default HTML buttons, instead of Salt ones
Raised by MTK
### Steps to reproduce
Use below code in one of column def, then open the filter panel using triple dots menu on hovering header
filter: 'agTextColumnFilter',
filterParams: {
buttons: ['reset', 'apply'],
},
https://stackblitz.com/edit/salt-ag-grid-theme-g6m6gj?file=package.json,App.jsx
### Expected behavior
Spacing should match general Salt design language, button should look like Salt ones
### Operating system
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
### Browser
- [X] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
### Are you a JPMorgan Chase & Co. employee?
- [X] I am an employee of JPMorgan Chase & Co.",1.0,"Salt ag grid theme doesn't style filter panel - ### Package name(s)
AG Grid Theme (@salt-ds/ag-grid-theme)
### Package version(s)
""@salt-ds/ag-grid-theme"": ""1.1.6""
### Description
Out of box ag grid filter panel doesn't match overall styling
- Spacing within the panel are tighter than before
- Buttons looks default HTML buttons, instead of Salt ones
Raised by MTK
### Steps to reproduce
Use below code in one of column def, then open the filter panel using triple dots menu on hovering header
filter: 'agTextColumnFilter',
filterParams: {
buttons: ['reset', 'apply'],
},
https://stackblitz.com/edit/salt-ag-grid-theme-g6m6gj?file=package.json,App.jsx
### Expected behavior
Spacing should match general Salt design language, button should look like Salt ones
### Operating system
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
### Browser
- [X] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
### Are you a JPMorgan Chase & Co. employee?
- [X] I am an employee of JPMorgan Chase & Co.",0,salt ag grid theme doesn t style filter panel package name s ag grid theme salt ds ag grid theme package version s salt ds ag grid theme description out of box ag grid filter panel doesn t match overall styling spacing within the panel are tighter than before buttons looks default html buttons instead of salt ones raised by mtk steps to reproduce use below code in one of column def then open the filter panel using triple dots menu on hovering header filter agtextcolumnfilter filterparams buttons expected behavior spacing should match general salt design language button should look like salt ones operating system macos windows linux ios android browser chrome safari firefox edge are you a jpmorgan chase co employee i am an employee of jpmorgan chase co ,0
90564,11419748056.0,IssuesEvent,2020-02-03 08:42:35,undercasetype/fraunces-minisite,https://api.github.com/repos/undercasetype/fraunces-minisite,closed,Thank You For Shopping: think of alternative for the floating labels,needs-design,"
",1.0,"Thank You For Shopping: think of alternative for the floating labels -
",0,thank you for shopping think of alternative for the floating labels img width alt image src ,0
590038,17769346703.0,IssuesEvent,2021-08-30 11:48:58,o3de/o3de,https://api.github.com/repos/o3de/o3de,opened,Unable to add the AutomatedTesting project to the Project Manager,kind/bug needs-sig needs-triage priority/major,"**Describe the bug**
It is not possible to add the AutomatedTesting project to the Project Manager when it is launched from the o3de-install folder (after following the [Pre-built SDK engine guide](https://o3deorg.netlify.app/docs/welcome-guide/setup/setup-from-github/)).
The workaround for the issue is to manually add the path to the project into the o3de_manifest.json file.
Please refer to the attached video for more details.
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the [Pre-built SDK engine guide](https://o3deorg.netlify.app/docs/welcome-guide/setup/setup-from-github/).
2. Launch Editor.exe from the o3de-install/bin/Windows/profile folder.
3. Try to add the AutomatedTesting project to the Project Manager.
**Expected behavior**
AutomatedTesting project is added to the Project Manager successfully.
**Video**
https://user-images.githubusercontent.com/86953108/131334244-bd32d226-227b-4409-b2e8-f8cdce42046b.mp4
**Desktop/Device:**
- Device: PC
- OS: Windows
- Version 10
- CPU AMD Ryzen 5 3600
- GPU Nvidia RTX 2060 SUPER
- Memory 16GB
",1.0,"Unable to add the AutomatedTesting project to the Project Manager - **Describe the bug**
It is not possible to add the AutomatedTesting project to the Project Manager when it is launched from the o3de-install folder (after following the [Pre-built SDK engine guide](https://o3deorg.netlify.app/docs/welcome-guide/setup/setup-from-github/)).
The workaround for the issue is to manually add the path to the project into the o3de_manifest.json file.
Please refer to the attached video for more details.
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the [Pre-built SDK engine guide](https://o3deorg.netlify.app/docs/welcome-guide/setup/setup-from-github/).
2. Launch Editor.exe from the o3de-install/bin/Windows/profile folder.
3. Try to add the AutomatedTesting project to the Project Manager.
**Expected behavior**
AutomatedTesting project is added to the Project Manager successfully.
**Video**
https://user-images.githubusercontent.com/86953108/131334244-bd32d226-227b-4409-b2e8-f8cdce42046b.mp4
**Desktop/Device:**
- Device: PC
- OS: Windows
- Version 10
- CPU AMD Ryzen 5 3600
- GPU Nvidia RTX 2060 SUPER
- Memory 16GB
",0,unable to add the automatedtesting project to the project manager describe the bug it is not possible to add the automatedtesting project to the project manager when it is launched from the install folder after following the the workaround for the issue is to manually add the path to the project into the manifest json file please refer to the attached video for more details to reproduce steps to reproduce the behavior follow the launch editor exe from the install bin windows profile folder try to add the automatedtesting project to the project manager expected behavior automatedtesting project is added to the project manager successfully video desktop device device pc os windows version cpu amd ryzen gpu nvidia rtx super memory ,0
105041,16623634634.0,IssuesEvent,2021-06-03 06:46:19,Thanraj/OpenSSL_1.0.1,https://api.github.com/repos/Thanraj/OpenSSL_1.0.1,opened,CVE-2013-6449 (Medium) detected in opensslOpenSSL_1_0_1,security vulnerability,"## CVE-2013-6449 - Medium Severity Vulnerability
Vulnerable Library - opensslOpenSSL_1_0_1
The ssl_get_algorithm2 function in ssl/s3_lib.c in OpenSSL before 1.0.2 obtains a certain version number from an incorrect data structure, which allows remote attackers to cause a denial of service (daemon crash) via crafted traffic from a TLS 1.2 client.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2013-6449 (Medium) detected in opensslOpenSSL_1_0_1 - ## CVE-2013-6449 - Medium Severity Vulnerability
Vulnerable Library - opensslOpenSSL_1_0_1
The ssl_get_algorithm2 function in ssl/s3_lib.c in OpenSSL before 1.0.2 obtains a certain version number from an incorrect data structure, which allows remote attackers to cause a denial of service (daemon crash) via crafted traffic from a TLS 1.2 client.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in opensslopenssl cve medium severity vulnerability vulnerable library opensslopenssl akamai fork of openssl master library home page a href found in head commit a href found in base branch master vulnerable source files openssl ssl lib c openssl ssl lib c openssl ssl lib c vulnerability details the ssl get function in ssl lib c in openssl before obtains a certain version number from an incorrect data structure which allows remote attackers to cause a denial of service daemon crash via crafted traffic from a tls client publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
352382,25064733336.0,IssuesEvent,2022-11-07 07:08:36,AY2223S1-CS2103-W14-2/tp,https://api.github.com/repos/AY2223S1-CS2103-W14-2/tp,closed,Improve DG,documentation,"Improve DG grammar
Check for serious mistakes
Check American English spelling",1.0,"Improve DG - Improve DG grammar
Check for serious mistakes
Check American English spelling",0,improve dg improve dg grammar check for serious mistakes check american english spelling,0
360798,25311127226.0,IssuesEvent,2022-11-17 17:31:36,komutilo/dmark,https://api.github.com/repos/komutilo/dmark,opened,Provide documentation,documentation,"# Dmark docs by topic:
- [ ] Overview
- [ ] Why Dmark?
- [ ] local state management
- [ ] handling multiple config files
- [ ] DRY and multi-stage management
- [ ] managing no HCP projects
- [ ] Terragrunt (compare solutions)
- [ ] Dmark CLI
- [ ] Dmark and Terraform commands
- [ ] Dmark CLI parameters
- [ ] config
- [ ] stage
- [ ] stack
- [ ] label
- [ ] fmt
- [ ] upgrade
- [ ] migrate-state
- [ ] auto-approve
- [ ] no-init
- [ ] delete-lock
- [ ] Stack
- [ ] What is a stack?
- [ ] Stage
- [ ] What is a stage?
- [ ] The `__all__` stage
- [ ] Ignoring stages
- [ ] Stageless project
- [ ] Local state management
- [ ] Local property
- [ ] boolean
- [ ] with path
- [ ] Multiple stages for the same local stack
- [ ] Remote state management
- [ ] How to setup a remote stack
- [ ] general
- [ ] AWS S3
- [ ] Order
- [ ] Managing deploy order
- [ ] Labels
- [ ] Labeling the stacks
- [ ] Errors handling
- [ ] Deploy error handling by Dmark
- [ ] Dmark source code
- [ ] Dmark license
- [ ] Dmark maintainers
- [ ] Dmark brand
- [ ] Node.js, npm and pnpm motivations
- [ ] How to setup development environment
- [ ] How to contribute
- [ ] issues
- [ ] lint
- [ ] unit tests and coverage
- [ ] integration tests
- [ ] PR",1.0,"Provide documentation - # Dmark docs by topic:
- [ ] Overview
- [ ] Why Dmark?
- [ ] local state management
- [ ] handling multiple config files
- [ ] DRY and multi-stage management
- [ ] managing no HCP projects
- [ ] Terragrunt (compare solutions)
- [ ] Dmark CLI
- [ ] Dmark and Terraform commands
- [ ] Dmark CLI parameters
- [ ] config
- [ ] stage
- [ ] stack
- [ ] label
- [ ] fmt
- [ ] upgrade
- [ ] migrate-state
- [ ] auto-approve
- [ ] no-init
- [ ] delete-lock
- [ ] Stack
- [ ] What is a stack?
- [ ] Stage
- [ ] What is a stage?
- [ ] The `__all__` stage
- [ ] Ignoring stages
- [ ] Stageless project
- [ ] Local state management
- [ ] Local property
- [ ] boolean
- [ ] with path
- [ ] Multiple stages for the same local stack
- [ ] Remote state management
- [ ] How to setup a remote stack
- [ ] general
- [ ] AWS S3
- [ ] Order
- [ ] Managing deploy order
- [ ] Labels
- [ ] Labeling the stacks
- [ ] Errors handling
- [ ] Deploy error handling by Dmark
- [ ] Dmark source code
- [ ] Dmark license
- [ ] Dmark maintainers
- [ ] Dmark brand
- [ ] Node.js, npm and pnpm motivations
- [ ] How to setup development environment
- [ ] How to contribute
- [ ] issues
- [ ] lint
- [ ] unit tests and coverage
- [ ] integration tests
- [ ] PR",0,provide documentation dmark docs by topic overview why dmark local state management handling multiple config files dry and multi stage management managing no hcp projects terragrunt compare solutions dmark cli dmark and terraform commands dmark cli parameters config stage stack label fmt upgrade migrate state auto approve no init delete lock stack what is a stack stage what is a stage the all stage ignoring stages stageless project local state management local property boolean with path multiple stages for the same local stack remote state management how to setup a remote stack general aws order managing deploy order labels labeling the stacks errors handling deploy error handling by dmark dmark source code dmark license dmark maintainers dmark brand node js npm and pnpm motivations how to setup development environment how to contribute issues lint unit tests and coverage integration tests pr,0
296919,25584082345.0,IssuesEvent,2022-12-01 07:54:06,openBackhaul/ApplicationPattern,https://api.github.com/repos/openBackhaul/ApplicationPattern,opened,release-number pattern update,testsuite_to_be_changed,"Pattern of release-number has been updated to '^([0-9]{1,2})\.([0-9]{1,2})\.([0-9]{1,2})$'.
Already, testcases are available to check for too short release-number, too-long release-number, letters in release-number, sign in release-number , incorrect separator.
Additionally a scenario can be added to test whether in each placeholder for a number, only two one or digits are allowed. In earlier release-number more than two digits are allowed in a placeholder
This scenario ""multiple digit in a placeholder"" can b:e added to following services:
- Service Layer - Acceptance :: Attribute correctness :: release-number checked?
- [ ] /v1/register-yourself - registry-office-application-release-number
- [ ] /v1/embed-yourself - registry-office-application-release-number
- [ ] /v1/redirect-service-request-information - service-log-application-release-number
- [ ] /v1/redirect-oam-request-information - oam-log-application-release-number
- [ ] /v1/end-subscription - subscriber-release-number
- [ ] /v1/inquire-oam-request-approvals - oam-approval-application-release-number
- [ ] /v1/update-client - old-application-release-number and new-application-release-number
- [ ] /v1/redirect-topology-change-information - topology-application-release-number
- [ ] /v1/update-operation-client - application-release-number
- Oam Layer:
- [ ] http-client/release-number :: Acceptance :: Attribute checked?
",1.0,"release-number pattern update - Pattern of release-number has been updated to '^([0-9]{1,2})\.([0-9]{1,2})\.([0-9]{1,2})$'.
Already, testcases are available to check for too short release-number, too-long release-number, letters in release-number, sign in release-number , incorrect separator.
Additionally a scenario can be added to test whether in each placeholder for a number, only two one or digits are allowed. In earlier release-number more than two digits are allowed in a placeholder
This scenario ""multiple digit in a placeholder"" can b:e added to following services:
- Service Layer - Acceptance :: Attribute correctness :: release-number checked?
- [ ] /v1/register-yourself - registry-office-application-release-number
- [ ] /v1/embed-yourself - registry-office-application-release-number
- [ ] /v1/redirect-service-request-information - service-log-application-release-number
- [ ] /v1/redirect-oam-request-information - oam-log-application-release-number
- [ ] /v1/end-subscription - subscriber-release-number
- [ ] /v1/inquire-oam-request-approvals - oam-approval-application-release-number
- [ ] /v1/update-client - old-application-release-number and new-application-release-number
- [ ] /v1/redirect-topology-change-information - topology-application-release-number
- [ ] /v1/update-operation-client - application-release-number
- Oam Layer:
- [ ] http-client/release-number :: Acceptance :: Attribute checked?
",0,release number pattern update pattern of release number has been updated to already testcases are available to check for too short release number too long release number letters in release number sign in release number incorrect separator additionally a scenario can be added to test whether in each placeholder for a number only two one or digits are allowed in earlier release number more than two digits are allowed in a placeholder this scenario multiple digit in a placeholder can b e added to following services service layer acceptance attribute correctness release number checked register yourself registry office application release number embed yourself registry office application release number redirect service request information service log application release number redirect oam request information oam log application release number end subscription subscriber release number inquire oam request approvals oam approval application release number update client old application release number and new application release number redirect topology change information topology application release number update operation client application release number oam layer http client release number acceptance attribute checked ,0
53992,29418643354.0,IssuesEvent,2023-05-31 00:38:10,flutter/devtools,https://api.github.com/repos/flutter/devtools,closed,How do I find the cause of the Jank Slow Frame?,waiting for customer response performance page P6,"Hello, I have several frames that take more than 500 ms to complete, I am trying to find out why, in which line of dart code does this slow exist in my app, but it does not tell me in which line this is happening or what is running.
In my widget, I only have one container, nothing more, because the slowness is caused by the dart code, but I would like devtools to tell me what line is making it take too long, this makes it freeze for 2 or 3 seconds, maybe already devtools brings it but I can't find it, I hope you can help me please
",True,"How do I find the cause of the Jank Slow Frame? - Hello, I have several frames that take more than 500 ms to complete, I am trying to find out why, in which line of dart code does this slow exist in my app, but it does not tell me in which line this is happening or what is running.
In my widget, I only have one container, nothing more, because the slowness is caused by the dart code, but I would like devtools to tell me what line is making it take too long, this makes it freeze for 2 or 3 seconds, maybe already devtools brings it but I can't find it, I hope you can help me please
",0,how do i find the cause of the jank slow frame hello i have several frames that take more than ms to complete i am trying to find out why in which line of dart code does this slow exist in my app but it does not tell me in which line this is happening or what is running in my widget i only have one container nothing more because the slowness is caused by the dart code but i would like devtools to tell me what line is making it take too long this makes it freeze for or seconds maybe already devtools brings it but i can t find it i hope you can help me please ,0
731873,25234799701.0,IssuesEvent,2022-11-14 23:22:13,brave/brave-browser,https://api.github.com/repos/brave/brave-browser,closed,Adjust scroll position when removing tabs and activating tabs,OS/Linux OS/Windows priority/P3 QA/No release-notes/exclude,"> When closing the active tab (through any method: Ctrl+W, X button, context menu), the vertical tab list scrolling will be reset to the top. You need enough tabs open to spill off the bottom of the screen for it to become a problem, but it's quite inconvenient when attempting to close multiple tabs in a row out of a large set.",1.0,"Adjust scroll position when removing tabs and activating tabs - > When closing the active tab (through any method: Ctrl+W, X button, context menu), the vertical tab list scrolling will be reset to the top. You need enough tabs open to spill off the bottom of the screen for it to become a problem, but it's quite inconvenient when attempting to close multiple tabs in a row out of a large set.",0,adjust scroll position when removing tabs and activating tabs when closing the active tab through any method ctrl w x button context menu the vertical tab list scrolling will be reset to the top you need enough tabs open to spill off the bottom of the screen for it to become a problem but it s quite inconvenient when attempting to close multiple tabs in a row out of a large set ,0
5225,26507019092.0,IssuesEvent,2023-01-18 14:32:26,precice/precice,https://api.github.com/repos/precice/precice,closed,Clarify mesh API,enhancement usability maintainability breaking change,"**Please describe the problem you are trying to solve.**
The API for setting mesh primitives is confusing and tedious.
* Triangles have to be set using edges
* `setMeshTriangleWithEdges` sounds like it would take edges, but it actually takes vertices. #1057
* There are no bulk functions for setting edges and triangles. #465
* Adding `setMeshTetrahedron` requiring triangles would be a huge pain for users. #1314
* Exposing handles to connectivity (EdgeID) prevents us from optimizing meshes #1313
**Describe the solution you propose.**
1. Change the API to a vertex-only style. **:warning: breaking**
2. Remove `XWithEdges`
3. Add bulk functions
Function | Inputs + MeshID | Outputs | Comment
--- | --- | --- | ---
setMeshVertex | Coords | VertexID | _unchanged_
setMeshVertices | Count, Coords | VertexIDs | _unchanged_
setMeshEdge | VertexIDs | | _no return_
setMeshEdges | Count, VertexIDs | | _no return_
setMeshTriangle | VertexIDs | | _changed input_
setMeshTriangles | Count, VertexIDs | | _new_
setMeshQuad | VertexIDs | | _changed input_
setMeshQuads | Count, VertexIDs | | _new_
setMeshTetrahedron | VertexIDs | | _unchanged_
setMeshTetrahedra | Count, VertexIDs | | _new_
**Describe alternatives you've considered**
Leave it as it and end up with an increasingly confusing API.
",True,"Clarify mesh API - **Please describe the problem you are trying to solve.**
The API for setting mesh primitives is confusing and tedious.
* Triangles have to be set using edges
* `setMeshTriangleWithEdges` sounds like it would take edges, but it actually takes vertices. #1057
* There are no bulk functions for setting edges and triangles. #465
* Adding `setMeshTetrahedron` requiring triangles would be a huge pain for users. #1314
* Exposing handles to connectivity (EdgeID) prevents us from optimizing meshes #1313
**Describe the solution you propose.**
1. Change the API to a vertex-only style. **:warning: breaking**
2. Remove `XWithEdges`
3. Add bulk functions
Function | Inputs + MeshID | Outputs | Comment
--- | --- | --- | ---
setMeshVertex | Coords | VertexID | _unchanged_
setMeshVertices | Count, Coords | VertexIDs | _unchanged_
setMeshEdge | VertexIDs | | _no return_
setMeshEdges | Count, VertexIDs | | _no return_
setMeshTriangle | VertexIDs | | _changed input_
setMeshTriangles | Count, VertexIDs | | _new_
setMeshQuad | VertexIDs | | _changed input_
setMeshQuads | Count, VertexIDs | | _new_
setMeshTetrahedron | VertexIDs | | _unchanged_
setMeshTetrahedra | Count, VertexIDs | | _new_
**Describe alternatives you've considered**
Leave it as it and end up with an increasingly confusing API.
",1,clarify mesh api please describe the problem you are trying to solve the api for setting mesh primitives is confusing and tedious triangles have to be set using edges setmeshtrianglewithedges sounds like it would take edges but it actually takes vertices there are no bulk functions for setting edges and triangles adding setmeshtetrahedron requiring triangles would be a huge pain for users exposing handles to connectivity edgeid prevents us from optimizing meshes describe the solution you propose change the api to a vertex only style warning breaking remove xwithedges add bulk functions function inputs meshid outputs comment setmeshvertex coords vertexid unchanged setmeshvertices count coords vertexids unchanged setmeshedge vertexids no return setmeshedges count vertexids no return setmeshtriangle vertexids changed input setmeshtriangles count vertexids new setmeshquad vertexids changed input setmeshquads count vertexids new setmeshtetrahedron vertexids unchanged setmeshtetrahedra count vertexids new describe alternatives you ve considered leave it as it and end up with an increasingly confusing api ,1
1538,6572229272.0,IssuesEvent,2017-09-11 00:20:15,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,Ansible (1.9.4) netscaler module got error “msg”: “'NoneType' object has no attribute 'read'”,affects_1.9 bug_report networking waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
```
ansible 1.9.4
```
##### CONFIGURATION
```
No configuration in ansible.cfg, ansible.cfg is empty.
```
##### OS / ENVIRONMENT
```
Red Hat Enterprise Linux Server
```
##### SUMMARY
```
netscaler module got error “msg”: “'NoneType' object has no attribute 'read'”
```
##### STEPS TO REPRODUCE
```
ansible localhost -m netscaler -a ""nsc_host=nsc.example.com user=nscuser password=nscpassword name=node1.example.com type=service action=disable validate_certs=False""
localhost | FAILED >> {
""failed"": true,
""msg"": ""'NoneType' object has no attribute 'read'""
}
```
```
$ ansible-playbook netscaler.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [disable service in the lb] *********************************************
failed: [localhost] => {""failed"": true}
msg: 'NoneType' object has no attribute 'read'
FATAL: all hosts have already failed -- aborting
```
##### EXPECTED RESULTS
```
success
```
##### ACTUAL RESULTS
```
""msg"": ""'NoneType' object has no attribute 'read'""
```
",True,"Ansible (1.9.4) netscaler module got error “msg”: “'NoneType' object has no attribute 'read'” -
##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
```
ansible 1.9.4
```
##### CONFIGURATION
```
No configuration in ansible.cfg, ansible.cfg is empty.
```
##### OS / ENVIRONMENT
```
Red Hat Enterprise Linux Server
```
##### SUMMARY
```
netscaler module got error “msg”: “'NoneType' object has no attribute 'read'”
```
##### STEPS TO REPRODUCE
```
ansible localhost -m netscaler -a ""nsc_host=nsc.example.com user=nscuser password=nscpassword name=node1.example.com type=service action=disable validate_certs=False""
localhost | FAILED >> {
""failed"": true,
""msg"": ""'NoneType' object has no attribute 'read'""
}
```
```
$ ansible-playbook netscaler.yml
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [disable service in the lb] *********************************************
failed: [localhost] => {""failed"": true}
msg: 'NoneType' object has no attribute 'read'
FATAL: all hosts have already failed -- aborting
```
##### EXPECTED RESULTS
```
success
```
##### ACTUAL RESULTS
```
""msg"": ""'NoneType' object has no attribute 'read'""
```
",1,ansible netscaler module got error “msg” “ nonetype object has no attribute read ” issue type bug report ansible version ansible configuration no configuration in ansible cfg ansible cfg is empty os environment red hat enterprise linux server summary netscaler module got error “msg” “ nonetype object has no attribute read ” steps to reproduce ansible localhost m netscaler a nsc host nsc example com user nscuser password nscpassword name example com type service action disable validate certs false localhost failed failed true msg nonetype object has no attribute read ansible playbook netscaler yml play gathering facts ok task failed failed true msg nonetype object has no attribute read fatal all hosts have already failed aborting expected results success actual results msg nonetype object has no attribute read ,1
3041,11273940544.0,IssuesEvent,2020-01-14 17:30:17,precice/precice,https://api.github.com/repos/precice/precice,closed,Remove Server Mode,breaking change maintainability,"# Description
It is time to remove the server mode.
# Rational
It is a large and unused feature of the code.
It forces us to rewrite every function in the interface multiple times and requires additional logic:
* Every interface function X additionally requires:
1. `RequestManager::requestX`
2. `RequestManager::handleX`
3. An new enum value `RequestManager::Request::X`
4. A case in `RequestManager::handlereques()` handling the above
* Calls to interface functions need to do different things depending on `client/serverMode`
Removing it:
* Reduces :ghost:-code
* Reduces the size of the interface functions and makes them easier to implement.
* Reduces the hooks one _has to think of_",True,"Remove Server Mode - # Description
It is time to remove the server mode.
# Rational
It is a large and unused feature of the code.
It forces us to rewrite every function in the interface multiple times and requires additional logic:
* Every interface function X additionally requires:
1. `RequestManager::requestX`
2. `RequestManager::handleX`
3. An new enum value `RequestManager::Request::X`
4. A case in `RequestManager::handlereques()` handling the above
* Calls to interface functions need to do different things depending on `client/serverMode`
Removing it:
* Reduces :ghost:-code
* Reduces the size of the interface functions and makes them easier to implement.
* Reduces the hooks one _has to think of_",1,remove server mode description it is time to remove the server mode rational it is a large and unused feature of the code it forces us to rewrite every function in the interface multiple times and requires additional logic every interface function x additionally requires requestmanager requestx requestmanager handlex an new enum value requestmanager request x a case in requestmanager handlereques handling the above calls to interface functions need to do different things depending on client servermode removing it reduces ghost code reduces the size of the interface functions and makes them easier to implement reduces the hooks one has to think of ,1
202201,15822683188.0,IssuesEvent,2021-04-05 22:50:31,lenaschimmel/schnelltestrechner,https://api.github.com/repos/lenaschimmel/schnelltestrechner,opened,Generelle Unklarheit: Infiziert vs. infektiös,data documentation enhancement help wanted rapidtests-feedback ui,"Eine Unklarheit zieht sich wie ein roter Faden durch die Seite und durch fast alle verfügbaren Daten: es ist unklar ob der Test prüfen soll, ob man Infiziert oder infektiös ist. Dabei ist die Sensitivität für diese beiden Fragen verschieden, und für manche Tests haben wir sogar die Daten getrennt zu beidem vorliegen. In anderen Fällen liegen sie getrennt für hohe und niedrige Ct-Werte vor, was ungefähr das gleiche bedeutet.
Da die Herstellerangaben zur Sensitivität stets deutlich höher sind als praktisch alle Studien, können wir standartmäßig davon ausgehen, dass sie sich nur auf ""infektiös sein"" beziehen.
Die Seite müsste diese Unterscheidung an vielen Stellen in Erklärungstexten, UI, Berechnungen und Datenanzeige vornehmen. Andererseits fehlen uns derzeit noch die Daten (bzw. die Stuktur in den Daten), um diese Unterscheidung wirklich konsequent durchzuziehen.",1.0,"Generelle Unklarheit: Infiziert vs. infektiös - Eine Unklarheit zieht sich wie ein roter Faden durch die Seite und durch fast alle verfügbaren Daten: es ist unklar ob der Test prüfen soll, ob man Infiziert oder infektiös ist. Dabei ist die Sensitivität für diese beiden Fragen verschieden, und für manche Tests haben wir sogar die Daten getrennt zu beidem vorliegen. In anderen Fällen liegen sie getrennt für hohe und niedrige Ct-Werte vor, was ungefähr das gleiche bedeutet.
Da die Herstellerangaben zur Sensitivität stets deutlich höher sind als praktisch alle Studien, können wir standartmäßig davon ausgehen, dass sie sich nur auf ""infektiös sein"" beziehen.
Die Seite müsste diese Unterscheidung an vielen Stellen in Erklärungstexten, UI, Berechnungen und Datenanzeige vornehmen. Andererseits fehlen uns derzeit noch die Daten (bzw. die Stuktur in den Daten), um diese Unterscheidung wirklich konsequent durchzuziehen.",0,generelle unklarheit infiziert vs infektiös eine unklarheit zieht sich wie ein roter faden durch die seite und durch fast alle verfügbaren daten es ist unklar ob der test prüfen soll ob man infiziert oder infektiös ist dabei ist die sensitivität für diese beiden fragen verschieden und für manche tests haben wir sogar die daten getrennt zu beidem vorliegen in anderen fällen liegen sie getrennt für hohe und niedrige ct werte vor was ungefähr das gleiche bedeutet da die herstellerangaben zur sensitivität stets deutlich höher sind als praktisch alle studien können wir standartmäßig davon ausgehen dass sie sich nur auf infektiös sein beziehen die seite müsste diese unterscheidung an vielen stellen in erklärungstexten ui berechnungen und datenanzeige vornehmen andererseits fehlen uns derzeit noch die daten bzw die stuktur in den daten um diese unterscheidung wirklich konsequent durchzuziehen ,0
4672,24160487289.0,IssuesEvent,2022-09-22 11:13:27,centerofci/mathesar,https://api.github.com/repos/centerofci/mathesar,closed,Vertically scroll content of table inspector instead of viewport,type: bug work: frontend restricted: maintainers status: review,"If the contents of the table inspector are tall enough, they'll force the entire page to scroll. Instead, we should be scrolling only the content below the tabs within the table inspector
",True,"Vertically scroll content of table inspector instead of viewport - If the contents of the table inspector are tall enough, they'll force the entire page to scroll. Instead, we should be scrolling only the content below the tabs within the table inspector
",1,vertically scroll content of table inspector instead of viewport if the contents of the table inspector are tall enough they ll force the entire page to scroll instead we should be scrolling only the content below the tabs within the table inspector ,1
134919,30212806770.0,IssuesEvent,2023-07-05 13:49:21,KDWSS/Java-Demo-2,https://api.github.com/repos/KDWSS/Java-Demo-2,opened,"Code Security Report: 17 high severity findings, 58 total findings",Mend: code security findings,"# Code Security Report
### Scan Metadata
**Latest Scan:** 2023-07-05 01:48pm
**Total Findings:** 58 | **New Findings:** 0 | **Resolved Findings:** 0
**Tested Project Files:** 102
**Detected Programming Languages:** 1 (Java)
- [ ] Check this box to manually trigger a scan
### Most Relevant Findings
> The below list presents the 10 most relevant findings that need your attention. To view information on the remaining findings, navigate to the [Mend SAST Application](https://saas.mend.io/sast/#/scans/f3cabc86-1bf1-48ec-8350-fe38e8c84911/details).
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L41-L46
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L40
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L46
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L79-L84
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L128-L133
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L127
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L133
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L79-L84
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L137-L142
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L141
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
4 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
View Data Flow 2
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
View Data Flow 3
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
[View more Data Flows](https://saas.mend.io/sast/#/scans/f3cabc86-1bf1-48ec-8350-fe38e8c84911/details?vulnId=4f62b182-e897-4384-b29a-afddd3f631e3&filtered=yes)
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L60-L65
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L44
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L45
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L46
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L47
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L61
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L65
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L191-L196
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L161
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L192
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L196
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L130-L135
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L135
### Findings Overview
| Severity | Vulnerability Type | CWE | Language | Count |
|-|-|-|-|-|
| High|Code Injection|[CWE-94](https://cwe.mitre.org/data/definitions/94.html)|Java|1|
| High|File Manipulation|[CWE-73](https://cwe.mitre.org/data/definitions/73.html)|Java|3|
| High|Cross-Site Scripting|[CWE-79](https://cwe.mitre.org/data/definitions/79.html)|Java|2|
| High|Path/Directory Traversal|[CWE-22](https://cwe.mitre.org/data/definitions/22.html)|Java|9|
| High|Server Side Request Forgery|[CWE-918](https://cwe.mitre.org/data/definitions/918.html)|Java|1|
| High|SQL Injection|[CWE-89](https://cwe.mitre.org/data/definitions/89.html)|Java|1|
| Medium|Error Messages Information Exposure|[CWE-209](https://cwe.mitre.org/data/definitions/209.html)|Java|15|
| Medium|Trust Boundary Violation|[CWE-501](https://cwe.mitre.org/data/definitions/501.html)|Java|5|
| Medium|Weak Pseudo-Random|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Java|2|
| Medium|Heap Inspection|[CWE-244](https://cwe.mitre.org/data/definitions/244.html)|Java|5|
| Low|HTTP Header Injection|[CWE-113](https://cwe.mitre.org/data/definitions/113.html)|Java|1|
| Low|Session Poisoning|[CWE-20](https://cwe.mitre.org/data/definitions/20.html)|Java|5|
| Low|Unvalidated/Open Redirect|[CWE-601](https://cwe.mitre.org/data/definitions/601.html)|Java|5|
| Low|Log Forging|[CWE-117](https://cwe.mitre.org/data/definitions/117.html)|Java|3|
",1.0,"Code Security Report: 17 high severity findings, 58 total findings - # Code Security Report
### Scan Metadata
**Latest Scan:** 2023-07-05 01:48pm
**Total Findings:** 58 | **New Findings:** 0 | **Resolved Findings:** 0
**Tested Project Files:** 102
**Detected Programming Languages:** 1 (Java)
- [ ] Check this box to manually trigger a scan
### Most Relevant Findings
> The below list presents the 10 most relevant findings that need your attention. To view information on the remaining findings, navigate to the [Mend SAST Application](https://saas.mend.io/sast/#/scans/f3cabc86-1bf1-48ec-8350-fe38e8c84911/details).
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L41-L46
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L35
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L40
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/NullByteInjectionServlet.java#L46
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L79-L84
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L128-L133
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L125
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L127
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L133
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L79-L84
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L70
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L71
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedSizeUploadServlet.java#L84
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L137-L142
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/MailHeaderInjectionServlet.java#L141
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L33-L38
4 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
View Data Flow 2
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
View Data Flow 3
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L37
[View more Data Flows](https://saas.mend.io/sast/#/scans/f3cabc86-1bf1-48ec-8350-fe38e8c84911/details?vulnId=4f62b182-e897-4384-b29a-afddd3f631e3&filtered=yes)
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L60-L65
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L25
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L44
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L45
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L46
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L47
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L61
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/CodeInjectionServlet.java#L65
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L191-L196
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L141
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L148
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L161
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L192
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/XEEandXXEServlet.java#L196
More info
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L130-L135
1 Data Flow/s detectedView Data Flow 1
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L69
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L57
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/core/utils/MultiPartFileUtils.java#L59
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L76
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L84
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L106
https://github.com/KDWSS/Java-Demo-2/blob/c2a7c73894ab87aba9be657a5cd49504ecee0022/src/main/java/org/t246osslab/easybuggy/vulnerabilities/UnrestrictedExtensionUploadServlet.java#L135
### Findings Overview
| Severity | Vulnerability Type | CWE | Language | Count |
|-|-|-|-|-|
| High|Code Injection|[CWE-94](https://cwe.mitre.org/data/definitions/94.html)|Java|1|
| High|File Manipulation|[CWE-73](https://cwe.mitre.org/data/definitions/73.html)|Java|3|
| High|Cross-Site Scripting|[CWE-79](https://cwe.mitre.org/data/definitions/79.html)|Java|2|
| High|Path/Directory Traversal|[CWE-22](https://cwe.mitre.org/data/definitions/22.html)|Java|9|
| High|Server Side Request Forgery|[CWE-918](https://cwe.mitre.org/data/definitions/918.html)|Java|1|
| High|SQL Injection|[CWE-89](https://cwe.mitre.org/data/definitions/89.html)|Java|1|
| Medium|Error Messages Information Exposure|[CWE-209](https://cwe.mitre.org/data/definitions/209.html)|Java|15|
| Medium|Trust Boundary Violation|[CWE-501](https://cwe.mitre.org/data/definitions/501.html)|Java|5|
| Medium|Weak Pseudo-Random|[CWE-338](https://cwe.mitre.org/data/definitions/338.html)|Java|2|
| Medium|Heap Inspection|[CWE-244](https://cwe.mitre.org/data/definitions/244.html)|Java|5|
| Low|HTTP Header Injection|[CWE-113](https://cwe.mitre.org/data/definitions/113.html)|Java|1|
| Low|Session Poisoning|[CWE-20](https://cwe.mitre.org/data/definitions/20.html)|Java|5|
| Low|Unvalidated/Open Redirect|[CWE-601](https://cwe.mitre.org/data/definitions/601.html)|Java|5|
| Low|Log Forging|[CWE-117](https://cwe.mitre.org/data/definitions/117.html)|Java|3|
",0,code security report high severity findings total findings code security report scan metadata latest scan total findings new findings resolved findings tested project files detected programming languages java check this box to manually trigger a scan most relevant findings the below list presents the most relevant findings that need your attention to view information on the remaining findings navigate to the severity vulnerability type cwe file data flows date high sql injection more info data flow s detected view data flow view data flow view data flow high path directory traversal more info data flow s detected view data flow high path directory traversal more info data flow s detected view data flow high path directory traversal more info data flow s detected view data flow high path directory traversal more info data flow s detected view data flow high file manipulation more info data flow s detected view data flow high file manipulation more info data flow s detected view data flow view data flow view data flow high code injection more info data flow s detected view data flow high path directory traversal more info data flow s detected view data flow high path directory traversal more info data flow s detected view data flow findings overview severity vulnerability type cwe language count high code injection high file manipulation high cross site scripting high path directory traversal high server side request forgery high sql injection medium error messages information exposure medium trust boundary violation medium weak pseudo random medium heap inspection low http header injection low session poisoning low unvalidated open redirect low log forging ,0
94884,10860587006.0,IssuesEvent,2019-11-14 09:23:12,PaffaLon/LabbFighterArena,https://api.github.com/repos/PaffaLon/LabbFighterArena,opened,Application Menu Documentation,documentation,"The application will contain a few menus to allow the user to navigate in the application. The can navigate thoughe the menus by pressing the arrows keys on the keyabord and the enter key to prgress frome on part in the prgram to another. Frome a higher plane to a lower plane.
The application contains the followig menus.
**Mainmenu**
The _MainMenu_ (splash screen menu) is the applications primarey menu that first apears when the application launches.
_This menu contains the following_
- options: Play
- options: Scoreboard
- options: Combatlog
- options: Exit
**HeroMenu**
The _HerMenu_ is a 2nd layer menu derived from the play button from the main menu. The _HeroMenu_
_This menu contains the following:_
- options: New Hero
- options: Load Hero
- options: Exit
**CombatLog**
The combatlog menu displays the combat history of the most resent played game.
_This menu contains the following:_
- options: Exit
**NewHeroMenu**
Here the user can cosumize the the name of the new hero and randomly generat the remaning attribuets. The user also has to the option to either exit back to the pervious menu or to press play and begin the game.
_This menu contains the following:_
- options: Play
- options: Exit
- options: Edit
",1.0,"Application Menu Documentation - The application will contain a few menus to allow the user to navigate in the application. The can navigate thoughe the menus by pressing the arrows keys on the keyabord and the enter key to prgress frome on part in the prgram to another. Frome a higher plane to a lower plane.
The application contains the followig menus.
**Mainmenu**
The _MainMenu_ (splash screen menu) is the applications primarey menu that first apears when the application launches.
_This menu contains the following_
- options: Play
- options: Scoreboard
- options: Combatlog
- options: Exit
**HeroMenu**
The _HerMenu_ is a 2nd layer menu derived from the play button from the main menu. The _HeroMenu_
_This menu contains the following:_
- options: New Hero
- options: Load Hero
- options: Exit
**CombatLog**
The combatlog menu displays the combat history of the most resent played game.
_This menu contains the following:_
- options: Exit
**NewHeroMenu**
Here the user can cosumize the the name of the new hero and randomly generat the remaning attribuets. The user also has to the option to either exit back to the pervious menu or to press play and begin the game.
_This menu contains the following:_
- options: Play
- options: Exit
- options: Edit
",0,application menu documentation the application will contain a few menus to allow the user to navigate in the application the can navigate thoughe the menus by pressing the arrows keys on the keyabord and the enter key to prgress frome on part in the prgram to another frome a higher plane to a lower plane the application contains the followig menus mainmenu the mainmenu splash screen menu is the applications primarey menu that first apears when the application launches this menu contains the following options play options scoreboard options combatlog options exit heromenu the hermenu is a layer menu derived from the play button from the main menu the heromenu this menu contains the following options new hero options load hero options exit combatlog the combatlog menu displays the combat history of the most resent played game this menu contains the following options exit newheromenu here the user can cosumize the the name of the new hero and randomly generat the remaning attribuets the user also has to the option to either exit back to the pervious menu or to press play and begin the game this menu contains the following options play options exit options edit ,0
150446,5767144935.0,IssuesEvent,2017-04-27 09:14:07,minishift/minishift,https://api.github.com/repos/minishift/minishift,closed,minishift ssh doesn't work after PHP template deployment failure ,kind/bug priority/major status/needs-info,"Steps to reproduce this
```
1. Download minishift v1.0.0-rc.1
1. set cpu, memory and vm driver by using minishift config set
minishift config set cpu 4
minishift config set memory 2094
minishift config set vm-driver virtualbox
2. Then execute the command ""minishift start --iso-url https://github.com/minishift/minishift-centos-iso/releases/download/v1.0.0-rc.4/minishift-centos7.iso""
3. Open web console and deploy php template (php + mysql). Here it stuck after pushing 73% of image.
Pushing image 172.30.1.1:5000/php/cakephp-mysql-persistent:latest ...
Pushed 0/9 layers, 1% complete
Pushed 1/9 layers, 18% complete
Pushed 2/9 layers, 27% complete
Pushed 3/9 layers, 36% complete
Pushed 4/9 layers, 47% complete
Pushed 5/9 layers, 57% complete
Pushed 6/9 layers, 73% complete
4. Then go to the terminal and run ""minishift ssh"" and it throws the error
$ minishift ssh
E0410 14:34:32.721166 48701 ssh.go:38] Cannot establish SSH connection to the VM: exit status 255
$minishift status
Running
```
Environment
```
os : OS X
Minishift : MiniShift-1.0.0-rc.1
iso image : centos v1.0.0-rc.4
vm-driver : virtualbox
```",1.0,"minishift ssh doesn't work after PHP template deployment failure - Steps to reproduce this
```
1. Download minishift v1.0.0-rc.1
1. set cpu, memory and vm driver by using minishift config set
minishift config set cpu 4
minishift config set memory 2094
minishift config set vm-driver virtualbox
2. Then execute the command ""minishift start --iso-url https://github.com/minishift/minishift-centos-iso/releases/download/v1.0.0-rc.4/minishift-centos7.iso""
3. Open web console and deploy php template (php + mysql). Here it stuck after pushing 73% of image.
Pushing image 172.30.1.1:5000/php/cakephp-mysql-persistent:latest ...
Pushed 0/9 layers, 1% complete
Pushed 1/9 layers, 18% complete
Pushed 2/9 layers, 27% complete
Pushed 3/9 layers, 36% complete
Pushed 4/9 layers, 47% complete
Pushed 5/9 layers, 57% complete
Pushed 6/9 layers, 73% complete
4. Then go to the terminal and run ""minishift ssh"" and it throws the error
$ minishift ssh
E0410 14:34:32.721166 48701 ssh.go:38] Cannot establish SSH connection to the VM: exit status 255
$minishift status
Running
```
Environment
```
os : OS X
Minishift : MiniShift-1.0.0-rc.1
iso image : centos v1.0.0-rc.4
vm-driver : virtualbox
```",0,minishift ssh doesn t work after php template deployment failure steps to reproduce this download minishift rc set cpu memory and vm driver by using minishift config set minishift config set cpu minishift config set memory minishift config set vm driver virtualbox then execute the command minishift start iso url open web console and deploy php template php mysql here it stuck after pushing of image pushing image php cakephp mysql persistent latest pushed layers complete pushed layers complete pushed layers complete pushed layers complete pushed layers complete pushed layers complete pushed layers complete then go to the terminal and run minishift ssh and it throws the error minishift ssh ssh go cannot establish ssh connection to the vm exit status minishift status running environment os os x minishift minishift rc iso image centos rc vm driver virtualbox ,0
807,4425735034.0,IssuesEvent,2016-08-16 16:14:13,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Request for increased speed from git module when ensuring repo is on a hash,bug_report P3 waiting_on_maintainer,"##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
git module
##### ANSIBLE VERSION
N/A
##### SUMMARY
Same as https://github.com/ansible/ansible/issues/8916
> When the git module is given a hash as the version of the repo to checkout the git module should first do a quick check to see if we are already on the given hash!
> Currently even if the repo is already on the version / hash we want it seems to pull from the remote and update.
> Instead we should make sure the repo exists (pull if it doesn't) and then check the current hash of HEAD and if it matches the hash version we want, do nothing, if it doesn't, pull said hash version.
> This would drastically increase the speed of playbooks that depend on, say 100 git repos and specify a given version hash
",True,"Request for increased speed from git module when ensuring repo is on a hash - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
git module
##### ANSIBLE VERSION
N/A
##### SUMMARY
Same as https://github.com/ansible/ansible/issues/8916
> When the git module is given a hash as the version of the repo to checkout the git module should first do a quick check to see if we are already on the given hash!
> Currently even if the repo is already on the version / hash we want it seems to pull from the remote and update.
> Instead we should make sure the repo exists (pull if it doesn't) and then check the current hash of HEAD and if it matches the hash version we want, do nothing, if it doesn't, pull said hash version.
> This would drastically increase the speed of playbooks that depend on, say 100 git repos and specify a given version hash
",1,request for increased speed from git module when ensuring repo is on a hash issue type bug report component name git module ansible version n a summary same as when the git module is given a hash as the version of the repo to checkout the git module should first do a quick check to see if we are already on the given hash currently even if the repo is already on the version hash we want it seems to pull from the remote and update instead we should make sure the repo exists pull if it doesn t and then check the current hash of head and if it matches the hash version we want do nothing if it doesn t pull said hash version this would drastically increase the speed of playbooks that depend on say git repos and specify a given version hash ,1
463230,13261967788.0,IssuesEvent,2020-08-20 20:51:48,googleapis/google-api-php-client,https://api.github.com/repos/googleapis/google-api-php-client,closed,Basic Example in README.md does not work,priority: p2 type: bug type: docs,"In the current README.md the following basic example is provided, however this is not correct, I suspect it's a bit outdated.
```php
// include your composer dependencies
require_once 'vendor/autoload.php';
$client = new Google_Client();
$client->setApplicationName(""Client_Library_Examples"");
$client->setDeveloperKey(""YOUR_APP_KEY"");
$service = new Google_Service_Books($client);
$optParams = array('filter' => 'free-ebooks');
$results = $service->volumes->listVolumes('Henry David Thoreau', $optParams);
foreach ($results->getItems() as $item) {
echo $item['volumeInfo']['title'], "" \n"";
}
```
I changed the following lines to make it work, once I noticed my IDE telling me that listVolumes only expects 1 parameter.
```
$optParams = array(
'filter' => 'free-ebooks',
'q' => 'Henry David Thoreau',
);
$results = $service->volumes->listVolumes( $optParams);
```
",1.0,"Basic Example in README.md does not work - In the current README.md the following basic example is provided, however this is not correct, I suspect it's a bit outdated.
```php
// include your composer dependencies
require_once 'vendor/autoload.php';
$client = new Google_Client();
$client->setApplicationName(""Client_Library_Examples"");
$client->setDeveloperKey(""YOUR_APP_KEY"");
$service = new Google_Service_Books($client);
$optParams = array('filter' => 'free-ebooks');
$results = $service->volumes->listVolumes('Henry David Thoreau', $optParams);
foreach ($results->getItems() as $item) {
echo $item['volumeInfo']['title'], "" \n"";
}
```
I changed the following lines to make it work, once I noticed my IDE telling me that listVolumes only expects 1 parameter.
```
$optParams = array(
'filter' => 'free-ebooks',
'q' => 'Henry David Thoreau',
);
$results = $service->volumes->listVolumes( $optParams);
```
",0,basic example in readme md does not work in the current readme md the following basic example is provided however this is not correct i suspect it s a bit outdated php include your composer dependencies require once vendor autoload php client new google client client setapplicationname client library examples client setdeveloperkey your app key service new google service books client optparams array filter free ebooks results service volumes listvolumes henry david thoreau optparams foreach results getitems as item echo item n i changed the following lines to make it work once i noticed my ide telling me that listvolumes only expects parameter optparams array filter free ebooks q henry david thoreau results service volumes listvolumes optparams ,0
137629,18755114447.0,IssuesEvent,2021-11-05 09:44:20,Dima2022/node-jose,https://api.github.com/repos/Dima2022/node-jose,opened,CVE-2021-33623 (High) detected in trim-newlines-1.0.0.tgz,security vulnerability,"## CVE-2021-33623 - High Severity Vulnerability
Vulnerable Library - trim-newlines-1.0.0.tgz
Trim newlines from the start and/or end of a string
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
",0,cve high detected in trim newlines tgz cve high severity vulnerability vulnerable library trim newlines tgz trim newlines from the start and or end of a string library home page a href path to dependency file node jose package json path to vulnerable library node jose node modules trim newlines package json dependency hierarchy karma coverage tgz root library dateformat tgz meow tgz x trim newlines tgz vulnerable library found in head commit a href found in base branch master vulnerability details the trim newlines package before and x before for node js has an issue related to regular expression denial of service redos for the end method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim newlines isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree karma coverage dateformat meow trim newlines isminimumfixversionavailable true minimumfixversion trim newlines basebranches vulnerabilityidentifier cve vulnerabilitydetails the trim newlines package before and x before for node js has an issue related to regular expression denial of service redos for the end method vulnerabilityurl ,0
298,2569928864.0,IssuesEvent,2015-02-10 03:16:04,jadrake75/ng-scrolling-table,https://api.github.com/repos/jadrake75/ng-scrolling-table,closed,caption tag is removed,bug usability,"If you create a table like this using ng-scrolling-table, the caption tag is removed in the rendered HTML:
My Table
Stuff
Things
{{junk.stuff}}
{{junk.things}}
Refer to this tryit: http://www.w3schools.com/tags/tryit.asp?filename=tryhtml_caption_default_css
",True,"caption tag is removed - If you create a table like this using ng-scrolling-table, the caption tag is removed in the rendered HTML:
My Table
Stuff
Things
{{junk.stuff}}
{{junk.things}}
Refer to this tryit: http://www.w3schools.com/tags/tryit.asp?filename=tryhtml_caption_default_css
",0,caption tag is removed if you create a table like this using ng scrolling table the caption tag is removed in the rendered html my table stuff things junk stuff junk things refer to this tryit ,0
35,2582640188.0,IssuesEvent,2015-02-15 13:50:23,0robustus1/savage,https://api.github.com/repos/0robustus1/savage,opened,switch test-framework,maintainability,I don't really like the standard unit-testing syntax. We should take a look at other clojure libraries. Maybe try out Midje?,True,switch test-framework - I don't really like the standard unit-testing syntax. We should take a look at other clojure libraries. Maybe try out Midje?,1,switch test framework i don t really like the standard unit testing syntax we should take a look at other clojure libraries maybe try out midje ,1
2734,9673616417.0,IssuesEvent,2019-05-22 08:00:38,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,opened,Public Methods should not return List<> or Dictionary<>,Area: analyzer Area: maintainability feature,"Methods that are `public` visible should not return a `List<>` or `Dictionary<>`. Instead, they should return the interfaces.
Doing so allows to change the implementation when it's needed because otherwise the method is bound to always and forever return a `List<>` or `Dictionary<>`.",True,"Public Methods should not return List<> or Dictionary<> - Methods that are `public` visible should not return a `List<>` or `Dictionary<>`. Instead, they should return the interfaces.
Doing so allows to change the implementation when it's needed because otherwise the method is bound to always and forever return a `List<>` or `Dictionary<>`.",1,public methods should not return list or dictionary methods that are public visible should not return a list or dictionary instead they should return the interfaces doing so allows to change the implementation when it s needed because otherwise the method is bound to always and forever return a list or dictionary ,1
1867,6577487584.0,IssuesEvent,2017-09-12 01:15:46,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"sysctl: set in /sys, not in /proc",affects_2.0 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
sysctl
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
nocows = 1
hostfile = foo-hosts.txt
fact_caching = jsonfile
fact_caching_connection = /tmp/anscache
fact_caching_timeout = 86400
hash_behaviour = replace
```
##### OS / ENVIRONMENT
ansible: debian 8.4/64, linux 3.16.7-ckt20-1+deb8u3
managed os: debian 8.4/x64, linux 3.16.7-ckt25-2
##### SUMMARY
try to set sysctl value that is only present in /sys, not in /proc/sys fails
##### STEPS TO REPRODUCE
```
- sysctl: name=""kernel.mm.ksm.run"" value=1 sysctl_set=yes state=present
```
##### EXPECTED RESULTS
sould set the value in /sys/... and not in /proc/...
##### ACTUAL RESULTS
```
TASK [netdata : enable KSM] ****************************************************
task path: /root/devel/ansible-pb/roles/netdata/tasks/main.yml:97
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '""'""'mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `""'""'""''
PUT /tmp/tmp73kpL8 TO /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[t.domain.tld]'
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '""'""'LANG=en_US.UTF-8 GIT_COMMITTER_EMAIL=ansible@debian-workstation.domain2.tld LC_MESSAGES=en_US.UTF-8 GIT_AUTOCOMMIT=true LC_ALL=en_US.UTF-8 GIT_COMMITTER_NAME='""'""'""'""'""'""'""'""'ansible on debian-workstation.domain2.tld'""'""'""'""'""'""'""'""' /usr/bin/python /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl; rm -rf ""/root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/"" > /dev/null 2>&1'""'""''
fatal: [t.domain.tld]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""ignoreerrors"": false, ""name"": ""kernel.mm.ksm.run"", ""reload"": true, ""state"": ""present"", ""sysctl_file"": ""/etc/sysctl.conf"", ""sysctl_set"": true, ""value"": ""1""}, ""module_name"": ""sysctl""}, ""msg"": ""setting kernel.mm.ksm.run failed: sysctl: cannot stat /proc/sys/kernel/mm/ksm/run: No such file or directory\n""}
```
info about the sys/proc files:
```
root@t ~ # ls -la /proc/sys/kernel/mm/ksm/run
ls: cannot access /proc/sys/kernel/mm/ksm/run: No such file or directory
root@t ~ # ls -la /sys/kernel/mm/ksm/run
-rw-r--r-- 1 root root 4096 Apr 11 13:25 /sys/kernel/mm/ksm/run
```
",True,"sysctl: set in /sys, not in /proc -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
sysctl
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
nocows = 1
hostfile = foo-hosts.txt
fact_caching = jsonfile
fact_caching_connection = /tmp/anscache
fact_caching_timeout = 86400
hash_behaviour = replace
```
##### OS / ENVIRONMENT
ansible: debian 8.4/64, linux 3.16.7-ckt20-1+deb8u3
managed os: debian 8.4/x64, linux 3.16.7-ckt25-2
##### SUMMARY
try to set sysctl value that is only present in /sys, not in /proc/sys fails
##### STEPS TO REPRODUCE
```
- sysctl: name=""kernel.mm.ksm.run"" value=1 sysctl_set=yes state=present
```
##### EXPECTED RESULTS
sould set the value in /sys/... and not in /proc/...
##### ACTUAL RESULTS
```
TASK [netdata : enable KSM] ****************************************************
task path: /root/devel/ansible-pb/roles/netdata/tasks/main.yml:97
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '""'""'mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710 `""'""'""''
PUT /tmp/tmp73kpL8 TO /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[t.domain.tld]'
ESTABLISH SSH CONNECTION FOR USER: root
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt t.domain.tld '/bin/sh -c '""'""'LANG=en_US.UTF-8 GIT_COMMITTER_EMAIL=ansible@debian-workstation.domain2.tld LC_MESSAGES=en_US.UTF-8 GIT_AUTOCOMMIT=true LC_ALL=en_US.UTF-8 GIT_COMMITTER_NAME='""'""'""'""'""'""'""'""'ansible on debian-workstation.domain2.tld'""'""'""'""'""'""'""'""' /usr/bin/python /root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/sysctl; rm -rf ""/root/.ansible/tmp/ansible-tmp-1460374066.43-102994390096710/"" > /dev/null 2>&1'""'""''
fatal: [t.domain.tld]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""ignoreerrors"": false, ""name"": ""kernel.mm.ksm.run"", ""reload"": true, ""state"": ""present"", ""sysctl_file"": ""/etc/sysctl.conf"", ""sysctl_set"": true, ""value"": ""1""}, ""module_name"": ""sysctl""}, ""msg"": ""setting kernel.mm.ksm.run failed: sysctl: cannot stat /proc/sys/kernel/mm/ksm/run: No such file or directory\n""}
```
info about the sys/proc files:
```
root@t ~ # ls -la /proc/sys/kernel/mm/ksm/run
ls: cannot access /proc/sys/kernel/mm/ksm/run: No such file or directory
root@t ~ # ls -la /sys/kernel/mm/ksm/run
-rw-r--r-- 1 root root 4096 Apr 11 13:25 /sys/kernel/mm/ksm/run
```
",1,sysctl set in sys not in proc issue type bug report component name sysctl ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration nocows hostfile foo hosts txt fact caching jsonfile fact caching connection tmp anscache fact caching timeout hash behaviour replace os environment ansible debian linux managed os debian linux summary try to set sysctl value that is only present in sys not in proc sys fails steps to reproduce sysctl name kernel mm ksm run value sysctl set yes state present expected results sould set the value in sys and not in proc actual results task task path root devel ansible pb roles netdata tasks main yml establish ssh connection for user root ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath root ansible cp ansible ssh h p r tt t domain tld bin sh c mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp to root ansible tmp ansible tmp sysctl ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath root ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath root ansible cp ansible ssh h p r tt t domain tld bin sh c lang en us utf git committer email ansible debian workstation tld lc messages en us utf git autocommit true lc all en us utf git committer name ansible on debian workstation tld usr bin python root ansible tmp ansible tmp sysctl rm rf root ansible tmp ansible tmp dev null fatal failed changed false failed true invocation module args ignoreerrors false name kernel mm ksm run reload true state present sysctl file etc sysctl conf sysctl set true value module name sysctl msg setting kernel mm ksm run failed sysctl cannot stat proc sys kernel mm ksm run no such file or directory n info about the sys proc files root t ls la proc sys kernel mm ksm run ls cannot access proc sys kernel mm ksm run no such file or directory root t ls la sys kernel mm ksm run rw r r root root apr sys kernel mm ksm run ,1
1593,6572373301.0,IssuesEvent,2017-09-11 01:48:43,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,nmcli module throws IndexError in Subprocess,affects_2.1 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nmcli
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /Users/.../ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
remote_user=root
#host_key_checking=False
# Uncomment to get profiling information for executed tasks
#callback_whitelist = profile_tasks
#retry_files_enabled = False
retry_files_save_path = .
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
macOS 10.11.6
Remote target:
```
# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
```
##### SUMMARY
##### STEPS TO REPRODUCE
```
# Enable Ansible's nmcli module
- name: Provide python networkmanager API
yum:
name: NetworkManager-glib
- name: setup VIP
nmcli:
state: present
conn_name: lo
ip4: ""{{ VIP }}/32""
```
##### EXPECTED RESULTS
The ip addr should be added to the existing interface lo.
##### ACTUAL RESULTS
```
TASK [broker : Provide python networkmanager API] ******************************
ok: [local-broker-1]
TASK [broker : setup VIP] ********************************
fatal: [local-broker-1]: FAILED! => {""changed"": false, ""cmd"": """", ""failed"": true, ""msg"": ""Traceback (most recent call last):\n File \""/tmp/ansible_SfwO6E/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 2093, in run_command\n cmd = subprocess.Popen(args, **kwargs)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 711, in __init__\n errread, errwrite)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 1207, in _execute_child\n executable = args[0]\nIndexError: list index out of range\n"", ""rc"": 257}
```
",True,"nmcli module throws IndexError in Subprocess - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nmcli
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /Users/.../ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
remote_user=root
#host_key_checking=False
# Uncomment to get profiling information for executed tasks
#callback_whitelist = profile_tasks
#retry_files_enabled = False
retry_files_save_path = .
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
macOS 10.11.6
Remote target:
```
# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
```
##### SUMMARY
##### STEPS TO REPRODUCE
```
# Enable Ansible's nmcli module
- name: Provide python networkmanager API
yum:
name: NetworkManager-glib
- name: setup VIP
nmcli:
state: present
conn_name: lo
ip4: ""{{ VIP }}/32""
```
##### EXPECTED RESULTS
The ip addr should be added to the existing interface lo.
##### ACTUAL RESULTS
```
TASK [broker : Provide python networkmanager API] ******************************
ok: [local-broker-1]
TASK [broker : setup VIP] ********************************
fatal: [local-broker-1]: FAILED! => {""changed"": false, ""cmd"": """", ""failed"": true, ""msg"": ""Traceback (most recent call last):\n File \""/tmp/ansible_SfwO6E/ansible_modlib.zip/ansible/module_utils/basic.py\"", line 2093, in run_command\n cmd = subprocess.Popen(args, **kwargs)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 711, in __init__\n errread, errwrite)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 1207, in _execute_child\n executable = args[0]\nIndexError: list index out of range\n"", ""rc"": 257}
```
",1,nmcli module throws indexerror in subprocess issue type bug report component name nmcli ansible version ansible config file users ansible ansible cfg configured module search path default w o overrides configuration remote user root host key checking false uncomment to get profiling information for executed tasks callback whitelist profile tasks retry files enabled false retry files save path pipelining true os environment macos remote target cat etc centos release centos linux release core summary steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used enable ansible s nmcli module name provide python networkmanager api yum name networkmanager glib name setup vip nmcli state present conn name lo vip expected results the ip addr should be added to the existing interface lo actual results task ok task fatal failed changed false cmd failed true msg traceback most recent call last n file tmp ansible ansible modlib zip ansible module utils basic py line in run command n cmd subprocess popen args kwargs n file usr subprocess py line in init n errread errwrite n file usr subprocess py line in execute child n executable args nindexerror list index out of range n rc ,1
514899,14946342387.0,IssuesEvent,2021-01-26 06:34:38,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,mail.google.com - site is not usable,browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical,"
**URL**: https://mail.google.com/mail/u/0/?tab=wm
**Browser / Version**: Firefox 85.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Internet Explorer
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
the page loads, but it appears blank
View the screenshotBrowser Configuration
gfx.webrender.all: false
gfx.webrender.blob-images: true
gfx.webrender.enabled: false
image.mem.shared: true
buildID: 20210118153634
channel: release
hasTouchScreen: false
mixed active content blocked: false
mixed passive content blocked: false
tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2021/1/c68eefd5-7ee3-4a43-8121-4199b0e090d2)
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"mail.google.com - site is not usable -
**URL**: https://mail.google.com/mail/u/0/?tab=wm
**Browser / Version**: Firefox 85.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Internet Explorer
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
the page loads, but it appears blank
View the screenshotBrowser Configuration
gfx.webrender.all: false
gfx.webrender.blob-images: true
gfx.webrender.enabled: false
image.mem.shared: true
buildID: 20210118153634
channel: release
hasTouchScreen: false
mixed active content blocked: false
mixed passive content blocked: false
tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2021/1/c68eefd5-7ee3-4a43-8121-4199b0e090d2)
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,mail google com site is not usable url browser version firefox operating system windows tested another browser yes internet explorer problem type site is not usable description page not loading correctly steps to reproduce the page loads but it appears blank view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ ,0
2509,8655459870.0,IssuesEvent,2018-11-27 16:00:31,codestation/qcma,https://api.github.com/repos/codestation/qcma,closed,systray doesn't seem to show up on debian,unmaintained,"I know appindicator is no longre built but for some reason it doesn't show up I get notifications but the icon isn't shown. I do have other QT based apps that I guess are using systray and their icons are shown easily.
I am on debian and the startup seems normal.
Starting Qcma 0.4.1
PTP: Opening session
Total entries added to the database: 429
Vita connected, id: 4471254501417XXXX
I am curious if it saying PTP i the reason behind the io being ~4MB/s with slight peaks at ~5MB/s with random stops and starts.
Fwiw My vita is on 3.51
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
Package: qcma
Status: install ok installed
Priority: extra
Section: utils
Installed-Size: 595
Maintainer: codestation
Architecture: amd64
Version: 0.4.1
I should also say that the thing without the option to get to settings I cant seem to updat emy vita I kept putting off updating to 3.6 as I cant live without pspkvm(rarely play it but still need it)
I tried to update to 3.6 a ltitle while after henaku came out and it would've worked I guess but I decided to hold off for awhile and now it won't work for some reason so I don't know if it's a weird libraries conflict or something.",True,"systray doesn't seem to show up on debian - I know appindicator is no longre built but for some reason it doesn't show up I get notifications but the icon isn't shown. I do have other QT based apps that I guess are using systray and their icons are shown easily.
I am on debian and the startup seems normal.
Starting Qcma 0.4.1
PTP: Opening session
Total entries added to the database: 429
Vita connected, id: 4471254501417XXXX
I am curious if it saying PTP i the reason behind the io being ~4MB/s with slight peaks at ~5MB/s with random stops and starts.
Fwiw My vita is on 3.51
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
Package: qcma
Status: install ok installed
Priority: extra
Section: utils
Installed-Size: 595
Maintainer: codestation
Architecture: amd64
Version: 0.4.1
I should also say that the thing without the option to get to settings I cant seem to updat emy vita I kept putting off updating to 3.6 as I cant live without pspkvm(rarely play it but still need it)
I tried to update to 3.6 a ltitle while after henaku came out and it would've worked I guess but I decided to hold off for awhile and now it won't work for some reason so I don't know if it's a weird libraries conflict or something.",1,systray doesn t seem to show up on debian i know appindicator is no longre built but for some reason it doesn t show up i get notifications but the icon isn t shown i do have other qt based apps that i guess are using systray and their icons are shown easily i am on debian and the startup seems normal starting qcma ptp opening session total entries added to the database vita connected id i am curious if it saying ptp i the reason behind the io being s with slight peaks at s with random stops and starts fwiw my vita is on distributor id debian description debian gnu linux jessie release codename jessie package qcma status install ok installed priority extra section utils installed size maintainer codestation architecture version i should also say that the thing without the option to get to settings i cant seem to updat emy vita i kept putting off updating to as i cant live without pspkvm rarely play it but still need it i tried to update to a ltitle while after henaku came out and it would ve worked i guess but i decided to hold off for awhile and now it won t work for some reason so i don t know if it s a weird libraries conflict or something ,1
191347,14593932920.0,IssuesEvent,2020-12-20 02:06:19,github-vet/rangeloop-pointer-findings,https://api.github.com/repos/github-vet/rangeloop-pointer-findings,closed,genuinetools/binctr: vendor/github.com/containerd/containerd/metadata/images_test.go; 3 LoC,fresh test tiny vendored,"
Found a possible issue in [genuinetools/binctr](https://www.github.com/genuinetools/binctr) at [vendor/github.com/containerd/containerd/metadata/images_test.go](https://github.com/genuinetools/binctr/blob/66e393036fad10eb6b9252036d3b90d9ccff7660/vendor/github.com/containerd/containerd/metadata/images_test.go#L128-L130)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to result at line 129 may start a goroutine
[Click here to see the code in its original context.](https://github.com/genuinetools/binctr/blob/66e393036fad10eb6b9252036d3b90d9ccff7660/vendor/github.com/containerd/containerd/metadata/images_test.go#L128-L130)
Click here to show the 3 line(s) of Go which triggered the analyzer.
```go
for _, result := range results {
checkImagesEqual(t, &result, testset[result.Name], ""list results did not match"")
}
```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 66e393036fad10eb6b9252036d3b90d9ccff7660
",1.0,"genuinetools/binctr: vendor/github.com/containerd/containerd/metadata/images_test.go; 3 LoC -
Found a possible issue in [genuinetools/binctr](https://www.github.com/genuinetools/binctr) at [vendor/github.com/containerd/containerd/metadata/images_test.go](https://github.com/genuinetools/binctr/blob/66e393036fad10eb6b9252036d3b90d9ccff7660/vendor/github.com/containerd/containerd/metadata/images_test.go#L128-L130)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to result at line 129 may start a goroutine
[Click here to see the code in its original context.](https://github.com/genuinetools/binctr/blob/66e393036fad10eb6b9252036d3b90d9ccff7660/vendor/github.com/containerd/containerd/metadata/images_test.go#L128-L130)
Click here to show the 3 line(s) of Go which triggered the analyzer.
```go
for _, result := range results {
checkImagesEqual(t, &result, testset[result.Name], ""list results did not match"")
}
```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 66e393036fad10eb6b9252036d3b90d9ccff7660
",0,genuinetools binctr vendor github com containerd containerd metadata images test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to result at line may start a goroutine click here to show the line s of go which triggered the analyzer go for result range results checkimagesequal t result testset list results did not match leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id ,0
150062,5735812619.0,IssuesEvent,2017-04-22 01:48:54,zeit/next.js,https://api.github.com/repos/zeit/next.js,closed,Resolve Webpack deprecation warning for loaderUtils.parseQuery(),Priority: Low,"Seeing the following deprecation warning after updating to the latest version of Next beta. Can anyone else confirm?
```
DeprecationWarning: loaderUtils.parseQuery() received a non-string value which can be problematic, see https://github.com/webpack/loader-utils/issues/56 parseQuery() will be replaced with getOptions() in the next major version of loader-utils.
````",1.0,"Resolve Webpack deprecation warning for loaderUtils.parseQuery() - Seeing the following deprecation warning after updating to the latest version of Next beta. Can anyone else confirm?
```
DeprecationWarning: loaderUtils.parseQuery() received a non-string value which can be problematic, see https://github.com/webpack/loader-utils/issues/56 parseQuery() will be replaced with getOptions() in the next major version of loader-utils.
````",0,resolve webpack deprecation warning for loaderutils parsequery seeing the following deprecation warning after updating to the latest version of next beta can anyone else confirm deprecationwarning loaderutils parsequery received a non string value which can be problematic see parsequery will be replaced with getoptions in the next major version of loader utils ,0
1010,4787260955.0,IssuesEvent,2016-10-29 22:08:53,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Return codes for command-module whould be configurable,affects_2.2 feature_idea waiting_on_maintainer,"##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
command-module
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/24 14:13:42 (GMT +000)
```
##### SUMMARY
Allowed return codes to be successful should be configurable in commands module
e.g. the command `grep` has three possible return codes: 0, 1, 2
but on only 2 signals an error.
So it should be possible, to configure 0 AND 1 as ""good"" return codes.",True,"Return codes for command-module whould be configurable - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
command-module
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (detached HEAD bce9bfce51) last updated 2016/10/24 14:13:42 (GMT +000)
```
##### SUMMARY
Allowed return codes to be successful should be configurable in commands module
e.g. the command `grep` has three possible return codes: 0, 1, 2
but on only 2 signals an error.
So it should be possible, to configure 0 AND 1 as ""good"" return codes.",1,return codes for command module whould be configurable issue type feature idea component name command module ansible version ansible detached head last updated gmt summary allowed return codes to be successful should be configurable in commands module e g the command grep has three possible return codes but on only signals an error so it should be possible to configure and as good return codes ,1
4820,24847836015.0,IssuesEvent,2022-10-26 17:19:54,centerofci/mathesar,https://api.github.com/repos/centerofci/mathesar,closed,RecursionError in records endpoint,type: bug work: backend status: ready restricted: maintainers,"## Description
I've been getting this error for a few tables in my environment.
* These tables have been created using the 'Create new table' button, not using file import.
* These tables have no rows.
* The tables have a link to another table.
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py"", line 47, in inner
response = get_response(request)
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py"", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File ""/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py"", line 54, in wrapped_view
return view_func(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py"", line 125, in view
return self.dispatch(request, *args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 509, in dispatch
response = self.handle_exception(exc)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 466, in handle_exception
response = exception_handler(exc, context)
File ""/code/mathesar/exception_handlers.py"", line 55, in mathesar_exception_handler
raise exc
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 506, in dispatch
response = handler(request, *args, **kwargs)
File ""/code/mathesar/api/db/viewsets/records.py"", line 67, in list
records = paginator.paginate_queryset(
File ""/code/mathesar/api/pagination.py"", line 82, in paginate_queryset
preview_metadata, preview_columns = get_preview_info(table.id)
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 70, in get_preview_info
fk_constraints = [
File ""/code/mathesar/utils/preview.py"", line 73, in
if table_constraint.type == ConstraintType.FOREIGN_KEY.value
File ""/code/mathesar/models/base.py"", line 774, in type
return constraint_utils.get_constraint_type_from_char(self._constraint_record['contype'])
File ""/code/mathesar/models/base.py"", line 766, in _constraint_record
return get_constraint_record_from_oid(self.oid, engine)
File ""/code/db/constraints/operations/select.py"", line 33, in get_constraint_record_from_oid
pg_constraint = get_pg_catalog_table(""pg_constraint"", engine, metadata=metadata)
File ""/code/db/utils.py"", line 92, in warning_ignored_func
return f(*args, **kwargs)
File ""/code/db/utils.py"", line 99, in get_pg_catalog_table
return sqlalchemy.Table(table_name, metadata, autoload_with=engine, schema='pg_catalog')
File """", line 2, in __new__
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py"", line 298, in warned
return fn(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 600, in __new__
metadata._remove_table(name, schema)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 70, in __exit__
compat.raise_(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 595, in __new__
table._init(name, metadata, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 670, in _init
self._autoload(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 705, in _autoload
conn_insp.reflect_table(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 774, in reflect_table
for col_d in self.get_columns(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 497, in get_columns
col_defs = self.dialect.get_columns(
File """", line 2, in get_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3585, in get_columns
table_oid = self.get_table_oid(
File """", line 2, in get_table_oid
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3462, in get_table_oid
c = connection.execute(s, dict(table_name=table_name, schema=schema))
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py"", line 280, in execute
return self._execute_20(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py"", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1451, in _execute_clauseelement
ret = self._execute_context(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1813, in _execute_context
self._handle_dbapi_exception(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1998, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1786, in _execute_context
result = context._setup_result_proxy()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1406, in _setup_result_proxy
result = self._setup_dml_or_text_result()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1494, in _setup_dml_or_text_result
result = _cursor.CursorResult(self, strategy, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1253, in __init__
metadata = self._init_metadata(context, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1310, in _init_metadata
metadata = metadata._adapt_to_context(context)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 136, in _adapt_to_context
invoked_statement._exported_columns_iterator()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 126, in _exported_columns_iterator
return iter(self.exported_columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 2870, in exported_columns
return self.selected_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 6354, in selected_columns
return ColumnCollection(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1128, in __init__
self._initial_populate(columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1131, in _initial_populate
self._populate_separate_keys(iter_)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1227, in _populate_separate_keys
self._colset.update(c for k, c in self._collection)
Exception Type: RecursionError at /api/db/v0/tables/2/records/
Exception Value: maximum recursion depth exceeded
```
I'm not sure about the cause and it's occuring consistently for me but unable to reproduce it on staging.",True,"RecursionError in records endpoint - ## Description
I've been getting this error for a few tables in my environment.
* These tables have been created using the 'Create new table' button, not using file import.
* These tables have no rows.
* The tables have a link to another table.
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py"", line 47, in inner
response = get_response(request)
File ""/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py"", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File ""/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py"", line 54, in wrapped_view
return view_func(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py"", line 125, in view
return self.dispatch(request, *args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 509, in dispatch
response = self.handle_exception(exc)
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 466, in handle_exception
response = exception_handler(exc, context)
File ""/code/mathesar/exception_handlers.py"", line 55, in mathesar_exception_handler
raise exc
File ""/usr/local/lib/python3.9/site-packages/rest_framework/views.py"", line 506, in dispatch
response = handler(request, *args, **kwargs)
File ""/code/mathesar/api/db/viewsets/records.py"", line 67, in list
records = paginator.paginate_queryset(
File ""/code/mathesar/api/pagination.py"", line 82, in paginate_queryset
preview_metadata, preview_columns = get_preview_info(table.id)
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File ""/code/mathesar/utils/preview.py"", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File ""/code/mathesar/utils/preview.py"", line 70, in get_preview_info
fk_constraints = [
File ""/code/mathesar/utils/preview.py"", line 73, in
if table_constraint.type == ConstraintType.FOREIGN_KEY.value
File ""/code/mathesar/models/base.py"", line 774, in type
return constraint_utils.get_constraint_type_from_char(self._constraint_record['contype'])
File ""/code/mathesar/models/base.py"", line 766, in _constraint_record
return get_constraint_record_from_oid(self.oid, engine)
File ""/code/db/constraints/operations/select.py"", line 33, in get_constraint_record_from_oid
pg_constraint = get_pg_catalog_table(""pg_constraint"", engine, metadata=metadata)
File ""/code/db/utils.py"", line 92, in warning_ignored_func
return f(*args, **kwargs)
File ""/code/db/utils.py"", line 99, in get_pg_catalog_table
return sqlalchemy.Table(table_name, metadata, autoload_with=engine, schema='pg_catalog')
File """", line 2, in __new__
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py"", line 298, in warned
return fn(*args, **kwargs)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 600, in __new__
metadata._remove_table(name, schema)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 70, in __exit__
compat.raise_(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 595, in __new__
table._init(name, metadata, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 670, in _init
self._autoload(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py"", line 705, in _autoload
conn_insp.reflect_table(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 774, in reflect_table
for col_d in self.get_columns(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 497, in get_columns
col_defs = self.dialect.get_columns(
File """", line 2, in get_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3585, in get_columns
table_oid = self.get_table_oid(
File """", line 2, in get_table_oid
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py"", line 55, in cache
ret = fn(self, con, *args, **kw)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py"", line 3462, in get_table_oid
c = connection.execute(s, dict(table_name=table_name, schema=schema))
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py"", line 280, in execute
return self._execute_20(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py"", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1451, in _execute_clauseelement
ret = self._execute_context(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1813, in _execute_context
self._handle_dbapi_exception(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1998, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py"", line 207, in raise_
raise exception
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py"", line 1786, in _execute_context
result = context._setup_result_proxy()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1406, in _setup_result_proxy
result = self._setup_dml_or_text_result()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py"", line 1494, in _setup_dml_or_text_result
result = _cursor.CursorResult(self, strategy, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1253, in __init__
metadata = self._init_metadata(context, cursor_description)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 1310, in _init_metadata
metadata = metadata._adapt_to_context(context)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py"", line 136, in _adapt_to_context
invoked_statement._exported_columns_iterator()
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 126, in _exported_columns_iterator
return iter(self.exported_columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 2870, in exported_columns
return self.selected_columns
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py"", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py"", line 6354, in selected_columns
return ColumnCollection(
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1128, in __init__
self._initial_populate(columns)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1131, in _initial_populate
self._populate_separate_keys(iter_)
File ""/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py"", line 1227, in _populate_separate_keys
self._colset.update(c for k, c in self._collection)
Exception Type: RecursionError at /api/db/v0/tables/2/records/
Exception Value: maximum recursion depth exceeded
```
I'm not sure about the cause and it's occuring consistently for me but unable to reproduce it on staging.",1,recursionerror in records endpoint description i ve been getting this error for a few tables in my environment these tables have been created using the create new table button not using file import these tables have no rows the tables have a link to another table environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file code mathesar api db viewsets records py line in list records paginator paginate queryset file code mathesar api pagination py line in paginate queryset preview metadata preview columns get preview info table id file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info fk constraints file code mathesar utils preview py line in if table constraint type constrainttype foreign key value file code mathesar models base py line in type return constraint utils get constraint type from char self constraint record file code mathesar models base py line in constraint record return get constraint record from oid self oid engine file code db constraints operations select py line in get constraint record from oid pg constraint get pg catalog table pg constraint engine metadata metadata file code db utils py line in warning ignored func return f args kwargs file code db utils py line in get pg catalog table return sqlalchemy table table name metadata autoload with engine schema pg catalog file line in new file usr local lib site packages sqlalchemy util deprecations py line in warned return fn args kwargs file usr local lib site packages sqlalchemy sql schema py line in new metadata remove table name schema file usr local lib site packages sqlalchemy util langhelpers py line in exit compat raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy sql schema py line in new table init name metadata args kw file usr local lib site packages sqlalchemy sql schema py line in init self autoload file usr local lib site packages sqlalchemy sql schema py line in autoload conn insp reflect table file usr local lib site packages sqlalchemy engine reflection py line in reflect table for col d in self get columns file usr local lib site packages sqlalchemy engine reflection py line in get columns col defs self dialect get columns file line in get columns file usr local lib site packages sqlalchemy engine reflection py line in cache ret fn self con args kw file usr local lib site packages sqlalchemy dialects postgresql base py line in get columns table oid self get table oid file line in get table oid file usr local lib site packages sqlalchemy engine reflection py line in cache ret fn self con args kw file usr local lib site packages sqlalchemy dialects postgresql base py line in get table oid c connection execute s dict table name table name schema schema file usr local lib site packages sqlalchemy future engine py line in execute return self execute file usr local lib site packages sqlalchemy engine base py line in execute return meth self args kwargs execution options file usr local lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement file usr local lib site packages sqlalchemy engine base py line in execute clauseelement ret self execute context file usr local lib site packages sqlalchemy engine base py line in execute context self handle dbapi exception file usr local lib site packages sqlalchemy engine base py line in handle dbapi exception util raise exc info with traceback exc info file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy engine base py line in execute context result context setup result proxy file usr local lib site packages sqlalchemy engine default py line in setup result proxy result self setup dml or text result file usr local lib site packages sqlalchemy engine default py line in setup dml or text result result cursor cursorresult self strategy cursor description file usr local lib site packages sqlalchemy engine cursor py line in init metadata self init metadata context cursor description file usr local lib site packages sqlalchemy engine cursor py line in init metadata metadata metadata adapt to context context file usr local lib site packages sqlalchemy engine cursor py line in adapt to context invoked statement exported columns iterator file usr local lib site packages sqlalchemy sql selectable py line in exported columns iterator return iter self exported columns file usr local lib site packages sqlalchemy sql selectable py line in exported columns return self selected columns file usr local lib site packages sqlalchemy util langhelpers py line in get obj dict result self fget obj file usr local lib site packages sqlalchemy sql selectable py line in selected columns return columncollection file usr local lib site packages sqlalchemy sql base py line in init self initial populate columns file usr local lib site packages sqlalchemy sql base py line in initial populate self populate separate keys iter file usr local lib site packages sqlalchemy sql base py line in populate separate keys self colset update c for k c in self collection exception type recursionerror at api db tables records exception value maximum recursion depth exceeded i m not sure about the cause and it s occuring consistently for me but unable to reproduce it on staging ,1
4839,24949840814.0,IssuesEvent,2022-11-01 05:48:39,usefulmove/comp,https://api.github.com/repos/usefulmove/comp,opened,Improve maintainability of pop_stack family of stack manipulation functions,good first issue maintainability,"The maintainability of the `pop_stack` family of functions can be improved (e.g., better code reuse) by passing behavior as functions and-or generic functions.",True,"Improve maintainability of pop_stack family of stack manipulation functions - The maintainability of the `pop_stack` family of functions can be improved (e.g., better code reuse) by passing behavior as functions and-or generic functions.",1,improve maintainability of pop stack family of stack manipulation functions the maintainability of the pop stack family of functions can be improved e g better code reuse by passing behavior as functions and or generic functions ,1
379313,11219841016.0,IssuesEvent,2020-01-07 14:42:04,jenkins-x/jx,https://api.github.com/repos/jenkins-x/jx,closed,Remove Knative Build and JFR from Jenkins X installation,area/fox area/install area/knative kind/enhancement priority/important-longterm,"### Summary
The knative build and JFR should be removed from Jenkins X installation, as soon as can be used as a buildpack in JenkinsX/Tekton pipeline. #4229
The the moment this topology is using a old fork of prow which is unmaintained.
### Steps to reproduce the behavior
### Expected behavior
### Actual behavior
### Jx version
The output of `jx version` is:
```
COPY OUTPUT HERE
```
### Jenkins type
- [ ] Next Generation (Tekton + Prow)
- [ ] Classic Jenkins
- [x] Serverless Jenkins (JenkinsFileRunner + Prow)
### Kubernetes cluster
### Operating system / Environment
",1.0,"Remove Knative Build and JFR from Jenkins X installation - ### Summary
The knative build and JFR should be removed from Jenkins X installation, as soon as can be used as a buildpack in JenkinsX/Tekton pipeline. #4229
The the moment this topology is using a old fork of prow which is unmaintained.
### Steps to reproduce the behavior
### Expected behavior
### Actual behavior
### Jx version
The output of `jx version` is:
```
COPY OUTPUT HERE
```
### Jenkins type
- [ ] Next Generation (Tekton + Prow)
- [ ] Classic Jenkins
- [x] Serverless Jenkins (JenkinsFileRunner + Prow)
### Kubernetes cluster
### Operating system / Environment
",0,remove knative build and jfr from jenkins x installation summary the knative build and jfr should be removed from jenkins x installation as soon as can be used as a buildpack in jenkinsx tekton pipeline the the moment this topology is using a old fork of prow which is unmaintained steps to reproduce the behavior expected behavior actual behavior jx version the output of jx version is copy output here jenkins type select which installation type are you using next generation tekton prow classic jenkins serverless jenkins jenkinsfilerunner prow kubernetes cluster what kind of kubernetes cluster are you using how did you create it operating system environment in which environment are you running the jx cli ,0
5717,30218783472.0,IssuesEvent,2023-07-05 17:36:01,camunda/zeebe,https://api.github.com/repos/camunda/zeebe,closed,Update Elasticsearch client to 8.7 ,kind/toil good first issue area/maintainability component/exporter onboarding target:8.3,"Zeebe 8.3 is [supposed to](https://confluence.camunda.com/display/HAN/Camunda+8+Supported+Environments) support Elasticsearch 8.7+
The curator is no longer compatible with ES8, so we've added the ability to configure an ILM policy:
- https://github.com/camunda/zeebe/pull/12147
- https://github.com/camunda/zeebe/issues/12132
What is still left over is to update the Elasticsearch client to the latest 8.7 version.
This is not updated automatically, due to our [dependabot configuration](https://github.com/camunda/zeebe/blob/main/.github/dependabot.yml#L20C1-L23), so we'll need to update it manually:
- https://github.com/camunda/zeebe/pull/9308
- https://github.com/camunda/zeebe/pull/11563
Also see, the last dependabot PR for this:
- https://github.com/camunda/zeebe/issues/9287",True,"Update Elasticsearch client to 8.7 - Zeebe 8.3 is [supposed to](https://confluence.camunda.com/display/HAN/Camunda+8+Supported+Environments) support Elasticsearch 8.7+
The curator is no longer compatible with ES8, so we've added the ability to configure an ILM policy:
- https://github.com/camunda/zeebe/pull/12147
- https://github.com/camunda/zeebe/issues/12132
What is still left over is to update the Elasticsearch client to the latest 8.7 version.
This is not updated automatically, due to our [dependabot configuration](https://github.com/camunda/zeebe/blob/main/.github/dependabot.yml#L20C1-L23), so we'll need to update it manually:
- https://github.com/camunda/zeebe/pull/9308
- https://github.com/camunda/zeebe/pull/11563
Also see, the last dependabot PR for this:
- https://github.com/camunda/zeebe/issues/9287",1,update elasticsearch client to zeebe is support elasticsearch the curator is no longer compatible with so we ve added the ability to configure an ilm policy what is still left over is to update the elasticsearch client to the latest version this is not updated automatically due to our so we ll need to update it manually also see the last dependabot pr for this ,1
606221,18757628215.0,IssuesEvent,2021-11-05 12:56:35,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,"www.google.com - ""Street View"" image doesn't load",browser-firefox priority-critical priority-normal severity-critical engine-gecko ml-needsdiagnosis-false,"
**URL**: https://www.google.com/search?q=Wiejska+94%2C+Inwa%C5%82d
**Browser / Version**: Firefox 79.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Image background for Google street view is invisible due incorrect linear-gradient generated by Gooogle:
`linear-gradient(top, rgba(0,0,0,0), rgba(0,0,0,.5))`
Google ""script"" for Firefox 57+ ommit needed `-moz-` prefix also no try use corect syntax:
`linear-gradient(to bottom, rgba(0,0,0,0), rgba(0,0,0,.5))`
Supported by Firefox Quantum.
View the screenshotBrowser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",2.0,"www.google.com - ""Street View"" image doesn't load -
**URL**: https://www.google.com/search?q=Wiejska+94%2C+Inwa%C5%82d
**Browser / Version**: Firefox 79.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Image background for Google street view is invisible due incorrect linear-gradient generated by Gooogle:
`linear-gradient(top, rgba(0,0,0,0), rgba(0,0,0,.5))`
Google ""script"" for Firefox 57+ ommit needed `-moz-` prefix also no try use corect syntax:
`linear-gradient(to bottom, rgba(0,0,0,0), rgba(0,0,0,.5))`
Supported by Firefox Quantum.
View the screenshotBrowser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, street view image doesn t load url browser version firefox operating system windows tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce image background for google street view is invisible due incorrect linear gradient generated by gooogle linear gradient top rgba rgba google script for firefox ommit needed moz prefix also no try use corect syntax linear gradient to bottom rgba rgba supported by firefox quantum view the screenshot img alt screenshot src browser configuration none from with ❤️ ,0
1313,5559438429.0,IssuesEvent,2017-03-24 16:56:30,WhitestormJS/whitestorm.js,https://api.github.com/repos/WhitestormJS/whitestorm.js,closed,Update CONTRIBUTING.md guide,FIXME MAINTAINANCE,"Some use cases of [`CONTRIBUTING.md`](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md) file are deprecated since of refactoring changes in V2 and we should update them:
- [x] [**CLI**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#cli)
- [x] Fix commands, add new ones.
- [x] Force the use of **npm commands**
- [x] [**Commit names**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#commiting)
- [x] New rule: no dot after short codes
- Use of **WIP** is now undesirable.
- [x] [**Changelog**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#-adding-changes-to-changelogmd) - is now deprecated. Use [github releases](https://github.com/WhitestormJS/whitestorm.js/releases)
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [x] Proposal/Enhancement
- [ ] Question
------
Tested on:
###### --- Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### --- Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### --- IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
",True,"Update CONTRIBUTING.md guide - Some use cases of [`CONTRIBUTING.md`](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md) file are deprecated since of refactoring changes in V2 and we should update them:
- [x] [**CLI**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#cli)
- [x] Fix commands, add new ones.
- [x] Force the use of **npm commands**
- [x] [**Commit names**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#commiting)
- [x] New rule: no dot after short codes
- Use of **WIP** is now undesirable.
- [x] [**Changelog**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#-adding-changes-to-changelogmd) - is now deprecated. Use [github releases](https://github.com/WhitestormJS/whitestorm.js/releases)
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [x] Proposal/Enhancement
- [ ] Question
------
Tested on:
###### --- Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### --- Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### --- IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
",1,update contributing md guide some use cases of file are deprecated since of refactoring changes in and we should update them fix commands add new ones force the use of npm commands new rule no dot after short codes use of wip is now undesirable is now deprecated use version x x x x issue type bug proposal enhancement question tested on desktop chrome chrome canary chrome dev channel firefox opera microsoft ie microsoft edge android chrome firefox opera ios chrome firefox opera ,1
809628,30202465300.0,IssuesEvent,2023-07-05 07:08:05,googleapis/google-cloud-go,https://api.github.com/repos/googleapis/google-cloud-go,reopened,storage: TestRetryConformance failed,type: bug api: storage priority: p1 flakybot: issue flakybot: flaky,"Note: #7968 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 5cdf4e2668ef73b6b38c3d8c89073863e7f3dc77
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/ce31ef57-1946-443c-a18d-42c926218e96), [Sponge](http://sponge2/ce31ef57-1946-443c-a18d-42c926218e96)
status: failed
Test output
",1.0,"storage: TestRetryConformance failed - Note: #7968 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 5cdf4e2668ef73b6b38c3d8c89073863e7f3dc77
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/ce31ef57-1946-443c-a18d-42c926218e96), [Sponge](http://sponge2/ce31ef57-1946-443c-a18d-42c926218e96)
status: failed
Test output
",0,storage testretryconformance failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output retry conformance test go roundtrip error may be expected write tcp write broken pipe request post upload storage b bucket o alt json ifgenerationmatch name new object txt prettyprint false projection full uploadtype multipart http host localhost content type multipart related boundary user agent google api go client x goog api client gccl invocation id gccl attempt count gl go gccl x goog gcs idempotency token x retry test id retry conformance test go want success got writer close post write tcp write broken pipe retry conformance test go test not completed unused instructions map ,0
99028,4045240996.0,IssuesEvent,2016-05-21 21:33:11,minj/foxtrick,https://api.github.com/repos/minj/foxtrick,closed,Foxtrick shows wrong sublevel name in MatchSimulator,bug MatchOrder Priority-Medium started,"This makes HatStats not match the sublevel name.
---
**From:** Schumi-
**PostID:** [16751289.262](https://www.hattrick.org/goto.ashx?path=%2FForum%2FRead.aspx%3Ft%3D16751289%26n%3D262%26v%3D0)
**To:** LA-MJ
**Re:** [16751289.1](https://www.hattrick.org/goto.ashx?path=%2FForum%2FRead.aspx%3Ft%3D16751289%26n%3D1%26v%3D0)
**Datetime:** 2016-05-13 19:29
**Message:**
If for the NT, I use the Predictor, the numbers for HatStats don't add up to what HT says. The HatStats number is 3 too high for every area. So Attack is 3 too high, Midfield is 3 too higher and Defense is 3 too high..?
",1.0,"Foxtrick shows wrong sublevel name in MatchSimulator - This makes HatStats not match the sublevel name.
---
**From:** Schumi-
**PostID:** [16751289.262](https://www.hattrick.org/goto.ashx?path=%2FForum%2FRead.aspx%3Ft%3D16751289%26n%3D262%26v%3D0)
**To:** LA-MJ
**Re:** [16751289.1](https://www.hattrick.org/goto.ashx?path=%2FForum%2FRead.aspx%3Ft%3D16751289%26n%3D1%26v%3D0)
**Datetime:** 2016-05-13 19:29
**Message:**
If for the NT, I use the Predictor, the numbers for HatStats don't add up to what HT says. The HatStats number is 3 too high for every area. So Attack is 3 too high, Midfield is 3 too higher and Defense is 3 too high..?
",0,foxtrick shows wrong sublevel name in matchsimulator this makes hatstats not match the sublevel name from schumi postid to la mj re datetime message if for the nt i use the predictor the numbers for hatstats don t add up to what ht says the hatstats number is too high for every area so attack is too high midfield is too higher and defense is too high ,0
3903,17376851919.0,IssuesEvent,2021-07-30 23:28:21,chorman0773/Clever-ISA,https://api.github.com/repos/chorman0773/Clever-ISA,closed,"Long Immediate Operand references ""Operand Control Structure"" but the term is not defined anywhere",I-unclear S-blocked-on-maintainer X-main,"Long Immediate Operand references ""Operand Control Structure"" but the term is not defined anywhere",True,"Long Immediate Operand references ""Operand Control Structure"" but the term is not defined anywhere - Long Immediate Operand references ""Operand Control Structure"" but the term is not defined anywhere",1,long immediate operand references operand control structure but the term is not defined anywhere long immediate operand references operand control structure but the term is not defined anywhere,1
1176,5096330755.0,IssuesEvent,2017-01-03 17:51:19,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"[FeatureRequest] file module with recurse: differentiate file mode and directory mode, exclude options",affects_2.1 feature_idea waiting_on_maintainer,"##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
file module
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu host to various guest
##### SUMMARY
When using file module with recurse, settings are set for all contents regardless of type.
Ideally, mode should be given the option to be set differently for file or directory like rsync
ex:
```
- file: path=/etc/some_directory state=directory mode=D0755,F0644 recurse=yes
[vs]
$ rsync --chmod=D0755,F564 ...
```
Also in many web tree, you have a tmp/cache folder with larger permissions (and some with stricter). Usual playbook now can't be idempotent easily without listing everything, especially as it seems with_fileglob is file only, not directory
```
[current playbook]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
[wish]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes exclude='(tmp|data|config)'
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
```
this way playbook can be idempotent and still easily maintained.
If there is a better way, please advised.
Thanks
",True,"[FeatureRequest] file module with recurse: differentiate file mode and directory mode, exclude options - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
file module
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.1.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu host to various guest
##### SUMMARY
When using file module with recurse, settings are set for all contents regardless of type.
Ideally, mode should be given the option to be set differently for file or directory like rsync
ex:
```
- file: path=/etc/some_directory state=directory mode=D0755,F0644 recurse=yes
[vs]
$ rsync --chmod=D0755,F564 ...
```
Also in many web tree, you have a tmp/cache folder with larger permissions (and some with stricter). Usual playbook now can't be idempotent easily without listing everything, especially as it seems with_fileglob is file only, not directory
```
[current playbook]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
[wish]
- file: path=/var/www/html/app state=directory mode=0755 recurse=yes exclude='(tmp|data|config)'
- file: path=/var/www/html/app/tmp state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/data state=directory mode=0775 owner=www-data
- file: path=/var/www/html/app/config state=directory mode=0640 group=www-data
```
this way playbook can be idempotent and still easily maintained.
If there is a better way, please advised.
Thanks
",1, file module with recurse differentiate file mode and directory mode exclude options issue type feature idea component name file module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment ubuntu host to various guest summary when using file module with recurse settings are set for all contents regardless of type ideally mode should be given the option to be set differently for file or directory like rsync ex file path etc some directory state directory mode recurse yes rsync chmod also in many web tree you have a tmp cache folder with larger permissions and some with stricter usual playbook now can t be idempotent easily without listing everything especially as it seems with fileglob is file only not directory file path var www html app state directory mode recurse yes file path var www html app tmp state directory mode owner www data file path var www html app data state directory mode owner www data file path var www html app config state directory mode group www data file path var www html app state directory mode recurse yes exclude tmp data config file path var www html app tmp state directory mode owner www data file path var www html app data state directory mode owner www data file path var www html app config state directory mode group www data this way playbook can be idempotent and still easily maintained if there is a better way please advised thanks ,1
406768,11902684938.0,IssuesEvent,2020-03-30 14:18:34,robotframework/robotframework,https://api.github.com/repos/robotframework/robotframework,closed,Enhance test message when results are merged with `rebot --merge`,acknowledge enhancement priority: medium rc 1,"Let's say we have a suite with 3 test cases, on a 1st run last 2 fail, on 2nd run last one fails, on 3rd run all pass.
It is a bit confusing that A.C_1 (failed twice, passed once) test has 4 statuses:
```
Re-executed test has been merged.
New status: PASS
New message:
Old status: FAIL
Old message: Re-executed test has been merged.
New status: FAIL
New message: AssertionError
Old status: FAIL
Old message: AssertionError
```
I use `rebot --merge output_1.xml output_2.xml output_3.xml` to merge 3 outputs.
Refer to logs: [logs.zip](https://github.com/robotframework/robotframework/files/3659270/logs.zip)
",1.0,"Enhance test message when results are merged with `rebot --merge` - Let's say we have a suite with 3 test cases, on a 1st run last 2 fail, on 2nd run last one fails, on 3rd run all pass.
It is a bit confusing that A.C_1 (failed twice, passed once) test has 4 statuses:
```
Re-executed test has been merged.
New status: PASS
New message:
Old status: FAIL
Old message: Re-executed test has been merged.
New status: FAIL
New message: AssertionError
Old status: FAIL
Old message: AssertionError
```
I use `rebot --merge output_1.xml output_2.xml output_3.xml` to merge 3 outputs.
Refer to logs: [logs.zip](https://github.com/robotframework/robotframework/files/3659270/logs.zip)
",0,enhance test message when results are merged with rebot merge let s say we have a suite with test cases on a run last fail on run last one fails on run all pass it is a bit confusing that a c failed twice passed once test has statuses re executed test has been merged new status pass new message old status fail old message re executed test has been merged new status fail new message assertionerror old status fail old message assertionerror i use rebot merge output xml output xml output xml to merge outputs refer to logs ,0
1438,6237240731.0,IssuesEvent,2017-07-12 00:35:53,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,ec2_vpc_route_table not updating?,affects_2.3 aws bug_report cloud waiting_on_maintainer,"Not sure if Im doing something wrong, but I can add the routes using ec2_vpc_route_table module but if I terminate the NAT instances that are in the routes (instance_id), you see the ""black hole"" in AWS GUI but for some reason when I go to run my playbook again, it creates new NAT's, gets the instance id's and then attempts to apply them to the route table but fails. If I manually go in and delete the ""black hole"" routes and run the playbook, its fine.
Version:
```
ansible 2.0.0.2
```
playbook code:
```
name: App Private Route Table
ec2_vpc_route_table:
vpc_id: ""{{ vpc_id }}""
region: ""{{ aws_region }}""
subnets: ""{{ item.subnet }}""
tags: Name: ""{{ env | default('test') }}app_private{{ item.az }}""
routes:
- dest: 0.0.0.0/0
instance_id: ""{{ item.instance }}""
with_items:
- { Name: app_a, subnet: ""{{ cidr }}.5.0/24"", instance: ""{{ nat_servers.results[0].tagged_instances[0].id }}"", az: a }
- { Name: app_b, subnet: ""{{ cidr }}.6.0/24"", instance: ""{{ nat_servers.results[1].tagged_instances[0].id }}"", az: b }
```
Error:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
failed: [localhost] => (item={u'subnet': u'10.40.5.0/24', u'az': u'a', u'Name': u'app_a', u'instance': u'i-fddcfa22'}) => {""failed"": true, ""item"": {""Name"": ""app_a"", ""az"": ""a"", ""instance"": ""i-fddcfa22"", ""subnet"": ""10.40.5.0/24""}, ""parsed"": false}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
failed: [localhost] => (item={u'subnet': u'10.40.6.0/24', u'az': u'b', u'Name': u'app_b', u'instance': u'i-d6b1b908'}) => {""failed"": true, ""item"": {""Name"": ""app_b"", ""az"": ""b"", ""instance"": ""i-d6b1b908"", ""subnet"": ""10.40.6.0/24""}, ""parsed"": false}
```
Current route table looks like:
```
0.0.0.0/0 eni-af3902c8 / i-3a0b2de5 Black Hole No
```
",True,"ec2_vpc_route_table not updating? - Not sure if Im doing something wrong, but I can add the routes using ec2_vpc_route_table module but if I terminate the NAT instances that are in the routes (instance_id), you see the ""black hole"" in AWS GUI but for some reason when I go to run my playbook again, it creates new NAT's, gets the instance id's and then attempts to apply them to the route table but fails. If I manually go in and delete the ""black hole"" routes and run the playbook, its fine.
Version:
```
ansible 2.0.0.2
```
playbook code:
```
name: App Private Route Table
ec2_vpc_route_table:
vpc_id: ""{{ vpc_id }}""
region: ""{{ aws_region }}""
subnets: ""{{ item.subnet }}""
tags: Name: ""{{ env | default('test') }}app_private{{ item.az }}""
routes:
- dest: 0.0.0.0/0
instance_id: ""{{ item.instance }}""
with_items:
- { Name: app_a, subnet: ""{{ cidr }}.5.0/24"", instance: ""{{ nat_servers.results[0].tagged_instances[0].id }}"", az: a }
- { Name: app_b, subnet: ""{{ cidr }}.6.0/24"", instance: ""{{ nat_servers.results[1].tagged_instances[0].id }}"", az: b }
```
Error:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
failed: [localhost] => (item={u'subnet': u'10.40.5.0/24', u'az': u'a', u'Name': u'app_a', u'instance': u'i-fddcfa22'}) => {""failed"": true, ""item"": {""Name"": ""app_a"", ""az"": ""a"", ""instance"": ""i-fddcfa22"", ""subnet"": ""10.40.5.0/24""}, ""parsed"": false}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
failed: [localhost] => (item={u'subnet': u'10.40.6.0/24', u'az': u'b', u'Name': u'app_b', u'instance': u'i-d6b1b908'}) => {""failed"": true, ""item"": {""Name"": ""app_b"", ""az"": ""b"", ""instance"": ""i-d6b1b908"", ""subnet"": ""10.40.6.0/24""}, ""parsed"": false}
```
Current route table looks like:
```
0.0.0.0/0 eni-af3902c8 / i-3a0b2de5 Black Hole No
```
",1, vpc route table not updating not sure if im doing something wrong but i can add the routes using vpc route table module but if i terminate the nat instances that are in the routes instance id you see the black hole in aws gui but for some reason when i go to run my playbook again it creates new nat s gets the instance id s and then attempts to apply them to the route table but fails if i manually go in and delete the black hole routes and run the playbook its fine version ansible playbook code name app private route table vpc route table vpc id vpc id region aws region subnets item subnet tags name env default test app private item az routes dest instance id item instance with items name app a subnet cidr instance nat servers results tagged instances id az a name app b subnet cidr instance nat servers results tagged instances id az b error an exception occurred during task execution to see the full traceback use vvv the error was typeerror argument of type nonetype is not iterable failed item u subnet u u az u a u name u app a u instance u i failed true item name app a az a instance i subnet parsed false an exception occurred during task execution to see the full traceback use vvv the error was typeerror argument of type nonetype is not iterable failed item u subnet u u az u b u name u app b u instance u i failed true item name app b az b instance i subnet parsed false current route table looks like eni i black hole no ,1
2672,9198632032.0,IssuesEvent,2019-03-07 13:09:11,Chromeroni/Hera-Chatbot,https://api.github.com/repos/Chromeroni/Hera-Chatbot,opened,Migration to Discord4Java V3.0,maintainability,"**Describe the current situation / your motivation for the change**
Hera currently uses Discord4Java V2.x.
Just recently the new version of Discord4Java released and with it come major changes which should make us consider to migrate.
**Describe the solution you'd like**
I'd like to discuss and review the changes made in Discord4Java V3.0 and decide if we're going to migrate over to it.
If yes, I'd like to define the boundaries of this change too (approximate effort needed, time constraints, etc.).
**Additional context**
Link to Discord4Java V3.0: https://github.com/Discord4J/Discord4J/releases/tag/3.0.0",True,"Migration to Discord4Java V3.0 - **Describe the current situation / your motivation for the change**
Hera currently uses Discord4Java V2.x.
Just recently the new version of Discord4Java released and with it come major changes which should make us consider to migrate.
**Describe the solution you'd like**
I'd like to discuss and review the changes made in Discord4Java V3.0 and decide if we're going to migrate over to it.
If yes, I'd like to define the boundaries of this change too (approximate effort needed, time constraints, etc.).
**Additional context**
Link to Discord4Java V3.0: https://github.com/Discord4J/Discord4J/releases/tag/3.0.0",1,migration to describe the current situation your motivation for the change hera currently uses x just recently the new version of released and with it come major changes which should make us consider to migrate describe the solution you d like i d like to discuss and review the changes made in and decide if we re going to migrate over to it if yes i d like to define the boundaries of this change too approximate effort needed time constraints etc additional context link to ,1
34387,6329005135.0,IssuesEvent,2017-07-26 01:00:03,test-kitchen/test-kitchen,https://api.github.com/repos/test-kitchen/test-kitchen,closed,Reference-Style Documentation for Kitchen file,Documentation,"I often wish I could give someone a URL, or simply browse a page myself, to a _reference_ for the .kitchen.yml . Not a guide or tutorial, but an exhaustive guide to what options are permitted or expected.
I've been told much of this information is available from 'kitchen diagnose', I'm looking to have it accessible on the web.
Granted, plugins may extend what may be permitted in many places.
",1.0,"Reference-Style Documentation for Kitchen file - I often wish I could give someone a URL, or simply browse a page myself, to a _reference_ for the .kitchen.yml . Not a guide or tutorial, but an exhaustive guide to what options are permitted or expected.
I've been told much of this information is available from 'kitchen diagnose', I'm looking to have it accessible on the web.
Granted, plugins may extend what may be permitted in many places.
",0,reference style documentation for kitchen file i often wish i could give someone a url or simply browse a page myself to a reference for the kitchen yml not a guide or tutorial but an exhaustive guide to what options are permitted or expected i ve been told much of this information is available from kitchen diagnose i m looking to have it accessible on the web granted plugins may extend what may be permitted in many places ,0
67283,14861164822.0,IssuesEvent,2021-01-18 22:09:21,gate5/test2,https://api.github.com/repos/gate5/test2,opened,CVE-2020-36182 (High) detected in jackson-databind-2.9.10.7.jar,security vulnerability,"## CVE-2020-36182 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.10.7.jar
General data-binding functionality for Jackson: works on core streaming API
Path to vulnerable library: test2/target/BookStore-6.0.1/WEB-INF/lib/jackson-databind-2.9.10.7.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.7/jackson-databind-2.9.10.7.jar
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.cpdsadapter.DriverAdapterCPDS.
Path to vulnerable library: test2/target/BookStore-6.0.1/WEB-INF/lib/jackson-databind-2.9.10.7.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.10.7/jackson-databind-2.9.10.7.jar
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.cpdsadapter.DriverAdapterCPDS.
",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library target bookstore web inf lib jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind ,0
555457,16455030598.0,IssuesEvent,2021-05-21 11:19:46,StrangeLoopGames/EcoIssues,https://api.github.com/repos/StrangeLoopGames/EcoIssues,closed,[0.9.2 staging-1852] Face remover troubles with glass and lumber,Category: Tech Priority: Medium Squad: Otter Status: Fixed Type: Bug,"Use Lumber Wall and GlassWindow:


",1.0,"[0.9.2 staging-1852] Face remover troubles with glass and lumber - Use Lumber Wall and GlassWindow:


",0, face remover troubles with glass and lumber use lumber wall and glasswindow ,0
1182,5097754030.0,IssuesEvent,2017-01-03 22:32:32,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,vmware_guest module documentation is broken,affects_2.3 bug_report cloud vmware waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest module
##### ANSIBLE VERSION
```
2.3
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
ansible-doc vmware_guest fails ...
```
jtanner-OSX:AME-3237 jtanner$ ansible-doc -vvvv vmware_guest
No config file found; using defaults
Traceback (most recent call last):
File ""/Users/jtanner/workspace/issues/AME-3237/ansible/lib/ansible/cli/doc.py"", line 130, in run
text += self.get_man_text(doc)
File ""/Users/jtanner/workspace/issues/AME-3237/ansible/lib/ansible/cli/doc.py"", line 287, in get_man_text
text.append(textwrap.fill(CLI.tty_ify(choices + default), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
UnboundLocalError: local variable 'choices' referenced before assignment
None
ERROR! module vmware_guest missing documentation (or could not parse documentation): local variable 'choices' referenced before assignment
```
",True,"vmware_guest module documentation is broken -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest module
##### ANSIBLE VERSION
```
2.3
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
ansible-doc vmware_guest fails ...
```
jtanner-OSX:AME-3237 jtanner$ ansible-doc -vvvv vmware_guest
No config file found; using defaults
Traceback (most recent call last):
File ""/Users/jtanner/workspace/issues/AME-3237/ansible/lib/ansible/cli/doc.py"", line 130, in run
text += self.get_man_text(doc)
File ""/Users/jtanner/workspace/issues/AME-3237/ansible/lib/ansible/cli/doc.py"", line 287, in get_man_text
text.append(textwrap.fill(CLI.tty_ify(choices + default), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
UnboundLocalError: local variable 'choices' referenced before assignment
None
ERROR! module vmware_guest missing documentation (or could not parse documentation): local variable 'choices' referenced before assignment
```
",1,vmware guest module documentation is broken issue type bug report component name vmware guest module ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary ansible doc vmware guest fails jtanner osx ame jtanner ansible doc vvvv vmware guest no config file found using defaults traceback most recent call last file users jtanner workspace issues ame ansible lib ansible cli doc py line in run text self get man text doc file users jtanner workspace issues ame ansible lib ansible cli doc py line in get man text text append textwrap fill cli tty ify choices default limit initial indent opt indent subsequent indent opt indent unboundlocalerror local variable choices referenced before assignment none error module vmware guest missing documentation or could not parse documentation local variable choices referenced before assignment ,1
504516,14620016818.0,IssuesEvent,2020-12-22 18:55:21,metal3-io/baremetal-operator,https://api.github.com/repos/metal3-io/baremetal-operator,closed,On bmo pod restart unable to provision ready state BMH's ,kind/bug lifecycle/stale priority/backlog,"There appears to be a bug where after creating a number of bmh that go through the registering -> inspecting -> ready phases, after which the baremetal operator pod is restarted, you are no longer able to provision any kube machines on these bmhs, and the following error is thrown: `host validation error: Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.; Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.`
Here's the full output and and logs from bmo:
```
kubectl get bmh ✔ 11342 11:06:45
NAME STATUS PROVISIONING STATUS CONSUMER BMC HARDWARE PROFILE ONLINE ERROR
baremetal0003d1mdw1.sendgrid.net OK inspecting ipmi://10.16.5.211/ true
baremetal0004d1mdw1.sendgrid.net OK ready ipmi://10.16.6.14/ unknown true
baremetal0005d1mdw1.sendgrid.net OK ready ipmi://10.16.5.227/ unknown true
baremetal0006d1mdw1.sendgrid.net OK ready ipmi://10.16.5.241/ unknown true
baremetal0007d1mdw1.sendgrid.net OK ready ipmi://10.16.5.239/ unknown true
baremetal0008d1mdw1.sendgrid.net OK ready ipmi://10.16.5.235/ unknown true
baremetal0009d1mdw1.sendgrid.net OK ready ipmi://10.16.5.207/ unknown true
baremetal0010d1mdw1.sendgrid.net OK ready ipmi://10.16.5.170/ unknown true
baremetal0011d1mdw1.sendgrid.net OK ready ipmi://10.16.5.179/ unknown true
baremetal0012d1mdw1.sendgrid.net OK ready ipmi://10.16.5.237/ unknown true
baremetal0013d1mdw1.sendgrid.net OK ready ipmi://10.16.6.18/ unknown true
baremetal0014d1mdw1.sendgrid.net OK ready ipmi://10.16.6.60/ unknown true
baremetal0015d1mdw1.sendgrid.net OK ready ipmi://10.16.5.185/ unknown true
baremetal0016d1mdw1.sendgrid.net OK ready ipmi://10.16.5.164/ unknown true
baremetal0017d1mdw1.sendgrid.net OK ready ipmi://10.16.5.183/ unknown true
baremetal0018d1mdw1.sendgrid.net OK ready ipmi://10.16.5.242/ unknown true
baremetal0019d1mdw1.sendgrid.net OK ready ipmi://10.16.5.174/ unknown true
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? ✔ 11343 11:06:46
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? kubectl apply -f _out/ 1 ↵ 11344 11:06:54
cluster.cluster.x-k8s.io/lukasz unchanged
baremetalcluster.infrastructure.cluster.x-k8s.io/lukasz unchanged
kubeadmconfig.bootstrap.cluster.x-k8s.io/lukasz-controlplane-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/lukasz-controlplane-1 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/lukasz-controlplane-2 created
machine.cluster.x-k8s.io/lukasz-controlplane-0 created
machine.cluster.x-k8s.io/lukasz-controlplane-1 created
machine.cluster.x-k8s.io/lukasz-controlplane-2 created
baremetalmachine.infrastructure.cluster.x-k8s.io/lukasz-controlplane-0 created
baremetalmachine.infrastructure.cluster.x-k8s.io/lukasz-controlplane-1 created
baremetalmachine.infrastructure.cluster.x-k8s.io/lukasz-controlplane-2 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/lukasz-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/lukasz-old-md-0 created
machinedeployment.cluster.x-k8s.io/lukasz-md-0 created
machinedeployment.cluster.x-k8s.io/lukasz-old-md-0 created
baremetalmachinetemplate.infrastructure.cluster.x-k8s.io/lukasz-md-0 created
baremetalmachinetemplate.infrastructure.cluster.x-k8s.io/lukasz-old-md-0 created
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? ✔ 11345 11:06:59
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? kubectl get machine ✔ 11345 11:06:59
NAME PROVIDERID PHASE
lukasz-controlplane-0 provisioning
lukasz-controlplane-1 pending
lukasz-controlplane-2 pending
lukasz-md-0-cf48d76f6-jjrv7 pending
lukasz-old-md-0-5d997d4449-r4s5r pending
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? kubectl get bmh ✔ 11346 11:07:07
NAME STATUS PROVISIONING STATUS CONSUMER BMC HARDWARE PROFILE ONLINE ERROR
baremetal0003d1mdw1.sendgrid.net OK inspecting ipmi://10.16.5.211/ true
baremetal0004d1mdw1.sendgrid.net OK ready ipmi://10.16.6.14/ unknown true
baremetal0005d1mdw1.sendgrid.net OK ready ipmi://10.16.5.227/ unknown true
baremetal0006d1mdw1.sendgrid.net OK ready ipmi://10.16.5.241/ unknown true
baremetal0007d1mdw1.sendgrid.net OK ready ipmi://10.16.5.239/ unknown true
baremetal0008d1mdw1.sendgrid.net OK ready ipmi://10.16.5.235/ unknown true
baremetal0009d1mdw1.sendgrid.net OK ready ipmi://10.16.5.207/ unknown true
baremetal0010d1mdw1.sendgrid.net OK ready ipmi://10.16.5.170/ unknown true
baremetal0011d1mdw1.sendgrid.net OK ready ipmi://10.16.5.179/ unknown true
baremetal0012d1mdw1.sendgrid.net OK ready ipmi://10.16.5.237/ unknown true
baremetal0013d1mdw1.sendgrid.net OK ready ipmi://10.16.6.18/ unknown true
baremetal0014d1mdw1.sendgrid.net OK ready ipmi://10.16.6.60/ unknown true
baremetal0015d1mdw1.sendgrid.net error provisioning lukasz-controlplane-0 ipmi://10.16.5.185/ unknown true host validation error: Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.; Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.
baremetal0016d1mdw1.sendgrid.net OK ready ipmi://10.16.5.164/ unknown true
baremetal0017d1mdw1.sendgrid.net OK ready ipmi://10.16.5.183/ unknown true
baremetal0018d1mdw1.sendgrid.net OK ready ipmi://10.16.5.242/ unknown true
baremetal0019d1mdw1.sendgrid.net OK ready ipmi://10.16.5.174/ unknown true
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? ✔ 11347 11:07:20
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ?
{""level"":""info"",""ts"":1581703726.561492,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0003d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703726.5615559,""logger"":""baremetalhost"",""msg"":""inspecting hardware"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0003d1mdw1.sendgrid.net"",""provisioningState"":""inspecting""}
{""level"":""info"",""ts"":1581703726.5615826,""logger"":""baremetalhost_ironic"",""msg"":""inspecting hardware"",""host"":""baremetal0003d1mdw1.sendgrid.net"",""status"":""OK""}
{""level"":""info"",""ts"":1581703726.5742512,""logger"":""baremetalhost_ironic"",""msg"":""looking for existing node by name"",""host"":""baremetal0003d1mdw1.sendgrid.net"",""name"":""baremetal0003d1mdw1.sendgrid.net""}
{""level"":""error"",""ts"":1581703726.5862474,""logger"":""controller-runtime.controller"",""msg"":""Reconciler error"",""controller"":""metal3-baremetalhost-controller"",""request"":""metal3/baremetal0003d1mdw1.sendgrid.net"",""error"":""action \""inspecting\"" failed: hardware inspection failed: no ironic node for host"",""errorVerbose"":""no ironic node for host\nhardware inspection failed\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).actionInspecting\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:491\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).handleInspecting\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:197\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).(github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.handleInspecting)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:41\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).ReconcileState\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:109\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:281\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2361\naction \""inspecting\"" failed\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:285\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2361"",""stacktrace"":""github.com/metal3-io/baremetal-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88""}
{""level"":""info"",""ts"":1581703727.5868793,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.5870068,""logger"":""baremetalhost_ironic"",""msg"":""validating management access"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6093876,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6094184,""logger"":""baremetalhost_ironic"",""msg"":""current provision state"",""host"":""baremetal0004d1mdw1.sendgrid.net"",""lastError"":"""",""current"":""manageable"",""target"":""""}
{""level"":""info"",""ts"":1581703727.60943,""logger"":""baremetalhost_ironic"",""msg"":""have manageable host"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.609438,""logger"":""baremetalhost_ironic"",""msg"":""updating hardware state"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6298566,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6298933,""logger"":""baremetalhost"",""msg"":""saving host status"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0004d1mdw1.sendgrid.net"",""provisioningState"":""ready"",""operational status"":""OK"",""provisioning state"":""ready""}
{""level"":""info"",""ts"":1581703727.6373432,""logger"":""baremetalhost"",""msg"":""done"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0004d1mdw1.sendgrid.net"",""provisioningState"":""ready"",""requeue"":true,""after"":60}
{""level"":""info"",""ts"":1581703727.637434,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0015d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6375093,""logger"":""baremetalhost"",""msg"":""provisioning"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0015d1mdw1.sendgrid.net"",""provisioningState"":""provisioning""}
{""level"":""info"",""ts"":1581703727.6551394,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0015d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6551712,""logger"":""baremetalhost_ironic"",""msg"":""provisioning image to host"",""host"":""baremetal0015d1mdw1.sendgrid.net"",""state"":""manageable""}
E0214 18:08:47.655283 1 runtime.go:78] Observed a panic: ""invalid memory address or nil pointer dereference"" (runtime error: invalid memory address or nil pointer dereference)
goroutine 270 [running]:
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x137efe0, 0x20309c0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xaa
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x137efe0, 0x20309c0)
/usr/local/go/src/runtime/panic.go:502 +0x229
github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic.(*ironicProvisioner).Provision(0xc420169030, 0x1649da0, 0xc4207c7a40, 0x0, 0x0, 0x0, 0xa858015299444fb4, 0x4122f8, 0x10)
/go/src/github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic/ironic.go:924 +0x156
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).actionProvisioning(0xc4208b9ac0, 0x1672860, 0xc420169030, 0xc4207f7400, 0x9e0000c420787260, 0xc420787240)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:563 +0x117
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).handleProvisioning(0xc4207c79e0, 0xc4207f7400, 0xc4207871f0, 0xc4200e1790)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:246 +0xb8
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).(github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.handleProvisioning)-fm(0xc4207f7400, 0xc4207c7a10, 0xc42047ac90)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:45 +0x34
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).ReconcileState(0xc4207c79e0, 0xc4207f7400, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:109 +0x327
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile(0xc4208b9ac0, 0xc42047a8da, 0x6, 0xc4206b17e0, 0x20, 0x204ac00, 0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:281 +0x987
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc4205623c0, 0x13c9ea0, 0xc420916ee0, 0x13c9e00)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x100
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc4205623c0, 0xc4206be300)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xb7
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc4205623c0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm()
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x2a
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42065a490)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42065a490, 0x3b9aca00, 0x0, 0x1, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xbd
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42065a490, 0x3b9aca00, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x63b
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xb40e76]
goroutine 270 [running]:
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x107
panic(0x137efe0, 0x20309c0)
/usr/local/go/src/runtime/panic.go:502 +0x229
github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic.(*ironicProvisioner).Provision(0xc420169030, 0x1649da0, 0xc4207c7a40, 0x0, 0x0, 0x0, 0xa858015299444fb4, 0x4122f8, 0x10)
/go/src/github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic/ironic.go:924 +0x156
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).actionProvisioning(0xc4208b9ac0, 0x1672860, 0xc420169030, 0xc4207f7400, 0x9e0000c420787260, 0xc420787240)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:563 +0x117
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).handleProvisioning(0xc4207c79e0, 0xc4207f7400, 0xc4207871f0, 0xc4200e1790)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:246 +0xb8
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).(github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.handleProvisioning)-fm(0xc4207f7400, 0xc4207c7a10, 0xc42047ac90)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:45 +0x34
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).ReconcileState(0xc4207c79e0, 0xc4207f7400, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:109 +0x327
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile(0xc4208b9ac0, 0xc42047a8da, 0x6, 0xc4206b17e0, 0x20, 0x204ac00, 0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:281 +0x987
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc4205623c0, 0x13c9ea0, 0xc420916ee0, 0x13c9e00)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x100
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc4205623c0, 0xc4206be300)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xb7
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc4205623c0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm()
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x2a
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42065a490)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42065a490, 0x3b9aca00, 0x0, 0x1, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xbd
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42065a490, 0x3b9aca00, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x63b
```
Steps to recreate:
Use bmo as normal and get some bmh into the ready state.
Bounce the pod.
After the operator comes back up attempt to provision a new cluster the above error should be thrown.
We have also encountered a similar error, although haven't been able to reproduce it 100% of the time as the one above, where a bmh previous to the pod bounce was in the provisioned state, in the example provided we provisioned the bmh by adding the image to the spec it wasn't provisioned as part of a cluster, but we have seen the exact same issue below when the bmhs were provisioned as part of a cluster, after the pod is bounced do not get adopted properly, i.e.:
```
baremetal0012d1mdw1.sendgrid.net error registration error ipmi://10.16.5.237/ unknown true Host adoption failed: Error while attempting to adopt node 60895ea0-419a-4138-aac3-f4fd72f320a4: Node 60895ea0-419a-4138-aac3-f4fd72f320a4 does not have any port associated with it..
```
And the logs for the event:
```
{""level"":""info"",""ts"":1582047700.842536,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.856129,""logger"":""baremetalhost_ironic"",""msg"":""looking for existing node by name"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.8713014,""logger"":""baremetalhost_ironic"",""msg"":""re-registering host"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.8713233,""logger"":""baremetalhost_ironic"",""msg"":""validating management access"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.886314,""logger"":""baremetalhost_ironic"",""msg"":""looking for existing node by name"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.8993719,""logger"":""baremetalhost_ironic"",""msg"":""registering host in ironic"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.9414284,""logger"":""baremetalhost_ironic"",""msg"":""setting provisioning id"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""ID"":""60895ea0-419a-4138-aac3-f4fd72f320a4""}
{""level"":""info"",""ts"":1582047700.941458,""logger"":""baremetalhost_ironic"",""msg"":""setting instance info"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""image_source"":""https://filestore-staging.sendgrid.net/met
al3-image-build/fedora-31-metal3-0.0.20200213231225.qcow2"",""checksum"":""https://filestore-staging.sendgrid.net/metal3-image-build/fedora-31-metal3-0.0.20200213231225.qcow2.md5sum""}
{""level"":""info"",""ts"":1582047701.0355387,""logger"":""baremetalhost_ironic"",""msg"":""current provision state"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""lastError"":"""",""current"":""enroll"",""target"":""""}
{""level"":""info"",""ts"":1582047701.0355792,""logger"":""baremetalhost_ironic"",""msg"":""changing provisioning state"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""current"":""enroll"",""existing target"":"""",""new target
"":""manage""}
{""level"":""info"",""ts"":1582047701.1334321,""logger"":""baremetalhost"",""msg"":""saving host status"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provision
ed"",""operational status"":""OK"",""provisioning state"":""provisioned""}
{""level"":""info"",""ts"":1582047701.1406105,""logger"":""baremetalhost"",""msg"":""publishing event"",""reason"":""Registered"",""message"":""Registered new host""}
{""level"":""info"",""ts"":1582047701.144376,""logger"":""baremetalhost"",""msg"":""done"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provisioned"",""requeue"":t
rue,""after"":10}
A little while later
{""level"":""info"",""ts"":1582047704.92819,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047704.9464087,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047704.9464579,""logger"":""baremetalhost"",""msg"":""saving host status"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provisioned"",""operational status"":""error"",""provisioning state"":""registration error""}
{""level"":""info"",""ts"":1582047704.9539585,""logger"":""baremetalhost"",""msg"":""publishing event"",""reason"":""RegistrationError"",""message"":""Host adoption failed: Error while attempting to adopt node 60895ea0-419a-4138-aac3-f4fd72f320a4: Node 60895ea0-419a-4138-aac3-f4fd72f320a4 does not have any port associated with it..""}
{""level"":""info"",""ts"":1582047704.9578178,""logger"":""baremetalhost"",""msg"":""stopping on host error"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provisioned"",""message"":""Host adoption failed: Error while attempting to adopt node 60895ea0-419a-4138-aac3-f4fd72f320a4: Node 60895ea0-419a-4138-aac3-f4fd72f320a4 does not have any port associated with it..""}
```",1.0,"On bmo pod restart unable to provision ready state BMH's - There appears to be a bug where after creating a number of bmh that go through the registering -> inspecting -> ready phases, after which the baremetal operator pod is restarted, you are no longer able to provision any kube machines on these bmhs, and the following error is thrown: `host validation error: Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.; Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.`
Here's the full output and and logs from bmo:
```
kubectl get bmh ✔ 11342 11:06:45
NAME STATUS PROVISIONING STATUS CONSUMER BMC HARDWARE PROFILE ONLINE ERROR
baremetal0003d1mdw1.sendgrid.net OK inspecting ipmi://10.16.5.211/ true
baremetal0004d1mdw1.sendgrid.net OK ready ipmi://10.16.6.14/ unknown true
baremetal0005d1mdw1.sendgrid.net OK ready ipmi://10.16.5.227/ unknown true
baremetal0006d1mdw1.sendgrid.net OK ready ipmi://10.16.5.241/ unknown true
baremetal0007d1mdw1.sendgrid.net OK ready ipmi://10.16.5.239/ unknown true
baremetal0008d1mdw1.sendgrid.net OK ready ipmi://10.16.5.235/ unknown true
baremetal0009d1mdw1.sendgrid.net OK ready ipmi://10.16.5.207/ unknown true
baremetal0010d1mdw1.sendgrid.net OK ready ipmi://10.16.5.170/ unknown true
baremetal0011d1mdw1.sendgrid.net OK ready ipmi://10.16.5.179/ unknown true
baremetal0012d1mdw1.sendgrid.net OK ready ipmi://10.16.5.237/ unknown true
baremetal0013d1mdw1.sendgrid.net OK ready ipmi://10.16.6.18/ unknown true
baremetal0014d1mdw1.sendgrid.net OK ready ipmi://10.16.6.60/ unknown true
baremetal0015d1mdw1.sendgrid.net OK ready ipmi://10.16.5.185/ unknown true
baremetal0016d1mdw1.sendgrid.net OK ready ipmi://10.16.5.164/ unknown true
baremetal0017d1mdw1.sendgrid.net OK ready ipmi://10.16.5.183/ unknown true
baremetal0018d1mdw1.sendgrid.net OK ready ipmi://10.16.5.242/ unknown true
baremetal0019d1mdw1.sendgrid.net OK ready ipmi://10.16.5.174/ unknown true
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? ✔ 11343 11:06:46
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? kubectl apply -f _out/ 1 ↵ 11344 11:06:54
cluster.cluster.x-k8s.io/lukasz unchanged
baremetalcluster.infrastructure.cluster.x-k8s.io/lukasz unchanged
kubeadmconfig.bootstrap.cluster.x-k8s.io/lukasz-controlplane-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/lukasz-controlplane-1 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/lukasz-controlplane-2 created
machine.cluster.x-k8s.io/lukasz-controlplane-0 created
machine.cluster.x-k8s.io/lukasz-controlplane-1 created
machine.cluster.x-k8s.io/lukasz-controlplane-2 created
baremetalmachine.infrastructure.cluster.x-k8s.io/lukasz-controlplane-0 created
baremetalmachine.infrastructure.cluster.x-k8s.io/lukasz-controlplane-1 created
baremetalmachine.infrastructure.cluster.x-k8s.io/lukasz-controlplane-2 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/lukasz-md-0 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/lukasz-old-md-0 created
machinedeployment.cluster.x-k8s.io/lukasz-md-0 created
machinedeployment.cluster.x-k8s.io/lukasz-old-md-0 created
baremetalmachinetemplate.infrastructure.cluster.x-k8s.io/lukasz-md-0 created
baremetalmachinetemplate.infrastructure.cluster.x-k8s.io/lukasz-old-md-0 created
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? ✔ 11345 11:06:59
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? kubectl get machine ✔ 11345 11:06:59
NAME PROVIDERID PHASE
lukasz-controlplane-0 provisioning
lukasz-controlplane-1 pending
lukasz-controlplane-2 pending
lukasz-md-0-cf48d76f6-jjrv7 pending
lukasz-old-md-0-5d997d4449-r4s5r pending
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? kubectl get bmh ✔ 11346 11:07:07
NAME STATUS PROVISIONING STATUS CONSUMER BMC HARDWARE PROFILE ONLINE ERROR
baremetal0003d1mdw1.sendgrid.net OK inspecting ipmi://10.16.5.211/ true
baremetal0004d1mdw1.sendgrid.net OK ready ipmi://10.16.6.14/ unknown true
baremetal0005d1mdw1.sendgrid.net OK ready ipmi://10.16.5.227/ unknown true
baremetal0006d1mdw1.sendgrid.net OK ready ipmi://10.16.5.241/ unknown true
baremetal0007d1mdw1.sendgrid.net OK ready ipmi://10.16.5.239/ unknown true
baremetal0008d1mdw1.sendgrid.net OK ready ipmi://10.16.5.235/ unknown true
baremetal0009d1mdw1.sendgrid.net OK ready ipmi://10.16.5.207/ unknown true
baremetal0010d1mdw1.sendgrid.net OK ready ipmi://10.16.5.170/ unknown true
baremetal0011d1mdw1.sendgrid.net OK ready ipmi://10.16.5.179/ unknown true
baremetal0012d1mdw1.sendgrid.net OK ready ipmi://10.16.5.237/ unknown true
baremetal0013d1mdw1.sendgrid.net OK ready ipmi://10.16.6.18/ unknown true
baremetal0014d1mdw1.sendgrid.net OK ready ipmi://10.16.6.60/ unknown true
baremetal0015d1mdw1.sendgrid.net error provisioning lukasz-controlplane-0 ipmi://10.16.5.185/ unknown true host validation error: Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.; Node 3a0e38e3-6d5f-424e-b4cb-cb4758506da5 does not have any port associated with it.
baremetal0016d1mdw1.sendgrid.net OK ready ipmi://10.16.5.164/ unknown true
baremetal0017d1mdw1.sendgrid.net OK ready ipmi://10.16.5.183/ unknown true
baremetal0018d1mdw1.sendgrid.net OK ready ipmi://10.16.5.242/ unknown true
baremetal0019d1mdw1.sendgrid.net OK ready ipmi://10.16.5.174/ unknown true
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ? ✔ 11347 11:07:20
lukasz@lukasz ~/go/src/github.com/sendgrid/cluster-api-provider-bmo/armada lukaszbranch ● ?
{""level"":""info"",""ts"":1581703726.561492,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0003d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703726.5615559,""logger"":""baremetalhost"",""msg"":""inspecting hardware"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0003d1mdw1.sendgrid.net"",""provisioningState"":""inspecting""}
{""level"":""info"",""ts"":1581703726.5615826,""logger"":""baremetalhost_ironic"",""msg"":""inspecting hardware"",""host"":""baremetal0003d1mdw1.sendgrid.net"",""status"":""OK""}
{""level"":""info"",""ts"":1581703726.5742512,""logger"":""baremetalhost_ironic"",""msg"":""looking for existing node by name"",""host"":""baremetal0003d1mdw1.sendgrid.net"",""name"":""baremetal0003d1mdw1.sendgrid.net""}
{""level"":""error"",""ts"":1581703726.5862474,""logger"":""controller-runtime.controller"",""msg"":""Reconciler error"",""controller"":""metal3-baremetalhost-controller"",""request"":""metal3/baremetal0003d1mdw1.sendgrid.net"",""error"":""action \""inspecting\"" failed: hardware inspection failed: no ironic node for host"",""errorVerbose"":""no ironic node for host\nhardware inspection failed\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).actionInspecting\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:491\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).handleInspecting\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:197\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).(github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.handleInspecting)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:41\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).ReconcileState\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:109\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:281\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2361\naction \""inspecting\"" failed\ngithub.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile\n\t/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:285\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2361"",""stacktrace"":""github.com/metal3-io/baremetal-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\ngithub.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\ngithub.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88""}
{""level"":""info"",""ts"":1581703727.5868793,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.5870068,""logger"":""baremetalhost_ironic"",""msg"":""validating management access"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6093876,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6094184,""logger"":""baremetalhost_ironic"",""msg"":""current provision state"",""host"":""baremetal0004d1mdw1.sendgrid.net"",""lastError"":"""",""current"":""manageable"",""target"":""""}
{""level"":""info"",""ts"":1581703727.60943,""logger"":""baremetalhost_ironic"",""msg"":""have manageable host"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.609438,""logger"":""baremetalhost_ironic"",""msg"":""updating hardware state"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6298566,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0004d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6298933,""logger"":""baremetalhost"",""msg"":""saving host status"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0004d1mdw1.sendgrid.net"",""provisioningState"":""ready"",""operational status"":""OK"",""provisioning state"":""ready""}
{""level"":""info"",""ts"":1581703727.6373432,""logger"":""baremetalhost"",""msg"":""done"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0004d1mdw1.sendgrid.net"",""provisioningState"":""ready"",""requeue"":true,""after"":60}
{""level"":""info"",""ts"":1581703727.637434,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0015d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6375093,""logger"":""baremetalhost"",""msg"":""provisioning"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0015d1mdw1.sendgrid.net"",""provisioningState"":""provisioning""}
{""level"":""info"",""ts"":1581703727.6551394,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0015d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1581703727.6551712,""logger"":""baremetalhost_ironic"",""msg"":""provisioning image to host"",""host"":""baremetal0015d1mdw1.sendgrid.net"",""state"":""manageable""}
E0214 18:08:47.655283 1 runtime.go:78] Observed a panic: ""invalid memory address or nil pointer dereference"" (runtime error: invalid memory address or nil pointer dereference)
goroutine 270 [running]:
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x137efe0, 0x20309c0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xaa
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x137efe0, 0x20309c0)
/usr/local/go/src/runtime/panic.go:502 +0x229
github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic.(*ironicProvisioner).Provision(0xc420169030, 0x1649da0, 0xc4207c7a40, 0x0, 0x0, 0x0, 0xa858015299444fb4, 0x4122f8, 0x10)
/go/src/github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic/ironic.go:924 +0x156
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).actionProvisioning(0xc4208b9ac0, 0x1672860, 0xc420169030, 0xc4207f7400, 0x9e0000c420787260, 0xc420787240)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:563 +0x117
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).handleProvisioning(0xc4207c79e0, 0xc4207f7400, 0xc4207871f0, 0xc4200e1790)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:246 +0xb8
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).(github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.handleProvisioning)-fm(0xc4207f7400, 0xc4207c7a10, 0xc42047ac90)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:45 +0x34
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).ReconcileState(0xc4207c79e0, 0xc4207f7400, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:109 +0x327
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile(0xc4208b9ac0, 0xc42047a8da, 0x6, 0xc4206b17e0, 0x20, 0x204ac00, 0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:281 +0x987
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc4205623c0, 0x13c9ea0, 0xc420916ee0, 0x13c9e00)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x100
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc4205623c0, 0xc4206be300)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xb7
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc4205623c0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm()
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x2a
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42065a490)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42065a490, 0x3b9aca00, 0x0, 0x1, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xbd
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42065a490, 0x3b9aca00, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x63b
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xb40e76]
goroutine 270 [running]:
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x107
panic(0x137efe0, 0x20309c0)
/usr/local/go/src/runtime/panic.go:502 +0x229
github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic.(*ironicProvisioner).Provision(0xc420169030, 0x1649da0, 0xc4207c7a40, 0x0, 0x0, 0x0, 0xa858015299444fb4, 0x4122f8, 0x10)
/go/src/github.com/metal3-io/baremetal-operator/pkg/provisioner/ironic/ironic.go:924 +0x156
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).actionProvisioning(0xc4208b9ac0, 0x1672860, 0xc420169030, 0xc4207f7400, 0x9e0000c420787260, 0xc420787240)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:563 +0x117
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).handleProvisioning(0xc4207c79e0, 0xc4207f7400, 0xc4207871f0, 0xc4200e1790)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:246 +0xb8
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).(github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.handleProvisioning)-fm(0xc4207f7400, 0xc4207c7a10, 0xc42047ac90)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:45 +0x34
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*hostStateMachine).ReconcileState(0xc4207c79e0, 0xc4207f7400, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/host_state_machine.go:109 +0x327
github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost.(*ReconcileBareMetalHost).Reconcile(0xc4208b9ac0, 0xc42047a8da, 0x6, 0xc4206b17e0, 0x20, 0x204ac00, 0x0, 0x0, 0x0)
/go/src/github.com/metal3-io/baremetal-operator/pkg/controller/baremetalhost/baremetalhost_controller.go:281 +0x987
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc4205623c0, 0x13c9ea0, 0xc420916ee0, 0x13c9e00)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x100
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc4205623c0, 0xc4206be300)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xb7
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc4205623c0)
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).(github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.worker)-fm()
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x2a
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42065a490)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42065a490, 0x3b9aca00, 0x0, 0x1, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xbd
github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42065a490, 0x3b9aca00, 0xc420023a40)
/go/src/github.com/metal3-io/baremetal-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/go/src/github.com/metal3-io/baremetal-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x63b
```
Steps to recreate:
Use bmo as normal and get some bmh into the ready state.
Bounce the pod.
After the operator comes back up attempt to provision a new cluster the above error should be thrown.
We have also encountered a similar error, although haven't been able to reproduce it 100% of the time as the one above, where a bmh previous to the pod bounce was in the provisioned state, in the example provided we provisioned the bmh by adding the image to the spec it wasn't provisioned as part of a cluster, but we have seen the exact same issue below when the bmhs were provisioned as part of a cluster, after the pod is bounced do not get adopted properly, i.e.:
```
baremetal0012d1mdw1.sendgrid.net error registration error ipmi://10.16.5.237/ unknown true Host adoption failed: Error while attempting to adopt node 60895ea0-419a-4138-aac3-f4fd72f320a4: Node 60895ea0-419a-4138-aac3-f4fd72f320a4 does not have any port associated with it..
```
And the logs for the event:
```
{""level"":""info"",""ts"":1582047700.842536,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.856129,""logger"":""baremetalhost_ironic"",""msg"":""looking for existing node by name"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.8713014,""logger"":""baremetalhost_ironic"",""msg"":""re-registering host"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.8713233,""logger"":""baremetalhost_ironic"",""msg"":""validating management access"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.886314,""logger"":""baremetalhost_ironic"",""msg"":""looking for existing node by name"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.8993719,""logger"":""baremetalhost_ironic"",""msg"":""registering host in ironic"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047700.9414284,""logger"":""baremetalhost_ironic"",""msg"":""setting provisioning id"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""ID"":""60895ea0-419a-4138-aac3-f4fd72f320a4""}
{""level"":""info"",""ts"":1582047700.941458,""logger"":""baremetalhost_ironic"",""msg"":""setting instance info"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""image_source"":""https://filestore-staging.sendgrid.net/met
al3-image-build/fedora-31-metal3-0.0.20200213231225.qcow2"",""checksum"":""https://filestore-staging.sendgrid.net/metal3-image-build/fedora-31-metal3-0.0.20200213231225.qcow2.md5sum""}
{""level"":""info"",""ts"":1582047701.0355387,""logger"":""baremetalhost_ironic"",""msg"":""current provision state"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""lastError"":"""",""current"":""enroll"",""target"":""""}
{""level"":""info"",""ts"":1582047701.0355792,""logger"":""baremetalhost_ironic"",""msg"":""changing provisioning state"",""host"":""baremetal0012d1mdw1.sendgrid.net"",""current"":""enroll"",""existing target"":"""",""new target
"":""manage""}
{""level"":""info"",""ts"":1582047701.1334321,""logger"":""baremetalhost"",""msg"":""saving host status"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provision
ed"",""operational status"":""OK"",""provisioning state"":""provisioned""}
{""level"":""info"",""ts"":1582047701.1406105,""logger"":""baremetalhost"",""msg"":""publishing event"",""reason"":""Registered"",""message"":""Registered new host""}
{""level"":""info"",""ts"":1582047701.144376,""logger"":""baremetalhost"",""msg"":""done"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provisioned"",""requeue"":t
rue,""after"":10}
A little while later
{""level"":""info"",""ts"":1582047704.92819,""logger"":""baremetalhost"",""msg"":""Reconciling BareMetalHost"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047704.9464087,""logger"":""baremetalhost_ironic"",""msg"":""found existing node by ID"",""host"":""baremetal0012d1mdw1.sendgrid.net""}
{""level"":""info"",""ts"":1582047704.9464579,""logger"":""baremetalhost"",""msg"":""saving host status"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provisioned"",""operational status"":""error"",""provisioning state"":""registration error""}
{""level"":""info"",""ts"":1582047704.9539585,""logger"":""baremetalhost"",""msg"":""publishing event"",""reason"":""RegistrationError"",""message"":""Host adoption failed: Error while attempting to adopt node 60895ea0-419a-4138-aac3-f4fd72f320a4: Node 60895ea0-419a-4138-aac3-f4fd72f320a4 does not have any port associated with it..""}
{""level"":""info"",""ts"":1582047704.9578178,""logger"":""baremetalhost"",""msg"":""stopping on host error"",""Request.Namespace"":""metal3"",""Request.Name"":""baremetal0012d1mdw1.sendgrid.net"",""provisioningState"":""provisioned"",""message"":""Host adoption failed: Error while attempting to adopt node 60895ea0-419a-4138-aac3-f4fd72f320a4: Node 60895ea0-419a-4138-aac3-f4fd72f320a4 does not have any port associated with it..""}
```",0,on bmo pod restart unable to provision ready state bmh s there appears to be a bug where after creating a number of bmh that go through the registering inspecting ready phases after which the baremetal operator pod is restarted you are no longer able to provision any kube machines on these bmhs and the following error is thrown host validation error node does not have any port associated with it node does not have any port associated with it here s the full output and and logs from bmo kubectl get bmh ✔ name status provisioning status consumer bmc hardware profile online error sendgrid net ok inspecting ipmi true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● ✔ lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● kubectl apply f out ↵ cluster cluster x io lukasz unchanged baremetalcluster infrastructure cluster x io lukasz unchanged kubeadmconfig bootstrap cluster x io lukasz controlplane created kubeadmconfig bootstrap cluster x io lukasz controlplane created kubeadmconfig bootstrap cluster x io lukasz controlplane created machine cluster x io lukasz controlplane created machine cluster x io lukasz controlplane created machine cluster x io lukasz controlplane created baremetalmachine infrastructure cluster x io lukasz controlplane created baremetalmachine infrastructure cluster x io lukasz controlplane created baremetalmachine infrastructure cluster x io lukasz controlplane created kubeadmconfigtemplate bootstrap cluster x io lukasz md created kubeadmconfigtemplate bootstrap cluster x io lukasz old md created machinedeployment cluster x io lukasz md created machinedeployment cluster x io lukasz old md created baremetalmachinetemplate infrastructure cluster x io lukasz md created baremetalmachinetemplate infrastructure cluster x io lukasz old md created lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● ✔ lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● kubectl get machine ✔ name providerid phase lukasz controlplane provisioning lukasz controlplane pending lukasz controlplane pending lukasz md pending lukasz old md pending lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● kubectl get bmh ✔ name status provisioning status consumer bmc hardware profile online error sendgrid net ok inspecting ipmi true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net error provisioning lukasz controlplane ipmi unknown true host validation error node does not have any port associated with it node does not have any port associated with it sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true sendgrid net ok ready ipmi unknown true lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● ✔ lukasz lukasz go src github com sendgrid cluster api provider bmo armada lukaszbranch ● level info ts logger baremetalhost msg reconciling baremetalhost request namespace request name sendgrid net level info ts logger baremetalhost msg inspecting hardware request namespace request name sendgrid net provisioningstate inspecting level info ts logger baremetalhost ironic msg inspecting hardware host sendgrid net status ok level info ts logger baremetalhost ironic msg looking for existing node by name host sendgrid net name sendgrid net level error ts logger controller runtime controller msg reconciler error controller baremetalhost controller request sendgrid net error action inspecting failed hardware inspection failed no ironic node for host errorverbose no ironic node for host nhardware inspection failed ngithub com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost actioninspecting n t go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go ngithub com io baremetal operator pkg controller baremetalhost hoststatemachine handleinspecting n t go src github com io baremetal operator pkg controller baremetalhost host state machine go ngithub com io baremetal operator pkg controller baremetalhost hoststatemachine github com io baremetal operator pkg controller baremetalhost handleinspecting fm n t go src github com io baremetal operator pkg controller baremetalhost host state machine go ngithub com io baremetal operator pkg controller baremetalhost hoststatemachine reconcilestate n t go src github com io baremetal operator pkg controller baremetalhost host state machine go ngithub com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost reconcile n t go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller reconcilehandler n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller processnextworkitem n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller worker n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller github com io baremetal operator vendor sigs io controller runtime pkg internal controller worker fm n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor io apimachinery pkg util wait jitteruntil n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go ngithub com io baremetal operator vendor io apimachinery pkg util wait jitteruntil n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go ngithub com io baremetal operator vendor io apimachinery pkg util wait until n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go nruntime goexit n t usr local go src runtime asm s naction inspecting failed ngithub com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost reconcile n t go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller reconcilehandler n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller processnextworkitem n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller worker n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller github com io baremetal operator vendor sigs io controller runtime pkg internal controller worker fm n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor io apimachinery pkg util wait jitteruntil n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go ngithub com io baremetal operator vendor io apimachinery pkg util wait jitteruntil n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go ngithub com io baremetal operator vendor io apimachinery pkg util wait until n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go nruntime goexit n t usr local go src runtime asm s stacktrace github com io baremetal operator vendor github com go logr zapr zaplogger error n t go src github com io baremetal operator vendor github com go logr zapr zapr go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller reconcilehandler n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller processnextworkitem n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller worker n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor sigs io controller runtime pkg internal controller controller github com io baremetal operator vendor sigs io controller runtime pkg internal controller worker fm n t go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go ngithub com io baremetal operator vendor io apimachinery pkg util wait jitteruntil n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go ngithub com io baremetal operator vendor io apimachinery pkg util wait jitteruntil n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go ngithub com io baremetal operator vendor io apimachinery pkg util wait until n t go src github com io baremetal operator vendor io apimachinery pkg util wait wait go level info ts logger baremetalhost msg reconciling baremetalhost request namespace request name sendgrid net level info ts logger baremetalhost ironic msg validating management access host sendgrid net level info ts logger baremetalhost ironic msg found existing node by id host sendgrid net level info ts logger baremetalhost ironic msg current provision state host sendgrid net lasterror current manageable target level info ts logger baremetalhost ironic msg have manageable host host sendgrid net level info ts logger baremetalhost ironic msg updating hardware state host sendgrid net level info ts logger baremetalhost ironic msg found existing node by id host sendgrid net level info ts logger baremetalhost msg saving host status request namespace request name sendgrid net provisioningstate ready operational status ok provisioning state ready level info ts logger baremetalhost msg done request namespace request name sendgrid net provisioningstate ready requeue true after level info ts logger baremetalhost msg reconciling baremetalhost request namespace request name sendgrid net level info ts logger baremetalhost msg provisioning request namespace request name sendgrid net provisioningstate provisioning level info ts logger baremetalhost ironic msg found existing node by id host sendgrid net level info ts logger baremetalhost ironic msg provisioning image to host host sendgrid net state manageable runtime go observed a panic invalid memory address or nil pointer dereference runtime error invalid memory address or nil pointer dereference goroutine github com io baremetal operator vendor io apimachinery pkg util runtime logpanic go src github com io baremetal operator vendor io apimachinery pkg util runtime runtime go github com io baremetal operator vendor io apimachinery pkg util runtime handlecrash go src github com io baremetal operator vendor io apimachinery pkg util runtime runtime go panic usr local go src runtime panic go github com io baremetal operator pkg provisioner ironic ironicprovisioner provision go src github com io baremetal operator pkg provisioner ironic ironic go github com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost actionprovisioning go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go github com io baremetal operator pkg controller baremetalhost hoststatemachine handleprovisioning go src github com io baremetal operator pkg controller baremetalhost host state machine go github com io baremetal operator pkg controller baremetalhost hoststatemachine github com io baremetal operator pkg controller baremetalhost handleprovisioning fm go src github com io baremetal operator pkg controller baremetalhost host state machine go github com io baremetal operator pkg controller baremetalhost hoststatemachine reconcilestate go src github com io baremetal operator pkg controller baremetalhost host state machine go github com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost reconcile go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller reconcilehandler go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller processnextworkitem go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller worker go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller github com io baremetal operator vendor sigs io controller runtime pkg internal controller worker fm go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor io apimachinery pkg util wait jitteruntil go src github com io baremetal operator vendor io apimachinery pkg util wait wait go github com io baremetal operator vendor io apimachinery pkg util wait jitteruntil go src github com io baremetal operator vendor io apimachinery pkg util wait wait go github com io baremetal operator vendor io apimachinery pkg util wait until go src github com io baremetal operator vendor io apimachinery pkg util wait wait go created by github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller start go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go panic runtime error invalid memory address or nil pointer dereference panic runtime error invalid memory address or nil pointer dereference goroutine github com io baremetal operator vendor io apimachinery pkg util runtime handlecrash go src github com io baremetal operator vendor io apimachinery pkg util runtime runtime go panic usr local go src runtime panic go github com io baremetal operator pkg provisioner ironic ironicprovisioner provision go src github com io baremetal operator pkg provisioner ironic ironic go github com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost actionprovisioning go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go github com io baremetal operator pkg controller baremetalhost hoststatemachine handleprovisioning go src github com io baremetal operator pkg controller baremetalhost host state machine go github com io baremetal operator pkg controller baremetalhost hoststatemachine github com io baremetal operator pkg controller baremetalhost handleprovisioning fm go src github com io baremetal operator pkg controller baremetalhost host state machine go github com io baremetal operator pkg controller baremetalhost hoststatemachine reconcilestate go src github com io baremetal operator pkg controller baremetalhost host state machine go github com io baremetal operator pkg controller baremetalhost reconcilebaremetalhost reconcile go src github com io baremetal operator pkg controller baremetalhost baremetalhost controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller reconcilehandler go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller processnextworkitem go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller worker go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller github com io baremetal operator vendor sigs io controller runtime pkg internal controller worker fm go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go github com io baremetal operator vendor io apimachinery pkg util wait jitteruntil go src github com io baremetal operator vendor io apimachinery pkg util wait wait go github com io baremetal operator vendor io apimachinery pkg util wait jitteruntil go src github com io baremetal operator vendor io apimachinery pkg util wait wait go github com io baremetal operator vendor io apimachinery pkg util wait until go src github com io baremetal operator vendor io apimachinery pkg util wait wait go created by github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller start go src github com io baremetal operator vendor sigs io controller runtime pkg internal controller controller go steps to recreate use bmo as normal and get some bmh into the ready state bounce the pod after the operator comes back up attempt to provision a new cluster the above error should be thrown we have also encountered a similar error although haven t been able to reproduce it of the time as the one above where a bmh previous to the pod bounce was in the provisioned state in the example provided we provisioned the bmh by adding the image to the spec it wasn t provisioned as part of a cluster but we have seen the exact same issue below when the bmhs were provisioned as part of a cluster after the pod is bounced do not get adopted properly i e sendgrid net error registration error ipmi unknown true host adoption failed error while attempting to adopt node node does not have any port associated with it and the logs for the event level info ts logger baremetalhost msg reconciling baremetalhost request namespace request name sendgrid net level info ts logger baremetalhost ironic msg looking for existing node by name host sendgrid net name sendgrid net level info ts logger baremetalhost ironic msg re registering host host sendgrid net level info ts logger baremetalhost ironic msg validating management access host sendgrid net level info ts logger baremetalhost ironic msg looking for existing node by name host sendgrid net name sendgrid net level info ts logger baremetalhost ironic msg registering host in ironic host sendgrid net level info ts logger baremetalhost ironic msg setting provisioning id host sendgrid net id level info ts logger baremetalhost ironic msg setting instance info host sendgrid net image source image build fedora checksum level info ts logger baremetalhost ironic msg current provision state host sendgrid net lasterror current enroll target level info ts logger baremetalhost ironic msg changing provisioning state host sendgrid net current enroll existing target new target manage level info ts logger baremetalhost msg saving host status request namespace request name sendgrid net provisioningstate provision ed operational status ok provisioning state provisioned level info ts logger baremetalhost msg publishing event reason registered message registered new host level info ts logger baremetalhost msg done request namespace request name sendgrid net provisioningstate provisioned requeue t rue after a little while later level info ts logger baremetalhost msg reconciling baremetalhost request namespace request name sendgrid net level info ts logger baremetalhost ironic msg found existing node by id host sendgrid net level info ts logger baremetalhost msg saving host status request namespace request name sendgrid net provisioningstate provisioned operational status error provisioning state registration error level info ts logger baremetalhost msg publishing event reason registrationerror message host adoption failed error while attempting to adopt node node does not have any port associated with it level info ts logger baremetalhost msg stopping on host error request namespace request name sendgrid net provisioningstate provisioned message host adoption failed error while attempting to adopt node node does not have any port associated with it ,0
57563,15862769901.0,IssuesEvent,2021-04-08 12:02:02,galasa-dev/projectmanagement,https://api.github.com/repos/galasa-dev/projectmanagement,opened,Firefox layout defects,defect webui,"In Firefox (86.0.1 (64-bit) on Mac Big Sur), the slideout hamburger menu is placed too high

The same issue does **not** occur with the `organise table`, `Filter`, `Work list`, `Compare list` or `Help` flyouts.",1.0,"Firefox layout defects - In Firefox (86.0.1 (64-bit) on Mac Big Sur), the slideout hamburger menu is placed too high

The same issue does **not** occur with the `organise table`, `Filter`, `Work list`, `Compare list` or `Help` flyouts.",0,firefox layout defects in firefox bit on mac big sur the slideout hamburger menu is placed too high the same issue does not occur with the organise table filter work list compare list or help flyouts ,0
111469,14100179210.0,IssuesEvent,2020-11-06 03:25:40,jaypeasee/overlook-hotel,https://api.github.com/repos/jaypeasee/overlook-hotel,closed,Display available rooms for Guest,design/accessibility new feature,"### As a user, when I pick a date in the calendar on the nav:
1. I should be able to see all available bookings for that day. If there are none.
1. I should be able to pick an available room.
1. I should see the main section title update.
1. If there are no availabilities, the Main section should show that.
1. If I pick a date in the past, the Main section should show that.",1.0,"Display available rooms for Guest - ### As a user, when I pick a date in the calendar on the nav:
1. I should be able to see all available bookings for that day. If there are none.
1. I should be able to pick an available room.
1. I should see the main section title update.
1. If there are no availabilities, the Main section should show that.
1. If I pick a date in the past, the Main section should show that.",0,display available rooms for guest as a user when i pick a date in the calendar on the nav i should be able to see all available bookings for that day if there are none i should be able to pick an available room i should see the main section title update if there are no availabilities the main section should show that if i pick a date in the past the main section should show that ,0
1141,4998883182.0,IssuesEvent,2016-12-09 21:20:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"elasticache misses ""Description"" parameter",affects_2.1 aws cloud feature_idea waiting_on_maintainer,"
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
elasticache
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
From the AWS Console I can set a ""Description"" -- via this Ansible module I can't. So I would suggest to add this parameter.
##### STEPS TO REPRODUCE
```
- name: ""mgmt::db Create ElastiCache Redis""
elasticache:
state: present
region: ""{{ region }}""
vpc_id: ""{{ mgmt_vpc.id }}""
# Name must be <= 20 chars
name: ""{{ owner }}-ec-{{ env }}-db-red""
description: ""ElastiCache Redis for Sentry""
node_type: ""cache.m4.large""
num_nodes: 1 # no replicas
engine: redis
cache_engine_version: ""3.2.4""
cache_port: 6379
cache_parameter_group: ""default.redis3.2""
cache_subnet_group: ""{{ owner }}-sngrp-{{ env }}-db-redis""
security_group_ids:
- ""{{ mgmt_sg_redis.group_id }}""
register: mgmt_ec_redis
```
##### EXPECTED RESULTS
I expected that the module call would be run without any errors.
##### ACTUAL RESULTS
Got the following err msg:
```
unsupported parameter for module: description
```
",True,"elasticache misses ""Description"" parameter -
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
elasticache
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
From the AWS Console I can set a ""Description"" -- via this Ansible module I can't. So I would suggest to add this parameter.
##### STEPS TO REPRODUCE
```
- name: ""mgmt::db Create ElastiCache Redis""
elasticache:
state: present
region: ""{{ region }}""
vpc_id: ""{{ mgmt_vpc.id }}""
# Name must be <= 20 chars
name: ""{{ owner }}-ec-{{ env }}-db-red""
description: ""ElastiCache Redis for Sentry""
node_type: ""cache.m4.large""
num_nodes: 1 # no replicas
engine: redis
cache_engine_version: ""3.2.4""
cache_port: 6379
cache_parameter_group: ""default.redis3.2""
cache_subnet_group: ""{{ owner }}-sngrp-{{ env }}-db-redis""
security_group_ids:
- ""{{ mgmt_sg_redis.group_id }}""
register: mgmt_ec_redis
```
##### EXPECTED RESULTS
I expected that the module call would be run without any errors.
##### ACTUAL RESULTS
Got the following err msg:
```
unsupported parameter for module: description
```
",1,elasticache misses description parameter issue type feature idea component name elasticache ansible version ansible config file configured module search path default w o overrides os environment n a summary from the aws console i can set a description via this ansible module i can t so i would suggest to add this parameter steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name mgmt db create elasticache redis elasticache state present region region vpc id mgmt vpc id name must be chars name owner ec env db red description elasticache redis for sentry node type cache large num nodes no replicas engine redis cache engine version cache port cache parameter group default cache subnet group owner sngrp env db redis security group ids mgmt sg redis group id register mgmt ec redis expected results i expected that the module call would be run without any errors actual results got the following err msg unsupported parameter for module description ,1
14790,9524268539.0,IssuesEvent,2019-04-28 01:30:02,TheCacophonyProject/cacophonometer,https://api.github.com/repos/TheCacophonyProject/cacophonometer,reopened,Sort groups in group list,good first issue usability,The list of groups shown in the setup wizard is unsorted which makes it hard to find the right one if there's a lot of groups available.,True,Sort groups in group list - The list of groups shown in the setup wizard is unsorted which makes it hard to find the right one if there's a lot of groups available.,0,sort groups in group list the list of groups shown in the setup wizard is unsorted which makes it hard to find the right one if there s a lot of groups available ,0
5738,30339102335.0,IssuesEvent,2023-07-11 11:34:26,IPVS-AS/MBP,https://api.github.com/repos/IPVS-AS/MBP,opened,Upgrade to newer JDKs,maintainance,"Currently only JDK 8 is supported, JDK 9 does not work due to compatibility issues regarding WRO.",True,"Upgrade to newer JDKs - Currently only JDK 8 is supported, JDK 9 does not work due to compatibility issues regarding WRO.",1,upgrade to newer jdks currently only jdk is supported jdk does not work due to compatibility issues regarding wro ,1
3997,18526091089.0,IssuesEvent,2021-10-20 20:41:42,carbon-design-system/carbon,https://api.github.com/repos/carbon-design-system/carbon,closed,[Feature Request]: To disable the options in MultiSelect,type: enhancement 💡 status: waiting for maintainer response 💬,"### Summary
We would like to disable the option(s) in MultiSelect because in the beginning, the options is valid, but afterward, it becomes invalid. But we don't want to remove the invalid options directly because of user experience concerns.
### Justification
As above.
### Desired UX and success metrics
We want something like below.

### Required functionality
_No response_
### Specific timeline issues / requests
As soon as possible. Thanks.
### Available extra resources
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)",True,"[Feature Request]: To disable the options in MultiSelect - ### Summary
We would like to disable the option(s) in MultiSelect because in the beginning, the options is valid, but afterward, it becomes invalid. But we don't want to remove the invalid options directly because of user experience concerns.
### Justification
As above.
### Desired UX and success metrics
We want something like below.

### Required functionality
_No response_
### Specific timeline issues / requests
As soon as possible. Thanks.
### Available extra resources
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)",1, to disable the options in multiselect summary we would like to disable the option s in multiselect because in the beginning the options is valid but afterward it becomes invalid but we don t want to remove the invalid options directly because of user experience concerns justification as above desired ux and success metrics we want something like below required functionality no response specific timeline issues requests as soon as possible thanks available extra resources no response code of conduct i agree to follow this project s ,1
5868,31836059885.0,IssuesEvent,2023-09-14 13:35:42,GaloyMoney/galoy,https://api.github.com/repos/GaloyMoney/galoy,closed,switch to new graphql subscription protocol,graphql api maintainability,"We are using the old, deprecated and no longer maintained, protocol.
https://www.apollographql.com/docs/apollo-server/data/subscriptions/#switching-from-subscriptions-transport-ws
this requires updating both backend and frontend at the same time. ",True,"switch to new graphql subscription protocol - We are using the old, deprecated and no longer maintained, protocol.
https://www.apollographql.com/docs/apollo-server/data/subscriptions/#switching-from-subscriptions-transport-ws
this requires updating both backend and frontend at the same time. ",1,switch to new graphql subscription protocol we are using the old deprecated and no longer maintained protocol this requires updating both backend and frontend at the same time ,1
138261,30839774713.0,IssuesEvent,2023-08-02 09:51:21,SambhaviPD/your-recipebuddy,https://api.github.com/repos/SambhaviPD/your-recipebuddy,closed,Write a common method that uses OpenAI's API by sending appropriate prompt as an input,code-refactoring backend-development,"Random Recipe, Recipe by Cuisine, Recipe by Ingredients, Recipe by Meal course - The only difference between all these menu options are the prompts with appropriate inputs, the calling logic remains the same. Hence we need to write a common method to invoke the actual API.",1.0,"Write a common method that uses OpenAI's API by sending appropriate prompt as an input - Random Recipe, Recipe by Cuisine, Recipe by Ingredients, Recipe by Meal course - The only difference between all these menu options are the prompts with appropriate inputs, the calling logic remains the same. Hence we need to write a common method to invoke the actual API.",0,write a common method that uses openai s api by sending appropriate prompt as an input random recipe recipe by cuisine recipe by ingredients recipe by meal course the only difference between all these menu options are the prompts with appropriate inputs the calling logic remains the same hence we need to write a common method to invoke the actual api ,0
1205,5143265351.0,IssuesEvent,2017-01-12 15:35:21,Particular/NServiceBus.Host.AzureCloudService,https://api.github.com/repos/Particular/NServiceBus.Host.AzureCloudService,closed,V7 RTM,Tag: Maintainer Prio,"## Items to complete
- ~~Change package author name -> use updated NugetPackage https://github.com/Particular/V6Launch/issues/4~~ not needed
- [ ] Create release notes (general ones, similar to the [Core ones with milestones](https://github.com/Particular/V6Launch/issues/75#issuecomment-251098093))
- [ ] Update [V6Launch status list](https://github.com/Particular/V6Launch/issues/4)
",True,"V7 RTM - ## Items to complete
- ~~Change package author name -> use updated NugetPackage https://github.com/Particular/V6Launch/issues/4~~ not needed
- [ ] Create release notes (general ones, similar to the [Core ones with milestones](https://github.com/Particular/V6Launch/issues/75#issuecomment-251098093))
- [ ] Update [V6Launch status list](https://github.com/Particular/V6Launch/issues/4)
",1, rtm items to complete change package author name use updated nugetpackage not needed create release notes general ones similar to the update ,1
435463,30501973725.0,IssuesEvent,2023-07-18 14:28:02,usnistgov/dioptra,https://api.github.com/repos/usnistgov/dioptra,opened,Add Release instructions for merging dev to main into a RELEASE.md file,documentation,"## Definition of Done
- [ ] Instructions for what to do to update the main branch from dev are written up in Markdown format and stored in a RELEASE.md file",1.0,"Add Release instructions for merging dev to main into a RELEASE.md file - ## Definition of Done
- [ ] Instructions for what to do to update the main branch from dev are written up in Markdown format and stored in a RELEASE.md file",0,add release instructions for merging dev to main into a release md file definition of done instructions for what to do to update the main branch from dev are written up in markdown format and stored in a release md file,0
528,3925713600.0,IssuesEvent,2016-04-22 20:05:19,heiglandreas/authLdap,https://api.github.com/repos/heiglandreas/authLdap,closed,Stop Password-Change Email on password-Update via LDAP,bug maintainer reply expected,"Currently a user will get an Email after login that the password has changed when password-caching is enabled. The preffered behaviour would be to simply not send a password-change-Email when a user-password is changed via LDAP.
THis has been reported on https://wordpress.org/support/topic/authldap-authentication-triggers-email-password-change-notification",True,"Stop Password-Change Email on password-Update via LDAP - Currently a user will get an Email after login that the password has changed when password-caching is enabled. The preffered behaviour would be to simply not send a password-change-Email when a user-password is changed via LDAP.
THis has been reported on https://wordpress.org/support/topic/authldap-authentication-triggers-email-password-change-notification",1,stop password change email on password update via ldap currently a user will get an email after login that the password has changed when password caching is enabled the preffered behaviour would be to simply not send a password change email when a user password is changed via ldap this has been reported on ,1
239831,19957572451.0,IssuesEvent,2022-01-28 02:19:52,microsoft/AzureStorageExplorer,https://api.github.com/repos/microsoft/AzureStorageExplorer,closed,There is a failed activity log with an error 'TypeError:Cannot read properties of undefined (reading 'type')' when attaching one table,🧪 testing :gear: files :beetle: regression,"**Storage Explorer Version**: 1.23.0-dev
**Build Number**: 20220125.10
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Big Sur 11.6.1
**How Found**: From running test case
**Regression From**: Previous release (1.22.0)
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Right click one table -> Click 'Get Shared Access Signature...' -> Click 'Create' -> Copy the SAS URL.
3. Open the connect dialog -> Attach the table via the copied SAS URL.
4. Check there is a successful activity log.
## Expected Experience ##
There is a successful activity log.
## Actual Experience ##
There is a failed activity log with an error 'TypeError:Cannot read properties of undefined (reading 'type')'.
",1.0,"There is a failed activity log with an error 'TypeError:Cannot read properties of undefined (reading 'type')' when attaching one table - **Storage Explorer Version**: 1.23.0-dev
**Build Number**: 20220125.10
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Big Sur 11.6.1
**How Found**: From running test case
**Regression From**: Previous release (1.22.0)
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Right click one table -> Click 'Get Shared Access Signature...' -> Click 'Create' -> Copy the SAS URL.
3. Open the connect dialog -> Attach the table via the copied SAS URL.
4. Check there is a successful activity log.
## Expected Experience ##
There is a successful activity log.
## Actual Experience ##
There is a failed activity log with an error 'TypeError:Cannot read properties of undefined (reading 'type')'.
",0,there is a failed activity log with an error typeerror cannot read properties of undefined reading type when attaching one table storage explorer version dev build number branch main platform os windows linux ubuntu macos big sur how found from running test case regression from previous release steps to reproduce expand one storage account tables right click one table click get shared access signature click create copy the sas url open the connect dialog attach the table via the copied sas url check there is a successful activity log expected experience there is a successful activity log actual experience there is a failed activity log with an error typeerror cannot read properties of undefined reading type ,0
250408,27086166596.0,IssuesEvent,2023-02-14 17:08:31,solana-labs/solana,https://api.github.com/repos/solana-labs/solana,closed,Gossip is vulnerable to UDP reflection,security,"#### Problem
Gossip responses are sent to the source IP, rather than the IP address in gossip. This means that the mechanism is vulnerable to source IP spoofing, with responses sent to the spoofed IP.
The real-world impact of this ranges from ""nuisance"" to ""very bad"" depending on the amplification factor.
Spoofing is still trivial in today's internet - many providers still don't do proper filtering.
#### Proposed Solution
In a connectionless protocol, the source IP can never be trusted. The canonical solution to this is a three-way handshake that authenticates the source IP.
",True,"Gossip is vulnerable to UDP reflection - #### Problem
Gossip responses are sent to the source IP, rather than the IP address in gossip. This means that the mechanism is vulnerable to source IP spoofing, with responses sent to the spoofed IP.
The real-world impact of this ranges from ""nuisance"" to ""very bad"" depending on the amplification factor.
Spoofing is still trivial in today's internet - many providers still don't do proper filtering.
#### Proposed Solution
In a connectionless protocol, the source IP can never be trusted. The canonical solution to this is a three-way handshake that authenticates the source IP.
",0,gossip is vulnerable to udp reflection problem gossip responses are sent to the source ip rather than the ip address in gossip this means that the mechanism is vulnerable to source ip spoofing with responses sent to the spoofed ip the real world impact of this ranges from nuisance to very bad depending on the amplification factor spoofing is still trivial in today s internet many providers still don t do proper filtering proposed solution in a connectionless protocol the source ip can never be trusted the canonical solution to this is a three way handshake that authenticates the source ip ,0
290786,25095674374.0,IssuesEvent,2022-11-08 09:59:50,OskarMorel/GORAS_EditeurGrapheProbalistiques,https://api.github.com/repos/OskarMorel/GORAS_EditeurGrapheProbalistiques,closed,US5 - Réenregistrement d'un graphe,redigerTestsAcceptation,"### User story
En tant qu'utilisateur
Je veux enregistrer un graphe ouvert à partir d’un fichier dans un fichier différent
Afin de créer une copie
### Tests d'acceptation
",1.0,"US5 - Réenregistrement d'un graphe - ### User story
En tant qu'utilisateur
Je veux enregistrer un graphe ouvert à partir d’un fichier dans un fichier différent
Afin de créer une copie
### Tests d'acceptation
",0, réenregistrement d un graphe user story en tant qu utilisateur je veux enregistrer un graphe ouvert à partir d’un fichier dans un fichier différent afin de créer une copie tests d acceptation ,0
3641,14730765981.0,IssuesEvent,2021-01-06 13:44:34,AMYMEME/re-cycle-app,https://api.github.com/repos/AMYMEME/re-cycle-app,closed,firebase 통합환경 만들기,maintain,"# Database
찾아보니까 Firebase에 DB로 쓸 수 있는게 있는데, 원래 Google Storage를 포함해서 실시간 DB, firestore가 있음
## Google Storage
구글 스토리지는 90일 300$이용이나 for firebase용도 따로 있는 것 같은데,
java지원이 안되고, 우리가 갖고 있는 데이터에 적합하지 않아보임.
이미지같은 큰 바이너리 데이터를 저장할 때 가장 유용할 것 같음
## realtime DB
NoSQL 형태, Android, iOS, 자바스크립트 SDK로 연동할 수 있음
따라서 클라이언트 쪽에서는 잘 모르겠는데, 백엔드 쪽에서는 잘 모르겠음
## firestore
NoSQL 형태. 백엔드 쪽에도 적합해 보임
안드로이드(자바, 코틀린 모두 지원), iOS, 노드JS, 스프링, 파이썬, golang 모두 지원하고
우리가 저장할 데이터가 그렇게 큰 편이 아니어서 적합해보임",True,"firebase 통합환경 만들기 - # Database
찾아보니까 Firebase에 DB로 쓸 수 있는게 있는데, 원래 Google Storage를 포함해서 실시간 DB, firestore가 있음
## Google Storage
구글 스토리지는 90일 300$이용이나 for firebase용도 따로 있는 것 같은데,
java지원이 안되고, 우리가 갖고 있는 데이터에 적합하지 않아보임.
이미지같은 큰 바이너리 데이터를 저장할 때 가장 유용할 것 같음
## realtime DB
NoSQL 형태, Android, iOS, 자바스크립트 SDK로 연동할 수 있음
따라서 클라이언트 쪽에서는 잘 모르겠는데, 백엔드 쪽에서는 잘 모르겠음
## firestore
NoSQL 형태. 백엔드 쪽에도 적합해 보임
안드로이드(자바, 코틀린 모두 지원), iOS, 노드JS, 스프링, 파이썬, golang 모두 지원하고
우리가 저장할 데이터가 그렇게 큰 편이 아니어서 적합해보임",1,firebase 통합환경 만들기 database 찾아보니까 firebase에 db로 쓸 수 있는게 있는데 원래 google storage를 포함해서 실시간 db firestore가 있음 google storage 구글 스토리지는 이용이나 for firebase용도 따로 있는 것 같은데 java지원이 안되고 우리가 갖고 있는 데이터에 적합하지 않아보임 이미지같은 큰 바이너리 데이터를 저장할 때 가장 유용할 것 같음 realtime db nosql 형태 android ios 자바스크립트 sdk로 연동할 수 있음 따라서 클라이언트 쪽에서는 잘 모르겠는데 백엔드 쪽에서는 잘 모르겠음 firestore nosql 형태 백엔드 쪽에도 적합해 보임 안드로이드 자바 코틀린 모두 지원 ios 노드js 스프링 파이썬 golang 모두 지원하고 우리가 저장할 데이터가 그렇게 큰 편이 아니어서 적합해보임,1
2020,6757626700.0,IssuesEvent,2017-10-24 11:32:50,Kristinita/Erics-Green-Room,https://api.github.com/repos/Kristinita/Erics-Green-Room,closed,[Feature request] Hotkeys,need-maintainer web wontfix,"### 1. Запрос
Было бы неплохо, если бы можно было вводить команды посредством горячих клавиш. Хорошо бы иметь возможность горячих клавиш для всех команд, для запуска которых сейчас нужно писать в строке ввода более 1—2 символов.
### 2. Аргументация
Экономия времени игрока. Быстрее нажать шорткат, чем писать команды.
### 3. Пример реализации
+ игрок нажимает Super+P → передаётся команда `Паузу`,
+ игрок нажимает Super+N → передаётся команда `Дальше`.
### 4. Рекомендация
Полагаю, что в шорткатах неплохо бы задействовать клавишу Super (она же Win), редко встречающуюся в дефолтных сочетаниях наиболее распространённых браузеров.
Спасибо.",True,"[Feature request] Hotkeys - ### 1. Запрос
Было бы неплохо, если бы можно было вводить команды посредством горячих клавиш. Хорошо бы иметь возможность горячих клавиш для всех команд, для запуска которых сейчас нужно писать в строке ввода более 1—2 символов.
### 2. Аргументация
Экономия времени игрока. Быстрее нажать шорткат, чем писать команды.
### 3. Пример реализации
+ игрок нажимает Super+P → передаётся команда `Паузу`,
+ игрок нажимает Super+N → передаётся команда `Дальше`.
### 4. Рекомендация
Полагаю, что в шорткатах неплохо бы задействовать клавишу Super (она же Win), редко встречающуюся в дефолтных сочетаниях наиболее распространённых браузеров.
Спасибо.",1, hotkeys запрос было бы неплохо если бы можно было вводить команды посредством горячих клавиш хорошо бы иметь возможность горячих клавиш для всех команд для запуска которых сейчас нужно писать в строке ввода более — символов аргументация экономия времени игрока быстрее нажать шорткат чем писать команды пример реализации игрок нажимает super p → передаётся команда паузу игрок нажимает super n → передаётся команда дальше рекомендация полагаю что в шорткатах неплохо бы задействовать клавишу super она же win редко встречающуюся в дефолтных сочетаниях наиболее распространённых браузеров спасибо ,1
459,3640520370.0,IssuesEvent,2016-02-13 00:54:37,dotnet/roslyn-analyzers,https://api.github.com/repos/dotnet/roslyn-analyzers,closed,Port FxCop rule CA1801: ReviewUnusedParameters,Area-Microsoft.Maintainability.Analyzers FxCop-Port Urgency-Soon,"**Title:** Review unused parameters
**Description:**
A method signature includes a parameter that is not used in the method body.
**Dependency:** None, can be based on: https://github.com/dotnet/roslyn/blob/master/src/Samples/CSharp/Analyzers/CSharpAnalyzers/CSharpAnalyzers/StatefulAnalyzers/CodeBlockStartedAnalyzer.cs
**Notes:**
Don't fire if the parameter comes from an interface you're implementing or a virtual method you're overriding.",True,"Port FxCop rule CA1801: ReviewUnusedParameters - **Title:** Review unused parameters
**Description:**
A method signature includes a parameter that is not used in the method body.
**Dependency:** None, can be based on: https://github.com/dotnet/roslyn/blob/master/src/Samples/CSharp/Analyzers/CSharpAnalyzers/CSharpAnalyzers/StatefulAnalyzers/CodeBlockStartedAnalyzer.cs
**Notes:**
Don't fire if the parameter comes from an interface you're implementing or a virtual method you're overriding.",1,port fxcop rule reviewunusedparameters title review unused parameters description a method signature includes a parameter that is not used in the method body dependency none can be based on notes don t fire if the parameter comes from an interface you re implementing or a virtual method you re overriding ,1
4103,19430005276.0,IssuesEvent,2021-12-21 10:48:43,chocolatey-community/chocolatey-package-requests,https://api.github.com/repos/chocolatey-community/chocolatey-package-requests,closed,RFM - solr,Status: Available For Maintainer(s),"## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/solr
Package source URL: https://github.com/majkinetor/au-packages/tree/master/solr
Package became too big to be embedded. I don't want to maintain non-embeddable packages.
",True,"RFM - solr - ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/solr
Package source URL: https://github.com/majkinetor/au-packages/tree/master/solr
Package became too big to be embedded. I don't want to maintain non-embeddable packages.
",1,rfm solr current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url package became too big to be embedded i don t want to maintain non embeddable packages ,1
4492,23391995205.0,IssuesEvent,2022-08-11 18:48:02,deislabs/spiderlightning,https://api.github.com/repos/deislabs/spiderlightning,opened,change all examples to use `configs.azapp` for their secret store,💫 refactor 🚧 maintainer issue,"**Describe the solution you'd like**
n/a
**Additional context**
n/a",True,"change all examples to use `configs.azapp` for their secret store - **Describe the solution you'd like**
n/a
**Additional context**
n/a",1,change all examples to use configs azapp for their secret store describe the solution you d like n a additional context n a,1
5153,26254104891.0,IssuesEvent,2023-01-05 22:18:20,aws/serverless-application-model,https://api.github.com/repos/aws/serverless-application-model,closed,Orphan Log Group,type/bug stage/bug-repro area/api-gateway maintainer/need-followup,"This is not (I'm pretty sure) related to this bug: https://github.com/aws/serverless-application-model/issues/1216
Log group created by template with:
```yaml
Type: 'AWS::Logs::LogGroup
DeletionPolicy: Delete
Properties:
RetentionInDays: 30
LogGroupName
```
The above is not a copy paste, not the real template.
The ApiGW ( 'AWS::Serverless::Api' ) ref's the log group like this:
```
AccessLogSetting: {
'DestinationArn': {
'Fn::Sub': '${WalletApiGatewayAccessLogGroup.Arn}'
},
'Format':""{'method':'$context.httpMethod','path':'$context.path','requestId':'$context.requestId','resourcePath':'$context.resourcePath','status':'$context.status','responseLatency':'$context.responseLatency','responseLength':'$context.responseLength','sourceIp':'$context.identity.sourceIp','xrayTraceId':'$context.xrayTraceId','requestTime':'$context.requestTime','gatewayStage':'$context.stage','protocol':'$context.protocol','gatewayErrorMsg':'$context.error.message','integrationErrorMsg':'$context.integration.error','integrationLatency':'$context.integrationLatency'}""
}
```
If I remove the above reference, deploy and then delete the stack, the log group is created, then deleted. If not, the stack deletes _without_ error, and the log group is left behind as an orphan. The 'DeletionPolicy: Delete' changes nothing.
If I remove the log group def and leave the reference to the log group in the apigw, the log group is not eventually created like with a lambda. The deployment fails.
I think this is a bug. If both the apigw and the log group are defined and created in/by CF then deleting the stack should delete the entire stack.
The stack is deployed by an Ubuntu 18 vm in Azure DevOps using SAM cli v1.36.0
AWS Region us-east-2
",True,"Orphan Log Group - This is not (I'm pretty sure) related to this bug: https://github.com/aws/serverless-application-model/issues/1216
Log group created by template with:
```yaml
Type: 'AWS::Logs::LogGroup
DeletionPolicy: Delete
Properties:
RetentionInDays: 30
LogGroupName
```
The above is not a copy paste, not the real template.
The ApiGW ( 'AWS::Serverless::Api' ) ref's the log group like this:
```
AccessLogSetting: {
'DestinationArn': {
'Fn::Sub': '${WalletApiGatewayAccessLogGroup.Arn}'
},
'Format':""{'method':'$context.httpMethod','path':'$context.path','requestId':'$context.requestId','resourcePath':'$context.resourcePath','status':'$context.status','responseLatency':'$context.responseLatency','responseLength':'$context.responseLength','sourceIp':'$context.identity.sourceIp','xrayTraceId':'$context.xrayTraceId','requestTime':'$context.requestTime','gatewayStage':'$context.stage','protocol':'$context.protocol','gatewayErrorMsg':'$context.error.message','integrationErrorMsg':'$context.integration.error','integrationLatency':'$context.integrationLatency'}""
}
```
If I remove the above reference, deploy and then delete the stack, the log group is created, then deleted. If not, the stack deletes _without_ error, and the log group is left behind as an orphan. The 'DeletionPolicy: Delete' changes nothing.
If I remove the log group def and leave the reference to the log group in the apigw, the log group is not eventually created like with a lambda. The deployment fails.
I think this is a bug. If both the apigw and the log group are defined and created in/by CF then deleting the stack should delete the entire stack.
The stack is deployed by an Ubuntu 18 vm in Azure DevOps using SAM cli v1.36.0
AWS Region us-east-2
",1,orphan log group this is not i m pretty sure related to this bug log group created by template with yaml type aws logs loggroup deletionpolicy delete properties retentionindays loggroupname the above is not a copy paste not the real template the apigw aws serverless api ref s the log group like this accesslogsetting destinationarn fn sub walletapigatewayaccessloggroup arn format method context httpmethod path context path requestid context requestid resourcepath context resourcepath status context status responselatency context responselatency responselength context responselength sourceip context identity sourceip xraytraceid context xraytraceid requesttime context requesttime gatewaystage context stage protocol context protocol gatewayerrormsg context error message integrationerrormsg context integration error integrationlatency context integrationlatency if i remove the above reference deploy and then delete the stack the log group is created then deleted if not the stack deletes without error and the log group is left behind as an orphan the deletionpolicy delete changes nothing if i remove the log group def and leave the reference to the log group in the apigw the log group is not eventually created like with a lambda the deployment fails i think this is a bug if both the apigw and the log group are defined and created in by cf then deleting the stack should delete the entire stack the stack is deployed by an ubuntu vm in azure devops using sam cli aws region us east ,1
86513,24873521013.0,IssuesEvent,2022-10-27 17:02:58,nextcloud/richdocuments,https://api.github.com/repos/nextcloud/richdocuments,closed,GuzzleHttp 404 Not Found Response Client error,build-in code server,"I use Nexcloud 25 configured on Debian Bullseye with Apache/PHP/MariaDb server,
I have installed `richdocument` and `richdocumentcode_arm64` today, everything updated to latest version available
- Nextcloud Office 7.0.0
- Collabora Online - Built-in CODE Server (ARM64) 22.5.702
It seemed to work (documents are opened and saved) but unfortunately something isn't going properly and this caused an explosion of log file size due to this continuous error repeated indefinitely:
```
GuzzleHttp\Exception\ClientException: Client error: `GET http://localhost/nextcloud/apps/richdocumentscode_arm64/proxy.php?req=/hosting/capabilities` resulted in a `404 Not Found` response: run()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 224:
GuzzleHttp\Promise\Promise->invokeWaitFn()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 269:
GuzzleHttp\Promise\Promise->waitIfPending()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 226:
GuzzleHttp\Promise\Promise->invokeWaitList()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 62:
GuzzleHttp\Promise\Promise->waitIfPending()
/var/www/html/nextcloud/3rdparty/guzzlehttp/guzzle/src/Client.php - line 187:
GuzzleHttp\Promise\Promise->wait()
/var/www/html/nextcloud/lib/private/Http/Client/Client.php - line 218:
GuzzleHttp\Client->request()
/var/www/html/nextcloud/apps/richdocuments/lib/Service/CapabilitiesService.php - line 131:
OC\Http\Client\Client->get()
/var/www/html/nextcloud/apps/richdocuments/lib/AppInfo/Application.php - line 203:
OCA\Richdocuments\Service\CapabilitiesService->refetch()
/var/www/html/nextcloud/apps/richdocuments/lib/AppInfo/Application.php - line 135:
OCA\Richdocuments\AppInfo\Application->checkAndEnableCODEServer()
/var/www/html/nextcloud/lib/private/AppFramework/Bootstrap/Coordinator.php - line 190:
OCA\Richdocuments\AppInfo\Application->boot()
/var/www/html/nextcloud/lib/private/legacy/OC_App.php - line 208:
OC\AppFramework\Bootstrap\Coordinator->bootApp()
/var/www/html/nextcloud/lib/private/legacy/OC_App.php - line 141:
OC_App::loadApp()
/var/www/html/nextcloud/cron.php - line 55:
OC_App::loadApps()
```
",1.0,"GuzzleHttp 404 Not Found Response Client error - I use Nexcloud 25 configured on Debian Bullseye with Apache/PHP/MariaDb server,
I have installed `richdocument` and `richdocumentcode_arm64` today, everything updated to latest version available
- Nextcloud Office 7.0.0
- Collabora Online - Built-in CODE Server (ARM64) 22.5.702
It seemed to work (documents are opened and saved) but unfortunately something isn't going properly and this caused an explosion of log file size due to this continuous error repeated indefinitely:
```
GuzzleHttp\Exception\ClientException: Client error: `GET http://localhost/nextcloud/apps/richdocumentscode_arm64/proxy.php?req=/hosting/capabilities` resulted in a `404 Not Found` response: run()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 224:
GuzzleHttp\Promise\Promise->invokeWaitFn()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 269:
GuzzleHttp\Promise\Promise->waitIfPending()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 226:
GuzzleHttp\Promise\Promise->invokeWaitList()
/var/www/html/nextcloud/3rdparty/guzzlehttp/promises/src/Promise.php - line 62:
GuzzleHttp\Promise\Promise->waitIfPending()
/var/www/html/nextcloud/3rdparty/guzzlehttp/guzzle/src/Client.php - line 187:
GuzzleHttp\Promise\Promise->wait()
/var/www/html/nextcloud/lib/private/Http/Client/Client.php - line 218:
GuzzleHttp\Client->request()
/var/www/html/nextcloud/apps/richdocuments/lib/Service/CapabilitiesService.php - line 131:
OC\Http\Client\Client->get()
/var/www/html/nextcloud/apps/richdocuments/lib/AppInfo/Application.php - line 203:
OCA\Richdocuments\Service\CapabilitiesService->refetch()
/var/www/html/nextcloud/apps/richdocuments/lib/AppInfo/Application.php - line 135:
OCA\Richdocuments\AppInfo\Application->checkAndEnableCODEServer()
/var/www/html/nextcloud/lib/private/AppFramework/Bootstrap/Coordinator.php - line 190:
OCA\Richdocuments\AppInfo\Application->boot()
/var/www/html/nextcloud/lib/private/legacy/OC_App.php - line 208:
OC\AppFramework\Bootstrap\Coordinator->bootApp()
/var/www/html/nextcloud/lib/private/legacy/OC_App.php - line 141:
OC_App::loadApp()
/var/www/html/nextcloud/cron.php - line 55:
OC_App::loadApps()
```
",0,guzzlehttp not found response client error i use nexcloud configured on debian bullseye with apache php mariadb server i have installed richdocument and richdocumentcode today everything updated to latest version available nextcloud office collabora online built in code server it seemed to work documents are opened and saved but unfortunately something isn t going properly and this caused an explosion of log file size due to this continuous error repeated indefinitely guzzlehttp exception clientexception client error get resulted in a not found response meta name viewport content widt truncated var www html nextcloud guzzlehttp guzzle src middleware php line guzzlehttp exception requestexception create var www html nextcloud guzzlehttp promises src promise php line guzzlehttp middleware guzzlehttp closure sensiti var www html nextcloud guzzlehttp promises src promise php line guzzlehttp promise promise callhandler var www html nextcloud guzzlehttp promises src taskqueue php line guzzlehttp promise promise guzzlehttp promise closure sensiti var www html nextcloud guzzlehttp promises src promise php line guzzlehttp promise taskqueue run var www html nextcloud guzzlehttp promises src promise php line guzzlehttp promise promise invokewaitfn var www html nextcloud guzzlehttp promises src promise php line guzzlehttp promise promise waitifpending var www html nextcloud guzzlehttp promises src promise php line guzzlehttp promise promise invokewaitlist var www html nextcloud guzzlehttp promises src promise php line guzzlehttp promise promise waitifpending var www html nextcloud guzzlehttp guzzle src client php line guzzlehttp promise promise wait var www html nextcloud lib private http client client php line guzzlehttp client request var www html nextcloud apps richdocuments lib service capabilitiesservice php line oc http client client get var www html nextcloud apps richdocuments lib appinfo application php line oca richdocuments service capabilitiesservice refetch var www html nextcloud apps richdocuments lib appinfo application php line oca richdocuments appinfo application checkandenablecodeserver var www html nextcloud lib private appframework bootstrap coordinator php line oca richdocuments appinfo application boot var www html nextcloud lib private legacy oc app php line oc appframework bootstrap coordinator bootapp var www html nextcloud lib private legacy oc app php line oc app loadapp var www html nextcloud cron php line oc app loadapps ,0
373283,26047476668.0,IssuesEvent,2022-12-22 15:33:14,arcanus55/neodigm55,https://api.github.com/repos/arcanus55/neodigm55,closed,Enchanted CTA | Support Material Design icons (google font) via neodigm-icon element,documentation enhancement,"Adding a neodigm-icon element within the button text should display an inline icon.
Create a wiki recipe.",1.0,"Enchanted CTA | Support Material Design icons (google font) via neodigm-icon element - Adding a neodigm-icon element within the button text should display an inline icon.
Create a wiki recipe.",0,enchanted cta support material design icons google font via neodigm icon element adding a neodigm icon element within the button text should display an inline icon create a wiki recipe ,0
2904,10325346095.0,IssuesEvent,2019-09-01 16:35:12,frej/fast-export,https://api.github.com/repos/frej/fast-export,closed,fast-export/hg-fast-export.sh: line 179: python2: command not found ,not-available-to-maintainer user-support wintendo,"I can't convert any hg to git and got line as in title, 179 line it's a comment ""# move recent marks cache out of the way...""
I have both pythons installed: 2.7.16 and 3.7.3 Also Win 10",True,"fast-export/hg-fast-export.sh: line 179: python2: command not found - I can't convert any hg to git and got line as in title, 179 line it's a comment ""# move recent marks cache out of the way...""
I have both pythons installed: 2.7.16 and 3.7.3 Also Win 10",1,fast export hg fast export sh line command not found i can t convert any hg to git and got line as in title line it s a comment move recent marks cache out of the way i have both pythons installed and also win ,1
244881,18768825426.0,IssuesEvent,2021-11-06 12:48:43,Team-Hydra-Discord/Feedback,https://api.github.com/repos/Team-Hydra-Discord/Feedback,closed,[Documentation] Create AppBot Docs,Documentation,"### Describe The Issue With This Content
Create AppBot docs.
### Where Does This Issue Reside?
```bash
Team Hydra Docs
```
### Expected Content
Full AppBot Docs
### Additional Context
_No response_",1.0,"[Documentation] Create AppBot Docs - ### Describe The Issue With This Content
Create AppBot docs.
### Where Does This Issue Reside?
```bash
Team Hydra Docs
```
### Expected Content
Full AppBot Docs
### Additional Context
_No response_",0, create appbot docs describe the issue with this content create appbot docs where does this issue reside bash team hydra docs expected content full appbot docs additional context no response ,0
166747,12970986039.0,IssuesEvent,2020-07-21 10:10:47,WoWManiaUK/Redemption,https://api.github.com/repos/WoWManiaUK/Redemption,closed,Karazhan Opera ewent loot,Fix - Tester Confirmed,"There is a problem with Karazan opera ewent loot. Ewery bos in krazan drops 2 items per kill, but opera drops only one and it is always from shared loot. Items wich should drop depend of wich bos u kill dont drop at all. ",1.0,"Karazhan Opera ewent loot - There is a problem with Karazan opera ewent loot. Ewery bos in krazan drops 2 items per kill, but opera drops only one and it is always from shared loot. Items wich should drop depend of wich bos u kill dont drop at all. ",0,karazhan opera ewent loot there is a problem with karazan opera ewent loot ewery bos in krazan drops items per kill but opera drops only one and it is always from shared loot items wich should drop depend of wich bos u kill dont drop at all ,0
2310,8279120360.0,IssuesEvent,2018-09-18 01:18:27,spacetelescope/wfc3tools,https://api.github.com/repos/spacetelescope/wfc3tools,opened,TST: Add real tests and put them on Jenkins/Artifactory,maintainance,"For working examples, see `hstcal`, `acstools`, `stistools`, or `calcos`. ",True,"TST: Add real tests and put them on Jenkins/Artifactory - For working examples, see `hstcal`, `acstools`, `stistools`, or `calcos`. ",1,tst add real tests and put them on jenkins artifactory for working examples see hstcal acstools stistools or calcos ,1
5643,28369876415.0,IssuesEvent,2023-04-12 16:10:41,camunda/zeebe,https://api.github.com/repos/camunda/zeebe,opened,Allow listening for updates in BrokerTopologyListener,kind/toil area/reliability area/maintainability component/gateway,"**Description**
In order for the job push' `ClientStreamService` to detect brokers being added/removed, we need a way to get this information via the topology. We could use a plain membership listener, but that one is unaware of whether a node is a broker or a gateway, whereas the `BrokerTopologyManager` can already provide this information.
We should add the following capabilities:
- [ ] Add a new listener which gets notified when a broker is added or removed
- [ ] When the listener is initially added, it is also initialized the current list of known brokers, to avoid race conditions
- [ ] The listener can be removed via its identity
",True,"Allow listening for updates in BrokerTopologyListener - **Description**
In order for the job push' `ClientStreamService` to detect brokers being added/removed, we need a way to get this information via the topology. We could use a plain membership listener, but that one is unaware of whether a node is a broker or a gateway, whereas the `BrokerTopologyManager` can already provide this information.
We should add the following capabilities:
- [ ] Add a new listener which gets notified when a broker is added or removed
- [ ] When the listener is initially added, it is also initialized the current list of known brokers, to avoid race conditions
- [ ] The listener can be removed via its identity
",1,allow listening for updates in brokertopologylistener description in order for the job push clientstreamservice to detect brokers being added removed we need a way to get this information via the topology we could use a plain membership listener but that one is unaware of whether a node is a broker or a gateway whereas the brokertopologymanager can already provide this information we should add the following capabilities add a new listener which gets notified when a broker is added or removed when the listener is initially added it is also initialized the current list of known brokers to avoid race conditions the listener can be removed via its identity ,1
1992,6694297401.0,IssuesEvent,2017-10-10 00:58:42,duckduckgo/zeroclickinfo-spice,https://api.github.com/repos/duckduckgo/zeroclickinfo-spice,closed,Forecast: Rounding Issue,Maintainer Input Requested Status: PR Received," The humidity has a rounding issue - a long series of 9's. See the attached screen shot (far left box).

---
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
",True,"Forecast: Rounding Issue - The humidity has a rounding issue - a long series of 9's. See the attached screen shot (far left box).

---
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
",1,forecast rounding issue the humidity has a rounding issue a long series of s see the attached screen shot far left box ia page ,1
4113,19529524888.0,IssuesEvent,2021-12-30 14:15:18,NixOS/nixpkgs,https://api.github.com/repos/NixOS/nixpkgs,closed,gpgme needs a new maintainer,9.needs: maintainer,I removed myself as maintainer in #128098. Would anyone be interested to maintain it?,True,gpgme needs a new maintainer - I removed myself as maintainer in #128098. Would anyone be interested to maintain it?,1,gpgme needs a new maintainer i removed myself as maintainer in would anyone be interested to maintain it ,1
5539,27735433247.0,IssuesEvent,2023-03-15 10:53:30,precice/precice,https://api.github.com/repos/precice/precice,closed,Nightly build of dockerimage precice/precice:develop,maintainability compatibility,"**Please describe the problem you are trying to solve.**
The python bindings use the docker image `precice/precice` provided via (https://github.com/precice/precice/blob/v2.3.0/.github/workflows/release-docker.yml) in their CI pipeline to create and push a docker image with the python bindings `precice/python-bindings` (https://github.com/precice/python-bindings/blob/develop/.github/workflows/build-docker.yml).
Currently `precice/precice` is only updated, when there is a release. See https://github.com/precice/precice/blob/v2.3.0/.github/workflows/release-docker.yml. But for being able to run tests on the latest version of preCICE, we actually need `precice/precice:develop`. This will also help us downstream to detect early.
**Describe the solution you propose.**
Change the action https://github.com/precice/precice/blob/v2.3.0/.github/workflows/release-docker.yml to create and push `precice/precice:develop` nightly.
**Describe alternatives you've considered**
We will use `precice/precice:latest` in the python bindings until this issue is solved. However, this is not optimal, because this does not allow us to detect compatibility issues with the develop version of preCICE early. We will only see compatibility issues as soon as preCICE is released and will, therefore, only be able to fix them after a release of preCICE and before a release of the bindings.
**Additional context**
As far as I understand we want to support the path marked as ""nigthly"" from the preCICE paper (https://arxiv.org/pdf/2109.14470.pdf). Solving this issue helps to support this path.

",True,"Nightly build of dockerimage precice/precice:develop - **Please describe the problem you are trying to solve.**
The python bindings use the docker image `precice/precice` provided via (https://github.com/precice/precice/blob/v2.3.0/.github/workflows/release-docker.yml) in their CI pipeline to create and push a docker image with the python bindings `precice/python-bindings` (https://github.com/precice/python-bindings/blob/develop/.github/workflows/build-docker.yml).
Currently `precice/precice` is only updated, when there is a release. See https://github.com/precice/precice/blob/v2.3.0/.github/workflows/release-docker.yml. But for being able to run tests on the latest version of preCICE, we actually need `precice/precice:develop`. This will also help us downstream to detect early.
**Describe the solution you propose.**
Change the action https://github.com/precice/precice/blob/v2.3.0/.github/workflows/release-docker.yml to create and push `precice/precice:develop` nightly.
**Describe alternatives you've considered**
We will use `precice/precice:latest` in the python bindings until this issue is solved. However, this is not optimal, because this does not allow us to detect compatibility issues with the develop version of preCICE early. We will only see compatibility issues as soon as preCICE is released and will, therefore, only be able to fix them after a release of preCICE and before a release of the bindings.
**Additional context**
As far as I understand we want to support the path marked as ""nigthly"" from the preCICE paper (https://arxiv.org/pdf/2109.14470.pdf). Solving this issue helps to support this path.

",1,nightly build of dockerimage precice precice develop please describe the problem you are trying to solve the python bindings use the docker image precice precice provided via in their ci pipeline to create and push a docker image with the python bindings precice python bindings currently precice precice is only updated when there is a release see but for being able to run tests on the latest version of precice we actually need precice precice develop this will also help us downstream to detect early describe the solution you propose change the action to create and push precice precice develop nightly describe alternatives you ve considered we will use precice precice latest in the python bindings until this issue is solved however this is not optimal because this does not allow us to detect compatibility issues with the develop version of precice early we will only see compatibility issues as soon as precice is released and will therefore only be able to fix them after a release of precice and before a release of the bindings additional context as far as i understand we want to support the path marked as nigthly from the precice paper solving this issue helps to support this path ,1
4946,25455551843.0,IssuesEvent,2022-11-24 13:55:24,pace/bricks,https://api.github.com/repos/pace/bricks,closed,Upgrade go-pg dependency,T::Maintainance,"### Problem
We are currently using `github.com/go-pg/pg v6.14.5` which might be outdated.
As far as I can tell the only impact this has on us is a performance one. When using the `Exists()` method on a query (e.g. `db.Model(&m).Where(...).Exists()`) go-pg [performs a regular select and checks whether the number of rows returned](https://github.com/go-pg/pg/blob/v6.14.5/orm/query.go#L1054). This is far from efficient.
### Suggested solution
Upgrade to a newer version, like v8.0.4 where [this seems to be fixed](https://github.com/go-pg/pg/blob/v8.0.4/orm/query.go#L1130).
[Changelog of v.8.0.4](https://github.com/go-pg/pg/blob/v8.0.4/CHANGELOG.md). Upgrading needs adjustments in our code:
> DB.OnQueryProcessed is replaced with DB.AddQueryHook
The format of the hook changes also.
If the impact of this upgrade is too huge, we can live with or work around the problem mentioned. But we probably have to upgrade eventually.",True,"Upgrade go-pg dependency - ### Problem
We are currently using `github.com/go-pg/pg v6.14.5` which might be outdated.
As far as I can tell the only impact this has on us is a performance one. When using the `Exists()` method on a query (e.g. `db.Model(&m).Where(...).Exists()`) go-pg [performs a regular select and checks whether the number of rows returned](https://github.com/go-pg/pg/blob/v6.14.5/orm/query.go#L1054). This is far from efficient.
### Suggested solution
Upgrade to a newer version, like v8.0.4 where [this seems to be fixed](https://github.com/go-pg/pg/blob/v8.0.4/orm/query.go#L1130).
[Changelog of v.8.0.4](https://github.com/go-pg/pg/blob/v8.0.4/CHANGELOG.md). Upgrading needs adjustments in our code:
> DB.OnQueryProcessed is replaced with DB.AddQueryHook
The format of the hook changes also.
If the impact of this upgrade is too huge, we can live with or work around the problem mentioned. But we probably have to upgrade eventually.",1,upgrade go pg dependency problem we are currently using github com go pg pg which might be outdated as far as i can tell the only impact this has on us is a performance one when using the exists method on a query e g db model m where exists go pg this is far from efficient suggested solution upgrade to a newer version like where upgrading needs adjustments in our code db onqueryprocessed is replaced with db addqueryhook the format of the hook changes also if the impact of this upgrade is too huge we can live with or work around the problem mentioned but we probably have to upgrade eventually ,1
3579,14353967104.0,IssuesEvent,2020-11-30 07:54:02,cloverhearts/quilljs-markdown,https://api.github.com/repos/cloverhearts/quilljs-markdown,closed,TypeError: Cannot read property 'on' of undefined,RESEARCH Saw with Maintainer WILL MAKE IT,"when i try use this
import Quill from 'quill'
import QuillMarkdown from 'quilljs-markdown'
const editor = new Quill('#editor', {})
new QuillMarkdown(editor)
I get this error

",True,"TypeError: Cannot read property 'on' of undefined - when i try use this
import Quill from 'quill'
import QuillMarkdown from 'quilljs-markdown'
const editor = new Quill('#editor', {})
new QuillMarkdown(editor)
I get this error

",1,typeerror cannot read property on of undefined when i try use this import quill from quill import quillmarkdown from quilljs markdown const editor new quill editor new quillmarkdown editor i get this error ,1
5584,27985289168.0,IssuesEvent,2023-03-26 16:20:28,rollerderby/scoreboard,https://api.github.com/repos/rollerderby/scoreboard,closed,Allow user to set order of jams in score view,Feature Request maintainer needed,"In the current dev version, the operator screen shows the jams with the most recent at the top of the list. There is certainly a logic for this order, but I suspect there will be strong demand for the option to set it in the opposite order. I would suggest adding this as a configurable option if it isn't already and I just haven't found it.",True,"Allow user to set order of jams in score view - In the current dev version, the operator screen shows the jams with the most recent at the top of the list. There is certainly a logic for this order, but I suspect there will be strong demand for the option to set it in the opposite order. I would suggest adding this as a configurable option if it isn't already and I just haven't found it.",1,allow user to set order of jams in score view in the current dev version the operator screen shows the jams with the most recent at the top of the list there is certainly a logic for this order but i suspect there will be strong demand for the option to set it in the opposite order i would suggest adding this as a configurable option if it isn t already and i just haven t found it ,1
3773,15864469764.0,IssuesEvent,2021-04-08 13:49:56,heroku/heroku-buildpack-python,https://api.github.com/repos/heroku/heroku-buildpack-python,closed,Only export BUILD_DIR and CACHE_DIR once on compile,maintainability-issue,"Currently, we export BUILD_DIR and CACHE_DIR as build vars twice.
Once on `bin/compile` line 36:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L35-L36
And then again on line 141:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L140-L141
- [ ] Add test for presence of BUILD_DIR and CACHE_DIR on compile
- [ ] Remove second instance of BUILD_DIR and CACHE_DIR",True,"Only export BUILD_DIR and CACHE_DIR once on compile - Currently, we export BUILD_DIR and CACHE_DIR as build vars twice.
Once on `bin/compile` line 36:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L35-L36
And then again on line 141:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile#L140-L141
- [ ] Add test for presence of BUILD_DIR and CACHE_DIR on compile
- [ ] Remove second instance of BUILD_DIR and CACHE_DIR",1,only export build dir and cache dir once on compile currently we export build dir and cache dir as build vars twice once on bin compile line and then again on line add test for presence of build dir and cache dir on compile remove second instance of build dir and cache dir,1
111551,17028311732.0,IssuesEvent,2021-07-04 02:28:44,ballerina-platform/ballerina-standard-library,https://api.github.com/repos/ballerina-platform/ballerina-standard-library,closed,Security Implementation for Swan Lake,SwanLakeDump Team/PCP Type/Summary Type/Task area/security module/auth module/jwt module/ldap module/oauth2,"## Important Links
- Dashboard: https://ldclakmal.me/ballerina-security
- `Area/Security` Issues: https://ldclakmal.me/ballerina-security/issues/
### Proposed Designs:
- Design of Ballerina Authentication & Authorization Framework - Swan Lake Version
https://docs.google.com/document/d/1dGw5uUP6kqZNTwMfQ_Ik-k0HTMKhX70XpEA3tys9_kk/edit?usp=sharing
- Re-Design of Ballerina SecureSocket API - Swan Lake Version
https://docs.google.com/document/d/1Y2kLTOw9-sRK1vSEzw5NYdWSA4nwVCvPf3wrbwNDA4s/edit?usp=sharing
- [Review] Ballerina Security APIs of StdLib PCMs https://docs.google.com/document/d/16r_gjBi7SIqVffKVLtKGBevHQRxp7Fnoo9ELyIWV1BM/edit?usp=sharing
---
# Swan Lake Alpha
#### ballerina/auth
- [x] Update and refactor ballerina/auth module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/715
#### ballerina/jwt
- [x] Update and refactor ballerina/jwt module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/716
#### ballerina/oauth2
- [x] Update and refactor ballerina/oauth2 module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/717
- [x] Add support to add optional parameters in OAuth2 introspection request https://github.com/ballerina-platform/ballerina-standard-library/issues/23
- [x] Add support to read custom fields of OAuth2 introspection response https://github.com/ballerina-platform/ballerina-standard-library/issues/16
- [x] oauth2:OutboundOAuth2Provider is not renewing access token when downstream web API returns 403 https://github.com/ballerina-platform/ballerina-standard-library/issues/17
#### ballerina/ldap
- [x] Remove ballerina/ldap module by moving implementation to ballerina/auth module https://github.com/ballerina-platform/ballerina-standard-library/issues/718
#### ballerina/http
- [x] Implement imperative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/752
- [x] Implement declarative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/813
- [x] Align stdlib annotations with spec https://github.com/ballerina-platform/ballerina-standard-library/issues/74
- [x] Improve Ballerina authn & authz configurations https://github.com/ballerina-platform/ballerina-standard-library/issues/63
- [x] Add support to provide a custom claim name as authorization claim field https://github.com/ballerina-platform/ballerina-standard-library/issues/553
#### Common
- [x] Revisit security related BBEs with all the supported features https://github.com/ballerina-platform/ballerina-standard-library/issues/60
---
# Swan Lake Beta
#### Common
- [x] Improve error messages and log messages of security modules https://github.com/ballerina-platform/ballerina-standard-library/issues/1242
#### ballerina/http
- [x] Error while trying to authorize the request when `scopes` filed is not configured https://github.com/ballerina-platform/ballerina-standard-library/issues/972
- [x] Append auth provider error message to `http:Unauthorized` and `http:Forbidden` response types https://github.com/ballerina-platform/ballerina-standard-library/issues/974
- [x] Replace ballerina/reflect API usages in ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/1012
- [x] Extend listener auth handler APIs for `http:Headers` class https://github.com/ballerina-platform/ballerina-standard-library/issues/1013
- [x] Update `SecureSocket` API of HTTP https://github.com/ballerina-platform/ballerina-standard-library/issues/917
#### ballerina/auth
- [x] Enable basic auth file user store support https://github.com/ballerina-platform/ballerina-standard-library/issues/862
- [x] Update SecureSocket API of LDAP https://github.com/ballerina-platform/ballerina-standard-library/issues/1215
- [x] Remove encrypted and hashed password support https://github.com/ballerina-platform/ballerina-standard-library/issues/1214
- [x] Improve ballerina/auth test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1011
#### ballerina/jwt
- [x] Split JWT validation API for 2 APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1213
- [x] Replace base64 URL encode/decode APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1212
- [x] Extend private key/public cert support for JWT signature generation/validation https://github.com/ballerina-platform/ballerina-standard-library/issues/822
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Add `jti` claim as a user input for JWT generation https://github.com/ballerina-platform/ballerina-standard-library/issues/1210
- [x] Improve ballerina/jwt test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1010
#### ballerina/oauth2
- [x] JDK11 HTTP client used for OAuth2 introspection should support OAuth2 client authentication https://github.com/ballerina-platform/ballerina-standard-library/issues/935
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Improve the logic of extracting refresh_token from the authorization endpoint response https://github.com/ballerina-platform/ballerina-standard-library/issues/1206
- [x] Improve ballerina/oauth2 test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1009
#### ballerina/ldap
- [x] Move ballerina/ldap module to [ballerina-attic](https://github.com/ballerina-attic)
#### ballerina/crypto
- [x] Add support for reading public/private keys from PEM files https://github.com/ballerina-platform/ballerina-standard-library/issues/67
- [x] Improve private key decoding for PKCS8 format https://github.com/ballerina-platform/ballerina-standard-library/issues/1208
- [x] Update and refactor ballerina/crypto module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/908
- [x] Improve ballerina/crypto test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1297
#### ballerina/encoding
- [x] Update and refactor ballerina/encoding module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/907
#### ballerina/websocket
- [x] Add auth support for WebSocket clients https://github.com/ballerina-platform/ballerina-standard-library/issues/820
#### ballerina/graphql
- [x] Implement declarative auth design for GraphQL module https://github.com/ballerina-platform/ballerina-standard-library/issues/1336
---
# Swan Lake GA
#### Common
- [x] Revisit security related APIs across all StdLibs https://github.com/ballerina-platform/ballerina-standard-library/issues/1066
#### ballerina/websocket
- [x] Implement declarative auth design for server side https://github.com/ballerina-platform/ballerina-standard-library/issues/1405
- [x] Need to improve return of WebSocket server side auth errors https://github.com/ballerina-platform/ballerina-standard-library/issues/1230
#### ballerina/ftp
- [x] Implement Security for FTP https://github.com/ballerina-platform/ballerina-standard-library/issues/1438",True,"Security Implementation for Swan Lake - ## Important Links
- Dashboard: https://ldclakmal.me/ballerina-security
- `Area/Security` Issues: https://ldclakmal.me/ballerina-security/issues/
### Proposed Designs:
- Design of Ballerina Authentication & Authorization Framework - Swan Lake Version
https://docs.google.com/document/d/1dGw5uUP6kqZNTwMfQ_Ik-k0HTMKhX70XpEA3tys9_kk/edit?usp=sharing
- Re-Design of Ballerina SecureSocket API - Swan Lake Version
https://docs.google.com/document/d/1Y2kLTOw9-sRK1vSEzw5NYdWSA4nwVCvPf3wrbwNDA4s/edit?usp=sharing
- [Review] Ballerina Security APIs of StdLib PCMs https://docs.google.com/document/d/16r_gjBi7SIqVffKVLtKGBevHQRxp7Fnoo9ELyIWV1BM/edit?usp=sharing
---
# Swan Lake Alpha
#### ballerina/auth
- [x] Update and refactor ballerina/auth module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/715
#### ballerina/jwt
- [x] Update and refactor ballerina/jwt module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/716
#### ballerina/oauth2
- [x] Update and refactor ballerina/oauth2 module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/717
- [x] Add support to add optional parameters in OAuth2 introspection request https://github.com/ballerina-platform/ballerina-standard-library/issues/23
- [x] Add support to read custom fields of OAuth2 introspection response https://github.com/ballerina-platform/ballerina-standard-library/issues/16
- [x] oauth2:OutboundOAuth2Provider is not renewing access token when downstream web API returns 403 https://github.com/ballerina-platform/ballerina-standard-library/issues/17
#### ballerina/ldap
- [x] Remove ballerina/ldap module by moving implementation to ballerina/auth module https://github.com/ballerina-platform/ballerina-standard-library/issues/718
#### ballerina/http
- [x] Implement imperative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/752
- [x] Implement declarative auth design for ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/813
- [x] Align stdlib annotations with spec https://github.com/ballerina-platform/ballerina-standard-library/issues/74
- [x] Improve Ballerina authn & authz configurations https://github.com/ballerina-platform/ballerina-standard-library/issues/63
- [x] Add support to provide a custom claim name as authorization claim field https://github.com/ballerina-platform/ballerina-standard-library/issues/553
#### Common
- [x] Revisit security related BBEs with all the supported features https://github.com/ballerina-platform/ballerina-standard-library/issues/60
---
# Swan Lake Beta
#### Common
- [x] Improve error messages and log messages of security modules https://github.com/ballerina-platform/ballerina-standard-library/issues/1242
#### ballerina/http
- [x] Error while trying to authorize the request when `scopes` filed is not configured https://github.com/ballerina-platform/ballerina-standard-library/issues/972
- [x] Append auth provider error message to `http:Unauthorized` and `http:Forbidden` response types https://github.com/ballerina-platform/ballerina-standard-library/issues/974
- [x] Replace ballerina/reflect API usages in ballerina/http module https://github.com/ballerina-platform/ballerina-standard-library/issues/1012
- [x] Extend listener auth handler APIs for `http:Headers` class https://github.com/ballerina-platform/ballerina-standard-library/issues/1013
- [x] Update `SecureSocket` API of HTTP https://github.com/ballerina-platform/ballerina-standard-library/issues/917
#### ballerina/auth
- [x] Enable basic auth file user store support https://github.com/ballerina-platform/ballerina-standard-library/issues/862
- [x] Update SecureSocket API of LDAP https://github.com/ballerina-platform/ballerina-standard-library/issues/1215
- [x] Remove encrypted and hashed password support https://github.com/ballerina-platform/ballerina-standard-library/issues/1214
- [x] Improve ballerina/auth test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1011
#### ballerina/jwt
- [x] Split JWT validation API for 2 APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1213
- [x] Replace base64 URL encode/decode APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/1212
- [x] Extend private key/public cert support for JWT signature generation/validation https://github.com/ballerina-platform/ballerina-standard-library/issues/822
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Add `jti` claim as a user input for JWT generation https://github.com/ballerina-platform/ballerina-standard-library/issues/1210
- [x] Improve ballerina/jwt test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1010
#### ballerina/oauth2
- [x] JDK11 HTTP client used for OAuth2 introspection should support OAuth2 client authentication https://github.com/ballerina-platform/ballerina-standard-library/issues/935
- [x] Improve SSL configurations in JDK11 HTTP client used for auth modules https://github.com/ballerina-platform/ballerina-standard-library/issues/936
- [x] Improve the logic of extracting refresh_token from the authorization endpoint response https://github.com/ballerina-platform/ballerina-standard-library/issues/1206
- [x] Improve ballerina/oauth2 test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1009
#### ballerina/ldap
- [x] Move ballerina/ldap module to [ballerina-attic](https://github.com/ballerina-attic)
#### ballerina/crypto
- [x] Add support for reading public/private keys from PEM files https://github.com/ballerina-platform/ballerina-standard-library/issues/67
- [x] Improve private key decoding for PKCS8 format https://github.com/ballerina-platform/ballerina-standard-library/issues/1208
- [x] Update and refactor ballerina/crypto module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/908
- [x] Improve ballerina/crypto test coverage https://github.com/ballerina-platform/ballerina-standard-library/issues/1297
#### ballerina/encoding
- [x] Update and refactor ballerina/encoding module APIs https://github.com/ballerina-platform/ballerina-standard-library/issues/907
#### ballerina/websocket
- [x] Add auth support for WebSocket clients https://github.com/ballerina-platform/ballerina-standard-library/issues/820
#### ballerina/graphql
- [x] Implement declarative auth design for GraphQL module https://github.com/ballerina-platform/ballerina-standard-library/issues/1336
---
# Swan Lake GA
#### Common
- [x] Revisit security related APIs across all StdLibs https://github.com/ballerina-platform/ballerina-standard-library/issues/1066
#### ballerina/websocket
- [x] Implement declarative auth design for server side https://github.com/ballerina-platform/ballerina-standard-library/issues/1405
- [x] Need to improve return of WebSocket server side auth errors https://github.com/ballerina-platform/ballerina-standard-library/issues/1230
#### ballerina/ftp
- [x] Implement Security for FTP https://github.com/ballerina-platform/ballerina-standard-library/issues/1438",0,security implementation for swan lake important links dashboard area security issues proposed designs design of ballerina authentication authorization framework swan lake version re design of ballerina securesocket api swan lake version ballerina security apis of stdlib pcms swan lake alpha ballerina auth update and refactor ballerina auth module apis ballerina jwt update and refactor ballerina jwt module apis ballerina update and refactor ballerina module apis add support to add optional parameters in introspection request add support to read custom fields of introspection response is not renewing access token when downstream web api returns ballerina ldap remove ballerina ldap module by moving implementation to ballerina auth module ballerina http implement imperative auth design for ballerina http module implement declarative auth design for ballerina http module align stdlib annotations with spec improve ballerina authn authz configurations add support to provide a custom claim name as authorization claim field common revisit security related bbes with all the supported features swan lake beta common improve error messages and log messages of security modules ballerina http error while trying to authorize the request when scopes filed is not configured append auth provider error message to http unauthorized and http forbidden response types replace ballerina reflect api usages in ballerina http module extend listener auth handler apis for http headers class update securesocket api of http ballerina auth enable basic auth file user store support update securesocket api of ldap remove encrypted and hashed password support improve ballerina auth test coverage ballerina jwt split jwt validation api for apis replace url encode decode apis extend private key public cert support for jwt signature generation validation improve ssl configurations in http client used for auth modules add jti claim as a user input for jwt generation improve ballerina jwt test coverage ballerina http client used for introspection should support client authentication improve ssl configurations in http client used for auth modules improve the logic of extracting refresh token from the authorization endpoint response improve ballerina test coverage ballerina ldap move ballerina ldap module to ballerina crypto add support for reading public private keys from pem files improve private key decoding for format update and refactor ballerina crypto module apis improve ballerina crypto test coverage ballerina encoding update and refactor ballerina encoding module apis ballerina websocket add auth support for websocket clients ballerina graphql implement declarative auth design for graphql module swan lake ga common revisit security related apis across all stdlibs ballerina websocket implement declarative auth design for server side need to improve return of websocket server side auth errors ballerina ftp implement security for ftp ,0
9089,4413688691.0,IssuesEvent,2016-08-13 01:02:30,facebook/osquery,https://api.github.com/repos/facebook/osquery,closed,Flaky tests: DaemonTests::test_5_daemon_sigint variance in return code,build/test test error,"See:
```
5/9 Test #5: python_test_osqueryd .............***Failed 33.75 sec
.I0721 12:22:59.882045 26907 options.cpp:61] Verbose logging enabled by config option
I0721 12:22:59.882532 26907 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
...FI0721 12:23:29.781241 26946 options.cpp:61] Verbose logging enabled by config option
I0721 12:23:29.781740 26946 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
.
======================================================================
FAIL: test_5_daemon_sigint (__main__.DaemonTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File ""/home/osquery/jenkins/workspace/osqueryPullRequestBuild/TargetSystem/centos7/tools/tests/test_base.py"", line 455, in wrapper
raise exceptions[0][0]
AssertionError: -2 != 130
----------------------------------------------------------------------
Ran 6 tests in 33.695s
FAILED (failures=1)
Test (attempt 1) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 2) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 3) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
```
For an example see: https://jenkins.osquery.io/job/osqueryPullRequestBuild/2912/TargetSystem=centos7/console",1.0,"Flaky tests: DaemonTests::test_5_daemon_sigint variance in return code - See:
```
5/9 Test #5: python_test_osqueryd .............***Failed 33.75 sec
.I0721 12:22:59.882045 26907 options.cpp:61] Verbose logging enabled by config option
I0721 12:22:59.882532 26907 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
...FI0721 12:23:29.781241 26946 options.cpp:61] Verbose logging enabled by config option
I0721 12:23:29.781740 26946 daemon.cpp:38] Not starting the distributed query service: Distributed query service not enabled.
.
======================================================================
FAIL: test_5_daemon_sigint (__main__.DaemonTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File ""/home/osquery/jenkins/workspace/osqueryPullRequestBuild/TargetSystem/centos7/tools/tests/test_base.py"", line 455, in wrapper
raise exceptions[0][0]
AssertionError: -2 != 130
----------------------------------------------------------------------
Ran 6 tests in 33.695s
FAILED (failures=1)
Test (attempt 1) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 2) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
Test (attempt 3) DaemonTests::test_5_daemon_sigint failed: -2 != 130 (test_base.py:437)
```
For an example see: https://jenkins.osquery.io/job/osqueryPullRequestBuild/2912/TargetSystem=centos7/console",0,flaky tests daemontests test daemon sigint variance in return code see test python test osqueryd failed sec options cpp verbose logging enabled by config option daemon cpp not starting the distributed query service distributed query service not enabled options cpp verbose logging enabled by config option daemon cpp not starting the distributed query service distributed query service not enabled fail test daemon sigint main daemontests traceback most recent call last file home osquery jenkins workspace osquerypullrequestbuild targetsystem tools tests test base py line in wrapper raise exceptions assertionerror ran tests in failed failures test attempt daemontests test daemon sigint failed test base py test attempt daemontests test daemon sigint failed test base py test attempt daemontests test daemon sigint failed test base py for an example see ,0
118628,9996697158.0,IssuesEvent,2019-07-12 00:49:06,kostmo/circleci-failure-tracker,https://api.github.com/repos/kostmo/circleci-failure-tracker,opened,Easy import of prod data into local database,testing,"There should be a script that can dump and import the complete remote database into a local database, for testing purposes.
Due to network bandwidth constraints, it may be that the local database will have to skip downloading the full console log.",1.0,"Easy import of prod data into local database - There should be a script that can dump and import the complete remote database into a local database, for testing purposes.
Due to network bandwidth constraints, it may be that the local database will have to skip downloading the full console log.",0,easy import of prod data into local database there should be a script that can dump and import the complete remote database into a local database for testing purposes due to network bandwidth constraints it may be that the local database will have to skip downloading the full console log ,0
3284,12537507240.0,IssuesEvent,2020-06-05 03:39:25,short-d/short,https://api.github.com/repos/short-d/short,opened,[Refactor] Remove business logic from gqlapi resolver,maintainability,"**What is frustrating you?**
The gqlapi resolver uses a singular url input with all fields as optional. Since the create and update functions differ in optional field requirements, the resolver has to format the inputs so that it fits their expected inputs. Thus business logic exists within the resolver which should exist within each use case.
**Your solution**
Update Creator and Updater to do this business logic instead of the gqlapi resolver.
",True,"[Refactor] Remove business logic from gqlapi resolver - **What is frustrating you?**
The gqlapi resolver uses a singular url input with all fields as optional. Since the create and update functions differ in optional field requirements, the resolver has to format the inputs so that it fits their expected inputs. Thus business logic exists within the resolver which should exist within each use case.
**Your solution**
Update Creator and Updater to do this business logic instead of the gqlapi resolver.
",1, remove business logic from gqlapi resolver what is frustrating you the gqlapi resolver uses a singular url input with all fields as optional since the create and update functions differ in optional field requirements the resolver has to format the inputs so that it fits their expected inputs thus business logic exists within the resolver which should exist within each use case your solution update creator and updater to do this business logic instead of the gqlapi resolver ,1
341997,24724858975.0,IssuesEvent,2022-10-20 13:25:02,SandraScherer/EntertainmentInfothek,https://api.github.com/repos/SandraScherer/EntertainmentInfothek,closed,Add genre information to series,documentation enhancement database program,"- [x] Add table Series_Genre to database
- [x] Add/adapt Genre classes in EntertainmentDB.dll
- [x] Add tests to EntertainmentDB.Tests
- [x] Add/adapt ContentCreator classes in WikiPageCreator
- [x] Add tests to WikiPageCreator.Tests
- [x] Update documentation
- [x] EntertainmentInfothek_Database.vpp
- [x] EntertainmentInfothek_EntertainmentDB.dll.vpp
- [x] EntertainmentInfothek_WikiPageCreator.vpp
- [x] Doxygen",1.0,"Add genre information to series - - [x] Add table Series_Genre to database
- [x] Add/adapt Genre classes in EntertainmentDB.dll
- [x] Add tests to EntertainmentDB.Tests
- [x] Add/adapt ContentCreator classes in WikiPageCreator
- [x] Add tests to WikiPageCreator.Tests
- [x] Update documentation
- [x] EntertainmentInfothek_Database.vpp
- [x] EntertainmentInfothek_EntertainmentDB.dll.vpp
- [x] EntertainmentInfothek_WikiPageCreator.vpp
- [x] Doxygen",0,add genre information to series add table series genre to database add adapt genre classes in entertainmentdb dll add tests to entertainmentdb tests add adapt contentcreator classes in wikipagecreator add tests to wikipagecreator tests update documentation entertainmentinfothek database vpp entertainmentinfothek entertainmentdb dll vpp entertainmentinfothek wikipagecreator vpp doxygen,0
3302,12719007791.0,IssuesEvent,2020-06-24 08:33:43,cthit/react-digit-components,https://api.github.com/repos/cthit/react-digit-components,opened,Rewrite DigitLayouts to avoid code duplication ,Component: DigitLayout Type: Maintainence,"Right now there's no real logic behind the DigitLayout file, there might be hidden bugs because of the poor state that it's in.
Note that the actual result in using DigitLayout shouldn't be breaking changes.",True,"Rewrite DigitLayouts to avoid code duplication - Right now there's no real logic behind the DigitLayout file, there might be hidden bugs because of the poor state that it's in.
Note that the actual result in using DigitLayout shouldn't be breaking changes.",1,rewrite digitlayouts to avoid code duplication right now there s no real logic behind the digitlayout file there might be hidden bugs because of the poor state that it s in note that the actual result in using digitlayout shouldn t be breaking changes ,1
2038,6850153623.0,IssuesEvent,2017-11-14 01:34:51,caskroom/homebrew-cask,https://api.github.com/repos/caskroom/homebrew-cask,closed,_audit_modified_casks: fails or audits multiple Casks,awaiting maintainer feedback,"https://travis-ci.org/caskroom/homebrew-cask/builds/301185254
```
>>> brew cask _audit_modified_casks 6eae398988688a5f1fdf5695ea10794c3da6b4b2...76a94d96f9b2571f373874896da7c7e7aa3c6770
Error: Cask 'abgx360' is unavailable: No Cask with this name exists.
```
```
(I thought I had a couple of examples for multiple Casks saved but I can't find them)
```
These occur when the contributors fork is outdated (sometimes by only a few hours).
It's easily fixed by rebasing the branch against master but on several occasions I've had contributors inadvertently revert the rebase and also any changes that have been made.
Looking the the history of the travis scripts there have been several ""commit range"" workarounds added and removed previously so I not sure if this is a problem we have encountered before, thought I'd ask first before I try to come up with another workaround.",True,"_audit_modified_casks: fails or audits multiple Casks - https://travis-ci.org/caskroom/homebrew-cask/builds/301185254
```
>>> brew cask _audit_modified_casks 6eae398988688a5f1fdf5695ea10794c3da6b4b2...76a94d96f9b2571f373874896da7c7e7aa3c6770
Error: Cask 'abgx360' is unavailable: No Cask with this name exists.
```
```
(I thought I had a couple of examples for multiple Casks saved but I can't find them)
```
These occur when the contributors fork is outdated (sometimes by only a few hours).
It's easily fixed by rebasing the branch against master but on several occasions I've had contributors inadvertently revert the rebase and also any changes that have been made.
Looking the the history of the travis scripts there have been several ""commit range"" workarounds added and removed previously so I not sure if this is a problem we have encountered before, thought I'd ask first before I try to come up with another workaround.",1, audit modified casks fails or audits multiple casks brew cask audit modified casks error cask is unavailable no cask with this name exists i thought i had a couple of examples for multiple casks saved but i can t find them these occur when the contributors fork is outdated sometimes by only a few hours it s easily fixed by rebasing the branch against master but on several occasions i ve had contributors inadvertently revert the rebase and also any changes that have been made looking the the history of the travis scripts there have been several commit range workarounds added and removed previously so i not sure if this is a problem we have encountered before thought i d ask first before i try to come up with another workaround ,1
34223,12258179941.0,IssuesEvent,2020-05-06 14:45:19,kids-first/kf-portal-ui,https://api.github.com/repos/kids-first/kf-portal-ui,closed,Arrange Component Upgrade,Security arranger,"We have currently 2 high risk vulnerabilities in the portal injected by arranger component.
First step will be to upgrade our version to see if it fix the issues
Then, fix arranger component if the warning is still present

",True,"Arrange Component Upgrade - We have currently 2 high risk vulnerabilities in the portal injected by arranger component.
First step will be to upgrade our version to see if it fix the issues
Then, fix arranger component if the warning is still present

",0,arrange component upgrade we have currently high risk vulnerabilities in the portal injected by arranger component first step will be to upgrade our version to see if it fix the issues then fix arranger component if the warning is still present ,0
1730,6574837726.0,IssuesEvent,2017-09-11 14:14:40,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ipv4_secondaries displays duplicate information,affects_2.2 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
setup
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/vagrant/ansible/ansible.cfg
configured module search path = ['/home/vagrant/ansible/library']
```
##### CONFIGURATION
# Enabled smart gathering
gathering: smart
##### OS / ENVIRONMENT
Ubuntu 16.04.1
##### SUMMARY
ipv4_secondaries displays duplicate address information
##### STEPS TO REPRODUCE
Run
ansible -m setup hostname.foo -a ""filter=ansible_eth1""
Receive a filtered response with eth1. Here is example of secondaries
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
}
],
Information is repeated
```
ansible -m setup hostname.foo -a ""filter=ansible_eth1""
```
##### EXPECTED RESULTS
...
```
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
],
...
```
##### ACTUAL RESULTS
Received
```
...
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
}
],
...
```
Posting the full verbose output
```
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `"" && echo ansible-tmp-1477526333.03-252143209167234=""` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `"" ) && sleep 0'""'""''
<10.10.10.83> PUT /tmp/tmpZmW3aJ TO /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py
<10.10.10.83> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[10.10.10.83]'
<10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '""'""'chmod u+x /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/ /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py && sleep 0'""'""''
<10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.10.10.83 '/bin/sh -c '""'""'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/"" > /dev/null 2>&1 && sleep 0'""'""''
voippbx.xcastlabs.com | SUCCESS => {
""ansible_facts"": {
""ansible_eth1"": {
""active"": true,
""device"": ""eth1"",
""features"": {
""busy_poll"": ""on [fixed]"",
""fcoe_mtu"": ""off [fixed]"",
""generic_receive_offload"": ""on"",
""generic_segmentation_offload"": ""on"",
""highdma"": ""on [fixed]"",
""l2_fwd_offload"": ""off [fixed]"",
""large_receive_offload"": ""off [fixed]"",
""loopback"": ""off [fixed]"",
""netns_local"": ""off [fixed]"",
""ntuple_filters"": ""off [fixed]"",
""receive_hashing"": ""off [fixed]"",
""rx_all"": ""off [fixed]"",
""rx_checksumming"": ""on [fixed]"",
""rx_fcs"": ""off [fixed]"",
""rx_vlan_filter"": ""on [fixed]"",
""rx_vlan_offload"": ""off [fixed]"",
""rx_vlan_stag_filter"": ""off [fixed]"",
""rx_vlan_stag_hw_parse"": ""off [fixed]"",
""scatter_gather"": ""on"",
""tcp_segmentation_offload"": ""on"",
""tx_checksum_fcoe_crc"": ""off [fixed]"",
""tx_checksum_ip_generic"": ""on"",
""tx_checksum_ipv4"": ""off [fixed]"",
""tx_checksum_ipv6"": ""off [fixed]"",
""tx_checksum_sctp"": ""off [fixed]"",
""tx_checksumming"": ""on"",
""tx_fcoe_segmentation"": ""off [fixed]"",
""tx_gre_segmentation"": ""off [fixed]"",
""tx_gso_robust"": ""on [fixed]"",
""tx_ipip_segmentation"": ""off [fixed]"",
""tx_lockless"": ""off [fixed]"",
""tx_nocache_copy"": ""off"",
""tx_scatter_gather"": ""on"",
""tx_scatter_gather_fraglist"": ""off [fixed]"",
""tx_sit_segmentation"": ""off [fixed]"",
""tx_tcp6_segmentation"": ""on"",
""tx_tcp_ecn_segmentation"": ""on"",
""tx_tcp_segmentation"": ""on"",
""tx_udp_tnl_segmentation"": ""off [fixed]"",
""tx_vlan_offload"": ""off [fixed]"",
""tx_vlan_stag_hw_insert"": ""off [fixed]"",
""udp_fragmentation_offload"": ""on"",
""vlan_challenged"": ""off [fixed]""
},
""ipv4"": {
""address"": ""75.145.154.230"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
}
],
""ipv6"": [
{
""address"": ""fe80::5c1c:e5ff:fe35:7c81"",
""prefix"": ""64"",
""scope"": ""link""
}
],
""macaddress"": ""5e:1c:e5:35:7c:81"",
""module"": ""virtio_net"",
""mtu"": 1500,
""pciid"": ""virtio4"",
""promisc"": false,
""type"": ""ether""
}
},
""changed"": false,
""invocation"": {
""module_args"": {
""fact_path"": ""/etc/ansible/facts.d"",
""filter"": ""ansible_eth1"",
""gather_subset"": [
""all""
],
""gather_timeout"": 10
},
""module_name"": ""setup""
}
}
```
",True,"ipv4_secondaries displays duplicate information -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
setup
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/vagrant/ansible/ansible.cfg
configured module search path = ['/home/vagrant/ansible/library']
```
##### CONFIGURATION
# Enabled smart gathering
gathering: smart
##### OS / ENVIRONMENT
Ubuntu 16.04.1
##### SUMMARY
ipv4_secondaries displays duplicate address information
##### STEPS TO REPRODUCE
Run
ansible -m setup hostname.foo -a ""filter=ansible_eth1""
Receive a filtered response with eth1. Here is example of secondaries
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
}
],
Information is repeated
```
ansible -m setup hostname.foo -a ""filter=ansible_eth1""
```
##### EXPECTED RESULTS
...
```
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
],
...
```
##### ACTUAL RESULTS
Received
```
...
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
}
],
...
```
Posting the full verbose output
```
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py
<10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `"" && echo ansible-tmp-1477526333.03-252143209167234=""` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `"" ) && sleep 0'""'""''
<10.10.10.83> PUT /tmp/tmpZmW3aJ TO /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py
<10.10.10.83> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[10.10.10.83]'
<10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '""'""'chmod u+x /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/ /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py && sleep 0'""'""''
<10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root
<10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.10.10.83 '/bin/sh -c '""'""'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/"" > /dev/null 2>&1 && sleep 0'""'""''
voippbx.xcastlabs.com | SUCCESS => {
""ansible_facts"": {
""ansible_eth1"": {
""active"": true,
""device"": ""eth1"",
""features"": {
""busy_poll"": ""on [fixed]"",
""fcoe_mtu"": ""off [fixed]"",
""generic_receive_offload"": ""on"",
""generic_segmentation_offload"": ""on"",
""highdma"": ""on [fixed]"",
""l2_fwd_offload"": ""off [fixed]"",
""large_receive_offload"": ""off [fixed]"",
""loopback"": ""off [fixed]"",
""netns_local"": ""off [fixed]"",
""ntuple_filters"": ""off [fixed]"",
""receive_hashing"": ""off [fixed]"",
""rx_all"": ""off [fixed]"",
""rx_checksumming"": ""on [fixed]"",
""rx_fcs"": ""off [fixed]"",
""rx_vlan_filter"": ""on [fixed]"",
""rx_vlan_offload"": ""off [fixed]"",
""rx_vlan_stag_filter"": ""off [fixed]"",
""rx_vlan_stag_hw_parse"": ""off [fixed]"",
""scatter_gather"": ""on"",
""tcp_segmentation_offload"": ""on"",
""tx_checksum_fcoe_crc"": ""off [fixed]"",
""tx_checksum_ip_generic"": ""on"",
""tx_checksum_ipv4"": ""off [fixed]"",
""tx_checksum_ipv6"": ""off [fixed]"",
""tx_checksum_sctp"": ""off [fixed]"",
""tx_checksumming"": ""on"",
""tx_fcoe_segmentation"": ""off [fixed]"",
""tx_gre_segmentation"": ""off [fixed]"",
""tx_gso_robust"": ""on [fixed]"",
""tx_ipip_segmentation"": ""off [fixed]"",
""tx_lockless"": ""off [fixed]"",
""tx_nocache_copy"": ""off"",
""tx_scatter_gather"": ""on"",
""tx_scatter_gather_fraglist"": ""off [fixed]"",
""tx_sit_segmentation"": ""off [fixed]"",
""tx_tcp6_segmentation"": ""on"",
""tx_tcp_ecn_segmentation"": ""on"",
""tx_tcp_segmentation"": ""on"",
""tx_udp_tnl_segmentation"": ""off [fixed]"",
""tx_vlan_offload"": ""off [fixed]"",
""tx_vlan_stag_hw_insert"": ""off [fixed]"",
""udp_fragmentation_offload"": ""on"",
""vlan_challenged"": ""off [fixed]""
},
""ipv4"": {
""address"": ""75.145.154.230"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
""ipv4_secondaries"": [
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
},
{
""address"": ""75.145.154.231"",
""broadcast"": ""75.145.154.239"",
""netmask"": ""255.255.255.240"",
""network"": ""75.145.154.224""
}
],
""ipv6"": [
{
""address"": ""fe80::5c1c:e5ff:fe35:7c81"",
""prefix"": ""64"",
""scope"": ""link""
}
],
""macaddress"": ""5e:1c:e5:35:7c:81"",
""module"": ""virtio_net"",
""mtu"": 1500,
""pciid"": ""virtio4"",
""promisc"": false,
""type"": ""ether""
}
},
""changed"": false,
""invocation"": {
""module_args"": {
""fact_path"": ""/etc/ansible/facts.d"",
""filter"": ""ansible_eth1"",
""gather_subset"": [
""all""
],
""gather_timeout"": 10
},
""module_name"": ""setup""
}
}
```
",1, secondaries displays duplicate information issue type bug report component name setup ansible version ansible config file home vagrant ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables enabled smart gathering gathering smart os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary secondaries displays duplicate address information steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run ansible m setup hostname foo a filter ansible receive a filtered response with here is example of secondaries secondaries address broadcast netmask network address broadcast netmask network information is repeated ansible m setup hostname foo a filter ansible expected results secondaries address broadcast netmask network actual results received secondaries address broadcast netmask network address broadcast netmask network posting the full verbose output loading callback plugin minimal of type stdout from usr lib dist packages ansible plugins callback init pyc using module file usr lib dist packages ansible modules core system setup py establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp setup py ssh exec sftp b vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp setup py sleep establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c usr bin python root ansible tmp ansible tmp setup py rm rf root ansible tmp ansible tmp dev null sleep voippbx xcastlabs com success ansible facts ansible active true device features busy poll on fcoe mtu off generic receive offload on generic segmentation offload on highdma on fwd offload off large receive offload off loopback off netns local off ntuple filters off receive hashing off rx all off rx checksumming on rx fcs off rx vlan filter on rx vlan offload off rx vlan stag filter off rx vlan stag hw parse off scatter gather on tcp segmentation offload on tx checksum fcoe crc off tx checksum ip generic on tx checksum off tx checksum off tx checksum sctp off tx checksumming on tx fcoe segmentation off tx gre segmentation off tx gso robust on tx ipip segmentation off tx lockless off tx nocache copy off tx scatter gather on tx scatter gather fraglist off tx sit segmentation off tx segmentation on tx tcp ecn segmentation on tx tcp segmentation on tx udp tnl segmentation off tx vlan offload off tx vlan stag hw insert off udp fragmentation offload on vlan challenged off address broadcast netmask network secondaries address broadcast netmask network address broadcast netmask network address prefix scope link macaddress module virtio net mtu pciid promisc false type ether changed false invocation module args fact path etc ansible facts d filter ansible gather subset all gather timeout module name setup ,1
260635,8212682546.0,IssuesEvent,2018-09-04 17:05:56,phetsims/axon,https://api.github.com/repos/phetsims/axon,closed,detect Property loops,priority:2-high,"A Property loop occurs when a Property's `set` method is entered before a previous call to `set` exits. PhET-iO makes it necessary to deal with these loops because it results in intermediate/redundant data in the message stream. See for example https://github.com/phetsims/hookes-law/issues/52.
PhetioObject previously was responsible for detecting Property loops, but that was removed in https://github.com/phetsims/tandem/issues/57. And I believe it was @samreid who suggested that this responsibility does not belong in PhET-iO; it belongs in axon.
In https://github.com/phetsims/tandem/issues/57#issuecomment-396712827, I suggested a way to add responsibility (conditionally) to Property, reproduced below. And I found this to be invaluable in troubleshooting https://github.com/phetsims/hookes-law/issues/52. I'd like to see this added to Property.
```js
// @private
_notifyListeners: function( oldValue ) {
// We must short circuit based on tandem here as a guard against the toStateObject calls
this.tandem.isSuppliedAndEnabled() && this.startEvent( 'model', 'changed', {
oldValue: this.phetioType.elementType.toStateObject( oldValue ),
newValue: this.phetioType.elementType.toStateObject( this.get() ),
units: this.phetioType && this.phetioType.units
}, this.changeEventOptions );
// notify listeners, optionally detect loops where this Property is set again before this completes.
assert && assert( !this.notifying || !phet.chipper.queryParameters.detectPropertyLoops,
'Property loop detected, value=' + this.get() + ', oldValue=' + oldValue );
this.notifying = true;
this.changedEmitter.emit2( this.get(), oldValue );
this.notifying = false;
this.tandem.isSuppliedAndEnabled() && this.endEvent();
},
````
",1.0,"detect Property loops - A Property loop occurs when a Property's `set` method is entered before a previous call to `set` exits. PhET-iO makes it necessary to deal with these loops because it results in intermediate/redundant data in the message stream. See for example https://github.com/phetsims/hookes-law/issues/52.
PhetioObject previously was responsible for detecting Property loops, but that was removed in https://github.com/phetsims/tandem/issues/57. And I believe it was @samreid who suggested that this responsibility does not belong in PhET-iO; it belongs in axon.
In https://github.com/phetsims/tandem/issues/57#issuecomment-396712827, I suggested a way to add responsibility (conditionally) to Property, reproduced below. And I found this to be invaluable in troubleshooting https://github.com/phetsims/hookes-law/issues/52. I'd like to see this added to Property.
```js
// @private
_notifyListeners: function( oldValue ) {
// We must short circuit based on tandem here as a guard against the toStateObject calls
this.tandem.isSuppliedAndEnabled() && this.startEvent( 'model', 'changed', {
oldValue: this.phetioType.elementType.toStateObject( oldValue ),
newValue: this.phetioType.elementType.toStateObject( this.get() ),
units: this.phetioType && this.phetioType.units
}, this.changeEventOptions );
// notify listeners, optionally detect loops where this Property is set again before this completes.
assert && assert( !this.notifying || !phet.chipper.queryParameters.detectPropertyLoops,
'Property loop detected, value=' + this.get() + ', oldValue=' + oldValue );
this.notifying = true;
this.changedEmitter.emit2( this.get(), oldValue );
this.notifying = false;
this.tandem.isSuppliedAndEnabled() && this.endEvent();
},
````
",0,detect property loops a property loop occurs when a property s set method is entered before a previous call to set exits phet io makes it necessary to deal with these loops because it results in intermediate redundant data in the message stream see for example phetioobject previously was responsible for detecting property loops but that was removed in and i believe it was samreid who suggested that this responsibility does not belong in phet io it belongs in axon in i suggested a way to add responsibility conditionally to property reproduced below and i found this to be invaluable in troubleshooting i d like to see this added to property js private notifylisteners function oldvalue we must short circuit based on tandem here as a guard against the tostateobject calls this tandem issuppliedandenabled this startevent model changed oldvalue this phetiotype elementtype tostateobject oldvalue newvalue this phetiotype elementtype tostateobject this get units this phetiotype this phetiotype units this changeeventoptions notify listeners optionally detect loops where this property is set again before this completes assert assert this notifying phet chipper queryparameters detectpropertyloops property loop detected value this get oldvalue oldvalue this notifying true this changedemitter this get oldvalue this notifying false this tandem issuppliedandenabled this endevent ,0
3234,12368706405.0,IssuesEvent,2020-05-18 14:13:30,Kashdeya/Tiny-Progressions,https://api.github.com/repos/Kashdeya/Tiny-Progressions,closed,Watering Can multiplayer behaviour,Version not Maintainted,"When multiple players have watering cans in their inventory cans cannot be activated, and one player sneaking while another right clicks will activate the sneaking players watering can.
Have seen this in multiple mod packs including Project Ozone 3 and Sky Factory 4.
Running on a LAN World with two players. Both have a watering can.
Have done some testing and it seems that when the last player to pick up a watering can tries to activate the can, it activates for a split second, and then deactivates. It appears to happen in less than a frame in some cases as the item flickers with the enchanted 'glow'.
If one player activates their watering can while the other is holding it in their hand, both watering cans are activated.
If the other player shift right clicks both cans are deactivated.
It seems as though there is some sort of global state for the cans that isn't working with multiple players.",True,"Watering Can multiplayer behaviour - When multiple players have watering cans in their inventory cans cannot be activated, and one player sneaking while another right clicks will activate the sneaking players watering can.
Have seen this in multiple mod packs including Project Ozone 3 and Sky Factory 4.
Running on a LAN World with two players. Both have a watering can.
Have done some testing and it seems that when the last player to pick up a watering can tries to activate the can, it activates for a split second, and then deactivates. It appears to happen in less than a frame in some cases as the item flickers with the enchanted 'glow'.
If one player activates their watering can while the other is holding it in their hand, both watering cans are activated.
If the other player shift right clicks both cans are deactivated.
It seems as though there is some sort of global state for the cans that isn't working with multiple players.",1,watering can multiplayer behaviour when multiple players have watering cans in their inventory cans cannot be activated and one player sneaking while another right clicks will activate the sneaking players watering can have seen this in multiple mod packs including project ozone and sky factory running on a lan world with two players both have a watering can have done some testing and it seems that when the last player to pick up a watering can tries to activate the can it activates for a split second and then deactivates it appears to happen in less than a frame in some cases as the item flickers with the enchanted glow if one player activates their watering can while the other is holding it in their hand both watering cans are activated if the other player shift right clicks both cans are deactivated it seems as though there is some sort of global state for the cans that isn t working with multiple players ,1
52550,7769217417.0,IssuesEvent,2018-06-04 02:18:21,chrissimpkins/Crunch,https://api.github.com/repos/chrissimpkins/Crunch,closed,Update bug report template to include new macOS GUI + macOS right-click menu service log files in report,documentation,"Log files will be available for image optimization failures as of the v3.0.0 release. These logs should be included in all bug reports.
TODO:
- [x] Add to bug report template markdown file",1.0,"Update bug report template to include new macOS GUI + macOS right-click menu service log files in report - Log files will be available for image optimization failures as of the v3.0.0 release. These logs should be included in all bug reports.
TODO:
- [x] Add to bug report template markdown file",0,update bug report template to include new macos gui macos right click menu service log files in report log files will be available for image optimization failures as of the release these logs should be included in all bug reports todo add to bug report template markdown file,0
3981,18344603645.0,IssuesEvent,2021-10-08 03:25:12,pmqueiroz/mask-wizard,https://api.github.com/repos/pmqueiroz/mask-wizard,opened,Add linting and formatter,enhancement Maintainers Only,"### Preliminary checks
- [X] I've checked that there aren't [**other open issues**](https://github.com/pmqueiroz/mask-wizard/issues?q=is%3Aissue) on the same topic.
- [X] I want to work on this.
### Describe the problem requiring a solution
Create a code style guide and add set up linting and formatter to the project
### Describe the possible solution
Eslint and prettier
Add code style guid to Github Wiki
### Additional info
_No response_",True,"Add linting and formatter - ### Preliminary checks
- [X] I've checked that there aren't [**other open issues**](https://github.com/pmqueiroz/mask-wizard/issues?q=is%3Aissue) on the same topic.
- [X] I want to work on this.
### Describe the problem requiring a solution
Create a code style guide and add set up linting and formatter to the project
### Describe the possible solution
Eslint and prettier
Add code style guid to Github Wiki
### Additional info
_No response_",1,add linting and formatter preliminary checks i ve checked that there aren t on the same topic i want to work on this describe the problem requiring a solution create a code style guide and add set up linting and formatter to the project describe the possible solution eslint and prettier add code style guid to github wiki additional info no response ,1
5296,26761302137.0,IssuesEvent,2023-01-31 07:08:55,bazelbuild/intellij,https://api.github.com/repos/bazelbuild/intellij,closed,go_tool_library sources marked as unsynced,type: bug lang: go product: IntelliJ os: linux topic: sync awaiting-maintainer,"#### Description of the issue. Please be specific.
Go sources using the `go_tool_library` are marked as unsynced. The `go_tool_library` is necessary for `nogo` rules to avoid a cycle in dependencies. The normal `go_library` uses `nogo`.
#### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible.
https://github.com/jschaf/bazel-bug-go-tool/tree/master/lint

#### Version information
IdeaUltimate: 2020.2.3
Platform: Linux 5.4.0-7642-generic
Bazel plugin: 9999
Bazel: 3.7.0
",True,"go_tool_library sources marked as unsynced - #### Description of the issue. Please be specific.
Go sources using the `go_tool_library` are marked as unsynced. The `go_tool_library` is necessary for `nogo` rules to avoid a cycle in dependencies. The normal `go_library` uses `nogo`.
#### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible.
https://github.com/jschaf/bazel-bug-go-tool/tree/master/lint

#### Version information
IdeaUltimate: 2020.2.3
Platform: Linux 5.4.0-7642-generic
Bazel plugin: 9999
Bazel: 3.7.0
",1,go tool library sources marked as unsynced description of the issue please be specific go sources using the go tool library are marked as unsynced the go tool library is necessary for nogo rules to avoid a cycle in dependencies the normal go library uses nogo what s the simplest set of steps to reproduce this issue please provide an example project if possible version information ideaultimate platform linux generic bazel plugin bazel ,1
5730,30292232158.0,IssuesEvent,2023-07-09 12:27:57,svengreb/wand,https://api.github.com/repos/svengreb/wand,opened,`go run` support for versioned modules (Go 1.17+),context-api context-pkg scope-compatibility scope-dx scope-maintainability scope-stability type-feature,"[As of Go 1.17 the `go run` command can finally run in module-aware mode][1] while not “polluting“ the current module in the working directory, of there is one (`go.mod` file present) 🎉
This finally allows to [run commands _on-the-fly_](https://pkg.go.dev/cmd/go#hdr-Compile_and_run_Go_program) of Go `main` module packages without installing them or without changing dependencies of the current module!
To support this feature with _wand_ a new [`task.GoModule`][2] will be implemented in a new [`golang/run`][3] package.
It can be run using a [command runner][4] that handles tasks of kind [`KindGoModule`][5] so mainly [`gotool.Runner`][6].
The new [`golang/run.Task`][3] will be customizable through the following functions:
- `WithArgs(...string) run.Option` — sets additional arguments to pass to the command.
- `WithEnv(map[string]string) run.Option` — sets the task specific environment.
- `WithModulePath(string) run.Option` — sets the module import path.
- `WithModuleVersion(*semver.Version) run.Option` — sets the module version.
Next to the new task the [`gotool.Runner`][6] will be adjusted to a new [`WithCache(bool)`][9] runner option to toggle the usage of the local cache directory in the root directory of the module. The runner will be made “smart“ in the way that it either…
- installing the executable through a [`golang.Runner`][8], which runs `go install pkg@version` to [leverage Go 1.16‘s feature](https://github.com/svengreb/wand/issues/89), and execute it afterwards. This is the current default behavior of this runner which will be used when [`WithCache(true)`][9] is used.
- pass the task to a [`golang.Runner`][8], using the new [`golang/run`][3] package task, so that it can run `go run pkg@version ` instead. This is the new “smart“ behavior of the runner which will be used when [`WithCache(false)`][9] (default) is used.
The **new default behavior will be to not use a local cache** so that caching will be a opt-in. This decision was made because native support for running commands _on-the-fly_ should always be preferred to custom logic which is what the local cache directory and [`gotool.Runner`][6] purpose is.
> [!warning] Note that the minimum Go version for task runners, the new [`golang/run` task][3] and [the _Elder_ wand][7] will be increased to `1.17.0` since this version initially [introduced `go run` support in module-aware mode][1]!
> This will be enforced through a [build constraint](https://pkg.go.dev/cmd/go#hdr-Build_constraints) (`go:build go1.17`).
The [`Elder`][7] reference implementation will also adapt to this new feature by…
1. **deprecating the `*elder.Elder.Bootstrap(...string) []error` method**! As of _wand_ version `0.9.0` it will be a no-op and will be removed in version `0.10.0`. To install executables anyway the new `*elder.Elder.CacheExecutables error` method should be used instead. To ensure that the wand is properly initialized and operational the `*elder.Elder.Validate(..task.Runner) []error` method is the way to go. A warning message will be printed when the method is called to ensure that users adapt accordionally.
2. providing a new `*elder.Elder.CacheExecutables(...string) error` method which allows to pass paths of Go modules that should be explicitly installed to the local cache directory. This method is a kind of workaround for the, now deprecated, `*elder.Elder.Bootstrap(...string) []error` method to allows users to still cache command executables locally.
3. changing the signature of the `*elder.Elder.Validate() error` method to `*elder.Elder.Validate(...task.Runner) []error` method which allows users to ensure that the _wand_ is properly initialized and operational. Optionally [command runner][4] can be passed that will be validated while passing nothing will validate all currently supported runners.
[1]: https://go.dev/doc/go1.17#go%20run
[2]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task#GoModule
[3]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/golang/run
[4]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task#Runner
[5]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task#KindGoModule
[6]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/gotool#Runner
[7]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/elder
[8]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/golang#Runner
[9]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/golang/run#WithCache",True,"`go run` support for versioned modules (Go 1.17+) - [As of Go 1.17 the `go run` command can finally run in module-aware mode][1] while not “polluting“ the current module in the working directory, of there is one (`go.mod` file present) 🎉
This finally allows to [run commands _on-the-fly_](https://pkg.go.dev/cmd/go#hdr-Compile_and_run_Go_program) of Go `main` module packages without installing them or without changing dependencies of the current module!
To support this feature with _wand_ a new [`task.GoModule`][2] will be implemented in a new [`golang/run`][3] package.
It can be run using a [command runner][4] that handles tasks of kind [`KindGoModule`][5] so mainly [`gotool.Runner`][6].
The new [`golang/run.Task`][3] will be customizable through the following functions:
- `WithArgs(...string) run.Option` — sets additional arguments to pass to the command.
- `WithEnv(map[string]string) run.Option` — sets the task specific environment.
- `WithModulePath(string) run.Option` — sets the module import path.
- `WithModuleVersion(*semver.Version) run.Option` — sets the module version.
Next to the new task the [`gotool.Runner`][6] will be adjusted to a new [`WithCache(bool)`][9] runner option to toggle the usage of the local cache directory in the root directory of the module. The runner will be made “smart“ in the way that it either…
- installing the executable through a [`golang.Runner`][8], which runs `go install pkg@version` to [leverage Go 1.16‘s feature](https://github.com/svengreb/wand/issues/89), and execute it afterwards. This is the current default behavior of this runner which will be used when [`WithCache(true)`][9] is used.
- pass the task to a [`golang.Runner`][8], using the new [`golang/run`][3] package task, so that it can run `go run pkg@version ` instead. This is the new “smart“ behavior of the runner which will be used when [`WithCache(false)`][9] (default) is used.
The **new default behavior will be to not use a local cache** so that caching will be a opt-in. This decision was made because native support for running commands _on-the-fly_ should always be preferred to custom logic which is what the local cache directory and [`gotool.Runner`][6] purpose is.
> [!warning] Note that the minimum Go version for task runners, the new [`golang/run` task][3] and [the _Elder_ wand][7] will be increased to `1.17.0` since this version initially [introduced `go run` support in module-aware mode][1]!
> This will be enforced through a [build constraint](https://pkg.go.dev/cmd/go#hdr-Build_constraints) (`go:build go1.17`).
The [`Elder`][7] reference implementation will also adapt to this new feature by…
1. **deprecating the `*elder.Elder.Bootstrap(...string) []error` method**! As of _wand_ version `0.9.0` it will be a no-op and will be removed in version `0.10.0`. To install executables anyway the new `*elder.Elder.CacheExecutables error` method should be used instead. To ensure that the wand is properly initialized and operational the `*elder.Elder.Validate(..task.Runner) []error` method is the way to go. A warning message will be printed when the method is called to ensure that users adapt accordionally.
2. providing a new `*elder.Elder.CacheExecutables(...string) error` method which allows to pass paths of Go modules that should be explicitly installed to the local cache directory. This method is a kind of workaround for the, now deprecated, `*elder.Elder.Bootstrap(...string) []error` method to allows users to still cache command executables locally.
3. changing the signature of the `*elder.Elder.Validate() error` method to `*elder.Elder.Validate(...task.Runner) []error` method which allows users to ensure that the _wand_ is properly initialized and operational. Optionally [command runner][4] can be passed that will be validated while passing nothing will validate all currently supported runners.
[1]: https://go.dev/doc/go1.17#go%20run
[2]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task#GoModule
[3]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/golang/run
[4]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task#Runner
[5]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task#KindGoModule
[6]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/gotool#Runner
[7]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/elder
[8]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/golang#Runner
[9]: https://pkg.go.dev/github.com/svengreb/wand@v0.9.0/pkg/task/golang/run#WithCache",1, go run support for versioned modules go while not “polluting“ the current module in the working directory of there is one go mod file present 🎉 this finally allows to of go main module packages without installing them or without changing dependencies of the current module to support this feature with wand a new will be implemented in a new package it can be run using a that handles tasks of kind so mainly the new will be customizable through the following functions withargs string run option — sets additional arguments to pass to the command withenv map string run option — sets the task specific environment withmodulepath string run option — sets the module import path withmoduleversion semver version run option — sets the module version next to the new task the will be adjusted to a new runner option to toggle the usage of the local cache directory in the root directory of the module the runner will be made “smart“ in the way that it either… installing the executable through a which runs go install pkg version to and execute it afterwards this is the current default behavior of this runner which will be used when is used pass the task to a using the new package task so that it can run go run pkg version instead this is the new “smart“ behavior of the runner which will be used when default is used the new default behavior will be to not use a local cache so that caching will be a opt in this decision was made because native support for running commands on the fly should always be preferred to custom logic which is what the local cache directory and purpose is note that the minimum go version for task runners the new and will be increased to since this version initially this will be enforced through a go build the reference implementation will also adapt to this new feature by… deprecating the elder elder bootstrap string error method as of wand version it will be a no op and will be removed in version to install executables anyway the new elder elder cacheexecutables error method should be used instead to ensure that the wand is properly initialized and operational the elder elder validate task runner error method is the way to go a warning message will be printed when the method is called to ensure that users adapt accordionally providing a new elder elder cacheexecutables string error method which allows to pass paths of go modules that should be explicitly installed to the local cache directory this method is a kind of workaround for the now deprecated elder elder bootstrap string error method to allows users to still cache command executables locally changing the signature of the elder elder validate error method to elder elder validate task runner error method which allows users to ensure that the wand is properly initialized and operational optionally can be passed that will be validated while passing nothing will validate all currently supported runners ,1
1260,5348482978.0,IssuesEvent,2017-02-18 05:30:02,diofant/diofant,https://api.github.com/repos/diofant/diofant,opened,"Use ""new"" style for string formatting",maintainability,"I.e. ``""{0:s}"".format(""spam"")`` instead of ``""%s"" % ""spam""``",True,"Use ""new"" style for string formatting - I.e. ``""{0:s}"".format(""spam"")`` instead of ``""%s"" % ""spam""``",1,use new style for string formatting i e s format spam instead of s spam ,1
227,2893505978.0,IssuesEvent,2015-06-15 18:20:59,OpenLightingProject/ola,https://api.github.com/repos/OpenLightingProject/ola,closed,Tests fail when no non-loopback interfaces are present,bug Maintainability OpSys-All,"See the Debian bug here:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=769670
It probably makes sense to fix this locally, in which case we can remove testGetLoopbackInterfaces, we just need to work out what we do on Windows given: https://github.com/OpenLightingProject/ola/blob/master/common/network/InterfacePickerTest.cpp#L99
Do we skip the test on Windows, or let it run, possibly with a log line on Windows, for the tiny userbase who will run Windows with no network interface. Or ideally fix it up so the loopback interface is reported on Windows too.",True,"Tests fail when no non-loopback interfaces are present - See the Debian bug here:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=769670
It probably makes sense to fix this locally, in which case we can remove testGetLoopbackInterfaces, we just need to work out what we do on Windows given: https://github.com/OpenLightingProject/ola/blob/master/common/network/InterfacePickerTest.cpp#L99
Do we skip the test on Windows, or let it run, possibly with a log line on Windows, for the tiny userbase who will run Windows with no network interface. Or ideally fix it up so the loopback interface is reported on Windows too.",1,tests fail when no non loopback interfaces are present see the debian bug here it probably makes sense to fix this locally in which case we can remove testgetloopbackinterfaces we just need to work out what we do on windows given do we skip the test on windows or let it run possibly with a log line on windows for the tiny userbase who will run windows with no network interface or ideally fix it up so the loopback interface is reported on windows too ,1
1910,6577571872.0,IssuesEvent,2017-09-12 01:50:38,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,cloud/docker: update doc for field 'registry',affects_2.1 cloud docker docs_report waiting_on_maintainer,"
##### ISSUE TYPE
Documentation Report
##### COMPONENT NAME
docker
##### ANSIBLE VERSION
```
ansible 2.1.0 (devel 22467a0de8) last updated 2016/04/13 11:42:21 (GMT +200)
lib/ansible/modules/core: (detached HEAD 99cd31140d) last updated 2016/04/13 11:42:31 (GMT +200)
lib/ansible/modules/extras: (detached HEAD ab2f4c4002) last updated 2016/04/13 11:42:40 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### SUMMARY
docker module docs describes field `registry` as the ""Remote registry URL to pull images from"". However, I think this field's only use is for login, not pulling, so the doc is misleading. See the issue I opened on that subject (#3419). It would be nice if the doc could be fixed.
",True,"cloud/docker: update doc for field 'registry' -
##### ISSUE TYPE
Documentation Report
##### COMPONENT NAME
docker
##### ANSIBLE VERSION
```
ansible 2.1.0 (devel 22467a0de8) last updated 2016/04/13 11:42:21 (GMT +200)
lib/ansible/modules/core: (detached HEAD 99cd31140d) last updated 2016/04/13 11:42:31 (GMT +200)
lib/ansible/modules/extras: (detached HEAD ab2f4c4002) last updated 2016/04/13 11:42:40 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### SUMMARY
docker module docs describes field `registry` as the ""Remote registry URL to pull images from"". However, I think this field's only use is for login, not pulling, so the doc is misleading. See the issue I opened on that subject (#3419). It would be nice if the doc could be fixed.
",1,cloud docker update doc for field registry issue type documentation report component name docker ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides summary docker module docs describes field registry as the remote registry url to pull images from however i think this field s only use is for login not pulling so the doc is misleading see the issue i opened on that subject it would be nice if the doc could be fixed ,1
116440,24918341270.0,IssuesEvent,2022-10-30 17:15:02,dotnet/runtime,https://api.github.com/repos/dotnet/runtime,closed,Folding LoadAlignedVector* into the consumer instructions with VEX-encoding,enhancement area-CodeGen-coreclr JitUntriaged,"Currently, `LoadVector128/256` can be folded into its consumer instructions with VEX-encoding but `LoadAlignedVector128/256` not.
`LoadAlignedVector128/256` would throw hardware exceptions if the memory address is not aligned to the specific boundary, but other VEX-encoded instructions (e.g., `vaddps xmm0, xmm1, [unalignedAddr]`) can work with unaligned memory. So, actually, we can fold `LoadAlignedVector128/256` into its consumer instructions with VEX-encoding.
```asm
;;; unoptimized
vmovaps xmm0, [unalignedAddr] ;;; hardware exception
vaddps xmm0, xmm1, xmm0
;;; optimized
vaddps xmm0, xmm1, [unalignedAddr] ;;; ok
```
All the mainstream C/C++ compilers have this behavior.
@CarolEidt @tannergooding @mikedn
category:cq
theme:vector-codegen
skill-level:intermediate
cost:medium",1.0,"Folding LoadAlignedVector* into the consumer instructions with VEX-encoding - Currently, `LoadVector128/256` can be folded into its consumer instructions with VEX-encoding but `LoadAlignedVector128/256` not.
`LoadAlignedVector128/256` would throw hardware exceptions if the memory address is not aligned to the specific boundary, but other VEX-encoded instructions (e.g., `vaddps xmm0, xmm1, [unalignedAddr]`) can work with unaligned memory. So, actually, we can fold `LoadAlignedVector128/256` into its consumer instructions with VEX-encoding.
```asm
;;; unoptimized
vmovaps xmm0, [unalignedAddr] ;;; hardware exception
vaddps xmm0, xmm1, xmm0
;;; optimized
vaddps xmm0, xmm1, [unalignedAddr] ;;; ok
```
All the mainstream C/C++ compilers have this behavior.
@CarolEidt @tannergooding @mikedn
category:cq
theme:vector-codegen
skill-level:intermediate
cost:medium",0,folding loadalignedvector into the consumer instructions with vex encoding currently can be folded into its consumer instructions with vex encoding but not would throw hardware exceptions if the memory address is not aligned to the specific boundary but other vex encoded instructions e g vaddps can work with unaligned memory so actually we can fold into its consumer instructions with vex encoding asm unoptimized vmovaps hardware exception vaddps optimized vaddps ok all the mainstream c c compilers have this behavior caroleidt tannergooding mikedn category cq theme vector codegen skill level intermediate cost medium,0
302025,26118181136.0,IssuesEvent,2022-12-28 09:15:38,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: failover/non-system/blackhole-recv failed,C-test-failure O-robot O-roachtest branch-master release-blocker T-kv,"roachtest.failover/non-system/blackhole-recv [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=artifacts#/failover/non-system/blackhole-recv) on master @ [9c5375f6a7375724cdbcbaa0029ed97a230d7abe](https://github.com/cockroachdb/cockroach/commits/9c5375f6a7375724cdbcbaa0029ed97a230d7abe):
```
test artifacts and logs in: /artifacts/failover/non-system/blackhole-recv/run_1
(test_impl.go:314).Errorf: test timed out (20m0s)
```
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
/cc @cockroachdb/kv-triage
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*failover/non-system/blackhole-recv.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
Jira issue: CRDB-22812",2.0,"roachtest: failover/non-system/blackhole-recv failed - roachtest.failover/non-system/blackhole-recv [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/8107043?buildTab=artifacts#/failover/non-system/blackhole-recv) on master @ [9c5375f6a7375724cdbcbaa0029ed97a230d7abe](https://github.com/cockroachdb/cockroach/commits/9c5375f6a7375724cdbcbaa0029ed97a230d7abe):
```
test artifacts and logs in: /artifacts/failover/non-system/blackhole-recv/run_1
(test_impl.go:314).Errorf: test timed out (20m0s)
```
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
/cc @cockroachdb/kv-triage
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*failover/non-system/blackhole-recv.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
Jira issue: CRDB-22812",0,roachtest failover non system blackhole recv failed roachtest failover non system blackhole recv with on master test artifacts and logs in artifacts failover non system blackhole recv run test impl go errorf test timed out parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb kv triage jira issue crdb ,0
704260,24190727016.0,IssuesEvent,2022-09-23 17:17:12,AY2223S1-CS2103T-W17-3/tp,https://api.github.com/repos/AY2223S1-CS2103T-W17-3/tp,opened,Add AboutUs Page,priority.High type.Admin,"# AboutUs page:
This page (in the /docs folder) is used for module admin purposes\
Please follow the format closely or else our scripts will not be able to give credit for your work.
Add your own details. Include a suitable photo as described here.
There is no need to mention the the tutor/lecturer, but OK to do so too.
The filename of the profile photo should be docs/images/github_username_in_lower_case.png
Note the need for lower case ( why lowercase?) e.g. JohnDoe123 -> docs/images/johndoe123.png not docs/images/JohnDoe123.png.
If your photo is in jpg format, name the file as .png anyway.
Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary.",1.0,"Add AboutUs Page - # AboutUs page:
This page (in the /docs folder) is used for module admin purposes\
Please follow the format closely or else our scripts will not be able to give credit for your work.
Add your own details. Include a suitable photo as described here.
There is no need to mention the the tutor/lecturer, but OK to do so too.
The filename of the profile photo should be docs/images/github_username_in_lower_case.png
Note the need for lower case ( why lowercase?) e.g. JohnDoe123 -> docs/images/johndoe123.png not docs/images/JohnDoe123.png.
If your photo is in jpg format, name the file as .png anyway.
Indicate the different roles played and responsibilities held by each team member. You can reassign these roles and responsibilities (as explained in Admin Project Scope) later in the project, if necessary.",0,add aboutus page aboutus page this page in the docs folder is used for module admin purposes please follow the format closely or else our scripts will not be able to give credit for your work add your own details include a suitable photo as described here there is no need to mention the the tutor lecturer but ok to do so too the filename of the profile photo should be docs images github username in lower case png note the need for lower case why lowercase e g docs images png not docs images png if your photo is in jpg format name the file as png anyway indicate the different roles played and responsibilities held by each team member you can reassign these roles and responsibilities as explained in admin project scope later in the project if necessary ,0
160803,20118880275.0,IssuesEvent,2022-02-07 22:52:52,TreyM-WSS/whitesource-demo-1,https://api.github.com/repos/TreyM-WSS/whitesource-demo-1,opened,CVE-2021-23364 (Medium) detected in browserslist-4.7.0.tgz,security vulnerability,"## CVE-2021-23364 - Medium Severity Vulnerability
Vulnerable Library - browserslist-4.7.0.tgz
Share target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-env-preset
",0,cve medium detected in browserslist tgz cve medium severity vulnerability vulnerable library browserslist tgz share target browsers between different front end tools like autoprefixer stylelint and babel env preset library home page a href dependency hierarchy preset env tgz root library x browserslist tgz vulnerable library found in head commit a href vulnerability details the package browserslist from and before are vulnerable to regular expression denial of service redos during parsing of queries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution browserslist ,0
3094,11744538505.0,IssuesEvent,2020-03-12 07:58:15,PointCloudLibrary/pcl,https://api.github.com/repos/PointCloudLibrary/pcl,opened,Ambiguous comment by codebase to developer,kind: question module: common needs: maintainer feedback,"
## Context
https://github.com/PointCloudLibrary/pcl/blob/master/common/include/pcl/point_traits.h#L200
`point_traits.h`:200 refers to a bug #821 but that doesn't explain anything.
## Possible Solution
What's the actual reason?
It makes sense from C++ perspective that the container of 0 fields will have a 1 byte memory usage, but I'm not sure that's what's happening here
",True,"Ambiguous comment by codebase to developer -
## Context
https://github.com/PointCloudLibrary/pcl/blob/master/common/include/pcl/point_traits.h#L200
`point_traits.h`:200 refers to a bug #821 but that doesn't explain anything.
## Possible Solution
What's the actual reason?
It makes sense from C++ perspective that the container of 0 fields will have a 1 byte memory usage, but I'm not sure that's what's happening here
",1,ambiguous comment by codebase to developer context point traits h refers to a bug but that doesn t explain anything possible solution what s the actual reason it makes sense from c perspective that the container of fields will have a byte memory usage but i m not sure that s what s happening here ,1
617817,19405284822.0,IssuesEvent,2021-12-19 22:10:23,OnTopicCMS/OnTopic-Library,https://api.github.com/repos/OnTopicCMS/OnTopic-Library,opened,Mapping: Ensure cache entries are only pulled once,Area: Mapping Severity 0: Nice to have Priority: 3 Type: Improvement Status 2: Scheduled,"Currently, when pulling objects from the cache, the cache must be queried twice, due to chaining of `MapAsync()` overloads. This should be avoidable. ",1.0,"Mapping: Ensure cache entries are only pulled once - Currently, when pulling objects from the cache, the cache must be queried twice, due to chaining of `MapAsync()` overloads. This should be avoidable. ",0,mapping ensure cache entries are only pulled once currently when pulling objects from the cache the cache must be queried twice due to chaining of mapasync overloads this should be avoidable ,0
2959,10616627764.0,IssuesEvent,2019-10-12 13:14:18,arcticicestudio/snowsaw,https://api.github.com/repos/arcticicestudio/snowsaw,closed,Update to Go 1.13 and latest dependency versions,context-workflow scope-compatibility scope-maintainability scope-performance scope-quality scope-security scope-stability type-task,"[Go 1.13 has been released][blog] over a month ago that comes with some great features and a lot stability, performance and security improvements and bug fixes. The [new `os.UserConfigDir()` function][os] is a great addition for the handling for snowsaw's configuration files that will be implemented late on. See the [Go 1.13 official release notes][rln] for more details.
Since there are no breaking changes snowsaw will now require Go 1.13 as minimum version.
With the update to Go 1.13.x all outdated dependencies should also be updated to their latest versions to prevent possible module incompatibilities as well as including the latest improvements and bug fixes.
[blog]: https://blog.golang.org/go1.13
[os]: https://golang.org/pkg/os/#UserConfigDir
[rln]: https://golang.org/doc/go1.13
",True,"Update to Go 1.13 and latest dependency versions - [Go 1.13 has been released][blog] over a month ago that comes with some great features and a lot stability, performance and security improvements and bug fixes. The [new `os.UserConfigDir()` function][os] is a great addition for the handling for snowsaw's configuration files that will be implemented late on. See the [Go 1.13 official release notes][rln] for more details.
Since there are no breaking changes snowsaw will now require Go 1.13 as minimum version.
With the update to Go 1.13.x all outdated dependencies should also be updated to their latest versions to prevent possible module incompatibilities as well as including the latest improvements and bug fixes.
[blog]: https://blog.golang.org/go1.13
[os]: https://golang.org/pkg/os/#UserConfigDir
[rln]: https://golang.org/doc/go1.13
",1,update to go and latest dependency versions over a month ago that comes with some great features and a lot stability performance and security improvements and bug fixes the is a great addition for the handling for snowsaw s configuration files that will be implemented late on see the for more details since there are no breaking changes snowsaw will now require go as minimum version with the update to go x all outdated dependencies should also be updated to their latest versions to prevent possible module incompatibilities as well as including the latest improvements and bug fixes ,1
84766,10417820304.0,IssuesEvent,2019-09-15 01:58:47,golang/go,https://api.github.com/repos/golang/go,closed,net/http: Content-Length is not set in outgoing request when using ioutil.NopCloser,Documentation,"
### What version of Go are you using (`go version`)?
$ go version
go version go1.13 darwin/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
go env Output
### What did you do?
Start a httpbin server locally.
docker run -p 80:80 kennethreitz/httpbin
Run the following program
```go
package main
import (
""bytes""
""io/ioutil""
""log""
""net/http""
)
func main() {
reqBody := ioutil.NopCloser(bytes.NewBufferString(`{}`))
req, err := http.NewRequest(""POST"", ""http://localhost:80/post"", reqBody)
if err != nil {
log.Fatalf(""Cannot create request: %v"", err)
}
res, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatalf(""Cannot do: %v"", err)
}
defer res.Body.Close()
resBody, err := ioutil.ReadAll(res.Body)
if err != nil {
log.Fatalf(""Cannot read body: %v"", err)
}
log.Printf(""Response Body: %s"", resBody)
}
```
### What did you expect to see?
Content-Length header is set when it is received by the server.
### What did you see instead?
Content-Length header is missing when it is received by the server.
",1.0,"net/http: Content-Length is not set in outgoing request when using ioutil.NopCloser -
### What version of Go are you using (`go version`)?
$ go version
go version go1.13 darwin/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
go env Output
### What did you do?
Start a httpbin server locally.
docker run -p 80:80 kennethreitz/httpbin
Run the following program
```go
package main
import (
""bytes""
""io/ioutil""
""log""
""net/http""
)
func main() {
reqBody := ioutil.NopCloser(bytes.NewBufferString(`{}`))
req, err := http.NewRequest(""POST"", ""http://localhost:80/post"", reqBody)
if err != nil {
log.Fatalf(""Cannot create request: %v"", err)
}
res, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatalf(""Cannot do: %v"", err)
}
defer res.Body.Close()
resBody, err := ioutil.ReadAll(res.Body)
if err != nil {
log.Fatalf(""Cannot read body: %v"", err)
}
log.Printf(""Response Body: %s"", resBody)
}
```
### What did you expect to see?
Content-Length header is set when it is received by the server.
### What did you see instead?
Content-Length header is missing when it is received by the server.
",0,net http content length is not set in outgoing request when using ioutil nopcloser what version of go are you using go version go version go version darwin does this issue reproduce with the latest release yes what operating system and processor architecture are you using go env go env output go env goarch gobin gocache users library caches go build goenv users library application support go env goexe goflags gohostarch gohostos darwin gonoproxy gonosumdb goos darwin gopath users go goprivate goproxy goroot usr local cellar go libexec gosumdb sum golang org gotmpdir gotooldir usr local cellar go libexec pkg tool darwin gccgo gccgo ar ar cc clang cxx clang cgo enabled gomod users projects js scripts go mod cgo cflags g cgo cppflags cgo cxxflags g cgo fflags g cgo ldflags g pkg config pkg config gogccflags fpic pthread fno caret diagnostics qunused arguments fmessage length fdebug prefix map var folders t go tmp go build gno record gcc switches fno common what did you do if possible provide a recipe for reproducing the error a complete runnable program is good a link on play golang org is best start a httpbin server locally docker run p kennethreitz httpbin run the following program go package main import bytes io ioutil log net http func main reqbody ioutil nopcloser bytes newbufferstring req err http newrequest post reqbody if err nil log fatalf cannot create request v err res err http defaultclient do req if err nil log fatalf cannot do v err defer res body close resbody err ioutil readall res body if err nil log fatalf cannot read body v err log printf response body s resbody what did you expect to see content length header is set when it is received by the server what did you see instead content length header is missing when it is received by the server response body args data files form headers accept encoding gzip host localhost transfer encoding chunked user agent go http client json origin url versus what i would receive if i use reqbody bytes newbufferstring response body args data files form headers accept encoding gzip content length host localhost user agent go http client json origin url ,0
176225,21390858860.0,IssuesEvent,2022-04-21 06:56:33,turkdevops/update-electron-app,https://api.github.com/repos/turkdevops/update-electron-app,opened,CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz,security vulnerability,"## CVE-2020-28500 - Medium Severity Vulnerability
Vulnerable Library - lodash-4.17.20.tgz
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
Direct dependency fix Resolution (standard): 15.0.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
Vulnerable Library - lodash-4.17.20.tgz
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
WhiteSource Note: After conducting further research, WhiteSource has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
Direct dependency fix Resolution (standard): 15.0.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy standard tgz root library eslint tgz x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions whitesource note after conducting further research whitesource has determined that cve only affects environments with versions to of lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution standard step up your open source security game with whitesource ,0
3315,12833941761.0,IssuesEvent,2020-07-07 10:09:05,spack/spack,https://api.github.com/repos/spack/spack,opened,"Remove explicit version enumeration in ""containerize"" related code",feature maintainers,"As a maintainer I want to remove the explicit enumeration of Spack versions in:
- https://github.com/spack/spack/blob/develop/lib/spack/spack/container/images.json
- https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/container.py
so that there will be one place less to update when cutting a new release.
### Rationale
Recently the release process has been documented, with TODO for improvement on the overall process:
https://github.com/spack/spack/blob/9ec9327f5aacc7b62a1469771c8917547393676d/lib/spack/docs/developer_guide.rst#L621-L626
### Description
A proper solution might need some discussion and might involve:
- Computing the versions that are currently in `images.json` dynamically (by querying Dockerhub?)
- Move the check on the YAML file from the schema to a later dynamic check.
This section will be updated as the discussion on this issue progresses.
### Additional information
```console
$ spack --version
0.15.0-62-d65a076c0
```
### General information
- [x] I have run `spack --version` and reported the version of Spack
- [x] I have searched the issues of this repo and believe this is not a duplicate
",True,"Remove explicit version enumeration in ""containerize"" related code - As a maintainer I want to remove the explicit enumeration of Spack versions in:
- https://github.com/spack/spack/blob/develop/lib/spack/spack/container/images.json
- https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/container.py
so that there will be one place less to update when cutting a new release.
### Rationale
Recently the release process has been documented, with TODO for improvement on the overall process:
https://github.com/spack/spack/blob/9ec9327f5aacc7b62a1469771c8917547393676d/lib/spack/docs/developer_guide.rst#L621-L626
### Description
A proper solution might need some discussion and might involve:
- Computing the versions that are currently in `images.json` dynamically (by querying Dockerhub?)
- Move the check on the YAML file from the schema to a later dynamic check.
This section will be updated as the discussion on this issue progresses.
### Additional information
```console
$ spack --version
0.15.0-62-d65a076c0
```
### General information
- [x] I have run `spack --version` and reported the version of Spack
- [x] I have searched the issues of this repo and believe this is not a duplicate
",1,remove explicit version enumeration in containerize related code as a maintainer i want to remove the explicit enumeration of spack versions in so that there will be one place less to update when cutting a new release rationale recently the release process has been documented with todo for improvement on the overall process description a proper solution might need some discussion and might involve computing the versions that are currently in images json dynamically by querying dockerhub move the check on the yaml file from the schema to a later dynamic check this section will be updated as the discussion on this issue progresses additional information console spack version general information i have run spack version and reported the version of spack i have searched the issues of this repo and believe this is not a duplicate if you want to ask a question about the tool how to use it what it can currently do etc try the general channel on our slack first we have a welcoming community and chances are you ll get your reply faster and without opening an issue other than that thanks for taking the time to contribute to spack ,1
359845,10681675650.0,IssuesEvent,2019-10-22 01:53:31,unoplatform/uno,https://api.github.com/repos/unoplatform/uno,closed,"Folder in solution with name ""Uno"" causes namespace resolution errors",kind/bug kind/consumer-experience priority/backlog triage/needs-information,"I'm trying to add Uno to an existing solution.
I ""logically"" created a Uno folder in the solution. I added MyApp.Uno.UPW, MyApp.Uno.iOS and MyApp.Uno.Shared projects with the proper nuget packages and references.
When I build the iOS project I had an error like:
> The type or namespace name 'UI' does not exist in the namespace 'MyApp.Uno' (are you missing an assembly reference?)
I had to rename my folder to UnoApp and projects to MyApp.UnoApp.UWP etc to get it to compile.",1.0,"Folder in solution with name ""Uno"" causes namespace resolution errors - I'm trying to add Uno to an existing solution.
I ""logically"" created a Uno folder in the solution. I added MyApp.Uno.UPW, MyApp.Uno.iOS and MyApp.Uno.Shared projects with the proper nuget packages and references.
When I build the iOS project I had an error like:
> The type or namespace name 'UI' does not exist in the namespace 'MyApp.Uno' (are you missing an assembly reference?)
I had to rename my folder to UnoApp and projects to MyApp.UnoApp.UWP etc to get it to compile.",0,folder in solution with name uno causes namespace resolution errors i m trying to add uno to an existing solution i logically created a uno folder in the solution i added myapp uno upw myapp uno ios and myapp uno shared projects with the proper nuget packages and references when i build the ios project i had an error like the type or namespace name ui does not exist in the namespace myapp uno are you missing an assembly reference i had to rename my folder to unoapp and projects to myapp unoapp uwp etc to get it to compile ,0
58401,24439278147.0,IssuesEvent,2022-10-06 13:37:17,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,.NET not supported by OpenAI?,cognitive-services/svc triaged assigned-to-author product-question Pri1,"I expected to see at least C# since GitHub CoPilot does support it.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 25c15e01-3578-c4ae-afbc-32c7172173d0
* Version Independent ID: 0d20fad8-0e13-2d85-64e5-31861fabdcdd
* Content: [Azure OpenAI Engines - Azure OpenAI](https://docs.microsoft.com/en-us/azure/cognitive-services/openai/concepts/engines)
* Content Source: [articles/cognitive-services/openai/concepts/engines.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/openai/concepts/engines.md)
* Service: **cognitive-services**
* GitHub Login: @mrbullwinkle
* Microsoft Alias: **mbullwin**",1.0,".NET not supported by OpenAI? - I expected to see at least C# since GitHub CoPilot does support it.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 25c15e01-3578-c4ae-afbc-32c7172173d0
* Version Independent ID: 0d20fad8-0e13-2d85-64e5-31861fabdcdd
* Content: [Azure OpenAI Engines - Azure OpenAI](https://docs.microsoft.com/en-us/azure/cognitive-services/openai/concepts/engines)
* Content Source: [articles/cognitive-services/openai/concepts/engines.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/openai/concepts/engines.md)
* Service: **cognitive-services**
* GitHub Login: @mrbullwinkle
* Microsoft Alias: **mbullwin**",0, net not supported by openai i expected to see at least c since github copilot does support it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id afbc version independent id content content source service cognitive services github login mrbullwinkle microsoft alias mbullwin ,0
5686,29924486339.0,IssuesEvent,2023-06-22 03:26:28,spicetify/spicetify-themes,https://api.github.com/repos/spicetify/spicetify-themes,closed,[BurntSienna] Top Icon Spacing Messed Up ,☠️ unmaintained,"**Describe the bug**
Top icon spacing looks off and has gaps. And some of the icons look squished. Large spacing in friends section compared to before.
**To Reproduce**
Steps to reproduce the behavior:
1. Install BurntSienna Theme
**Expected behavior**
Buttons and search bar aren't' supposed to look cut off and squished and have adequate spacing. Applies to friends' section as well. Also, odd spacing between your library and the back and forward buttons that also look off-center and out of place.
**Screenshots**
**Specifics (please complete the following information):**
Windows 11
Spotify for Windows
Version 1.2.7.1277.g2b3ce637
Spicetify v2.16.2
Burnt Sienna
",True,"[BurntSienna] Top Icon Spacing Messed Up - **Describe the bug**
Top icon spacing looks off and has gaps. And some of the icons look squished. Large spacing in friends section compared to before.
**To Reproduce**
Steps to reproduce the behavior:
1. Install BurntSienna Theme
**Expected behavior**
Buttons and search bar aren't' supposed to look cut off and squished and have adequate spacing. Applies to friends' section as well. Also, odd spacing between your library and the back and forward buttons that also look off-center and out of place.
**Screenshots**
**Specifics (please complete the following information):**
Windows 11
Spotify for Windows
Version 1.2.7.1277.g2b3ce637
Spicetify v2.16.2
Burnt Sienna
",1, top icon spacing messed up describe the bug top icon spacing looks off and has gaps and some of the icons look squished large spacing in friends section compared to before to reproduce steps to reproduce the behavior install burntsienna theme expected behavior buttons and search bar aren t supposed to look cut off and squished and have adequate spacing applies to friends section as well also odd spacing between your library and the back and forward buttons that also look off center and out of place screenshots img width alt screenshot src specifics please complete the following information windows spotify for windows version spicetify burnt sienna ,1
111440,11732603753.0,IssuesEvent,2020-03-11 04:16:43,Students-of-the-city-of-Kostroma/trpo_automation,https://api.github.com/repos/Students-of-the-city-of-Kostroma/trpo_automation,closed,Описать и реализовать процесс валидации письма,Epic Sprint 1 Sprint 2 documentation realization,"Водопад фильтров с чётким приоритетом
На вход подаются тестовые письма, сформированные командой тестирования, разделённые по разным категориям
[Ссылка](https://docs.google.com/document/d/1knlDwZ4lGp7NlXlYRQp_11sQAKZlG4qBdP2qyhC0TQY/edit) на письма
После проверки писем каждому письму назначается код. Ссылка на соответствие состояний и кодов [здесь](https://docs.google.com/document/d/12mSzNBvU_WRPhW6snqCZLgmx2ziJRMnahdztMTvb8wk/edit#heading=h.gjdgxs)
Результаты
- Реализация — класс на языке python, отвечающий за реализацию данного функционала
- Документация
- - Пояснительная записка, в которой будет прописан цикл алгоритма проверки
- - Либо диаграмма вызовов",1.0,"Описать и реализовать процесс валидации письма - Водопад фильтров с чётким приоритетом
На вход подаются тестовые письма, сформированные командой тестирования, разделённые по разным категориям
[Ссылка](https://docs.google.com/document/d/1knlDwZ4lGp7NlXlYRQp_11sQAKZlG4qBdP2qyhC0TQY/edit) на письма
После проверки писем каждому письму назначается код. Ссылка на соответствие состояний и кодов [здесь](https://docs.google.com/document/d/12mSzNBvU_WRPhW6snqCZLgmx2ziJRMnahdztMTvb8wk/edit#heading=h.gjdgxs)
Результаты
- Реализация — класс на языке python, отвечающий за реализацию данного функционала
- Документация
- - Пояснительная записка, в которой будет прописан цикл алгоритма проверки
- - Либо диаграмма вызовов",0,описать и реализовать процесс валидации письма водопад фильтров с чётким приоритетом на вход подаются тестовые письма сформированные командой тестирования разделённые по разным категориям на письма после проверки писем каждому письму назначается код ссылка на соответствие состояний и кодов результаты реализация — класс на языке python отвечающий за реализацию данного функционала документация пояснительная записка в которой будет прописан цикл алгоритма проверки либо диаграмма вызовов,0
144124,11595731180.0,IssuesEvent,2020-02-24 17:33:05,terraform-providers/terraform-provider-google,https://api.github.com/repos/terraform-providers/terraform-provider-google,opened,Fix TestAccAppEngineServiceSplitTraffic_appEngineServiceSplitTrafficExample test,test failure,Missing a mutex I think. ,1.0,Fix TestAccAppEngineServiceSplitTraffic_appEngineServiceSplitTrafficExample test - Missing a mutex I think. ,0,fix testaccappengineservicesplittraffic appengineservicesplittrafficexample test missing a mutex i think ,0
356964,10599839360.0,IssuesEvent,2019-10-10 08:50:20,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.google.com - design is broken,ML Correct ML ON browser-fenix engine-gecko priority-critical,"
**URL**: https://www.google.com/search?q=test
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Searches look really ugly compared to other browser. The layout of Google searches is rounder in other browsers.
**Steps to Reproduce**:
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.google.com - design is broken -
**URL**: https://www.google.com/search?q=test
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Searches look really ugly compared to other browser. The layout of Google searches is rounder in other browsers.
**Steps to Reproduce**:
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, design is broken url browser version firefox mobile operating system android tested another browser yes problem type design is broken description searches look really ugly compared to other browser the layout of google searches is rounder in other browsers steps to reproduce browser configuration none from with ❤️ ,0
5557,27807290165.0,IssuesEvent,2023-03-17 21:20:28,microsoft/DirectXMesh,https://api.github.com/repos/microsoft/DirectXMesh,closed,Retire legacy Xbox One XDK support ,maintainence,"The only scenario that still uses VS 2017 is for the legacy Xbox One XDK. This task is drop support for this older Xbox development model and remove the following projects:
```
DirectXMesh_XboxOneXDK_2017.sln
```
",True,"Retire legacy Xbox One XDK support - The only scenario that still uses VS 2017 is for the legacy Xbox One XDK. This task is drop support for this older Xbox development model and remove the following projects:
```
DirectXMesh_XboxOneXDK_2017.sln
```
",1,retire legacy xbox one xdk support the only scenario that still uses vs is for the legacy xbox one xdk this task is drop support for this older xbox development model and remove the following projects directxmesh xboxonexdk sln ,1
19196,11163711269.0,IssuesEvent,2019-12-27 00:32:56,Azure/azure-cli,https://api.github.com/repos/Azure/azure-cli,closed,"Error while following the ""Host a web application with Azure App service"" tutorial",Service Attention Web Apps,"
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp up`
**Errors:**
```
'NoneType' object has no attribute 'upper'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 206, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 603, in execute
raise ex
cli/core/commands/__init__.py, ln 661, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 652, in _run_job
cmd_copy.exception_handler(ex)
...
cli/command_modules/appservice/custom.py, ln 2993, in webapp_up
create_app_service_plan(cmd, rg_name, plan, _is_linux, False, sku, 1 if _is_linux else None, location)
cli/command_modules/appservice/custom.py, ln 1390, in create_app_service_plan
sku = _normalize_sku(sku)
cli/command_modules/appservice/utils.py, ln 20, in _normalize_sku
sku = sku.upper()
AttributeError: 'NoneType' object has no attribute 'upper'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- Following the tutorial at https://docs.microsoft.com/en-gb/learn/modules/host-a-web-app-with-azure-app-service/6-exercise-deploy-your-code-to-app-service?pivots=csharp the error occurs after entering these commands:
-APPNAME=$(az webapp list --query [0].name --output tsv)
-APPRG=$(az webapp list --query [0].resourceGroup --output tsv)
-APPPLAN=$(az appservice plan list --query [0].name --output tsv)
-APPSKU=$(az appservice plan list --query [0].sku.name --output tsv)
-APPLOCATION=$(az appservice plan list --query [0].location --output tsv)
- az webapp up --name $APPNAME --resource-group $APPRG --plan $APPPLAN --sku $APPSKU --location ""$APPLOCATION""
## Expected Behavior
The Test App deploys as outlined in the tutorial under the step ""Exercise - Deploy your code to App Service""
## Actual Behavior
The error shown above occurs after entering the final command
## Environment Summary
```
Linux-4.15.0-1063-azure-x86_64-with-debian-stretch-sid
Python 3.6.5
Shell: bash
azure-cli 2.0.76
```
## Additional Context
Sandbox is activated and an App Service is deployed
",1.0,"Error while following the ""Host a web application with Azure App service"" tutorial -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp up`
**Errors:**
```
'NoneType' object has no attribute 'upper'
Traceback (most recent call last):
python3.6/site-packages/knack/cli.py, ln 206, in invoke
cmd_result = self.invocation.execute(args)
cli/core/commands/__init__.py, ln 603, in execute
raise ex
cli/core/commands/__init__.py, ln 661, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
cli/core/commands/__init__.py, ln 652, in _run_job
cmd_copy.exception_handler(ex)
...
cli/command_modules/appservice/custom.py, ln 2993, in webapp_up
create_app_service_plan(cmd, rg_name, plan, _is_linux, False, sku, 1 if _is_linux else None, location)
cli/command_modules/appservice/custom.py, ln 1390, in create_app_service_plan
sku = _normalize_sku(sku)
cli/command_modules/appservice/utils.py, ln 20, in _normalize_sku
sku = sku.upper()
AttributeError: 'NoneType' object has no attribute 'upper'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- Following the tutorial at https://docs.microsoft.com/en-gb/learn/modules/host-a-web-app-with-azure-app-service/6-exercise-deploy-your-code-to-app-service?pivots=csharp the error occurs after entering these commands:
-APPNAME=$(az webapp list --query [0].name --output tsv)
-APPRG=$(az webapp list --query [0].resourceGroup --output tsv)
-APPPLAN=$(az appservice plan list --query [0].name --output tsv)
-APPSKU=$(az appservice plan list --query [0].sku.name --output tsv)
-APPLOCATION=$(az appservice plan list --query [0].location --output tsv)
- az webapp up --name $APPNAME --resource-group $APPRG --plan $APPPLAN --sku $APPSKU --location ""$APPLOCATION""
## Expected Behavior
The Test App deploys as outlined in the tutorial under the step ""Exercise - Deploy your code to App Service""
## Actual Behavior
The error shown above occurs after entering the final command
## Environment Summary
```
Linux-4.15.0-1063-azure-x86_64-with-debian-stretch-sid
Python 3.6.5
Shell: bash
azure-cli 2.0.76
```
## Additional Context
Sandbox is activated and an App Service is deployed
",0,error while following the host a web application with azure app service tutorial this is autogenerated please review and update as needed describe the bug command name az webapp up errors nonetype object has no attribute upper traceback most recent call last site packages knack cli py ln in invoke cmd result self invocation execute args cli core commands init py ln in execute raise ex cli core commands init py ln in run jobs serially results append self run job expanded arg cmd copy cli core commands init py ln in run job cmd copy exception handler ex cli command modules appservice custom py ln in webapp up create app service plan cmd rg name plan is linux false sku if is linux else none location cli command modules appservice custom py ln in create app service plan sku normalize sku sku cli command modules appservice utils py ln in normalize sku sku sku upper attributeerror nonetype object has no attribute upper to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information following the tutorial at the error occurs after entering these commands appname az webapp list query name output tsv apprg az webapp list query resourcegroup output tsv appplan az appservice plan list query name output tsv appsku az appservice plan list query sku name output tsv applocation az appservice plan list query location output tsv az webapp up name appname resource group apprg plan appplan sku appsku location applocation expected behavior the test app deploys as outlined in the tutorial under the step exercise deploy your code to app service actual behavior the error shown above occurs after entering the final command environment summary linux azure with debian stretch sid python shell bash azure cli additional context sandbox is activated and an app service is deployed ,0
75233,9214850816.0,IssuesEvent,2019-03-10 23:10:28,ServiceInnovationLab/PresenceChecker,https://api.github.com/repos/ServiceInnovationLab/PresenceChecker,closed,Presentation on call-outs / implications of the current solution,design development review,"As the citizenship team
We want to be able to put together a presentation that calls out any implications with the process and the demo data that we've reviewed.
A / C
- [x] Should show any implications, pain points of the current process
- [x] Should show analysis of demo data
requires #62",1.0,"Presentation on call-outs / implications of the current solution - As the citizenship team
We want to be able to put together a presentation that calls out any implications with the process and the demo data that we've reviewed.
A / C
- [x] Should show any implications, pain points of the current process
- [x] Should show analysis of demo data
requires #62",0,presentation on call outs implications of the current solution as the citizenship team we want to be able to put together a presentation that calls out any implications with the process and the demo data that we ve reviewed a c should show any implications pain points of the current process should show analysis of demo data requires ,0
363204,25413313469.0,IssuesEvent,2022-11-22 21:06:09,ruthlennonatu/groot22,https://api.github.com/repos/ruthlennonatu/groot22,closed,As a customer I want to be able to use the product with ease so that my application process will be as simple as possible.,documentation enhancement,"Description:
A merge request with the dev branch must be made and the documentation containing Information about automated Java Documentation Tools
Acceptance Criteria:
Resolve issue.
DoD:
Merge request
Have a document containing where to find information on Java Documentation",1.0,"As a customer I want to be able to use the product with ease so that my application process will be as simple as possible. - Description:
A merge request with the dev branch must be made and the documentation containing Information about automated Java Documentation Tools
Acceptance Criteria:
Resolve issue.
DoD:
Merge request
Have a document containing where to find information on Java Documentation",0,as a customer i want to be able to use the product with ease so that my application process will be as simple as possible description a merge request with the dev branch must be made and the documentation containing information about automated java documentation tools acceptance criteria resolve issue dod merge request have a document containing where to find information on java documentation,0
550507,16114391082.0,IssuesEvent,2021-04-28 04:38:26,calyco-yale/calyco,https://api.github.com/repos/calyco-yale/calyco,closed,Add Push Notifications,priority: high,"Notify users whenever an invite or friend request is sent to them, and (by default) 10 minutes prior to the event.",1.0,"Add Push Notifications - Notify users whenever an invite or friend request is sent to them, and (by default) 10 minutes prior to the event.",0,add push notifications notify users whenever an invite or friend request is sent to them and by default minutes prior to the event ,0
113914,11826473940.0,IssuesEvent,2020-03-21 18:03:16,coatk1/playground,https://api.github.com/repos/coatk1/playground,opened,[DOCUMENTATION],documentation,"**Does the project need documentation**
Explain what the project need further explanantion on (i.e. what does the project do, examples on usage, etc).
**Does the source code need documentation**
Explain what the source code need further explanantion on (i.e. what does a module or class do, examples on usage, etc).
https://help.github.com/en/github/setting-up-and-managing-organizations-and-teams/managing-default-labels-for-repositories-in-your-organization",1.0,"[DOCUMENTATION] - **Does the project need documentation**
Explain what the project need further explanantion on (i.e. what does the project do, examples on usage, etc).
**Does the source code need documentation**
Explain what the source code need further explanantion on (i.e. what does a module or class do, examples on usage, etc).
https://help.github.com/en/github/setting-up-and-managing-organizations-and-teams/managing-default-labels-for-repositories-in-your-organization",0, does the project need documentation explain what the project need further explanantion on i e what does the project do examples on usage etc does the source code need documentation explain what the source code need further explanantion on i e what does a module or class do examples on usage etc ,0
5434,27243567134.0,IssuesEvent,2023-02-21 22:57:10,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,Unable to run docker in ARM Architecture,stage/needs-investigation maintainer/need-followup platform/mac/arm,"
### Description:
### Steps to reproduce:
1. Step : 1 Create a YAML for lambda that uses Docker Image
2. Step: 2 Add Docker File in the metadata of yaml which is similar to
`FROM python:3.6
WORKDIR /src
COPY main.py requirements.txt config.json ./
RUN apt-get update && apt-get install make git
RUN apt-get install -y apt-utils
RUN apt-get install -y cmake
RUN apt-get install -y librdkafka-dev
RUN pip install -r requirements.txt
ENTRYPOINT [ ""/usr/local/bin/python"", ""-m"", ""awslambdaric"" ]
CMD [""main.lambda_handler""]`
3. Fails as it tries to build in arm architecture
### Observed result:
Fails to build the image
`creating build/temp.linux-aarch64-3.6/tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/usr/local/include/python3.6m -c /tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.c -o build/temp.linux-aarch64-3.6/tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.o
In file included from /tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.c:17:
/tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.h:66:2: error: #error ""confluent-kafka-python requires librdkafka v1.6.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html""
#error ""confluent-kafka-python requires librdkafka v1.6.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html""`
### Expected result:
SAM CLI should automatically build Docker image for x86 till the support of Lambda is ready for Graviton
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:MacOs M1
2. If using SAM CLI, `sam --version`: SAM CLI, version 1.24.0
3. AWS region: ap-southeast-2
`Add --debug flag to any SAM CLI commands you are running`
",True,"Unable to run docker in ARM Architecture -
### Description:
### Steps to reproduce:
1. Step : 1 Create a YAML for lambda that uses Docker Image
2. Step: 2 Add Docker File in the metadata of yaml which is similar to
`FROM python:3.6
WORKDIR /src
COPY main.py requirements.txt config.json ./
RUN apt-get update && apt-get install make git
RUN apt-get install -y apt-utils
RUN apt-get install -y cmake
RUN apt-get install -y librdkafka-dev
RUN pip install -r requirements.txt
ENTRYPOINT [ ""/usr/local/bin/python"", ""-m"", ""awslambdaric"" ]
CMD [""main.lambda_handler""]`
3. Fails as it tries to build in arm architecture
### Observed result:
Fails to build the image
`creating build/temp.linux-aarch64-3.6/tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/usr/local/include/python3.6m -c /tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.c -o build/temp.linux-aarch64-3.6/tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.o
In file included from /tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.c:17:
/tmp/pip-install-bur4_y1q/confluent-kafka_3a16b52446ce4d7a82d5dbf75653e9f4/src/confluent_kafka/src/confluent_kafka.h:66:2: error: #error ""confluent-kafka-python requires librdkafka v1.6.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html""
#error ""confluent-kafka-python requires librdkafka v1.6.0 or later. Install the latest version of librdkafka from the Confluent repositories, see http://docs.confluent.io/current/installation.html""`
### Expected result:
SAM CLI should automatically build Docker image for x86 till the support of Lambda is ready for Graviton
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:MacOs M1
2. If using SAM CLI, `sam --version`: SAM CLI, version 1.24.0
3. AWS region: ap-southeast-2
`Add --debug flag to any SAM CLI commands you are running`
",1,unable to run docker in arm architecture make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description steps to reproduce step create a yaml for lambda that uses docker image step add docker file in the metadata of yaml which is similar to from python workdir src copy main py requirements txt config json run apt get update apt get install make git run apt get install y apt utils run apt get install y cmake run apt get install y librdkafka dev run pip install r requirements txt entrypoint cmd fails as it tries to build in arm architecture observed result fails to build the image creating build temp linux tmp pip install confluent kafka src confluent kafka src gcc pthread wno unused result wsign compare dndebug g fwrapv wall fpic i usr local include c tmp pip install confluent kafka src confluent kafka src confluent kafka c o build temp linux tmp pip install confluent kafka src confluent kafka src confluent kafka o in file included from tmp pip install confluent kafka src confluent kafka src confluent kafka c tmp pip install confluent kafka src confluent kafka src confluent kafka h error error confluent kafka python requires librdkafka or later install the latest version of librdkafka from the confluent repositories see error confluent kafka python requires librdkafka or later install the latest version of librdkafka from the confluent repositories see expected result sam cli should automatically build docker image for till the support of lambda is ready for graviton additional environment details ex windows mac amazon linux etc os macos if using sam cli sam version sam cli version aws region ap southeast add debug flag to any sam cli commands you are running ,1
325182,27853596434.0,IssuesEvent,2023-03-20 20:44:29,nitnelave/lldap,https://api.github.com/repos/nitnelave/lldap,closed,Upgrade to a recent version of Yew,enhancement help wanted dependencies rust frontend tests,"The front-end was written with yew 0.18, and from 0.19 they have made significant breaking changes. As a result, updating is hard.
However, it is required to address #247 (and maybe #392), as well as keep up with the best practices. It would also help clean up the code. Potentially as well add testing.
Another option, though much heavier, would be to rewrite the entire frontend since it's not too complicated. This could be done either in another, more stable Rust framework, or in another language (typescript?). Rust is preferred for the compatibility with the authentication protocol OPAQUE.",1.0,"Upgrade to a recent version of Yew - The front-end was written with yew 0.18, and from 0.19 they have made significant breaking changes. As a result, updating is hard.
However, it is required to address #247 (and maybe #392), as well as keep up with the best practices. It would also help clean up the code. Potentially as well add testing.
Another option, though much heavier, would be to rewrite the entire frontend since it's not too complicated. This could be done either in another, more stable Rust framework, or in another language (typescript?). Rust is preferred for the compatibility with the authentication protocol OPAQUE.",0,upgrade to a recent version of yew the front end was written with yew and from they have made significant breaking changes as a result updating is hard however it is required to address and maybe as well as keep up with the best practices it would also help clean up the code potentially as well add testing another option though much heavier would be to rewrite the entire frontend since it s not too complicated this could be done either in another more stable rust framework or in another language typescript rust is preferred for the compatibility with the authentication protocol opaque ,0
218566,24376064675.0,IssuesEvent,2022-10-04 01:04:59,joshnewton31080/WebGoat,https://api.github.com/repos/joshnewton31080/WebGoat,opened,CVE-2022-42004 (Medium) detected in jackson-databind-2.12.4.jar,security vulnerability,"## CVE-2022-42004 - Medium Severity Vulnerability
Vulnerable Library - jackson-databind-2.12.4.jar
General data-binding functionality for Jackson: works on core streaming API
Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.4/jackson-databind-2.12.4.jar
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
",0,cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file webgoat server pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jjwt jar root library x jackson databind jar vulnerable library found in base branch develop vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind ,0
2755,9872875306.0,IssuesEvent,2019-06-22 09:01:11,arcticicestudio/snowsaw,https://api.github.com/repos/arcticicestudio/snowsaw,opened,Git ignore and attribute pattern,context-workflow scope-maintainability type-task,"
Add the [`.gitattributes`][gitattributes] configuration file to allow pattern handling and update the [`.gitignore`][gitignore] file to match the latest _Arctic Ice Studio_ project defaults.
[gitattributes]: https://git-scm.com/docs/gitattributes
[gitignore]: https://git-scm.com/docs/gitignore",True,"Git ignore and attribute pattern -
Add the [`.gitattributes`][gitattributes] configuration file to allow pattern handling and update the [`.gitignore`][gitignore] file to match the latest _Arctic Ice Studio_ project defaults.
[gitattributes]: https://git-scm.com/docs/gitattributes
[gitignore]: https://git-scm.com/docs/gitignore",1,git ignore and attribute pattern add the configuration file to allow pattern handling and update the file to match the latest arctic ice studio project defaults ,1
191695,15301537665.0,IssuesEvent,2021-02-24 13:44:28,crowdsecurity/crowdsec,https://api.github.com/repos/crowdsecurity/crowdsec,opened,Improvement/Documentation multiple goroutines ,documentation enhancement,"**Is your feature request related to a problem? Please describe.**
At the moment, if one wants better performance, he can add goroutines for parser, leakybucket and output stuff. But this is not documented
**Describe the solution you'd like**
Document this feature.
",1.0,"Improvement/Documentation multiple goroutines - **Is your feature request related to a problem? Please describe.**
At the moment, if one wants better performance, he can add goroutines for parser, leakybucket and output stuff. But this is not documented
**Describe the solution you'd like**
Document this feature.
",0,improvement documentation multiple goroutines is your feature request related to a problem please describe at the moment if one wants better performance he can add goroutines for parser leakybucket and output stuff but this is not documented describe the solution you d like document this feature ,0
37135,15180962418.0,IssuesEvent,2021-02-15 01:54:24,Geonovum/disgeo-arch,https://api.github.com/repos/Geonovum/disgeo-arch,closed,5.2.3.1 Afgeleide opslag,Component Opslag In Behandeling In behandeling - voorstel servicelayering Service layering,"In de tekst staat:
> Ten behoeve van afname van gegevens en informatie is naar verwachting in de technische uitwerking afgeleide opslag nodig. Dit is geen zelfstandige component, maar een onderdeel van Afname van gegevens en informatie.
Geen zelfstandige component? Vraag is of dit een juiste keuze is, kan immers gaan om enorme hoeveelheden te verrijken en verrijkte gegevens.
En verder:
> Uitgangspunt voor deze vereisten is dat het koppelvlak tussen de componenten Opslag en Afgeleide Opslag een intern koppelvlak is waarvoor geen vereisten gelden m.b.t. het gebruik van open, leveranciersonafhankelijke standaarden en technologieën.
Nee, omwille van ontkoppeling en portabiliteit ook hiervoor altijd de beschikbare data (technische) services gebruiken (eat your own dogfood).",2.0,"5.2.3.1 Afgeleide opslag - In de tekst staat:
> Ten behoeve van afname van gegevens en informatie is naar verwachting in de technische uitwerking afgeleide opslag nodig. Dit is geen zelfstandige component, maar een onderdeel van Afname van gegevens en informatie.
Geen zelfstandige component? Vraag is of dit een juiste keuze is, kan immers gaan om enorme hoeveelheden te verrijken en verrijkte gegevens.
En verder:
> Uitgangspunt voor deze vereisten is dat het koppelvlak tussen de componenten Opslag en Afgeleide Opslag een intern koppelvlak is waarvoor geen vereisten gelden m.b.t. het gebruik van open, leveranciersonafhankelijke standaarden en technologieën.
Nee, omwille van ontkoppeling en portabiliteit ook hiervoor altijd de beschikbare data (technische) services gebruiken (eat your own dogfood).",0, afgeleide opslag in de tekst staat ten behoeve van afname van gegevens en informatie is naar verwachting in de technische uitwerking afgeleide opslag nodig dit is geen zelfstandige component maar een onderdeel van afname van gegevens en informatie geen zelfstandige component vraag is of dit een juiste keuze is kan immers gaan om enorme hoeveelheden te verrijken en verrijkte gegevens en verder uitgangspunt voor deze vereisten is dat het koppelvlak tussen de componenten opslag en afgeleide opslag een intern koppelvlak is waarvoor geen vereisten gelden m b t het gebruik van open leveranciersonafhankelijke standaarden en technologieën nee omwille van ontkoppeling en portabiliteit ook hiervoor altijd de beschikbare data technische services gebruiken eat your own dogfood ,0
2699,9439262692.0,IssuesEvent,2019-04-14 08:52:59,react-native-community/react-native-cameraroll,https://api.github.com/repos/react-native-community/react-native-cameraroll,closed,Camera Roll not returning any photos on iOS,bug reproduced by maintainer,"I reported this bug in the react-native repo here at [facebook/react-native#24140](https://github.com/facebook/react-native/issues/24140) , but they said to post it here.
## 🐛 Bug Report
When fetching photos from the Camera Roll on **iOS** by calling `CameraRoll.getPhotos()`, it always return an empty array of edges in the data. This problem is not present when I run it on Android nor in a Snack. I tried implementing the [community version](https://github.com/react-native-community/react-native-cameraroll) and the [built-in version](https://facebook.github.io/react-native/docs/cameraroll) of the Camera Roll, but the problem persisted. I also tried running it on the iOS simulator, on an iPhone, and an iPad with no success.
## To Reproduce
1. Create a new react native project (with `react-native init`)
2. Link the Camera Roll library
a. With the RCTCameraRoll library as described [here](https://facebook.github.io/react-native/docs/cameraroll) from facebook's website
b. Or with the RNCCameraRoll library as described in your README
3. Add the permission keys in the Info.plist as described [here](https://facebook.github.io/react-native/docs/cameraroll#permissions) from facebook's website
4. Copy paste the [sample code](https://facebook.github.io/react-native/docs/cameraroll#example) from facebook's website
5. Run `react-native start --reset-cache` (Solved the problems I was having with metro)
5. Run `react-native run-ios`
## Expected Behavior
I would expect the Camera Roll getPhotos function to return a populated edges array in the edges with the information/URI of images in the phones camera roll.
## Code Example
Cannot replicate bug in a [Snack](https://snack.expo.io/@benjeau/react-native-camera-roll-ios-issue).
**With the built-in Camera Roll**
Here is a [repo](https://github.com/BenJeau/reactNativeCameraRollIssue/tree/1919797c473e93c37f56ee1af79ca52dc361553d) with the code example. I also uploaded the iOS release app file to [appetize.io](https://appetize.io/app/xcva0fr6xhqtyt1yvt1jywy61g), which does have the problem.
**With the community version**
Here is a [repo](https://github.com/BenJeau/reactNativeCameraRollIssue/tree/a54275c2283e627dd83d94c338a621349e8cb311) with the code example. I also uploaded the iOS release app file to [appetize.io](https://appetize.io/app/z6je8vmbc6um2d7cd5y46wd188), which does have the same problem.
## Environment
Output of the `react-native info` command
```
React Native Environment Info:
System:
OS: macOS High Sierra 10.13.6
CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Memory: 18.59 MB / 16.00 GB
Shell: 3.2.57 - /bin/bash
Binaries:
Node: 11.12.0 - /usr/local/bin/node
Yarn: 1.15.2 - /usr/local/bin/yarn
npm: 6.7.0 - /usr/local/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.1, macOS 10.14, tvOS 12.1, watchOS 5.1
Android SDK:
API Levels: 21, 23, 25, 26, 27, 28
Build Tools: 21.1.2, 23.0.3, 25.0.2, 26.0.2, 27.0.3, 28.0.2, 28.0.3
System Images: android-28 | Google APIs Intel x86 Atom
IDEs:
Android Studio: 3.3 AI-182.5107.16.33.5264788
Xcode: 10.1/10B61 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.3 => 16.8.3
react-native: 0.59.1 => 0.59.1
npmGlobalPackages:
react-native-cli: 2.0.1
```",True,"Camera Roll not returning any photos on iOS - I reported this bug in the react-native repo here at [facebook/react-native#24140](https://github.com/facebook/react-native/issues/24140) , but they said to post it here.
## 🐛 Bug Report
When fetching photos from the Camera Roll on **iOS** by calling `CameraRoll.getPhotos()`, it always return an empty array of edges in the data. This problem is not present when I run it on Android nor in a Snack. I tried implementing the [community version](https://github.com/react-native-community/react-native-cameraroll) and the [built-in version](https://facebook.github.io/react-native/docs/cameraroll) of the Camera Roll, but the problem persisted. I also tried running it on the iOS simulator, on an iPhone, and an iPad with no success.
## To Reproduce
1. Create a new react native project (with `react-native init`)
2. Link the Camera Roll library
a. With the RCTCameraRoll library as described [here](https://facebook.github.io/react-native/docs/cameraroll) from facebook's website
b. Or with the RNCCameraRoll library as described in your README
3. Add the permission keys in the Info.plist as described [here](https://facebook.github.io/react-native/docs/cameraroll#permissions) from facebook's website
4. Copy paste the [sample code](https://facebook.github.io/react-native/docs/cameraroll#example) from facebook's website
5. Run `react-native start --reset-cache` (Solved the problems I was having with metro)
5. Run `react-native run-ios`
## Expected Behavior
I would expect the Camera Roll getPhotos function to return a populated edges array in the edges with the information/URI of images in the phones camera roll.
## Code Example
Cannot replicate bug in a [Snack](https://snack.expo.io/@benjeau/react-native-camera-roll-ios-issue).
**With the built-in Camera Roll**
Here is a [repo](https://github.com/BenJeau/reactNativeCameraRollIssue/tree/1919797c473e93c37f56ee1af79ca52dc361553d) with the code example. I also uploaded the iOS release app file to [appetize.io](https://appetize.io/app/xcva0fr6xhqtyt1yvt1jywy61g), which does have the problem.
**With the community version**
Here is a [repo](https://github.com/BenJeau/reactNativeCameraRollIssue/tree/a54275c2283e627dd83d94c338a621349e8cb311) with the code example. I also uploaded the iOS release app file to [appetize.io](https://appetize.io/app/z6je8vmbc6um2d7cd5y46wd188), which does have the same problem.
## Environment
Output of the `react-native info` command
```
React Native Environment Info:
System:
OS: macOS High Sierra 10.13.6
CPU: (12) x64 Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
Memory: 18.59 MB / 16.00 GB
Shell: 3.2.57 - /bin/bash
Binaries:
Node: 11.12.0 - /usr/local/bin/node
Yarn: 1.15.2 - /usr/local/bin/yarn
npm: 6.7.0 - /usr/local/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.1, macOS 10.14, tvOS 12.1, watchOS 5.1
Android SDK:
API Levels: 21, 23, 25, 26, 27, 28
Build Tools: 21.1.2, 23.0.3, 25.0.2, 26.0.2, 27.0.3, 28.0.2, 28.0.3
System Images: android-28 | Google APIs Intel x86 Atom
IDEs:
Android Studio: 3.3 AI-182.5107.16.33.5264788
Xcode: 10.1/10B61 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.3 => 16.8.3
react-native: 0.59.1 => 0.59.1
npmGlobalPackages:
react-native-cli: 2.0.1
```",1,camera roll not returning any photos on ios i reported this bug in the react native repo here at but they said to post it here 🐛 bug report when fetching photos from the camera roll on ios by calling cameraroll getphotos it always return an empty array of edges in the data this problem is not present when i run it on android nor in a snack i tried implementing the and the of the camera roll but the problem persisted i also tried running it on the ios simulator on an iphone and an ipad with no success to reproduce create a new react native project with react native init link the camera roll library a with the rctcameraroll library as described from facebook s website b or with the rnccameraroll library as described in your readme add the permission keys in the info plist as described from facebook s website copy paste the from facebook s website run react native start reset cache solved the problems i was having with metro run react native run ios expected behavior i would expect the camera roll getphotos function to return a populated edges array in the edges with the information uri of images in the phones camera roll code example cannot replicate bug in a with the built in camera roll here is a with the code example i also uploaded the ios release app file to which does have the problem with the community version here is a with the code example i also uploaded the ios release app file to which does have the same problem environment output of the react native info command react native environment info system os macos high sierra cpu intel r core tm cpu memory mb gb shell bin bash binaries node usr local bin node yarn usr local bin yarn npm usr local bin npm watchman usr local bin watchman sdks ios sdk platforms ios macos tvos watchos android sdk api levels build tools system images android google apis intel atom ides android studio ai xcode usr bin xcodebuild npmpackages react react native npmglobalpackages react native cli ,1
21127,6980965289.0,IssuesEvent,2017-12-13 05:14:09,hashicorp/packer,https://api.github.com/repos/hashicorp/packer,closed,Breaking change in 1.1.3,bug builder/amazon docs,"[This commit](https://github.com/hashicorp/packer/commit/a90c45d9bb3f2abd56ea77c8a456df19baaa60a7#diff-76f53be4e00c8508514464a9b9235c4e) which made it into the `1.1.3` release introduces a dependency in AWS for the `ec2:DescribeInstanceStatus` permission on the role that is building AMI's.
This broke our pipelines which were previously working on `1.1.2`. (We used the docker `light` image, which wasn't pinned so it automatically put us on the latest release).
Anyway, the new permission it should probably be documented at https://www.packer.io/docs/builders/amazon.html#using-an-iam-task-or-instance-role
",1.0,"Breaking change in 1.1.3 - [This commit](https://github.com/hashicorp/packer/commit/a90c45d9bb3f2abd56ea77c8a456df19baaa60a7#diff-76f53be4e00c8508514464a9b9235c4e) which made it into the `1.1.3` release introduces a dependency in AWS for the `ec2:DescribeInstanceStatus` permission on the role that is building AMI's.
This broke our pipelines which were previously working on `1.1.2`. (We used the docker `light` image, which wasn't pinned so it automatically put us on the latest release).
Anyway, the new permission it should probably be documented at https://www.packer.io/docs/builders/amazon.html#using-an-iam-task-or-instance-role
",0,breaking change in which made it into the release introduces a dependency in aws for the describeinstancestatus permission on the role that is building ami s this broke our pipelines which were previously working on we used the docker light image which wasn t pinned so it automatically put us on the latest release anyway the new permission it should probably be documented at ,0
703549,24165908642.0,IssuesEvent,2022-09-22 15:01:20,NickleDave/songdkl,https://api.github.com/repos/NickleDave/songdkl,closed,pin scikit-learn version to less than / equal to 0.18.2,bug High Priority,"scripts in PCB paper use `GMM`, deprecated in version 0.18.2
https://scikit-learn.org/0.19/whats_new.html
@dgmets reports that
> there are differences in the Likelihoods emitted from GaussianMixture as compared to the previous GMM module. In particular, the Likelihoods are strongly impacted by the covariance type used. This didn't used to be the case. Anyway, I am going to try to run this down... The current Dkl measures are proportional to the previous ones, but not the same.
in the meantime we can pin to a version before deprecation.
Using 0.18.2 gives a DeperecationWarning that can be annoying when it gets dumped to stdout 150k times, might be worth using a slightly earlier version
",1.0,"pin scikit-learn version to less than / equal to 0.18.2 - scripts in PCB paper use `GMM`, deprecated in version 0.18.2
https://scikit-learn.org/0.19/whats_new.html
@dgmets reports that
> there are differences in the Likelihoods emitted from GaussianMixture as compared to the previous GMM module. In particular, the Likelihoods are strongly impacted by the covariance type used. This didn't used to be the case. Anyway, I am going to try to run this down... The current Dkl measures are proportional to the previous ones, but not the same.
in the meantime we can pin to a version before deprecation.
Using 0.18.2 gives a DeperecationWarning that can be annoying when it gets dumped to stdout 150k times, might be worth using a slightly earlier version
",0,pin scikit learn version to less than equal to scripts in pcb paper use gmm deprecated in version dgmets reports that there are differences in the likelihoods emitted from gaussianmixture as compared to the previous gmm module in particular the likelihoods are strongly impacted by the covariance type used this didn t used to be the case anyway i am going to try to run this down the current dkl measures are proportional to the previous ones but not the same in the meantime we can pin to a version before deprecation using gives a deperecationwarning that can be annoying when it gets dumped to stdout times might be worth using a slightly earlier version ,0
496570,14349811008.0,IssuesEvent,2020-11-29 18:09:51,Poobslag/turbofat,https://api.github.com/repos/Poobslag/turbofat,closed,Add environment details: static objects like trees and bushes which get in your way,priority-3,"These should be used for decoration, and maybe occasionally as obstacles.",1.0,"Add environment details: static objects like trees and bushes which get in your way - These should be used for decoration, and maybe occasionally as obstacles.",0,add environment details static objects like trees and bushes which get in your way these should be used for decoration and maybe occasionally as obstacles ,0
56794,23904369190.0,IssuesEvent,2022-09-08 22:15:06,MicrosoftDocs/windowsserverdocs,https://api.github.com/repos/MicrosoftDocs/windowsserverdocs,closed,Can't follow directions on thispage,Pri1 windows-server/prod remote-desktop-services/tech,"
""From the Connection Center, tap the overflow menu (...) on the command bar at the top of the client.""
Where is the Connection center? What does it look like? I don't see (...). I'm stuck. Please update instructions with screenshots. Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 09ab98fc-fea2-a11a-e5ca-de1430211d97
* Version Independent ID: 4d9943c3-fef7-ca86-bd03-1241a04b8135
* Content: [Get started with the Windows Desktop client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client)
* Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md)
* Product: **windows-server**
* Technology: **remote-desktop-services**
* GitHub Login: @Heidilohr
* Microsoft Alias: **helohr**",1.0,"Can't follow directions on thispage -
""From the Connection Center, tap the overflow menu (...) on the command bar at the top of the client.""
Where is the Connection center? What does it look like? I don't see (...). I'm stuck. Please update instructions with screenshots. Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 09ab98fc-fea2-a11a-e5ca-de1430211d97
* Version Independent ID: 4d9943c3-fef7-ca86-bd03-1241a04b8135
* Content: [Get started with the Windows Desktop client](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client)
* Content Source: [WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md](https://github.com/MicrosoftDocs/windowsserverdocs/blob/master/WindowsServerDocs/remote/remote-desktop-services/clients/windowsdesktop.md)
* Product: **windows-server**
* Technology: **remote-desktop-services**
* GitHub Login: @Heidilohr
* Microsoft Alias: **helohr**",0,can t follow directions on thispage from the connection center tap the overflow menu on the command bar at the top of the client where is the connection center what does it look like i don t see i m stuck please update instructions with screenshots thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product windows server technology remote desktop services github login heidilohr microsoft alias helohr ,0
1558,6572253629.0,IssuesEvent,2017-09-11 00:39:06,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,lvol module should not require size if origin is thin,affects_2.3 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lvol
##### ANSIBLE VERSION
verified that size requirement still exists in devel branch tip as of today
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When creating a thin snapshot, size is not necessary (this is, in fact, what `lvcreate` uses to differentiate a request to make thin snapshot from a new thick volume in same vg), yet the module requires it anyways for all state=present. For (only) this reason, creating thin snapshots is not possible.
##### STEPS TO REPRODUCE
```
- name: create_thin_snapshot
lvol:
lv: '{{lvname_origin}}'
vg: '{{vgname}}'
snapshot: '{{lvname_snap}}'
state: present
```
##### EXPECTED RESULTS
Should be created without issue. This corresponds to a simple:
```
lvcreate -sn {{lvname_snap}} {{vgname}}/{{lvname_origin}}
```
##### ACTUAL RESULTS
```
TASK [create_thin_snapshot lv={{lvname_origin}}, state=present, snapshot={{lvname_snap}}, vg={{vgname}}] ***
fatal: [myhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""No size given.""}
```
",True,"lvol module should not require size if origin is thin - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lvol
##### ANSIBLE VERSION
verified that size requirement still exists in devel branch tip as of today
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When creating a thin snapshot, size is not necessary (this is, in fact, what `lvcreate` uses to differentiate a request to make thin snapshot from a new thick volume in same vg), yet the module requires it anyways for all state=present. For (only) this reason, creating thin snapshots is not possible.
##### STEPS TO REPRODUCE
```
- name: create_thin_snapshot
lvol:
lv: '{{lvname_origin}}'
vg: '{{vgname}}'
snapshot: '{{lvname_snap}}'
state: present
```
##### EXPECTED RESULTS
Should be created without issue. This corresponds to a simple:
```
lvcreate -sn {{lvname_snap}} {{vgname}}/{{lvname_origin}}
```
##### ACTUAL RESULTS
```
TASK [create_thin_snapshot lv={{lvname_origin}}, state=present, snapshot={{lvname_snap}}, vg={{vgname}}] ***
fatal: [myhost]: FAILED! => {""changed"": false, ""failed"": true, ""msg"": ""No size given.""}
```
",1,lvol module should not require size if origin is thin issue type bug report component name lvol ansible version verified that size requirement still exists in devel branch tip as of today configuration n a os environment n a summary when creating a thin snapshot size is not necessary this is in fact what lvcreate uses to differentiate a request to make thin snapshot from a new thick volume in same vg yet the module requires it anyways for all state present for only this reason creating thin snapshots is not possible steps to reproduce name create thin snapshot lvol lv lvname origin vg vgname snapshot lvname snap state present expected results should be created without issue this corresponds to a simple lvcreate sn lvname snap vgname lvname origin actual results task fatal failed changed false failed true msg no size given ,1
110917,9483473481.0,IssuesEvent,2019-04-22 00:36:32,NayRojas/LIM008-fe-burger-queen,https://api.github.com/repos/NayRojas/LIM008-fe-burger-queen,closed,Ver resumen y el total de la compra,CSS3 JS Testing angular,"- [x] Crear la interfaz del componente order-items
- [x] Crear la interfaz del componente order-total
- [x] Crear fn de suma de precios
- [x] Crear template para pintar los elementos seleccionados del componente menu
",1.0,"Ver resumen y el total de la compra - - [x] Crear la interfaz del componente order-items
- [x] Crear la interfaz del componente order-total
- [x] Crear fn de suma de precios
- [x] Crear template para pintar los elementos seleccionados del componente menu
",0,ver resumen y el total de la compra crear la interfaz del componente order items crear la interfaz del componente order total crear fn de suma de precios crear template para pintar los elementos seleccionados del componente menu ,0
21,2517043870.0,IssuesEvent,2015-01-16 11:08:20,simplesamlphp/simplesamlphp,https://api.github.com/repos/simplesamlphp/simplesamlphp,closed,Remove long-deprecated IdP logout endpoints,enhancement low maintainability started,"More specifically:
* `www/saml2/idp/idpInitSingleLogoutServiceiFrame.php`
* `www/saml2/idp/SingleLogoutServiceiFrame.php`
* `www/saml2/idp/SingleLogoutServiceiFrameResponse.php`",True,"Remove long-deprecated IdP logout endpoints - More specifically:
* `www/saml2/idp/idpInitSingleLogoutServiceiFrame.php`
* `www/saml2/idp/SingleLogoutServiceiFrame.php`
* `www/saml2/idp/SingleLogoutServiceiFrameResponse.php`",1,remove long deprecated idp logout endpoints more specifically www idp idpinitsinglelogoutserviceiframe php www idp singlelogoutserviceiframe php www idp singlelogoutserviceiframeresponse php ,1
4962,25479207790.0,IssuesEvent,2022-11-25 17:57:00,bazelbuild/intellij,https://api.github.com/repos/bazelbuild/intellij,closed,Run gazelle on project sync,type: feature request P1 lang: go product: IntelliJ topic: sync product: GoLand awaiting-maintainer,"## Problem
For projects that use gazelle, moving a single file is a two-step process: First they must run gazelle, then they must sync to trigger re-indexing if IntelliJ didn't pick those changes up.
Ideally, this would be a single-click action, whereby a user only has to click ""sync"" after a refactoring is done.
## Proposed Solution
Run Gazelle on sync, at least for full, non-incremental syncs of projects, before we run the bazel build that enables the actual sync.
Regarding performance, Gazelle accepts packages as arguments, so I believe it should be possible to inspect the `.bazelproject` file to gather which packages it should sync, therefore correlating performance to the size of the indexed project.
The changes I imagine would be needed:
- Additional entries in the Bazel Plugin preferences, where the user can specify:
- The label of the Gazelle target to run.
- The frequency of the gazelle runs (on every sync, or only on full syncs).
- Custom logic to derive go packages from the `directories` entry in the `.bazelproject`.
- A call to gazelle during project sync, which would run before even the initial query.
I'm happy to spend the effort of implementing this, but as per the contributing guidelines I thought I'd ask first if this would be an interesting contribution, and if I missed important pieces.",True,"Run gazelle on project sync - ## Problem
For projects that use gazelle, moving a single file is a two-step process: First they must run gazelle, then they must sync to trigger re-indexing if IntelliJ didn't pick those changes up.
Ideally, this would be a single-click action, whereby a user only has to click ""sync"" after a refactoring is done.
## Proposed Solution
Run Gazelle on sync, at least for full, non-incremental syncs of projects, before we run the bazel build that enables the actual sync.
Regarding performance, Gazelle accepts packages as arguments, so I believe it should be possible to inspect the `.bazelproject` file to gather which packages it should sync, therefore correlating performance to the size of the indexed project.
The changes I imagine would be needed:
- Additional entries in the Bazel Plugin preferences, where the user can specify:
- The label of the Gazelle target to run.
- The frequency of the gazelle runs (on every sync, or only on full syncs).
- Custom logic to derive go packages from the `directories` entry in the `.bazelproject`.
- A call to gazelle during project sync, which would run before even the initial query.
I'm happy to spend the effort of implementing this, but as per the contributing guidelines I thought I'd ask first if this would be an interesting contribution, and if I missed important pieces.",1,run gazelle on project sync problem for projects that use gazelle moving a single file is a two step process first they must run gazelle then they must sync to trigger re indexing if intellij didn t pick those changes up ideally this would be a single click action whereby a user only has to click sync after a refactoring is done proposed solution run gazelle on sync at least for full non incremental syncs of projects before we run the bazel build that enables the actual sync regarding performance gazelle accepts packages as arguments so i believe it should be possible to inspect the bazelproject file to gather which packages it should sync therefore correlating performance to the size of the indexed project the changes i imagine would be needed additional entries in the bazel plugin preferences where the user can specify the label of the gazelle target to run the frequency of the gazelle runs on every sync or only on full syncs custom logic to derive go packages from the directories entry in the bazelproject a call to gazelle during project sync which would run before even the initial query i m happy to spend the effort of implementing this but as per the contributing guidelines i thought i d ask first if this would be an interesting contribution and if i missed important pieces ,1
834,4473414498.0,IssuesEvent,2016-08-26 03:50:08,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,problem with hash character (#) in path in fetch module,bug_report waiting_on_maintainer,"Issue Type: Bug Report
Ansible Version: ansible 1.9.3 (ansible-1.9.3-2.fc21)
Ansible Configuration: no changes to /etc/ansible/ansible.cfg made
Environment: Fedora 21, x86_64
Summary: When I use hash character (#) in path in fetch module, file is not fetched and checksum mismatch msg is returned instead
Steps To Reproduce:
1) create ~/ansible/ansible-error.yml playbook
---
- hosts: gluster-tst
remote_user: root
tasks:
- name: fetch module tester
fetch: src=/tmp/remote_file.txt dest=~/tmp/#test/ flat=yes
2] populate /tmp/remote_file.txt on remote host(s)
3] run playbook ansible-playbook ~/ansible/ansible-error.yml
PLAY [gluster-tst] ************************************************************
GATHERING FACTS ***************************************************************
ok: [gluster-tst01]
TASK: [fetch module tester] ***************************************************
failed: [gluster-tst01] => {""checksum"": null, ""dest"": ""/home/dron/tmp/#test/remote_file.txt"", ""failed"": true, ""file"": ""/tmp/remote_file.txt"", ""md5sum"": null, ""remote_checksum"": ""4fe0b800d221d1a61c44cd81d2975a288ffd22e4"", ""remote_md5sum"": null}
msg: checksum mismatch
Expected Results: fetched file
Actual Results: file is not fetched at all (does not exists locally) and msg: checksum mismatch is returned
",True,"problem with hash character (#) in path in fetch module - Issue Type: Bug Report
Ansible Version: ansible 1.9.3 (ansible-1.9.3-2.fc21)
Ansible Configuration: no changes to /etc/ansible/ansible.cfg made
Environment: Fedora 21, x86_64
Summary: When I use hash character (#) in path in fetch module, file is not fetched and checksum mismatch msg is returned instead
Steps To Reproduce:
1) create ~/ansible/ansible-error.yml playbook
---
- hosts: gluster-tst
remote_user: root
tasks:
- name: fetch module tester
fetch: src=/tmp/remote_file.txt dest=~/tmp/#test/ flat=yes
2] populate /tmp/remote_file.txt on remote host(s)
3] run playbook ansible-playbook ~/ansible/ansible-error.yml
PLAY [gluster-tst] ************************************************************
GATHERING FACTS ***************************************************************
ok: [gluster-tst01]
TASK: [fetch module tester] ***************************************************
failed: [gluster-tst01] => {""checksum"": null, ""dest"": ""/home/dron/tmp/#test/remote_file.txt"", ""failed"": true, ""file"": ""/tmp/remote_file.txt"", ""md5sum"": null, ""remote_checksum"": ""4fe0b800d221d1a61c44cd81d2975a288ffd22e4"", ""remote_md5sum"": null}
msg: checksum mismatch
Expected Results: fetched file
Actual Results: file is not fetched at all (does not exists locally) and msg: checksum mismatch is returned
",1,problem with hash character in path in fetch module issue type bug report ansible version ansible ansible ansible configuration no changes to etc ansible ansible cfg made environment fedora summary when i use hash character in path in fetch module file is not fetched and checksum mismatch msg is returned instead steps to reproduce create ansible ansible error yml playbook hosts gluster tst remote user root tasks name fetch module tester fetch src tmp remote file txt dest tmp test flat yes populate tmp remote file txt on remote host s run playbook ansible playbook ansible ansible error yml play gathering facts ok task failed checksum null dest home dron tmp test remote file txt failed true file tmp remote file txt null remote checksum remote null msg checksum mismatch expected results fetched file actual results file is not fetched at all does not exists locally and msg checksum mismatch is returned ,1
698508,23982997562.0,IssuesEvent,2022-09-13 16:29:57,bcgov/entity,https://api.github.com/repos/bcgov/entity,closed,Backend/Filer: business founding date mismatch between firms and other entity types,bug Priority1 ENTITY,"#### New description
Create UI is saving Start Date as `yyyy-mm-dd`, as expected. The Filer should take this and add the filing time, so that the eventual Founding Date is `yyyy-mm-ddThh:mm:ss` (in UTC). (Same thing with Dissolution Date -- please check if this is incorrect as well.)
(In the db, for some firms, Founding Date is `yyyy-mm-dd 00:00:00+00`, which is incorrect. If possible, please fix these in Dev db; they should include the UTC offset so that the date remains correct after Pacific time conversion.)
#### Old description
For a benefit company, the Founding Date property in the business response is an actual UTC date-time, eg:

However, for a firm, the Founding Date is actually just a date in Pacific timezone (ignore the zero time), eg:

**Proposed to do:**
- [ ] change `foundingDate` to a date-only (in Pacific timezone) for the entity types that this applies to
- [ ] rename `foundingDate` to `foundingDateTime` (or Founding Timestamp or something else that indicates a time is present, and which will be interpreted as a UTC datetime) for the entity types that this applies to
- [ ] or some other design such that the UI does not have to handle the same property in different ways depending on the entity type
Note that some of the changes above will impact Filings UI and possibly Create UI, since they expect to see the foundingDate property.
PS - Also look at this property for other entity types (eg, Coop).",1.0,"Backend/Filer: business founding date mismatch between firms and other entity types - #### New description
Create UI is saving Start Date as `yyyy-mm-dd`, as expected. The Filer should take this and add the filing time, so that the eventual Founding Date is `yyyy-mm-ddThh:mm:ss` (in UTC). (Same thing with Dissolution Date -- please check if this is incorrect as well.)
(In the db, for some firms, Founding Date is `yyyy-mm-dd 00:00:00+00`, which is incorrect. If possible, please fix these in Dev db; they should include the UTC offset so that the date remains correct after Pacific time conversion.)
#### Old description
For a benefit company, the Founding Date property in the business response is an actual UTC date-time, eg:

However, for a firm, the Founding Date is actually just a date in Pacific timezone (ignore the zero time), eg:

**Proposed to do:**
- [ ] change `foundingDate` to a date-only (in Pacific timezone) for the entity types that this applies to
- [ ] rename `foundingDate` to `foundingDateTime` (or Founding Timestamp or something else that indicates a time is present, and which will be interpreted as a UTC datetime) for the entity types that this applies to
- [ ] or some other design such that the UI does not have to handle the same property in different ways depending on the entity type
Note that some of the changes above will impact Filings UI and possibly Create UI, since they expect to see the foundingDate property.
PS - Also look at this property for other entity types (eg, Coop).",0,backend filer business founding date mismatch between firms and other entity types new description create ui is saving start date as yyyy mm dd as expected the filer should take this and add the filing time so that the eventual founding date is yyyy mm ddthh mm ss in utc same thing with dissolution date please check if this is incorrect as well in the db for some firms founding date is yyyy mm dd which is incorrect if possible please fix these in dev db they should include the utc offset so that the date remains correct after pacific time conversion old description for a benefit company the founding date property in the business response is an actual utc date time eg however for a firm the founding date is actually just a date in pacific timezone ignore the zero time eg proposed to do change foundingdate to a date only in pacific timezone for the entity types that this applies to rename foundingdate to foundingdatetime or founding timestamp or something else that indicates a time is present and which will be interpreted as a utc datetime for the entity types that this applies to or some other design such that the ui does not have to handle the same property in different ways depending on the entity type note that some of the changes above will impact filings ui and possibly create ui since they expect to see the foundingdate property ps also look at this property for other entity types eg coop ,0
3632,14680375621.0,IssuesEvent,2020-12-31 09:53:09,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,closed,Assert should be preceded and followed by a blank line,Area: analyzer Area: maintainability feature,"A call to `Assert` should be preceded by a blank line if the preceding line contains a call to something that is no `Assert`.
The reason is ease of reading.
Following should report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```
While following should **not** report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```",True,"Assert should be preceded and followed by a blank line - A call to `Assert` should be preceded by a blank line if the preceding line contains a call to something that is no `Assert`.
The reason is ease of reading.
Following should report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```
While following should **not** report a violation:
```c#
var x = 42;
var y = ""something"";
Assert.That(x, Is.EqualTo(42));
Assert.That(y, Is.EqualTo(""something""));
```",1,assert should be preceded and followed by a blank line a call to assert should be preceded by a blank line if the preceding line contains a call to something that is no assert the reason is ease of reading following should report a violation c var x var y something assert that x is equalto assert that y is equalto something while following should not report a violation c var x var y something assert that x is equalto assert that y is equalto something ,1
1719,6574483553.0,IssuesEvent,2017-09-11 13:03:37,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_service: api_version related problems,affects_2.1 bug_report cloud docker waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* docker_service
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
ansible 2.2.0.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
ansible.cfg:
```ini
[defaults]
inventory = inventory.ini
retry_files_enabled = False
```
##### OS / ENVIRONMENT
ansible target:
* Ubuntu Trusty
* docker-compose==1.7.0 and 1.9.0
##### SUMMARY
I'm unable to use the `docker_service` module because the `docker-compose` client version is incompatible with the server.
Setting the `api_version` either in the task or in the environment to ""auto"" (or to the server version) does not help; maybe related to #5295.
Ubuntu Trusty package a docker with API version 1.18, which translate to a `docker-compose` version 1.3.3 (API version changed just after 1.4.0rc3 for release 1.4.0), incompatible with the ansible module requiring a package version ≥ 1.7.
##### STEPS TO REPRODUCE
Sample playbook:
```yaml
---
-
hosts:
- all
tasks:
- name: 'docker compose'
environment:
DOCKER_API_VERSION: '1.18'
docker_service:
# there is a docker-compose.yml inside,
# not included in the example because it fail before...
project_src: '/srv/dockers/traefik'
api_version: '1.18'
pull: yes
...
```
##### EXPECTED RESULTS
Working role :^)
##### ACTUAL RESULTS
```
TASK [docker compose] **********************************************************
fatal: [docker02]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to docker02.prd.iaas-manager.m0.p.fti.net closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 929, in \r\n main()\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 924, in main\r\n result = ContainerManager(client).exec_module()\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 575, in exec_module\r\n result = self.cmd_up()\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 627, in cmd_up\r\n result.update(self.cmd_pull())\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 739, in cmd_pull\r\n image = service.image()\r\n File \""/usr/local/lib/python2.7/dist-packages/compose/service.py\"", line 307, in image\r\n return self.client.inspect_image(self.image_name)\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py\"", line 21, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/api/image.py\"", line 136, in inspect_image\r\n self._get(self._url(\""/images/{0}/json\"", image)), True\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/client.py\"", line 178, in _result\r\n self._raise_for_status(response)\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/client.py\"", line 173, in _raise_for_status\r\n raise errors.NotFound(e, response, explanation=explanation)\r\ndocker.errors.NotFound: 404 Client Error: Not Found (\""client and server don't have same version (client : 1.22, server: 1.18)\"")\r\n"", ""msg"": ""MODULE FAILURE""}
msg: MODULE FAILURE
```
As you can see, the error message says it tried with `(client : 1.22, server: 1.18)` and totally ignored the parameter.",True,"docker_service: api_version related problems - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* docker_service
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
ansible 2.2.0.0
config file = /usr/local/etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
ansible.cfg:
```ini
[defaults]
inventory = inventory.ini
retry_files_enabled = False
```
##### OS / ENVIRONMENT
ansible target:
* Ubuntu Trusty
* docker-compose==1.7.0 and 1.9.0
##### SUMMARY
I'm unable to use the `docker_service` module because the `docker-compose` client version is incompatible with the server.
Setting the `api_version` either in the task or in the environment to ""auto"" (or to the server version) does not help; maybe related to #5295.
Ubuntu Trusty package a docker with API version 1.18, which translate to a `docker-compose` version 1.3.3 (API version changed just after 1.4.0rc3 for release 1.4.0), incompatible with the ansible module requiring a package version ≥ 1.7.
##### STEPS TO REPRODUCE
Sample playbook:
```yaml
---
-
hosts:
- all
tasks:
- name: 'docker compose'
environment:
DOCKER_API_VERSION: '1.18'
docker_service:
# there is a docker-compose.yml inside,
# not included in the example because it fail before...
project_src: '/srv/dockers/traefik'
api_version: '1.18'
pull: yes
...
```
##### EXPECTED RESULTS
Working role :^)
##### ACTUAL RESULTS
```
TASK [docker compose] **********************************************************
fatal: [docker02]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": ""Shared connection to docker02.prd.iaas-manager.m0.p.fti.net closed.\r\n"", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 929, in \r\n main()\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 924, in main\r\n result = ContainerManager(client).exec_module()\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 575, in exec_module\r\n result = self.cmd_up()\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 627, in cmd_up\r\n result.update(self.cmd_pull())\r\n File \""/tmp/ansible_Ow_f99/ansible_module_docker_service.py\"", line 739, in cmd_pull\r\n image = service.image()\r\n File \""/usr/local/lib/python2.7/dist-packages/compose/service.py\"", line 307, in image\r\n return self.client.inspect_image(self.image_name)\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py\"", line 21, in wrapped\r\n return f(self, resource_id, *args, **kwargs)\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/api/image.py\"", line 136, in inspect_image\r\n self._get(self._url(\""/images/{0}/json\"", image)), True\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/client.py\"", line 178, in _result\r\n self._raise_for_status(response)\r\n File \""/usr/local/lib/python2.7/dist-packages/docker/client.py\"", line 173, in _raise_for_status\r\n raise errors.NotFound(e, response, explanation=explanation)\r\ndocker.errors.NotFound: 404 Client Error: Not Found (\""client and server don't have same version (client : 1.22, server: 1.18)\"")\r\n"", ""msg"": ""MODULE FAILURE""}
msg: MODULE FAILURE
```
As you can see, the error message says it tried with `(client : 1.22, server: 1.18)` and totally ignored the parameter.",1,docker service api version related problems issue type bug report component name docker service ansible version ansible config file usr local etc ansible ansible cfg configured module search path default w o overrides ansible config file usr local etc ansible ansible cfg configured module search path default w o overrides configuration ansible cfg ini inventory inventory ini retry files enabled false os environment ansible target ubuntu trusty docker compose and summary i m unable to use the docker service module because the docker compose client version is incompatible with the server setting the api version either in the task or in the environment to auto or to the server version does not help maybe related to ubuntu trusty package a docker with api version which translate to a docker compose version api version changed just after for release incompatible with the ansible module requiring a package version ≥ steps to reproduce sample playbook yaml hosts all tasks name docker compose environment docker api version docker service there is a docker compose yml inside not included in the example because it fail before project src srv dockers traefik api version pull yes expected results working role actual results task fatal failed changed false failed true module stderr shared connection to prd iaas manager p fti net closed r n module stdout traceback most recent call last r n file tmp ansible ow ansible module docker service py line in r n main r n file tmp ansible ow ansible module docker service py line in main r n result containermanager client exec module r n file tmp ansible ow ansible module docker service py line in exec module r n result self cmd up r n file tmp ansible ow ansible module docker service py line in cmd up r n result update self cmd pull r n file tmp ansible ow ansible module docker service py line in cmd pull r n image service image r n file usr local lib dist packages compose service py line in image r n return self client inspect image self image name r n file usr local lib dist packages docker utils decorators py line in wrapped r n return f self resource id args kwargs r n file usr local lib dist packages docker api image py line in inspect image r n self get self url images json image true r n file usr local lib dist packages docker client py line in result r n self raise for status response r n file usr local lib dist packages docker client py line in raise for status r n raise errors notfound e response explanation explanation r ndocker errors notfound client error not found client and server don t have same version client server r n msg module failure msg module failure as you can see the error message says it tried with client server and totally ignored the parameter ,1
224676,24783423457.0,IssuesEvent,2022-10-24 07:50:25,sast-automation-dev/openidm-community-edition-43,https://api.github.com/repos/sast-automation-dev/openidm-community-edition-43,opened,orientdb-server-1.3.0.jar: 1 vulnerabilities (highest severity is: 8.8),security vulnerability," Vulnerable Library - orientdb-server-1.3.0.jar
The JSONP endpoint in the Studio component in OrientDB Server Community Edition before 2.0.15 and 2.1.x before 2.1.1 does not properly restrict callback values, which allows remote attackers to conduct cross-site request forgery (CSRF) attacks, and obtain sensitive information, via a crafted HTTP request.
The JSONP endpoint in the Studio component in OrientDB Server Community Edition before 2.0.15 and 2.1.x before 2.1.1 does not properly restrict callback values, which allows remote attackers to conduct cross-site request forgery (CSRF) attacks, and obtain sensitive information, via a crafted HTTP request.
:rescue_worker_helmet: Automatic Remediation is available for this issue
***
:rescue_worker_helmet: Automatic Remediation is available for this issue.
",0,orientdb server jar vulnerabilities highest severity is vulnerable library orientdb server jar orientdb nosql document graph dbms library home page a href path to dependency file openidm repo orientdb pom xml path to vulnerable library entechnologies orientdb server orientdb server jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in orientdb server version remediation available high orientdb server jar direct details cve vulnerable library orientdb server jar orientdb nosql document graph dbms library home page a href path to dependency file openidm repo orientdb pom xml path to vulnerable library entechnologies orientdb server orientdb server jar dependency hierarchy x orientdb server jar vulnerable library found in head commit a href found in base branch master vulnerability details the jsonp endpoint in the studio component in orientdb server community edition before and x before does not properly restrict callback values which allows remote attackers to conduct cross site request forgery csrf attacks and obtain sensitive information via a crafted http request publish date dec am url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date dec am fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue ,0
422166,12267376537.0,IssuesEvent,2020-05-07 10:34:02,ooni/probe,https://api.github.com/repos/ooni/probe,closed,Discuss the react-native integration plan,discuss ooni/probe-mobile priority/medium research prototype,"We agreed we would be discussing how to move forward the react-native integration.
Relevant to this discussion is also the fact that airbnb stopped using it: https://medium.com/airbnb-engineering/sunsetting-react-native-1868ba28e30a.
We should research prior work to and come up with a set of questions we should answer in order fully evaluate if it's a good idea to move forward with the plan.",1.0,"Discuss the react-native integration plan - We agreed we would be discussing how to move forward the react-native integration.
Relevant to this discussion is also the fact that airbnb stopped using it: https://medium.com/airbnb-engineering/sunsetting-react-native-1868ba28e30a.
We should research prior work to and come up with a set of questions we should answer in order fully evaluate if it's a good idea to move forward with the plan.",0,discuss the react native integration plan we agreed we would be discussing how to move forward the react native integration relevant to this discussion is also the fact that airbnb stopped using it we should research prior work to and come up with a set of questions we should answer in order fully evaluate if it s a good idea to move forward with the plan ,0
4924,25316239216.0,IssuesEvent,2022-11-17 21:55:11,ipfs/ipfs-gui,https://api.github.com/repos/ipfs/ipfs-gui,closed,IPFS Gui & Tools ownership,kind/discussion P0 need/analysis need/maintainers-input need/community-input kind/question Epic,"I want to discuss defining and reducing the surface area of ownership of the GUI & Tools team.
Currently, there are a few areas where we define what the IPFS GUI / IPFS Gui & Tools / IPFS GUI Tools / @ipfs-gui team owns:
1. https://github.com/protocol/w3dt-stewards/blob/main/scripts/create-triage-links/src/repos.js
2. https://github.com/ipfs/ipfs-gui#all-projects
3. https://www.notion.so/pl-strflt/IPFS-GUI-3bc1c1bf54d74f928bf11ef59c876b74#3271edf4345e4a95b163066d5a9f5da6 (points to item 2 above)
### Questions
1. Can we drop support for any of the listed packages? Can any gui&tools packages/repos be archived/deprecated?
* We have already reduced triage-work to what is listed at https://github.com/protocol/w3dt-stewards/blob/main/scripts/create-triage-links/src/repos.js. Is that sufficient?
1. We know that ipfs-desktop, ipfs-webui, and ipfs-companion are our priorities, but what are our priorities beyond those three?
* What support priority do the main three have?
* What are the priorities of the other packages?
* What support do we need from @achingbrain , i.e. what packages from https://github.com/ipfs/js-ipfs/tree/master/packages do we need tier 1 support on?
* What are our other dependencies? What support do we need from other orgs/teams? (ipld, multiformats, etc..)
1. Should we keep a list of unmaintained, yet useful packages? If so, where?
1. Decide whether this repo (ipfs/ipfs-gui) should be the home of all IPFS GUI related efforts across teams, or the centralized repo for the IPFS GUI & Tools team
### Proposal
1. Support changes
1. Drop support for the following packages:
* ipfs-share-files - https://github.com/ipfs-shipyard/ipfs-share-files#maintainers is already asking for official maintainers
* ipfs/in-web-browsers - https://github.com/ipfs/in-web-browsers should be owned by browsers-WG team
* https://github.com/ipfs/kubo/tree/master/assets/dir-index-html - drop support, we're already not including those in triage efforts and I'm not familiar with it.
2. Debateable
* ipld/explore.ipld.io - https://github.com/ipld/explore.ipld.io should be owned by ipld, but we essentially have this entire page inside of webui/desktop via https://webui.ipfs.io/#/explore
* awesome-ipfs - there is already discussion about moving support for this repo into [ecosystems dashboard]. could drop official support if we automate more of the repo(https://ecosystem.ipfs.tech/) so PRs get automerged. Needs better stewards for curation than what IPFS GUI & Tools team can offer
1. I propose the following order of support priorities:
* ipfs-webui (consumed by kubo & desktop)
* ipfs-desktop (unique user sessions over lifetime: 33k Linux, 131k Windows, 39k macOS)
* ipfs-companion ([chrome store says 60k+ users](https://chrome.google.com/webstore/detail/ipfs-companion/nibjojkomfdiaoajekhjakgkdhaomnch))
* ipfs/ipld-explorer-components - consumed by explore.ipld.io & webui
* public-gateway-checker - somewhat useful and popular repo for checking status of ipfs gateways. Lots of opportunity here without a large burden
* ipfs-shipyard/pinning-service-compliance - useful for ensuring pinning providers are compliant, and helpful as a pre-req for adding pinning-service providers to webui pinning provider defaults
* ipld/explore.ipld.io
* ipfs-shipyard/i18n - documentation only, small burden, but required by desktop and webui or other gui&tools projects that need i18n.
* ipfs-shipyard/js-pinning-service-http-client - used only by ipfs-shipyard/pinning-service-compliance currently
* multiformats/cid-utils-website
* ipfs-shipyard/ipfs-css
* awesome-ipfs - lists a lot of ipfs related projects/tools/datasets,
1. I propose we keep a list of packages in this repo's README that include unmaintained & useful repos so they're not lost and can be taken up if the need arises.
1. I think it makes sense to keep this repo as the home for GUI projects, but we may want a similar ipfs/ipfs-tools repo for things that aren't necessarily GUI.
cc @BigLep @lidel @tinytb ",True,"IPFS Gui & Tools ownership - I want to discuss defining and reducing the surface area of ownership of the GUI & Tools team.
Currently, there are a few areas where we define what the IPFS GUI / IPFS Gui & Tools / IPFS GUI Tools / @ipfs-gui team owns:
1. https://github.com/protocol/w3dt-stewards/blob/main/scripts/create-triage-links/src/repos.js
2. https://github.com/ipfs/ipfs-gui#all-projects
3. https://www.notion.so/pl-strflt/IPFS-GUI-3bc1c1bf54d74f928bf11ef59c876b74#3271edf4345e4a95b163066d5a9f5da6 (points to item 2 above)
### Questions
1. Can we drop support for any of the listed packages? Can any gui&tools packages/repos be archived/deprecated?
* We have already reduced triage-work to what is listed at https://github.com/protocol/w3dt-stewards/blob/main/scripts/create-triage-links/src/repos.js. Is that sufficient?
1. We know that ipfs-desktop, ipfs-webui, and ipfs-companion are our priorities, but what are our priorities beyond those three?
* What support priority do the main three have?
* What are the priorities of the other packages?
* What support do we need from @achingbrain , i.e. what packages from https://github.com/ipfs/js-ipfs/tree/master/packages do we need tier 1 support on?
* What are our other dependencies? What support do we need from other orgs/teams? (ipld, multiformats, etc..)
1. Should we keep a list of unmaintained, yet useful packages? If so, where?
1. Decide whether this repo (ipfs/ipfs-gui) should be the home of all IPFS GUI related efforts across teams, or the centralized repo for the IPFS GUI & Tools team
### Proposal
1. Support changes
1. Drop support for the following packages:
* ipfs-share-files - https://github.com/ipfs-shipyard/ipfs-share-files#maintainers is already asking for official maintainers
* ipfs/in-web-browsers - https://github.com/ipfs/in-web-browsers should be owned by browsers-WG team
* https://github.com/ipfs/kubo/tree/master/assets/dir-index-html - drop support, we're already not including those in triage efforts and I'm not familiar with it.
2. Debateable
* ipld/explore.ipld.io - https://github.com/ipld/explore.ipld.io should be owned by ipld, but we essentially have this entire page inside of webui/desktop via https://webui.ipfs.io/#/explore
* awesome-ipfs - there is already discussion about moving support for this repo into [ecosystems dashboard]. could drop official support if we automate more of the repo(https://ecosystem.ipfs.tech/) so PRs get automerged. Needs better stewards for curation than what IPFS GUI & Tools team can offer
1. I propose the following order of support priorities:
* ipfs-webui (consumed by kubo & desktop)
* ipfs-desktop (unique user sessions over lifetime: 33k Linux, 131k Windows, 39k macOS)
* ipfs-companion ([chrome store says 60k+ users](https://chrome.google.com/webstore/detail/ipfs-companion/nibjojkomfdiaoajekhjakgkdhaomnch))
* ipfs/ipld-explorer-components - consumed by explore.ipld.io & webui
* public-gateway-checker - somewhat useful and popular repo for checking status of ipfs gateways. Lots of opportunity here without a large burden
* ipfs-shipyard/pinning-service-compliance - useful for ensuring pinning providers are compliant, and helpful as a pre-req for adding pinning-service providers to webui pinning provider defaults
* ipld/explore.ipld.io
* ipfs-shipyard/i18n - documentation only, small burden, but required by desktop and webui or other gui&tools projects that need i18n.
* ipfs-shipyard/js-pinning-service-http-client - used only by ipfs-shipyard/pinning-service-compliance currently
* multiformats/cid-utils-website
* ipfs-shipyard/ipfs-css
* awesome-ipfs - lists a lot of ipfs related projects/tools/datasets,
1. I propose we keep a list of packages in this repo's README that include unmaintained & useful repos so they're not lost and can be taken up if the need arises.
1. I think it makes sense to keep this repo as the home for GUI projects, but we may want a similar ipfs/ipfs-tools repo for things that aren't necessarily GUI.
cc @BigLep @lidel @tinytb ",1,ipfs gui tools ownership i want to discuss defining and reducing the surface area of ownership of the gui tools team currently there are a few areas where we define what the ipfs gui ipfs gui tools ipfs gui tools ipfs gui team owns points to item above questions can we drop support for any of the listed packages can any gui tools packages repos be archived deprecated we have already reduced triage work to what is listed at is that sufficient we know that ipfs desktop ipfs webui and ipfs companion are our priorities but what are our priorities beyond those three what support priority do the main three have what are the priorities of the other packages what support do we need from achingbrain i e what packages from do we need tier support on what are our other dependencies what support do we need from other orgs teams ipld multiformats etc should we keep a list of unmaintained yet useful packages if so where decide whether this repo ipfs ipfs gui should be the home of all ipfs gui related efforts across teams or the centralized repo for the ipfs gui tools team proposal support changes drop support for the following packages ipfs share files is already asking for official maintainers ipfs in web browsers should be owned by browsers wg team drop support we re already not including those in triage efforts and i m not familiar with it debateable ipld explore ipld io should be owned by ipld but we essentially have this entire page inside of webui desktop via awesome ipfs there is already discussion about moving support for this repo into could drop official support if we automate more of the repo so prs get automerged needs better stewards for curation than what ipfs gui tools team can offer i propose the following order of support priorities ipfs webui consumed by kubo desktop ipfs desktop unique user sessions over lifetime linux windows macos ipfs companion ipfs ipld explorer components consumed by explore ipld io webui public gateway checker somewhat useful and popular repo for checking status of ipfs gateways lots of opportunity here without a large burden ipfs shipyard pinning service compliance useful for ensuring pinning providers are compliant and helpful as a pre req for adding pinning service providers to webui pinning provider defaults ipld explore ipld io ipfs shipyard documentation only small burden but required by desktop and webui or other gui tools projects that need ipfs shipyard js pinning service http client used only by ipfs shipyard pinning service compliance currently multiformats cid utils website ipfs shipyard ipfs css awesome ipfs lists a lot of ipfs related projects tools datasets i propose we keep a list of packages in this repo s readme that include unmaintained useful repos so they re not lost and can be taken up if the need arises i think it makes sense to keep this repo as the home for gui projects but we may want a similar ipfs ipfs tools repo for things that aren t necessarily gui cc biglep lidel tinytb ,1
169650,26836482362.0,IssuesEvent,2023-02-02 19:57:49,cov-lineages/pango-designation,https://api.github.com/repos/cov-lineages/pango-designation,closed,Big sublineage of BN.1.3 defined by Orf7b:C41W emerged in Vietnam977sequences as 2023/02/02,designated BA.2.75,"I randomly met this sublineage while checking airport surveillance it caught my attention cause its main mutation Orf7b:C41W (T27878G) sounded new to me and collection dates were all quite recent.
Digging a bit more i found that this sublineage of BN.1.3 is circulating with a significant prevalence in South Korea hanging around 2% of samples in the last few weeks while growing to 0,5% of cases in Japan.
So i decided to check growth advantages in SOuth Korea and it has a slight but solid advantage [versus parental BN.1.3 ](https://cov-spectrum.org/explore/South%20Korea/AllSamples/Past3M/variants?nextcladePangoLineage=BN.1.3*&aaMutations1=Orf7b%3AC41W&nextcladePangoLineage1=BN.1.3*&analysisMode=CompareToBaseline&)
It seems ahead of BQ.1 clan while just ahead the leading group with CH.1.1, even if it is still far from XBB.1.5.
I would like to highlight that a sublineage of this one gained a further ORf7b:W41L mutation with G27877T (15 sequences) ( @thomaspeacock could you check if i got it right please). Considering those sequences too the advantage seems slighty bigger
**Defining mutations:**
BN.1.3 + ORF1a:M3627I (G11146T) > ORF1a:H110Y (C593T) > C7390A > Orf7b:C41W (T27878G )
**Usher Tree:** i would like to highlight the big saltation branch on the top of the USher tree currently circulating in Japan and internationally and the little cluster with S:E471Q in the bottom part of the tree:
https://nextstrain.org/fetch/genome.ucsc.edu/trash/ct/subtreeAuspice1_genome_2143_5e6d10.json?c=country&label=id:node_7869439
Gisaid query: NS7b_C41W,E_T11A,NS3_T229I
finds 455 Sequences :
Expand for EPI_ISLs
EPI_ISL_15280956, EPI_ISL_15341181, EPI_ISL_15609500,
EPI_ISL_15609618, EPI_ISL_15609627, EPI_ISL_15641407,
EPI_ISL_15653379, EPI_ISL_15671987, EPI_ISL_15672526,
EPI_ISL_15694051, EPI_ISL_15695703, EPI_ISL_15695743,
EPI_ISL_15712256, EPI_ISL_15732187, EPI_ISL_15732258,
EPI_ISL_15732878, EPI_ISL_15733027, EPI_ISL_15736596,
EPI_ISL_15755897, EPI_ISL_15756084, EPI_ISL_15756220,
EPI_ISL_15757122, EPI_ISL_15757130, EPI_ISL_15757137,
EPI_ISL_15783588, EPI_ISL_15783784, EPI_ISL_15784352,
EPI_ISL_15794137, EPI_ISL_15794722, EPI_ISL_15804436,
EPI_ISL_15811198-15811199, EPI_ISL_15811297, EPI_ISL_15811301,
EPI_ISL_15820479, EPI_ISL_15831920, EPI_ISL_15838570,
EPI_ISL_15838573, EPI_ISL_15842274, EPI_ISL_15848957,
EPI_ISL_15849081, EPI_ISL_15850241, EPI_ISL_15850249,
EPI_ISL_15875604, EPI_ISL_15887437, EPI_ISL_15887456,
EPI_ISL_15896856, EPI_ISL_15896972, EPI_ISL_15897014,
EPI_ISL_15905785, EPI_ISL_15905920, EPI_ISL_15906229,
EPI_ISL_15906345, EPI_ISL_15906718, EPI_ISL_15907094,
EPI_ISL_15907356, EPI_ISL_15907753, EPI_ISL_15907982,
EPI_ISL_15910734, EPI_ISL_15916557, EPI_ISL_15917361,
EPI_ISL_15923945, EPI_ISL_15938456, EPI_ISL_15943762,
EPI_ISL_15944351, EPI_ISL_15944372, EPI_ISL_15944966,
EPI_ISL_15946986, EPI_ISL_15950714, EPI_ISL_15957977,
EPI_ISL_15961223, EPI_ISL_15978568, EPI_ISL_15979026,
EPI_ISL_15979247, EPI_ISL_15979270, EPI_ISL_15979469,
EPI_ISL_15979472, EPI_ISL_15979493, EPI_ISL_15979638,
EPI_ISL_15979692, EPI_ISL_15979967, EPI_ISL_15980180,
EPI_ISL_15981420, EPI_ISL_15981556-15981557, EPI_ISL_15981660,
EPI_ISL_15987669, EPI_ISL_15987677, EPI_ISL_15997256,
EPI_ISL_15997337, EPI_ISL_15999784, EPI_ISL_15999963,
EPI_ISL_16004094, EPI_ISL_16004167, EPI_ISL_16004234,
EPI_ISL_16004466, EPI_ISL_16010751, EPI_ISL_16011832,
EPI_ISL_16026624, EPI_ISL_16026626, EPI_ISL_16035681-16035682,
EPI_ISL_16036315, EPI_ISL_16036943, EPI_ISL_16040533,
EPI_ISL_16048728, EPI_ISL_16051039, EPI_ISL_16051737,
EPI_ISL_16051833, EPI_ISL_16058743, EPI_ISL_16058908,
EPI_ISL_16059055, EPI_ISL_16060198, EPI_ISL_16060276,
EPI_ISL_16060522, EPI_ISL_16060639, EPI_ISL_16060762,
EPI_ISL_16067524, EPI_ISL_16069571, EPI_ISL_16073621,
EPI_ISL_16073623, EPI_ISL_16077651, EPI_ISL_16077750,
EPI_ISL_16077768, EPI_ISL_16077785, EPI_ISL_16077845,
EPI_ISL_16078921, EPI_ISL_16086653, EPI_ISL_16091723,
EPI_ISL_16091955, EPI_ISL_16092606, EPI_ISL_16093839,
EPI_ISL_16096196, EPI_ISL_16099089, EPI_ISL_16105767,
EPI_ISL_16107792, EPI_ISL_16109839, EPI_ISL_16109932,
EPI_ISL_16110011, EPI_ISL_16111044, EPI_ISL_16112219,
EPI_ISL_16112822, EPI_ISL_16114521, EPI_ISL_16115590,
EPI_ISL_16115713, EPI_ISL_16120239, EPI_ISL_16120430,
EPI_ISL_16120697, EPI_ISL_16125129, EPI_ISL_16125177,
EPI_ISL_16125267, EPI_ISL_16128398, EPI_ISL_16128861,
EPI_ISL_16129606, EPI_ISL_16129612, EPI_ISL_16130481,
EPI_ISL_16131284, EPI_ISL_16132657, EPI_ISL_16133910,
EPI_ISL_16133933, EPI_ISL_16135111, EPI_ISL_16135153,
EPI_ISL_16135252, EPI_ISL_16135307, EPI_ISL_16135374,
EPI_ISL_16135413, EPI_ISL_16135502, EPI_ISL_16135634,
EPI_ISL_16135699, EPI_ISL_16135915, EPI_ISL_16135938,
EPI_ISL_16135969-16135970, EPI_ISL_16136052, EPI_ISL_16136136,
EPI_ISL_16136186, EPI_ISL_16136340, EPI_ISL_16136364,
EPI_ISL_16136380, EPI_ISL_16136385, EPI_ISL_16136415,
EPI_ISL_16136907, EPI_ISL_16137116, EPI_ISL_16137122,
EPI_ISL_16137170, EPI_ISL_16137232-16137239, EPI_ISL_16137285,
EPI_ISL_16137341, EPI_ISL_16137385, EPI_ISL_16137401,
EPI_ISL_16137403, EPI_ISL_16138779, EPI_ISL_16141406,
EPI_ISL_16143511, EPI_ISL_16144052, EPI_ISL_16144120,
EPI_ISL_16151787, EPI_ISL_16153376, EPI_ISL_16153523,
EPI_ISL_16158537-16158538, EPI_ISL_16161745, EPI_ISL_16161853,
EPI_ISL_16165348, EPI_ISL_16165767, EPI_ISL_16167531,
EPI_ISL_16169389, EPI_ISL_16173920, EPI_ISL_16173943,
EPI_ISL_16174192-16174193, EPI_ISL_16174392, EPI_ISL_16174737,
EPI_ISL_16175239, EPI_ISL_16180229, EPI_ISL_16186420,
EPI_ISL_16186725, EPI_ISL_16186792, EPI_ISL_16186972,
EPI_ISL_16187002, EPI_ISL_16187010, EPI_ISL_16187096,
EPI_ISL_16189318, EPI_ISL_16190550, EPI_ISL_16191270,
EPI_ISL_16191736, EPI_ISL_16192615, EPI_ISL_16192777,
EPI_ISL_16192782, EPI_ISL_16192831, EPI_ISL_16192840,
EPI_ISL_16192986, EPI_ISL_16193424, EPI_ISL_16194038,
EPI_ISL_16196484, EPI_ISL_16201160, EPI_ISL_16202003,
EPI_ISL_16206214, EPI_ISL_16209640, EPI_ISL_16210070,
EPI_ISL_16210745, EPI_ISL_16216251, EPI_ISL_16217368,
EPI_ISL_16217414, EPI_ISL_16217594, EPI_ISL_16220991,
EPI_ISL_16222658, EPI_ISL_16225868, EPI_ISL_16225875,
EPI_ISL_16225883, EPI_ISL_16229254, EPI_ISL_16234714,
EPI_ISL_16238369, EPI_ISL_16245772, EPI_ISL_16245903,
EPI_ISL_16245913, EPI_ISL_16246649, EPI_ISL_16246833,
EPI_ISL_16250577, EPI_ISL_16254458, EPI_ISL_16255347,
EPI_ISL_16255553, EPI_ISL_16256015, EPI_ISL_16256804,
EPI_ISL_16257458, EPI_ISL_16259454, EPI_ISL_16264666,
EPI_ISL_16264821, EPI_ISL_16265428, EPI_ISL_16265768,
EPI_ISL_16270893, EPI_ISL_16270953, EPI_ISL_16271226,
EPI_ISL_16271442, EPI_ISL_16271506, EPI_ISL_16272796,
EPI_ISL_16273338, EPI_ISL_16273357, EPI_ISL_16273613,
EPI_ISL_16273908, EPI_ISL_16273922, EPI_ISL_16274070,
EPI_ISL_16278531, EPI_ISL_16279286, EPI_ISL_16279297,
EPI_ISL_16279305, EPI_ISL_16279329, EPI_ISL_16279348,
EPI_ISL_16279384, EPI_ISL_16279840, EPI_ISL_16279926,
EPI_ISL_16280690, EPI_ISL_16284170, EPI_ISL_16284183,
EPI_ISL_16284429, EPI_ISL_16291136, EPI_ISL_16291409,
EPI_ISL_16291532, EPI_ISL_16291628, EPI_ISL_16291731,
EPI_ISL_16292222, EPI_ISL_16292303, EPI_ISL_16292367,
EPI_ISL_16292466, EPI_ISL_16292548, EPI_ISL_16292607,
EPI_ISL_16292632, EPI_ISL_16292639, EPI_ISL_16292858,
EPI_ISL_16294641, EPI_ISL_16298933, EPI_ISL_16299067,
EPI_ISL_16299085, EPI_ISL_16299236, EPI_ISL_16299254,
EPI_ISL_16301094, EPI_ISL_16301844, EPI_ISL_16301941,
EPI_ISL_16303190, EPI_ISL_16305383, EPI_ISL_16305415,
EPI_ISL_16307771, EPI_ISL_16308392, EPI_ISL_16311860,
EPI_ISL_16313345, EPI_ISL_16314029, EPI_ISL_16315608,
EPI_ISL_16319235, EPI_ISL_16319254, EPI_ISL_16320079,
EPI_ISL_16322030, EPI_ISL_16322041-16322042, EPI_ISL_16322296,
EPI_ISL_16322310-16322311, EPI_ISL_16323732-16323733, EPI_ISL_16323735,
EPI_ISL_16323739, EPI_ISL_16323921, EPI_ISL_16324973,
EPI_ISL_16329712, EPI_ISL_16329836, EPI_ISL_16330482,
EPI_ISL_16330500, EPI_ISL_16333508, EPI_ISL_16333879,
EPI_ISL_16333989, EPI_ISL_16336344, EPI_ISL_16336371,
EPI_ISL_16336453, EPI_ISL_16336470, EPI_ISL_16336518-16336519,
EPI_ISL_16336523, EPI_ISL_16336621, EPI_ISL_16336644,
EPI_ISL_16336689, EPI_ISL_16336772, EPI_ISL_16336812,
EPI_ISL_16337870, EPI_ISL_16337981, EPI_ISL_16338035-16338036,
EPI_ISL_16338045, EPI_ISL_16338095, EPI_ISL_16338198,
EPI_ISL_16338219, EPI_ISL_16338250, EPI_ISL_16339128,
EPI_ISL_16339162, EPI_ISL_16339200, EPI_ISL_16339279,
EPI_ISL_16339343, EPI_ISL_16339365-16339366, EPI_ISL_16339457,
EPI_ISL_16339481, EPI_ISL_16339589, EPI_ISL_16339618,
EPI_ISL_16339630, EPI_ISL_16339704, EPI_ISL_16339744,
EPI_ISL_16339924, EPI_ISL_16339950, EPI_ISL_16339961,
EPI_ISL_16340001, EPI_ISL_16340064, EPI_ISL_16340222,
EPI_ISL_16340262, EPI_ISL_16340269, EPI_ISL_16340297,
EPI_ISL_16340377, EPI_ISL_16340418, EPI_ISL_16340496,
EPI_ISL_16340605, EPI_ISL_16340611, EPI_ISL_16340729,
EPI_ISL_16340741, EPI_ISL_16340805, EPI_ISL_16340910,
EPI_ISL_16340994, EPI_ISL_16341100, EPI_ISL_16341144,
EPI_ISL_16341264, EPI_ISL_16341339, EPI_ISL_16341457,
EPI_ISL_16341485, EPI_ISL_16341488, EPI_ISL_16341510,
EPI_ISL_16341556, EPI_ISL_16341562, EPI_ISL_16342044,
EPI_ISL_16342072, EPI_ISL_16342366, EPI_ISL_16342375,
EPI_ISL_16342388, EPI_ISL_16342419, EPI_ISL_16342421,
EPI_ISL_16342428, EPI_ISL_16342726, EPI_ISL_16344676,
EPI_ISL_16347015, EPI_ISL_16347277, EPI_ISL_16352264,
EPI_ISL_16353616, EPI_ISL_16353868, EPI_ISL_16359036,
EPI_ISL_16359298, EPI_ISL_16359359, EPI_ISL_16359374,
EPI_ISL_16359582, EPI_ISL_16360513, EPI_ISL_16360528,
EPI_ISL_16360530, EPI_ISL_16360544, EPI_ISL_16364226,
EPI_ISL_16370161, EPI_ISL_16370659, EPI_ISL_16372196,
EPI_ISL_16374747, EPI_ISL_16376673, EPI_ISL_16377095,
EPI_ISL_16377108, EPI_ISL_16377955, EPI_ISL_16378818,
EPI_ISL_16379434
",1.0,"Big sublineage of BN.1.3 defined by Orf7b:C41W emerged in Vietnam977sequences as 2023/02/02 - I randomly met this sublineage while checking airport surveillance it caught my attention cause its main mutation Orf7b:C41W (T27878G) sounded new to me and collection dates were all quite recent.
Digging a bit more i found that this sublineage of BN.1.3 is circulating with a significant prevalence in South Korea hanging around 2% of samples in the last few weeks while growing to 0,5% of cases in Japan.
So i decided to check growth advantages in SOuth Korea and it has a slight but solid advantage [versus parental BN.1.3 ](https://cov-spectrum.org/explore/South%20Korea/AllSamples/Past3M/variants?nextcladePangoLineage=BN.1.3*&aaMutations1=Orf7b%3AC41W&nextcladePangoLineage1=BN.1.3*&analysisMode=CompareToBaseline&)
It seems ahead of BQ.1 clan while just ahead the leading group with CH.1.1, even if it is still far from XBB.1.5.
I would like to highlight that a sublineage of this one gained a further ORf7b:W41L mutation with G27877T (15 sequences) ( @thomaspeacock could you check if i got it right please). Considering those sequences too the advantage seems slighty bigger
**Defining mutations:**
BN.1.3 + ORF1a:M3627I (G11146T) > ORF1a:H110Y (C593T) > C7390A > Orf7b:C41W (T27878G )
**Usher Tree:** i would like to highlight the big saltation branch on the top of the USher tree currently circulating in Japan and internationally and the little cluster with S:E471Q in the bottom part of the tree:
https://nextstrain.org/fetch/genome.ucsc.edu/trash/ct/subtreeAuspice1_genome_2143_5e6d10.json?c=country&label=id:node_7869439
Gisaid query: NS7b_C41W,E_T11A,NS3_T229I
finds 455 Sequences :
Expand for EPI_ISLs
EPI_ISL_15280956, EPI_ISL_15341181, EPI_ISL_15609500,
EPI_ISL_15609618, EPI_ISL_15609627, EPI_ISL_15641407,
EPI_ISL_15653379, EPI_ISL_15671987, EPI_ISL_15672526,
EPI_ISL_15694051, EPI_ISL_15695703, EPI_ISL_15695743,
EPI_ISL_15712256, EPI_ISL_15732187, EPI_ISL_15732258,
EPI_ISL_15732878, EPI_ISL_15733027, EPI_ISL_15736596,
EPI_ISL_15755897, EPI_ISL_15756084, EPI_ISL_15756220,
EPI_ISL_15757122, EPI_ISL_15757130, EPI_ISL_15757137,
EPI_ISL_15783588, EPI_ISL_15783784, EPI_ISL_15784352,
EPI_ISL_15794137, EPI_ISL_15794722, EPI_ISL_15804436,
EPI_ISL_15811198-15811199, EPI_ISL_15811297, EPI_ISL_15811301,
EPI_ISL_15820479, EPI_ISL_15831920, EPI_ISL_15838570,
EPI_ISL_15838573, EPI_ISL_15842274, EPI_ISL_15848957,
EPI_ISL_15849081, EPI_ISL_15850241, EPI_ISL_15850249,
EPI_ISL_15875604, EPI_ISL_15887437, EPI_ISL_15887456,
EPI_ISL_15896856, EPI_ISL_15896972, EPI_ISL_15897014,
EPI_ISL_15905785, EPI_ISL_15905920, EPI_ISL_15906229,
EPI_ISL_15906345, EPI_ISL_15906718, EPI_ISL_15907094,
EPI_ISL_15907356, EPI_ISL_15907753, EPI_ISL_15907982,
EPI_ISL_15910734, EPI_ISL_15916557, EPI_ISL_15917361,
EPI_ISL_15923945, EPI_ISL_15938456, EPI_ISL_15943762,
EPI_ISL_15944351, EPI_ISL_15944372, EPI_ISL_15944966,
EPI_ISL_15946986, EPI_ISL_15950714, EPI_ISL_15957977,
EPI_ISL_15961223, EPI_ISL_15978568, EPI_ISL_15979026,
EPI_ISL_15979247, EPI_ISL_15979270, EPI_ISL_15979469,
EPI_ISL_15979472, EPI_ISL_15979493, EPI_ISL_15979638,
EPI_ISL_15979692, EPI_ISL_15979967, EPI_ISL_15980180,
EPI_ISL_15981420, EPI_ISL_15981556-15981557, EPI_ISL_15981660,
EPI_ISL_15987669, EPI_ISL_15987677, EPI_ISL_15997256,
EPI_ISL_15997337, EPI_ISL_15999784, EPI_ISL_15999963,
EPI_ISL_16004094, EPI_ISL_16004167, EPI_ISL_16004234,
EPI_ISL_16004466, EPI_ISL_16010751, EPI_ISL_16011832,
EPI_ISL_16026624, EPI_ISL_16026626, EPI_ISL_16035681-16035682,
EPI_ISL_16036315, EPI_ISL_16036943, EPI_ISL_16040533,
EPI_ISL_16048728, EPI_ISL_16051039, EPI_ISL_16051737,
EPI_ISL_16051833, EPI_ISL_16058743, EPI_ISL_16058908,
EPI_ISL_16059055, EPI_ISL_16060198, EPI_ISL_16060276,
EPI_ISL_16060522, EPI_ISL_16060639, EPI_ISL_16060762,
EPI_ISL_16067524, EPI_ISL_16069571, EPI_ISL_16073621,
EPI_ISL_16073623, EPI_ISL_16077651, EPI_ISL_16077750,
EPI_ISL_16077768, EPI_ISL_16077785, EPI_ISL_16077845,
EPI_ISL_16078921, EPI_ISL_16086653, EPI_ISL_16091723,
EPI_ISL_16091955, EPI_ISL_16092606, EPI_ISL_16093839,
EPI_ISL_16096196, EPI_ISL_16099089, EPI_ISL_16105767,
EPI_ISL_16107792, EPI_ISL_16109839, EPI_ISL_16109932,
EPI_ISL_16110011, EPI_ISL_16111044, EPI_ISL_16112219,
EPI_ISL_16112822, EPI_ISL_16114521, EPI_ISL_16115590,
EPI_ISL_16115713, EPI_ISL_16120239, EPI_ISL_16120430,
EPI_ISL_16120697, EPI_ISL_16125129, EPI_ISL_16125177,
EPI_ISL_16125267, EPI_ISL_16128398, EPI_ISL_16128861,
EPI_ISL_16129606, EPI_ISL_16129612, EPI_ISL_16130481,
EPI_ISL_16131284, EPI_ISL_16132657, EPI_ISL_16133910,
EPI_ISL_16133933, EPI_ISL_16135111, EPI_ISL_16135153,
EPI_ISL_16135252, EPI_ISL_16135307, EPI_ISL_16135374,
EPI_ISL_16135413, EPI_ISL_16135502, EPI_ISL_16135634,
EPI_ISL_16135699, EPI_ISL_16135915, EPI_ISL_16135938,
EPI_ISL_16135969-16135970, EPI_ISL_16136052, EPI_ISL_16136136,
EPI_ISL_16136186, EPI_ISL_16136340, EPI_ISL_16136364,
EPI_ISL_16136380, EPI_ISL_16136385, EPI_ISL_16136415,
EPI_ISL_16136907, EPI_ISL_16137116, EPI_ISL_16137122,
EPI_ISL_16137170, EPI_ISL_16137232-16137239, EPI_ISL_16137285,
EPI_ISL_16137341, EPI_ISL_16137385, EPI_ISL_16137401,
EPI_ISL_16137403, EPI_ISL_16138779, EPI_ISL_16141406,
EPI_ISL_16143511, EPI_ISL_16144052, EPI_ISL_16144120,
EPI_ISL_16151787, EPI_ISL_16153376, EPI_ISL_16153523,
EPI_ISL_16158537-16158538, EPI_ISL_16161745, EPI_ISL_16161853,
EPI_ISL_16165348, EPI_ISL_16165767, EPI_ISL_16167531,
EPI_ISL_16169389, EPI_ISL_16173920, EPI_ISL_16173943,
EPI_ISL_16174192-16174193, EPI_ISL_16174392, EPI_ISL_16174737,
EPI_ISL_16175239, EPI_ISL_16180229, EPI_ISL_16186420,
EPI_ISL_16186725, EPI_ISL_16186792, EPI_ISL_16186972,
EPI_ISL_16187002, EPI_ISL_16187010, EPI_ISL_16187096,
EPI_ISL_16189318, EPI_ISL_16190550, EPI_ISL_16191270,
EPI_ISL_16191736, EPI_ISL_16192615, EPI_ISL_16192777,
EPI_ISL_16192782, EPI_ISL_16192831, EPI_ISL_16192840,
EPI_ISL_16192986, EPI_ISL_16193424, EPI_ISL_16194038,
EPI_ISL_16196484, EPI_ISL_16201160, EPI_ISL_16202003,
EPI_ISL_16206214, EPI_ISL_16209640, EPI_ISL_16210070,
EPI_ISL_16210745, EPI_ISL_16216251, EPI_ISL_16217368,
EPI_ISL_16217414, EPI_ISL_16217594, EPI_ISL_16220991,
EPI_ISL_16222658, EPI_ISL_16225868, EPI_ISL_16225875,
EPI_ISL_16225883, EPI_ISL_16229254, EPI_ISL_16234714,
EPI_ISL_16238369, EPI_ISL_16245772, EPI_ISL_16245903,
EPI_ISL_16245913, EPI_ISL_16246649, EPI_ISL_16246833,
EPI_ISL_16250577, EPI_ISL_16254458, EPI_ISL_16255347,
EPI_ISL_16255553, EPI_ISL_16256015, EPI_ISL_16256804,
EPI_ISL_16257458, EPI_ISL_16259454, EPI_ISL_16264666,
EPI_ISL_16264821, EPI_ISL_16265428, EPI_ISL_16265768,
EPI_ISL_16270893, EPI_ISL_16270953, EPI_ISL_16271226,
EPI_ISL_16271442, EPI_ISL_16271506, EPI_ISL_16272796,
EPI_ISL_16273338, EPI_ISL_16273357, EPI_ISL_16273613,
EPI_ISL_16273908, EPI_ISL_16273922, EPI_ISL_16274070,
EPI_ISL_16278531, EPI_ISL_16279286, EPI_ISL_16279297,
EPI_ISL_16279305, EPI_ISL_16279329, EPI_ISL_16279348,
EPI_ISL_16279384, EPI_ISL_16279840, EPI_ISL_16279926,
EPI_ISL_16280690, EPI_ISL_16284170, EPI_ISL_16284183,
EPI_ISL_16284429, EPI_ISL_16291136, EPI_ISL_16291409,
EPI_ISL_16291532, EPI_ISL_16291628, EPI_ISL_16291731,
EPI_ISL_16292222, EPI_ISL_16292303, EPI_ISL_16292367,
EPI_ISL_16292466, EPI_ISL_16292548, EPI_ISL_16292607,
EPI_ISL_16292632, EPI_ISL_16292639, EPI_ISL_16292858,
EPI_ISL_16294641, EPI_ISL_16298933, EPI_ISL_16299067,
EPI_ISL_16299085, EPI_ISL_16299236, EPI_ISL_16299254,
EPI_ISL_16301094, EPI_ISL_16301844, EPI_ISL_16301941,
EPI_ISL_16303190, EPI_ISL_16305383, EPI_ISL_16305415,
EPI_ISL_16307771, EPI_ISL_16308392, EPI_ISL_16311860,
EPI_ISL_16313345, EPI_ISL_16314029, EPI_ISL_16315608,
EPI_ISL_16319235, EPI_ISL_16319254, EPI_ISL_16320079,
EPI_ISL_16322030, EPI_ISL_16322041-16322042, EPI_ISL_16322296,
EPI_ISL_16322310-16322311, EPI_ISL_16323732-16323733, EPI_ISL_16323735,
EPI_ISL_16323739, EPI_ISL_16323921, EPI_ISL_16324973,
EPI_ISL_16329712, EPI_ISL_16329836, EPI_ISL_16330482,
EPI_ISL_16330500, EPI_ISL_16333508, EPI_ISL_16333879,
EPI_ISL_16333989, EPI_ISL_16336344, EPI_ISL_16336371,
EPI_ISL_16336453, EPI_ISL_16336470, EPI_ISL_16336518-16336519,
EPI_ISL_16336523, EPI_ISL_16336621, EPI_ISL_16336644,
EPI_ISL_16336689, EPI_ISL_16336772, EPI_ISL_16336812,
EPI_ISL_16337870, EPI_ISL_16337981, EPI_ISL_16338035-16338036,
EPI_ISL_16338045, EPI_ISL_16338095, EPI_ISL_16338198,
EPI_ISL_16338219, EPI_ISL_16338250, EPI_ISL_16339128,
EPI_ISL_16339162, EPI_ISL_16339200, EPI_ISL_16339279,
EPI_ISL_16339343, EPI_ISL_16339365-16339366, EPI_ISL_16339457,
EPI_ISL_16339481, EPI_ISL_16339589, EPI_ISL_16339618,
EPI_ISL_16339630, EPI_ISL_16339704, EPI_ISL_16339744,
EPI_ISL_16339924, EPI_ISL_16339950, EPI_ISL_16339961,
EPI_ISL_16340001, EPI_ISL_16340064, EPI_ISL_16340222,
EPI_ISL_16340262, EPI_ISL_16340269, EPI_ISL_16340297,
EPI_ISL_16340377, EPI_ISL_16340418, EPI_ISL_16340496,
EPI_ISL_16340605, EPI_ISL_16340611, EPI_ISL_16340729,
EPI_ISL_16340741, EPI_ISL_16340805, EPI_ISL_16340910,
EPI_ISL_16340994, EPI_ISL_16341100, EPI_ISL_16341144,
EPI_ISL_16341264, EPI_ISL_16341339, EPI_ISL_16341457,
EPI_ISL_16341485, EPI_ISL_16341488, EPI_ISL_16341510,
EPI_ISL_16341556, EPI_ISL_16341562, EPI_ISL_16342044,
EPI_ISL_16342072, EPI_ISL_16342366, EPI_ISL_16342375,
EPI_ISL_16342388, EPI_ISL_16342419, EPI_ISL_16342421,
EPI_ISL_16342428, EPI_ISL_16342726, EPI_ISL_16344676,
EPI_ISL_16347015, EPI_ISL_16347277, EPI_ISL_16352264,
EPI_ISL_16353616, EPI_ISL_16353868, EPI_ISL_16359036,
EPI_ISL_16359298, EPI_ISL_16359359, EPI_ISL_16359374,
EPI_ISL_16359582, EPI_ISL_16360513, EPI_ISL_16360528,
EPI_ISL_16360530, EPI_ISL_16360544, EPI_ISL_16364226,
EPI_ISL_16370161, EPI_ISL_16370659, EPI_ISL_16372196,
EPI_ISL_16374747, EPI_ISL_16376673, EPI_ISL_16377095,
EPI_ISL_16377108, EPI_ISL_16377955, EPI_ISL_16378818,
EPI_ISL_16379434
",0,big sublineage of bn defined by emerged in as i randomly met this sublineage while checking airport surveillance it caught my attention cause its main mutation sounded new to me and collection dates were all quite recent digging a bit more i found that this sublineage of bn is circulating with a significant prevalence in south korea hanging around of samples in the last few weeks while growing to of cases in japan so i decided to check growth advantages in south korea and it has a slight but solid advantage img width alt schermata alle src it seems ahead of bq clan while just ahead the leading group with ch even if it is still far from xbb i would like to highlight that a sublineage of this one gained a further mutation with sequences thomaspeacock could you check if i got it right please considering those sequences too the advantage seems slighty bigger defining mutations bn usher tree i would like to highlight the big saltation branch on the top of the usher tree currently circulating in japan and internationally and the little cluster with s in the bottom part of the tree img width alt schermata alle src gisaid query e finds sequences expand for epi isls epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl epi isl ,0
665772,22329200042.0,IssuesEvent,2022-06-14 13:17:27,yl-ang/NsStayFit,https://api.github.com/repos/yl-ang/NsStayFit,closed,"As a NSMen, I want to keep track of my past IPPT results",type.Story priority.Low IPPT,so that I know whether I improve over time.,1.0,"As a NSMen, I want to keep track of my past IPPT results - so that I know whether I improve over time.",0,as a nsmen i want to keep track of my past ippt results so that i know whether i improve over time ,0
5386,27071867029.0,IssuesEvent,2023-02-14 07:40:43,OpenRefine/OpenRefine,https://api.github.com/repos/OpenRefine/OpenRefine,closed,Unassign contributors automatically after a delay,maintainability,"[Our issues labeled ""good first issue""](https://github.com/OpenRefine/OpenRefine/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) tend to attract new contributors, which is great.
However, contributors often abandon the issue without unassigning themselves.
This means that the pool of good first issues available for prospective contributors shrinks artificially.
As a preparation for our participation in Outreachy/GSoC I have cleared assignees of all good first issues today (after checking that they had been assigned for a long time). But I think we should not have to do this manually.
There is a GitHub Action which seems to do just that:
https://github.com/marketplace/actions/unassign-contributor-after-days-of-inactivity
Any resistance to trying this out?
As an experiment I would first restrict this to the ""good first issue"" tag, and set a fairly generous delay - perhaps 3 months?",True,"Unassign contributors automatically after a delay - [Our issues labeled ""good first issue""](https://github.com/OpenRefine/OpenRefine/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) tend to attract new contributors, which is great.
However, contributors often abandon the issue without unassigning themselves.
This means that the pool of good first issues available for prospective contributors shrinks artificially.
As a preparation for our participation in Outreachy/GSoC I have cleared assignees of all good first issues today (after checking that they had been assigned for a long time). But I think we should not have to do this manually.
There is a GitHub Action which seems to do just that:
https://github.com/marketplace/actions/unassign-contributor-after-days-of-inactivity
Any resistance to trying this out?
As an experiment I would first restrict this to the ""good first issue"" tag, and set a fairly generous delay - perhaps 3 months?",1,unassign contributors automatically after a delay tend to attract new contributors which is great however contributors often abandon the issue without unassigning themselves this means that the pool of good first issues available for prospective contributors shrinks artificially as a preparation for our participation in outreachy gsoc i have cleared assignees of all good first issues today after checking that they had been assigned for a long time but i think we should not have to do this manually there is a github action which seems to do just that any resistance to trying this out as an experiment i would first restrict this to the good first issue tag and set a fairly generous delay perhaps months ,1
5878,31994005688.0,IssuesEvent,2023-09-21 08:03:40,onebeyond/maintainers,https://api.github.com/repos/onebeyond/maintainers,closed,OpenSSF Scorecard Report Updated!,maintainers-agenda,"Hello!
There are changes in your OpenSSF Scorecard report.
Please review the following changes and take action if necessary.
## Summary
There are changes in the following repositories:
| Repository | Commit | Score | Score Delta | Report | StepSecurity |
| -- | -- | -- | -- | -- | -- |
| [guidesmiths/kube-deploy](https://github.com/guidesmiths/kube-deploy) | [9f1708b](https://github.com/guidesmiths/kube-deploy/commit/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd) | 3 | 0.1 / [Details](https://kooltheba.github.io/openssf-scorecard-api-visualizer/#/projects/github.com/guidesmiths/kube-deploy/compare/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd) | [View](https://kooltheba.github.io/openssf-scorecard-api-visualizer/#/projects/github.com/guidesmiths/kube-deploy/commit/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd) | [Fix it](https://app.stepsecurity.io/securerepo?repo=guidesmiths/kube-deploy) |
_Report generated by [UlisesGascon/openssf-scorecard-monitor](https://github.com/UlisesGascon/openssf-scorecard-monitor)._",True,"OpenSSF Scorecard Report Updated! - Hello!
There are changes in your OpenSSF Scorecard report.
Please review the following changes and take action if necessary.
## Summary
There are changes in the following repositories:
| Repository | Commit | Score | Score Delta | Report | StepSecurity |
| -- | -- | -- | -- | -- | -- |
| [guidesmiths/kube-deploy](https://github.com/guidesmiths/kube-deploy) | [9f1708b](https://github.com/guidesmiths/kube-deploy/commit/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd) | 3 | 0.1 / [Details](https://kooltheba.github.io/openssf-scorecard-api-visualizer/#/projects/github.com/guidesmiths/kube-deploy/compare/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd) | [View](https://kooltheba.github.io/openssf-scorecard-api-visualizer/#/projects/github.com/guidesmiths/kube-deploy/commit/9f1708b3f3c1b0ba99a41b148dc6c051dbf08cdd) | [Fix it](https://app.stepsecurity.io/securerepo?repo=guidesmiths/kube-deploy) |
_Report generated by [UlisesGascon/openssf-scorecard-monitor](https://github.com/UlisesGascon/openssf-scorecard-monitor)._",1,openssf scorecard report updated hello there are changes in your openssf scorecard report please review the following changes and take action if necessary summary there are changes in the following repositories repository commit score score delta report stepsecurity report generated by ,1
1356,5843699195.0,IssuesEvent,2017-05-10 09:49:57,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,win_lineinfile idempotence broken,affects_2.1 bug_report waiting_on_maintainer windows,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- win_lineinfile
##### ANSIBLE VERSION
```
ansible 2.1.0 (devel cb7b3b489d) last updated 2016/03/30 15:14:21 (GMT +200)
lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/30 15:14:39 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/30 15:14:39 (GMT +200)
```
##### OS / ENVIRONMENT
ubuntu 14 -> windows 2012R2
##### SUMMARY
backrefs option doesn't arrive well inside the module, causing it not to function properly and also breaking idempotence (line will be always inserted at the end of the file on every execution, regardless of backrefs=yes and regardless of regexp match)
##### STEPS TO REPRODUCE
c:\test.txt containing
```
test1
test2
test3
```
part of playbook
```
win_lineinfile:
dest: c:\test.txt
regexp: ""test2""
line: ""this will be added over and over""
backrefs: yes
```
##### EXPECTED RESULTS
according to [this ](https://github.com/ansible/ansible/issues/4531)and similar to the behaviour on linux
i was expecting to have idempotent behaviour (to change the regexp into line if regexp found and do nothing if not found), which works on linux.
upon first execution, changed=1
upon second execution, changed=0
```
test1
this will be added over and over
test3
```
##### ACTUAL RESULTS
upon first execution, changed=1
upon second execution, changed=1
```
test1
this will be added over and over
test3
this will be added over and over
```
##### WORKAROUND/HACK
It seems that the problem comes from the backrefs variable arriving as True/False in the module, so this never gets triggered.
```
ElseIf ($backrefs -ne ""no"") {
# No matches - no-op
```
and it forces it to go two elseifs down and add the line
I used this bit of code at the beginning of the module - at params - since I desperately needed the idempotent behavior :)
```
#dirty hack
$backrefs = Get-Attr $params ""backrefs"" ""no"";
if ( $backrefs -eq ""True"" ) {
$backrefs = ""yes""
} else {
$backrefs = ""no""
}
```
",True,"win_lineinfile idempotence broken - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- win_lineinfile
##### ANSIBLE VERSION
```
ansible 2.1.0 (devel cb7b3b489d) last updated 2016/03/30 15:14:21 (GMT +200)
lib/ansible/modules/core: (detached HEAD 0268864211) last updated 2016/03/30 15:14:39 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 6978984244) last updated 2016/03/30 15:14:39 (GMT +200)
```
##### OS / ENVIRONMENT
ubuntu 14 -> windows 2012R2
##### SUMMARY
backrefs option doesn't arrive well inside the module, causing it not to function properly and also breaking idempotence (line will be always inserted at the end of the file on every execution, regardless of backrefs=yes and regardless of regexp match)
##### STEPS TO REPRODUCE
c:\test.txt containing
```
test1
test2
test3
```
part of playbook
```
win_lineinfile:
dest: c:\test.txt
regexp: ""test2""
line: ""this will be added over and over""
backrefs: yes
```
##### EXPECTED RESULTS
according to [this ](https://github.com/ansible/ansible/issues/4531)and similar to the behaviour on linux
i was expecting to have idempotent behaviour (to change the regexp into line if regexp found and do nothing if not found), which works on linux.
upon first execution, changed=1
upon second execution, changed=0
```
test1
this will be added over and over
test3
```
##### ACTUAL RESULTS
upon first execution, changed=1
upon second execution, changed=1
```
test1
this will be added over and over
test3
this will be added over and over
```
##### WORKAROUND/HACK
It seems that the problem comes from the backrefs variable arriving as True/False in the module, so this never gets triggered.
```
ElseIf ($backrefs -ne ""no"") {
# No matches - no-op
```
and it forces it to go two elseifs down and add the line
I used this bit of code at the beginning of the module - at params - since I desperately needed the idempotent behavior :)
```
#dirty hack
$backrefs = Get-Attr $params ""backrefs"" ""no"";
if ( $backrefs -eq ""True"" ) {
$backrefs = ""yes""
} else {
$backrefs = ""no""
}
```
",1,win lineinfile idempotence broken issue type bug report component name win lineinfile ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt os environment ubuntu windows summary backrefs option doesn t arrive well inside the module causing it not to function properly and also breaking idempotence line will be always inserted at the end of the file on every execution regardless of backrefs yes and regardless of regexp match steps to reproduce c test txt containing part of playbook win lineinfile dest c test txt regexp line this will be added over and over backrefs yes expected results according to similar to the behaviour on linux i was expecting to have idempotent behaviour to change the regexp into line if regexp found and do nothing if not found which works on linux upon first execution changed upon second execution changed this will be added over and over actual results upon first execution changed upon second execution changed this will be added over and over this will be added over and over workaround hack it seems that the problem comes from the backrefs variable arriving as true false in the module so this never gets triggered elseif backrefs ne no no matches no op and it forces it to go two elseifs down and add the line i used this bit of code at the beginning of the module at params since i desperately needed the idempotent behavior dirty hack backrefs get attr params backrefs no if backrefs eq true backrefs yes else backrefs no ,1
4473,23335801265.0,IssuesEvent,2022-08-09 09:49:25,precice/precice,https://api.github.com/repos/precice/precice,opened,Simplification of EventTimings,enhancement maintainability dependencies,"_I open this issue in this repo to preserve the information and make it easier to find from people running into issues regarding this. It also may impact the preCICE lib in the future._
**Please describe the problem you are trying to solve.**
The EventTimings provide a system to:
* measure named sections in the code
* attach data to these sections (used in PETSc RBF mappings)
* synchronize the communicator prior to the recording using a barrier on requested aka `syncmode`
* aggregate and normalize these measurements across ranks
* write a summary of the results to a file as a text table
* write the aggregate results to a file as json
The additional `events2trace` script formats and merges multiple of these outputs into a single eventstracing json file that can be visualized with various tools.
Concerns of this approach:
* The EventsTimings need a way to synchronize all ranks. This currently requires passing a custom MPI comm. Not using MPI only supports a single rank.
* The data aggregation happens during the finalization of preCICE, which is a collective operation on the communicator. If any issue occurs with this communicator during the lifetime of preCICE, then the collective will fail, resulting in a crash/error.
* If preCICE runs into any error, then the events won't be aggregated nor written to a file.
* preCICE requires an additional dependency for the sole purpose of writing the aggregated data to disk. We are currently using a checked-in version of the json library, which will at some point collide with other versions on the system leading to strange problems such as #527 or https://github.com/precice/openfoam-adapter/issues/238 .
**Describe the solution you propose.**
1. Simplify the Events in preCICE as much as possible.
* The synchronization can be handled fully by preCICE, as IntraComm provides a barrier method. Its implementation also works if preCICE is compiled without MPI.
* Use independent rank files, essentially removing the aggregation from the preCICE core. This allows to output events on error.
* Optionally write these files continuously during the lifetime of the program, allowing to inspect the events on a crash. We could even implement a block-wise write to reduce the memory overhead.
* This serialization is so simple that it doesn't require a special library. ( Similar to the clang time-tracing implementation. )
* The above results in additional IO. So, a configuration option to disable the tracing could be beneficial.
2. Move the functionality to normalize, aggregate and format to a separate script (or `precice-tools`)
* Use python pandas or similar for normalizing and aggregating the data.
* Ship this as an extra tool in `/usr/share/precice` or similar.
* This allows us to remove the json dependency from the project.
* Alternatively, move the existing C++ implementation from the preCICE code into a separate executable, or into `precice-tools`.
* The python version would allow everyone to easily add custom functionality such as
* plotting given timings over time
* analyse the comm establishment to detect filesystem issues on some nodes
* find an imbalance of mapping cost over ranks of a participant
**Describe alternatives you've considered**
* Move the events2trace script into the preCICE library, essentially completely integrating the external project.
* Reimplement the events2trace as a subcommand of `precice-tools`.
**Additional context**
* https://github.com/precice/EventTimings/issues/17
* #419 ",True,"Simplification of EventTimings - _I open this issue in this repo to preserve the information and make it easier to find from people running into issues regarding this. It also may impact the preCICE lib in the future._
**Please describe the problem you are trying to solve.**
The EventTimings provide a system to:
* measure named sections in the code
* attach data to these sections (used in PETSc RBF mappings)
* synchronize the communicator prior to the recording using a barrier on requested aka `syncmode`
* aggregate and normalize these measurements across ranks
* write a summary of the results to a file as a text table
* write the aggregate results to a file as json
The additional `events2trace` script formats and merges multiple of these outputs into a single eventstracing json file that can be visualized with various tools.
Concerns of this approach:
* The EventsTimings need a way to synchronize all ranks. This currently requires passing a custom MPI comm. Not using MPI only supports a single rank.
* The data aggregation happens during the finalization of preCICE, which is a collective operation on the communicator. If any issue occurs with this communicator during the lifetime of preCICE, then the collective will fail, resulting in a crash/error.
* If preCICE runs into any error, then the events won't be aggregated nor written to a file.
* preCICE requires an additional dependency for the sole purpose of writing the aggregated data to disk. We are currently using a checked-in version of the json library, which will at some point collide with other versions on the system leading to strange problems such as #527 or https://github.com/precice/openfoam-adapter/issues/238 .
**Describe the solution you propose.**
1. Simplify the Events in preCICE as much as possible.
* The synchronization can be handled fully by preCICE, as IntraComm provides a barrier method. Its implementation also works if preCICE is compiled without MPI.
* Use independent rank files, essentially removing the aggregation from the preCICE core. This allows to output events on error.
* Optionally write these files continuously during the lifetime of the program, allowing to inspect the events on a crash. We could even implement a block-wise write to reduce the memory overhead.
* This serialization is so simple that it doesn't require a special library. ( Similar to the clang time-tracing implementation. )
* The above results in additional IO. So, a configuration option to disable the tracing could be beneficial.
2. Move the functionality to normalize, aggregate and format to a separate script (or `precice-tools`)
* Use python pandas or similar for normalizing and aggregating the data.
* Ship this as an extra tool in `/usr/share/precice` or similar.
* This allows us to remove the json dependency from the project.
* Alternatively, move the existing C++ implementation from the preCICE code into a separate executable, or into `precice-tools`.
* The python version would allow everyone to easily add custom functionality such as
* plotting given timings over time
* analyse the comm establishment to detect filesystem issues on some nodes
* find an imbalance of mapping cost over ranks of a participant
**Describe alternatives you've considered**
* Move the events2trace script into the preCICE library, essentially completely integrating the external project.
* Reimplement the events2trace as a subcommand of `precice-tools`.
**Additional context**
* https://github.com/precice/EventTimings/issues/17
* #419 ",1,simplification of eventtimings i open this issue in this repo to preserve the information and make it easier to find from people running into issues regarding this it also may impact the precice lib in the future please describe the problem you are trying to solve the eventtimings provide a system to measure named sections in the code attach data to these sections used in petsc rbf mappings synchronize the communicator prior to the recording using a barrier on requested aka syncmode aggregate and normalize these measurements across ranks write a summary of the results to a file as a text table write the aggregate results to a file as json the additional script formats and merges multiple of these outputs into a single eventstracing json file that can be visualized with various tools concerns of this approach the eventstimings need a way to synchronize all ranks this currently requires passing a custom mpi comm not using mpi only supports a single rank the data aggregation happens during the finalization of precice which is a collective operation on the communicator if any issue occurs with this communicator during the lifetime of precice then the collective will fail resulting in a crash error if precice runs into any error then the events won t be aggregated nor written to a file precice requires an additional dependency for the sole purpose of writing the aggregated data to disk we are currently using a checked in version of the json library which will at some point collide with other versions on the system leading to strange problems such as or describe the solution you propose simplify the events in precice as much as possible the synchronization can be handled fully by precice as intracomm provides a barrier method its implementation also works if precice is compiled without mpi use independent rank files essentially removing the aggregation from the precice core this allows to output events on error optionally write these files continuously during the lifetime of the program allowing to inspect the events on a crash we could even implement a block wise write to reduce the memory overhead this serialization is so simple that it doesn t require a special library similar to the clang time tracing implementation the above results in additional io so a configuration option to disable the tracing could be beneficial move the functionality to normalize aggregate and format to a separate script or precice tools use python pandas or similar for normalizing and aggregating the data ship this as an extra tool in usr share precice or similar this allows us to remove the json dependency from the project alternatively move the existing c implementation from the precice code into a separate executable or into precice tools the python version would allow everyone to easily add custom functionality such as plotting given timings over time analyse the comm establishment to detect filesystem issues on some nodes find an imbalance of mapping cost over ranks of a participant describe alternatives you ve considered move the script into the precice library essentially completely integrating the external project reimplement the as a subcommand of precice tools additional context ,1
1725,6574506483.0,IssuesEvent,2017-09-11 13:08:43,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2_ami state:absent wait:yes should wait for AMI to be removed,affects_2.1 aws cloud feature_idea waiting_on_maintainer,"##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_ami
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
`ec2_ami` module with args:
1. `state: absent`
2. `wait: yes`
should wait for AMI to be fully deregistered before continuing.
",True,"ec2_ami state:absent wait:yes should wait for AMI to be removed - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_ami
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
`ec2_ami` module with args:
1. `state: absent`
2. `wait: yes`
should wait for AMI to be fully deregistered before continuing.
",1, ami state absent wait yes should wait for ami to be removed issue type feature idea component name ami ansible version ansible os environment n a summary ami module with args state absent wait yes should wait for ami to be fully deregistered before continuing ,1
121074,10149402262.0,IssuesEvent,2019-08-05 15:08:53,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: tpchbench/tpchVec/nodes=3/cpu=4/sf=1 failed,C-test-failure O-roachtest O-robot,"SHA: https://github.com/cockroachdb/cockroach/commits/cfdaadc3514e7e8660f6c009ba159fdfd604f0a8
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=tpchbench/tpchVec/nodes=3/cpu=4/sf=1 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1409070&tab=buildLog
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190727-1409070/tpchbench/tpchVec/nodes=3/cpu=4/sf=1/run_1
test_runner.go:706: test timed out (10h0m0s)
tpchbench.go:119,cluster.go:2069,errgroup.go:57: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1564208378-06-n4cpu4:4 -- ./workload run querybench --db=tpch --concurrency=1 --query-file=tpchVec --num-runs=3 --max-ops=27 --vectorized=true {pgurl:1-3} --histograms=perf/stats.json --histograms-max-latency=8m20s returned:
stderr:
stdout:
TH AND l_returnflag = 'R' AND c_nationkey = n_nationkey GROUP BY c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment ORDER BY revenue DESC LIMIT 20
9h57m15s 0 0.0 0.0 0.0 0.0 0.0 0.0 8: SELECT ps_partkey, sum(ps_supplycost * ps_availqty::float) AS value FROM partsupp, supplier, nation WHERE ps_suppkey = s_suppkey AND s_nationkey = n_nationkey AND n_name = 'GERMANY' GROUP BY ps_partkey HAVING sum(ps_supplycost * ps_availqty::float) > ( SELECT sum(ps_supplycost * ps_availqty::float) * 0.0001 FROM partsupp, supplier, nation WHERE ps_suppkey = s_suppkey AND s_nationkey = n_nationkey AND n_name = 'GERMANY') ORDER BY value DESC
9h57m15s 0 0.0 0.0 0.0 0.0 0.0 0.0 9: SELECT sum(l_extendedprice) / 7.0 AS avg_yearly FROM lineitem, part WHERE p_partkey = l_partkey AND p_brand = 'Brand#23' AND p_container = 'MED BOX' AND l_quantity < ( SELECT 0.2 * avg(l_quantity) FROM lineitem WHERE l_partkey = p_partkey)
: signal: killed
cluster.go:2090,tpchbench.go:123,tpchbench.go:244,test_runner.go:691: Goexit() was called
```",2.0,"roachtest: tpchbench/tpchVec/nodes=3/cpu=4/sf=1 failed - SHA: https://github.com/cockroachdb/cockroach/commits/cfdaadc3514e7e8660f6c009ba159fdfd604f0a8
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=tpchbench/tpchVec/nodes=3/cpu=4/sf=1 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1409070&tab=buildLog
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190727-1409070/tpchbench/tpchVec/nodes=3/cpu=4/sf=1/run_1
test_runner.go:706: test timed out (10h0m0s)
tpchbench.go:119,cluster.go:2069,errgroup.go:57: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1564208378-06-n4cpu4:4 -- ./workload run querybench --db=tpch --concurrency=1 --query-file=tpchVec --num-runs=3 --max-ops=27 --vectorized=true {pgurl:1-3} --histograms=perf/stats.json --histograms-max-latency=8m20s returned:
stderr:
stdout:
TH AND l_returnflag = 'R' AND c_nationkey = n_nationkey GROUP BY c_custkey, c_name, c_acctbal, c_phone, n_name, c_address, c_comment ORDER BY revenue DESC LIMIT 20
9h57m15s 0 0.0 0.0 0.0 0.0 0.0 0.0 8: SELECT ps_partkey, sum(ps_supplycost * ps_availqty::float) AS value FROM partsupp, supplier, nation WHERE ps_suppkey = s_suppkey AND s_nationkey = n_nationkey AND n_name = 'GERMANY' GROUP BY ps_partkey HAVING sum(ps_supplycost * ps_availqty::float) > ( SELECT sum(ps_supplycost * ps_availqty::float) * 0.0001 FROM partsupp, supplier, nation WHERE ps_suppkey = s_suppkey AND s_nationkey = n_nationkey AND n_name = 'GERMANY') ORDER BY value DESC
9h57m15s 0 0.0 0.0 0.0 0.0 0.0 0.0 9: SELECT sum(l_extendedprice) / 7.0 AS avg_yearly FROM lineitem, part WHERE p_partkey = l_partkey AND p_brand = 'Brand#23' AND p_container = 'MED BOX' AND l_quantity < ( SELECT 0.2 * avg(l_quantity) FROM lineitem WHERE l_partkey = p_partkey)
: signal: killed
cluster.go:2090,tpchbench.go:123,tpchbench.go:244,test_runner.go:691: Goexit() was called
```",0,roachtest tpchbench tpchvec nodes cpu sf failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests tpchbench tpchvec nodes cpu sf pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts tpchbench tpchvec nodes cpu sf run test runner go test timed out tpchbench go cluster go errgroup go home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run querybench db tpch concurrency query file tpchvec num runs max ops vectorized true pgurl histograms perf stats json histograms max latency returned stderr stdout th and l returnflag r and c nationkey n nationkey group by c custkey c name c acctbal c phone n name c address c comment order by revenue desc limit select ps partkey sum ps supplycost ps availqty float as value from partsupp supplier nation where ps suppkey s suppkey and s nationkey n nationkey and n name germany group by ps partkey having sum ps supplycost ps availqty float select sum ps supplycost ps availqty float from partsupp supplier nation where ps suppkey s suppkey and s nationkey n nationkey and n name germany order by value desc select sum l extendedprice as avg yearly from lineitem part where p partkey l partkey and p brand brand and p container med box and l quantity select avg l quantity from lineitem where l partkey p partkey signal killed cluster go tpchbench go tpchbench go test runner go goexit was called ,0
373,3368098195.0,IssuesEvent,2015-11-22 18:35:51,jenkinsci/slack-plugin,https://api.github.com/repos/jenkinsci/slack-plugin,opened,Don't store global config per job,contributions welcome enhancement maintainer communication,"I feel a major source of issues this plugin faces is how it stores global config in every job. What it should do instead is reference the global config in the Jenkins runtime. Job config should only contain settings related to the job.
This has caused issues in the past like updating global config not properly propagating to all jobs. As we move to slack-2.0 and beyond I could see this causing problems even more. As issues are opened related to this I'll link them to this issue.",True,"Don't store global config per job - I feel a major source of issues this plugin faces is how it stores global config in every job. What it should do instead is reference the global config in the Jenkins runtime. Job config should only contain settings related to the job.
This has caused issues in the past like updating global config not properly propagating to all jobs. As we move to slack-2.0 and beyond I could see this causing problems even more. As issues are opened related to this I'll link them to this issue.",1,don t store global config per job i feel a major source of issues this plugin faces is how it stores global config in every job what it should do instead is reference the global config in the jenkins runtime job config should only contain settings related to the job this has caused issues in the past like updating global config not properly propagating to all jobs as we move to slack and beyond i could see this causing problems even more as issues are opened related to this i ll link them to this issue ,1
285362,8757854566.0,IssuesEvent,2018-12-14 22:59:36,danielcaldas/react-d3-graph,https://api.github.com/repos/danielcaldas/react-d3-graph,closed,Display Name of Edge on Graph,duplicate feature request in progress priority normal wontfix,"I was wondering if there was a way to display the label (or name attribute) of an edge on the graph so that it would be easy to see what the relationship between two nodes is? Read through the documentation but wasn't able to find anything, apologies if i missed it.",1.0,"Display Name of Edge on Graph - I was wondering if there was a way to display the label (or name attribute) of an edge on the graph so that it would be easy to see what the relationship between two nodes is? Read through the documentation but wasn't able to find anything, apologies if i missed it.",0,display name of edge on graph i was wondering if there was a way to display the label or name attribute of an edge on the graph so that it would be easy to see what the relationship between two nodes is read through the documentation but wasn t able to find anything apologies if i missed it ,0
5820,30794685226.0,IssuesEvent,2023-07-31 18:53:02,professor-greebie/SENG8080-1-field_project,https://api.github.com/repos/professor-greebie/SENG8080-1-field_project,opened,Data storage - script,Data Storage and Maintainance,"Hi Data Storage team, @prsnt , can you please provide the script for storing the data to the DevOps team? ",True,"Data storage - script - Hi Data Storage team, @prsnt , can you please provide the script for storing the data to the DevOps team? ",1,data storage script hi data storage team prsnt can you please provide the script for storing the data to the devops team ,1
5178,26347684691.0,IssuesEvent,2023-01-11 00:12:29,mozilla/foundation.mozilla.org,https://api.github.com/repos/mozilla/foundation.mozilla.org,closed,Update node container,engineering maintain unplanned,"# Description
When running commands like `inv new-env` or `inv catch-up` the `package-lock.json` file keeps getting updated.
I believe this might have something to do with the node version that is used in the Dockerfile (which is 14.13.1).
I see errors like this:
```
npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@1, but package-lock.json was generated for lockfileVersion@2. I'll try to do my best with it!
```
We should probably update the node version that is used in the container. We also need to make sure that running `inv new-env` or `inv catch-up` can be run without changed to the `package-lock.json` file.
We probably want to make sure that the development node and the live version also match. Production is currently on `19.2.0`. See also: https://devcenter.heroku.com/articles/nodejs-support
We can set this to something else too in the `package.json`.
Considering the support table, using version `18.x` seems reasonable, LTS supported until 2025.
# Acceptance criteria
- [ ] As a developer on my local setup, when I run `inv new-env` or `inv catchup` the `package-lock.json` file is not updated but stays the same as before I ran either command.
- [ ] The development container uses the `node` version as we do in production.
",True,"Update node container - # Description
When running commands like `inv new-env` or `inv catch-up` the `package-lock.json` file keeps getting updated.
I believe this might have something to do with the node version that is used in the Dockerfile (which is 14.13.1).
I see errors like this:
```
npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@1, but package-lock.json was generated for lockfileVersion@2. I'll try to do my best with it!
```
We should probably update the node version that is used in the container. We also need to make sure that running `inv new-env` or `inv catch-up` can be run without changed to the `package-lock.json` file.
We probably want to make sure that the development node and the live version also match. Production is currently on `19.2.0`. See also: https://devcenter.heroku.com/articles/nodejs-support
We can set this to something else too in the `package.json`.
Considering the support table, using version `18.x` seems reasonable, LTS supported until 2025.
# Acceptance criteria
- [ ] As a developer on my local setup, when I run `inv new-env` or `inv catchup` the `package-lock.json` file is not updated but stays the same as before I ran either command.
- [ ] The development container uses the `node` version as we do in production.
",1,update node container description when running commands like inv new env or inv catch up the package lock json file keeps getting updated i believe this might have something to do with the node version that is used in the dockerfile which is i see errors like this npm warn read shrinkwrap this version of npm is compatible with lockfileversion but package lock json was generated for lockfileversion i ll try to do my best with it we should probably update the node version that is used in the container we also need to make sure that running inv new env or inv catch up can be run without changed to the package lock json file we probably want to make sure that the development node and the live version also match production is currently on see also we can set this to something else too in the package json considering the support table using version x seems reasonable lts supported until acceptance criteria as a developer on my local setup when i run inv new env or inv catchup the package lock json file is not updated but stays the same as before i ran either command the development container uses the node version as we do in production ,1
1649,6572678727.0,IssuesEvent,2017-09-11 04:20:36,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,IPA: can't set password for ipa_user module,affects_2.3 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ipa
##### ANSIBLE VERSION
```
ansible 2.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Linux Mint 18
##### SUMMARY
Can't add password for ipa user through ipa_user module - password is always empty in IPA
##### STEPS TO REPRODUCE
Run ipa_user module with all required fields and password field filled.
```
- name: Ensure user is present
ipa_user:
name: ""{{ item.0.login }}""
state: present
givenname: ""{{ item.1.first_name }}""
sn: ""{{ item.1.last_name }}""
mail: ""{{ item.1.mail }}""
password: 123321
telephonenumber: ""{{ item.1.telnum }}""
title: ""{{ item.1.jobtitle }}""
ipa_host: ""{{ global_host }}""
ipa_user: ""{{ global_user }}""
ipa_pass: ""{{ global_pass }}""
validate_certs: no
with_subelements:
- ""{{ users_to_add }}""
- personal_data
ignore_errors: true
users_to_add:
- username: Harley Quinn
login: 90987264
password: ""adasdk212masd""
cluster_zone: Default
group: mininform
group_desc: ""Some random data for description""
personal_data:
- first_name: Harley
last_name: Quinn
mail: harley@gmail.com
telnum: +79788880132
jobtitle: Minister
- username: Vasya Pupkin
login: 77777777
password: ""adasdk212masd""
cluster_zone: Default
group: mininform
group_desc: ""Some random data for description""
personal_data:
- first_name: Vasya
last_name: Pupkin
mail: vasya@gmail.com
telnum: +7970000805
jobtitle: Vice minister
```
##### EXPECTED RESULTS
User creation with password expected.
##### ACTUAL RESULTS
User created has no password set. And module does not change user credentials (password) if you change it in playbook.
```
ok: [ipa111.krtech.loc] => (item=({u'username': u'Harley Quinn', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 90987264, u'password': u'adasdk212masd'}, {u'mail': u'harley@gmail.com', u'first_name': u'Harley', u'last_name
': u'Quinn', u'jobtitle': u'Minister', u'telnum': 79788880132}))
ok: [ipa111.krtech.loc] => (item=({u'username': u'Vasya Pupkin', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 77777777, u'password': u'adasdk212masd'}, {u'mail': u'vasya@gmail.com', u'first_name': u'Vasya', u'last_name':
u'Pupkin', u'jobtitle': u'Vice minister', u'telnum': 7970000805}))
```
",True,"IPA: can't set password for ipa_user module -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ipa
##### ANSIBLE VERSION
```
ansible 2.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Linux Mint 18
##### SUMMARY
Can't add password for ipa user through ipa_user module - password is always empty in IPA
##### STEPS TO REPRODUCE
Run ipa_user module with all required fields and password field filled.
```
- name: Ensure user is present
ipa_user:
name: ""{{ item.0.login }}""
state: present
givenname: ""{{ item.1.first_name }}""
sn: ""{{ item.1.last_name }}""
mail: ""{{ item.1.mail }}""
password: 123321
telephonenumber: ""{{ item.1.telnum }}""
title: ""{{ item.1.jobtitle }}""
ipa_host: ""{{ global_host }}""
ipa_user: ""{{ global_user }}""
ipa_pass: ""{{ global_pass }}""
validate_certs: no
with_subelements:
- ""{{ users_to_add }}""
- personal_data
ignore_errors: true
users_to_add:
- username: Harley Quinn
login: 90987264
password: ""adasdk212masd""
cluster_zone: Default
group: mininform
group_desc: ""Some random data for description""
personal_data:
- first_name: Harley
last_name: Quinn
mail: harley@gmail.com
telnum: +79788880132
jobtitle: Minister
- username: Vasya Pupkin
login: 77777777
password: ""adasdk212masd""
cluster_zone: Default
group: mininform
group_desc: ""Some random data for description""
personal_data:
- first_name: Vasya
last_name: Pupkin
mail: vasya@gmail.com
telnum: +7970000805
jobtitle: Vice minister
```
##### EXPECTED RESULTS
User creation with password expected.
##### ACTUAL RESULTS
User created has no password set. And module does not change user credentials (password) if you change it in playbook.
```
ok: [ipa111.krtech.loc] => (item=({u'username': u'Harley Quinn', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 90987264, u'password': u'adasdk212masd'}, {u'mail': u'harley@gmail.com', u'first_name': u'Harley', u'last_name
': u'Quinn', u'jobtitle': u'Minister', u'telnum': 79788880132}))
ok: [ipa111.krtech.loc] => (item=({u'username': u'Vasya Pupkin', u'group': u'mininform', u'cluster_zone': u'Default', u'group_desc': u'Some rando
m data for description', u'login': 77777777, u'password': u'adasdk212masd'}, {u'mail': u'vasya@gmail.com', u'first_name': u'Vasya', u'last_name':
u'Pupkin', u'jobtitle': u'Vice minister', u'telnum': 7970000805}))
```
",1,ipa can t set password for ipa user module issue type bug report component name ipa ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific linux mint summary can t add password for ipa user through ipa user module password is always empty in ipa steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run ipa user module with all required fields and password field filled name ensure user is present ipa user name item login state present givenname item first name sn item last name mail item mail password telephonenumber item telnum title item jobtitle ipa host global host ipa user global user ipa pass global pass validate certs no with subelements users to add personal data ignore errors true users to add username harley quinn login password cluster zone default group mininform group desc some random data for description personal data first name harley last name quinn mail harley gmail com telnum jobtitle minister username vasya pupkin login password cluster zone default group mininform group desc some random data for description personal data first name vasya last name pupkin mail vasya gmail com telnum jobtitle vice minister expected results user creation with password expected actual results user created has no password set and module does not change user credentials password if you change it in playbook ok item u username u harley quinn u group u mininform u cluster zone u default u group desc u some rando m data for description u login u password u u mail u harley gmail com u first name u harley u last name u quinn u jobtitle u minister u telnum ok item u username u vasya pupkin u group u mininform u cluster zone u default u group desc u some rando m data for description u login u password u u mail u vasya gmail com u first name u vasya u last name u pupkin u jobtitle u vice minister u telnum ,1
763381,26754771178.0,IssuesEvent,2023-01-30 22:54:39,brave/brave-browser,https://api.github.com/repos/brave/brave-browser,opened,Solana provider renderer crash,priority/P1 OS/Desktop feature/web3/wallet/solana feature/web3/wallet/dapps,"da590600-668b-8709-0000-000000000000
795a0600-668b-8709-0000-000000000000
7a5a0600-668b-8709-0000-000000000000
```
[ 00 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( render_frame_impl.cc:2299 )
[ 01 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( js_solana_provider.cc:1000 )
[ 02 ] network::mojom::CookieManager_DeleteCanonicalCookie_ForwardToCallback::Accept(mojo::Message*) ( callback.h:152 )
[ 03 ] mojo::InterfaceEndpointClient::HandleValidatedMessage(mojo::Message*) ( interface_endpoint_client.cc:1002 )
[ 04 ] mojo::internal::MultiplexRouter::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 05 ] mojo::MessageDispatcher::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 06 ] base::internal::Invoker>, void (unsigned int)>::Run(base::internal::BindStateBase, unsigned int) ( connector.cc:542 )
[ 07 ] base::internal::Invoker, int, unsigned int, mojo::HandleSignalsState>, void ()>::RunOnce(base::internal::BindStateBase) ( callback.h:333 )
[ 08 ] non-virtual thunk to base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork() ( callback.h:152 )
[ 09 ] base::MessagePumpCFRunLoopBase::RunWork() ( message_pump_mac.mm:475 )
[ 10 ] base::mac::CallWithEHFrame(void () block_pointer)
[ 11 ] base::MessagePumpCFRunLoopBase::RunWorkSource(void*) ( message_pump_mac.mm:447 )
[ 12 ] 0x1ac7a9a30
[ 13 ] 0x1ac7a99c4
[ 14 ] 0x1ac7a9734
[ 15 ] 0x1ac7a8338
[ 16 ] 0x1ac7a78a0
[ 17 ] 0x1ad6afe54
[ 18 ] base::MessagePumpNSRunLoop::DoRun(base::MessagePump::Delegate*) ( message_pump_mac.mm:768 )
[ 19 ] base::MessagePumpCFRunLoopBase::Run(base::MessagePump::Delegate*) ( message_pump_mac.mm:172 )
[ 20 ] base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run(bool, base::TimeDelta) ( thread_controller_with_message_pump_impl.cc:644 )
[ 21 ] base::RunLoop::Run(base::Location const&) ( run_loop.cc:0 )
[ 22 ] content::RendererMain(content::MainFunctionParams) ( renderer_main.cc:330 )
[ 23 ] content::RunOtherNamedProcessTypeMain(std::Cr::basic_string, std::Cr::allocator> const&, content::MainFunctionParams, content::ContentMainDelegate*) ( content_main_runner_impl.cc:746 )
[ 24 ] content::ContentMainRunnerImpl::Run() ( content_main_runner_impl.cc:1100 )
[ 25 ] content::RunContentProcess(content::ContentMainParams, content::ContentMainRunner*) ( content_main.cc:344 )
[ 26 ] content::ContentMain(content::ContentMainParams) ( content_main.cc:372 )
[ 27 ] ChromeMain ( chrome_main.cc:174 )
[ 28 ] main ( chrome_exe_main_mac.cc:216 )
[ 29 ] 0x1ac39fe4c
```",1.0,"Solana provider renderer crash - da590600-668b-8709-0000-000000000000
795a0600-668b-8709-0000-000000000000
7a5a0600-668b-8709-0000-000000000000
```
[ 00 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( render_frame_impl.cc:2299 )
[ 01 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( js_solana_provider.cc:1000 )
[ 02 ] network::mojom::CookieManager_DeleteCanonicalCookie_ForwardToCallback::Accept(mojo::Message*) ( callback.h:152 )
[ 03 ] mojo::InterfaceEndpointClient::HandleValidatedMessage(mojo::Message*) ( interface_endpoint_client.cc:1002 )
[ 04 ] mojo::internal::MultiplexRouter::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 05 ] mojo::MessageDispatcher::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 06 ] base::internal::Invoker>, void (unsigned int)>::Run(base::internal::BindStateBase, unsigned int) ( connector.cc:542 )
[ 07 ] base::internal::Invoker, int, unsigned int, mojo::HandleSignalsState>, void ()>::RunOnce(base::internal::BindStateBase) ( callback.h:333 )
[ 08 ] non-virtual thunk to base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork() ( callback.h:152 )
[ 09 ] base::MessagePumpCFRunLoopBase::RunWork() ( message_pump_mac.mm:475 )
[ 10 ] base::mac::CallWithEHFrame(void () block_pointer)
[ 11 ] base::MessagePumpCFRunLoopBase::RunWorkSource(void*) ( message_pump_mac.mm:447 )
[ 12 ] 0x1ac7a9a30
[ 13 ] 0x1ac7a99c4
[ 14 ] 0x1ac7a9734
[ 15 ] 0x1ac7a8338
[ 16 ] 0x1ac7a78a0
[ 17 ] 0x1ad6afe54
[ 18 ] base::MessagePumpNSRunLoop::DoRun(base::MessagePump::Delegate*) ( message_pump_mac.mm:768 )
[ 19 ] base::MessagePumpCFRunLoopBase::Run(base::MessagePump::Delegate*) ( message_pump_mac.mm:172 )
[ 20 ] base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run(bool, base::TimeDelta) ( thread_controller_with_message_pump_impl.cc:644 )
[ 21 ] base::RunLoop::Run(base::Location const&) ( run_loop.cc:0 )
[ 22 ] content::RendererMain(content::MainFunctionParams) ( renderer_main.cc:330 )
[ 23 ] content::RunOtherNamedProcessTypeMain(std::Cr::basic_string, std::Cr::allocator> const&, content::MainFunctionParams, content::ContentMainDelegate*) ( content_main_runner_impl.cc:746 )
[ 24 ] content::ContentMainRunnerImpl::Run() ( content_main_runner_impl.cc:1100 )
[ 25 ] content::RunContentProcess(content::ContentMainParams, content::ContentMainRunner*) ( content_main.cc:344 )
[ 26 ] content::ContentMain(content::ContentMainParams) ( content_main.cc:372 )
[ 27 ] ChromeMain ( chrome_main.cc:174 )
[ 28 ] main ( chrome_exe_main_mac.cc:216 )
[ 29 ] 0x1ac39fe4c
```",0,solana provider renderer crash brave wallet jssolanaprovider onissolanakeyringcreated bool render frame impl cc brave wallet jssolanaprovider onissolanakeyringcreated bool js solana provider cc network mojom cookiemanager deletecanonicalcookie forwardtocallback accept mojo message callback h mojo interfaceendpointclient handlevalidatedmessage mojo message interface endpoint client cc mojo internal multiplexrouter accept mojo message message dispatcher cc mojo messagedispatcher accept mojo message message dispatcher cc base internal invoker void unsigned int run base internal bindstatebase unsigned int connector cc base internal invoker int unsigned int mojo handlesignalsstate void runonce base internal bindstatebase callback h non virtual thunk to base sequence manager internal threadcontrollerwithmessagepumpimpl dowork callback h base messagepumpcfrunloopbase runwork message pump mac mm base mac callwithehframe void block pointer base messagepumpcfrunloopbase runworksource void message pump mac mm base messagepumpnsrunloop dorun base messagepump delegate message pump mac mm base messagepumpcfrunloopbase run base messagepump delegate message pump mac mm base sequence manager internal threadcontrollerwithmessagepumpimpl run bool base timedelta thread controller with message pump impl cc base runloop run base location const run loop cc content renderermain content mainfunctionparams renderer main cc content runothernamedprocesstypemain std cr basic string std cr allocator const content mainfunctionparams content contentmaindelegate content main runner impl cc content contentmainrunnerimpl run content main runner impl cc content runcontentprocess content contentmainparams content contentmainrunner content main cc content contentmain content contentmainparams content main cc chromemain chrome main cc main chrome exe main mac cc ,0
1788,6575880684.0,IssuesEvent,2017-09-11 17:41:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,module apt: No package matching 'libXrender1' is available,affects_2.1 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /home/mgrimm/workspace/automated-deployment/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
no changes
##### OS / ENVIRONMENT
running from Ubuntu 16.04 LTS
running on Debian 7 Wheezy
##### SUMMARY
When running module apt with name=libXrender1, I get the following error message:
```
No package matching 'libXrender1' is available
```
I can install the lib manually with `sudo apt-get install libXrender1`. The error persists also when the library is already installed.
##### STEPS TO REPRODUCE
Just run the playbook.
playbook `configure-tp-server.yml`:
```
- name: Configure prduction TP server
hosts: all
become: True
roles:
- tpserver-setup
```
`tpserver-setup/tasks/main.yml`:
```
- apt: name=libXrender1
```
##### EXPECTED RESULTS
libXrender1 should get installed without errors.
##### ACTUAL RESULTS
```
$ ansible-playbook configure-tp-server.yml -i production --limit ""tp-10015"" -vvvv
Using /home/mgrimm/workspace/automated-deployment/ansible.cfg as config file
Loaded callback default of type stdout, v2.0
PLAYBOOK: configure-tp-server.yml **********************************************
1 plays in configure-tp-server.yml
PLAY [Configure prduction TP server] *******************************************
TASK [setup] *******************************************************************
ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474924752.61-24700385564042 `"" && echo ansible-tmp-1474924752.61-24700385564042=""` echo $HOME/.ansible/tmp/ansible-tmp-1474924752.61-24700385564042 `"" ) && sleep 0'""'""''
PUT /tmp/tmpEaNTPo TO /home//.ansible/tmp/ansible-tmp-1474924752.61-24700385564042/setup
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '[]'
<> ESTABLISH SSH CONNECTION FOR USER:
<> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r -tt '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=gtgdkwqkitajhqfqjqgmesejbryahsec] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-gtgdkwqkitajhqfqjqgmesejbryahsec; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1474924752.61-24700385564042/setup; rm -rf ""/home//.ansible/tmp/ansible-tmp-1474924752.61-24700385564042/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""''
ok: [tp-10015]
TASK [tpserver-setup : apt] ****************************************************
task path: /home/mgrimm/workspace/automated-deployment/roles/tpserver-setup/tasks/main.yml:1
ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474924754.53-63418801820228 `"" && echo ansible-tmp-1474924754.53-63418801820228=""` echo $HOME/.ansible/tmp/ansible-tmp-1474924754.53-63418801820228 `"" ) && sleep 0'""'""''
<> PUT /tmp/tmp23cv6m TO /home//.ansible/tmp/ansible-tmp-1474924754.53-63418801820228/apt
<> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '[]'
<> ESTABLISH SSH CONNECTION FOR USER:
<> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r -tt '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=myzoijofvcxkaoxxufjaxrpvbmciltvn] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-myzoijofvcxkaoxxufjaxrpvbmciltvn; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1474924754.53-63418801820228/apt; rm -rf ""/home//.ansible/tmp/ansible-tmp-1474924754.53-63418801820228/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""''
fatal: [tp-10015]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": null, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""libXrender1"", ""only_upgrade"": false, ""package"": [""libXrender1""], ""purge"": false, ""state"": ""present"", ""update_cache"": false, ""upgrade"": null}, ""module_name"": ""apt""}, ""msg"": ""No package matching 'libXrender1' is available""}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
tp-10015 : ok=1 changed=0 unreachable=0 failed=1
```
I have replaced the address and username of the target host by `` and `` for security reasons.
",True,"module apt: No package matching 'libXrender1' is available -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /home/mgrimm/workspace/automated-deployment/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
no changes
##### OS / ENVIRONMENT
running from Ubuntu 16.04 LTS
running on Debian 7 Wheezy
##### SUMMARY
When running module apt with name=libXrender1, I get the following error message:
```
No package matching 'libXrender1' is available
```
I can install the lib manually with `sudo apt-get install libXrender1`. The error persists also when the library is already installed.
##### STEPS TO REPRODUCE
Just run the playbook.
playbook `configure-tp-server.yml`:
```
- name: Configure prduction TP server
hosts: all
become: True
roles:
- tpserver-setup
```
`tpserver-setup/tasks/main.yml`:
```
- apt: name=libXrender1
```
##### EXPECTED RESULTS
libXrender1 should get installed without errors.
##### ACTUAL RESULTS
```
$ ansible-playbook configure-tp-server.yml -i production --limit ""tp-10015"" -vvvv
Using /home/mgrimm/workspace/automated-deployment/ansible.cfg as config file
Loaded callback default of type stdout, v2.0
PLAYBOOK: configure-tp-server.yml **********************************************
1 plays in configure-tp-server.yml
PLAY [Configure prduction TP server] *******************************************
TASK [setup] *******************************************************************
ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474924752.61-24700385564042 `"" && echo ansible-tmp-1474924752.61-24700385564042=""` echo $HOME/.ansible/tmp/ansible-tmp-1474924752.61-24700385564042 `"" ) && sleep 0'""'""''
PUT /tmp/tmpEaNTPo TO /home//.ansible/tmp/ansible-tmp-1474924752.61-24700385564042/setup
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '[]'
<> ESTABLISH SSH CONNECTION FOR USER:
<> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r -tt '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=gtgdkwqkitajhqfqjqgmesejbryahsec] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-gtgdkwqkitajhqfqjqgmesejbryahsec; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1474924752.61-24700385564042/setup; rm -rf ""/home//.ansible/tmp/ansible-tmp-1474924752.61-24700385564042/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""''
ok: [tp-10015]
TASK [tpserver-setup : apt] ****************************************************
task path: /home/mgrimm/workspace/automated-deployment/roles/tpserver-setup/tasks/main.yml:1
ESTABLISH SSH CONNECTION FOR USER: SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '/bin/sh -c '""'""'( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1474924754.53-63418801820228 `"" && echo ansible-tmp-1474924754.53-63418801820228=""` echo $HOME/.ansible/tmp/ansible-tmp-1474924754.53-63418801820228 `"" ) && sleep 0'""'""''
<> PUT /tmp/tmp23cv6m TO /home//.ansible/tmp/ansible-tmp-1474924754.53-63418801820228/apt
<> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r '[]'
<> ESTABLISH SSH CONNECTION FOR USER:
<> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile=""/home/mgrimm/.ssh/id_rsa""' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User= -o ConnectTimeout=10 -o ControlPath=/home/mgrimm/.ansible/cp/ansible-ssh-%h-%p-%r -tt '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=myzoijofvcxkaoxxufjaxrpvbmciltvn] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-myzoijofvcxkaoxxufjaxrpvbmciltvn; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home//.ansible/tmp/ansible-tmp-1474924754.53-63418801820228/apt; rm -rf ""/home//.ansible/tmp/ansible-tmp-1474924754.53-63418801820228/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""' && sleep 0'""'""''
fatal: [tp-10015]: FAILED! => {""changed"": false, ""failed"": true, ""invocation"": {""module_args"": {""allow_unauthenticated"": false, ""autoremove"": false, ""cache_valid_time"": null, ""deb"": null, ""default_release"": null, ""dpkg_options"": ""force-confdef,force-confold"", ""force"": false, ""install_recommends"": null, ""name"": ""libXrender1"", ""only_upgrade"": false, ""package"": [""libXrender1""], ""purge"": false, ""state"": ""present"", ""update_cache"": false, ""upgrade"": null}, ""module_name"": ""apt""}, ""msg"": ""No package matching 'libXrender1' is available""}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
tp-10015 : ok=1 changed=0 unreachable=0 failed=1
```
I have replaced the address and username of the target host by `` and `` for security reasons.
",1,module apt no package matching is available issue type bug report component name apt ansible version ansible config file home mgrimm workspace automated deployment ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific running from ubuntu lts running on debian wheezy summary when running module apt with name i get the following error message no package matching is available i can install the lib manually with sudo apt get install the error persists also when the library is already installed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used just run the playbook playbook configure tp server yml name configure prduction tp server hosts all become true roles tpserver setup tpserver setup tasks main yml apt name expected results should get installed without errors actual results ansible playbook configure tp server yml i production limit tp vvvv using home mgrimm workspace automated deployment ansible cfg as config file loaded callback default of type stdout playbook configure tp server yml plays in configure tp server yml play task establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home mgrimm ssh id rsa o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath home mgrimm ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpeantpo to home ansible tmp ansible tmp setup ssh exec sftp b c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home mgrimm ssh id rsa o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath home mgrimm ansible cp ansible ssh h p r establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home mgrimm ssh id rsa o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath home mgrimm ansible cp ansible ssh h p r tt bin sh c sudo h s p password u root bin sh c echo become success gtgdkwqkitajhqfqjqgmesejbryahsec lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp setup rm rf home ansible tmp ansible tmp dev null sleep ok task task path home mgrimm workspace automated deployment roles tpserver setup tasks main yml establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home mgrimm ssh id rsa o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath home mgrimm ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp apt ssh exec sftp b c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home mgrimm ssh id rsa o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath home mgrimm ansible cp ansible ssh h p r establish ssh connection for user ssh exec ssh c vvv o controlmaster auto o controlpersist o stricthostkeychecking no o port o identityfile home mgrimm ssh id rsa o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user o connecttimeout o controlpath home mgrimm ansible cp ansible ssh h p r tt bin sh c sudo h s p password u root bin sh c echo become success myzoijofvcxkaoxxufjaxrpvbmciltvn lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp apt rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args allow unauthenticated false autoremove false cache valid time null deb null default release null dpkg options force confdef force confold force false install recommends null name only upgrade false package purge false state present update cache false upgrade null module name apt msg no package matching is available no more hosts left play recap tp ok changed unreachable failed i have replaced the address and username of the target host by and for security reasons ,1
2988,3995617790.0,IssuesEvent,2016-05-10 16:02:29,Comcast/traffic_control,https://api.github.com/repos/Comcast/traffic_control,closed,TC: Ansible Playbooks - Kibana,enhancement Infrastructure,"Add Ansible playbook for Kibana
Acceptance Criteria
- Add Kibana Role
- Test that playbooks run correctly
- Document",1.0,"TC: Ansible Playbooks - Kibana - Add Ansible playbook for Kibana
Acceptance Criteria
- Add Kibana Role
- Test that playbooks run correctly
- Document",0,tc ansible playbooks kibana add ansible playbook for kibana acceptance criteria add kibana role test that playbooks run correctly document,0
4424,22794596786.0,IssuesEvent,2022-07-10 14:20:43,Lissy93/dashy,https://api.github.com/repos/Lissy93/dashy,closed,[FEATURE_REQUEST] Sabnzbd widget,🦄 Feature Request 👤 Awaiting Maintainer Response,"### Is your feature request related to a problem? If so, please describe.
No problem, but I do miss a feature available in Heimdall. No urgency for you to reply; take your time.
As an aside, I love the app. It's a little memory intensive bit it is incredibly flexible and looks to do everything I'm looking for. Icon handling is great.
### Describe the solution you'd like
1) Display current sizeleft and speed for live queue - This is available in Heimdall
2) Display most recent downloads from history (my example is showing most recent 3) - this would be an extension
I believe this meets your requirements of:
1. Publicly accesable API - Sabnzbd API details are here: (https://sabnzbd.org/wiki/configuration/3.6/api)
2. CORS and HTTPS enabled - Actually I'm not familiar with CORS and I see no mention of it on the API page. I have HTTPS turned off on my Sabnzbd server but I could easily turn it on.
3. Free to use - I'm hosting it locally so it is free.
4. Allow for use in their TOS - I expect so (Heimdall is using it)
5. Would be useful for others - I expect so
Sizeleft and speed are queried by this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=queue` with the following results:
`3.6.0False0False383.364913.81383.4 G4.8 T476.6914900.060.79 | 0.42 | 0.32 | V=114M R=82M5026214400.000 False0 00 B0.00
**0 **
0.000.00
**0 B**
0 B00000Idle0:00:00Defaulttvmoviessoftwareaudioreadarr`
History is available via this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=history&limit=3` with the following results:
`5.7 T20.6 G13.6 G0 121643546003Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOSTDiners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzbtvDDiners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzbCompletedSABnzbd_nzo_xsjmambl/Media/Downloads/SABnzbd/tv/Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOST/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.mkv/incomplete-downloads/Sabnzbs-incomplete-downloads/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST392SourceDiners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzbDownloadDownloaded in 39 seconds at an average of 12.0 MB/s Age: 23hServersNewsgroupdirect*Usenetexpress=470.5 MBRepair[f3a59282b6b2e7371b40ef91519b753a] Quick Check OKUnpack[f3a59282b6b2e7371b40ef91519b753a] Unpacked 1 files/folders in 2 seconds493332395493332395diners drive-ins and dives/42/1900ba45a759377097519e9fd11722ef6470.5 MBFalse081639547751Dogs 101 S03E04 720p HDTV x264-CBFMDogs.101.S03E04.720p.HDTV.x264-CBFM.nzbtvDDogs.101.S03E04.720p.HDTV.x264-CBFM.nzbCompletedSABnzbd_nzo_9hm_9e9y/Media/Downloads/SABnzbd/tv/Dogs 101 S03E04 720p HDTV x264-CBFM/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs.101.S03E04.720p.HDTV.x264-CBFM11421SourceDogs.101.S03E04.720p.HDTV.x264-CBFM.nzbDownloadDownloaded in 1 min 54 seconds at an average of 11.4 MB/s Age: 2140dServersNewsgroupdirect*Usenetexpress=1.2 GB, Thecubenet*Usenetexpress=3 KB, Vipernews=3 KB, Maximumusenet*Omicron=114.7 MBRepair[dogs101.0304.720p-cbfm] Quick Check OKUnpack[dogs101.0304.720p-cbfm] Unpacked 1 files/folders in 21 seconds13686723221368672322dogs 101/3/457fc20c4efbd669d2384ac6e28d3346e1.3 GBFalse071639547700Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHDDogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzbtvDDogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzbCompletedSABnzbd_nzo_hbw9kocp/Media/Downloads/SABnzbd/tv/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHD/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.ts/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD40379SourceDogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzbDownloadDownloaded in 6 mins 43 seconds at an average of 11.3 MB/s Age: 3738dServersNewsgroupdirect*Usenetexpress=149 KB, Thecubenet*Usenetexpress=152 KB, Vipernews=152 KB, Maximumusenet*Omicron=4.4 GBRepair[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Quick Check OKUnpack[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Unpacked 1 files/folders in 1 min 19 seconds47696207214769620721dogs 101/4/4b89cf7f92794e65c7bc65487f002a1be4.4 GBFalse071153.6.0`
XML formatting was stripped as part of the copy/paste bu tI'm sure you get the idea. I'll attach file to make it easier.
[mode=queue.txt](https://github.com/Lissy93/dashy/files/8883292/mode.queue.txt)
[mode=history.txt](https://github.com/Lissy93/dashy/files/8883293/mode.history.txt)
### Priority
Low (Nice-to-have)
### Is this something you would be keen to implement
_No response_",True,"[FEATURE_REQUEST] Sabnzbd widget - ### Is your feature request related to a problem? If so, please describe.
No problem, but I do miss a feature available in Heimdall. No urgency for you to reply; take your time.
As an aside, I love the app. It's a little memory intensive bit it is incredibly flexible and looks to do everything I'm looking for. Icon handling is great.
### Describe the solution you'd like
1) Display current sizeleft and speed for live queue - This is available in Heimdall
2) Display most recent downloads from history (my example is showing most recent 3) - this would be an extension
I believe this meets your requirements of:
1. Publicly accesable API - Sabnzbd API details are here: (https://sabnzbd.org/wiki/configuration/3.6/api)
2. CORS and HTTPS enabled - Actually I'm not familiar with CORS and I see no mention of it on the API page. I have HTTPS turned off on my Sabnzbd server but I could easily turn it on.
3. Free to use - I'm hosting it locally so it is free.
4. Allow for use in their TOS - I expect so (Heimdall is using it)
5. Would be useful for others - I expect so
Sizeleft and speed are queried by this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=queue` with the following results:
`3.6.0False0False383.364913.81383.4 G4.8 T476.6914900.060.79 | 0.42 | 0.32 | V=114M R=82M5026214400.000 False0 00 B0.00
**0 **
0.000.00
**0 B**
0 B00000Idle0:00:00Defaulttvmoviessoftwareaudioreadarr`
History is available via this commend: `http://[ip:port]/sabnzbd/api?output=xml&apikey=[apikey]&mode=history&limit=3` with the following results:
`5.7 T20.6 G13.6 G0 121643546003Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOSTDiners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzbtvDDiners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzbCompletedSABnzbd_nzo_xsjmambl/Media/Downloads/SABnzbd/tv/Diners Drive-Ins and Dives S42E01 From Europe to Asia 720p WEBRip x264-KOMPOST/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.mkv/incomplete-downloads/Sabnzbs-incomplete-downloads/Diners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST392SourceDiners.Drive-Ins.and.Dives.S42E01.From.Europe.to.Asia.720p.WEBRip.x264-KOMPOST.nzbDownloadDownloaded in 39 seconds at an average of 12.0 MB/s Age: 23hServersNewsgroupdirect*Usenetexpress=470.5 MBRepair[f3a59282b6b2e7371b40ef91519b753a] Quick Check OKUnpack[f3a59282b6b2e7371b40ef91519b753a] Unpacked 1 files/folders in 2 seconds493332395493332395diners drive-ins and dives/42/1900ba45a759377097519e9fd11722ef6470.5 MBFalse081639547751Dogs 101 S03E04 720p HDTV x264-CBFMDogs.101.S03E04.720p.HDTV.x264-CBFM.nzbtvDDogs.101.S03E04.720p.HDTV.x264-CBFM.nzbCompletedSABnzbd_nzo_9hm_9e9y/Media/Downloads/SABnzbd/tv/Dogs 101 S03E04 720p HDTV x264-CBFM/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs.101.S03E04.720p.HDTV.x264-CBFM11421SourceDogs.101.S03E04.720p.HDTV.x264-CBFM.nzbDownloadDownloaded in 1 min 54 seconds at an average of 11.4 MB/s Age: 2140dServersNewsgroupdirect*Usenetexpress=1.2 GB, Thecubenet*Usenetexpress=3 KB, Vipernews=3 KB, Maximumusenet*Omicron=114.7 MBRepair[dogs101.0304.720p-cbfm] Quick Check OKUnpack[dogs101.0304.720p-cbfm] Unpacked 1 files/folders in 21 seconds13686723221368672322dogs 101/3/457fc20c4efbd669d2384ac6e28d3346e1.3 GBFalse071639547700Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHDDogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzbtvDDogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzbCompletedSABnzbd_nzo_hbw9kocp/Media/Downloads/SABnzbd/tv/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5 1 MPEG2-TrollHD/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.ts/incomplete-downloads/Sabnzbs-incomplete-downloads/Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD40379SourceDogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD.nzbDownloadDownloaded in 6 mins 43 seconds at an average of 11.3 MB/s Age: 3738dServersNewsgroupdirect*Usenetexpress=149 KB, Thecubenet*Usenetexpress=152 KB, Vipernews=152 KB, Maximumusenet*Omicron=4.4 GBRepair[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Quick Check OKUnpack[Dogs 101 S04E04 Grooming Special 1080i HDTV DD5.1 MPEG2-TrollHD] Unpacked 1 files/folders in 1 min 19 seconds47696207214769620721dogs 101/4/4b89cf7f92794e65c7bc65487f002a1be4.4 GBFalse071153.6.0`
XML formatting was stripped as part of the copy/paste bu tI'm sure you get the idea. I'll attach file to make it easier.
[mode=queue.txt](https://github.com/Lissy93/dashy/files/8883292/mode.queue.txt)
[mode=history.txt](https://github.com/Lissy93/dashy/files/8883293/mode.history.txt)
### Priority
Low (Nice-to-have)
### Is this something you would be keen to implement
_No response_",1, sabnzbd widget is your feature request related to a problem if so please describe no problem but i do miss a feature available in heimdall no urgency for you to reply take your time as an aside i love the app it s a little memory intensive bit it is incredibly flexible and looks to do everything i m looking for icon handling is great describe the solution you d like display current sizeleft and speed for live queue this is available in heimdall display most recent downloads from history my example is showing most recent this would be an extension i believe this meets your requirements of publicly accesable api sabnzbd api details are here cors and https enabled actually i m not familiar with cors and i see no mention of it on the api page i have https turned off on my sabnzbd server but i could easily turn it on free to use i m hosting it locally so it is free allow for use in their tos i expect so heimdall is using it would be useful for others i expect so sizeleft and speed are queried by this commend http sabnzbd api output xml apikey mode queue with the following results false false g t v r false b b b idle default tv movies software audio readarr history is available via this commend http sabnzbd api output xml apikey mode history limit with the following results t g g diners drive ins and dives from europe to asia webrip kompost diners drive ins and dives from europe to asia webrip kompost nzb tv d none diners drive ins and dives from europe to asia webrip kompost nzb completed sabnzbd nzo xsjmambl media downloads sabnzbd tv diners drive ins and dives from europe to asia webrip kompost diners drive ins and dives from europe to asia webrip kompost mkv incomplete downloads sabnzbs incomplete downloads diners drive ins and dives from europe to asia webrip kompost source diners drive ins and dives from europe to asia webrip kompost nzb download downloaded in seconds at an average of mb s age servers newsgroupdirect usenetexpress mb repair quick check ok unpack unpacked files folders in seconds diners drive ins and dives mb false dogs hdtv cbfm dogs hdtv cbfm nzb tv d none dogs hdtv cbfm nzb completed sabnzbd nzo media downloads sabnzbd tv dogs hdtv cbfm incomplete downloads sabnzbs incomplete downloads dogs hdtv cbfm source dogs hdtv cbfm nzb download downloaded in min seconds at an average of mb s age servers newsgroupdirect usenetexpress gb thecubenet usenetexpress kb vipernews kb maximumusenet omicron mb repair quick check ok unpack unpacked files folders in seconds dogs gb false dogs grooming special hdtv trollhd dogs grooming special hdtv trollhd nzb tv d none dogs grooming special hdtv trollhd nzb completed sabnzbd nzo media downloads sabnzbd tv dogs grooming special hdtv trollhd dogs grooming special hdtv trollhd ts incomplete downloads sabnzbs incomplete downloads dogs grooming special hdtv trollhd source dogs grooming special hdtv trollhd nzb download downloaded in mins seconds at an average of mb s age servers newsgroupdirect usenetexpress kb thecubenet usenetexpress kb vipernews kb maximumusenet omicron gb repair quick check ok unpack unpacked files folders in min seconds dogs gb false xml formatting was stripped as part of the copy paste bu ti m sure you get the idea i ll attach file to make it easier priority low nice to have is this something you would be keen to implement no response ,1
407920,11939302607.0,IssuesEvent,2020-04-02 15:01:12,UniversityOfHelsinkiCS/mobvita,https://api.github.com/repos/UniversityOfHelsinkiCS/mobvita,closed,on-screen RU keyboard,Desktop Only feature priority1,"available through sidebar (?)
There is some weirdly working code in the branch virtual-keyboard",1.0,"on-screen RU keyboard - available through sidebar (?)
There is some weirdly working code in the branch virtual-keyboard",0,on screen ru keyboard available through sidebar there is some weirdly working code in the branch virtual keyboard,0
509701,14741974360.0,IssuesEvent,2021-01-07 11:28:57,mantidproject/mantid,https://api.github.com/repos/mantidproject/mantid,opened,Crash in MA When Autoscale Y Pressed With No Data,High Priority ISIS Team: Spectroscopy,"### Expected behavior
Should have no effect
### Actual behavior
crashes
### Steps to reproduce the behavior
1. Open MA
2. Check AutoScale y on the plot widget
### Platforms affected
all
",1.0,"Crash in MA When Autoscale Y Pressed With No Data - ### Expected behavior
Should have no effect
### Actual behavior
crashes
### Steps to reproduce the behavior
1. Open MA
2. Check AutoScale y on the plot widget
### Platforms affected
all
",0,crash in ma when autoscale y pressed with no data expected behavior should have no effect actual behavior crashes steps to reproduce the behavior open ma check autoscale y on the plot widget platforms affected all ,0
91740,3862440666.0,IssuesEvent,2016-04-08 02:52:21,TranslationWMcs435/TranslationWMcs435,https://api.github.com/repos/TranslationWMcs435/TranslationWMcs435,closed,Make a more robust README and user instructions,Medium Priority,"We know how to use the code, but Carlos (or another group) might not. Make the instructions clear, so that we can be sure people use it right.",1.0,"Make a more robust README and user instructions - We know how to use the code, but Carlos (or another group) might not. Make the instructions clear, so that we can be sure people use it right.",0,make a more robust readme and user instructions we know how to use the code but carlos or another group might not make the instructions clear so that we can be sure people use it right ,0
1793,6575891994.0,IssuesEvent,2017-09-11 17:43:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,pip always tell changed for some packages,affects_2.1 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pip
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
forks = 100
vault_password_file = ~/.vault.password
[ssh_connection]
pipelining = False
```
##### OS / ENVIRONMENT
```
$ uname -a
Linux dev4 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
```
##### SUMMARY
When installing python modules, some of them (always the same) keep reporting ""changed"".
##### STEPS TO REPRODUCE
Not sure if reproductible but here is my playbook:
```
[...]
- name: Install some python3 modules
pip: name={{item}} state=latest executable=pip3
with_items:
- aiohttp
- asyncio
- flake8
- jinja2
- pep8
- polib
- pyflakes
[...]
```
##### EXPECTED RESULTS
When the python module is already up to date, I'd like to get an ""OK"" instead of a changed.
I'd like something like:
```
{
""changed"": false,
""cmd"": ""/usr/bin/pip3 install -U pyflakes"",
""invocation"": {
""module_args"": {
""chdir"": null,
""editable"": true,
""executable"": ""pip3"",
""extra_args"": null,
""name"": [
""pyflakes""
],
""requirements"": null,
""state"": ""latest"",
""umask"": null,
""use_mirrors"": true,
""version"": null,
""virtualenv"": null,
""virtualenv_command"": ""vritualenv"",
""virtualenv_python"": null,
""virtualenv_site_packages"": false
}
},
""name"": [
""pyflakes""
],
""requirements"": null,
""state"": ""latest"",
""stderr"": """",
""stdout"": ""Requirement already up-to-date: pyflakes in /usr/local/lib/python3.5/dist-packages\n"",
""version"": null,
""virtualenv"": null
}
```
##### ACTUAL RESULTS
Ansible tells it changed every time, I'm getting:
```
changed: [dev4] => (item=pyflakes) =>
{
""changed"": true,
""cmd"": ""/usr/bin/pip3 install -U pyflakes"",
""invocation"": {
""module_args"": {
""chdir"": null,
""editable"": true,
""executable"": ""pip3"",
""extra_args"": null,
""name"": ""pyflakes"",
""requirements"": null,
""state"": ""latest"",
""umask"": null,
""use_mirrors"": true,
""version"": null,
""virtualenv"": null,
""virtualenv_command"": ""virtualenv"",
""virtualenv_python"": null,
""virtualenv_site_packages"": false
},
""module_name"": ""pip""
},
""item"": ""pyflakes"",
""name"": ""pyflakes"",
""requirements"": null,
""state"": ""latest"",
""stderr"": """",
""stdout"": ""Collecting pyflakes\n Using cached pyflakes-1.3.0-py2.py3-none-any.whl\nInstalling collected packages: pyflakes\n Found existing installation: pyflakes 1.2.3\n Uninstalling pyflakes-1.2.3:\n Successfully uninstalled pyflakes-1.2.3\nSuccessfully installed pyflakes-1.3.0\n"",
""stdout_lines"": [
""Collecting pyflakes"",
"" Using cached pyflakes-1.3.0-py2.py3-none-any.whl"",
""Installing collected packages: pyflakes"",
"" Found existing installation: pyflakes 1.2.3"",
"" Uninstalling pyflakes-1.2.3:"",
"" Successfully uninstalled pyflakes-1.2.3"",
""Successfully installed pyflakes-1.3.0""
],
""version"": null,
""virtualenv"": null
}
```
The strange thing is that I'm receiving the 'changed: true' when running from ansible, and I'm receiving the 'changed: false' when running from `test-module` to try to understand myself what is broken:
```
~julien/ansible/hacking/test-module -m /usr/local/lib/python2.7/dist-packages/ansible/modules/core/packaging/language/pip.py -a 'editable=true executable=pip3 name=pyflakes state=latest use_mirrors=true virtualenv_command=vritualenv virtualenv_site_packages=false'
```
So I'm unable to reproduce it, there may be a slight difference between the environment used by ansible and mine, but I can't easily spot it.
For the sake of completness, I searched for the infamous pyflakes-1.2.3, without success:
```
# find /usr/ -name *pyflakes* /usr/local/bin/pyflakes
/usr/local/lib/python2.7/dist-packages/pyflakes-1.3.0.dist-info
/usr/local/lib/python2.7/dist-packages/pyflakes
/usr/local/lib/python2.7/dist-packages/pyflakes/scripts/pyflakes.py
/usr/local/lib/python2.7/dist-packages/pyflakes/scripts/pyflakes.pyc
/usr/local/lib/python3.5/dist-packages/flake8/plugins/pyflakes.py
/usr/local/lib/python3.5/dist-packages/flake8/plugins/__pycache__/pyflakes.cpython-35.pyc
/usr/local/lib/python3.5/dist-packages/pyflakes-1.3.0.dist-info
/usr/local/lib/python3.5/dist-packages/pyflakes
/usr/local/lib/python3.5/dist-packages/pyflakes/scripts/pyflakes.py
/usr/local/lib/python3.5/dist-packages/pyflakes/scripts/__pycache__/pyflakes.cpython-35.pyc
```
So I'm unable to understand where this is failing, where the pyflakes-1.2.3 is found, I'm out of ideas from here. I'd like to use strace but stracing ansible-playbook won't be of any help (it's ssh-ing on the machine, I won't be able to see what the process is remotely doing, only commucation between local and remote) and as I can't reproduce it with test-module I'm stuck.
Obviously:
```
# pyflakes --version
1.3.0
```
Any idea ?
",True,"pip always tell changed for some packages - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pip
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
forks = 100
vault_password_file = ~/.vault.password
[ssh_connection]
pipelining = False
```
##### OS / ENVIRONMENT
```
$ uname -a
Linux dev4 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
```
##### SUMMARY
When installing python modules, some of them (always the same) keep reporting ""changed"".
##### STEPS TO REPRODUCE
Not sure if reproductible but here is my playbook:
```
[...]
- name: Install some python3 modules
pip: name={{item}} state=latest executable=pip3
with_items:
- aiohttp
- asyncio
- flake8
- jinja2
- pep8
- polib
- pyflakes
[...]
```
##### EXPECTED RESULTS
When the python module is already up to date, I'd like to get an ""OK"" instead of a changed.
I'd like something like:
```
{
""changed"": false,
""cmd"": ""/usr/bin/pip3 install -U pyflakes"",
""invocation"": {
""module_args"": {
""chdir"": null,
""editable"": true,
""executable"": ""pip3"",
""extra_args"": null,
""name"": [
""pyflakes""
],
""requirements"": null,
""state"": ""latest"",
""umask"": null,
""use_mirrors"": true,
""version"": null,
""virtualenv"": null,
""virtualenv_command"": ""vritualenv"",
""virtualenv_python"": null,
""virtualenv_site_packages"": false
}
},
""name"": [
""pyflakes""
],
""requirements"": null,
""state"": ""latest"",
""stderr"": """",
""stdout"": ""Requirement already up-to-date: pyflakes in /usr/local/lib/python3.5/dist-packages\n"",
""version"": null,
""virtualenv"": null
}
```
##### ACTUAL RESULTS
Ansible tells it changed every time, I'm getting:
```
changed: [dev4] => (item=pyflakes) =>
{
""changed"": true,
""cmd"": ""/usr/bin/pip3 install -U pyflakes"",
""invocation"": {
""module_args"": {
""chdir"": null,
""editable"": true,
""executable"": ""pip3"",
""extra_args"": null,
""name"": ""pyflakes"",
""requirements"": null,
""state"": ""latest"",
""umask"": null,
""use_mirrors"": true,
""version"": null,
""virtualenv"": null,
""virtualenv_command"": ""virtualenv"",
""virtualenv_python"": null,
""virtualenv_site_packages"": false
},
""module_name"": ""pip""
},
""item"": ""pyflakes"",
""name"": ""pyflakes"",
""requirements"": null,
""state"": ""latest"",
""stderr"": """",
""stdout"": ""Collecting pyflakes\n Using cached pyflakes-1.3.0-py2.py3-none-any.whl\nInstalling collected packages: pyflakes\n Found existing installation: pyflakes 1.2.3\n Uninstalling pyflakes-1.2.3:\n Successfully uninstalled pyflakes-1.2.3\nSuccessfully installed pyflakes-1.3.0\n"",
""stdout_lines"": [
""Collecting pyflakes"",
"" Using cached pyflakes-1.3.0-py2.py3-none-any.whl"",
""Installing collected packages: pyflakes"",
"" Found existing installation: pyflakes 1.2.3"",
"" Uninstalling pyflakes-1.2.3:"",
"" Successfully uninstalled pyflakes-1.2.3"",
""Successfully installed pyflakes-1.3.0""
],
""version"": null,
""virtualenv"": null
}
```
The strange thing is that I'm receiving the 'changed: true' when running from ansible, and I'm receiving the 'changed: false' when running from `test-module` to try to understand myself what is broken:
```
~julien/ansible/hacking/test-module -m /usr/local/lib/python2.7/dist-packages/ansible/modules/core/packaging/language/pip.py -a 'editable=true executable=pip3 name=pyflakes state=latest use_mirrors=true virtualenv_command=vritualenv virtualenv_site_packages=false'
```
So I'm unable to reproduce it, there may be a slight difference between the environment used by ansible and mine, but I can't easily spot it.
For the sake of completness, I searched for the infamous pyflakes-1.2.3, without success:
```
# find /usr/ -name *pyflakes* /usr/local/bin/pyflakes
/usr/local/lib/python2.7/dist-packages/pyflakes-1.3.0.dist-info
/usr/local/lib/python2.7/dist-packages/pyflakes
/usr/local/lib/python2.7/dist-packages/pyflakes/scripts/pyflakes.py
/usr/local/lib/python2.7/dist-packages/pyflakes/scripts/pyflakes.pyc
/usr/local/lib/python3.5/dist-packages/flake8/plugins/pyflakes.py
/usr/local/lib/python3.5/dist-packages/flake8/plugins/__pycache__/pyflakes.cpython-35.pyc
/usr/local/lib/python3.5/dist-packages/pyflakes-1.3.0.dist-info
/usr/local/lib/python3.5/dist-packages/pyflakes
/usr/local/lib/python3.5/dist-packages/pyflakes/scripts/pyflakes.py
/usr/local/lib/python3.5/dist-packages/pyflakes/scripts/__pycache__/pyflakes.cpython-35.pyc
```
So I'm unable to understand where this is failing, where the pyflakes-1.2.3 is found, I'm out of ideas from here. I'd like to use strace but stracing ansible-playbook won't be of any help (it's ssh-ing on the machine, I won't be able to see what the process is remotely doing, only commucation between local and remote) and as I can't reproduce it with test-module I'm stuck.
Obviously:
```
# pyflakes --version
1.3.0
```
Any idea ?
",1,pip always tell changed for some packages issue type bug report component name pip ansible version ansible config file configured module search path default w o overrides configuration forks vault password file vault password pipelining false os environment uname a linux smp debian gnu linux lsb release a no lsb modules are available distributor id debian description debian gnu linux jessie release codename jessie summary when installing python modules some of them always the same keep reporting changed steps to reproduce not sure if reproductible but here is my playbook name install some modules pip name item state latest executable with items aiohttp asyncio polib pyflakes expected results when the python module is already up to date i d like to get an ok instead of a changed i d like something like changed false cmd usr bin install u pyflakes invocation module args chdir null editable true executable extra args null name pyflakes requirements null state latest umask null use mirrors true version null virtualenv null virtualenv command vritualenv virtualenv python null virtualenv site packages false name pyflakes requirements null state latest stderr stdout requirement already up to date pyflakes in usr local lib dist packages n version null virtualenv null actual results ansible tells it changed every time i m getting changed item pyflakes changed true cmd usr bin install u pyflakes invocation module args chdir null editable true executable extra args null name pyflakes requirements null state latest umask null use mirrors true version null virtualenv null virtualenv command virtualenv virtualenv python null virtualenv site packages false module name pip item pyflakes name pyflakes requirements null state latest stderr stdout collecting pyflakes n using cached pyflakes none any whl ninstalling collected packages pyflakes n found existing installation pyflakes n uninstalling pyflakes n successfully uninstalled pyflakes nsuccessfully installed pyflakes n stdout lines collecting pyflakes using cached pyflakes none any whl installing collected packages pyflakes found existing installation pyflakes uninstalling pyflakes successfully uninstalled pyflakes successfully installed pyflakes version null virtualenv null the strange thing is that i m receiving the changed true when running from ansible and i m receiving the changed false when running from test module to try to understand myself what is broken julien ansible hacking test module m usr local lib dist packages ansible modules core packaging language pip py a editable true executable name pyflakes state latest use mirrors true virtualenv command vritualenv virtualenv site packages false so i m unable to reproduce it there may be a slight difference between the environment used by ansible and mine but i can t easily spot it for the sake of completness i searched for the infamous pyflakes without success find usr name pyflakes usr local bin pyflakes usr local lib dist packages pyflakes dist info usr local lib dist packages pyflakes usr local lib dist packages pyflakes scripts pyflakes py usr local lib dist packages pyflakes scripts pyflakes pyc usr local lib dist packages plugins pyflakes py usr local lib dist packages plugins pycache pyflakes cpython pyc usr local lib dist packages pyflakes dist info usr local lib dist packages pyflakes usr local lib dist packages pyflakes scripts pyflakes py usr local lib dist packages pyflakes scripts pycache pyflakes cpython pyc so i m unable to understand where this is failing where the pyflakes is found i m out of ideas from here i d like to use strace but stracing ansible playbook won t be of any help it s ssh ing on the machine i won t be able to see what the process is remotely doing only commucation between local and remote and as i can t reproduce it with test module i m stuck obviously pyflakes version any idea ,1
4072,19213006431.0,IssuesEvent,2021-12-07 05:40:33,adda-team/adda,https://api.github.com/repos/adda-team/adda,opened,Remove deprecated specification of beam center as argument to -beam,comp-UI maintainability,Such specification has been marked as deprecated by #304 (and #285). At some time it should be removed completely.,True,Remove deprecated specification of beam center as argument to -beam - Such specification has been marked as deprecated by #304 (and #285). At some time it should be removed completely.,1,remove deprecated specification of beam center as argument to beam such specification has been marked as deprecated by and at some time it should be removed completely ,1
379344,26367316537.0,IssuesEvent,2023-01-11 17:34:21,Nucleo-Estudantes-Informatica-ISEP/antirecurso,https://api.github.com/repos/Nucleo-Estudantes-Informatica-ISEP/antirecurso,opened,A comprehensive database scheme,documentation,"In this issue we want:
- a comprehensive database scheme in a png for newcomers to get used to the database and project requests (bonus points if the scheme is editable and online)
",1.0,"A comprehensive database scheme - In this issue we want:
- a comprehensive database scheme in a png for newcomers to get used to the database and project requests (bonus points if the scheme is editable and online)
",0,a comprehensive database scheme in this issue we want a comprehensive database scheme in a png for newcomers to get used to the database and project requests bonus points if the scheme is editable and online ,0
238366,18239490488.0,IssuesEvent,2021-10-01 11:07:25,obophenotype/uberon,https://api.github.com/repos/obophenotype/uberon,closed,Broken Links on UBERON Page,documentation issue,"I am using Ontobee occasionally, and I noticed that the link below is broken; it appears that all/most links on the page result in 404 error. Could someone look into this? Thanks and Happy New Year, Sam Smith – Michigan (retired volunteer with Dr. Oliver He’s lab at U of M)
http://uberon.github.io/browse/ontobee.html
",1.0,"Broken Links on UBERON Page - I am using Ontobee occasionally, and I noticed that the link below is broken; it appears that all/most links on the page result in 404 error. Could someone look into this? Thanks and Happy New Year, Sam Smith – Michigan (retired volunteer with Dr. Oliver He’s lab at U of M)
http://uberon.github.io/browse/ontobee.html
",0,broken links on uberon page i am using ontobee occasionally and i noticed that the link below is broken it appears that all most links on the page result in error could someone look into this thanks and happy new year sam smith – michigan retired volunteer with dr oliver he’s lab at u of m ,0
151576,5824498281.0,IssuesEvent,2017-05-07 13:37:19,javaee/mvc-spec,https://api.github.com/repos/javaee/mvc-spec,closed,Explore ways to avoid hardcoding URIs in templates,Priority: Major Type: Task,"In templates, links (most importantly a elements) and form actions require a URI. If this URI is provided as a string, even if prefixed by a getBaseUri() (or similar) call, this will be redundant to the declarative mapping to URIs on controller methods. JAX-RS provides the UrIBuilder API to address this from within resources, but there should at least be a convenient way to access this from templates, possibly from the MvcContext object.",1.0,"Explore ways to avoid hardcoding URIs in templates - In templates, links (most importantly a elements) and form actions require a URI. If this URI is provided as a string, even if prefixed by a getBaseUri() (or similar) call, this will be redundant to the declarative mapping to URIs on controller methods. JAX-RS provides the UrIBuilder API to address this from within resources, but there should at least be a convenient way to access this from templates, possibly from the MvcContext object.",0,explore ways to avoid hardcoding uris in templates in templates links most importantly a elements and form actions require a uri if this uri is provided as a string even if prefixed by a getbaseuri or similar call this will be redundant to the declarative mapping to uris on controller methods jax rs provides the uribuilder api to address this from within resources but there should at least be a convenient way to access this from templates possibly from the mvccontext object ,0
530977,15438867704.0,IssuesEvent,2021-03-07 21:59:26,iv-org/invidious,https://api.github.com/repos/iv-org/invidious,closed,"[Bug] ""Failed to resolve dependencies"" when updating",bug priority:high type:server-side,"
**Describe the bug**
""Failed to resolve dependencies"" is displayed when updating, even though Crystal 0.36 is indeed installed.
**Steps to Reproduce**
Update to latest master
**Logs**
```
Unable to satisfy the following requirements:
- `crystal (>= 0.35.0, < 2.0.0)` required by `pg 0.23.1`
- `crystal (>= 0.35.0, < 2.0.0)` required by `sqlite3 0.18.0`
- `crystal (~> 0.35, >= 0.35.0)` required by `kemal 0.27.0`
- `crystal (< 1.0.0)` required by `pool 0.2.3`
- `crystal (~> 0.34, >= 0.34.0)` required by `protodec 0.1.3`
- `crystal (~> 0.36, >= 0.36.1)` required by `lsquic 2.23.1`
- `crystal (~> 0.35, >= 0.35.0)` required by `db 0.10.0`
- `crystal (< 1.0.0)` required by `radix 0.3.9`
Failed to resolve dependencies, try updating incompatible shards or use --ignore-crystal-version as a workaround if no update is available.
```
**Screenshots**
N/A
**Additional context**
Crystal 0.36 is installed:
```
# crystal --version
Crystal 0.36.0 (2021-01-26)
```
Any idea @saltycrys ?
",1.0,"[Bug] ""Failed to resolve dependencies"" when updating -
**Describe the bug**
""Failed to resolve dependencies"" is displayed when updating, even though Crystal 0.36 is indeed installed.
**Steps to Reproduce**
Update to latest master
**Logs**
```
Unable to satisfy the following requirements:
- `crystal (>= 0.35.0, < 2.0.0)` required by `pg 0.23.1`
- `crystal (>= 0.35.0, < 2.0.0)` required by `sqlite3 0.18.0`
- `crystal (~> 0.35, >= 0.35.0)` required by `kemal 0.27.0`
- `crystal (< 1.0.0)` required by `pool 0.2.3`
- `crystal (~> 0.34, >= 0.34.0)` required by `protodec 0.1.3`
- `crystal (~> 0.36, >= 0.36.1)` required by `lsquic 2.23.1`
- `crystal (~> 0.35, >= 0.35.0)` required by `db 0.10.0`
- `crystal (< 1.0.0)` required by `radix 0.3.9`
Failed to resolve dependencies, try updating incompatible shards or use --ignore-crystal-version as a workaround if no update is available.
```
**Screenshots**
N/A
**Additional context**
Crystal 0.36 is installed:
```
# crystal --version
Crystal 0.36.0 (2021-01-26)
```
Any idea @saltycrys ?
",0, failed to resolve dependencies when updating describe the bug failed to resolve dependencies is displayed when updating even though crystal is indeed installed steps to reproduce steps to reproduce the behavior go to click on scroll down to see error update to latest master logs unable to satisfy the following requirements crystal required by pg crystal required by crystal required by kemal crystal required by pool crystal required by protodec crystal required by lsquic crystal required by db crystal required by radix failed to resolve dependencies try updating incompatible shards or use ignore crystal version as a workaround if no update is available screenshots n a additional context add any other context about the problem here browser if applicable os if applicable crystal is installed crystal version crystal any idea saltycrys ,0
3968,18161104905.0,IssuesEvent,2021-09-27 09:42:26,pypa/get-pip,https://api.github.com/repos/pypa/get-pip,closed,Version 21.2.4,maintainance,"Just wondering, when will there be a new get-pip that installs version 21.2.4. I'm asking because me and someone else are making a Docker image with the latest versions of python and pip compiled from source, which is actually slimmer than the official image.",True,"Version 21.2.4 - Just wondering, when will there be a new get-pip that installs version 21.2.4. I'm asking because me and someone else are making a Docker image with the latest versions of python and pip compiled from source, which is actually slimmer than the official image.",1,version just wondering when will there be a new get pip that installs version i m asking because me and someone else are making a docker image with the latest versions of python and pip compiled from source which is actually slimmer than the official image ,1
265332,23160823087.0,IssuesEvent,2022-07-29 17:32:05,modin-project/modin,https://api.github.com/repos/modin-project/modin,opened,"TEST: windows ray CI: flaky segmentation fault and ""Windows fatal exception: access violation""",CI Flaky Test,"Here's an instance from `modin/pandas/test/dataframe/test_join_sort.py`: https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true
Stack trace
```
============================= test session starts =============================
platform win32 -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=[10](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:11) warmup=False warmup_iterations=100000)
rootdir: D:\a\modin\modin, configfile: setup.cfg
plugins: Faker-13.15.1, benchmark-3.4.1, cov-2.[11](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:12).0, forked-1.4.0, xdist-2.5.0
collected [12](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:13)998 items
Windows fatal exception: access violation
Thread 0x000019b8 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 1258 in channel_spin
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000011a0 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 1258 in channel_spin
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000007cc (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 1258 in channel_spin
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x00001270 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 306 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 106 in _wait_once
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 148 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 733 in result
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 249 in _poll_locked
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 351 in poll
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line 475 in print_logs
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000001e8 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 306 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 106 in _wait_once
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 148 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 733 in result
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 249 in _poll_locked
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 317 in poll
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line [13](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:14)89 in listen_error_messages
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000005b4 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 306 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 106 in _wait_once
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line [14](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:15)8 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 733 in result
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 249 in _poll_locked
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 385 in poll
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\import_thread.py"", line 70 in _run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x00001b3c (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line 364 in get_objects
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line 1825 in get
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\client_mode_hook.py"", line 105 in wrapper
File ""D:\a\modin\modin\modin\core\execution\ray\implementations\pandas_on_ray\partitioning\partition_manager.py"", line 110 in get_objects_from_partitions
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\partitioning\partition_manager.py"", line 866 in get_indices
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 429 in _compute_axis_labels
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 2311 in
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 2310 in broadcast_apply_full_axis
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 1[15](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:16) in run_f_on_minimally_updated_metadata
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 1876 in apply_full_axis
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 115 in run_f_on_minimally_updated_metadata
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\storage_formats\pandas\query_compiler.py"", line 505 in join
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\pandas\dataframe.py"", line 1275 in join
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\pandas\test\dataframe\test_join_sort.py"", line 111 in test_join
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py"", line 192 in pytest_pyfunc_call
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py"", line 1761 in runtest
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line [16](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:17)6 in pytest_runtest_call
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 259 in
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 338 in from_call
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 258 in call_runtest_hook
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 219 in call_and_report
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 130 in runtestprotocol
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 111 in pytest_runtest_protocol
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 347 in pytest_runtestloop
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 322 in _main
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 268 in wrap_session
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 315 in pytest_cmdline_main
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py"", line 164 in main
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py"", line [18](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:19)7 in console_main
File ""C:\Miniconda3\envs\modin\lib\site-packages\pytest\__main__.py"", line 5 in
File ""C:\Miniconda3\envs\modin\lib\runpy.py"", line 87 in _run_code
File ""C:\Miniconda3\envs\modin\lib\runpy.py"", line [19](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:20)4 in _run_module_as_main
D:\a\_temp\16c1e1a0-adaf-4bff-9b1c-40fb0dbccb[25](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:26).sh: line 1: 10[32](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:33) Segmentation fault python -m pytest modin/pandas/test/dataframe/test_join_sort.py
modin\pandas\test\dataframe\test_join_sort.py ..
Error: Process completed with exit code 1[39](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:40).
```
",1.0,"TEST: windows ray CI: flaky segmentation fault and ""Windows fatal exception: access violation"" - Here's an instance from `modin/pandas/test/dataframe/test_join_sort.py`: https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true
Stack trace
```
============================= test session starts =============================
platform win32 -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=[10](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:11) warmup=False warmup_iterations=100000)
rootdir: D:\a\modin\modin, configfile: setup.cfg
plugins: Faker-13.15.1, benchmark-3.4.1, cov-2.[11](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:12).0, forked-1.4.0, xdist-2.5.0
collected [12](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:13)998 items
Windows fatal exception: access violation
Thread 0x000019b8 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 1258 in channel_spin
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000011a0 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 1258 in channel_spin
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000007cc (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 1258 in channel_spin
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x00001270 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 306 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 106 in _wait_once
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 148 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 733 in result
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 249 in _poll_locked
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 351 in poll
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line 475 in print_logs
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000001e8 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 306 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 106 in _wait_once
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 148 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 733 in result
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 249 in _poll_locked
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 317 in poll
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line [13](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:14)89 in listen_error_messages
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x000005b4 (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 306 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line 106 in _wait_once
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py"", line [14](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:15)8 in wait
File ""C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py"", line 733 in result
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 249 in _poll_locked
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py"", line 385 in poll
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\import_thread.py"", line 70 in _run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 870 in run
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 932 in _bootstrap_inner
File ""C:\Miniconda3\envs\modin\lib\threading.py"", line 890 in _bootstrap
Thread 0x00001b3c (most recent call first):
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line 364 in get_objects
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py"", line 1825 in get
File ""C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\client_mode_hook.py"", line 105 in wrapper
File ""D:\a\modin\modin\modin\core\execution\ray\implementations\pandas_on_ray\partitioning\partition_manager.py"", line 110 in get_objects_from_partitions
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\partitioning\partition_manager.py"", line 866 in get_indices
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 429 in _compute_axis_labels
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 2311 in
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 2310 in broadcast_apply_full_axis
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 1[15](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:16) in run_f_on_minimally_updated_metadata
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 1876 in apply_full_axis
File ""D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py"", line 115 in run_f_on_minimally_updated_metadata
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\core\storage_formats\pandas\query_compiler.py"", line 505 in join
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\pandas\dataframe.py"", line 1275 in join
File ""D:\a\modin\modin\modin\logging\logger_decorator.py"", line 128 in run_and_log
File ""D:\a\modin\modin\modin\pandas\test\dataframe\test_join_sort.py"", line 111 in test_join
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py"", line 192 in pytest_pyfunc_call
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py"", line 1761 in runtest
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line [16](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:17)6 in pytest_runtest_call
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 259 in
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 338 in from_call
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 258 in call_runtest_hook
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 219 in call_and_report
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 130 in runtestprotocol
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py"", line 111 in pytest_runtest_protocol
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 347 in pytest_runtestloop
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 322 in _main
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 268 in wrap_session
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py"", line 315 in pytest_cmdline_main
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py"", line 39 in _multicall
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py"", line 80 in _hookexec
File ""C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py"", line 265 in __call__
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py"", line 164 in main
File ""C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py"", line [18](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:19)7 in console_main
File ""C:\Miniconda3\envs\modin\lib\site-packages\pytest\__main__.py"", line 5 in
File ""C:\Miniconda3\envs\modin\lib\runpy.py"", line 87 in _run_code
File ""C:\Miniconda3\envs\modin\lib\runpy.py"", line [19](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:20)4 in _run_module_as_main
D:\a\_temp\16c1e1a0-adaf-4bff-9b1c-40fb0dbccb[25](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:26).sh: line 1: 10[32](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:33) Segmentation fault python -m pytest modin/pandas/test/dataframe/test_join_sort.py
modin\pandas\test\dataframe\test_join_sort.py ..
Error: Process completed with exit code 1[39](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:40).
```
",0,test windows ray ci flaky segmentation fault and windows fatal exception access violation here s an instance from modin pandas test dataframe test join sort py stack trace test session starts platform python pytest pluggy benchmark defaults timer time perf counter disable gc false min rounds min time max time calibration precision warmup false warmup iterations rootdir d a modin modin configfile setup cfg plugins faker benchmark cov forked xdist collected items windows fatal exception access violation thread most recent call first file c envs modin lib site packages grpc channel py line in channel spin file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib site packages grpc channel py line in channel spin file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib site packages grpc channel py line in channel spin file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib threading py line in wait file c envs modin lib site packages grpc common py line in wait once file c envs modin lib site packages grpc common py line in wait file c envs modin lib site packages grpc channel py line in result file c envs modin lib site packages ray private gcs pubsub py line in poll locked file c envs modin lib site packages ray private gcs pubsub py line in poll file c envs modin lib site packages ray worker py line in print logs file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib threading py line in wait file c envs modin lib site packages grpc common py line in wait once file c envs modin lib site packages grpc common py line in wait file c envs modin lib site packages grpc channel py line in result file c envs modin lib site packages ray private gcs pubsub py line in poll locked file c envs modin lib site packages ray private gcs pubsub py line in poll file c envs modin lib site packages ray worker py line in listen error messages file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib threading py line in wait file c envs modin lib site packages grpc common py line in wait once file c envs modin lib site packages grpc common py line in wait file c envs modin lib site packages grpc channel py line in result file c envs modin lib site packages ray private gcs pubsub py line in poll locked file c envs modin lib site packages ray private gcs pubsub py line in poll file c envs modin lib site packages ray private import thread py line in run file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib site packages ray worker py line in get objects file c envs modin lib site packages ray worker py line in get file c envs modin lib site packages ray private client mode hook py line in wrapper file d a modin modin modin core execution ray implementations pandas on ray partitioning partition manager py line in get objects from partitions file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas partitioning partition manager py line in get indices file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas dataframe dataframe py line in compute axis labels file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas dataframe dataframe py line in file d a modin modin modin core dataframe pandas dataframe dataframe py line in broadcast apply full axis file d a modin modin modin core dataframe pandas dataframe dataframe py line in run f on minimally updated metadata file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas dataframe dataframe py line in apply full axis file d a modin modin modin core dataframe pandas dataframe dataframe py line in run f on minimally updated metadata file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core storage formats pandas query compiler py line in join file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin pandas dataframe py line in join file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin pandas test dataframe test join sort py line in test join file c envs modin lib site packages pytest python py line in pytest pyfunc call file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest python py line in runtest file c envs modin lib site packages pytest runner py line in pytest runtest call file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest runner py line in file c envs modin lib site packages pytest runner py line in from call file c envs modin lib site packages pytest runner py line in call runtest hook file c envs modin lib site packages pytest runner py line in call and report file c envs modin lib site packages pytest runner py line in runtestprotocol file c envs modin lib site packages pytest runner py line in pytest runtest protocol file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest main py line in pytest runtestloop file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest main py line in main file c envs modin lib site packages pytest main py line in wrap session file c envs modin lib site packages pytest main py line in pytest cmdline main file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest config init py line in main file c envs modin lib site packages pytest config init py line in console main file c envs modin lib site packages pytest main py line in file c envs modin lib runpy py line in run code file c envs modin lib runpy py line in run module as main d a temp adaf line segmentation fault python m pytest modin pandas test dataframe test join sort py modin pandas test dataframe test join sort py error process completed with exit code ,0
31620,11957482386.0,IssuesEvent,2020-04-04 14:32:43,dropwizard/dropwizard,https://api.github.com/repos/dropwizard/dropwizard,closed,update snakeyaml to 1.26+ to address security vulnerability CVE-2017-18640,security,"DESCRIPTION FROM CVE
The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
EXPLANATION
The snakeyaml package is vulnerable to YAML Entity Expansion. The load method in Yaml.class allows for entities to reference other entities. An attacker could potentially exploit this behavior by providing a YAML document with many entities that reference each other, which could take a large amount of memory to process, potentially resulting in a Denial of Service (DoS) situation.
DETECTION
The application is vulnerable by using this component with untrusted user input when the maxAliasesForCollections is set too high or settings.setAllowRecursiveKeys is set to false.
RECOMMENDATION
We recommend upgrading to a version of this component that is not vulnerable to this specific issue.
Note: If this component is included as a bundled/transitive dependency of another component, there may not be an upgrade path. In this instance, we recommend contacting the maintainers who included the vulnerable package. Alternatively, we recommend investigating alternative components or a potential mitigating control.
ROOT CAUSE
snakeyaml-1.24-android.jarorg/yaml/snakeyaml/constructor/BaseConstructor.class( , 1.26)
ADVISORIES
Project:https://bitbucket.org/asomov/snakeyaml/issues/377/allow-configuration-for-preventing-billion
CVSS DETAILS
CVE CVSS 3:7.5
CVSS Vector:CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",True,"update snakeyaml to 1.26+ to address security vulnerability CVE-2017-18640 - DESCRIPTION FROM CVE
The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
EXPLANATION
The snakeyaml package is vulnerable to YAML Entity Expansion. The load method in Yaml.class allows for entities to reference other entities. An attacker could potentially exploit this behavior by providing a YAML document with many entities that reference each other, which could take a large amount of memory to process, potentially resulting in a Denial of Service (DoS) situation.
DETECTION
The application is vulnerable by using this component with untrusted user input when the maxAliasesForCollections is set too high or settings.setAllowRecursiveKeys is set to false.
RECOMMENDATION
We recommend upgrading to a version of this component that is not vulnerable to this specific issue.
Note: If this component is included as a bundled/transitive dependency of another component, there may not be an upgrade path. In this instance, we recommend contacting the maintainers who included the vulnerable package. Alternatively, we recommend investigating alternative components or a potential mitigating control.
ROOT CAUSE
snakeyaml-1.24-android.jarorg/yaml/snakeyaml/constructor/BaseConstructor.class( , 1.26)
ADVISORIES
Project:https://bitbucket.org/asomov/snakeyaml/issues/377/allow-configuration-for-preventing-billion
CVSS DETAILS
CVE CVSS 3:7.5
CVSS Vector:CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",0,update snakeyaml to to address security vulnerability cve description from cve the alias feature in snakeyaml allows entity expansion during a load operation a related issue to cve explanation the snakeyaml package is vulnerable to yaml entity expansion the load method in yaml class allows for entities to reference other entities an attacker could potentially exploit this behavior by providing a yaml document with many entities that reference each other which could take a large amount of memory to process potentially resulting in a denial of service dos situation detection the application is vulnerable by using this component with untrusted user input when the maxaliasesforcollections is set too high or settings setallowrecursivekeys is set to false recommendation we recommend upgrading to a version of this component that is not vulnerable to this specific issue note if this component is included as a bundled transitive dependency of another component there may not be an upgrade path in this instance we recommend contacting the maintainers who included the vulnerable package alternatively we recommend investigating alternative components or a potential mitigating control root cause snakeyaml android jarorg yaml snakeyaml constructor baseconstructor class advisories project cvss details cve cvss cvss vector cvss av n ac l pr n ui n s u c n i n a h,0
5688,29927342530.0,IssuesEvent,2023-06-22 06:56:21,onebeyond/maintainers,https://api.github.com/repos/onebeyond/maintainers,closed,OpenSSF Scorecard implementation,maintainers-agenda,"### Intro
I reviewed the scores for some key projects ([rascal](https://deps.dev/npm/rascal/16.2.0), [Systemic](https://deps.dev/npm/systemic), [handy-postgres](https://deps.dev/npm/handy-postgres), etc.) and I have identified some clear initiatives or strategies that we can follow to improve the results. Our average score is around 5-5.5 out of 10.
### Opportunities in repo settings
**Code Review**
> Determines if the project requires code review before pull requests
**Branch protection**
> Determines if the default and release branches are protected with GitHub's branch protection settings.
> Info: 'force pushes' disabled on branch 'master'
> Info: 'allow deletion' disabled on branch 'master'
> Warn: no status checks found to merge onto branch 'master'
> Warn: number of required reviewers is only 1 on branch 'master'
> Warn: codeowner review is not required on branch 'master'
### Oportunities in pipelines
**Tokens-permissions**
> Determines if the project's workflows follow the principle of least privilege.
> non read-only tokens detected in GitHub workflows
**Pinned-Dependencies**
> Determines if the project has declared and pinned the dependencies of its build process.
**fuzzing**
> Determines if the project uses fuzzing.
### Other
**License**
Most projects now have a valid license that will patch the scorecard in the next deployments, but I noticed that we have some dependencies with unknown licenses.
**Security-policy**
> Determines if the project has published a security policy.
**SAST**
> Determines if the project uses static code analysis.
We have the possibility to use [CodeQL](https://codeql.github.com/) for free
**Dependency-Update-Tool**
> Determines if the project uses a dependency update tool.
We can set up Dependabot properly to avoid annoying auto-pull requests, but prompt us about relevant security releases only.
**CII-Best-Practices**
> Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.
### Relevant Documentation:
- [Official Documentation](https://securityscorecards.dev/)
- [You should use the OpenSSF Scorecard](https://dev.to/ulisesgascon/you-should-use-the-openssf-scorecard-4eh4)
## Actionable items
- Add a global security Policy in the organization metadata repository
- Add code review mandatory in PRs at global org and/or in each repo. []()
- Add [branch protection rules](https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection) at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/getting-started/securing-your-organization)
- Add secret scanning at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/secret-scanning/about-secret-scanning)
- Add code scanning with CodeQL at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning)
- Add dependabot with a good non-intrusive settings at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/dependabot/dependabot-alerts/about-dependabot-alerts)
- Create a pipeline to ensure that the projects are following best practices in terms of dependencies (avoid unkwnon, etc..) by using [license-checker](https://github.com/onebeyond/license-checker) in each repository
- Update each repository pipelines to use [pinned versions](https://github.com/ossf/scorecard/blob/main/docs/checks.md#pinned-dependencies) and [read-only tokens](https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions).
- Pin version in pkg for each project to ensure inmutability.
- Add OpenSSF (formerly CII) Best Practices Badge to each repo
",True,"OpenSSF Scorecard implementation - ### Intro
I reviewed the scores for some key projects ([rascal](https://deps.dev/npm/rascal/16.2.0), [Systemic](https://deps.dev/npm/systemic), [handy-postgres](https://deps.dev/npm/handy-postgres), etc.) and I have identified some clear initiatives or strategies that we can follow to improve the results. Our average score is around 5-5.5 out of 10.
### Opportunities in repo settings
**Code Review**
> Determines if the project requires code review before pull requests
**Branch protection**
> Determines if the default and release branches are protected with GitHub's branch protection settings.
> Info: 'force pushes' disabled on branch 'master'
> Info: 'allow deletion' disabled on branch 'master'
> Warn: no status checks found to merge onto branch 'master'
> Warn: number of required reviewers is only 1 on branch 'master'
> Warn: codeowner review is not required on branch 'master'
### Oportunities in pipelines
**Tokens-permissions**
> Determines if the project's workflows follow the principle of least privilege.
> non read-only tokens detected in GitHub workflows
**Pinned-Dependencies**
> Determines if the project has declared and pinned the dependencies of its build process.
**fuzzing**
> Determines if the project uses fuzzing.
### Other
**License**
Most projects now have a valid license that will patch the scorecard in the next deployments, but I noticed that we have some dependencies with unknown licenses.
**Security-policy**
> Determines if the project has published a security policy.
**SAST**
> Determines if the project uses static code analysis.
We have the possibility to use [CodeQL](https://codeql.github.com/) for free
**Dependency-Update-Tool**
> Determines if the project uses a dependency update tool.
We can set up Dependabot properly to avoid annoying auto-pull requests, but prompt us about relevant security releases only.
**CII-Best-Practices**
> Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.
### Relevant Documentation:
- [Official Documentation](https://securityscorecards.dev/)
- [You should use the OpenSSF Scorecard](https://dev.to/ulisesgascon/you-should-use-the-openssf-scorecard-4eh4)
## Actionable items
- Add a global security Policy in the organization metadata repository
- Add code review mandatory in PRs at global org and/or in each repo. []()
- Add [branch protection rules](https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection) at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/getting-started/securing-your-organization)
- Add secret scanning at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/secret-scanning/about-secret-scanning)
- Add code scanning with CodeQL at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/about-code-scanning)
- Add dependabot with a good non-intrusive settings at global org and/or in each repo. [Related documentation](https://docs.github.com/en/code-security/dependabot/dependabot-alerts/about-dependabot-alerts)
- Create a pipeline to ensure that the projects are following best practices in terms of dependencies (avoid unkwnon, etc..) by using [license-checker](https://github.com/onebeyond/license-checker) in each repository
- Update each repository pipelines to use [pinned versions](https://github.com/ossf/scorecard/blob/main/docs/checks.md#pinned-dependencies) and [read-only tokens](https://github.com/ossf/scorecard/blob/main/docs/checks.md#token-permissions).
- Pin version in pkg for each project to ensure inmutability.
- Add OpenSSF (formerly CII) Best Practices Badge to each repo
",1,openssf scorecard implementation intro i reviewed the scores for some key projects etc and i have identified some clear initiatives or strategies that we can follow to improve the results our average score is around out of opportunities in repo settings code review determines if the project requires code review before pull requests branch protection determines if the default and release branches are protected with github s branch protection settings info force pushes disabled on branch master info allow deletion disabled on branch master warn no status checks found to merge onto branch master warn number of required reviewers is only on branch master warn codeowner review is not required on branch master oportunities in pipelines tokens permissions determines if the project s workflows follow the principle of least privilege non read only tokens detected in github workflows pinned dependencies determines if the project has declared and pinned the dependencies of its build process fuzzing determines if the project uses fuzzing other license most projects now have a valid license that will patch the scorecard in the next deployments but i noticed that we have some dependencies with unknown licenses security policy determines if the project has published a security policy sast determines if the project uses static code analysis we have the possibility to use for free dependency update tool determines if the project uses a dependency update tool we can set up dependabot properly to avoid annoying auto pull requests but prompt us about relevant security releases only cii best practices determines if the project has an openssf formerly cii best practices badge relevant documentation actionable items add a global security policy in the organization metadata repository add code review mandatory in prs at global org and or in each repo add at global org and or in each repo add secret scanning at global org and or in each repo add code scanning with codeql at global org and or in each repo add dependabot with a good non intrusive settings at global org and or in each repo create a pipeline to ensure that the projects are following best practices in terms of dependencies avoid unkwnon etc by using in each repository update each repository pipelines to use and pin version in pkg for each project to ensure inmutability add openssf formerly cii best practices badge to each repo ,1
54266,3062156051.0,IssuesEvent,2015-08-16 09:25:41,valnet/valuenetwork,https://api.github.com/repos/valnet/valuenetwork,opened,Map agent locations,enhancement priority,"Now that we got maps working again, let's put agents on the map.
We've created a bit of a mess here because we got Locations that go on the map, but they are connected only to resources. Agents have an address field, that is not mapped. And they also have a primary_location field that is a Location, and would go on the map, but it's not on AgentCreateForm.
So we could either put the address field on the map (which would require geocoding when entered), or put the primary_location on the AgentCreateForm.
Problem with primary_location is that it's an extra set of steps (to create a location, and then add it to the agent).
Maybe better? when they enter the address, we could geocode it, create a Location, and make that the primary_location.
Might want to rethink Locations altogether...",1.0,"Map agent locations - Now that we got maps working again, let's put agents on the map.
We've created a bit of a mess here because we got Locations that go on the map, but they are connected only to resources. Agents have an address field, that is not mapped. And they also have a primary_location field that is a Location, and would go on the map, but it's not on AgentCreateForm.
So we could either put the address field on the map (which would require geocoding when entered), or put the primary_location on the AgentCreateForm.
Problem with primary_location is that it's an extra set of steps (to create a location, and then add it to the agent).
Maybe better? when they enter the address, we could geocode it, create a Location, and make that the primary_location.
Might want to rethink Locations altogether...",0,map agent locations now that we got maps working again let s put agents on the map we ve created a bit of a mess here because we got locations that go on the map but they are connected only to resources agents have an address field that is not mapped and they also have a primary location field that is a location and would go on the map but it s not on agentcreateform so we could either put the address field on the map which would require geocoding when entered or put the primary location on the agentcreateform problem with primary location is that it s an extra set of steps to create a location and then add it to the agent maybe better when they enter the address we could geocode it create a location and make that the primary location might want to rethink locations altogether ,0
995,4759595993.0,IssuesEvent,2016-10-24 23:11:58,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,azure_rm_storageaccount throws SkuName' is not defined,affects_2.2 azure bug_report cloud waiting_on_maintainer,"Moving #17949 to the correct location with my own information.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_storageaccount
##### ANSIBLE VERSION
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
RHEL 7
##### SUMMARY
Using a similar example to the one found in the Azure guide, I receive a module failure that the SkuName is not defined
```
- name: testing storage module
hosts: localhost
tasks:
- name: Create storage account
azure_rm_storageaccount:
resource_group: mperz
name: testing
account_type: Standard_GRS
state: present
```
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py"", line 442, in
main()
File ""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py"", line 439, in main
AzureRMStorageAccount()
File ""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py"", line 180, in __init__
for key in SkuName:
NameError: global name 'SkuName' is not defined
fatal: [localhost]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_name"": ""azure_rm_storageaccount""
},
""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py\"", line 442, in \n main()\n File \""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py\"", line 439, in main\n AzureRMStorageAccount()\n File \""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py\"", line 180, in __init__\n for key in SkuName:\nNameError: global name 'SkuName' is not defined\n"",
""module_stdout"": """",
""msg"": ""MODULE FAILURE""
}
```
##### Additional Info
```
pip freeze|grep azure
azure==2.0.0rc5
azure-batch==0.30.0rc5
azure-common==1.1.4
azure-graphrbac==0.30.0rc5
azure-mgmt==0.30.0rc5
azure-mgmt-authorization==0.30.0rc5
azure-mgmt-batch==0.30.0rc5
azure-mgmt-cdn==0.30.0rc5
azure-mgmt-cognitiveservices==0.30.0rc5
azure-mgmt-commerce==0.30.0rc5
azure-mgmt-compute==0.30.0rc5
azure-mgmt-keyvault==0.30.0rc5
azure-mgmt-logic==0.30.0rc5
azure-mgmt-network==0.30.0rc5
azure-mgmt-notificationhubs==0.30.0rc5
azure-mgmt-nspkg==1.0.0
azure-mgmt-powerbiembedded==0.30.0rc5
azure-mgmt-redis==0.30.0rc5
azure-mgmt-resource==0.30.0rc5
azure-mgmt-scheduler==0.30.0rc5
azure-mgmt-storage==0.30.0rc5
azure-mgmt-web==0.30.0rc5
azure-nspkg==1.0.0
azure-servicebus==0.20.2
azure-servicemanagement-legacy==0.20.3
azure-storage==0.32.0
```
This happens regardless of which account_type you select.
",True,"azure_rm_storageaccount throws SkuName' is not defined - Moving #17949 to the correct location with my own information.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_storageaccount
##### ANSIBLE VERSION
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
RHEL 7
##### SUMMARY
Using a similar example to the one found in the Azure guide, I receive a module failure that the SkuName is not defined
```
- name: testing storage module
hosts: localhost
tasks:
- name: Create storage account
azure_rm_storageaccount:
resource_group: mperz
name: testing
account_type: Standard_GRS
state: present
```
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py"", line 442, in
main()
File ""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py"", line 439, in main
AzureRMStorageAccount()
File ""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py"", line 180, in __init__
for key in SkuName:
NameError: global name 'SkuName' is not defined
fatal: [localhost]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_name"": ""azure_rm_storageaccount""
},
""module_stderr"": ""Traceback (most recent call last):\n File \""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py\"", line 442, in \n main()\n File \""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py\"", line 439, in main\n AzureRMStorageAccount()\n File \""/tmp/ansible_Bb1sra/ansible_module_azure_rm_storageaccount.py\"", line 180, in __init__\n for key in SkuName:\nNameError: global name 'SkuName' is not defined\n"",
""module_stdout"": """",
""msg"": ""MODULE FAILURE""
}
```
##### Additional Info
```
pip freeze|grep azure
azure==2.0.0rc5
azure-batch==0.30.0rc5
azure-common==1.1.4
azure-graphrbac==0.30.0rc5
azure-mgmt==0.30.0rc5
azure-mgmt-authorization==0.30.0rc5
azure-mgmt-batch==0.30.0rc5
azure-mgmt-cdn==0.30.0rc5
azure-mgmt-cognitiveservices==0.30.0rc5
azure-mgmt-commerce==0.30.0rc5
azure-mgmt-compute==0.30.0rc5
azure-mgmt-keyvault==0.30.0rc5
azure-mgmt-logic==0.30.0rc5
azure-mgmt-network==0.30.0rc5
azure-mgmt-notificationhubs==0.30.0rc5
azure-mgmt-nspkg==1.0.0
azure-mgmt-powerbiembedded==0.30.0rc5
azure-mgmt-redis==0.30.0rc5
azure-mgmt-resource==0.30.0rc5
azure-mgmt-scheduler==0.30.0rc5
azure-mgmt-storage==0.30.0rc5
azure-mgmt-web==0.30.0rc5
azure-nspkg==1.0.0
azure-servicebus==0.20.2
azure-servicemanagement-legacy==0.20.3
azure-storage==0.32.0
```
This happens regardless of which account_type you select.
",1,azure rm storageaccount throws skuname is not defined moving to the correct location with my own information issue type bug report component name azure rm storageaccount ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment rhel summary using a similar example to the one found in the azure guide i receive a module failure that the skuname is not defined name testing storage module hosts localhost tasks name create storage account azure rm storageaccount resource group mperz name testing account type standard grs state present an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module azure rm storageaccount py line in main file tmp ansible ansible module azure rm storageaccount py line in main azurermstorageaccount file tmp ansible ansible module azure rm storageaccount py line in init for key in skuname nameerror global name skuname is not defined fatal failed changed false failed true invocation module name azure rm storageaccount module stderr traceback most recent call last n file tmp ansible ansible module azure rm storageaccount py line in n main n file tmp ansible ansible module azure rm storageaccount py line in main n azurermstorageaccount n file tmp ansible ansible module azure rm storageaccount py line in init n for key in skuname nnameerror global name skuname is not defined n module stdout msg module failure additional info pip freeze grep azure azure azure batch azure common azure graphrbac azure mgmt azure mgmt authorization azure mgmt batch azure mgmt cdn azure mgmt cognitiveservices azure mgmt commerce azure mgmt compute azure mgmt keyvault azure mgmt logic azure mgmt network azure mgmt notificationhubs azure mgmt nspkg azure mgmt powerbiembedded azure mgmt redis azure mgmt resource azure mgmt scheduler azure mgmt storage azure mgmt web azure nspkg azure servicebus azure servicemanagement legacy azure storage this happens regardless of which account type you select ,1
1639,6572661956.0,IssuesEvent,2017-09-11 04:11:14,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,"npm ""fs"" package installation is not idempotent",affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
installing ""fs"" package with npm module is always ""changed"" status
##### STEPS TO REPRODUCE
Try to install ""fs"" twice.
In example below, express is already installed.
```
root@g25:~# ansible localhost -m npm -a ""name=fs global=yes executable=/usr/bin/npm state=present""
localhost | SUCCESS => {
""changed"": true
}
root@g25:~# ansible localhost -m npm -a ""name=fs global=yes executable=/usr/bin/npm state=present""
localhost | SUCCESS => {
""changed"": true
}
root@g25:~# ansible localhost -m npm -a ""name=express global=yes executable=/usr/bin/npm state=present""
localhost | SUCCESS => {
""changed"": false
}
```
",True,"npm ""fs"" package installation is not idempotent - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
installing ""fs"" package with npm module is always ""changed"" status
##### STEPS TO REPRODUCE
Try to install ""fs"" twice.
In example below, express is already installed.
```
root@g25:~# ansible localhost -m npm -a ""name=fs global=yes executable=/usr/bin/npm state=present""
localhost | SUCCESS => {
""changed"": true
}
root@g25:~# ansible localhost -m npm -a ""name=fs global=yes executable=/usr/bin/npm state=present""
localhost | SUCCESS => {
""changed"": true
}
root@g25:~# ansible localhost -m npm -a ""name=express global=yes executable=/usr/bin/npm state=present""
localhost | SUCCESS => {
""changed"": false
}
```
",1,npm fs package installation is not idempotent issue type bug report component name npm ansible version ansible config file configured module search path default w o overrides os environment n a summary installing fs package with npm module is always changed status steps to reproduce try to install fs twice in example below express is already installed root ansible localhost m npm a name fs global yes executable usr bin npm state present localhost success changed true root ansible localhost m npm a name fs global yes executable usr bin npm state present localhost success changed true root ansible localhost m npm a name express global yes executable usr bin npm state present localhost success changed false ,1
921,4622220856.0,IssuesEvent,2016-09-27 06:31:25,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,docker_image often fails to push with docker-py 1.10.x,affects_2.2 bug_report cloud docker in progress waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_image module
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 07a76bece1) last updated 2016/09/16 16:50:58 (GMT -700)
lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/16 16:51:07 (GMT -700)
lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/16 16:51:07 (GMT -700)
config file = /Users/rmendes/github/roles/roles-docker/docker_new_image/tests/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Our inventory is dictionary based so:
```
hash_behaviour=merge
```
This is on a completely rebuilt laptop with one virtualenv and no docker-py at the system level.
```
pip freeze
-e git+https://github.com/ansible/ansible.git@07a76bece15d2568d7ea76e77266190652a0beec#egg=ansible
backports.ssl-match-hostname==3.5.0.1
cffi==1.8.3
cryptography==1.5
-e git+https://github.com/docker/docker-py.git@6b7a828400f46ea81374bc5764d8aa81bf38f6f7#egg=docker_py
docker-pycreds==0.2.1
enum34==1.1.6
idna==2.1
ipaddress==1.0.17
Jinja2==2.8
MarkupSafe==0.23
paramiko==2.0.2
py2-ipaddress==3.4.1
pyasn1==0.1.9
pycparser==2.14
pycrypto==2.6.1
PyYAML==3.12
requests==2.10.0
six==1.10.0
websocket-client==0.37.0
```
##### OS / ENVIRONMENT
OS X El Capitan
##### SUMMARY
Image pushes often fail with what looks like a JSON handling issue.
##### STEPS TO REPRODUCE
```
---
- name: push image
hosts: localhost
connection: local
gather_facts: False
tasks:
- name: push new image
docker_image:
name: ""test-image-3""
repository: ""127.0.0.1:5000/test-image-3""
tag: ""role-test""
pull: False
push: True
state: present
```
##### EXPECTED RESULTS
Image is successfully pushed.
##### ACTUAL RESULTS
I discovered this in a role I was testing. 4/5 times I ran the role tests I got a failure like the one shown. 1/5 it worked like it should.
```
fatal: [test-image-3]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_args"": {
""api_version"": null,
""archive_path"": null,
""buildargs"": null,
""cacert_path"": null,
""cert_path"": null,
""container_limits"": null,
""debug"": false,
""docker_host"": null,
""dockerfile"": null,
""filter_logger"": false,
""force"": false,
""http_timeout"": null,
""key_path"": null,
""load_path"": null,
""name"": ""test-image-3"",
""nocache"": ""False"",
""path"": null,
""pull"": false,
""push"": true,
""repository"": ""127.0.0.1:5000/test-image-3"",
""rm"": true,
""ssl_version"": null,
""state"": ""present"",
""tag"": ""role-test"",
""timeout"": null,
""tls"": null,
""tls_hostname"": null,
""tls_verify"": null,
""use_tls"": ""no""
},
""module_name"": ""ilmn_docker_image""
},
""msg"": ""Error pushing image 127.0.0.1:5000/test-image-3: Extra data: line 2 column 1 - line 3 column 1 (char 64 - 128)""
```
I thought this was a docker-py issue, so I reported it there today. @shin worked with me to show it is ultimately an Ansible issue. That thread is here - docker/docker-py#1222.
Here is what he reported:
Oh, I figured out why you're seeing the issue: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_image.py#L428
Ansible does the decoding of data chunks itself, so it doesn't rely on our JSON parsing code, causing the issue at their level when the API sometimes sends multiple chunks at a time. I'm afraid this is something you'll have to report there, as there's little to be done on our end.
",True,"docker_image often fails to push with docker-py 1.10.x - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_image module
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 07a76bece1) last updated 2016/09/16 16:50:58 (GMT -700)
lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/16 16:51:07 (GMT -700)
lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/16 16:51:07 (GMT -700)
config file = /Users/rmendes/github/roles/roles-docker/docker_new_image/tests/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Our inventory is dictionary based so:
```
hash_behaviour=merge
```
This is on a completely rebuilt laptop with one virtualenv and no docker-py at the system level.
```
pip freeze
-e git+https://github.com/ansible/ansible.git@07a76bece15d2568d7ea76e77266190652a0beec#egg=ansible
backports.ssl-match-hostname==3.5.0.1
cffi==1.8.3
cryptography==1.5
-e git+https://github.com/docker/docker-py.git@6b7a828400f46ea81374bc5764d8aa81bf38f6f7#egg=docker_py
docker-pycreds==0.2.1
enum34==1.1.6
idna==2.1
ipaddress==1.0.17
Jinja2==2.8
MarkupSafe==0.23
paramiko==2.0.2
py2-ipaddress==3.4.1
pyasn1==0.1.9
pycparser==2.14
pycrypto==2.6.1
PyYAML==3.12
requests==2.10.0
six==1.10.0
websocket-client==0.37.0
```
##### OS / ENVIRONMENT
OS X El Capitan
##### SUMMARY
Image pushes often fail with what looks like a JSON handling issue.
##### STEPS TO REPRODUCE
```
---
- name: push image
hosts: localhost
connection: local
gather_facts: False
tasks:
- name: push new image
docker_image:
name: ""test-image-3""
repository: ""127.0.0.1:5000/test-image-3""
tag: ""role-test""
pull: False
push: True
state: present
```
##### EXPECTED RESULTS
Image is successfully pushed.
##### ACTUAL RESULTS
I discovered this in a role I was testing. 4/5 times I ran the role tests I got a failure like the one shown. 1/5 it worked like it should.
```
fatal: [test-image-3]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_args"": {
""api_version"": null,
""archive_path"": null,
""buildargs"": null,
""cacert_path"": null,
""cert_path"": null,
""container_limits"": null,
""debug"": false,
""docker_host"": null,
""dockerfile"": null,
""filter_logger"": false,
""force"": false,
""http_timeout"": null,
""key_path"": null,
""load_path"": null,
""name"": ""test-image-3"",
""nocache"": ""False"",
""path"": null,
""pull"": false,
""push"": true,
""repository"": ""127.0.0.1:5000/test-image-3"",
""rm"": true,
""ssl_version"": null,
""state"": ""present"",
""tag"": ""role-test"",
""timeout"": null,
""tls"": null,
""tls_hostname"": null,
""tls_verify"": null,
""use_tls"": ""no""
},
""module_name"": ""ilmn_docker_image""
},
""msg"": ""Error pushing image 127.0.0.1:5000/test-image-3: Extra data: line 2 column 1 - line 3 column 1 (char 64 - 128)""
```
I thought this was a docker-py issue, so I reported it there today. @shin worked with me to show it is ultimately an Ansible issue. That thread is here - docker/docker-py#1222.
Here is what he reported:
Oh, I figured out why you're seeing the issue: https://github.com/ansible/ansible-modules-core/blob/devel/cloud/docker/docker_image.py#L428
Ansible does the decoding of data chunks itself, so it doesn't rely on our JSON parsing code, causing the issue at their level when the API sometimes sends multiple chunks at a time. I'm afraid this is something you'll have to report there, as there's little to be done on our end.
",1,docker image often fails to push with docker py x issue type bug report component name docker image module ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file users rmendes github roles roles docker docker new image tests ansible cfg configured module search path default w o overrides configuration our inventory is dictionary based so hash behaviour merge this is on a completely rebuilt laptop with one virtualenv and no docker py at the system level pip freeze e git backports ssl match hostname cffi cryptography e git docker pycreds idna ipaddress markupsafe paramiko ipaddress pycparser pycrypto pyyaml requests six websocket client os environment os x el capitan summary image pushes often fail with what looks like a json handling issue steps to reproduce name push image hosts localhost connection local gather facts false tasks name push new image docker image name test image repository test image tag role test pull false push true state present expected results image is successfully pushed actual results i discovered this in a role i was testing times i ran the role tests i got a failure like the one shown it worked like it should fatal failed changed false failed true invocation module args api version null archive path null buildargs null cacert path null cert path null container limits null debug false docker host null dockerfile null filter logger false force false http timeout null key path null load path null name test image nocache false path null pull false push true repository test image rm true ssl version null state present tag role test timeout null tls null tls hostname null tls verify null use tls no module name ilmn docker image msg error pushing image test image extra data line column line column char i thought this was a docker py issue so i reported it there today shin worked with me to show it is ultimately an ansible issue that thread is here docker docker py here is what he reported oh i figured out why you re seeing the issue ansible does the decoding of data chunks itself so it doesn t rely on our json parsing code causing the issue at their level when the api sometimes sends multiple chunks at a time i m afraid this is something you ll have to report there as there s little to be done on our end ,1
1484,6416007991.0,IssuesEvent,2017-08-08 14:00:28,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Deregistering an instance from an ELB causes health check to fail.,affects_2.2 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
ec2_elb_lb module
##### ANSIBLE VERSION
N/A
##### SUMMARY
When I deregister an instance from an ELB it causes the health check to fail and sets off alerts. Why wouldn't is just remove the instance from the pool immediately? What is actually doing behind the scenes?
",True,"Deregistering an instance from an ELB causes health check to fail. - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
ec2_elb_lb module
##### ANSIBLE VERSION
N/A
##### SUMMARY
When I deregister an instance from an ELB it causes the health check to fail and sets off alerts. Why wouldn't is just remove the instance from the pool immediately? What is actually doing behind the scenes?
",1,deregistering an instance from an elb causes health check to fail issue type bug report component name elb lb module ansible version n a summary when i deregister an instance from an elb it causes the health check to fail and sets off alerts why wouldn t is just remove the instance from the pool immediately what is actually doing behind the scenes ,1
242658,20254766345.0,IssuesEvent,2022-02-14 21:46:28,dotnet/machinelearning-modelbuilder,https://api.github.com/repos/dotnet/machinelearning-modelbuilder,closed,Model Builder Error: can't find image in boxes after input the json file with image labeling.,Priority:1 Test Team Stale,"**System Information (please complete the following information):**
- Model Builder Version (available in Manage Extensions dialog): 16.9.1.2152703
- Microsoft Visual Studio Enterprise 2019: 16.11.5
**Describe the bug**
- On which step of the process did you run into an issue: Object detection>Data page
- Clear description of the problem: after input the json file with image labeling, prompt Model Builder Error: can't find image in boxes, close the error dialog, can continue the next step and training completed.
**To Reproduce**
Steps to reproduce the behavior:
1. Select Create a new project from the Visual Studio 2019 start window;
2. Choose the C# Console App (.NET Core) project template with .Net 5.0;
3. Add model builder by right click on the project;
4. Select Object detection and Azure environment;
5. Input the Json file, prompt Model Builder Error: can't find image in boxes.
**Expected behavior**
No any error after input the data set.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Error message:**
at Microsoft.ML.ModelBuilder.ViewModels.ObjectDetectionDataPreviewViewModel..ctor(String imagePath, IEnumerable`1 tags, IEnumerable`1 boxes, Int32 imageMaxWidthAndHeight, Boolean isShowScore)
at Microsoft.ML.ModelBuilder.ViewModels.ObjectDetectionDataViewModel.OnSelectedImageChange_SetPreviewViewModel()
at Microsoft.ML.ModelBuilder.ViewModels.ObjectDetectionDataViewModel.<<-ctor>b__0_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ML.ModelBuilder.Observable.ObservableModel.d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ML.ModelBuilder.Observable.ObservableModel.d__28.MoveNext()
",1.0,"Model Builder Error: can't find image in boxes after input the json file with image labeling. - **System Information (please complete the following information):**
- Model Builder Version (available in Manage Extensions dialog): 16.9.1.2152703
- Microsoft Visual Studio Enterprise 2019: 16.11.5
**Describe the bug**
- On which step of the process did you run into an issue: Object detection>Data page
- Clear description of the problem: after input the json file with image labeling, prompt Model Builder Error: can't find image in boxes, close the error dialog, can continue the next step and training completed.
**To Reproduce**
Steps to reproduce the behavior:
1. Select Create a new project from the Visual Studio 2019 start window;
2. Choose the C# Console App (.NET Core) project template with .Net 5.0;
3. Add model builder by right click on the project;
4. Select Object detection and Azure environment;
5. Input the Json file, prompt Model Builder Error: can't find image in boxes.
**Expected behavior**
No any error after input the data set.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Error message:**
at Microsoft.ML.ModelBuilder.ViewModels.ObjectDetectionDataPreviewViewModel..ctor(String imagePath, IEnumerable`1 tags, IEnumerable`1 boxes, Int32 imageMaxWidthAndHeight, Boolean isShowScore)
at Microsoft.ML.ModelBuilder.ViewModels.ObjectDetectionDataViewModel.OnSelectedImageChange_SetPreviewViewModel()
at Microsoft.ML.ModelBuilder.ViewModels.ObjectDetectionDataViewModel.<<-ctor>b__0_0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ML.ModelBuilder.Observable.ObservableModel.d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.ML.ModelBuilder.Observable.ObservableModel.d__28.MoveNext()
",0,model builder error can t find image in boxes after input the json file with image labeling system information please complete the following information model builder version available in manage extensions dialog microsoft visual studio enterprise describe the bug on which step of the process did you run into an issue object detection data page clear description of the problem after input the json file with image labeling prompt model builder error can t find image in boxes close the error dialog can continue the next step and training completed to reproduce steps to reproduce the behavior select create a new project from the visual studio start window choose the c console app net core project template with net add model builder by right click on the project select object detection and azure environment input the json file prompt model builder error can t find image in boxes expected behavior no any error after input the data set screenshots if applicable add screenshots to help explain your problem error message at microsoft ml modelbuilder viewmodels objectdetectiondatapreviewviewmodel ctor string imagepath ienumerable tags ienumerable boxes imagemaxwidthandheight boolean isshowscore at microsoft ml modelbuilder viewmodels objectdetectiondataviewmodel onselectedimagechange setpreviewviewmodel at microsoft ml modelbuilder viewmodels objectdetectiondataviewmodel b d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft ml modelbuilder observable observablemodel d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft ml modelbuilder observable observablemodel d movenext ,0
5201,26440582887.0,IssuesEvent,2023-01-15 22:49:04,MarcusWolschon/osmeditor4android,https://api.github.com/repos/MarcusWolschon/osmeditor4android,opened,Rework GPX file loading,Maintainability,"A lot of the GPX file loading code has no function anymore since everything is being done in setupLayers, this should be cleaned up. ",True,"Rework GPX file loading - A lot of the GPX file loading code has no function anymore since everything is being done in setupLayers, this should be cleaned up. ",1,rework gpx file loading a lot of the gpx file loading code has no function anymore since everything is being done in setuplayers this should be cleaned up ,1
898,4559831477.0,IssuesEvent,2016-09-14 04:49:59,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,Doc: ec2_vpc_route_table_facts: example for VPC ID filter has error,affects_2.1 aws cloud docs_report waiting_on_maintainer,"
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ec2_vpc_route_table_facts
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
At [this URL](https://docs.ansible.com/ansible/ec2_vpc_route_table_facts_module.html) it says:
```
# Gather facts about any VPC route table within VPC with ID vpc-abcdef00
- ec2_vpc_route_table_facts:
filters:
vpc-id: vpc-abcdef00
```
However, the actual filter expression should be as follows (confirmed this thru coding):
```
vpc_id: vpc-abcdef00
```
Note the underscore vs. the dash.
",True,"Doc: ec2_vpc_route_table_facts: example for VPC ID filter has error -
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ec2_vpc_route_table_facts
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
At [this URL](https://docs.ansible.com/ansible/ec2_vpc_route_table_facts_module.html) it says:
```
# Gather facts about any VPC route table within VPC with ID vpc-abcdef00
- ec2_vpc_route_table_facts:
filters:
vpc-id: vpc-abcdef00
```
However, the actual filter expression should be as follows (confirmed this thru coding):
```
vpc_id: vpc-abcdef00
```
Note the underscore vs. the dash.
",1,doc vpc route table facts example for vpc id filter has error issue type documentation report component name vpc route table facts ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary at it says gather facts about any vpc route table within vpc with id vpc vpc route table facts filters vpc id vpc however the actual filter expression should be as follows confirmed this thru coding vpc id vpc note the underscore vs the dash ,1
889,4553127777.0,IssuesEvent,2016-09-13 02:46:27,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,lxd_container module can not have remote as part of container name,affects_2.2 bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lxd_container
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel f4237b2151) last updated 2016/08/16 23:49:16 (GMT +200)
lib/ansible/modules/core: (detached HEAD 45c1ae0ac1) last updated 2016/08/16 23:49:20 (GMT +200)
lib/ansible/modules/extras: (detached HEAD a6b34973a8) last updated 2016/08/16 23:49:20 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
default
##### OS / ENVIRONMENT
Ubuntu 16.04 on both local and remote
##### SUMMARY
I want to create a container on a remote lxd server. The server has been registered locally with `lxc remote add nuc1 `.
##### STEPS TO REPRODUCE
`ansible-playbook test.yml`
###### test.yml
```
- hosts: localhost
connection: local
tasks:
- name: create container test1
lxd_container:
name: ""nuc1:test1""
state: started
source:
type: image
mode: pull
server: https://cloud-images.ubuntu.com/daily
protocol: simplestreams
alias: ""16.04""
architecture: x86_64
```
##### EXPECTED RESULTS
I expected the container `test1` to have been launched on remote `nuc1`, essentially executing the command `lxc launch ubuntu-daily:16.04 nuc1:test1`
##### ACTUAL RESULTS
```
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /home/magne/src/ansible/lib/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: magne
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055 `"" && echo ansible-tmp-1471409462.83-210640866662055=""` echo $HOME/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055 `"" ) && sleep 0'
<127.0.0.1> PUT /tmp/magne/tmpkYE_4L TO /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/ /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/setup.py; rm -rf ""/home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/"" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [create container test1] **************************************************
task path: /home/magne/development/ansible/t/test.yml:5
Using module file /home/magne/src/ansible/lib/ansible/modules/extras/cloud/lxd/lxd_container.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: magne
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257 `"" && echo ansible-tmp-1471409464.58-225577408881257=""` echo $HOME/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257 `"" ) && sleep 0'
<127.0.0.1> PUT /tmp/magne/tmpkmiVeV TO /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/lxd_container.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/ /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/lxd_container.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/lxd_container.py; rm -rf ""/home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/"" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
""actions"": [],
""changed"": false,
""failed"": true,
""invocation"": {
""module_args"": {
""architecture"": null,
""cert_file"": ""/home/magne/.config/lxc/client.crt"",
""config"": null,
""description"": null,
""devices"": null,
""ephemeral"": null,
""force_stop"": false,
""key_file"": ""/home/magne/.config/lxc/client.key"",
""name"": ""nuc1:test1"",
""profiles"": null,
""source"": {
""alias"": ""16.04"",
""architecture"": ""x86_64"",
""mode"": ""pull"",
""protocol"": ""simplestreams"",
""server"": ""https://cloud-images.ubuntu.com/daily"",
""type"": ""image""
},
""state"": ""started"",
""timeout"": 30,
""trust_password"": null,
""url"": ""unix:/var/lib/lxd/unix.socket"",
""wait_for_ipv4_addresses"": false
},
""module_name"": ""lxd_container""
},
""logs"": [
{
""request"": {
""json"": null,
""method"": ""GET"",
""timeout"": null,
""url"": ""/1.0/containers/nuc1:test1""
},
""response"": {
""json"": {
""error"": ""not found"",
""error_code"": 404,
""type"": ""error""
}
},
""type"": ""sent request""
},
{
""request"": {
""json"": {
""name"": ""nuc1:test1"",
""source"": {
""alias"": ""16.04"",
""architecture"": ""x86_64"",
""mode"": ""pull"",
""protocol"": ""simplestreams"",
""server"": ""https://cloud-images.ubuntu.com/daily"",
""type"": ""image""
}
},
""method"": ""POST"",
""timeout"": null,
""url"": ""/1.0/containers""
},
""response"": {
""json"": {
""metadata"": {
""class"": ""task"",
""created_at"": ""2016-08-17T06:51:04.837933973+02:00"",
""err"": """",
""id"": ""8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8"",
""may_cancel"": false,
""metadata"": null,
""resources"": {
""containers"": [
""/1.0/containers/nuc1:test1""
]
},
""status"": ""Running"",
""status_code"": 103,
""updated_at"": ""2016-08-17T06:51:04.837933973+02:00""
},
""operation"": ""/1.0/operations/8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8"",
""status"": ""Operation created"",
""status_code"": 100,
""type"": ""async""
}
},
""type"": ""sent request""
},
{
""request"": {
""json"": null,
""method"": ""GET"",
""timeout"": null,
""url"": ""/1.0/operations/8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8/wait""
},
""response"": {
""json"": {
""metadata"": {
""class"": ""task"",
""created_at"": ""2016-08-17T06:51:04.837933973+02:00"",
""err"": ""Container name isn't a valid hostname."",
""id"": ""8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8"",
""may_cancel"": false,
""metadata"": null,
""resources"": {
""containers"": [
""/1.0/containers/nuc1:test1""
]
},
""status"": ""Failure"",
""status_code"": 400,
""updated_at"": ""2016-08-17T06:51:04.837933973+02:00""
},
""status"": ""Success"",
""status_code"": 200,
""type"": ""sync""
}
},
""type"": ""sent request""
}
],
""msg"": ""Container name isn't a valid hostname.""
}
to retry, use: --limit @test.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
",True,"lxd_container module can not have remote as part of container name - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lxd_container
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel f4237b2151) last updated 2016/08/16 23:49:16 (GMT +200)
lib/ansible/modules/core: (detached HEAD 45c1ae0ac1) last updated 2016/08/16 23:49:20 (GMT +200)
lib/ansible/modules/extras: (detached HEAD a6b34973a8) last updated 2016/08/16 23:49:20 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
default
##### OS / ENVIRONMENT
Ubuntu 16.04 on both local and remote
##### SUMMARY
I want to create a container on a remote lxd server. The server has been registered locally with `lxc remote add nuc1 `.
##### STEPS TO REPRODUCE
`ansible-playbook test.yml`
###### test.yml
```
- hosts: localhost
connection: local
tasks:
- name: create container test1
lxd_container:
name: ""nuc1:test1""
state: started
source:
type: image
mode: pull
server: https://cloud-images.ubuntu.com/daily
protocol: simplestreams
alias: ""16.04""
architecture: x86_64
```
##### EXPECTED RESULTS
I expected the container `test1` to have been launched on remote `nuc1`, essentially executing the command `lxc launch ubuntu-daily:16.04 nuc1:test1`
##### ACTUAL RESULTS
```
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Using module file /home/magne/src/ansible/lib/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: magne
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055 `"" && echo ansible-tmp-1471409462.83-210640866662055=""` echo $HOME/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055 `"" ) && sleep 0'
<127.0.0.1> PUT /tmp/magne/tmpkYE_4L TO /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/ /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/setup.py; rm -rf ""/home/magne/.ansible/tmp/ansible-tmp-1471409462.83-210640866662055/"" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [create container test1] **************************************************
task path: /home/magne/development/ansible/t/test.yml:5
Using module file /home/magne/src/ansible/lib/ansible/modules/extras/cloud/lxd/lxd_container.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: magne
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257 `"" && echo ansible-tmp-1471409464.58-225577408881257=""` echo $HOME/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257 `"" ) && sleep 0'
<127.0.0.1> PUT /tmp/magne/tmpkmiVeV TO /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/lxd_container.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/ /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/lxd_container.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/lxd_container.py; rm -rf ""/home/magne/.ansible/tmp/ansible-tmp-1471409464.58-225577408881257/"" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
""actions"": [],
""changed"": false,
""failed"": true,
""invocation"": {
""module_args"": {
""architecture"": null,
""cert_file"": ""/home/magne/.config/lxc/client.crt"",
""config"": null,
""description"": null,
""devices"": null,
""ephemeral"": null,
""force_stop"": false,
""key_file"": ""/home/magne/.config/lxc/client.key"",
""name"": ""nuc1:test1"",
""profiles"": null,
""source"": {
""alias"": ""16.04"",
""architecture"": ""x86_64"",
""mode"": ""pull"",
""protocol"": ""simplestreams"",
""server"": ""https://cloud-images.ubuntu.com/daily"",
""type"": ""image""
},
""state"": ""started"",
""timeout"": 30,
""trust_password"": null,
""url"": ""unix:/var/lib/lxd/unix.socket"",
""wait_for_ipv4_addresses"": false
},
""module_name"": ""lxd_container""
},
""logs"": [
{
""request"": {
""json"": null,
""method"": ""GET"",
""timeout"": null,
""url"": ""/1.0/containers/nuc1:test1""
},
""response"": {
""json"": {
""error"": ""not found"",
""error_code"": 404,
""type"": ""error""
}
},
""type"": ""sent request""
},
{
""request"": {
""json"": {
""name"": ""nuc1:test1"",
""source"": {
""alias"": ""16.04"",
""architecture"": ""x86_64"",
""mode"": ""pull"",
""protocol"": ""simplestreams"",
""server"": ""https://cloud-images.ubuntu.com/daily"",
""type"": ""image""
}
},
""method"": ""POST"",
""timeout"": null,
""url"": ""/1.0/containers""
},
""response"": {
""json"": {
""metadata"": {
""class"": ""task"",
""created_at"": ""2016-08-17T06:51:04.837933973+02:00"",
""err"": """",
""id"": ""8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8"",
""may_cancel"": false,
""metadata"": null,
""resources"": {
""containers"": [
""/1.0/containers/nuc1:test1""
]
},
""status"": ""Running"",
""status_code"": 103,
""updated_at"": ""2016-08-17T06:51:04.837933973+02:00""
},
""operation"": ""/1.0/operations/8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8"",
""status"": ""Operation created"",
""status_code"": 100,
""type"": ""async""
}
},
""type"": ""sent request""
},
{
""request"": {
""json"": null,
""method"": ""GET"",
""timeout"": null,
""url"": ""/1.0/operations/8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8/wait""
},
""response"": {
""json"": {
""metadata"": {
""class"": ""task"",
""created_at"": ""2016-08-17T06:51:04.837933973+02:00"",
""err"": ""Container name isn't a valid hostname."",
""id"": ""8f17ad34-7c9f-4f08-a90b-c2a69ed68fe8"",
""may_cancel"": false,
""metadata"": null,
""resources"": {
""containers"": [
""/1.0/containers/nuc1:test1""
]
},
""status"": ""Failure"",
""status_code"": 400,
""updated_at"": ""2016-08-17T06:51:04.837933973+02:00""
},
""status"": ""Success"",
""status_code"": 200,
""type"": ""sync""
}
},
""type"": ""sent request""
}
],
""msg"": ""Container name isn't a valid hostname.""
}
to retry, use: --limit @test.retry
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
```
",1,lxd container module can not have remote as part of container name issue type bug report component name lxd container ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration default os environment ubuntu on both local and remote summary i want to create a container on a remote lxd server the server has been registered locally with lxc remote add steps to reproduce ansible playbook test yml test yml hosts localhost connection local tasks name create container lxd container name state started source type image mode pull server protocol simplestreams alias architecture expected results i expected the container to have been launched on remote essentially executing the command lxc launch ubuntu daily actual results no config file found using defaults loaded callback default of type stdout playbook test yml plays in test yml play task using module file home magne src ansible lib ansible modules core system setup py establish local connection for user magne exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp magne tmpkye to home magne ansible tmp ansible tmp setup py exec bin sh c chmod u x home magne ansible tmp ansible tmp home magne ansible tmp ansible tmp setup py sleep exec bin sh c usr bin python home magne ansible tmp ansible tmp setup py rm rf home magne ansible tmp ansible tmp dev null sleep ok task task path home magne development ansible t test yml using module file home magne src ansible lib ansible modules extras cloud lxd lxd container py establish local connection for user magne exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp magne tmpkmivev to home magne ansible tmp ansible tmp lxd container py exec bin sh c chmod u x home magne ansible tmp ansible tmp home magne ansible tmp ansible tmp lxd container py sleep exec bin sh c usr bin python home magne ansible tmp ansible tmp lxd container py rm rf home magne ansible tmp ansible tmp dev null sleep fatal failed actions changed false failed true invocation module args architecture null cert file home magne config lxc client crt config null description null devices null ephemeral null force stop false key file home magne config lxc client key name profiles null source alias architecture mode pull protocol simplestreams server type image state started timeout trust password null url unix var lib lxd unix socket wait for addresses false module name lxd container logs request json null method get timeout null url containers response json error not found error code type error type sent request request json name source alias architecture mode pull protocol simplestreams server type image method post timeout null url containers response json metadata class task created at err id may cancel false metadata null resources containers containers status running status code updated at operation operations status operation created status code type async type sent request request json null method get timeout null url operations wait response json metadata class task created at err container name isn t a valid hostname id may cancel false metadata null resources containers containers status failure status code updated at status success status code type sync type sent request msg container name isn t a valid hostname to retry use limit test retry play recap localhost ok changed unreachable failed ,1
3292,12627360022.0,IssuesEvent,2020-06-14 21:05:39,short-d/short,https://api.github.com/repos/short-d/short,opened,[Refactor] Refactor graphQL related schemas into a singe source,maintainability refactor,"**What is frustrating you?**
There are multiple places where the GraphQL schema is being reused in services like `ChangeLogGraphQLApi`, `ShortLinkGraphQLApi` and `UrlService`. When there is a change in the server graphQL schema, it requires changes in all these files which makes it harder to maintain.
**Your solution**
Create a single source for all these schema and make the service import and use the interface structures.",True,"[Refactor] Refactor graphQL related schemas into a singe source - **What is frustrating you?**
There are multiple places where the GraphQL schema is being reused in services like `ChangeLogGraphQLApi`, `ShortLinkGraphQLApi` and `UrlService`. When there is a change in the server graphQL schema, it requires changes in all these files which makes it harder to maintain.
**Your solution**
Create a single source for all these schema and make the service import and use the interface structures.",1, refactor graphql related schemas into a singe source what is frustrating you there are multiple places where the graphql schema is being reused in services like changeloggraphqlapi shortlinkgraphqlapi and urlservice when there is a change in the server graphql schema it requires changes in all these files which makes it harder to maintain your solution create a single source for all these schema and make the service import and use the interface structures ,1
4860,25012288884.0,IssuesEvent,2022-11-03 16:04:42,centerofci/mathesar,https://api.github.com/repos/centerofci/mathesar,opened,Re-focus table cell after opening record selector on that cell,type: bug work: frontend status: ready restricted: maintainers,"## Steps to reproduce
1. Go to the Table Page for a table that has a foreign key column.
1. Open the Record Selector from an FK cell.
1. Close the Record Selector, either via Esc, the close button, or by submitting a value.
1. Expect the cell within the table to remain selected and active.
1. Instead observe that after closing the Record Selector, the cell is no longer active.
",True,"Re-focus table cell after opening record selector on that cell - ## Steps to reproduce
1. Go to the Table Page for a table that has a foreign key column.
1. Open the Record Selector from an FK cell.
1. Close the Record Selector, either via Esc, the close button, or by submitting a value.
1. Expect the cell within the table to remain selected and active.
1. Instead observe that after closing the Record Selector, the cell is no longer active.
",1,re focus table cell after opening record selector on that cell steps to reproduce go to the table page for a table that has a foreign key column open the record selector from an fk cell close the record selector either via esc the close button or by submitting a value expect the cell within the table to remain selected and active instead observe that after closing the record selector the cell is no longer active ,1
5027,25801825584.0,IssuesEvent,2022-12-11 03:23:11,deislabs/spiderlightning,https://api.github.com/repos/deislabs/spiderlightning,closed,configs.usersecrets doesn't work on Windows,🐛 bug 🚧 maintainer issue,"**Description of the bug**
It doesn't truncate the file.
**To Reproduce**
n/a
**Additional context**
n/a",True,"configs.usersecrets doesn't work on Windows - **Description of the bug**
It doesn't truncate the file.
**To Reproduce**
n/a
**Additional context**
n/a",1,configs usersecrets doesn t work on windows description of the bug it doesn t truncate the file to reproduce n a additional context n a,1
116641,14986552743.0,IssuesEvent,2021-01-28 21:23:44,flutter/flutter,https://api.github.com/repos/flutter/flutter,closed,[Web] OutlineInputBorder doesn't render properly when there is DropdownButtonFormField with custom decoration in a column,P3 a: fidelity a: text input e: web_html engine f: material design found in release: 1.24 framework has reproducible steps platform-web severe: regression severe: rendering,"## Steps to Reproduce
I was **unable** to create simple test case to consistently reproduce this problem.
Code was working just fine in **1.23.0-18.1.pre**
After upgrading to **1.24.0-10.2.pre** I see ""random"" border corruption:

When I'm running with: --dart-define=FLUTTER_WEB_USE_SKIA=true
**no such corruption occurs**.

```
[√] Flutter (Channel beta, 1.24.0-10.2.pre, on Microsoft Windows [Version 10.0.19041.630], locale en-US)
• Flutter version 1.24.0-10.2.pre at c:\Programs\flutter
• Framework revision 022b333a08 (2 days ago), 2020-11-18 11:35:09 -0800
• Engine revision 07c1eed46b
• Dart version 2.12.0 (build 2.12.0-29.10.beta)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at C:\Users\slavap\AppData\Local\Android\sdk
• Platform android-29, build-tools 29.0.2
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
• All Android licenses accepted.
[√] Chrome - develop for the web
• CHROME_EXECUTABLE = c:\Programs\chrome-debug.bat
[√] Android Studio (version 4.1.0)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
[√] VS Code, 64-bit edition (version 1.51.1)
• VS Code at C:\Program Files\Microsoft VS Code
• Flutter extension version 3.16.0
[√] Connected device (2 available)
• Web Server (web) • web-server • web-javascript • Flutter Tools
• Chrome (web) • chrome • web-javascript • Google Chrome 87.0.4280.66
• No issues found!
```
",1.0,"[Web] OutlineInputBorder doesn't render properly when there is DropdownButtonFormField with custom decoration in a column - ## Steps to Reproduce
I was **unable** to create simple test case to consistently reproduce this problem.
Code was working just fine in **1.23.0-18.1.pre**
After upgrading to **1.24.0-10.2.pre** I see ""random"" border corruption:

When I'm running with: --dart-define=FLUTTER_WEB_USE_SKIA=true
**no such corruption occurs**.

```
[√] Flutter (Channel beta, 1.24.0-10.2.pre, on Microsoft Windows [Version 10.0.19041.630], locale en-US)
• Flutter version 1.24.0-10.2.pre at c:\Programs\flutter
• Framework revision 022b333a08 (2 days ago), 2020-11-18 11:35:09 -0800
• Engine revision 07c1eed46b
• Dart version 2.12.0 (build 2.12.0-29.10.beta)
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at C:\Users\slavap\AppData\Local\Android\sdk
• Platform android-29, build-tools 29.0.2
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
• All Android licenses accepted.
[√] Chrome - develop for the web
• CHROME_EXECUTABLE = c:\Programs\chrome-debug.bat
[√] Android Studio (version 4.1.0)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
[√] VS Code, 64-bit edition (version 1.51.1)
• VS Code at C:\Program Files\Microsoft VS Code
• Flutter extension version 3.16.0
[√] Connected device (2 available)
• Web Server (web) • web-server • web-javascript • Flutter Tools
• Chrome (web) • chrome • web-javascript • Google Chrome 87.0.4280.66
• No issues found!
```
",0, outlineinputborder doesn t render properly when there is dropdownbuttonformfield with custom decoration in a column steps to reproduce i was unable to create simple test case to consistently reproduce this problem code was working just fine in pre after upgrading to pre i see random border corruption when i m running with dart define flutter web use skia true no such corruption occurs flutter channel beta pre on microsoft windows locale en us • flutter version pre at c programs flutter • framework revision days ago • engine revision • dart version build beta android toolchain develop for android devices android sdk version • android sdk at c users slavap appdata local android sdk • platform android build tools • java binary at c program files android android studio jre bin java • java version openjdk runtime environment build release • all android licenses accepted chrome develop for the web • chrome executable c programs chrome debug bat android studio version • android studio at c program files android android studio • flutter plugin can be installed from • dart plugin can be installed from • java version openjdk runtime environment build release vs code bit edition version • vs code at c program files microsoft vs code • flutter extension version connected device available • web server web • web server • web javascript • flutter tools • chrome web • chrome • web javascript • google chrome • no issues found ,0
4725,24380930686.0,IssuesEvent,2022-10-04 07:46:05,rustsec/advisory-db,https://api.github.com/repos/rustsec/advisory-db,closed,`badge` is unmaintained,Unmaintained,"The [`badge`](https://crates.io/crates/badge) crate is unmaintained and will not receive further updates, as the [code has been removed from the repository](https://github.com/rust-lang/docs.rs/commit/94f3bba6815412bc4672621c4690a93e656486c7).
It is no longer used by the authors and therefore will not receive any updates: https://github.com/rust-lang/docs.rs/issues/1813#issuecomment-1232875809",True,"`badge` is unmaintained - The [`badge`](https://crates.io/crates/badge) crate is unmaintained and will not receive further updates, as the [code has been removed from the repository](https://github.com/rust-lang/docs.rs/commit/94f3bba6815412bc4672621c4690a93e656486c7).
It is no longer used by the authors and therefore will not receive any updates: https://github.com/rust-lang/docs.rs/issues/1813#issuecomment-1232875809",1, badge is unmaintained the crate is unmaintained and will not receive further updates as the it is no longer used by the authors and therefore will not receive any updates ,1
500379,14497612957.0,IssuesEvent,2020-12-11 14:27:51,telerik/kendo-ui-core,https://api.github.com/repos/telerik/kendo-ui-core,closed,Unable to databind Gantt with taskId and parentId string fields,Bug C: Gantt FP: Completed Kendo2 Next LIB Priority 5 S: Wrappers (ASP.NET Core) S: Wrappers (ASP.NET MVC) SEV: High,"### Bug report
In a Razor Pages project, the Gantt's tasks are not binding if the taskID and parentID fields are strings.
This is a regression introduced in version 2020.3.915.
### Reproduction of the problem
1. Open and run [this example](https://github.com/telerik/ui-for-aspnet-core-examples/tree/master/Telerik.Examples.RazorPages/Telerik.Examples.RazorPages/Pages/Gantt)
2. Switch to Kendo version after 2020.2.617
### Current behavior
There are no tasks displayed in the Gantt.
### Expected/desired behavior
The tasks should be displayed.
### Environment
* **Kendo UI version:** 2020.3.1118
* **Browser:** [all]
",1.0,"Unable to databind Gantt with taskId and parentId string fields - ### Bug report
In a Razor Pages project, the Gantt's tasks are not binding if the taskID and parentID fields are strings.
This is a regression introduced in version 2020.3.915.
### Reproduction of the problem
1. Open and run [this example](https://github.com/telerik/ui-for-aspnet-core-examples/tree/master/Telerik.Examples.RazorPages/Telerik.Examples.RazorPages/Pages/Gantt)
2. Switch to Kendo version after 2020.2.617
### Current behavior
There are no tasks displayed in the Gantt.
### Expected/desired behavior
The tasks should be displayed.
### Environment
* **Kendo UI version:** 2020.3.1118
* **Browser:** [all]
",0,unable to databind gantt with taskid and parentid string fields bug report in a razor pages project the gantt s tasks are not binding if the taskid and parentid fields are strings this is a regression introduced in version reproduction of the problem open and run switch to kendo version after current behavior there are no tasks displayed in the gantt expected desired behavior the tasks should be displayed environment kendo ui version browser ,0
414915,12121461958.0,IssuesEvent,2020-04-22 09:20:35,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.google.com - site is not usable,browser-firefox engine-gecko priority-critical,"
**URL**: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwimpZn2h_roAhUexDgGHeTKBjcQFjAAegQICRAC&url=https%3A%2F%2F3dwarehouse.sketchup.com%2F%3Fhl%3Den&usg=AOvVaw3r_zxTadmQ_YWO3IdunfNF
**Browser / Version**: Firefox 76.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
i am continously browsing 3d warehouse but its its still loading
View the screenshotBrowser Configuration
gfx.webrender.all: false
gfx.webrender.blob-images: true
gfx.webrender.enabled: false
image.mem.shared: true
buildID: 20200412214314
channel: beta
hasTouchScreen: false
mixed active content blocked: false
mixed passive content blocked: false
tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2020/4/6b58ad1d-0805-4df4-bc29-7f23d9c63a99)
Submitted in the name of `@ashok`
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.google.com - site is not usable -
**URL**: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwimpZn2h_roAhUexDgGHeTKBjcQFjAAegQICRAC&url=https%3A%2F%2F3dwarehouse.sketchup.com%2F%3Fhl%3Den&usg=AOvVaw3r_zxTadmQ_YWO3IdunfNF
**Browser / Version**: Firefox 76.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
i am continously browsing 3d warehouse but its its still loading
View the screenshotBrowser Configuration
gfx.webrender.all: false
gfx.webrender.blob-images: true
gfx.webrender.enabled: false
image.mem.shared: true
buildID: 20200412214314
channel: beta
hasTouchScreen: false
mixed active content blocked: false
mixed passive content blocked: false
tracking content blocked: false
[View console log messages](https://webcompat.com/console_logs/2020/4/6b58ad1d-0805-4df4-bc29-7f23d9c63a99)
Submitted in the name of `@ashok`
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce i am continously browsing warehouse but its its still loading view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false submitted in the name of ashok from with ❤️ ,0
139337,5367566721.0,IssuesEvent,2017-02-22 04:56:52,projectcalico/calico,https://api.github.com/repos/projectcalico/calico,opened,Update IPIP env var docs,area/docs content/out-of-date priority/P1,PR https://github.com/projectcalico/calicoctl/pull/1542 changes the IPIP environment variable and those changes need to be reflected in the docs.,1.0,Update IPIP env var docs - PR https://github.com/projectcalico/calicoctl/pull/1542 changes the IPIP environment variable and those changes need to be reflected in the docs.,0,update ipip env var docs pr changes the ipip environment variable and those changes need to be reflected in the docs ,0
3346,12972351145.0,IssuesEvent,2020-07-21 12:27:34,ipfs-shipyard/ipld-explorer-components,https://api.github.com/repos/ipfs-shipyard/ipld-explorer-components,opened,Missing support for Filecoin codecs,P0 dif/expert effort/weeks kind/bug kind/enhancement need/analysis need/community-input need/maintainer-input,"## Problem
Even when we update to latest `cids` library, when user enters a CID with Filecoin-related codec, they get error because `ipld-filecoin` decoder does not exist:
> 
Test CIDs from https://github.com/multiformats/multihash/issues/129#issuecomment-661040091:
```
baga6ea4seaqggjjfh7whhdoxvhrix6jbcgobmdyhajcimfn33iedcp3kr23gruq
baga6ea4seaqidbk23bub2dmg2hur4aawpe44wzuu2lccflgsbcqaokjzjb7wtgi
bagboea4b5abcax5zbow3g7cyeg3nsvjguqnjkbdibnhhzc3whinr2sousvoijbrb
bagboea4b5abcb245dcsepbaelwd7hrt46itun2mvv5nckntzkg5kf73m2ry4ja7r
```
## Solution
IPLD Explorer already supports Bitcoin and Ethereum:
```
""ipld-bitcoin"": ""^0.3.0"",
""ipld-ethereum"": ""^4.0.0"",
```
I believe IPLD Explorer should support Filecoin CIDs.
@ribasushi @vmx @rvagg – were there any prior/ongoing discussions regarding creating `ipld-filecoin` ?",True,"Missing support for Filecoin codecs - ## Problem
Even when we update to latest `cids` library, when user enters a CID with Filecoin-related codec, they get error because `ipld-filecoin` decoder does not exist:
> 
Test CIDs from https://github.com/multiformats/multihash/issues/129#issuecomment-661040091:
```
baga6ea4seaqggjjfh7whhdoxvhrix6jbcgobmdyhajcimfn33iedcp3kr23gruq
baga6ea4seaqidbk23bub2dmg2hur4aawpe44wzuu2lccflgsbcqaokjzjb7wtgi
bagboea4b5abcax5zbow3g7cyeg3nsvjguqnjkbdibnhhzc3whinr2sousvoijbrb
bagboea4b5abcb245dcsepbaelwd7hrt46itun2mvv5nckntzkg5kf73m2ry4ja7r
```
## Solution
IPLD Explorer already supports Bitcoin and Ethereum:
```
""ipld-bitcoin"": ""^0.3.0"",
""ipld-ethereum"": ""^4.0.0"",
```
I believe IPLD Explorer should support Filecoin CIDs.
@ribasushi @vmx @rvagg – were there any prior/ongoing discussions regarding creating `ipld-filecoin` ?",1,missing support for filecoin codecs problem even when we update to latest cids library when user enters a cid with filecoin related codec they get error because ipld filecoin decoder does not exist test cids from solution ipld explorer already supports bitcoin and ethereum ipld bitcoin ipld ethereum i believe ipld explorer should support filecoin cids ribasushi vmx rvagg – were there any prior ongoing discussions regarding creating ipld filecoin ,1
214873,24121050097.0,IssuesEvent,2022-09-20 18:44:43,Azure/AKS,https://api.github.com/repos/Azure/AKS,closed,AKS in VNET behind company HTTP proxy,enhancement security feature-request resolution/shipped,"I need to deploy AKS into a custom VNET, that is behind a company HTTP proxy to access the public internet.
With ACS or acs-engine I couldn't get this working out-of-the-box as the cloud-init scripts need internet access before I'm able to set the http_proxy on all nodes.
Is this possible with AKS once #27 is supported?",True,"AKS in VNET behind company HTTP proxy - I need to deploy AKS into a custom VNET, that is behind a company HTTP proxy to access the public internet.
With ACS or acs-engine I couldn't get this working out-of-the-box as the cloud-init scripts need internet access before I'm able to set the http_proxy on all nodes.
Is this possible with AKS once #27 is supported?",0,aks in vnet behind company http proxy i need to deploy aks into a custom vnet that is behind a company http proxy to access the public internet with acs or acs engine i couldn t get this working out of the box as the cloud init scripts need internet access before i m able to set the http proxy on all nodes is this possible with aks once is supported ,0
226976,25021284899.0,IssuesEvent,2022-11-04 01:05:38,Kijacode/GridfsNodejsFile_Upload,https://api.github.com/repos/Kijacode/GridfsNodejsFile_Upload,closed,CVE-2022-24304 (High) detected in mongoose-5.8.1.tgz - autoclosed,security vulnerability,"## CVE-2022-24304 - High Severity Vulnerability
Vulnerable Library - mongoose-5.8.1.tgz
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2022-08-26
Fix Resolution: 6.4.6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-24304 (High) detected in mongoose-5.8.1.tgz - autoclosed - ## CVE-2022-24304 - High Severity Vulnerability
Vulnerable Library - mongoose-5.8.1.tgz
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2022-08-26
Fix Resolution: 6.4.6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in mongoose tgz autoclosed cve high severity vulnerability vulnerable library mongoose tgz mongoose mongodb odm library home page a href path to dependency file package json path to vulnerable library node modules mongoose package json dependency hierarchy x mongoose tgz vulnerable library vulnerability details schema in lib schema js in mongoose before is vulnerable to prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend ,0
123043,16434932924.0,IssuesEvent,2021-05-20 08:11:16,Altinn/altinn-studio,https://api.github.com/repos/Altinn/altinn-studio,closed,Nedtrekkslister for ledetekst og beskrivelse viser ikke tidligere valgt element,area/ui-editor bug/c2 kind/bug solution/studio/designer up-for-grabs ux/visual-design,"## Describe the bug
Nedtrekkslister for ledetekst og beskrivelse viser ikke valg som er gjort tidligere når man velger å redigere komponenten, slik som nedtrekksliste for kobling til datamodell gjør (markert med mørkeblått).
(Hvis man under redigering, velger i listen, blir det markert)
## To Reproduce
Steps to reproduce the behavior:
1. Gå til tjenesten aarsregnskap_rr0002 for Brønnøysundregistrene
2. Velg Lage
3. Velg komponent (har både ledetekst og beskrivelse)

## Expected behavior
Forventer at valgt som allerede er gjort, vises på samme måte i nedtrekkslistene til ledetekst og beskrivelse når man velger redigere, som for nedtrekkslisten for kobling til datamodell.
## Screenshots
Datamodell:

Ledetekst:

Beskrivelse:

## Additional info
Windows 10 Pro, Chrome",2.0,"Nedtrekkslister for ledetekst og beskrivelse viser ikke tidligere valgt element - ## Describe the bug
Nedtrekkslister for ledetekst og beskrivelse viser ikke valg som er gjort tidligere når man velger å redigere komponenten, slik som nedtrekksliste for kobling til datamodell gjør (markert med mørkeblått).
(Hvis man under redigering, velger i listen, blir det markert)
## To Reproduce
Steps to reproduce the behavior:
1. Gå til tjenesten aarsregnskap_rr0002 for Brønnøysundregistrene
2. Velg Lage
3. Velg komponent (har både ledetekst og beskrivelse)

## Expected behavior
Forventer at valgt som allerede er gjort, vises på samme måte i nedtrekkslistene til ledetekst og beskrivelse når man velger redigere, som for nedtrekkslisten for kobling til datamodell.
## Screenshots
Datamodell:

Ledetekst:

Beskrivelse:

## Additional info
Windows 10 Pro, Chrome",0,nedtrekkslister for ledetekst og beskrivelse viser ikke tidligere valgt element describe the bug nedtrekkslister for ledetekst og beskrivelse viser ikke valg som er gjort tidligere når man velger å redigere komponenten slik som nedtrekksliste for kobling til datamodell gjør markert med mørkeblått hvis man under redigering velger i listen blir det markert to reproduce steps to reproduce the behavior gå til tjenesten aarsregnskap for brønnøysundregistrene velg lage velg komponent har både ledetekst og beskrivelse expected behavior forventer at valgt som allerede er gjort vises på samme måte i nedtrekkslistene til ledetekst og beskrivelse når man velger redigere som for nedtrekkslisten for kobling til datamodell screenshots datamodell ledetekst beskrivelse additional info windows pro chrome,0
2947,2651708018.0,IssuesEvent,2015-03-16 13:31:09,MysingleClairesimon/2ABEVKWLYVY7CISYZTMBSQI7,https://api.github.com/repos/MysingleClairesimon/2ABEVKWLYVY7CISYZTMBSQI7,closed,utYHf+7Llbbvkv6HOK2xmLy4EoarNIzj9qfDeUcM1bmlhLEUX4vhtMgS8wMQnNCHSCIrNEAKLTk/wnOgnp3u42DbuMnHoNAPBI+UHMQFPRxkcwH+moAXDLDi4GETBmKdzoycwPz0XQyDlJmynUHi6h8yseYI5A19bp/5ZUe9e9o=,design,Ds7U5bnVNO9t457NkK1RkGKkX11nAKVCJ6liw5pYiI47lxa+NLyqCbMXYyik6SbFEaC8UnN2MAafrURWSqWZwRVpV5lUGT+lXPU2OVMwSCH7hMqDvMn1lhPjIi8qrZjrnuFCu1/vNJkbsRHpEAs4Y6/nP6azWWts31nLwhjLHQrBDDeEwWV7icHIrT+KatKomFBBBkAfkWOIvYExVpGZMpnAMMfo27SPvEC/iH2iO5AOPl3A3YhbK4ZvEZT90qmRK029rx01mGbEKn9+iX6ujAHtB8Z74u6nHI2j0AAzOwUY4GiTHasFKZYKbGBik/DV8avXjprO1Bfgx7n2Q5XNV/lfCer43qMwsJ6CP/G3fYoGC0kcCeaEsXpvGyaCiAW1vyMugduQEPegkRiTis9KHjEkfbxvwyzWOsqyXWGQoWyNVgEtn4p1RyUvGthZw9Mp7e9QfOt24uzvswqVTObJMrKdY/F0kXLfIiB0hXijvVirf/2yzoOweH912VFzEmkWZGYP/xJSA52Jzd+6iS2uq3HGGMyDdWAPeMQe3tCJ4Itti9nBFYLu1hWfctcyGipaABQZxGdF1UilDoECopCYA7v5IlxMMqnYUzegDYaWMIk+14DpbArfdm8FvKN/QvU5/XpjqysD7J7BVKyOT4lElsakq3m3HQFMu4BJEqyR2Nlg06zjH+ByBzWylt8Hf7t/wIpqhQ9rablVHeuX81RfzMfogOKYsn2OZcBO7FabyRGqp2Xs7z0ayY4SpYQTLXc55N+ClafZMEulRux//xRAqGJAUcGaqhOIcyAOkazdiS0B7QfGe+LupxyNo9AAMzsFjoTalVxKP6lmRukgTsb9Z93KeHMG4rSH4SrWBJTvsTnSOfk4BYyY5YNpzvzt6sNq/r9xBbW16IKLymTYtvV84YZNnbSE2LiA5xKACDB5h3YB7QfGe+LupxyNo9AAMzsFN6GshpU5DQRM/n/UwG+JCpTLU5Hyp1CKlSxQyH8XVCZw36jP3Y5hTPF1NCoeCo4mjOIJ7yjKaG2TvaC2Nv7HlYlyXABPzVpgKxzwCVnLjXdw36jP3Y5hTPF1NCoeCo4m/gUn6NbfVC3GBZWLz74kIHQ9hBsByLmeddGoWe5SZQlUl6MVWQQKmgDy2aPlVXCnnXaXtfXHJAbH8wFxGZ6lO3jPew1uQlsExRJFwTRHikcFj0bsqa92kSPmPYv9S6v4iC7UvAaALJsXWt7lSBUJ1HqtrffbLrGHapv5HJS9loYs4ULus9aOy6hxl6EIPvI+bSBYqei3xbSjVAqb/w21zmR3NdV+RLX9ftS76KWkcIV7RRDYTfxLB4kCTkqxgDnsWnNVfvmfulrMcdFsl1bIx6vShqx5ZGYfveGt7RUft8PBawpemHC/3KDNfqPn8CUfHAxNmpzeq2z4DRmGLSQJiGFtlRSNF4CmIJUoXNbfv2dxY4COIwgQuuuD8Z/8lAQinuhQiRE0CGMOLuGLYr2phnn+1mj/OXliPznHUljdVeb2v/9hoZVkLjGqljyx5CmPlix/BNaKYeiVhYJkFrASrDirXVM+tfhW38c8L1hLnaQPQAj+vtXjdrIiKnHV5ws6kPrMLX17ubAwFzEWiAAeEJkSdf+9a9Zze2sHgmolRQMxlkBnCT2lU13XHoLy+WIwnXaXtfXHJAbH8wFxGZ6lO8ZvsAifTX387O2VtZVSyZoAUBR7wQofcr9VGHqoYVOI2MUZyQ5UF34S01heZ0Tr4ztsZaa+79R8klc8KRmRWnN4nHBAXB5bZbdsUT0bcVxvxhO9ZO0VQyt9jSHplH5k2Ufo8t+hFTF7Vk1GmfJ/JOvQH+/GUh8GV8iPl7oABRgMpfBY4EKY81zJw8xkG41r2bUZoFm+8gQuNpwVfnjKxvp2zIZwmwsN19usJsjGPowbUIe6dlLvgJwcTmuX7wjoSe2ZtAHCXEx+8uYmbW4N2UAumJvXXZuiyvYjTbx3GwOA2OwxPmhkXaLERQWbggyyTrrhJuyu2UhXSJhU1pdpob9w36jP3Y5hTPF1NCoeCo4m/ZTdK89QgU1W5QZItz5vNF5f5EfUx4G/l0OHemgj8ruoJ/RsDPdnFPMDUvTu2BvaU/GIMYHMeqf/ldZBtAuAVjKQdqHgh4TxhCHVF58pX9Dx61CwPX28ta4wdndqRTFP74kD3fg0KNUNgjzvQ6T+IdpcVeIwckrgCGonIup4kSMykHah4IeE8YQh1RefKV/QkqaIZl2yikr3882Rg38hYKsS2krEAXuM5LjBs0+HHCcmFB1JOcpz0H9ZKd+Rr8ACPO+935jIaaw9/nDKgLb8X8etxWVBu7m9sZVKvnkWeYCSTGeyENTyV1arv+dwaQgwcyU4Ix57AWLZ2vHU8ErH44QhlTeRx7q7T89dT3HMyKAB7QfGe+LupxyNo9AAMzsFzoiH2XCNXPhNciw9sUwUASu0QoYEClKKNgqRl13l//264SbsrtlIV0iYVNaXaaG/cN+oz92OYUzxdTQqHgqOJj8fRdJTrHIMocdCero3mErGgH8dV23Bz6TwWSQmIXlk2HclVDsdnb2l+MuF+tOHLItFTb4we7HVb2Eff591LkKNHRrxlLppYpgH6rpd2+yGcN+oz92OYUzxdTQqHgqOJrCsKTwjWPzBqe/HXRbPfjaq+1QSbL8qDB1QuBFOsQUcAe0Hxnvi7qccjaPQADM7BYEnsueNsFWUQlj5rohER7GKM/CFrTfHnzj0/rTI8hBpcN+oz92OYUzxdTQqHgqOJsOU7fMXwS3UxMRGOSAioG4wgHeh/xEWoUAhAJv8mdfToGH5BeuKAuzCeFDQ16q1bE3gl0WZ87ROmAgV5/VuUFVjjrBZFr7gZurEKNf7Te/Pj2wVNDRtVfAVbyMTaL+svsDNNoiDh30vRdM1yyv2mK4Vvwc0Q2Z+MckoCXgn5/esJmD0bnv6DJPiplul64CpAR///VGuZhm4tZ4u3lllCOb4dLqD+J7mVJQf4j7CL4UvcN+oz92OYUzxdTQqHgqOJuM64SP8bWhY+tVe/g5+1NwgxFA1UWGA6gzA3L+ZsUFbkXwbQRm2anFxYYNvsE6BZXDfqM/djmFM8XU0Kh4KjiYpxqxSlqsQ6wNKkNIlWRQ1RP+CA1ilTAmu9d5pJv4MUpD6zC19e7mwMBcxFogAHhBkaDKUSWvP5yFsweF4N9axrdCO/N/iasZqSz4soFMVIZ12l7X1xyQGx/MBcRmepTuEDjySx28RZjkuQLN0c+sTe6m8cw2DIVs/HYQCmxUoAcxTZCS5z6ixjpeb9QgFnwzzuaFJnRt8/6FM0s3Ta2GEOmQxR7q/vymToyDZoy6T56vkAYnorJLipIyRnjtDBBaddpe19cckBsfzAXEZnqU79U80waTAiPESfyx4aZkQ09eSkIvLxmh7cnooXWzZfl2Dww1HBdDsvfUq6sL3+74+R2GHlFdpKNBpQyZ2PRff4wZyNNeaRbDdmYQpsimldr7GLlorxYDpbQObAQLSNIRAcYCbSCkRyPi68G9YEjMnEXPN+UqfoosufoACyvPDxmo6cKzsbu5cKw8ucOy/qNnw1XgiVAuRP8lPoEHZlFbo0CRnpaJ7PmzvIK7L5uKhP0eSfvAUPKa918+cvZm5Zjh/HRPQc6G/kugYAlHjoSOPsPiv7db0NOKYVHbZ21niXJAZysHcFXGfGV+NVemT5ehk8oZmCrXwzqpxRS6hOueexwec6LtvpgmaZSpxoFI3294cDE2anN6rbPgNGYYtJAmIAVizaFnT6k3RR7I8jZVotQ2BQsP7t2OGF7vk/6XO3XzguMRVwAXfnnDHL6OVkS8IGu1+g/L0mJUs+WdFBB8R5pS7Q69zZj8Wde8sPJtyqTQ81yEQYshA0jaeVBA+pvXmnXaXtfXHJAbH8wFxGZ6lO5dRRr6u4amJrnuTfmx3/Z0zSAQLAe0Fm+KcdOXxSwgwZvbEeq0oDYQ4mBoOgOahuacZVGXK7Uvs3LWMmSRXvimH1wxGczNpdppzHWvOT8r6IzmXFeqwyF3M+HVttCm2NZ12l7X1xyQGx/MBcRmepTsmhPC2rqHENmWc7dyYYyOehk2dtITYuIDnEoAIMHmHdgHtB8Z74u6nHI2j0AAzOwX0NnkaksbX6/Gxnhswk9LjRMunhwV5wHZ1U+UjlEdD5cISmNOvk+tq03HIf5g/ed8OUHP1uye3OvLeWnQ0WHJyS4i/LRsSXI5CFPjoGLP31PctU4yJK0C3BMm/VNUrNQ3CEpjTr5PratNxyH+YP3nfDlBz9bsntzry3lp0NFhyctTmRc6Nyl8NWVRoE09C5rXibK7pc2sYWHZKHxzvhElslbJ5+NOPWudvZ5kIR6WvCf8/yquZ+DgOy5W0a6JDVK8dx3TBHQ9Ltb8Z2I6cUwKUyujYpy6OEWkGUzaSm64OQQSk8oKp+N8pDMSB6OPPpstM8P9EaKc26YG6c/mIBXwK2M+oy6mJRSQxqyMiBB/lRwq6ba2BshfuWVOJbY5u6F4gVXz4gBFLKRKIwjL0gRD4dqzcactylCrB1FFrOjOrcPTZAvtteFN8aqLrDrUo8quNGAUTCzobKUqTsNYf8SVR,1.0,utYHf+7Llbbvkv6HOK2xmLy4EoarNIzj9qfDeUcM1bmlhLEUX4vhtMgS8wMQnNCHSCIrNEAKLTk/wnOgnp3u42DbuMnHoNAPBI+UHMQFPRxkcwH+moAXDLDi4GETBmKdzoycwPz0XQyDlJmynUHi6h8yseYI5A19bp/5ZUe9e9o= - Ds7U5bnVNO9t457NkK1RkGKkX11nAKVCJ6liw5pYiI47lxa+NLyqCbMXYyik6SbFEaC8UnN2MAafrURWSqWZwRVpV5lUGT+lXPU2OVMwSCH7hMqDvMn1lhPjIi8qrZjrnuFCu1/vNJkbsRHpEAs4Y6/nP6azWWts31nLwhjLHQrBDDeEwWV7icHIrT+KatKomFBBBkAfkWOIvYExVpGZMpnAMMfo27SPvEC/iH2iO5AOPl3A3YhbK4ZvEZT90qmRK029rx01mGbEKn9+iX6ujAHtB8Z74u6nHI2j0AAzOwUY4GiTHasFKZYKbGBik/DV8avXjprO1Bfgx7n2Q5XNV/lfCer43qMwsJ6CP/G3fYoGC0kcCeaEsXpvGyaCiAW1vyMugduQEPegkRiTis9KHjEkfbxvwyzWOsqyXWGQoWyNVgEtn4p1RyUvGthZw9Mp7e9QfOt24uzvswqVTObJMrKdY/F0kXLfIiB0hXijvVirf/2yzoOweH912VFzEmkWZGYP/xJSA52Jzd+6iS2uq3HGGMyDdWAPeMQe3tCJ4Itti9nBFYLu1hWfctcyGipaABQZxGdF1UilDoECopCYA7v5IlxMMqnYUzegDYaWMIk+14DpbArfdm8FvKN/QvU5/XpjqysD7J7BVKyOT4lElsakq3m3HQFMu4BJEqyR2Nlg06zjH+ByBzWylt8Hf7t/wIpqhQ9rablVHeuX81RfzMfogOKYsn2OZcBO7FabyRGqp2Xs7z0ayY4SpYQTLXc55N+ClafZMEulRux//xRAqGJAUcGaqhOIcyAOkazdiS0B7QfGe+LupxyNo9AAMzsFjoTalVxKP6lmRukgTsb9Z93KeHMG4rSH4SrWBJTvsTnSOfk4BYyY5YNpzvzt6sNq/r9xBbW16IKLymTYtvV84YZNnbSE2LiA5xKACDB5h3YB7QfGe+LupxyNo9AAMzsFN6GshpU5DQRM/n/UwG+JCpTLU5Hyp1CKlSxQyH8XVCZw36jP3Y5hTPF1NCoeCo4mjOIJ7yjKaG2TvaC2Nv7HlYlyXABPzVpgKxzwCVnLjXdw36jP3Y5hTPF1NCoeCo4m/gUn6NbfVC3GBZWLz74kIHQ9hBsByLmeddGoWe5SZQlUl6MVWQQKmgDy2aPlVXCnnXaXtfXHJAbH8wFxGZ6lO3jPew1uQlsExRJFwTRHikcFj0bsqa92kSPmPYv9S6v4iC7UvAaALJsXWt7lSBUJ1HqtrffbLrGHapv5HJS9loYs4ULus9aOy6hxl6EIPvI+bSBYqei3xbSjVAqb/w21zmR3NdV+RLX9ftS76KWkcIV7RRDYTfxLB4kCTkqxgDnsWnNVfvmfulrMcdFsl1bIx6vShqx5ZGYfveGt7RUft8PBawpemHC/3KDNfqPn8CUfHAxNmpzeq2z4DRmGLSQJiGFtlRSNF4CmIJUoXNbfv2dxY4COIwgQuuuD8Z/8lAQinuhQiRE0CGMOLuGLYr2phnn+1mj/OXliPznHUljdVeb2v/9hoZVkLjGqljyx5CmPlix/BNaKYeiVhYJkFrASrDirXVM+tfhW38c8L1hLnaQPQAj+vtXjdrIiKnHV5ws6kPrMLX17ubAwFzEWiAAeEJkSdf+9a9Zze2sHgmolRQMxlkBnCT2lU13XHoLy+WIwnXaXtfXHJAbH8wFxGZ6lO8ZvsAifTX387O2VtZVSyZoAUBR7wQofcr9VGHqoYVOI2MUZyQ5UF34S01heZ0Tr4ztsZaa+79R8klc8KRmRWnN4nHBAXB5bZbdsUT0bcVxvxhO9ZO0VQyt9jSHplH5k2Ufo8t+hFTF7Vk1GmfJ/JOvQH+/GUh8GV8iPl7oABRgMpfBY4EKY81zJw8xkG41r2bUZoFm+8gQuNpwVfnjKxvp2zIZwmwsN19usJsjGPowbUIe6dlLvgJwcTmuX7wjoSe2ZtAHCXEx+8uYmbW4N2UAumJvXXZuiyvYjTbx3GwOA2OwxPmhkXaLERQWbggyyTrrhJuyu2UhXSJhU1pdpob9w36jP3Y5hTPF1NCoeCo4m/ZTdK89QgU1W5QZItz5vNF5f5EfUx4G/l0OHemgj8ruoJ/RsDPdnFPMDUvTu2BvaU/GIMYHMeqf/ldZBtAuAVjKQdqHgh4TxhCHVF58pX9Dx61CwPX28ta4wdndqRTFP74kD3fg0KNUNgjzvQ6T+IdpcVeIwckrgCGonIup4kSMykHah4IeE8YQh1RefKV/QkqaIZl2yikr3882Rg38hYKsS2krEAXuM5LjBs0+HHCcmFB1JOcpz0H9ZKd+Rr8ACPO+935jIaaw9/nDKgLb8X8etxWVBu7m9sZVKvnkWeYCSTGeyENTyV1arv+dwaQgwcyU4Ix57AWLZ2vHU8ErH44QhlTeRx7q7T89dT3HMyKAB7QfGe+LupxyNo9AAMzsFzoiH2XCNXPhNciw9sUwUASu0QoYEClKKNgqRl13l//264SbsrtlIV0iYVNaXaaG/cN+oz92OYUzxdTQqHgqOJj8fRdJTrHIMocdCero3mErGgH8dV23Bz6TwWSQmIXlk2HclVDsdnb2l+MuF+tOHLItFTb4we7HVb2Eff591LkKNHRrxlLppYpgH6rpd2+yGcN+oz92OYUzxdTQqHgqOJrCsKTwjWPzBqe/HXRbPfjaq+1QSbL8qDB1QuBFOsQUcAe0Hxnvi7qccjaPQADM7BYEnsueNsFWUQlj5rohER7GKM/CFrTfHnzj0/rTI8hBpcN+oz92OYUzxdTQqHgqOJsOU7fMXwS3UxMRGOSAioG4wgHeh/xEWoUAhAJv8mdfToGH5BeuKAuzCeFDQ16q1bE3gl0WZ87ROmAgV5/VuUFVjjrBZFr7gZurEKNf7Te/Pj2wVNDRtVfAVbyMTaL+svsDNNoiDh30vRdM1yyv2mK4Vvwc0Q2Z+MckoCXgn5/esJmD0bnv6DJPiplul64CpAR///VGuZhm4tZ4u3lllCOb4dLqD+J7mVJQf4j7CL4UvcN+oz92OYUzxdTQqHgqOJuM64SP8bWhY+tVe/g5+1NwgxFA1UWGA6gzA3L+ZsUFbkXwbQRm2anFxYYNvsE6BZXDfqM/djmFM8XU0Kh4KjiYpxqxSlqsQ6wNKkNIlWRQ1RP+CA1ilTAmu9d5pJv4MUpD6zC19e7mwMBcxFogAHhBkaDKUSWvP5yFsweF4N9axrdCO/N/iasZqSz4soFMVIZ12l7X1xyQGx/MBcRmepTuEDjySx28RZjkuQLN0c+sTe6m8cw2DIVs/HYQCmxUoAcxTZCS5z6ixjpeb9QgFnwzzuaFJnRt8/6FM0s3Ta2GEOmQxR7q/vymToyDZoy6T56vkAYnorJLipIyRnjtDBBaddpe19cckBsfzAXEZnqU79U80waTAiPESfyx4aZkQ09eSkIvLxmh7cnooXWzZfl2Dww1HBdDsvfUq6sL3+74+R2GHlFdpKNBpQyZ2PRff4wZyNNeaRbDdmYQpsimldr7GLlorxYDpbQObAQLSNIRAcYCbSCkRyPi68G9YEjMnEXPN+UqfoosufoACyvPDxmo6cKzsbu5cKw8ucOy/qNnw1XgiVAuRP8lPoEHZlFbo0CRnpaJ7PmzvIK7L5uKhP0eSfvAUPKa918+cvZm5Zjh/HRPQc6G/kugYAlHjoSOPsPiv7db0NOKYVHbZ21niXJAZysHcFXGfGV+NVemT5ehk8oZmCrXwzqpxRS6hOueexwec6LtvpgmaZSpxoFI3294cDE2anN6rbPgNGYYtJAmIAVizaFnT6k3RR7I8jZVotQ2BQsP7t2OGF7vk/6XO3XzguMRVwAXfnnDHL6OVkS8IGu1+g/L0mJUs+WdFBB8R5pS7Q69zZj8Wde8sPJtyqTQ81yEQYshA0jaeVBA+pvXmnXaXtfXHJAbH8wFxGZ6lO5dRRr6u4amJrnuTfmx3/Z0zSAQLAe0Fm+KcdOXxSwgwZvbEeq0oDYQ4mBoOgOahuacZVGXK7Uvs3LWMmSRXvimH1wxGczNpdppzHWvOT8r6IzmXFeqwyF3M+HVttCm2NZ12l7X1xyQGx/MBcRmepTsmhPC2rqHENmWc7dyYYyOehk2dtITYuIDnEoAIMHmHdgHtB8Z74u6nHI2j0AAzOwX0NnkaksbX6/Gxnhswk9LjRMunhwV5wHZ1U+UjlEdD5cISmNOvk+tq03HIf5g/ed8OUHP1uye3OvLeWnQ0WHJyS4i/LRsSXI5CFPjoGLP31PctU4yJK0C3BMm/VNUrNQ3CEpjTr5PratNxyH+YP3nfDlBz9bsntzry3lp0NFhyctTmRc6Nyl8NWVRoE09C5rXibK7pc2sYWHZKHxzvhElslbJ5+NOPWudvZ5kIR6WvCf8/yquZ+DgOy5W0a6JDVK8dx3TBHQ9Ltb8Z2I6cUwKUyujYpy6OEWkGUzaSm64OQQSk8oKp+N8pDMSB6OPPpstM8P9EaKc26YG6c/mIBXwK2M+oy6mJRSQxqyMiBB/lRwq6ba2BshfuWVOJbY5u6F4gVXz4gBFLKRKIwjL0gRD4dqzcactylCrB1FFrOjOrcPTZAvtteFN8aqLrDrUo8quNGAUTCzobKUqTsNYf8SVR,0,utyhf uhmqfprxkcwh clafzmeulrux n uwg bnakyeivhyjkfrasrdirxvm jovqh gimyhmeqf cn muf ygcn hxrbpfjaq tve n g yquz ,0
43096,11464349001.0,IssuesEvent,2020-02-07 17:52:21,zfsonlinux/zfs,https://api.github.com/repos/zfsonlinux/zfs,reopened,`zfs send -p | zfs receive -F` destroys all other snapshots,Component: Send/Recv Type: Defect Type: Documentation,"From the man page of zfs receive:
> When a snapshot replication package stream that is generated by using the `zfs send -R` command is received, any snapshots that do not exist on the sending location are destroyed by using the zfs destroy -d command.
I never used `-R`, but I did add `-p` to the send command (`zfs send -p -I ...`) because I wanted to transfer the properties along with the snapshots. But then the receive side (with `-F`) behaved as if I was sending a replication stream: it deleted all snapshots that didn't exist on the sending side.
This was not expected at all. Is this a bug in send, receive, or in the documentation?
Edit: if this is the expected behavior, I'll submit a PR to make it clearer in the man pages.
",1.0,"`zfs send -p | zfs receive -F` destroys all other snapshots - From the man page of zfs receive:
> When a snapshot replication package stream that is generated by using the `zfs send -R` command is received, any snapshots that do not exist on the sending location are destroyed by using the zfs destroy -d command.
I never used `-R`, but I did add `-p` to the send command (`zfs send -p -I ...`) because I wanted to transfer the properties along with the snapshots. But then the receive side (with `-F`) behaved as if I was sending a replication stream: it deleted all snapshots that didn't exist on the sending side.
This was not expected at all. Is this a bug in send, receive, or in the documentation?
Edit: if this is the expected behavior, I'll submit a PR to make it clearer in the man pages.
",0, zfs send p zfs receive f destroys all other snapshots from the man page of zfs receive when a snapshot replication package stream that is generated by using the zfs send r command is received any snapshots that do not exist on the sending location are destroyed by using the zfs destroy d command i never used r but i did add p to the send command zfs send p i because i wanted to transfer the properties along with the snapshots but then the receive side with f behaved as if i was sending a replication stream it deleted all snapshots that didn t exist on the sending side this was not expected at all is this a bug in send receive or in the documentation edit if this is the expected behavior i ll submit a pr to make it clearer in the man pages ,0
2493,8650854520.0,IssuesEvent,2018-11-27 00:17:52,Microsoft/DirectXMesh,https://api.github.com/repos/Microsoft/DirectXMesh,closed,Publish a NuGet packge with DX12 support for Win32 desktop,maintainence,"The NuGet package ``DirectXMesh_Uwp`` includes DirectX 12 support side-by-side with DirectX 11, but the ``directxmesh_desktop_2015`` only supports DirectX 11 for Windows 7 support.
I should publish a ``DirectXMesh_desktop_win10`` package that includes the DirectX 12 support for desktop apps that require Windows 10.",True,"Publish a NuGet packge with DX12 support for Win32 desktop - The NuGet package ``DirectXMesh_Uwp`` includes DirectX 12 support side-by-side with DirectX 11, but the ``directxmesh_desktop_2015`` only supports DirectX 11 for Windows 7 support.
I should publish a ``DirectXMesh_desktop_win10`` package that includes the DirectX 12 support for desktop apps that require Windows 10.",1,publish a nuget packge with support for desktop the nuget package directxmesh uwp includes directx support side by side with directx but the directxmesh desktop only supports directx for windows support i should publish a directxmesh desktop package that includes the directx support for desktop apps that require windows ,1
412946,27881094434.0,IssuesEvent,2023-03-21 19:28:11,bounswe/bounswe2023group6,https://api.github.com/repos/bounswe/bounswe2023group6,closed,Edit questions about requirements and general aspects of project to ask to TA.,type: documentation priority: high status: inprogress area: meeting,"### Problem
We decided some questions about requirements and general aspects of project. Before asking to TA , we will edit questions.[](url)
### Solution
We will meet in Discord and edit questions together.
### Documentation
https://docs.google.com/document/d/1iSIr5YIwcGAGQxxcSxFYsV0xnc8BUettnlRPUugf0_s/edit
### Additional notes
_No response_
### Reviewers
_No response_
### Deadline
21.03.2023 - Tuesday - 23.59",1.0,"Edit questions about requirements and general aspects of project to ask to TA. - ### Problem
We decided some questions about requirements and general aspects of project. Before asking to TA , we will edit questions.[](url)
### Solution
We will meet in Discord and edit questions together.
### Documentation
https://docs.google.com/document/d/1iSIr5YIwcGAGQxxcSxFYsV0xnc8BUettnlRPUugf0_s/edit
### Additional notes
_No response_
### Reviewers
_No response_
### Deadline
21.03.2023 - Tuesday - 23.59",0,edit questions about requirements and general aspects of project to ask to ta problem we decided some questions about requirements and general aspects of project before asking to ta we will edit questions url solution we will meet in discord and edit questions together documentation additional notes no response reviewers no response deadline tuesday ,0
1276,5399957917.0,IssuesEvent,2017-02-27 20:47:43,canadainc/sunnah10,https://api.github.com/repos/canadainc/sunnah10,closed,Implement RSS feed generator,enhancement invalid logic maintainability ui usability,"Allow generating JSON files that can be used in the various BB10 apps for importing.
",True,"Implement RSS feed generator - Allow generating JSON files that can be used in the various BB10 apps for importing.
",1,implement rss feed generator allow generating json files that can be used in the various apps for importing ,1
675,4217030353.0,IssuesEvent,2016-06-30 11:32:27,duckduckgo/zeroclickinfo-spice,https://api.github.com/repos/duckduckgo/zeroclickinfo-spice,opened,Amazon: Review Stars not appearing,Maintainer Input Requested,"All of these have reviews on amazon but only one displays stars.
------

IA Page: http://duck.co/ia/view/products
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @bsstoner",True,"Amazon: Review Stars not appearing - All of these have reviews on amazon but only one displays stars.
------

IA Page: http://duck.co/ia/view/products
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @bsstoner",1,amazon review stars not appearing all of these have reviews on amazon but only one displays stars ia page bsstoner,1
240660,20067723023.0,IssuesEvent,2022-02-04 00:06:39,elastic/kibana,https://api.github.com/repos/elastic/kibana,opened,"[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/cluster·js - Monitoring app beats cluster ""before all"" hook for ""shows beats panel with data""",failed-test test-cloud,"**Version: 8.1.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/cluster·js**
**Stack Trace:**
```
Error: retry.try timeout: Error: expected testSubject(superDatePickerQuickMenu) to exist
at TestSubjects.existOrFail (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/common/test_subjects.ts:44:13)
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:43:11
at runAttempt (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry.ts:31:12)
at setState (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:31:7)
at Object.open (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:53:9)
at TimePickerPageObject.getRefreshConfig (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:183:5)
at TimePickerPageObject.pauseAutoRefresh (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:283:27)
at setup (test/functional/apps/monitoring/_get_lifecycle_methods.js:47:7)
at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry.ts:31:12)
at setState (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:31:7)
at Object.open (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:53:9)
at TimePickerPageObject.getRefreshConfig (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:183:5)
at TimePickerPageObject.pauseAutoRefresh (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:283:27)
at setup (test/functional/apps/monitoring/_get_lifecycle_methods.js:47:7)
at Context. (test/functional/apps/monitoring/beats/cluster.js:18:7)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
**Other test failures:**
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/2876/testReport/_",2.0,"[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/cluster·js - Monitoring app beats cluster ""before all"" hook for ""shows beats panel with data"" - **Version: 8.1.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/monitoring/beats/cluster·js**
**Stack Trace:**
```
Error: retry.try timeout: Error: expected testSubject(superDatePickerQuickMenu) to exist
at TestSubjects.existOrFail (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/common/test_subjects.ts:44:13)
at /var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:43:11
at runAttempt (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry.ts:31:12)
at setState (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:31:7)
at Object.open (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:53:9)
at TimePickerPageObject.getRefreshConfig (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:183:5)
at TimePickerPageObject.pauseAutoRefresh (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:283:27)
at setup (test/functional/apps/monitoring/_get_lifecycle_methods.js:47:7)
at onFailure (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/common/services/retry/retry.ts:31:12)
at setState (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:31:7)
at Object.open (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/services/menu_toggle.ts:53:9)
at TimePickerPageObject.getRefreshConfig (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:183:5)
at TimePickerPageObject.pauseAutoRefresh (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/test/functional/page_objects/time_picker.ts:283:27)
at setup (test/functional/apps/monitoring/_get_lifecycle_methods.js:47:7)
at Context. (test/functional/apps/monitoring/beats/cluster.js:18:7)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/ess-testing/ci/cloud/common/build/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
**Other test failures:**
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/2876/testReport/_",0, chrome x pack ui functional x pack test functional apps monitoring beats cluster·js monitoring app beats cluster before all hook for shows beats panel with data version class chrome x pack ui functional x pack test functional apps monitoring beats cluster·js stack trace error retry try timeout error expected testsubject superdatepickerquickmenu to exist at testsubjects existorfail var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services common test subjects ts at var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services menu toggle ts at runattempt var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry for success ts at retryforsuccess var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry for success ts at retryservice try var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry ts at setstate var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services menu toggle ts at object open var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services menu toggle ts at timepickerpageobject getrefreshconfig var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional page objects time picker ts at timepickerpageobject pauseautorefresh var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional page objects time picker ts at setup test functional apps monitoring get lifecycle methods js at onfailure var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry for success ts at retryforsuccess var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry for success ts at retryservice try var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test common services retry retry ts at setstate var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services menu toggle ts at object open var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional services menu toggle ts at timepickerpageobject getrefreshconfig var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional page objects time picker ts at timepickerpageobject pauseautorefresh var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana test functional page objects time picker ts at setup test functional apps monitoring get lifecycle methods js at context test functional apps monitoring beats cluster js at object apply var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node ess testing ci cloud common build kibana node modules kbn test target node functional test runner lib mocha wrap function js other test failures test report ,0
22453,31224333295.0,IssuesEvent,2023-08-19 00:07:18,googleapis/google-cloud-node,https://api.github.com/repos/googleapis/google-cloud-node,closed,Warning: a recent release failed,type: process,"The following release PRs may have failed:
* #4497 - The release job failed -- check the build log.
* #4467 - The release job failed -- check the build log.",1.0,"Warning: a recent release failed - The following release PRs may have failed:
* #4497 - The release job failed -- check the build log.
* #4467 - The release job failed -- check the build log.",0,warning a recent release failed the following release prs may have failed the release job failed check the build log the release job failed check the build log ,0
280291,24290338301.0,IssuesEvent,2022-09-29 05:03:32,CodeSoom-Project/my-seat,https://api.github.com/repos/CodeSoom-Project/my-seat,closed,테스트 코드에 Security 적용,backend test,"## AS-IS
현재 테스트 코드에는 Security가 적용되어 있지 않아서 테스트 실행 시 권한 관련 오류가 발생합니다.
## TO-BE
권한을 부여받은 가짜 사용자로 테스트를 할 수 있도록 수정해야 합니다.",1.0,"테스트 코드에 Security 적용 - ## AS-IS
현재 테스트 코드에는 Security가 적용되어 있지 않아서 테스트 실행 시 권한 관련 오류가 발생합니다.
## TO-BE
권한을 부여받은 가짜 사용자로 테스트를 할 수 있도록 수정해야 합니다.",0,테스트 코드에 security 적용 as is 현재 테스트 코드에는 security가 적용되어 있지 않아서 테스트 실행 시 권한 관련 오류가 발생합니다 to be 권한을 부여받은 가짜 사용자로 테스트를 할 수 있도록 수정해야 합니다 ,0
200606,15114478279.0,IssuesEvent,2021-02-09 02:01:27,GlobantUy/STB-Bank,https://api.github.com/repos/GlobantUy/STB-Bank,opened,[Botón Cancelar] Cuando se muestra el modal de confirmación de préstamo y se cliquea 'Cancelar' se cancela la solicitud,TestCase,"**Precondiciones:**
Se debe contar con un usuario válido para solicitar un préstamo
=======================================================
Pasos para la ejecución | Resultado Esperado
------------ | -------------
1: Acceder al simulador de préstamos|
2: En el formulario, ingresar datos válidos|
3: Cliquear 'Simular préstamo'|
4: En la pantalla ""Resultado del préstamo, cliquear 'Solicitar préstamo'| Se despliega un popup con las opciones 'Cancelar' y 'Solicitar'
5: Cliquear 'Cancelar'| El usuario permanece en la pantalla 'Resultado del préstamo'
=======================================================
**US asociada:**
#109
",1.0,"[Botón Cancelar] Cuando se muestra el modal de confirmación de préstamo y se cliquea 'Cancelar' se cancela la solicitud - **Precondiciones:**
Se debe contar con un usuario válido para solicitar un préstamo
=======================================================
Pasos para la ejecución | Resultado Esperado
------------ | -------------
1: Acceder al simulador de préstamos|
2: En el formulario, ingresar datos válidos|
3: Cliquear 'Simular préstamo'|
4: En la pantalla ""Resultado del préstamo, cliquear 'Solicitar préstamo'| Se despliega un popup con las opciones 'Cancelar' y 'Solicitar'
5: Cliquear 'Cancelar'| El usuario permanece en la pantalla 'Resultado del préstamo'
=======================================================
**US asociada:**
#109
",0, cuando se muestra el modal de confirmación de préstamo y se cliquea cancelar se cancela la solicitud precondiciones se debe contar con un usuario válido para solicitar un préstamo pasos para la ejecución resultado esperado acceder al simulador de préstamos en el formulario ingresar datos válidos cliquear simular préstamo en la pantalla resultado del préstamo cliquear solicitar préstamo se despliega un popup con las opciones cancelar y solicitar cliquear cancelar el usuario permanece en la pantalla resultado del préstamo us asociada ,0
852,4513273376.0,IssuesEvent,2016-09-04 06:20:10,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,zypper_repository module should have a fingerprint option,feature_idea waiting_on_maintainer,"When adding a repository using zypper_repository, its public key may not automatically be stored as trusted. In case of the ""zypper"" command, it would ask interactively whether a certain GPG key fingerprint would be accepted by the user or not. However, when scripting this using the ansible zypper_repository module, this is not possible.
Some users use the ""disable_gpg_check"" option, but this disables the GPG check completely, thus opening a security vulnerability.
Thus, the user of the ansible zypper_repository module should be able to explicitly specify an acceptable GPG key (or multiple acceptable GPG keys) by verbatimly quoting the fingerprint of that key. This way, it is prevented that untrusted software is installed using ansible.
## Example
Instead of
```
zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present disable_gpg_check=yes
```
use
```
zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present acceptable_gpg_key_fingerprint=195E211106BC205D2A9C2222CC7F07489591C39B
```
",True,"zypper_repository module should have a fingerprint option - When adding a repository using zypper_repository, its public key may not automatically be stored as trusted. In case of the ""zypper"" command, it would ask interactively whether a certain GPG key fingerprint would be accepted by the user or not. However, when scripting this using the ansible zypper_repository module, this is not possible.
Some users use the ""disable_gpg_check"" option, but this disables the GPG check completely, thus opening a security vulnerability.
Thus, the user of the ansible zypper_repository module should be able to explicitly specify an acceptable GPG key (or multiple acceptable GPG keys) by verbatimly quoting the fingerprint of that key. This way, it is prevented that untrusted software is installed using ansible.
## Example
Instead of
```
zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present disable_gpg_check=yes
```
use
```
zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present acceptable_gpg_key_fingerprint=195E211106BC205D2A9C2222CC7F07489591C39B
```
",1,zypper repository module should have a fingerprint option when adding a repository using zypper repository its public key may not automatically be stored as trusted in case of the zypper command it would ask interactively whether a certain gpg key fingerprint would be accepted by the user or not however when scripting this using the ansible zypper repository module this is not possible some users use the disable gpg check option but this disables the gpg check completely thus opening a security vulnerability thus the user of the ansible zypper repository module should be able to explicitly specify an acceptable gpg key or multiple acceptable gpg keys by verbatimly quoting the fingerprint of that key this way it is prevented that untrusted software is installed using ansible example instead of zypper repository repo name application geo opensuse state present disable gpg check yes use zypper repository repo name application geo opensuse state present acceptable gpg key fingerprint ,1
269686,23459138019.0,IssuesEvent,2022-08-16 11:37:53,wazuh/wazuh,https://api.github.com/repos/wazuh/wazuh,opened,Release 4.3.7 - Release Candidate 1 - Demo use cases,team/cicd release test,"### Demo use cases information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue** | #14562 |
| **Version** | 4.3.7 |
| **Release candidate #** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/v4.3.7-rc1 |
| **Previous Demo use cases** | -- |
## Checks
Status | Result | Use case | Issues
:--: | :--: | -- | -- |
⚫ | ⚫ | Audit |
⚫ | ⚫ | AWS Wodle |
⚫ | ⚫ | Brute force |
⚫ | ⚫ | Docker |
⚫ | ⚫ | Emotet |
⚫ | ⚫ | FIM |
⚫ | ⚫ | IP Reputation |
⚫ | ⚫ | Netcat |
⚫ | ⚫ | Osquery |
⚫ | ⚫ | Shellshock |
⚫ | ⚫ | SQL Injection |
⚫ | ⚫ | Slack |
⚫ | ⚫ | Suricata |
⚫ | ⚫ | Trojan |
⚫ | ⚫ | Virustotal |
⚫ | ⚫ | Vulnerability Detector |
⚫ | ⚫ | Yara |
⚫ | ⚫ | Windows Defender |
Result legend:
⚫ - Not started
🕐 - Pending/In progress
✔️ - Results Ready
⚠️ - Review required
Status legend:
⚫ - None
🔴 - Rejected
🟢 - Approved
## Auditors validation
In order to close and proceed with release or the next candidate version, the following auditors must give the green light to this RC.
- [ ] @alberpilot
- [ ] @teddytpc1 ",1.0,"Release 4.3.7 - Release Candidate 1 - Demo use cases - ### Demo use cases information
| | |
|---------------------------------|--------------------------------------------|
| **Main release candidate issue** | #14562 |
| **Version** | 4.3.7 |
| **Release candidate #** | RC1 |
| **Tag** | https://github.com/wazuh/wazuh/tree/v4.3.7-rc1 |
| **Previous Demo use cases** | -- |
## Checks
Status | Result | Use case | Issues
:--: | :--: | -- | -- |
⚫ | ⚫ | Audit |
⚫ | ⚫ | AWS Wodle |
⚫ | ⚫ | Brute force |
⚫ | ⚫ | Docker |
⚫ | ⚫ | Emotet |
⚫ | ⚫ | FIM |
⚫ | ⚫ | IP Reputation |
⚫ | ⚫ | Netcat |
⚫ | ⚫ | Osquery |
⚫ | ⚫ | Shellshock |
⚫ | ⚫ | SQL Injection |
⚫ | ⚫ | Slack |
⚫ | ⚫ | Suricata |
⚫ | ⚫ | Trojan |
⚫ | ⚫ | Virustotal |
⚫ | ⚫ | Vulnerability Detector |
⚫ | ⚫ | Yara |
⚫ | ⚫ | Windows Defender |
Result legend:
⚫ - Not started
🕐 - Pending/In progress
✔️ - Results Ready
⚠️ - Review required
Status legend:
⚫ - None
🔴 - Rejected
🟢 - Approved
## Auditors validation
In order to close and proceed with release or the next candidate version, the following auditors must give the green light to this RC.
- [ ] @alberpilot
- [ ] @teddytpc1 ",0,release release candidate demo use cases demo use cases information main release candidate issue version release candidate tag previous demo use cases checks status result use case issues ⚫ ⚫ audit ⚫ ⚫ aws wodle ⚫ ⚫ brute force ⚫ ⚫ docker ⚫ ⚫ emotet ⚫ ⚫ fim ⚫ ⚫ ip reputation ⚫ ⚫ netcat ⚫ ⚫ osquery ⚫ ⚫ shellshock ⚫ ⚫ sql injection ⚫ ⚫ slack ⚫ ⚫ suricata ⚫ ⚫ trojan ⚫ ⚫ virustotal ⚫ ⚫ vulnerability detector ⚫ ⚫ yara ⚫ ⚫ windows defender result legend ⚫ not started 🕐 pending in progress ✔️ results ready ⚠️ review required status legend ⚫ none 🔴 rejected 🟢 approved auditors validation in order to close and proceed with release or the next candidate version the following auditors must give the green light to this rc alberpilot ,0
63769,12374413646.0,IssuesEvent,2020-05-19 01:30:19,toebes/ciphers,https://api.github.com/repos/toebes/ciphers,opened,Baconian word generator needs a UI to show letters chosen,CodeBusters enhancement,"When generating a word baconian, it needs to have a field for the HINT characters.
With the given Hint characters, it should show in the letter map which letters are covered by the hint.
For example with the sample plain text
SOMETHING
and a HINT of
SOME
With the text chosen as:
BY OUR ERNST ALERT AUDIO --- BE ITS EARTH A BOOK ABBEY
On the mapping, the letters **AB DE I L NO RSTU Y** should be bold or highlighted in a color as well as the A/B letter that they map to
**AB**C**DE**FGH**I**JK**L**M**NO**PQ**RSTU**VWX**Y**Z
Ideally the code should also check the question text to make sure that the hint occurs in the question (like the other generators do). Note that the hint field should only be present and checked for the word baconian.",1.0,"Baconian word generator needs a UI to show letters chosen - When generating a word baconian, it needs to have a field for the HINT characters.
With the given Hint characters, it should show in the letter map which letters are covered by the hint.
For example with the sample plain text
SOMETHING
and a HINT of
SOME
With the text chosen as:
BY OUR ERNST ALERT AUDIO --- BE ITS EARTH A BOOK ABBEY
On the mapping, the letters **AB DE I L NO RSTU Y** should be bold or highlighted in a color as well as the A/B letter that they map to
**AB**C**DE**FGH**I**JK**L**M**NO**PQ**RSTU**VWX**Y**Z
Ideally the code should also check the question text to make sure that the hint occurs in the question (like the other generators do). Note that the hint field should only be present and checked for the word baconian.",0,baconian word generator needs a ui to show letters chosen when generating a word baconian it needs to have a field for the hint characters with the given hint characters it should show in the letter map which letters are covered by the hint for example with the sample plain text something and a hint of some with the text chosen as by our ernst alert audio be its earth a book abbey on the mapping the letters ab de i l no rstu y should be bold or highlighted in a color as well as the a b letter that they map to ab c de fgh i jk l m no pq rstu vwx y z ideally the code should also check the question text to make sure that the hint occurs in the question like the other generators do note that the hint field should only be present and checked for the word baconian ,0
1428,6205983048.0,IssuesEvent,2017-07-06 17:22:32,ocaml/opam-repository,https://api.github.com/repos/ocaml/opam-repository,closed,async_graphics installation failure,depext incorrect constraints needs maintainer action,"```
#=== ERROR while installing async_graphics.0.5.1 ==============================#
# opam-version 1.2.2
# os darwin
# command ./install.sh
# path /Users/blym/.opam/system/build/async_graphics.0.5.1
# compiler system (4.02.3)
# exit-code 2
# env-file /Users/blym/.opam/system/build/async_graphics.0.5.1/async_graphics-41712-60459a.env
# stdout-file /Users/blym/.opam/system/build/async_graphics.0.5.1/async_graphics-41712-60459a.out
# stderr-file /Users/blym/.opam/system/build/async_graphics.0.5.1/async_graphics-41712-60459a.err
### stdout ###
# ocamlfind ocamldep -package graphics -package async -modules async_graphics.mli > async_graphics.mli.depends
# ocamlfind ocamlc -c -thread -package graphics -package async -o async_graphics.cmi async_graphics.mli
# ocamlfind ocamldep -package graphics -package async -modules async_graphics.ml > async_graphics.ml.depends
# ocamlfind ocamlc -c -thread -package graphics -package async -o async_graphics.cmo async_graphics.ml
# + ocamlfind ocamlc -c -thread -package graphics -package async -o async_graphics.cmo async_graphics.ml
# File ""async_graphics.ml"", line 17, characters 8-16:
# Error: Unbound module Graphics
# Command exited with code 2.
### stderr ###
# ocamlfind: _build/async_graphics.a: No such file or directory
```
",True,"async_graphics installation failure - ```
#=== ERROR while installing async_graphics.0.5.1 ==============================#
# opam-version 1.2.2
# os darwin
# command ./install.sh
# path /Users/blym/.opam/system/build/async_graphics.0.5.1
# compiler system (4.02.3)
# exit-code 2
# env-file /Users/blym/.opam/system/build/async_graphics.0.5.1/async_graphics-41712-60459a.env
# stdout-file /Users/blym/.opam/system/build/async_graphics.0.5.1/async_graphics-41712-60459a.out
# stderr-file /Users/blym/.opam/system/build/async_graphics.0.5.1/async_graphics-41712-60459a.err
### stdout ###
# ocamlfind ocamldep -package graphics -package async -modules async_graphics.mli > async_graphics.mli.depends
# ocamlfind ocamlc -c -thread -package graphics -package async -o async_graphics.cmi async_graphics.mli
# ocamlfind ocamldep -package graphics -package async -modules async_graphics.ml > async_graphics.ml.depends
# ocamlfind ocamlc -c -thread -package graphics -package async -o async_graphics.cmo async_graphics.ml
# + ocamlfind ocamlc -c -thread -package graphics -package async -o async_graphics.cmo async_graphics.ml
# File ""async_graphics.ml"", line 17, characters 8-16:
# Error: Unbound module Graphics
# Command exited with code 2.
### stderr ###
# ocamlfind: _build/async_graphics.a: No such file or directory
```
",1,async graphics installation failure error while installing async graphics opam version os darwin command install sh path users blym opam system build async graphics compiler system exit code env file users blym opam system build async graphics async graphics env stdout file users blym opam system build async graphics async graphics out stderr file users blym opam system build async graphics async graphics err stdout ocamlfind ocamldep package graphics package async modules async graphics mli async graphics mli depends ocamlfind ocamlc c thread package graphics package async o async graphics cmi async graphics mli ocamlfind ocamldep package graphics package async modules async graphics ml async graphics ml depends ocamlfind ocamlc c thread package graphics package async o async graphics cmo async graphics ml ocamlfind ocamlc c thread package graphics package async o async graphics cmo async graphics ml file async graphics ml line characters error unbound module graphics command exited with code stderr ocamlfind build async graphics a no such file or directory ,1
237742,26085316826.0,IssuesEvent,2022-12-26 01:31:08,n-devs/testTungTonScript,https://api.github.com/repos/n-devs/testTungTonScript,opened,"CVE-2022-46175 (High) detected in json5-0.5.1.tgz, json5-0.4.0.tgz",security vulnerability,"## CVE-2022-46175 - High Severity Vulnerability
Vulnerable Libraries - json5-0.5.1.tgz, json5-0.4.0.tgz
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-46175 (High) detected in json5-0.5.1.tgz, json5-0.4.0.tgz - ## CVE-2022-46175 - High Severity Vulnerability
Vulnerable Libraries - json5-0.5.1.tgz, json5-0.4.0.tgz
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in tgz tgz cve high severity vulnerability vulnerable libraries tgz tgz tgz json for the era library home page a href path to dependency file testtungtonscript package json path to vulnerable library node modules package json dependency hierarchy jest expo tgz root library x tgz vulnerable library tgz json for the era library home page a href path to dependency file testtungtonscript package json path to vulnerable library node modules metro node modules package json dependency hierarchy react native tgz root library metro tgz x tgz vulnerable library found in head commit a href vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including version does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this vulnerability is patched in version and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
4618,2737057041.0,IssuesEvent,2015-04-20 00:02:18,piwik/piwik,https://api.github.com/repos/piwik/piwik,closed,Marketplace Plugin blocks are too big in some cases,Bug c: Design / UI,"Here is an example of a Marketplace page rendered:

Because of the PerformanceInfo plugin that has 2 yellow boxes and has a tall box, all other plugin boxes are tall as well
We expect to see instead the plugin blocks less high such as in this example:

",1.0,"Marketplace Plugin blocks are too big in some cases - Here is an example of a Marketplace page rendered:

Because of the PerformanceInfo plugin that has 2 yellow boxes and has a tall box, all other plugin boxes are tall as well
We expect to see instead the plugin blocks less high such as in this example:

",0,marketplace plugin blocks are too big in some cases here is an example of a marketplace page rendered because of the performanceinfo plugin that has yellow boxes and has a tall box all other plugin boxes are tall as well we expect to see instead the plugin blocks less high such as in this example ,0
15897,20102575482.0,IssuesEvent,2022-02-07 06:57:32,SAP/openui5-docs,https://api.github.com/repos/SAP/openui5-docs,closed,Missing Deprecation Info for sap.ui.layout.form.GridLayout,In Process,"`GridLayout` is listed as layout option for the `SimpleFormLayout` without any note about its deprecation:
https://openui5.hana.ondemand.com/api/sap.ui.layout.form.SimpleFormLayout",1.0,"Missing Deprecation Info for sap.ui.layout.form.GridLayout - `GridLayout` is listed as layout option for the `SimpleFormLayout` without any note about its deprecation:
https://openui5.hana.ondemand.com/api/sap.ui.layout.form.SimpleFormLayout",0,missing deprecation info for sap ui layout form gridlayout gridlayout is listed as layout option for the simpleformlayout without any note about its deprecation ,0
1295,5518021290.0,IssuesEvent,2017-03-18 04:05:03,OpenLightingProject/ola,https://api.github.com/repos/OpenLightingProject/ola,closed,build failures with gcc7,bug Difficulty-Easy Language-C++ Maintainability OpSys-Linux,"Hi,
I received Debian bug [853583](https://bugs.debian.org/853583) today, which claims that ola fails to build with GCC7.
It's not urgent (gcc7 won't be made the default until after the stretch release), but you might want to look into it. The bug report also contains instructions on how to install gcc7 from experimental on a Debian unstable system, so you can try building things.",True,"build failures with gcc7 - Hi,
I received Debian bug [853583](https://bugs.debian.org/853583) today, which claims that ola fails to build with GCC7.
It's not urgent (gcc7 won't be made the default until after the stretch release), but you might want to look into it. The bug report also contains instructions on how to install gcc7 from experimental on a Debian unstable system, so you can try building things.",1,build failures with hi i received debian bug today which claims that ola fails to build with it s not urgent won t be made the default until after the stretch release but you might want to look into it the bug report also contains instructions on how to install from experimental on a debian unstable system so you can try building things ,1
158052,6020995184.0,IssuesEvent,2017-06-07 17:42:10,jaredpalmer/razzle,https://api.github.com/repos/jaredpalmer/razzle,closed,Importing Font Awesome css,bug priority: medium,"I am trying to extend Razzle to handle font awesome. Font awesome requires a ?v= as part of the path (as discussed here: https://github.com/facebookincubator/create-react-app/issues/295). Razzle appears to break in the same way discussed in that issue when trying to import font-awesome.css. It seems the fix is to add regex to the loader (https://github.com/facebookincubator/create-react-app/pull/298#discussion-diff-72889071L76).
Just for reference, I am working off of the with-typescript example project. Thanks.",1.0,"Importing Font Awesome css - I am trying to extend Razzle to handle font awesome. Font awesome requires a ?v= as part of the path (as discussed here: https://github.com/facebookincubator/create-react-app/issues/295). Razzle appears to break in the same way discussed in that issue when trying to import font-awesome.css. It seems the fix is to add regex to the loader (https://github.com/facebookincubator/create-react-app/pull/298#discussion-diff-72889071L76).
Just for reference, I am working off of the with-typescript example project. Thanks.",0,importing font awesome css i am trying to extend razzle to handle font awesome font awesome requires a v as part of the path as discussed here razzle appears to break in the same way discussed in that issue when trying to import font awesome css it seems the fix is to add regex to the loader just for reference i am working off of the with typescript example project thanks ,0
182827,30989698931.0,IssuesEvent,2023-08-09 02:49:00,appsmithorg/appsmith,https://api.github.com/repos/appsmithorg/appsmith,opened,Support headings props to the Menu Component.,Design System Pod,"I noticed that the menu component doesn't have the ability to support a heading as a prop. This could potentially limit its usefulness in various scenarios.


Design
",1.0,"Support headings props to the Menu Component. - I noticed that the menu component doesn't have the ability to support a heading as a prop. This could potentially limit its usefulness in various scenarios.


Design
",0,support headings props to the menu component i noticed that the menu component doesn t have the ability to support a heading as a prop this could potentially limit its usefulness in various scenarios design ,0
263158,19901253956.0,IssuesEvent,2022-01-25 08:11:24,chocolatey/docs,https://api.github.com/repos/chocolatey/docs,closed,List Simple Server as a Not Supported Repository Option,documentation,"Go through documentation and fix wording to list Chocolatey Server/Simple Server as not covered under the purview of the C4B support structure.
One place to change: https://docs.chocolatey.org/en-us/features/host-packages#known-hosting-options
Another efrence: https://docs.chocolatey.org/en-us/features/host-packages#known-simple-server-options",1.0,"List Simple Server as a Not Supported Repository Option - Go through documentation and fix wording to list Chocolatey Server/Simple Server as not covered under the purview of the C4B support structure.
One place to change: https://docs.chocolatey.org/en-us/features/host-packages#known-hosting-options
Another efrence: https://docs.chocolatey.org/en-us/features/host-packages#known-simple-server-options",0,list simple server as a not supported repository option go through documentation and fix wording to list chocolatey server simple server as not covered under the purview of the support structure one place to change another efrence ,0
1193,5109564624.0,IssuesEvent,2017-01-05 21:12:31,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,ec2_asg_facts not gathering all ASG's,affects_2.3 aws bug_report cloud waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg_facts
##### ANSIBLE VERSION
ansible 2.3.0 - devel branch
Also present in 2.2.0 rc1
##### CONFIGURATION
##### OS / ENVIRONMENT
OSX 10.11.5
##### SUMMARY
When running a ec2_asg_facts it does not fetch all ASGs that are in the account
##### STEPS TO REPRODUCE
Difficult - you will need quite a few ASG's. It looks as though the golden number is 51.
Ansible will go off and happily describe the first 50 ASG but completely ignore the 51st. They are reported back in alphabetical order.
I have done some limited debugging of lib/ansible/extras/cloud/amazon/ecs_asg_facts.py.
Add `print asgs` between line 297 and 298 and the module will fail after printing 50 instances.
` - ec2_asg_facts:
profile: ""{{ profile }}""
region: ""{{ region }}""
register: current_instances
- debug: msg=""{{current_instances}}""`
This becomes particualry problematic when adding a name to the above as it will still only get the first 50 ASG's
##### EXPECTED RESULTS
I would expect it to describe all ASG's
",True,"ec2_asg_facts not gathering all ASG's - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg_facts
##### ANSIBLE VERSION
ansible 2.3.0 - devel branch
Also present in 2.2.0 rc1
##### CONFIGURATION
##### OS / ENVIRONMENT
OSX 10.11.5
##### SUMMARY
When running a ec2_asg_facts it does not fetch all ASGs that are in the account
##### STEPS TO REPRODUCE
Difficult - you will need quite a few ASG's. It looks as though the golden number is 51.
Ansible will go off and happily describe the first 50 ASG but completely ignore the 51st. They are reported back in alphabetical order.
I have done some limited debugging of lib/ansible/extras/cloud/amazon/ecs_asg_facts.py.
Add `print asgs` between line 297 and 298 and the module will fail after printing 50 instances.
` - ec2_asg_facts:
profile: ""{{ profile }}""
region: ""{{ region }}""
register: current_instances
- debug: msg=""{{current_instances}}""`
This becomes particualry problematic when adding a name to the above as it will still only get the first 50 ASG's
##### EXPECTED RESULTS
I would expect it to describe all ASG's
",1, asg facts not gathering all asg s issue type bug report component name asg facts ansible version ansible devel branch also present in configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment osx summary when running a asg facts it does not fetch all asgs that are in the account steps to reproduce difficult you will need quite a few asg s it looks as though the golden number is ansible will go off and happily describe the first asg but completely ignore the they are reported back in alphabetical order i have done some limited debugging of lib ansible extras cloud amazon ecs asg facts py add print asgs between line and and the module will fail after printing instances asg facts profile profile region region register current instances debug msg current instances this becomes particualry problematic when adding a name to the above as it will still only get the first asg s expected results i would expect it to describe all asg s ,1
263300,19906585859.0,IssuesEvent,2022-01-25 13:25:41,mainflux/mainflux,https://api.github.com/repos/mainflux/mainflux,closed,Add instructions for CoAP CLI,documentation good first issue,"**ENHANCEMENT**
1. Describe the enhancement you are requesting.
Currently our documentation recommends using [Copper for CoAP testing](https://mainflux.readthedocs.io/en/latest/messaging/#coap). Add the information about [coap-cli](https://github.com/mainflux/coap-cli) and give the usage example.
2. Indicate the importance of this enhancement to you (must-have, should-have, nice-to-have).
Must have",1.0,"Add instructions for CoAP CLI - **ENHANCEMENT**
1. Describe the enhancement you are requesting.
Currently our documentation recommends using [Copper for CoAP testing](https://mainflux.readthedocs.io/en/latest/messaging/#coap). Add the information about [coap-cli](https://github.com/mainflux/coap-cli) and give the usage example.
2. Indicate the importance of this enhancement to you (must-have, should-have, nice-to-have).
Must have",0,add instructions for coap cli enhancement describe the enhancement you are requesting currently our documentation recommends using add the information about and give the usage example indicate the importance of this enhancement to you must have should have nice to have must have,0
48632,12225260065.0,IssuesEvent,2020-05-03 04:01:18,Autodesk/arnold-usd,https://api.github.com/repos/Autodesk/arnold-usd,closed,Update testsuite scripts to match arnold ,bug build,"**Describe the bug**
The arnold-usd testsuite scripts have slightly derived from arnold core ones, which is causing issues when running the testsuite from arnold. The parameter `resave` should be called again `resaved`, and we should check whether it's a string or a boolean. When set to true, we assume the scene has to be resaved to .ass. This way, both test scripts will be similar again",1.0,"Update testsuite scripts to match arnold - **Describe the bug**
The arnold-usd testsuite scripts have slightly derived from arnold core ones, which is causing issues when running the testsuite from arnold. The parameter `resave` should be called again `resaved`, and we should check whether it's a string or a boolean. When set to true, we assume the scene has to be resaved to .ass. This way, both test scripts will be similar again",0,update testsuite scripts to match arnold describe the bug the arnold usd testsuite scripts have slightly derived from arnold core ones which is causing issues when running the testsuite from arnold the parameter resave should be called again resaved and we should check whether it s a string or a boolean when set to true we assume the scene has to be resaved to ass this way both test scripts will be similar again,0
1144,5003265062.0,IssuesEvent,2016-12-11 20:35:02,tgstation/tgstation,https://api.github.com/repos/tgstation/tgstation,closed,Crafting system lacks expandability,Maintainability - Hinders improvements -,"Recipes cannot customize output, how materials are consumed, if a material has to be of a specific type or in a specific state.
The pin removal crafting recipe only works because of this bit of code, which is obviously unsustainable;
/obj/item/weapon/gun/CheckParts(list/parts_list)
..()
var/obj/item/weapon/gun/G = locate(/obj/item/weapon/gun) in contents
if(G)
G.loc = loc
qdel(G.pin)
G.pin = null
visible_message(""[G] can now fit a new pin, but old one was destroyed in the process."", null, null, 3)
qdel(src)
",True,"Crafting system lacks expandability - Recipes cannot customize output, how materials are consumed, if a material has to be of a specific type or in a specific state.
The pin removal crafting recipe only works because of this bit of code, which is obviously unsustainable;
/obj/item/weapon/gun/CheckParts(list/parts_list)
..()
var/obj/item/weapon/gun/G = locate(/obj/item/weapon/gun) in contents
if(G)
G.loc = loc
qdel(G.pin)
G.pin = null
visible_message(""[G] can now fit a new pin, but old one was destroyed in the process."", null, null, 3)
qdel(src)
",1,crafting system lacks expandability recipes cannot customize output how materials are consumed if a material has to be of a specific type or in a specific state the pin removal crafting recipe only works because of this bit of code which is obviously unsustainable obj item weapon gun checkparts list parts list var obj item weapon gun g locate obj item weapon gun in contents if g g loc loc qdel g pin g pin null visible message can now fit a new pin but old one was destroyed in the process null null qdel src ,1
25971,12810635893.0,IssuesEvent,2020-07-03 19:23:33,hajimehoshi/ebiten,https://api.github.com/repos/hajimehoshi/ebiten,closed,Low performance with multiple Ebiten processes,bug external performance wontfix,"@Shnifer reported:
> AMD Phenom II X4, Win 7
minimal test case:
> https://github.com/Shnifer/magellan/tree/master/tests/performance_test/
> if i run it 3 times fps = 60/3 = 20 fps, and two instances have 60/2 = 30 fps
> as if they all have one common 60hz ticker and use it consistently )
> ( in fact image and text are not needed, just a black screen shows the same fps )",True,"Low performance with multiple Ebiten processes - @Shnifer reported:
> AMD Phenom II X4, Win 7
minimal test case:
> https://github.com/Shnifer/magellan/tree/master/tests/performance_test/
> if i run it 3 times fps = 60/3 = 20 fps, and two instances have 60/2 = 30 fps
> as if they all have one common 60hz ticker and use it consistently )
> ( in fact image and text are not needed, just a black screen shows the same fps )",0,low performance with multiple ebiten processes shnifer reported amd phenom ii win minimal test case if i run it times fps fps and two instances have fps as if they all have one common ticker and use it consistently in fact image and text are not needed just a black screen shows the same fps ,0
5802,30727495672.0,IssuesEvent,2023-07-27 21:00:22,cncf/tag-contributor-strategy,https://api.github.com/repos/cncf/tag-contributor-strategy,closed,Create a roadmap for the TAG,wg/governance wg/contribgrowth mentoring wg/maintainers-circle,"In #329 we discussed how helpful having a roadmap is to encourage contributors and focus their efforts on high impact items. We should take our own advice and make one ourselves. Couple thoughts:
* Avoid making a wishlist. Limit to what we can realistically do with the current amount of time / velocity on projects.
* If needed call out what isn't on the roadmap that people may be looking for and wondering about.
* Prioritize or otherwise call out hard commitments for supporting the TOC.
* Make sure to include any time spent generally supporting projects, sometimes what we do doesn't fall into a ""feature"" type bucket but is still important. ",True,"Create a roadmap for the TAG - In #329 we discussed how helpful having a roadmap is to encourage contributors and focus their efforts on high impact items. We should take our own advice and make one ourselves. Couple thoughts:
* Avoid making a wishlist. Limit to what we can realistically do with the current amount of time / velocity on projects.
* If needed call out what isn't on the roadmap that people may be looking for and wondering about.
* Prioritize or otherwise call out hard commitments for supporting the TOC.
* Make sure to include any time spent generally supporting projects, sometimes what we do doesn't fall into a ""feature"" type bucket but is still important. ",1,create a roadmap for the tag in we discussed how helpful having a roadmap is to encourage contributors and focus their efforts on high impact items we should take our own advice and make one ourselves couple thoughts avoid making a wishlist limit to what we can realistically do with the current amount of time velocity on projects if needed call out what isn t on the roadmap that people may be looking for and wondering about prioritize or otherwise call out hard commitments for supporting the toc make sure to include any time spent generally supporting projects sometimes what we do doesn t fall into a feature type bucket but is still important ,1
241119,7808927049.0,IssuesEvent,2018-06-11 21:53:42,rogerthat-platform/rogerthat-backend,https://api.github.com/repos/rogerthat-platform/rogerthat-backend,closed,Unhandled payments error - KeyError: 'location',priority_critical state_verification type_bug,"Happened 2 times for OSA Bakker
```
...
2018-06-09 00:33:20.893 CEST
Sending request to https://dev.payconiq.com/v2/transactions (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:315)
{""currency"":""EUR"",""amount"":200,""description"":""Payment OSA Bakker via Onze Stad App app\nRef.: _js_411efb44-a04d-f762-3784-xxx"",""callbackUrl"":""https://rogerth.at/payments/callbacks/payconiq/transaction/update?id=_js_411efb44-a04d-f762-3784-xxx""}
2018-06-09 00:33:21.005 CEST
{""transactionId"":""xxx""} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:327)
2018-06-09 00:33:21.005 CEST
{'x-application-context': 'Payconiq API Gateway:ext:8080', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'x-newrelic-app-data': '***', 'date': 'Fri, 08 Jun 2018 22:33:20 GMT', 'content-type': 'application/json;charset=UTF-8'} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:328)
2018-06-09 00:33:21.005 CEST
Unhandled payments error (Unhandled payments error (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/add_1_monkey_patches.py:128/base/data/home/apps/e~roger )
Traceback (most recent call last):
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/api/payment.py"", line 160, in _do_call
result = call(app_user, *args, **kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 164, in typechecked_return
result = f(*args, **kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 142, in typechecked_f
return f(**kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/__init__.py"", line 347, in create_transaction
return get_api_module(provider_id).create_transaction(app_user, params)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 164, in typechecked_return
result = f(*args, **kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 142, in typechecked_f
return f(**kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py"", line 329, in create_transaction
payconic_transaction_url = result.headers['Location']
File ""/base/alloc/tmpfs/dynamic_runtimes/python27/277b61042b697c7a_unzipped/python27_lib/versions/1/google/appengine/api/urlfetch.py"", line 109, in __getitem__
return self.data[self.caseless_keys[key.lower()]]
KeyError: 'location'
2018-06-09 00:33:21.080 CEST
[XX-OFFLOADv1]{""timestamp"":1528497201.0803299,""request_data"":{""a"":[],""c"":[{""a"":{""request"":{""provider_id"":""payconiq"",""params"":""{\""target\"":\""service-b6580411-e0a8-4f3b-bcf4-17b50f5f310e@rogerth.at\"",\""currency\"":\""EUR\"",\""amount\"":200,\""precision\"":2,\""memo\"":\""Payment OSA Bakker via Onze Stad App app\"",\""message_key\"":\""_js_411efb44-a04d-f762-3784-52d55d3f148e\"",\""test_mode\"":true}""}},""ci"":""06922f5f-6d2f-4678-9f7d-1562b70c853c"",""av"":1,""t"":1528497201,""f"":""com.mobicage.api.payment.createTransaction""}],""r"":[],""av"":1},""type"":""app"",""response_data"":{""ap"":""https://rogerthat-server.appspot.com/json-rpc"",""r"":[{""s"":""success"",""r"":{""result"":null,""success"":false,""error"":{""message"":""*****"",""code"":""unknown"",""data"":null}},""av"":1,""ci"":""06922f5f-6d2f-4678-9f7d-1562b70c853c"",""t"":1528497201}],""av"":1,""t"":1528497201,""more"":false},""user"":""c2cea01b5830a394e19744762f0f118e:osa-demo2""} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/lib/log_offload/log_offload.py:58)
```",1.0,"Unhandled payments error - KeyError: 'location' - Happened 2 times for OSA Bakker
```
...
2018-06-09 00:33:20.893 CEST
Sending request to https://dev.payconiq.com/v2/transactions (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:315)
{""currency"":""EUR"",""amount"":200,""description"":""Payment OSA Bakker via Onze Stad App app\nRef.: _js_411efb44-a04d-f762-3784-xxx"",""callbackUrl"":""https://rogerth.at/payments/callbacks/payconiq/transaction/update?id=_js_411efb44-a04d-f762-3784-xxx""}
2018-06-09 00:33:21.005 CEST
{""transactionId"":""xxx""} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:327)
2018-06-09 00:33:21.005 CEST
{'x-application-context': 'Payconiq API Gateway:ext:8080', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'x-newrelic-app-data': '***', 'date': 'Fri, 08 Jun 2018 22:33:20 GMT', 'content-type': 'application/json;charset=UTF-8'} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py:328)
2018-06-09 00:33:21.005 CEST
Unhandled payments error (Unhandled payments error (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/add_1_monkey_patches.py:128/base/data/home/apps/e~roger )
Traceback (most recent call last):
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/api/payment.py"", line 160, in _do_call
result = call(app_user, *args, **kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 164, in typechecked_return
result = f(*args, **kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 142, in typechecked_f
return f(**kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/__init__.py"", line 347, in create_transaction
return get_api_module(provider_id).create_transaction(app_user, params)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 164, in typechecked_return
result = f(*args, **kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/mcfw/rpc.py"", line 142, in typechecked_f
return f(**kwargs)
File ""/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/rogerthat/bizz/payment/providers/payconiq/api.py"", line 329, in create_transaction
payconic_transaction_url = result.headers['Location']
File ""/base/alloc/tmpfs/dynamic_runtimes/python27/277b61042b697c7a_unzipped/python27_lib/versions/1/google/appengine/api/urlfetch.py"", line 109, in __getitem__
return self.data[self.caseless_keys[key.lower()]]
KeyError: 'location'
2018-06-09 00:33:21.080 CEST
[XX-OFFLOADv1]{""timestamp"":1528497201.0803299,""request_data"":{""a"":[],""c"":[{""a"":{""request"":{""provider_id"":""payconiq"",""params"":""{\""target\"":\""service-b6580411-e0a8-4f3b-bcf4-17b50f5f310e@rogerth.at\"",\""currency\"":\""EUR\"",\""amount\"":200,\""precision\"":2,\""memo\"":\""Payment OSA Bakker via Onze Stad App app\"",\""message_key\"":\""_js_411efb44-a04d-f762-3784-52d55d3f148e\"",\""test_mode\"":true}""}},""ci"":""06922f5f-6d2f-4678-9f7d-1562b70c853c"",""av"":1,""t"":1528497201,""f"":""com.mobicage.api.payment.createTransaction""}],""r"":[],""av"":1},""type"":""app"",""response_data"":{""ap"":""https://rogerthat-server.appspot.com/json-rpc"",""r"":[{""s"":""success"",""r"":{""result"":null,""success"":false,""error"":{""message"":""*****"",""code"":""unknown"",""data"":null}},""av"":1,""ci"":""06922f5f-6d2f-4678-9f7d-1562b70c853c"",""t"":1528497201}],""av"":1,""t"":1528497201,""more"":false},""user"":""c2cea01b5830a394e19744762f0f118e:osa-demo2""} (/base/data/home/apps/e~rogerthat-server/20180608t091910.410288129468509991/lib/log_offload/log_offload.py:58)
```",0,unhandled payments error keyerror location happened times for osa bakker cest sending request to base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py currency eur amount description payment osa bakker via onze stad app app nref js xxx callbackurl cest transactionid xxx base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py cest x application context payconiq api gateway ext transfer encoding chunked connection keep alive x newrelic app data date fri jun gmt content type application json charset utf base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py cest unhandled payments error unhandled payments error base data home apps e rogerthat server add monkey patches py base data home apps e roger traceback most recent call last file base data home apps e rogerthat server rogerthat api payment py line in do call result call app user args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat bizz payment init py line in create transaction return get api module provider id create transaction app user params file base data home apps e rogerthat server mcfw rpc py line in typechecked return result f args kwargs file base data home apps e rogerthat server mcfw rpc py line in typechecked f return f kwargs file base data home apps e rogerthat server rogerthat bizz payment providers payconiq api py line in create transaction payconic transaction url result headers file base alloc tmpfs dynamic runtimes unzipped lib versions google appengine api urlfetch py line in getitem return self data keyerror location cest timestamp request data a c r av type app response data ap av t more false user osa base data home apps e rogerthat server lib log offload log offload py ,0
22842,10789978349.0,IssuesEvent,2019-11-05 13:09:16,silinternational/simplesamlphp-module-sildisco,https://api.github.com/repos/silinternational/simplesamlphp-module-sildisco,opened,"WS-2016-0090 (Medium) detected in jquery-1.8.3.min.js, simplesamlphp/simplesamlphp-v1.17.6",security vulnerability,"## WS-2016-0090 - Medium Severity Vulnerability
Vulnerable Libraries - jquery-1.8.3.min.js, simplesamlphp/simplesamlphp-v1.17.6
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2016-0090 (Medium) detected in jquery-1.8.3.min.js, simplesamlphp/simplesamlphp-v1.17.6 - ## WS-2016-0090 - Medium Severity Vulnerability
Vulnerable Libraries - jquery-1.8.3.min.js, simplesamlphp/simplesamlphp-v1.17.6
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in jquery min js simplesamlphp simplesamlphp ws medium severity vulnerability vulnerable libraries jquery min js simplesamlphp simplesamlphp jquery min js javascript library for dom operations library home page a href path to vulnerable library simplesamlphp module sildisco vendor simplesamlphp simplesamlphp www resources jquery js dependency hierarchy x jquery min js vulnerable library simplesamlphp simplesamlphp simplesamlphp is an award winning application written in native php that deals with authentication dependency hierarchy simplesamlphp composer module installer root library x simplesamlphp simplesamlphp vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks via text javascript response with arbitrary code execution publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
26387,12404876892.0,IssuesEvent,2020-05-21 16:16:37,cityofaustin/atd-data-tech,https://api.github.com/repos/cityofaustin/atd-data-tech,opened,TIA Customer Case Management: Customer Permit Details Page,Need: 1-Must Have Product: TIA Module Service: Apps Status: Done Type: Feature Workgroup: TDSD imported-from-csv,"As a TIA Customer I'd like a case details page that allows me to see basic information about my case, especially the current status.",1.0,"TIA Customer Case Management: Customer Permit Details Page - As a TIA Customer I'd like a case details page that allows me to see basic information about my case, especially the current status.",0,tia customer case management customer permit details page as a tia customer i d like a case details page that allows me to see basic information about my case especially the current status ,0
1832,6577362042.0,IssuesEvent,2017-09-12 00:22:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,django_manage createsuperuser does not seem idempotent,affects_2.0 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
django_manage
##### ANSIBLE VERSION
```
ansible 2.0.0.2
```
##### OS / ENVIRONMENT
OSX El Captain / 10.11.5, managing CentOS 7.1
##### SUMMARY
Issuing command createsuperuser ends up in error after second invocation.
##### STEPS TO REPRODUCE
Using this task, hard coding username/mail does not yeald in better effect
```
- name: Django add superusers
django_manage:
app_path: ""{{ seeder_home }}/Seeder""
virtualenv: ""{{ seeder_virtualenv }}""
command: ""createsuperuser --noinput --username={{ item.name }} --email={{ item.mail }}"" # Not idempotent, probably bug in django_manage module
with_items: ""{{ seeder_admins }}""
tags: django
ignore_errors: yes
```
##### EXPECTED RESULTS
It should not try to create user again
##### ACTUAL RESULTS
```
stderr: Traceback (most recent call last):\n File \""/opt/virtualenv/seeder/lib/python3.4/site-packages/django/db/backends/utils.py\"", line 64, in execute\n return self.cursor.execute(sql, params)\npsycopg2.IntegrityError: duplicate key value violates unique constraint \""auth_user_username_key\""\nDETAIL:
Key (username)=(rudolf) already exists.
```
",True,"django_manage createsuperuser does not seem idempotent - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
django_manage
##### ANSIBLE VERSION
```
ansible 2.0.0.2
```
##### OS / ENVIRONMENT
OSX El Captain / 10.11.5, managing CentOS 7.1
##### SUMMARY
Issuing command createsuperuser ends up in error after second invocation.
##### STEPS TO REPRODUCE
Using this task, hard coding username/mail does not yeald in better effect
```
- name: Django add superusers
django_manage:
app_path: ""{{ seeder_home }}/Seeder""
virtualenv: ""{{ seeder_virtualenv }}""
command: ""createsuperuser --noinput --username={{ item.name }} --email={{ item.mail }}"" # Not idempotent, probably bug in django_manage module
with_items: ""{{ seeder_admins }}""
tags: django
ignore_errors: yes
```
##### EXPECTED RESULTS
It should not try to create user again
##### ACTUAL RESULTS
```
stderr: Traceback (most recent call last):\n File \""/opt/virtualenv/seeder/lib/python3.4/site-packages/django/db/backends/utils.py\"", line 64, in execute\n return self.cursor.execute(sql, params)\npsycopg2.IntegrityError: duplicate key value violates unique constraint \""auth_user_username_key\""\nDETAIL:
Key (username)=(rudolf) already exists.
```
",1,django manage createsuperuser does not seem idempotent issue type bug report component name django manage ansible version ansible os environment osx el captain managing centos summary issuing command createsuperuser ends up in error after second invocation steps to reproduce using this task hard coding username mail does not yeald in better effect name django add superusers django manage app path seeder home seeder virtualenv seeder virtualenv command createsuperuser noinput username item name email item mail not idempotent probably bug in django manage module with items seeder admins tags django ignore errors yes expected results it should not try to create user again actual results stderr traceback most recent call last n file opt virtualenv seeder lib site packages django db backends utils py line in execute n return self cursor execute sql params integrityerror duplicate key value violates unique constraint auth user username key ndetail key username rudolf already exists ,1
47215,11984257776.0,IssuesEvent,2020-04-07 15:36:03,Exawind/nalu-wind,https://api.github.com/repos/Exawind/nalu-wind,closed,./abl_mesh -i nalu_abl_mesh.yaml outputs nothing and does not exit with latest wind-utils executables,build-issues,"Hello, I'm trying to build nalu-wind with wind-utils to perform simulation with yaw misalignment.
I follow the installation manual of Development Build of Nalu-Wind(https://nalu-wind.readthedocs.io/en/latest/source/user/build_spack.html#development-build-of-nalu-wind)
cmake -DTrilinos_DIR:PATH=$(spack location -i trilinos) \
-DYAML_DIR:PATH=$(spack location -i yaml-cpp) \
-DCMAKE_BUILD_TYPE=RELEASE \
..
make
with -DENABLE_WIND_UTILS=ON added to enable wind utils
But met the following error:
CMakeFiles/nalu_preprocess.dir/nalu_preprocess.cpp.o:(.data.rel.ro._ZTVN5boost15program_options11typed_valueINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEcEE[_ZTVN5boost15program_options11typed_valueINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEcEE]+0x38): undefined reference to `boost::program_options::value_semantic_codecvt_helper::parse(boost::any&, std::vector, std::allocator >, std::allocator, std::allocator > > > const&, bool) const'
collect2: error: ld returned 1 exit status
wind-utils/src/preprocessing/CMakeFiles/nalu_preprocess.dir/build.make:807: recipe for target 'wind-utils/src/preprocessing/nalu_preprocess' failed
make[2]: *** [wind-utils/src/preprocessing/nalu_preprocess] Error 1
CMakeFiles/Makefile2:1783: recipe for target 'wind-utils/src/preprocessing/CMakeFiles/nalu_preprocess.dir/all' failed
make[1]: *** [wind-utils/src/preprocessing/CMakeFiles/nalu_preprocess.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
Is there anyone could help me?
Millions of thanks in advance.
",1.0,"./abl_mesh -i nalu_abl_mesh.yaml outputs nothing and does not exit with latest wind-utils executables - Hello, I'm trying to build nalu-wind with wind-utils to perform simulation with yaw misalignment.
I follow the installation manual of Development Build of Nalu-Wind(https://nalu-wind.readthedocs.io/en/latest/source/user/build_spack.html#development-build-of-nalu-wind)
cmake -DTrilinos_DIR:PATH=$(spack location -i trilinos) \
-DYAML_DIR:PATH=$(spack location -i yaml-cpp) \
-DCMAKE_BUILD_TYPE=RELEASE \
..
make
with -DENABLE_WIND_UTILS=ON added to enable wind utils
But met the following error:
CMakeFiles/nalu_preprocess.dir/nalu_preprocess.cpp.o:(.data.rel.ro._ZTVN5boost15program_options11typed_valueINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEcEE[_ZTVN5boost15program_options11typed_valueINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEcEE]+0x38): undefined reference to `boost::program_options::value_semantic_codecvt_helper::parse(boost::any&, std::vector, std::allocator >, std::allocator, std::allocator > > > const&, bool) const'
collect2: error: ld returned 1 exit status
wind-utils/src/preprocessing/CMakeFiles/nalu_preprocess.dir/build.make:807: recipe for target 'wind-utils/src/preprocessing/nalu_preprocess' failed
make[2]: *** [wind-utils/src/preprocessing/nalu_preprocess] Error 1
CMakeFiles/Makefile2:1783: recipe for target 'wind-utils/src/preprocessing/CMakeFiles/nalu_preprocess.dir/all' failed
make[1]: *** [wind-utils/src/preprocessing/CMakeFiles/nalu_preprocess.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
Is there anyone could help me?
Millions of thanks in advance.
",0, abl mesh i nalu abl mesh yaml outputs nothing and does not exit with latest wind utils executables hello i m trying to build nalu wind with wind utils to perform simulation with yaw misalignment i follow the installation manual of development build of nalu wind cmake dtrilinos dir path spack location i trilinos dyaml dir path spack location i yaml cpp dcmake build type release make with denable wind utils on added to enable wind utils but met the following error cmakefiles nalu preprocess dir nalu preprocess cpp o data rel ro traitsicesaiceeecee undefined reference to boost program options value semantic codecvt helper parse boost any std vector std allocator std allocator std allocator const bool const error ld returned exit status wind utils src preprocessing cmakefiles nalu preprocess dir build make recipe for target wind utils src preprocessing nalu preprocess failed make error cmakefiles recipe for target wind utils src preprocessing cmakefiles nalu preprocess dir all failed make error makefile recipe for target all failed make error is there anyone could help me millions of thanks in advance ,0
12008,9551365616.0,IssuesEvent,2019-05-02 14:18:40,terraform-providers/terraform-provider-azurerm,https://api.github.com/repos/terraform-providers/terraform-provider-azurerm,closed,Cannot create storage account that uses network rules,service/storage,"Hello,
i would like to create storage account that uses network rules, but i get follow error
`data.azurerm_storage_account.StorageAccount: : invalid or unknown key: network_rules`
This my config:
```
data ""azurerm_storage_account"" ""StorageAccount"" {
name = ""${var.infrastructure_storage_account_name}""
resource_group_name = ""${var.resource_group_name}""
network_rules {
ip_rules = [""foo/bar""]
virtual_network_subnet_ids = [""${azurerm_subnet.abc.id}""]
bypass = ""None""
}
}
```
Iam using terraform azurerm 1.24.0. Is this a bug or doing iam something wrong?",1.0,"Cannot create storage account that uses network rules - Hello,
i would like to create storage account that uses network rules, but i get follow error
`data.azurerm_storage_account.StorageAccount: : invalid or unknown key: network_rules`
This my config:
```
data ""azurerm_storage_account"" ""StorageAccount"" {
name = ""${var.infrastructure_storage_account_name}""
resource_group_name = ""${var.resource_group_name}""
network_rules {
ip_rules = [""foo/bar""]
virtual_network_subnet_ids = [""${azurerm_subnet.abc.id}""]
bypass = ""None""
}
}
```
Iam using terraform azurerm 1.24.0. Is this a bug or doing iam something wrong?",0,cannot create storage account that uses network rules hello i would like to create storage account that uses network rules but i get follow error data azurerm storage account storageaccount invalid or unknown key network rules this my config data azurerm storage account storageaccount name var infrastructure storage account name resource group name var resource group name network rules ip rules virtual network subnet ids bypass none iam using terraform azurerm is this a bug or doing iam something wrong ,0
3791,16110197279.0,IssuesEvent,2021-04-27 20:03:42,svengreb/wand,https://api.github.com/repos/svengreb/wand,closed,Update to `tmpl-go` template repository version `0.8.0`,context-workflow scope-maintainability scope-quality type-improvement,"Update to [`tmpl-go` version `0.8.0`][1] which [updates `golangci-lint` to version `1.39.0`][2] and [the `tmpl` repository version `0.9.0`][3].
[1]: https://github.com/svengreb/tmpl-go/releases/tag/v0.8.0
[2]: https://github.com/svengreb/tmpl-go/issues/56
[3]: https://github.com/svengreb/tmpl-go/issues/58",True,"Update to `tmpl-go` template repository version `0.8.0` - Update to [`tmpl-go` version `0.8.0`][1] which [updates `golangci-lint` to version `1.39.0`][2] and [the `tmpl` repository version `0.9.0`][3].
[1]: https://github.com/svengreb/tmpl-go/releases/tag/v0.8.0
[2]: https://github.com/svengreb/tmpl-go/issues/56
[3]: https://github.com/svengreb/tmpl-go/issues/58",1,update to tmpl go template repository version update to which and ,1
350,3252232192.0,IssuesEvent,2015-10-19 14:03:34,Homebrew/homebrew,https://api.github.com/repos/Homebrew/homebrew,closed,List outdated formulae separately in `brew update`,features maintainer feedback,"Recent updates have attempted to highlight outdated brews to users by [adding coloured highlights](https://github.com/Homebrew/homebrew/pull/44335), which [didn't play too well](https://github.com/Homebrew/homebrew/issues/45028) with some terminal colour schemes.
Currently, any installed formulae that're affected by `brew update` are displayed in bold with ` (installed)` appended. While a lot easier on the eyes than white-against-yellow, it adds clutter to the feedback - and some users probably aren't that quick to notice the bold letters.
I suggest simply listing affected formulae after the updates:

Ignore what's listed in the example, I had to improvise with makeshift feedback, since all my brews are currently up-to-date. Heh.
Thoughts?",True,"List outdated formulae separately in `brew update` - Recent updates have attempted to highlight outdated brews to users by [adding coloured highlights](https://github.com/Homebrew/homebrew/pull/44335), which [didn't play too well](https://github.com/Homebrew/homebrew/issues/45028) with some terminal colour schemes.
Currently, any installed formulae that're affected by `brew update` are displayed in bold with ` (installed)` appended. While a lot easier on the eyes than white-against-yellow, it adds clutter to the feedback - and some users probably aren't that quick to notice the bold letters.
I suggest simply listing affected formulae after the updates:

Ignore what's listed in the example, I had to improvise with makeshift feedback, since all my brews are currently up-to-date. Heh.
Thoughts?",1,list outdated formulae separately in brew update recent updates have attempted to highlight outdated brews to users by which with some terminal colour schemes currently any installed formulae that re affected by brew update are displayed in bold with installed appended while a lot easier on the eyes than white against yellow it adds clutter to the feedback and some users probably aren t that quick to notice the bold letters i suggest simply listing affected formulae after the updates ignore what s listed in the example i had to improvise with makeshift feedback since all my brews are currently up to date heh thoughts ,1
82543,10257522331.0,IssuesEvent,2019-08-21 20:20:21,pegnet/pegnet,https://api.github.com/repos/pegnet/pegnet,closed,Evaluating PoW to avoid a 51% attack,design discussion pM1 consideration,"We have an advantage because the selection of the record that matters is a distributed problem (over all entries submitted). But any miner with 51% of the hash power still has the chance of selecting the values actually used in a block. How do we protect ourselves from this?
An analysis of the final 50 OPRs in the current approach has no method to do much but calculate the agreement between the 50 OPRs. A miner with 26 Entries in the 50 can dictate that result. So how can we make it harder to get 26 Entries?
Change how we reduce 100 to 200 entries to 50.
This method works like this:
```
Collect the valid OPRs (all references to OPRs past this step assumes the list of valid OPRs)
Calculate the PoW for all the OPRs.
Take the difficulty of the last OPR submitted, and use it to create a salted hash for all OPRs
Sort by Salted Hash.
Then loop through all the OPRs by pairs
Keep the OPR of the pair that has the highest PoW
if all that is left + what you are to keep == 50, you are done
If at the end of the list (without a pair), and we still have more than 50 OPRs, repeat the loop
with the OPRs we Kept
```
What this does for any set of valid OPRs over 100 is ensure a party submitting 26 OPRs has a much reduced chance of being in set of 50. A mining pool submitting multiple entries is likely to compete with themselves prior to the selection of 50.
To have a good chance to have 26 entries out of 50, 51% is no longer enough. Many of your entries will end up competing with your own entries, ensuring one or the other no longer counts, no matter how high the hash power is for each.
The only entry with 100% certainty to win and go into the 50 is the highest hash power. But the second highest hash power might have been paired with the highest and eliminated. The impact of the algorithm is rather hard for me to calculate. Someone with some statistics might be able to figure it out. I need my stats book to do stats.
",1.0,"Evaluating PoW to avoid a 51% attack - We have an advantage because the selection of the record that matters is a distributed problem (over all entries submitted). But any miner with 51% of the hash power still has the chance of selecting the values actually used in a block. How do we protect ourselves from this?
An analysis of the final 50 OPRs in the current approach has no method to do much but calculate the agreement between the 50 OPRs. A miner with 26 Entries in the 50 can dictate that result. So how can we make it harder to get 26 Entries?
Change how we reduce 100 to 200 entries to 50.
This method works like this:
```
Collect the valid OPRs (all references to OPRs past this step assumes the list of valid OPRs)
Calculate the PoW for all the OPRs.
Take the difficulty of the last OPR submitted, and use it to create a salted hash for all OPRs
Sort by Salted Hash.
Then loop through all the OPRs by pairs
Keep the OPR of the pair that has the highest PoW
if all that is left + what you are to keep == 50, you are done
If at the end of the list (without a pair), and we still have more than 50 OPRs, repeat the loop
with the OPRs we Kept
```
What this does for any set of valid OPRs over 100 is ensure a party submitting 26 OPRs has a much reduced chance of being in set of 50. A mining pool submitting multiple entries is likely to compete with themselves prior to the selection of 50.
To have a good chance to have 26 entries out of 50, 51% is no longer enough. Many of your entries will end up competing with your own entries, ensuring one or the other no longer counts, no matter how high the hash power is for each.
The only entry with 100% certainty to win and go into the 50 is the highest hash power. But the second highest hash power might have been paired with the highest and eliminated. The impact of the algorithm is rather hard for me to calculate. Someone with some statistics might be able to figure it out. I need my stats book to do stats.
",0,evaluating pow to avoid a attack we have an advantage because the selection of the record that matters is a distributed problem over all entries submitted but any miner with of the hash power still has the chance of selecting the values actually used in a block how do we protect ourselves from this an analysis of the final oprs in the current approach has no method to do much but calculate the agreement between the oprs a miner with entries in the can dictate that result so how can we make it harder to get entries change how we reduce to entries to this method works like this collect the valid oprs all references to oprs past this step assumes the list of valid oprs calculate the pow for all the oprs take the difficulty of the last opr submitted and use it to create a salted hash for all oprs sort by salted hash then loop through all the oprs by pairs keep the opr of the pair that has the highest pow if all that is left what you are to keep you are done if at the end of the list without a pair and we still have more than oprs repeat the loop with the oprs we kept what this does for any set of valid oprs over is ensure a party submitting oprs has a much reduced chance of being in set of a mining pool submitting multiple entries is likely to compete with themselves prior to the selection of to have a good chance to have entries out of is no longer enough many of your entries will end up competing with your own entries ensuring one or the other no longer counts no matter how high the hash power is for each the only entry with certainty to win and go into the is the highest hash power but the second highest hash power might have been paired with the highest and eliminated the impact of the algorithm is rather hard for me to calculate someone with some statistics might be able to figure it out i need my stats book to do stats ,0
241113,26256646501.0,IssuesEvent,2023-01-06 01:44:40,belialNZ86/version-control-system,https://api.github.com/repos/belialNZ86/version-control-system,opened,WS-2021-0152 (High) detected in color-string-0.3.0.tgz,security vulnerability,"## WS-2021-0152 - High Severity Vulnerability
Vulnerable Library - color-string-0.3.0.tgz
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2021-03-12
Fix Resolution (color-string): 1.5.5
Direct dependency fix Resolution (css-loader): 1.0.0
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2021-0152 (High) detected in color-string-0.3.0.tgz - ## WS-2021-0152 - High Severity Vulnerability
Vulnerable Library - color-string-0.3.0.tgz
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2021-03-12
Fix Resolution (color-string): 1.5.5
Direct dependency fix Resolution (css-loader): 1.0.0
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws high detected in color string tgz ws high severity vulnerability vulnerable library color string tgz parser and generator for css color strings library home page a href path to dependency file version control system package json path to vulnerable library node modules color string package json dependency hierarchy css loader tgz root library cssnano tgz postcss colormin tgz colormin tgz color tgz x color string tgz vulnerable library vulnerability details regular expression denial of service redos was found in color string before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution color string direct dependency fix resolution css loader step up your open source security game with mend ,0
251657,18957682745.0,IssuesEvent,2021-11-18 22:31:32,KGISELLE/BOG003-md-links,https://api.github.com/repos/KGISELLE/BOG003-md-links,opened, Crear un plan de acción,documentation Planning,"Esto debería quedar detallado en el README.md de tu repo y en una serie de issues y milestones para priorizar y organizar el trabajo, y para poder hacer seguimiento de tu progreso.",1.0," Crear un plan de acción - Esto debería quedar detallado en el README.md de tu repo y en una serie de issues y milestones para priorizar y organizar el trabajo, y para poder hacer seguimiento de tu progreso.",0, crear un plan de acción esto debería quedar detallado en el readme md de tu repo y en una serie de issues y milestones para priorizar y organizar el trabajo y para poder hacer seguimiento de tu progreso ,0
175596,21313860622.0,IssuesEvent,2022-04-16 01:11:12,Nivaskumark/kernel_v4.1.15,https://api.github.com/repos/Nivaskumark/kernel_v4.1.15,opened,CVE-2017-15102 (Medium) detected in linuxlinux-4.6,security vulnerability,"## CVE-2017-15102 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.6
The tower_probe function in drivers/usb/misc/legousbtower.c in the Linux kernel before 4.8.1 allows local users (who are physically proximate for inserting a crafted USB device) to gain privileges by leveraging a write-what-where condition that occurs after a race condition and a NULL pointer dereference.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-15102 (Medium) detected in linuxlinux-4.6 - ## CVE-2017-15102 - Medium Severity Vulnerability
Vulnerable Library - linuxlinux-4.6
The tower_probe function in drivers/usb/misc/legousbtower.c in the Linux kernel before 4.8.1 allows local users (who are physically proximate for inserting a crafted USB device) to gain privileges by leveraging a write-what-where condition that occurs after a race condition and a NULL pointer dereference.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers usb misc legousbtower c vulnerability details the tower probe function in drivers usb misc legousbtower c in the linux kernel before allows local users who are physically proximate for inserting a crafted usb device to gain privileges by leveraging a write what where condition that occurs after a race condition and a null pointer dereference publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
2988,10790895495.0,IssuesEvent,2019-11-05 15:45:49,ansible/ansible,https://api.github.com/repos/ansible/ansible,opened,Allow the ability to replace/restore a device configuration.,affects_2.10 feature module needs_maintainer needs_triage support:community,"
##### SUMMARY
Currently we have the ability to push configurations to a device but not completely replace a running/startup configuration. It would be nice to have the ability to replace the device config with a backup configuration from a custom database.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Network Modules
##### ADDITIONAL INFORMATION
Assuming we have saved configurations in our own database, we would like to replace the running config with the ones from our collection.
```yaml
```
",True,"Allow the ability to replace/restore a device configuration. -
##### SUMMARY
Currently we have the ability to push configurations to a device but not completely replace a running/startup configuration. It would be nice to have the ability to replace the device config with a backup configuration from a custom database.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Network Modules
##### ADDITIONAL INFORMATION
Assuming we have saved configurations in our own database, we would like to replace the running config with the ones from our collection.
```yaml
```
",1,allow the ability to replace restore a device configuration summary currently we have the ability to push configurations to a device but not completely replace a running startup configuration it would be nice to have the ability to replace the device config with a backup configuration from a custom database issue type feature idea component name network modules additional information assuming we have saved configurations in our own database we would like to replace the running config with the ones from our collection yaml ,1
4723,24375280075.0,IssuesEvent,2022-10-03 23:57:16,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,init an internal lambda function?,area/examples area/init stage/pm-review maintainer/need-followup,`sam init --runtime go1.x` appears to create an API gateway type of function. What happens if I want to create a function that gets triggered by SNS or some direct lambda call (with JSON payload) instead??,True,init an internal lambda function? - `sam init --runtime go1.x` appears to create an API gateway type of function. What happens if I want to create a function that gets triggered by SNS or some direct lambda call (with JSON payload) instead??,1,init an internal lambda function sam init runtime x appears to create an api gateway type of function what happens if i want to create a function that gets triggered by sns or some direct lambda call with json payload instead ,1
88552,11102099276.0,IssuesEvent,2019-12-16 22:58:32,ipfs/docs,https://api.github.com/repos/ipfs/docs,closed,[NEW CONTENT] Dweb addressing,OKR 1: Content improvement Size: M design-content difficulty:easy docs-ipfs help wanted,"At the IPFS developer summit in Berlin in July 2018, we had poster-making sessions where people explored various IPFS concepts. We should expand on the DWeb Addressing poster by adding a doc in the [`content/guides/concepts`](https://github.com/ipfs/docs/tree/master/content/guides/concepts) folder.

This is a subtask of #56.",1.0,"[NEW CONTENT] Dweb addressing - At the IPFS developer summit in Berlin in July 2018, we had poster-making sessions where people explored various IPFS concepts. We should expand on the DWeb Addressing poster by adding a doc in the [`content/guides/concepts`](https://github.com/ipfs/docs/tree/master/content/guides/concepts) folder.

This is a subtask of #56.",0, dweb addressing at the ipfs developer summit in berlin in july we had poster making sessions where people explored various ipfs concepts we should expand on the dweb addressing poster by adding a doc in the folder this is a subtask of ,0
21657,10676150210.0,IssuesEvent,2019-10-21 13:14:09,repo-helper/badgeboard,https://api.github.com/repos/repo-helper/badgeboard,opened,CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz,security vulnerability,"## CVE-2015-8857 - High Severity Vulnerability
Vulnerable Library - uglify-js-2.2.5.tgz
JavaScript parser, mangler/compressor and beautifier toolkit
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2015-8857 (High) detected in uglify-js-2.2.5.tgz - ## CVE-2015-8857 - High Severity Vulnerability
Vulnerable Library - uglify-js-2.2.5.tgz
JavaScript parser, mangler/compressor and beautifier toolkit
The uglify-js package before 2.4.24 for Node.js does not properly account for non-boolean values when rewriting boolean expressions, which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten Javascript.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in uglify js tgz cve high severity vulnerability vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file tmp ws scm badgeboard package json path to vulnerable library tmp ws scm badgeboard node modules transformers node modules uglify js package json dependency hierarchy jade tgz root library transformers tgz x uglify js tgz vulnerable library found in head commit a href vulnerability details the uglify js package before for node js does not properly account for non boolean values when rewriting boolean expressions which might allow attackers to bypass security mechanisms or possibly have unspecified other impact by leveraging improperly rewritten javascript publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
77948,22049719371.0,IssuesEvent,2022-05-30 07:31:14,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,opened,Code coverage error should not fail a build,area/build kind/engineering,"Failures like this one: https://github.com/pulumi/pulumi/runs/6614209710?check_suite_focus=true#step:40:25

should not fail the whole build.",1.0,"Code coverage error should not fail a build - Failures like this one: https://github.com/pulumi/pulumi/runs/6614209710?check_suite_focus=true#step:40:25

should not fail the whole build.",0,code coverage error should not fail a build failures like this one should not fail the whole build ,0
1895,6577538836.0,IssuesEvent,2017-09-12 01:37:05,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,user/group not accepting an actual array,affects_2.0 bug_report waiting_on_maintainer,"##### Issue Type:
- Bug Report
##### Plugin Name:
user
##### Ansible Version:
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
##### Ansible Configuration:
##### Environment:
raspbian
##### Summary:
user module group param is expected to accept an array
##### Steps To Reproduce:
user:
name: foo
groups:
- a
- b
- c
append: true
##### Expected Results:
user foo added to groups a, b and c
##### Actual Results:
group does not exist ['a','b','c']
",True,"user/group not accepting an actual array - ##### Issue Type:
- Bug Report
##### Plugin Name:
user
##### Ansible Version:
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
##### Ansible Configuration:
##### Environment:
raspbian
##### Summary:
user module group param is expected to accept an array
##### Steps To Reproduce:
user:
name: foo
groups:
- a
- b
- c
append: true
##### Expected Results:
user foo added to groups a, b and c
##### Actual Results:
group does not exist ['a','b','c']
",1,user group not accepting an actual array issue type bug report plugin name user ansible version ansible config file configured module search path default w o overrides ansible configuration environment raspbian summary user module group param is expected to accept an array steps to reproduce user name foo groups a b c append true expected results user foo added to groups a b and c actual results group does not exist ,1
4377,22284608945.0,IssuesEvent,2022-06-11 12:19:24,BioArchLinux/Packages,https://api.github.com/repos/BioArchLinux/Packages,closed,[MAINTAIN] groHMM,maintain,"
upstream error
**Log of the bug**
http://bioconductor.org/checkResults/release/bioc-LATEST/groHMM/
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
",True,"[MAINTAIN] groHMM -
upstream error
**Log of the bug**
http://bioconductor.org/checkResults/release/bioc-LATEST/groHMM/
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
Add any other context about the problem here.
",1, grohmm please report the error of one package in one issue use multi issues to report multi bugs thanks upstream error log of the bug packages please complete the following information package name description add any other context about the problem here ,1
2788,9998010740.0,IssuesEvent,2019-07-12 06:56:08,RalfKoban/MiKo-Analyzers,https://api.github.com/repos/RalfKoban/MiKo-Analyzers,opened,Do not use TimeSpan ctors directly,Area: analyzer Area: maintainability feature,"When it comes to code readability, the creation of `TimeSpan` values is hard to read.
This is due to the nature of the ctors that have a lot of parameters - it cannot be easily detected which value is for which parameter.
Example:
```C#
var interval = new TimeSpan(42, 08, 15);
var interval = new TimeSpan(08, 15, 47, 11);
var interval = new TimeSpan(42, 08, 15, 47, 11);
```
The `TimeSpan` type provides static methods, such as `FromMinutes`, `FromMilliseconds`, etc.
So using them would be better because now the value can be easily spot.
````C#
Thread.Sleep(new TimeSpan(0, 3, 0));
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
However, it still is cumbersome to read. Therefore, extension methods could be used.
```C#
Thread.Sleep(3.Minutes());
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
The extension method itself could look like
```C#
public static TimeSpan Minutes(this int value) => TimeSpan.FromMinutes(value);
```",True,"Do not use TimeSpan ctors directly - When it comes to code readability, the creation of `TimeSpan` values is hard to read.
This is due to the nature of the ctors that have a lot of parameters - it cannot be easily detected which value is for which parameter.
Example:
```C#
var interval = new TimeSpan(42, 08, 15);
var interval = new TimeSpan(08, 15, 47, 11);
var interval = new TimeSpan(42, 08, 15, 47, 11);
```
The `TimeSpan` type provides static methods, such as `FromMinutes`, `FromMilliseconds`, etc.
So using them would be better because now the value can be easily spot.
````C#
Thread.Sleep(new TimeSpan(0, 3, 0));
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
However, it still is cumbersome to read. Therefore, extension methods could be used.
```C#
Thread.Sleep(3.Minutes());
vs.
Thread.Sleep(TimeSpan.FromMinutes(3));
```
The extension method itself could look like
```C#
public static TimeSpan Minutes(this int value) => TimeSpan.FromMinutes(value);
```",1,do not use timespan ctors directly when it comes to code readability the creation of timespan values is hard to read this is due to the nature of the ctors that have a lot of parameters it cannot be easily detected which value is for which parameter example c var interval new timespan var interval new timespan var interval new timespan the timespan type provides static methods such as fromminutes frommilliseconds etc so using them would be better because now the value can be easily spot c thread sleep new timespan vs thread sleep timespan fromminutes however it still is cumbersome to read therefore extension methods could be used c thread sleep minutes vs thread sleep timespan fromminutes the extension method itself could look like c public static timespan minutes this int value timespan fromminutes value ,1
2104,7124344077.0,IssuesEvent,2018-01-19 18:31:17,clearlinux/swupd-client,https://api.github.com/repos/clearlinux/swupd-client,closed,Consider adding a string_free() function,maintainability,"So that the code is consistent about resetting a pointer to NULL after freeing dynamic memory allocated at that location, consider adding a `string_free()` wrapper function. For an implementation idea, see the code snippet below (from #356).
```
void free_string(char **s)
{
if (s) {
free(*s);
*s = NULL;
}
}
```",True,"Consider adding a string_free() function - So that the code is consistent about resetting a pointer to NULL after freeing dynamic memory allocated at that location, consider adding a `string_free()` wrapper function. For an implementation idea, see the code snippet below (from #356).
```
void free_string(char **s)
{
if (s) {
free(*s);
*s = NULL;
}
}
```",1,consider adding a string free function so that the code is consistent about resetting a pointer to null after freeing dynamic memory allocated at that location consider adding a string free wrapper function for an implementation idea see the code snippet below from void free string char s if s free s s null ,1
400822,27301874598.0,IssuesEvent,2023-02-24 03:09:54,VIP-LES/EosPayload,https://api.github.com/repos/VIP-LES/EosPayload,closed,Remote OrchEOStrator Debug Technical Specification,documentation,"Exact functionality
How it is implemented?
1/2 to 1 page",1.0,"Remote OrchEOStrator Debug Technical Specification - Exact functionality
How it is implemented?
1/2 to 1 page",0,remote orcheostrator debug technical specification exact functionality how it is implemented to page,0
26244,11277180447.0,IssuesEvent,2020-01-15 01:50:55,yoshi1125hisa/node-bbs,https://api.github.com/repos/yoshi1125hisa/node-bbs,closed,CVE-2019-18797 (Medium) detected in opennms-opennms-source-24.1.3-1,security vulnerability,"## CVE-2019-18797 - Medium Severity Vulnerability
Vulnerable Library - opennmsopennms-source-24.1.3-1
A Java based fault and performance management system
* The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-18797 (Medium) detected in opennms-opennms-source-24.1.3-1 - ## CVE-2019-18797 - Medium Severity Vulnerability
Vulnerable Library - opennmsopennms-source-24.1.3-1
A Java based fault and performance management system
* The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in opennms opennms source cve medium severity vulnerability vulnerable library opennmsopennms source a java based fault and performance management system library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries node bbs node modules nan nan callbacks pre inl h node bbs node modules node sass src libsass src expand hpp node bbs node modules node sass src libsass src expand cpp node bbs node modules node sass src sass types factory cpp node bbs node modules js test yoshinoya js node bbs node modules node sass src sass types boolean cpp node bbs node modules node sass src libsass src util hpp node bbs node modules node sass src sass types value h node bbs node modules node sass src libsass src emitter hpp node bbs node modules nan nan converters pre inl h node bbs node modules node sass src callback bridge h node bbs node modules node sass src libsass src file cpp node bbs node modules node sass src libsass src sass cpp node bbs node modules nan nan persistent inl h node bbs node modules node sass src libsass src operation hpp node bbs node modules nan nan persistent pre inl h node bbs node modules node sass src libsass src operators hpp node bbs node modules node sass src libsass src constants hpp node bbs node modules node sass src libsass src error handling hpp node bbs node modules nan nan implementation pre inl h node bbs node modules js test dankogai js node bbs node modules node sass src custom importer bridge cpp node bbs node modules node sass src libsass src parser hpp node bbs node modules node sass src libsass src constants cpp node bbs node modules node sass src sass types list cpp node bbs node modules node sass src libsass src cssize cpp node bbs node modules node sass src libsass src functions hpp node bbs node modules node sass src libsass src util cpp node bbs node modules node sass src custom function bridge cpp node bbs node modules nan nan typedarray contents h node bbs node modules node sass src custom importer bridge h node bbs node modules node sass src libsass src bind cpp node bbs node modules nan nan json h node bbs node modules node sass src libsass src eval hpp node bbs node modules nan nan converters h node bbs node modules node sass src libsass src backtrace cpp node bbs node modules node sass src libsass src extend cpp node bbs node modules node sass src sass context wrapper h node bbs node modules node sass src sass types sass value wrapper h node bbs node modules node sass src libsass src error handling cpp node bbs node modules node sass src libsass src debugger hpp node bbs node modules node sass src libsass src emitter cpp node bbs node modules node sass src sass types number cpp node bbs node modules node sass src sass types color h node bbs node modules nan nan new h node bbs node modules node sass src libsass src sass values cpp node bbs node modules node sass src libsass src ast hpp node bbs node modules node sass src libsass src output cpp node bbs node modules node sass src libsass src check nesting cpp node bbs node modules node sass src sass types null cpp node bbs node modules node sass src libsass src ast def macros hpp node bbs node modules node sass src libsass src functions cpp node bbs node modules node sass src libsass src cssize hpp node bbs node modules node sass src libsass src prelexer cpp node bbs node modules node sass src libsass src ast cpp node bbs node modules node sass src libsass src to c cpp node bbs node modules node sass src libsass src to value hpp node bbs node modules node sass src libsass src ast fwd decl hpp node bbs node modules nan nan callbacks h node bbs node modules node sass src libsass src inspect hpp node bbs node modules node sass src sass types color cpp node bbs node modules node sass src libsass src values cpp node bbs node modules node sass src sass context wrapper cpp node bbs node modules node sass src sass types list h node bbs node modules node sass src libsass src check nesting hpp node bbs node modules nan nan define own property helper h node bbs node modules js attic test moment js node bbs node modules node sass src sass types map cpp node bbs node modules node sass src libsass src to value cpp node bbs node modules node sass src libsass src context cpp node bbs node modules node sass src sass types string cpp node bbs node modules node sass src libsass src sass context cpp node bbs node modules node sass src libsass src prelexer hpp node bbs node modules node sass src libsass src context hpp node bbs node modules node sass src sass types boolean h node bbs node modules nan nan private h node bbs node modules node sass src libsass src eval cpp vulnerability details libsass has uncontrolled recursion in sass eval operator sass binary expression in eval cpp publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
17287,10679095297.0,IssuesEvent,2019-10-21 18:35:12,cityofaustin/atd-vz-data,https://api.github.com/repos/cityofaustin/atd-vz-data,opened,VZE: Remove Sidebar?,Project: Vision Zero Crash Data System Service: Dev Type: Enhancement Workgroup: VZ,"For discussion: do we even need the sidebar? It takes up a 1/6th of the page width. E.g. can we move to a header row?
",1.0,"VZE: Remove Sidebar? - For discussion: do we even need the sidebar? It takes up a 1/6th of the page width. E.g. can we move to a header row?
",0,vze remove sidebar for discussion do we even need the sidebar it takes up a of the page width e g can we move to a header row ,0
3902,17376851908.0,IssuesEvent,2021-07-30 23:28:21,chorman0773/Clever-ISA,https://api.github.com/repos/chorman0773/Clever-ISA,closed,Encodings (within square brackets) do not denote unit size.,I-unclear S-blocked-on-maintainer X-generic,"Currently the document indicates encodings as serieses of named bits delimited by square brackets and designates the meanings of each bit by group. However, it is not made clear that these groups are bits. This should be solved",True,"Encodings (within square brackets) do not denote unit size. - Currently the document indicates encodings as serieses of named bits delimited by square brackets and designates the meanings of each bit by group. However, it is not made clear that these groups are bits. This should be solved",1,encodings within square brackets do not denote unit size currently the document indicates encodings as serieses of named bits delimited by square brackets and designates the meanings of each bit by group however it is not made clear that these groups are bits this should be solved,1
513773,14926575696.0,IssuesEvent,2021-01-24 12:05:36,robingenz/dhbw-dualis-app,https://api.github.com/repos/robingenz/dhbw-dualis-app,closed,bug: timeout causes unknown error,bug/fix priority: medium,"```
2020-11-29 13:31:53.576 10548-11382/de.robingenz.dhbw.dualis W/Cordova-Plugin-HTTP: Request timed out
com.silkimen.http.HttpRequest$HttpRequestException: java.net.SocketTimeoutException: timeout
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1449)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketTimeoutException: timeout
at com.android.okhttp.okio.Okio$3.newTimeoutException(Okio.java:214)
at com.android.okhttp.okio.AsyncTimeout.exit(AsyncTimeout.java:263)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:217)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketException: socket is closed
at com.android.org.conscrypt.ConscryptFileDescriptorSocket$SSLInputStream.read(ConscryptFileDescriptorSocket.java:554)
at com.android.okhttp.okio.Okio$2.read(Okio.java:138)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:213)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
2020-11-29 13:31:53.586 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 41 - Msg: [NativeHttpService] [object Object]
2020-11-29 13:31:53.814 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 135 - Msg: Error: Uncaught (in promise): Error: [NativeHttpService] Unknown error occurred.
Error: [NativeHttpService] Unknown error occurred.
```",1.0,"bug: timeout causes unknown error - ```
2020-11-29 13:31:53.576 10548-11382/de.robingenz.dhbw.dualis W/Cordova-Plugin-HTTP: Request timed out
com.silkimen.http.HttpRequest$HttpRequestException: java.net.SocketTimeoutException: timeout
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1449)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketTimeoutException: timeout
at com.android.okhttp.okio.Okio$3.newTimeoutException(Okio.java:214)
at com.android.okhttp.okio.AsyncTimeout.exit(AsyncTimeout.java:263)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:217)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
Caused by: java.net.SocketException: socket is closed
at com.android.org.conscrypt.ConscryptFileDescriptorSocket$SSLInputStream.read(ConscryptFileDescriptorSocket.java:554)
at com.android.okhttp.okio.Okio$2.read(Okio.java:138)
at com.android.okhttp.okio.AsyncTimeout$2.read(AsyncTimeout.java:213)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:307)
at com.android.okhttp.okio.RealBufferedSource.indexOf(RealBufferedSource.java:301)
at com.android.okhttp.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:197)
at com.android.okhttp.internal.http.Http1xStream.readResponse(Http1xStream.java:188)
at com.android.okhttp.internal.http.Http1xStream.readResponseHeaders(Http1xStream.java:129)
at com.android.okhttp.internal.http.HttpEngine.readNetworkResponse(HttpEngine.java:750)
at com.android.okhttp.internal.http.HttpEngine.readResponse(HttpEngine.java:622)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:475)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:411)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponseCode(HttpURLConnectionImpl.java:542)
at com.android.okhttp.internal.huc.DelegatingHttpsURLConnection.getResponseCode(DelegatingHttpsURLConnection.java:106)
at com.android.okhttp.internal.huc.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:30)
at com.silkimen.http.HttpRequest.code(HttpRequest.java:1447)
at com.silkimen.http.HttpRequest.stream(HttpRequest.java:1740)
at com.silkimen.http.HttpRequest.buffer(HttpRequest.java:1729)
at com.silkimen.http.HttpRequest.receive(HttpRequest.java:1856)
at com.silkimen.cordovahttp.CordovaHttpBase.processResponse(CordovaHttpBase.java:195)
at com.silkimen.cordovahttp.CordovaHttpBase.run(CordovaHttpBase.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:462)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:919)
2020-11-29 13:31:53.586 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 41 - Msg: [NativeHttpService] [object Object]
2020-11-29 13:31:53.814 10548-10548/de.robingenz.dhbw.dualis E/Capacitor/Console: File: http://192.168.2.120:8100/main.js - Line 135 - Msg: Error: Uncaught (in promise): Error: [NativeHttpService] Unknown error occurred.
Error: [NativeHttpService] Unknown error occurred.
```",0,bug timeout causes unknown error de robingenz dhbw dualis w cordova plugin http request timed out com silkimen http httprequest httprequestexception java net sockettimeoutexception timeout at com silkimen http httprequest code httprequest java at com silkimen http httprequest stream httprequest java at com silkimen http httprequest buffer httprequest java at com silkimen http httprequest receive httprequest java at com silkimen cordovahttp cordovahttpbase processresponse cordovahttpbase java at com silkimen cordovahttp cordovahttpbase run cordovahttpbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java net sockettimeoutexception timeout at com android okhttp okio okio newtimeoutexception okio java at com android okhttp okio asynctimeout exit asynctimeout java at com android okhttp okio asynctimeout read asynctimeout java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource realbufferedsource java at com android okhttp internal http readresponse java at com android okhttp internal http readresponseheaders java at com android okhttp internal http httpengine readnetworkresponse httpengine java at com android okhttp internal http httpengine readresponse httpengine java at com android okhttp internal huc httpurlconnectionimpl execute httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponse httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponsecode httpurlconnectionimpl java at com android okhttp internal huc delegatinghttpsurlconnection getresponsecode delegatinghttpsurlconnection java at com android okhttp internal huc httpsurlconnectionimpl getresponsecode httpsurlconnectionimpl java at com silkimen http httprequest code httprequest java at com silkimen http httprequest stream httprequest java at com silkimen http httprequest buffer httprequest java at com silkimen http httprequest receive httprequest java at com silkimen cordovahttp cordovahttpbase processresponse cordovahttpbase java at com silkimen cordovahttp cordovahttpbase run cordovahttpbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java net socketexception socket is closed at com android org conscrypt conscryptfiledescriptorsocket sslinputstream read conscryptfiledescriptorsocket java at com android okhttp okio okio read okio java at com android okhttp okio asynctimeout read asynctimeout java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource indexof realbufferedsource java at com android okhttp okio realbufferedsource realbufferedsource java at com android okhttp internal http readresponse java at com android okhttp internal http readresponseheaders java at com android okhttp internal http httpengine readnetworkresponse httpengine java at com android okhttp internal http httpengine readresponse httpengine java at com android okhttp internal huc httpurlconnectionimpl execute httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponse httpurlconnectionimpl java at com android okhttp internal huc httpurlconnectionimpl getresponsecode httpurlconnectionimpl java at com android okhttp internal huc delegatinghttpsurlconnection getresponsecode delegatinghttpsurlconnection java at com android okhttp internal huc httpsurlconnectionimpl getresponsecode httpsurlconnectionimpl java at com silkimen http httprequest code httprequest java at com silkimen http httprequest stream httprequest java at com silkimen http httprequest buffer httprequest java at com silkimen http httprequest receive httprequest java at com silkimen cordovahttp cordovahttpbase processresponse cordovahttpbase java at com silkimen cordovahttp cordovahttpbase run cordovahttpbase java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java de robingenz dhbw dualis e capacitor console file line msg de robingenz dhbw dualis e capacitor console file line msg error uncaught in promise error unknown error occurred error unknown error occurred ,0
3438,13211537136.0,IssuesEvent,2020-08-15 23:57:34,ansible/ansible,https://api.github.com/repos/ansible/ansible,closed,terraform module '-no-color' conflicts with TF_CLI_ARGS_plan env variable,affects_2.9 bot_closed bug cloud collection collection:community.general module needs_collection_redirect needs_maintainer needs_triage python3 support:community,"##### SUMMARY
terraform module sets `-no-color` but if `-no-color` is set in `TF_CLI_ARGS_plan` this would fail, it's `terraform` itself which fails like this, but here ansible module sets itself explicitly `-no-color` in code.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
terraform module
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /home/jiri/.ansible.cfg
configured module search path = ['/home/jiri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /home/jiri/stow/ansible/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### STEPS TO REPRODUCE
```
$ export TF_CLI_ARGS_plan=-no-color
$ cat > /tmp/test.yml < /tmp/test.yml <API level 16
3. Created AVD with option GPU on
4. Run the application
What is the expected output? What do you see instead?
Expected
Should show the Pagecurl but
Actual
No rendering,Black screen
What version of the product are you using? On what operating system?
Windows 7 32bit,Android Emulator
Please provide any additional information below.
```
Original issue reported on code.google.com by `kuvetha...@gmail.com` on 17 Jul 2012 at 6:17",1.0,"Page Curl is not working in 4.1 - ```
What steps will reproduce the problem?
1. Downloaded pagecurl project
2. Imported to Eclipse ->API level 16
3. Created AVD with option GPU on
4. Run the application
What is the expected output? What do you see instead?
Expected
Should show the Pagecurl but
Actual
No rendering,Black screen
What version of the product are you using? On what operating system?
Windows 7 32bit,Android Emulator
Please provide any additional information below.
```
Original issue reported on code.google.com by `kuvetha...@gmail.com` on 17 Jul 2012 at 6:17",0,page curl is not working in what steps will reproduce the problem downloaded pagecurl project imported to eclipse api level created avd with option gpu on run the application what is the expected output what do you see instead expected should show the pagecurl but actual no rendering black screen what version of the product are you using on what operating system windows android emulator please provide any additional information below original issue reported on code google com by kuvetha gmail com on jul at ,0
2650,9083347076.0,IssuesEvent,2019-02-17 19:45:39,lrozenblyum/chess,https://api.github.com/repos/lrozenblyum/chess,opened,Enforce Maven3 usage,CI maintainability,"Caused by #47
Discovered during #277
We should enforce maven3 usage in the project. Now we don't have forced validation of Maven.
Command execution
versions:display-plugin-updates
shows
[ERROR] Project does not define required minimum version of Maven.
[ERROR] Update the pom.xml to contain maven-enforcer-plugin to
[ERROR] force the Maven version which is needed to build this project.
[ERROR] See https://maven.apache.org/enforcer/enforcer-rules/requireMavenVersion.html
[ERROR] Using the minimum version of Maven: 3.0.5
",True,"Enforce Maven3 usage - Caused by #47
Discovered during #277
We should enforce maven3 usage in the project. Now we don't have forced validation of Maven.
Command execution
versions:display-plugin-updates
shows
[ERROR] Project does not define required minimum version of Maven.
[ERROR] Update the pom.xml to contain maven-enforcer-plugin to
[ERROR] force the Maven version which is needed to build this project.
[ERROR] See https://maven.apache.org/enforcer/enforcer-rules/requireMavenVersion.html
[ERROR] Using the minimum version of Maven: 3.0.5
",1,enforce usage caused by discovered during we should enforce usage in the project now we don t have forced validation of maven command execution versions display plugin updates shows project does not define required minimum version of maven update the pom xml to contain maven enforcer plugin to force the maven version which is needed to build this project see using the minimum version of maven ,1
221290,17010902794.0,IssuesEvent,2021-07-02 04:18:55,devxas/airta-home,https://api.github.com/repos/devxas/airta-home,opened,design for seperate sitemap engine as microservice,documentation enhancement,"extract current sitemap logic from airta-engine, to handle a refactored sitemap logic and function.",1.0,"design for seperate sitemap engine as microservice - extract current sitemap logic from airta-engine, to handle a refactored sitemap logic and function.",0,design for seperate sitemap engine as microservice extract current sitemap logic from airta engine to handle a refactored sitemap logic and function ,0
3396,13170558886.0,IssuesEvent,2020-08-11 15:19:42,carbon-design-system/carbon,https://api.github.com/repos/carbon-design-system/carbon,reopened,[DataTable]: Disabled batch action buttons are visually too prominent,component: data-table role: dev 🤖 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬 type: bug 🐛,"## Detailed description
> Describe in detail the issue you're having.
When a button in the data table batch action bar is disabled, it draws the user's attention more than the actions they are able to perform. Also it looks a bit out-of-place in general and might be confusing to the user since the background color is close to the ""selected row"" background color.
> Is this issue related to a specific component?
`DataTable.TableBatchAction`
> What version of the Carbon Design System are you using?
`carbon-components@10.16.0`
`carbon-components-react@7.16.0`
## Steps to reproduce the issue
1. Open demo: https://6t41j.csb.app/
2. Select at least one row
## Additional information

It seems there is also an inconsistency in the text color compared to the sketch kit, although that's not the issue I think. 😄

",True,"[DataTable]: Disabled batch action buttons are visually too prominent - ## Detailed description
> Describe in detail the issue you're having.
When a button in the data table batch action bar is disabled, it draws the user's attention more than the actions they are able to perform. Also it looks a bit out-of-place in general and might be confusing to the user since the background color is close to the ""selected row"" background color.
> Is this issue related to a specific component?
`DataTable.TableBatchAction`
> What version of the Carbon Design System are you using?
`carbon-components@10.16.0`
`carbon-components-react@7.16.0`
## Steps to reproduce the issue
1. Open demo: https://6t41j.csb.app/
2. Select at least one row
## Additional information

It seems there is also an inconsistency in the text color compared to the sketch kit, although that's not the issue I think. 😄

",1, disabled batch action buttons are visually too prominent detailed description describe in detail the issue you re having when a button in the data table batch action bar is disabled it draws the user s attention more than the actions they are able to perform also it looks a bit out of place in general and might be confusing to the user since the background color is close to the selected row background color is this issue related to a specific component datatable tablebatchaction what version of the carbon design system are you using carbon components carbon components react steps to reproduce the issue open demo select at least one row additional information it seems there is also an inconsistency in the text color compared to the sketch kit although that s not the issue i think 😄 ,1
266,3025489618.0,IssuesEvent,2015-08-03 09:00:25,mesosphere/marathon,https://api.github.com/repos/mesosphere/marathon,closed,Run Integration Tests with Java 8 as well,OKR Maintainability,"As a preparation to https://github.com/mesosphere/marathon/issues/1544 (upgrading to Java 8), we should run our Integration Tests against Java 8 as well.",True,"Run Integration Tests with Java 8 as well - As a preparation to https://github.com/mesosphere/marathon/issues/1544 (upgrading to Java 8), we should run our Integration Tests against Java 8 as well.",1,run integration tests with java as well as a preparation to upgrading to java we should run our integration tests against java as well ,1
364,3343624183.0,IssuesEvent,2015-11-15 17:14:02,caskroom/homebrew-cask,https://api.github.com/repos/caskroom/homebrew-cask,opened,Delete stale branches,awaiting maintainer feedback,"Was looking at the branches, and it seems like there's a bit of cruft. Figured I'd make an issue just to confirm instead of deleting without feedback.
Are the following branches safe to delete?
```
revert-10854-master : Updated 7 months ago by alebcay
f-https-sourceforge-urls: Updated 8 months ago by phinze
fix-alfred-preference-install: Updated 2 years ago by phinze
audit-links: Updated 3 years ago by phinze
gh-pages: Updated 3 years ago by phinze (caskroom/caskroom.github.io seems to take care of this)
```",True,"Delete stale branches - Was looking at the branches, and it seems like there's a bit of cruft. Figured I'd make an issue just to confirm instead of deleting without feedback.
Are the following branches safe to delete?
```
revert-10854-master : Updated 7 months ago by alebcay
f-https-sourceforge-urls: Updated 8 months ago by phinze
fix-alfred-preference-install: Updated 2 years ago by phinze
audit-links: Updated 3 years ago by phinze
gh-pages: Updated 3 years ago by phinze (caskroom/caskroom.github.io seems to take care of this)
```",1,delete stale branches was looking at the branches and it seems like there s a bit of cruft figured i d make an issue just to confirm instead of deleting without feedback are the following branches safe to delete revert master updated months ago by alebcay f https sourceforge urls updated months ago by phinze fix alfred preference install updated years ago by phinze audit links updated years ago by phinze gh pages updated years ago by phinze caskroom caskroom github io seems to take care of this ,1
96415,8614218643.0,IssuesEvent,2018-11-19 16:54:04,SME-Issues/issues,https://api.github.com/repos/SME-Issues/issues,closed,Query Invoice Tests Comprehension Partial - 19/11/2018 - 5004,NLP Api pulse_tests,"**Query Invoice Tests Comprehension Partial**
- Total: 21
- Passed: 7
- **Full Pass: 6 (30%)**
- Not Understood: 1
- Failed but Understood: 14 (70%)
",1.0,"Query Invoice Tests Comprehension Partial - 19/11/2018 - 5004 - **Query Invoice Tests Comprehension Partial**
- Total: 21
- Passed: 7
- **Full Pass: 6 (30%)**
- Not Understood: 1
- Failed but Understood: 14 (70%)
",0,query invoice tests comprehension partial query invoice tests comprehension partial total passed full pass not understood failed but understood ,0
2364,8440431454.0,IssuesEvent,2018-10-18 07:15:00,Kristinita/Erics-Green-Room,https://api.github.com/repos/Kristinita/Erics-Green-Room,opened,feat(enhancement): шаблоны,need-maintainer packages,"### 1. Запрос
Неплохо было бы, если б в пакетах комнат Эрика можно было бы использовать шаблоны: то есть, когда один текст заменялся бы другим.
### 2. Аргументация
#### 2.1. Основное предназначение
Собственно, нужны для того же, для чего применяются шаблоны в программировании.
1. Вместо того, чтобы постоянно писать один и тот же длинный текст, легче применить шаблон.
1. Повторы обычно считаются [**признаком code smell**](https://www.artima.com/intv/dry.html) в программировании и «[**свидетельствуют о стилистической беспомощности автора»**](http://www.textologia.ru/russkiy/stilistika/lexsicheskaya/povtorenie-slov/948/?q=463&n=948)» для художественных, публицистических и научных статей. Почему они должны существовать в наших пакетах?
1. Шаблоны сокращают размеры пакетов.
1. При наличии шаблонов, если что-то захотим изменить, то достаточно будет изменить шаблон, а не производить замены везде.
#### 2.2. Скрытие названий сайтов
Больше вероятность, что от проблем будет убережён Альфа-хаб в том числе.
### 3. Пример технической реализации
Пользователь выбирает пакет для отыгрыша → какой-нибудь модуль, к примеру, [**replace-in-file**](https://www.npmjs.com/package/replace-in-file) производит замены в выбранном файле → пользователь играет пакет, где проведены замены.
### 4. Глобальные шаблоны
Которые будут применяться для всех пакетов.
#### 4.1. Темы
По состоянию на 17 октября 2018 38 пакетов комнат Эрика нуждаются в переписывании по причине [**реализации отыгрыша по темам**](https://github.com/Kristinita/Erics-Green-Room/issues/56). В 31 из них темы будут в следующем формате:
```
Назовите A по B
```
или:
```
Назовите A по B и С
```
+ Примеры замен:
+ `{по2|автора|художественному произведению}` → `Назовите автора по художественному произведению`,
+ `{по3|место катастрофы|году|типу}` → `Назовите место катастрофы по году и типу`.
#### 4.2. Метаданные шапки
Примеры:
+ `{сни}` → `Ссылка(и) на источник(и)`,
+ `{авт}` → `Автор(ы), редакторы и рецензенты (если есть) материалов источника(ов)`.
#### 4.3. Имена сайтов
Производятся прямые замены:
+ `{tw}` → `twitter`,
+ `{fl}` → `facebook`.
И так далее.
### 5. Локальные
Которые будут применяться только в пределах пакета.
Пример — если в шапке пакета содержится следующий текст:
```text
Локальные шаблоны:
{нмнк} — на момент написания книги
```
Это значит, что все вхождения `нмнк` в данном пакете перед отыгрышем будут заменены на `на момент написания книги`.
Спасибо.",True,"feat(enhancement): шаблоны - ### 1. Запрос
Неплохо было бы, если б в пакетах комнат Эрика можно было бы использовать шаблоны: то есть, когда один текст заменялся бы другим.
### 2. Аргументация
#### 2.1. Основное предназначение
Собственно, нужны для того же, для чего применяются шаблоны в программировании.
1. Вместо того, чтобы постоянно писать один и тот же длинный текст, легче применить шаблон.
1. Повторы обычно считаются [**признаком code smell**](https://www.artima.com/intv/dry.html) в программировании и «[**свидетельствуют о стилистической беспомощности автора»**](http://www.textologia.ru/russkiy/stilistika/lexsicheskaya/povtorenie-slov/948/?q=463&n=948)» для художественных, публицистических и научных статей. Почему они должны существовать в наших пакетах?
1. Шаблоны сокращают размеры пакетов.
1. При наличии шаблонов, если что-то захотим изменить, то достаточно будет изменить шаблон, а не производить замены везде.
#### 2.2. Скрытие названий сайтов
Больше вероятность, что от проблем будет убережён Альфа-хаб в том числе.
### 3. Пример технической реализации
Пользователь выбирает пакет для отыгрыша → какой-нибудь модуль, к примеру, [**replace-in-file**](https://www.npmjs.com/package/replace-in-file) производит замены в выбранном файле → пользователь играет пакет, где проведены замены.
### 4. Глобальные шаблоны
Которые будут применяться для всех пакетов.
#### 4.1. Темы
По состоянию на 17 октября 2018 38 пакетов комнат Эрика нуждаются в переписывании по причине [**реализации отыгрыша по темам**](https://github.com/Kristinita/Erics-Green-Room/issues/56). В 31 из них темы будут в следующем формате:
```
Назовите A по B
```
или:
```
Назовите A по B и С
```
+ Примеры замен:
+ `{по2|автора|художественному произведению}` → `Назовите автора по художественному произведению`,
+ `{по3|место катастрофы|году|типу}` → `Назовите место катастрофы по году и типу`.
#### 4.2. Метаданные шапки
Примеры:
+ `{сни}` → `Ссылка(и) на источник(и)`,
+ `{авт}` → `Автор(ы), редакторы и рецензенты (если есть) материалов источника(ов)`.
#### 4.3. Имена сайтов
Производятся прямые замены:
+ `{tw}` → `twitter`,
+ `{fl}` → `facebook`.
И так далее.
### 5. Локальные
Которые будут применяться только в пределах пакета.
Пример — если в шапке пакета содержится следующий текст:
```text
Локальные шаблоны:
{нмнк} — на момент написания книги
```
Это значит, что все вхождения `нмнк` в данном пакете перед отыгрышем будут заменены на `на момент написания книги`.
Спасибо.",1,feat enhancement шаблоны запрос неплохо было бы если б в пакетах комнат эрика можно было бы использовать шаблоны то есть когда один текст заменялся бы другим аргументация основное предназначение собственно нужны для того же для чего применяются шаблоны в программировании вместо того чтобы постоянно писать один и тот же длинный текст легче применить шаблон повторы обычно считаются в программировании и « для художественных публицистических и научных статей почему они должны существовать в наших пакетах шаблоны сокращают размеры пакетов при наличии шаблонов если что то захотим изменить то достаточно будет изменить шаблон а не производить замены везде скрытие названий сайтов больше вероятность что от проблем будет убережён альфа хаб в том числе пример технической реализации пользователь выбирает пакет для отыгрыша → какой нибудь модуль к примеру производит замены в выбранном файле → пользователь играет пакет где проведены замены глобальные шаблоны которые будут применяться для всех пакетов темы по состоянию на октября пакетов комнат эрика нуждаются в переписывании по причине в из них темы будут в следующем формате назовите a по b или назовите a по b и с примеры замен автора художественному произведению → назовите автора по художественному произведению место катастрофы году типу → назовите место катастрофы по году и типу метаданные шапки примеры сни → ссылка и на источник и авт → автор ы редакторы и рецензенты если есть материалов источника ов имена сайтов производятся прямые замены tw → twitter fl → facebook и так далее локальные которые будут применяться только в пределах пакета пример — если в шапке пакета содержится следующий текст text локальные шаблоны нмнк — на момент написания книги это значит что все вхождения нмнк в данном пакете перед отыгрышем будут заменены на на момент написания книги спасибо ,1
200,2832153909.0,IssuesEvent,2015-05-25 04:39:40,tgstation/-tg-station,https://api.github.com/repos/tgstation/-tg-station,closed,Flag NOSHIELD is unused,Maintainability - Hinders improvements Not a bug,"Defined in __DEFINES/flags.dm
The item flag NOSHIELD is meant to be used to allow weapons to bypass the riot shield, however while it is defined, it is not actually used anywhere in the code.",True,"Flag NOSHIELD is unused - Defined in __DEFINES/flags.dm
The item flag NOSHIELD is meant to be used to allow weapons to bypass the riot shield, however while it is defined, it is not actually used anywhere in the code.",1,flag noshield is unused defined in defines flags dm the item flag noshield is meant to be used to allow weapons to bypass the riot shield however while it is defined it is not actually used anywhere in the code ,1
1618,6572644447.0,IssuesEvent,2017-09-11 04:01:39,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,documentation error for sl_vm module,affects_2.1 cloud docs_report waiting_on_maintainer,"
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
sl_vm
##### ANSIBLE VERSION
```
2.1.2
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
In the documentation it says option **wait_timeout** but it should be **wait_time** indeed.
##### STEPS TO REPRODUCE
```
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```
```
",True,"documentation error for sl_vm module -
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
sl_vm
##### ANSIBLE VERSION
```
2.1.2
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
In the documentation it says option **wait_timeout** but it should be **wait_time** indeed.
##### STEPS TO REPRODUCE
```
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```
```
",1,documentation error for sl vm module issue type documentation report component name sl vm ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary in the documentation it says option wait timeout but it should be wait time indeed steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used expected results actual results ,1
5085,25998346362.0,IssuesEvent,2022-12-20 13:28:10,software-mansion/react-native-reanimated,https://api.github.com/repos/software-mansion/react-native-reanimated,opened,☂️ Deadlock/ANR in performOperations,Platform: Android Platform: iOS Bug Maintainer issue,"### Description
This is an umbrella issue for ANRs/deadlocks on Android/iOS in NodesManager.performOperations.
The bug was introduced in #1215.
### Steps to reproduce
We don't have a repro yet but it needs to use modal or datetime picker as well as animate layout props using Reanimated.
### Snack or a link to a repository
work in progress
### Reanimated version
>= 2.0.0
### React Native version
n/d
### Platforms
Android, iOS
### JavaScript runtime
None
### Workflow
None
### Architecture
None
### Build type
None
### Device
None
### Device model
_No response_
### Acknowledgements
Yes",True,"☂️ Deadlock/ANR in performOperations - ### Description
This is an umbrella issue for ANRs/deadlocks on Android/iOS in NodesManager.performOperations.
The bug was introduced in #1215.
### Steps to reproduce
We don't have a repro yet but it needs to use modal or datetime picker as well as animate layout props using Reanimated.
### Snack or a link to a repository
work in progress
### Reanimated version
>= 2.0.0
### React Native version
n/d
### Platforms
Android, iOS
### JavaScript runtime
None
### Workflow
None
### Architecture
None
### Build type
None
### Device
None
### Device model
_No response_
### Acknowledgements
Yes",1,☂️ deadlock anr in performoperations description this is an umbrella issue for anrs deadlocks on android ios in nodesmanager performoperations the bug was introduced in steps to reproduce we don t have a repro yet but it needs to use modal or datetime picker as well as animate layout props using reanimated snack or a link to a repository work in progress reanimated version react native version n d platforms android ios javascript runtime none workflow none architecture none build type none device none device model no response acknowledgements yes,1
5617,28101303640.0,IssuesEvent,2023-03-30 19:44:39,MozillaFoundation/foundation.mozilla.org,https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org,opened,Factory generates werid text for `article_listing_what_to_read_next.html`,engineering qa maintain,"I happened to spot a weird `A / ABLE` label show up on Percy's snapshot. We should investigate where it is coming from and why.
- [Full page snapshot](https://images.percy.io/0812bfe2823ed1de99f94d9a0b5d66452ff1d6bf6710779a29bc4f55edbaaad9)
- Label in question (cropped screenshot from above):

Not sure if it's coincidental but it seems to match where the date related regression that Percy always nags about.

---
Related ticket: https://github.com/MozillaFoundation/foundation.mozilla.org/issues/10328
",True,"Factory generates werid text for `article_listing_what_to_read_next.html` - I happened to spot a weird `A / ABLE` label show up on Percy's snapshot. We should investigate where it is coming from and why.
- [Full page snapshot](https://images.percy.io/0812bfe2823ed1de99f94d9a0b5d66452ff1d6bf6710779a29bc4f55edbaaad9)
- Label in question (cropped screenshot from above):

Not sure if it's coincidental but it seems to match where the date related regression that Percy always nags about.

---
Related ticket: https://github.com/MozillaFoundation/foundation.mozilla.org/issues/10328
",1,factory generates werid text for article listing what to read next html i happened to spot a weird a able label show up on percy s snapshot we should investigate where it is coming from and why label in question cropped screenshot from above not sure if it s coincidental but it seems to match where the date related regression that percy always nags about related ticket ,1
341532,24702903040.0,IssuesEvent,2022-10-19 16:35:18,franciellyferreira/design-apis-guide,https://api.github.com/repos/franciellyferreira/design-apis-guide,opened,Melhorar a organização dos arquivos da raiz,documentation,"Muitos arquivos na raiz do projeto, necessário organizar melhor e ajustar os links de navegação.",1.0,"Melhorar a organização dos arquivos da raiz - Muitos arquivos na raiz do projeto, necessário organizar melhor e ajustar os links de navegação.",0,melhorar a organização dos arquivos da raiz muitos arquivos na raiz do projeto necessário organizar melhor e ajustar os links de navegação ,0
235298,19322232906.0,IssuesEvent,2021-12-14 07:29:07,pingcap/tidb,https://api.github.com/repos/pingcap/tidb,closed,TiDB CI hang for more then 10 min,type/bug component/test component/tikv severity/major,"## Bug Report
```
[2021-11-23T14:16:43.094Z] FAIL github.com/pingcap/tidb/session 600.096s
```
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
in ci https://ci.pingcap.net/blue/organizations/jenkins/tidb_ghpr_check_2/detail/tidb_ghpr_check_2/47625/pipeline/64
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
### 4. What is your TiDB version? (Required)
master
",1.0,"TiDB CI hang for more then 10 min - ## Bug Report
```
[2021-11-23T14:16:43.094Z] FAIL github.com/pingcap/tidb/session 600.096s
```
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
in ci https://ci.pingcap.net/blue/organizations/jenkins/tidb_ghpr_check_2/detail/tidb_ghpr_check_2/47625/pipeline/64
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
### 4. What is your TiDB version? (Required)
master
",0,tidb ci hang for more then min bug report fail github com pingcap tidb session please answer these questions before submitting your issue thanks minimal reproduce step required in ci what did you expect to see required what did you see instead required what is your tidb version required master ,0
153922,13530712055.0,IssuesEvent,2020-09-15 20:22:04,fga-eps-mds/2020.1-Grupo2-wiki,https://api.github.com/repos/fga-eps-mds/2020.1-Grupo2-wiki,closed,Risk Management Plan Document,documentation eps,"# Description
Eu como gerente gostaria do documento de gerenciamento de riscos para saber como lidar com situações adversas durante o ciclo de vida do produto.",1.0,"Risk Management Plan Document - # Description
Eu como gerente gostaria do documento de gerenciamento de riscos para saber como lidar com situações adversas durante o ciclo de vida do produto.",0,risk management plan document description eu como gerente gostaria do documento de gerenciamento de riscos para saber como lidar com situações adversas durante o ciclo de vida do produto ,0
587333,17613371885.0,IssuesEvent,2021-08-18 06:28:41,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,9gag.com - site is not usable,priority-important browser-focus-geckoview engine-gecko,"
**URL**: https://9gag.com/
**Browser / Version**: Firefox Mobile 91.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
I can't slide the drawer to select open un browser.... Then the home page is locked....
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"9gag.com - site is not usable -
**URL**: https://9gag.com/
**Browser / Version**: Firefox Mobile 91.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
I can't slide the drawer to select open un browser.... Then the home page is locked....
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, com site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description buttons or links not working steps to reproduce i can t slide the drawer to select open un browser then the home page is locked browser configuration none from with ❤️ ,0
4403,22617321211.0,IssuesEvent,2022-06-30 00:20:29,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,`sam sync` does not support custom bucket names,type/ux type/feature area/sam-config area/sync maintainer/need-followup area/accelerate,"
### Description:
I don't use the default SAM bucket, I have my own. `sam sync` does not seem to support this.
### Steps to reproduce:
Do `sam init` and create the zip Python 3.9 ""Hello World"" template.
Create the following samconfig.toml
```toml
version = 0.1
[default]
[default.deploy]
[default.deploy.parameters]
stack_name = ""sam-test""
s3_bucket = ""mybucket""
s3_prefix = ""sam-test""
region = ""us-west-2""
capabilities = ""CAPABILITY_IAM""
```
Run `sam build && sam deploy`, which succeeds.
### Observed result:
`sam sync --stack-name sam-test` gives the following output. You can see it's attempting to use the default managed SAM bucket.
```
2021-12-17 11:40:14,807 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-12-17 11:40:14,812 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-12-17 11:40:14,812 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': '5e92f8cb-75e3-4793-81f8-faee808f01a7', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployWarning', 'warningCount': 0}}]}
2021-12-17 11:40:15,017 | Telemetry response: 200
2021-12-17 11:40:15,018 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': 'd0f3bfd9-c6d7-40db-9c8b-337bf8efcd98', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployConditionWarning', 'warningCount': 0}}]}
2021-12-17 11:40:15,283 | Telemetry response: 200
2021-12-17 11:40:15,284 | Using config file: samconfig.toml, config environment: default
2021-12-17 11:40:15,284 | Expand command line arguments to:
2021-12-17 11:40:15,284 | --template_file=/Users/luhn/Code/audit/test/template.yaml --stack_name=sam-test --dependency_layer --capabilities=('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND')
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-1aupim17uw7m6
Default capabilities applied: ('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND')
To override with customized capabilities, use --capabilities flag or set it in samconfig.toml
2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer
2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer
This feature is currently in beta. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
The SAM CLI will use the AWS Lambda, Amazon API Gateway, and AWS StepFunctions APIs to upload your code without
performing a CloudFormation deployment. This will cause drift in your CloudFormation stack.
**The sync command should only be used against a development stack**.
Confirm that you are synchronizing a development stack and want to turn on beta features.
Enter Y to proceed with the command, or enter N to cancel:
[y/N]: 2021-12-17 11:40:17,467 | [33m
Experimental features are enabled for this session.
Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
[0m
2021-12-17 11:40:17,477 | No Parameters detected in the template
2021-12-17 11:40:17,499 | 2 stacks found in the template
2021-12-17 11:40:17,499 | No Parameters detected in the template
2021-12-17 11:40:17,510 | 2 resources found in the stack
2021-12-17 11:40:17,510 | No Parameters detected in the template
2021-12-17 11:40:17,519 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
2021-12-17 11:40:17,519 | --base-dir is not presented, adjusting uri hello_world/ relative to /Users/luhn/Code/audit/test/template.yaml
2021-12-17 11:40:17,519 | No Parameters detected in the template
2021-12-17 11:40:17,538 | Executing the build using build context.
2021-12-17 11:40:17,538 | Instantiating build definitions
2021-12-17 11:40:17,540 | Same function build definition found, adding function (Previous: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , d23e058e-cbff-4bce-85b2-09954cf33d29, {}, {}, x86_64, []), Current: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , 85a07967-200c-4a31-81df-7700103e6ad7, {}, {}, x86_64, []), Function: Function(name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.9', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/Users/luhn/Code/audit/test/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata=None, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], stack_path=''))
2021-12-17 11:40:17,541 | Async execution started
2021-12-17 11:40:17,541 | Invoking function functools.partial(>, )
2021-12-17 11:40:17,541 | Running incremental build for runtime python3.9 for build definition d23e058e-cbff-4bce-85b2-09954cf33d29
2021-12-17 11:40:17,541 | Waiting for async results
2021-12-17 11:40:17,541 | Manifest is not changed for d23e058e-cbff-4bce-85b2-09954cf33d29, running incremental build
2021-12-17 11:40:17,541 | Building codeuri: /Users/luhn/Code/audit/test/hello_world runtime: python3.9 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
2021-12-17 11:40:17,541 | Building to following folder /Users/luhn/Code/audit/test/.aws-sam/auto-dependency-layer/HelloWorldFunction
2021-12-17 11:40:17,542 | Loading workflow module 'aws_lambda_builders.workflows'
2021-12-17 11:40:17,546 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)'
2021-12-17 11:40:17,548 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)'
2021-12-17 11:40:17,549 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)'
2021-12-17 11:40:17,551 | Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)'
2021-12-17 11:40:17,553 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)'
2021-12-17 11:40:17,555 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)'
2021-12-17 11:40:17,556 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)'
2021-12-17 11:40:17,558 | Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)'
2021-12-17 11:40:17,559 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)'
2021-12-17 11:40:17,559 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)'
2021-12-17 11:40:17,626 | Running workflow 'PythonPipBuilder'
2021-12-17 11:40:17,627 | Running PythonPipBuilder:CopySource
2021-12-17 11:40:17,629 | PythonPipBuilder:CopySource succeeded
2021-12-17 11:40:17,629 | Async execution completed
2021-12-17 11:40:17,630 | Auto creating dependency layer for each function resource into a nested stack
2021-12-17 11:40:17,630 | No Parameters detected in the template
2021-12-17 11:40:17,636 | 2 resources found in the stack sam-test
2021-12-17 11:40:17,636 | No Parameters detected in the template
2021-12-17 11:40:17,641 | Found Serverless function with name='HelloWorldFunction' and CodeUri='.aws-sam/auto-dependency-layer/HelloWorldFunction'
2021-12-17 11:40:17,641 | --base-dir is not presented, adjusting uri .aws-sam/auto-dependency-layer/HelloWorldFunction relative to /Users/luhn/Code/audit/test/template.yaml
Build Succeeded
Built Artifacts : .aws-sam/auto-dependency-layer
Built Template : .aws-sam/auto-dependency-layer/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke -t .aws-sam/auto-dependency-layer/template.yaml
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided --template-file .aws-sam/auto-dependency-layer/template.yaml
2021-12-17 11:40:17,667 | Executing the packaging using package context.
2021-12-17 11:40:18,030 | Unable to export
Traceback (most recent call last):
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py"", line 114, in upload
future.result()
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py"", line 106, in result
return self._coordinator.result()
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py"", line 265, in result
raise self._exception
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py"", line 126, in __call__
return self._execute_main(kwargs)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py"", line 150, in _execute_main
return_value = self._main(**kwargs)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/upload.py"", line 694, in _main
client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py"", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py"", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py"", line 126, in export
self.do_export(resource_id, resource_dict, parent_dir)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py"", line 148, in do_export
uploaded_url = upload_local_artifacts(
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py"", line 171, in upload_local_artifacts
return zip_and_upload(local_path, uploader, extension)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py"", line 189, in zip_and_upload
return uploader.upload_with_dedup(zip_file, precomputed_md5=md5_hash, extension=extension)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py"", line 143, in upload_with_dedup
return self.upload(file_name, remote_path)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py"", line 121, in upload
raise NoSuchBucketError(bucket_name=self.bucket_name) from ex
samcli.commands.package.exceptions.NoSuchBucketError:
S3 Bucket does not exist.
2021-12-17 11:40:18,033 | Sending Telemetry: {'metrics': [{'commandRunExperimental': {'requestId': '2898b15c-f378-4219-b192-da75e8d8e59d', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam sync', 'metricSpecificAttributes': {'experimentalAccelerate': True, 'experimentalAll': False}, 'duration': 3225, 'exitReason': 'ExportFailedError', 'exitCode': 1}}]}
2021-12-17 11:40:18,278 | Telemetry response: 200
Error: Unable to upload artifact HelloWorldFunction referenced by CodeUri parameter of HelloWorldFunction resource.
S3 Bucket does not exist.
```
### Expected result:
I would expect a) sync to honor the settings in samconfig.toml or b) a CLI flag to set the S3 bucket name.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac OS Monterey
2. If using SAM CLI, `sam --version`: `SAM CLI, version 1.36.0`
3. AWS region: us-west-2
",True,"`sam sync` does not support custom bucket names -
### Description:
I don't use the default SAM bucket, I have my own. `sam sync` does not seem to support this.
### Steps to reproduce:
Do `sam init` and create the zip Python 3.9 ""Hello World"" template.
Create the following samconfig.toml
```toml
version = 0.1
[default]
[default.deploy]
[default.deploy.parameters]
stack_name = ""sam-test""
s3_bucket = ""mybucket""
s3_prefix = ""sam-test""
region = ""us-west-2""
capabilities = ""CAPABILITY_IAM""
```
Run `sam build && sam deploy`, which succeeds.
### Observed result:
`sam sync --stack-name sam-test` gives the following output. You can see it's attempting to use the default managed SAM bucket.
```
2021-12-17 11:40:14,807 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-12-17 11:40:14,812 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-12-17 11:40:14,812 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': '5e92f8cb-75e3-4793-81f8-faee808f01a7', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployWarning', 'warningCount': 0}}]}
2021-12-17 11:40:15,017 | Telemetry response: 200
2021-12-17 11:40:15,018 | Sending Telemetry: {'metrics': [{'templateWarning': {'requestId': 'd0f3bfd9-c6d7-40db-9c8b-337bf8efcd98', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'warningName': 'CodeDeployConditionWarning', 'warningCount': 0}}]}
2021-12-17 11:40:15,283 | Telemetry response: 200
2021-12-17 11:40:15,284 | Using config file: samconfig.toml, config environment: default
2021-12-17 11:40:15,284 | Expand command line arguments to:
2021-12-17 11:40:15,284 | --template_file=/Users/luhn/Code/audit/test/template.yaml --stack_name=sam-test --dependency_layer --capabilities=('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND')
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-1aupim17uw7m6
Default capabilities applied: ('CAPABILITY_NAMED_IAM', 'CAPABILITY_AUTO_EXPAND')
To override with customized capabilities, use --capabilities flag or set it in samconfig.toml
2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer
2021-12-17 11:40:16,112 | Using build directory as .aws-sam/auto-dependency-layer
This feature is currently in beta. Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
The SAM CLI will use the AWS Lambda, Amazon API Gateway, and AWS StepFunctions APIs to upload your code without
performing a CloudFormation deployment. This will cause drift in your CloudFormation stack.
**The sync command should only be used against a development stack**.
Confirm that you are synchronizing a development stack and want to turn on beta features.
Enter Y to proceed with the command, or enter N to cancel:
[y/N]: 2021-12-17 11:40:17,467 | [33m
Experimental features are enabled for this session.
Visit the docs page to learn more about the AWS Beta terms https://aws.amazon.com/service-terms/.
[0m
2021-12-17 11:40:17,477 | No Parameters detected in the template
2021-12-17 11:40:17,499 | 2 stacks found in the template
2021-12-17 11:40:17,499 | No Parameters detected in the template
2021-12-17 11:40:17,510 | 2 resources found in the stack
2021-12-17 11:40:17,510 | No Parameters detected in the template
2021-12-17 11:40:17,519 | Found Serverless function with name='HelloWorldFunction' and CodeUri='hello_world/'
2021-12-17 11:40:17,519 | --base-dir is not presented, adjusting uri hello_world/ relative to /Users/luhn/Code/audit/test/template.yaml
2021-12-17 11:40:17,519 | No Parameters detected in the template
2021-12-17 11:40:17,538 | Executing the build using build context.
2021-12-17 11:40:17,538 | Instantiating build definitions
2021-12-17 11:40:17,540 | Same function build definition found, adding function (Previous: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , d23e058e-cbff-4bce-85b2-09954cf33d29, {}, {}, x86_64, []), Current: BuildDefinition(python3.9, /Users/luhn/Code/audit/test/hello_world, Zip, , 85a07967-200c-4a31-81df-7700103e6ad7, {}, {}, x86_64, []), Function: Function(name='HelloWorldFunction', functionname='HelloWorldFunction', runtime='python3.9', memory=None, timeout=3, handler='app.lambda_handler', imageuri=None, packagetype='Zip', imageconfig=None, codeuri='/Users/luhn/Code/audit/test/hello_world', environment=None, rolearn=None, layers=[], events={'HelloWorld': {'Type': 'Api', 'Properties': {'Path': '/hello', 'Method': 'get', 'RestApiId': 'ServerlessRestApi'}}}, metadata=None, inlinecode=None, codesign_config_arn=None, architectures=['x86_64'], stack_path=''))
2021-12-17 11:40:17,541 | Async execution started
2021-12-17 11:40:17,541 | Invoking function functools.partial(>, )
2021-12-17 11:40:17,541 | Running incremental build for runtime python3.9 for build definition d23e058e-cbff-4bce-85b2-09954cf33d29
2021-12-17 11:40:17,541 | Waiting for async results
2021-12-17 11:40:17,541 | Manifest is not changed for d23e058e-cbff-4bce-85b2-09954cf33d29, running incremental build
2021-12-17 11:40:17,541 | Building codeuri: /Users/luhn/Code/audit/test/hello_world runtime: python3.9 metadata: {} architecture: x86_64 functions: ['HelloWorldFunction']
2021-12-17 11:40:17,541 | Building to following folder /Users/luhn/Code/audit/test/.aws-sam/auto-dependency-layer/HelloWorldFunction
2021-12-17 11:40:17,542 | Loading workflow module 'aws_lambda_builders.workflows'
2021-12-17 11:40:17,546 | Registering workflow 'PythonPipBuilder' with capability 'Capability(language='python', dependency_manager='pip', application_framework=None)'
2021-12-17 11:40:17,548 | Registering workflow 'NodejsNpmBuilder' with capability 'Capability(language='nodejs', dependency_manager='npm', application_framework=None)'
2021-12-17 11:40:17,549 | Registering workflow 'RubyBundlerBuilder' with capability 'Capability(language='ruby', dependency_manager='bundler', application_framework=None)'
2021-12-17 11:40:17,551 | Registering workflow 'GoDepBuilder' with capability 'Capability(language='go', dependency_manager='dep', application_framework=None)'
2021-12-17 11:40:17,553 | Registering workflow 'GoModulesBuilder' with capability 'Capability(language='go', dependency_manager='modules', application_framework=None)'
2021-12-17 11:40:17,555 | Registering workflow 'JavaGradleWorkflow' with capability 'Capability(language='java', dependency_manager='gradle', application_framework=None)'
2021-12-17 11:40:17,556 | Registering workflow 'JavaMavenWorkflow' with capability 'Capability(language='java', dependency_manager='maven', application_framework=None)'
2021-12-17 11:40:17,558 | Registering workflow 'DotnetCliPackageBuilder' with capability 'Capability(language='dotnet', dependency_manager='cli-package', application_framework=None)'
2021-12-17 11:40:17,559 | Registering workflow 'CustomMakeBuilder' with capability 'Capability(language='provided', dependency_manager=None, application_framework=None)'
2021-12-17 11:40:17,559 | Found workflow 'PythonPipBuilder' to support capabilities 'Capability(language='python', dependency_manager='pip', application_framework=None)'
2021-12-17 11:40:17,626 | Running workflow 'PythonPipBuilder'
2021-12-17 11:40:17,627 | Running PythonPipBuilder:CopySource
2021-12-17 11:40:17,629 | PythonPipBuilder:CopySource succeeded
2021-12-17 11:40:17,629 | Async execution completed
2021-12-17 11:40:17,630 | Auto creating dependency layer for each function resource into a nested stack
2021-12-17 11:40:17,630 | No Parameters detected in the template
2021-12-17 11:40:17,636 | 2 resources found in the stack sam-test
2021-12-17 11:40:17,636 | No Parameters detected in the template
2021-12-17 11:40:17,641 | Found Serverless function with name='HelloWorldFunction' and CodeUri='.aws-sam/auto-dependency-layer/HelloWorldFunction'
2021-12-17 11:40:17,641 | --base-dir is not presented, adjusting uri .aws-sam/auto-dependency-layer/HelloWorldFunction relative to /Users/luhn/Code/audit/test/template.yaml
Build Succeeded
Built Artifacts : .aws-sam/auto-dependency-layer
Built Template : .aws-sam/auto-dependency-layer/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke -t .aws-sam/auto-dependency-layer/template.yaml
[*] Test Function in the Cloud: sam sync --stack-name {stack-name} --watch
[*] Deploy: sam deploy --guided --template-file .aws-sam/auto-dependency-layer/template.yaml
2021-12-17 11:40:17,667 | Executing the packaging using package context.
2021-12-17 11:40:18,030 | Unable to export
Traceback (most recent call last):
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py"", line 114, in upload
future.result()
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py"", line 106, in result
return self._coordinator.result()
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/futures.py"", line 265, in result
raise self._exception
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py"", line 126, in __call__
return self._execute_main(kwargs)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/tasks.py"", line 150, in _execute_main
return_value = self._main(**kwargs)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/s3transfer/upload.py"", line 694, in _main
client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py"", line 391, in _api_call
return self._make_api_call(operation_name, kwargs)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/botocore/client.py"", line 719, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py"", line 126, in export
self.do_export(resource_id, resource_dict, parent_dir)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/packageable_resources.py"", line 148, in do_export
uploaded_url = upload_local_artifacts(
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py"", line 171, in upload_local_artifacts
return zip_and_upload(local_path, uploader, extension)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/utils.py"", line 189, in zip_and_upload
return uploader.upload_with_dedup(zip_file, precomputed_md5=md5_hash, extension=extension)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py"", line 143, in upload_with_dedup
return self.upload(file_name, remote_path)
File ""/opt/homebrew/Cellar/aws-sam-cli/1.36.0/libexec/lib/python3.8/site-packages/samcli/lib/package/s3_uploader.py"", line 121, in upload
raise NoSuchBucketError(bucket_name=self.bucket_name) from ex
samcli.commands.package.exceptions.NoSuchBucketError:
S3 Bucket does not exist.
2021-12-17 11:40:18,033 | Sending Telemetry: {'metrics': [{'commandRunExperimental': {'requestId': '2898b15c-f378-4219-b192-da75e8d8e59d', 'installationId': '1ef32602-7319-4d1a-bc65-fb2419c3fe35', 'sessionId': 'eeb5b278-0298-446b-9bcc-43424c2cd44d', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.8.12', 'samcliVersion': '1.36.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam sync', 'metricSpecificAttributes': {'experimentalAccelerate': True, 'experimentalAll': False}, 'duration': 3225, 'exitReason': 'ExportFailedError', 'exitCode': 1}}]}
2021-12-17 11:40:18,278 | Telemetry response: 200
Error: Unable to upload artifact HelloWorldFunction referenced by CodeUri parameter of HelloWorldFunction resource.
S3 Bucket does not exist.
```
### Expected result:
I would expect a) sync to honor the settings in samconfig.toml or b) a CLI flag to set the S3 bucket name.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Mac OS Monterey
2. If using SAM CLI, `sam --version`: `SAM CLI, version 1.36.0`
3. AWS region: us-west-2
",1, sam sync does not support custom bucket names description i don t use the default sam bucket i have my own sam sync does not seem to support this steps to reproduce do sam init and create the zip python hello world template create the following samconfig toml toml version stack name sam test bucket mybucket prefix sam test region us west capabilities capability iam run sam build sam deploy which succeeds observed result sam sync stack name sam test gives the following output you can see it s attempting to use the default managed sam bucket telemetry endpoint configured to be telemetry endpoint configured to be sending telemetry metrics telemetry response sending telemetry metrics telemetry response using config file samconfig toml config environment default expand command line arguments to template file users luhn code audit test template yaml stack name sam test dependency layer capabilities capability named iam capability auto expand managed bucket aws sam cli managed default samclisourcebucket default capabilities applied capability named iam capability auto expand to override with customized capabilities use capabilities flag or set it in samconfig toml using build directory as aws sam auto dependency layer using build directory as aws sam auto dependency layer this feature is currently in beta visit the docs page to learn more about the aws beta terms the sam cli will use the aws lambda amazon api gateway and aws stepfunctions apis to upload your code without performing a cloudformation deployment this will cause drift in your cloudformation stack the sync command should only be used against a development stack confirm that you are synchronizing a development stack and want to turn on beta features enter y to proceed with the command or enter n to cancel experimental features are enabled for this session visit the docs page to learn more about the aws beta terms no parameters detected in the template stacks found in the template no parameters detected in the template resources found in the stack no parameters detected in the template found serverless function with name helloworldfunction and codeuri hello world base dir is not presented adjusting uri hello world relative to users luhn code audit test template yaml no parameters detected in the template executing the build using build context instantiating build definitions same function build definition found adding function previous builddefinition users luhn code audit test hello world zip cbff current builddefinition users luhn code audit test hello world zip function function name helloworldfunction functionname helloworldfunction runtime memory none timeout handler app lambda handler imageuri none packagetype zip imageconfig none codeuri users luhn code audit test hello world environment none rolearn none layers events helloworld type api properties path hello method get restapiid serverlessrestapi metadata none inlinecode none codesign config arn none architectures stack path async execution started invoking function functools partial running incremental build for runtime for build definition cbff waiting for async results manifest is not changed for cbff running incremental build building codeuri users luhn code audit test hello world runtime metadata architecture functions building to following folder users luhn code audit test aws sam auto dependency layer helloworldfunction loading workflow module aws lambda builders workflows registering workflow pythonpipbuilder with capability capability language python dependency manager pip application framework none registering workflow nodejsnpmbuilder with capability capability language nodejs dependency manager npm application framework none registering workflow rubybundlerbuilder with capability capability language ruby dependency manager bundler application framework none registering workflow godepbuilder with capability capability language go dependency manager dep application framework none registering workflow gomodulesbuilder with capability capability language go dependency manager modules application framework none registering workflow javagradleworkflow with capability capability language java dependency manager gradle application framework none registering workflow javamavenworkflow with capability capability language java dependency manager maven application framework none registering workflow dotnetclipackagebuilder with capability capability language dotnet dependency manager cli package application framework none registering workflow custommakebuilder with capability capability language provided dependency manager none application framework none found workflow pythonpipbuilder to support capabilities capability language python dependency manager pip application framework none running workflow pythonpipbuilder running pythonpipbuilder copysource pythonpipbuilder copysource succeeded async execution completed auto creating dependency layer for each function resource into a nested stack no parameters detected in the template resources found in the stack sam test no parameters detected in the template found serverless function with name helloworldfunction and codeuri aws sam auto dependency layer helloworldfunction base dir is not presented adjusting uri aws sam auto dependency layer helloworldfunction relative to users luhn code audit test template yaml build succeeded built artifacts aws sam auto dependency layer built template aws sam auto dependency layer template yaml commands you can use next invoke function sam local invoke t aws sam auto dependency layer template yaml test function in the cloud sam sync stack name stack name watch deploy sam deploy guided template file aws sam auto dependency layer template yaml executing the packaging using package context unable to export traceback most recent call last file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package uploader py line in upload future result file opt homebrew cellar aws sam cli libexec lib site packages futures py line in result return self coordinator result file opt homebrew cellar aws sam cli libexec lib site packages futures py line in result raise self exception file opt homebrew cellar aws sam cli libexec lib site packages tasks py line in call return self execute main kwargs file opt homebrew cellar aws sam cli libexec lib site packages tasks py line in execute main return value self main kwargs file opt homebrew cellar aws sam cli libexec lib site packages upload py line in main client put object bucket bucket key key body body extra args file opt homebrew cellar aws sam cli libexec lib site packages botocore client py line in api call return self make api call operation name kwargs file opt homebrew cellar aws sam cli libexec lib site packages botocore client py line in make api call raise error class parsed response operation name botocore errorfactory nosuchbucket an error occurred nosuchbucket when calling the putobject operation the specified bucket does not exist the above exception was the direct cause of the following exception traceback most recent call last file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package packageable resources py line in export self do export resource id resource dict parent dir file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package packageable resources py line in do export uploaded url upload local artifacts file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package utils py line in upload local artifacts return zip and upload local path uploader extension file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package utils py line in zip and upload return uploader upload with dedup zip file precomputed hash extension extension file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package uploader py line in upload with dedup return self upload file name remote path file opt homebrew cellar aws sam cli libexec lib site packages samcli lib package uploader py line in upload raise nosuchbucketerror bucket name self bucket name from ex samcli commands package exceptions nosuchbucketerror bucket does not exist sending telemetry metrics telemetry response error unable to upload artifact helloworldfunction referenced by codeuri parameter of helloworldfunction resource bucket does not exist expected result i would expect a sync to honor the settings in samconfig toml or b a cli flag to set the bucket name additional environment details ex windows mac amazon linux etc os mac os monterey if using sam cli sam version sam cli version aws region us west ,1
184973,21785042059.0,IssuesEvent,2022-05-14 02:15:53,ignatandrei/WFH_Resources,https://api.github.com/repos/ignatandrei/WFH_Resources,closed,CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz - autoclosed,security vulnerability,"## CVE-2020-8116 - High Severity Vulnerability
Vulnerable Library - dot-prop-4.2.0.tgz
Get, set, or delete a property from a nested object using a dot path
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz - autoclosed - ## CVE-2020-8116 - High Severity Vulnerability
Vulnerable Library - dot-prop-4.2.0.tgz
Get, set, or delete a property from a nested object using a dot path
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in dot prop tgz autoclosed cve high severity vulnerability vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm wfh resources makedata package json path to vulnerable library tmp ws scm wfh resources makedata node modules dot prop package json dependency hierarchy nodemon tgz root library update notifier tgz configstore tgz x dot prop tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution dot prop step up your open source security game with whitesource ,0
9212,24235737696.0,IssuesEvent,2022-09-26 23:00:12,terrapower/armi,https://api.github.com/repos/terrapower/armi,closed,Can we finally remove Settings Rules?,architecture cleanup,"For the past few years, the `SettingsRules` system has been antiquated at best, and redundant now that we have settings validators:
https://github.com/terrapower/armi/blob/f0d27e7405bde450b1ba01825c95783080974c53/armi/settings/settingsRules.py#L20-L22
Generally, we want to move away from ""this code runs when you import ARMI"" to things that more controllable and less magical.
The only place I can see in ARMI this will matter is is `armi/physics/neutronics/settings.py` there are some ""rules"" that will have to be converted to ""validators"". Which is a very small lift.
> But how many downstream projects would have to be converted first? That's what determines our level-of-effort here.",1.0,"Can we finally remove Settings Rules? - For the past few years, the `SettingsRules` system has been antiquated at best, and redundant now that we have settings validators:
https://github.com/terrapower/armi/blob/f0d27e7405bde450b1ba01825c95783080974c53/armi/settings/settingsRules.py#L20-L22
Generally, we want to move away from ""this code runs when you import ARMI"" to things that more controllable and less magical.
The only place I can see in ARMI this will matter is is `armi/physics/neutronics/settings.py` there are some ""rules"" that will have to be converted to ""validators"". Which is a very small lift.
> But how many downstream projects would have to be converted first? That's what determines our level-of-effort here.",0,can we finally remove settings rules for the past few years the settingsrules system has been antiquated at best and redundant now that we have settings validators generally we want to move away from this code runs when you import armi to things that more controllable and less magical the only place i can see in armi this will matter is is armi physics neutronics settings py there are some rules that will have to be converted to validators which is a very small lift but how many downstream projects would have to be converted first that s what determines our level of effort here ,0
83329,24041192440.0,IssuesEvent,2022-09-16 02:05:11,moclojer/moclojer,https://api.github.com/repos/moclojer/moclojer,closed,clojure devcontainer support,documentation docker build,"Whats is [devcontainer](https://code.visualstudio.com/docs/remote/containers)?
Way to leave the development environment inside the container, it is a specification that started in vscode and other editors support.",1.0,"clojure devcontainer support - Whats is [devcontainer](https://code.visualstudio.com/docs/remote/containers)?
Way to leave the development environment inside the container, it is a specification that started in vscode and other editors support.",0,clojure devcontainer support whats is way to leave the development environment inside the container it is a specification that started in vscode and other editors support ,0
516145,14975963202.0,IssuesEvent,2021-01-28 07:12:55,threefoldtech/home,https://api.github.com/repos/threefoldtech/home,opened,Deployed blog appears in the deployed solutions page but not in the deployed blogs overview.,priority_major type_bug,"In VDC: jetserthing (Gold, testnet) a deployed blog using the example blog source from the manual results in successful deployment. But the deployed solution pages (generic and specific) display different results.



",1.0,"Deployed blog appears in the deployed solutions page but not in the deployed blogs overview. - In VDC: jetserthing (Gold, testnet) a deployed blog using the example blog source from the manual results in successful deployment. But the deployed solution pages (generic and specific) display different results.



",0,deployed blog appears in the deployed solutions page but not in the deployed blogs overview in vdc jetserthing gold testnet a deployed blog using the example blog source from the manual results in successful deployment but the deployed solution pages generic and specific display different results ,0
4498,23416387185.0,IssuesEvent,2022-08-13 02:46:54,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,SAM log file - flush,type/feature area/local/start-api area/local/start-lambda area/local/invoke maintainer/need-followup,"### Description
SAM with log file option -l does output to log file, but logs only found after stopping the SAM command.
### Steps to reproduce
sam local start-api -l
### Observed result
The output log file is generated only after the sam command finishes. The log could not be accessed with tail as, there is no output to the log file.
### Expected result
Log file generated and flushed periodically to access the log.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:CentOS 7
2. `sam --version`: 0.48
",True,"SAM log file - flush - ### Description
SAM with log file option -l does output to log file, but logs only found after stopping the SAM command.
### Steps to reproduce
sam local start-api -l
### Observed result
The output log file is generated only after the sam command finishes. The log could not be accessed with tail as, there is no output to the log file.
### Expected result
Log file generated and flushed periodically to access the log.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS:CentOS 7
2. `sam --version`: 0.48
",1,sam log file flush description sam with log file option l does output to log file but logs only found after stopping the sam command steps to reproduce sam local start api l observed result the output log file is generated only after the sam command finishes the log could not be accessed with tail as there is no output to the log file expected result log file generated and flushed periodically to access the log additional environment details ex windows mac amazon linux etc os centos sam version ,1
5724,30259921589.0,IssuesEvent,2023-07-07 07:23:42,jupyter-naas/awesome-notebooks,https://api.github.com/repos/jupyter-naas/awesome-notebooks,closed,LangChain - Gmail Toolkit,enhancement templates maintainer," This notebook walks through connecting a LangChain email to the Gmail API. It is useful for organizations that need to integrate their email with the Gmail API.
",True,"LangChain - Gmail Toolkit - This notebook walks through connecting a LangChain email to the Gmail API. It is useful for organizations that need to integrate their email with the Gmail API.
",1,langchain gmail toolkit this notebook walks through connecting a langchain email to the gmail api it is useful for organizations that need to integrate their email with the gmail api ,1
177271,28433346929.0,IssuesEvent,2023-04-15 02:41:56,junpotatoes/TopChart,https://api.github.com/repos/junpotatoes/TopChart,closed,TrackDetail 컴포넌트 디자인 수정,Design,"## 만들고자 하는 기능이 무엇인가요?
TrackDetail 컴포넌트 디자인 수정
## 해당 기능을 구현하기 위해 할 일이 무엇인가요?
- [x] 뒤로가기 버튼 모바일 렌더시 위치이동
",1.0,"TrackDetail 컴포넌트 디자인 수정 - ## 만들고자 하는 기능이 무엇인가요?
TrackDetail 컴포넌트 디자인 수정
## 해당 기능을 구현하기 위해 할 일이 무엇인가요?
- [x] 뒤로가기 버튼 모바일 렌더시 위치이동
",0,trackdetail 컴포넌트 디자인 수정 만들고자 하는 기능이 무엇인가요 trackdetail 컴포넌트 디자인 수정 해당 기능을 구현하기 위해 할 일이 무엇인가요 뒤로가기 버튼 모바일 렌더시 위치이동 ,0
4384,22310480704.0,IssuesEvent,2022-06-13 16:31:00,MDAnalysis/mdanalysis,https://api.github.com/repos/MDAnalysis/mdanalysis,closed,"Investigate pins for various CI components (pytest-cov, coverage, msmb_theme)",maintainability Component-Docs Continuous Integration,"Follow up from #3369
Related to #3224
Our docs build infrastructure is starting to be a bit flaky, particularly in #3369 we've had to pin pytest-cov due to some dependency resolution issues we were having when installing it at the same time as the sphinx components.
We should aim to check a) if all the pins are still necessary, b) what steps we should take to remove these pins.
My understanding is that some of this is related to the fact that we can't upgrade to the latest sphinx (see #3224), so we probably will want to fix that first.",True,"Investigate pins for various CI components (pytest-cov, coverage, msmb_theme) - Follow up from #3369
Related to #3224
Our docs build infrastructure is starting to be a bit flaky, particularly in #3369 we've had to pin pytest-cov due to some dependency resolution issues we were having when installing it at the same time as the sphinx components.
We should aim to check a) if all the pins are still necessary, b) what steps we should take to remove these pins.
My understanding is that some of this is related to the fact that we can't upgrade to the latest sphinx (see #3224), so we probably will want to fix that first.",1,investigate pins for various ci components pytest cov coverage msmb theme follow up from related to our docs build infrastructure is starting to be a bit flaky particularly in we ve had to pin pytest cov due to some dependency resolution issues we were having when installing it at the same time as the sphinx components we should aim to check a if all the pins are still necessary b what steps we should take to remove these pins my understanding is that some of this is related to the fact that we can t upgrade to the latest sphinx see so we probably will want to fix that first ,1
4256,21102810495.0,IssuesEvent,2022-04-04 15:51:21,carbon-design-system/carbon,https://api.github.com/repos/carbon-design-system/carbon,closed,carbon-components.min.css and carbon-components.min.js not found ,status: needs triage 🕵️♀️ status: waiting for maintainer response 💬,"### Package
carbon-components
### Browser
_No response_
### Operating System
_No response_
### Package version
11.0.0
### React version
_No response_
### Automated testing tool and ruleset
not needed
### Assistive technology
_No response_
### Description
https://unpkg.com/carbon-components/scripts/carbon-components.min.js
https://unpkg.com/carbon-components/css/carbon-components.min.css
### WCAG 2.1 Violation
_No response_
### CodeSandbox example
not needed
### Steps to reproduce
open unpkg.com links, files not found
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems",True,"carbon-components.min.css and carbon-components.min.js not found - ### Package
carbon-components
### Browser
_No response_
### Operating System
_No response_
### Package version
11.0.0
### React version
_No response_
### Automated testing tool and ruleset
not needed
### Assistive technology
_No response_
### Description
https://unpkg.com/carbon-components/scripts/carbon-components.min.js
https://unpkg.com/carbon-components/css/carbon-components.min.css
### WCAG 2.1 Violation
_No response_
### CodeSandbox example
not needed
### Steps to reproduce
open unpkg.com links, files not found
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
- [X] I checked the [current issues](https://github.com/carbon-design-system/carbon/issues) for duplicate problems",1,carbon components min css and carbon components min js not found package carbon components browser no response operating system no response package version react version no response automated testing tool and ruleset not needed assistive technology no response description wcag violation no response codesandbox example not needed steps to reproduce open unpkg com links files not found code of conduct i agree to follow this project s i checked the for duplicate problems,1
36436,8109950882.0,IssuesEvent,2018-08-14 09:19:21,publiclab/plots2,https://api.github.com/repos/publiclab/plots2,closed,Show most recently updated people on Search API,review-me rgsoc summer-of-code,"Part of #2755
Change the sorting of the results in the API to most recently updated people as discussed in #2925",1.0,"Show most recently updated people on Search API - Part of #2755
Change the sorting of the results in the API to most recently updated people as discussed in #2925",0,show most recently updated people on search api part of change the sorting of the results in the api to most recently updated people as discussed in ,0
4098,19323273005.0,IssuesEvent,2021-12-14 08:42:33,WarenGonzaga/daisy.js,https://api.github.com/repos/WarenGonzaga/daisy.js,opened,maintenance misc updates,chore maintainers only todo tweak,"I just love to put it here so I'm aware of the tasks needed to be done.
- [ ] updated readme format
- [ ] contributing guide
- [ ] security policy
- [ ] code of conduct policy",True,"maintenance misc updates - I just love to put it here so I'm aware of the tasks needed to be done.
- [ ] updated readme format
- [ ] contributing guide
- [ ] security policy
- [ ] code of conduct policy",1,maintenance misc updates i just love to put it here so i m aware of the tasks needed to be done updated readme format contributing guide security policy code of conduct policy,1
5829,30851325728.0,IssuesEvent,2023-08-02 16:58:15,bazelbuild/intellij,https://api.github.com/repos/bazelbuild/intellij,opened,Test tree view is missing (Golang),type: bug awaiting-maintainer,"### Description of the bug:
After running tests in my go project, IntelliJ's [test tree view](https://www.jetbrains.com/help/idea/viewing-and-exploring-test-results.html) no longer shows up.
This problem started with plugin version `2023.07.04.0.1-api-version-231` and does not manifest in version `2023.06.13.0.1-api-version-231`
This is the view that's missing:

### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ IDEA 2023.1.4 (Ultimate Edition), Build #IU-231.9225.16, built on July 11, 2023
### What programming languages and tools are you using? Please provide specific versions.
go1.19.5 darwin/arm64; bazel 5.4.0
### What Bazel plugin version are you using?
2023.07.04.0.1-api-version-231
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_",True,"Test tree view is missing (Golang) - ### Description of the bug:
After running tests in my go project, IntelliJ's [test tree view](https://www.jetbrains.com/help/idea/viewing-and-exploring-test-results.html) no longer shows up.
This problem started with plugin version `2023.07.04.0.1-api-version-231` and does not manifest in version `2023.06.13.0.1-api-version-231`
This is the view that's missing:

### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
_No response_
### Which Intellij IDE are you using? Please provide the specific version.
IntelliJ IDEA 2023.1.4 (Ultimate Edition), Build #IU-231.9225.16, built on July 11, 2023
### What programming languages and tools are you using? Please provide specific versions.
go1.19.5 darwin/arm64; bazel 5.4.0
### What Bazel plugin version are you using?
2023.07.04.0.1-api-version-231
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_",1,test tree view is missing golang description of the bug after running tests in my go project intellij s no longer shows up this problem started with plugin version api version and does not manifest in version api version this is the view that s missing what s the simplest easiest way to reproduce this bug please provide a minimal example if possible no response which intellij ide are you using please provide the specific version intellij idea ultimate edition build iu built on july what programming languages and tools are you using please provide specific versions darwin bazel what bazel plugin version are you using api version have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response ,1
3537,13922851966.0,IssuesEvent,2020-10-21 13:46:06,hashicorp/terraform,https://api.github.com/repos/hashicorp/terraform,closed,Puppet provisioner fails to install Puppet on Windows Server 2016 with TLS/SSL Error,bug needs-maintainer provisioner/puppet v0.12,"### Terraform Version
```
Terraform v0.12.18
+ provider.aws v2.42.0
```
### Terraform Configuration Files
```hcl
resource ""aws_instance"" ""instance"" {
ami = data.aws_ami.windows-server.id
instance_type = var.instance_type
key_name = ""provision_key""
availability_zone = ""us-east-1b""
tags = {
Name = ""example-vm""
}
get_password_data = true
vpc_security_group_ids = [""sg-1111111111""]
subnet_id = ""subnet-11111111111""
user_data = <<-DATA
Write-Host ""setting up firewall""
netsh advfirewall firewall add rule name=""WinRM in"" protocol=TCP dir=in profile=any localport=5985 remoteip=any localip=any action=allow
Write-Host ""restarting winrm""
Stop-Service winrm
Start-Service winrm
DATA
provisioner ""puppet"" {
server = ""puppet.test.com""
connection {
host = coalesce(self.private_ip, self.public_ip)
type = ""winrm""
user = ""Administrator""
password = rsadecrypt(self.password_data, file(""~/.ssh/provision_key""))
timeout = ""10m""
}
open_source = false
certname = ""example.test.com""
autosign = false
}
}
```
### Debug Output
https://gist.github.com/camara-tech/f407b8760def43f6c9da56fc45a88efe
### Crash Output
Not Applicable
### Expected Behavior
The terraform puppet provisioner successfully downloads the puppet agent installer for windows from the puppet enterprise server and successfully installs it.
### Actual Behavior
the terraform puppet provisioner failed with the following error from powershell:
```
The request was aborted: Could not create SSL/TLS secure channel.
```
### Steps to Reproduce
terraform init
terraform apply
### Additional Context
In our environment, we have completely disabled TLS protocols below TLS 1.2. Also, our puppet enterprise console uses the same Powershell that is present in the puppet provisioner for windows with the addition of the following line:
```
[System.Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
```
Perhaps that needs to be added to the existing provisioner?
### References
Not Applicable
",True,"Puppet provisioner fails to install Puppet on Windows Server 2016 with TLS/SSL Error - ### Terraform Version
```
Terraform v0.12.18
+ provider.aws v2.42.0
```
### Terraform Configuration Files
```hcl
resource ""aws_instance"" ""instance"" {
ami = data.aws_ami.windows-server.id
instance_type = var.instance_type
key_name = ""provision_key""
availability_zone = ""us-east-1b""
tags = {
Name = ""example-vm""
}
get_password_data = true
vpc_security_group_ids = [""sg-1111111111""]
subnet_id = ""subnet-11111111111""
user_data = <<-DATA
Write-Host ""setting up firewall""
netsh advfirewall firewall add rule name=""WinRM in"" protocol=TCP dir=in profile=any localport=5985 remoteip=any localip=any action=allow
Write-Host ""restarting winrm""
Stop-Service winrm
Start-Service winrm
DATA
provisioner ""puppet"" {
server = ""puppet.test.com""
connection {
host = coalesce(self.private_ip, self.public_ip)
type = ""winrm""
user = ""Administrator""
password = rsadecrypt(self.password_data, file(""~/.ssh/provision_key""))
timeout = ""10m""
}
open_source = false
certname = ""example.test.com""
autosign = false
}
}
```
### Debug Output
https://gist.github.com/camara-tech/f407b8760def43f6c9da56fc45a88efe
### Crash Output
Not Applicable
### Expected Behavior
The terraform puppet provisioner successfully downloads the puppet agent installer for windows from the puppet enterprise server and successfully installs it.
### Actual Behavior
the terraform puppet provisioner failed with the following error from powershell:
```
The request was aborted: Could not create SSL/TLS secure channel.
```
### Steps to Reproduce
terraform init
terraform apply
### Additional Context
In our environment, we have completely disabled TLS protocols below TLS 1.2. Also, our puppet enterprise console uses the same Powershell that is present in the puppet provisioner for windows with the addition of the following line:
```
[System.Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
```
Perhaps that needs to be added to the existing provisioner?
### References
Not Applicable
",1,puppet provisioner fails to install puppet on windows server with tls ssl error terraform version run terraform v to show the version and paste the result between the marks below if you are not running the latest version of terraform please try upgrading because your issue may have already been fixed terraform provider aws terraform configuration files paste the relevant parts of your terraform configuration between the marks below for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key hcl resource aws instance instance ami data aws ami windows server id instance type var instance type key name provision key availability zone us east tags name example vm get password data true vpc security group ids subnet id subnet user data data winrm quickconfig q winrm set winrm config maxtimeoutms winrm set winrm config service allowunencrypted true winrm set winrm config service auth basic true write host setting up firewall netsh advfirewall firewall add rule name winrm in protocol tcp dir in profile any localport remoteip any localip any action allow write host restarting winrm stop service winrm start service winrm data provisioner puppet server puppet test com connection host coalesce self private ip self public ip type winrm user administrator password rsadecrypt self password data file ssh provision key timeout open source false certname example test com autosign false debug output crash output not applicable expected behavior the terraform puppet provisioner successfully downloads the puppet agent installer for windows from the puppet enterprise server and successfully installs it actual behavior the terraform puppet provisioner failed with the following error from powershell the request was aborted could not create ssl tls secure channel steps to reproduce terraform init terraform apply additional context in our environment we have completely disabled tls protocols below tls also our puppet enterprise console uses the same powershell that is present in the puppet provisioner for windows with the addition of the following line securityprotocol perhaps that needs to be added to the existing provisioner references not applicable ,1
1688,6574166850.0,IssuesEvent,2017-09-11 11:47:30,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Exception with ios_command: AttributeError: 'list' object has no attribute 'splitlines',affects_2.3 bug_report networking waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_command
##### ANSIBLE VERSION
```
ansible --version
2.3.0 (commit 20160706.246e32a)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
inventory = ./hosts
gathering = explicit
roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles
private_role_vars = yes
log_path = /var/log/ansible.log
fact_caching = redis
fact_caching_timeout = 86400
retry_files_enabled = False
##### OS / ENVIRONMENT
- **Local host**: Ubuntu 16.10 4.8
- **Target nodes**: IOSv 15.6(2)T
IOSv_L2 15.2(4.0.55)E
##### SUMMARY
This exception happens with different types of targets (IOSv & IOSv_L2), although there is no issue running ios_facts with the same targets.
##### STEPS TO REPRODUCE
With config=running-config or config=startup-config
```
- name: Fetching config from the remote node
ios_command:
provider: ""{{ connections.ssh }}""
commands:
- ""show {{ config }}""
register: configuration
```
##### EXPECTED RESULTS
Successful ""show running-config"" or ""show startup-config""
##### ACTUAL RESULTS
```
TASK [ios_pull_config : Fetching config from the remote node] ***************************************************************************************************************
task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/ios_pull_config/tasks/main.yml:76
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<172.21.100.210> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.210> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534 `"" && echo ansible-tmp-1479307602.12-88135957770534=""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534 `"" ) && sleep 0'
<172.21.100.220> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.220> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858 `"" && echo ansible-tmp-1479307602.13-213866714018858=""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858 `"" ) && sleep 0'
<172.21.100.220> PUT /tmp/tmpT5ggUh TO /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ios_command.py
<172.21.100.210> PUT /tmp/tmpqxKZuu TO /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ios_command.py
<172.21.100.220> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ios_command.py && sleep 0'
<172.21.100.210> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ios_command.py && sleep 0'
<172.21.100.220> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ios_command.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/"" > /dev/null 2>&1 && sleep 0'
<172.21.100.210> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ios_command.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/"" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 126, in run
res = self._execute()
File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 502, in _execute
result = self._handler.run(task_vars=variables)
File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/normal.py"", line 33, in run
results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars))
File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py"", line 662, in _execute_module
data['stdout_lines'] = data.get('stdout', u'').splitlines()
AttributeError: 'list' object has no attribute 'splitlines'
fatal: [IOSv_L2_10]: FAILED! => {
""failed"": true,
""msg"": ""Unexpected failure during module execution."",
""stdout"": """"
}
```
Despite being able to retrieve its facts with ios_facts:
- name: Fetching facts from the remote node
ios_facts:
gather_subset: all
provider: ""{{ connections.ssh }}""
register: facts
```
TASK [ios_pull_facts : Fetching facts from the remote node] *****************************************************************************************************************
ok: [IOSv_L2_10] => {""ansible_facts"": {""ansible_net_all_ipv4_addresses"": [""172.21.100.210""], ""ansible_net_all_ipv6_addresses"": [], ""ansible_net_config"": ""Building configuration...\n\nCurrent configuration : 7126 bytes\n!\n! Last configuration change at 14:42:21 UTC Wed Nov 16 2016 by admin\n!\nversion 15.2\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nservice password-encryption\nservice compress-config\n!\nhostname IOSv_L2_10\n!\nboot-start-marker\nboot-end-marker\n!\n!\nenable se
...nged"": false, ""failed_commands"": []}
```",True,"Exception with ios_command: AttributeError: 'list' object has no attribute 'splitlines' - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_command
##### ANSIBLE VERSION
```
ansible --version
2.3.0 (commit 20160706.246e32a)
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
inventory = ./hosts
gathering = explicit
roles_path = /home/actionmystique/Program-Files/Ubuntu/Ansible/git-Ansible/Roles/roles
private_role_vars = yes
log_path = /var/log/ansible.log
fact_caching = redis
fact_caching_timeout = 86400
retry_files_enabled = False
##### OS / ENVIRONMENT
- **Local host**: Ubuntu 16.10 4.8
- **Target nodes**: IOSv 15.6(2)T
IOSv_L2 15.2(4.0.55)E
##### SUMMARY
This exception happens with different types of targets (IOSv & IOSv_L2), although there is no issue running ios_facts with the same targets.
##### STEPS TO REPRODUCE
With config=running-config or config=startup-config
```
- name: Fetching config from the remote node
ios_command:
provider: ""{{ connections.ssh }}""
commands:
- ""show {{ config }}""
register: configuration
```
##### EXPECTED RESULTS
Successful ""show running-config"" or ""show startup-config""
##### ACTUAL RESULTS
```
TASK [ios_pull_config : Fetching config from the remote node] ***************************************************************************************************************
task path: /home/actionmystique/Program-Files/Ubuntu/Ansible/Roles/roles/ios_pull_config/tasks/main.yml:76
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/network/ios/ios_command.py
<172.21.100.210> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.210> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534 `"" && echo ansible-tmp-1479307602.12-88135957770534=""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534 `"" ) && sleep 0'
<172.21.100.220> ESTABLISH LOCAL CONNECTION FOR USER: root
<172.21.100.220> EXEC /bin/sh -c '( umask 77 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858 `"" && echo ansible-tmp-1479307602.13-213866714018858=""` echo $HOME/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858 `"" ) && sleep 0'
<172.21.100.220> PUT /tmp/tmpT5ggUh TO /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ios_command.py
<172.21.100.210> PUT /tmp/tmpqxKZuu TO /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ios_command.py
<172.21.100.220> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ios_command.py && sleep 0'
<172.21.100.210> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ios_command.py && sleep 0'
<172.21.100.220> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/ios_command.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479307602.13-213866714018858/"" > /dev/null 2>&1 && sleep 0'
<172.21.100.210> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/ios_command.py; rm -rf ""/root/.ansible/tmp/ansible-tmp-1479307602.12-88135957770534/"" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 126, in run
res = self._execute()
File ""/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py"", line 502, in _execute
result = self._handler.run(task_vars=variables)
File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/normal.py"", line 33, in run
results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars))
File ""/usr/lib/python2.7/dist-packages/ansible/plugins/action/__init__.py"", line 662, in _execute_module
data['stdout_lines'] = data.get('stdout', u'').splitlines()
AttributeError: 'list' object has no attribute 'splitlines'
fatal: [IOSv_L2_10]: FAILED! => {
""failed"": true,
""msg"": ""Unexpected failure during module execution."",
""stdout"": """"
}
```
Despite being able to retrieve its facts with ios_facts:
- name: Fetching facts from the remote node
ios_facts:
gather_subset: all
provider: ""{{ connections.ssh }}""
register: facts
```
TASK [ios_pull_facts : Fetching facts from the remote node] *****************************************************************************************************************
ok: [IOSv_L2_10] => {""ansible_facts"": {""ansible_net_all_ipv4_addresses"": [""172.21.100.210""], ""ansible_net_all_ipv6_addresses"": [], ""ansible_net_config"": ""Building configuration...\n\nCurrent configuration : 7126 bytes\n!\n! Last configuration change at 14:42:21 UTC Wed Nov 16 2016 by admin\n!\nversion 15.2\nservice timestamps debug datetime msec\nservice timestamps log datetime msec\nservice password-encryption\nservice compress-config\n!\nhostname IOSv_L2_10\n!\nboot-start-marker\nboot-end-marker\n!\n!\nenable se
...nged"": false, ""failed_commands"": []}
```",1,exception with ios command attributeerror list object has no attribute splitlines issue type bug report component name ios command ansible version ansible version commit config file etc ansible ansible cfg configured module search path default w o overrides configuration inventory hosts gathering explicit roles path home actionmystique program files ubuntu ansible git ansible roles roles private role vars yes log path var log ansible log fact caching redis fact caching timeout retry files enabled false os environment local host ubuntu target nodes iosv t iosv e summary this exception happens with different types of targets iosv iosv although there is no issue running ios facts with the same targets steps to reproduce with config running config or config startup config name fetching config from the remote node ios command provider connections ssh commands show config register configuration expected results successful show running config or show startup config actual results task task path home actionmystique program files ubuntu ansible roles roles ios pull config tasks main yml using module file usr lib dist packages ansible modules core network ios ios command py using module file usr lib dist packages ansible modules core network ios ios command py establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp ios command py put tmp tmpqxkzuu to root ansible tmp ansible tmp ios command py exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ios command py sleep exec bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp ios command py sleep exec bin sh c usr bin python root ansible tmp ansible tmp ios command py rm rf root ansible tmp ansible tmp dev null sleep exec bin sh c usr bin python root ansible tmp ansible tmp ios command py rm rf root ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file usr lib dist packages ansible executor task executor py line in run res self execute file usr lib dist packages ansible executor task executor py line in execute result self handler run task vars variables file usr lib dist packages ansible plugins action normal py line in run results merge hash results self execute module tmp tmp task vars task vars file usr lib dist packages ansible plugins action init py line in execute module data data get stdout u splitlines attributeerror list object has no attribute splitlines fatal failed failed true msg unexpected failure during module execution stdout despite being able to retrieve its facts with ios facts name fetching facts from the remote node ios facts gather subset all provider connections ssh register facts task ok ansible facts ansible net all addresses ansible net all addresses ansible net config building configuration n ncurrent configuration bytes n n last configuration change at utc wed nov by admin n nversion nservice timestamps debug datetime msec nservice timestamps log datetime msec nservice password encryption nservice compress config n nhostname iosv n nboot start marker nboot end marker n n nenable se nged false failed commands ,1
1107,4981804283.0,IssuesEvent,2016-12-07 09:18:52,tgstation/tgstation,https://api.github.com/repos/tgstation/tgstation,closed,Floor tile system limits ability to add new colors and patterns,Maintainability - Hinders improvements - Not a bug Sprites,"As-is, if you want to add a new floor tile color with all the directions and such, you need 13 sprites, four for each half-colored tile, four for the three-quarter colored tiles, four for the corner colored tiles, and one for the full-colored tile. And if you want the color to have a checkerboard pattern, that's two more sprites.
And that's just for one base tile color, so if you want that tile color on white or black tiles, you've got to make that many more sprites just to add one color.
This significantly limits the ability to add more colors and patterns, as well as bloating the floor.dmi excessively due to how many sprites it adds to make a single additional color.
",True,"Floor tile system limits ability to add new colors and patterns - As-is, if you want to add a new floor tile color with all the directions and such, you need 13 sprites, four for each half-colored tile, four for the three-quarter colored tiles, four for the corner colored tiles, and one for the full-colored tile. And if you want the color to have a checkerboard pattern, that's two more sprites.
And that's just for one base tile color, so if you want that tile color on white or black tiles, you've got to make that many more sprites just to add one color.
This significantly limits the ability to add more colors and patterns, as well as bloating the floor.dmi excessively due to how many sprites it adds to make a single additional color.
",1,floor tile system limits ability to add new colors and patterns as is if you want to add a new floor tile color with all the directions and such you need sprites four for each half colored tile four for the three quarter colored tiles four for the corner colored tiles and one for the full colored tile and if you want the color to have a checkerboard pattern that s two more sprites and that s just for one base tile color so if you want that tile color on white or black tiles you ve got to make that many more sprites just to add one color this significantly limits the ability to add more colors and patterns as well as bloating the floor dmi excessively due to how many sprites it adds to make a single additional color ,1
3166,12226516301.0,IssuesEvent,2020-05-03 11:17:59,gfleetwood/asteres,https://api.github.com/repos/gfleetwood/asteres,opened,nocomplexity/SecurityPrivacyReferenceArchitecture (44663811),Python maintain,"https://github.com/nocomplexity/SecurityPrivacyReferenceArchitecture
Open Repository for the Open Security and Privacy Reference Architecture",True,"nocomplexity/SecurityPrivacyReferenceArchitecture (44663811) - https://github.com/nocomplexity/SecurityPrivacyReferenceArchitecture
Open Repository for the Open Security and Privacy Reference Architecture",1,nocomplexity securityprivacyreferencearchitecture open repository for the open security and privacy reference architecture,1
197670,6962483006.0,IssuesEvent,2017-12-08 13:56:20,vanilla-framework/vanilla-framework,https://api.github.com/repos/vanilla-framework/vanilla-framework,opened,Add prefixes to appearence for form elelment,Priority: Medium Type: Bug,"For projects without autoprefixer the form elements styling will not be reset. For example the drop down.
```css
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
```",1.0,"Add prefixes to appearence for form elelment - For projects without autoprefixer the form elements styling will not be reset. For example the drop down.
```css
-webkit-appearance: none;
-moz-appearance: none;
appearance: none;
```",0,add prefixes to appearence for form elelment for projects without autoprefixer the form elements styling will not be reset for example the drop down css webkit appearance none moz appearance none appearance none ,0
388019,26748978092.0,IssuesEvent,2023-01-30 18:01:26,WordPress/Advanced-administration-handbook,https://api.github.com/repos/WordPress/Advanced-administration-handbook,opened,Update page: Upgrading WordPress,documentation enhancement help wanted,"File: [upgrade/upgrading.md](https://github.com/WordPress/Advanced-administration-handbook/blob/main/upgrade/upgrading.md)
This page needs a general review and update.
Needs to have some different parts. One, the simple update via the Admin panel, Two, the manual update via FTP.
Furthermore, probably, check the upgrade via WP Toolkit, or refer to it.
Plus, refer to [Upgrading (very old) WordPress](https://make.wordpress.org/hosting/handbook/upgrading/), maybe [moving all this content from the Hosting Handbook (in Markdown)](https://github.com/WordPress/hosting-handbook/blob/main/upgrading.md).
If you add documentation from another WordPress.org page, indicate it in the Changelog or in the comments of this issue.
### To-Do
- [ ] General review and updating
- [ ] Review all the process, both simple (admin) and complex (FTP / SQL)
- [ ] Upgrading (very old) WordPress",1.0,"Update page: Upgrading WordPress - File: [upgrade/upgrading.md](https://github.com/WordPress/Advanced-administration-handbook/blob/main/upgrade/upgrading.md)
This page needs a general review and update.
Needs to have some different parts. One, the simple update via the Admin panel, Two, the manual update via FTP.
Furthermore, probably, check the upgrade via WP Toolkit, or refer to it.
Plus, refer to [Upgrading (very old) WordPress](https://make.wordpress.org/hosting/handbook/upgrading/), maybe [moving all this content from the Hosting Handbook (in Markdown)](https://github.com/WordPress/hosting-handbook/blob/main/upgrading.md).
If you add documentation from another WordPress.org page, indicate it in the Changelog or in the comments of this issue.
### To-Do
- [ ] General review and updating
- [ ] Review all the process, both simple (admin) and complex (FTP / SQL)
- [ ] Upgrading (very old) WordPress",0,update page upgrading wordpress file this page needs a general review and update needs to have some different parts one the simple update via the admin panel two the manual update via ftp furthermore probably check the upgrade via wp toolkit or refer to it plus refer to maybe if you add documentation from another wordpress org page indicate it in the changelog or in the comments of this issue to do general review and updating review all the process both simple admin and complex ftp sql upgrading very old wordpress,0
1402,6025462176.0,IssuesEvent,2017-06-08 08:46:11,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,win_iis_webapplication: overriding upgrade support,affects_2.1 feature_idea waiting_on_maintainer windows,"
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_iis_webapplication
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
no changes
##### OS / ENVIRONMENT
running from Ubuntu 14.04
running from: Description: Ubuntu 14.04.4 LTS
managing: windows 2012R2
##### SUMMARY
the module's script (ps1) assumes old webapplication physical path is present during the script runtime. hence, if it isn't the script fails with confusing error message
If you wish to force deploy everything, i.e. delete files under IIS's feet, and create everything anew and reconfig - then the old instance would fail to find its physical path.
this the script fails.
I think force upgrade is a very useful use case, esp. in cloud situations.
It should be supported.
##### STEPS TO REPRODUCE
delete webapplications old physical path,
try deploying over new physical path.
behold:
```
failed: [ec2-x-x-x-x.compute-1.amazonaws.com] (item={u'key1': u'val1', u'application_pool': u'appPool1', u'name': u'myApp', u'site': u'mySite'}) => {""failed"": true, ""invocation"": {""module_name"": ""win_iis_webapplication""}, ""item"": {""application_pool"": ""appPool1"", ""key2"": ""myKey"", ""name"": ""myApp"", ""site"": ""mySite""}, ""msg"":
""The property 'FullName' cannot be found on this object. Verify that the property exists.""}
```
I think a keyword ""force"" needs to be introduced.
If it is ""yes"", then the script should not unconditionally try and access `$application.PhysicalPath,`
but do it in condition, otherwise setting app_folder to be `$env:TEMP`
```
# new keyword is added:
force: true
```
##### EXPECTED RESULTS
I expected the webapp to be reconfigured
##### ACTUAL RESULTS
The script failed, with unclear message, it took me several hours to understand what happens, manually copying ps1 file and running it locally.
```
```
",True,"win_iis_webapplication: overriding upgrade support -
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_iis_webapplication
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
no changes
##### OS / ENVIRONMENT
running from Ubuntu 14.04
running from: Description: Ubuntu 14.04.4 LTS
managing: windows 2012R2
##### SUMMARY
the module's script (ps1) assumes old webapplication physical path is present during the script runtime. hence, if it isn't the script fails with confusing error message
If you wish to force deploy everything, i.e. delete files under IIS's feet, and create everything anew and reconfig - then the old instance would fail to find its physical path.
this the script fails.
I think force upgrade is a very useful use case, esp. in cloud situations.
It should be supported.
##### STEPS TO REPRODUCE
delete webapplications old physical path,
try deploying over new physical path.
behold:
```
failed: [ec2-x-x-x-x.compute-1.amazonaws.com] (item={u'key1': u'val1', u'application_pool': u'appPool1', u'name': u'myApp', u'site': u'mySite'}) => {""failed"": true, ""invocation"": {""module_name"": ""win_iis_webapplication""}, ""item"": {""application_pool"": ""appPool1"", ""key2"": ""myKey"", ""name"": ""myApp"", ""site"": ""mySite""}, ""msg"":
""The property 'FullName' cannot be found on this object. Verify that the property exists.""}
```
I think a keyword ""force"" needs to be introduced.
If it is ""yes"", then the script should not unconditionally try and access `$application.PhysicalPath,`
but do it in condition, otherwise setting app_folder to be `$env:TEMP`
```
# new keyword is added:
force: true
```
##### EXPECTED RESULTS
I expected the webapp to be reconfigured
##### ACTUAL RESULTS
The script failed, with unclear message, it took me several hours to understand what happens, manually copying ps1 file and running it locally.
```
```
",1,win iis webapplication overriding upgrade support issue type feature idea component name win iis webapplication ansible version ansible config file configured module search path default w o overrides configuration no changes os environment running from ubuntu running from description ubuntu lts managing windows summary the module s script assumes old webapplication physical path is present during the script runtime hence if it isn t the script fails with confusing error message if you wish to force deploy everything i e delete files under iis s feet and create everything anew and reconfig then the old instance would fail to find its physical path this the script fails i think force upgrade is a very useful use case esp in cloud situations it should be supported steps to reproduce delete webapplications old physical path try deploying over new physical path behold failed item u u u application pool u u name u myapp u site u mysite failed true invocation module name win iis webapplication item application pool mykey name myapp site mysite msg the property fullname cannot be found on this object verify that the property exists i think a keyword force needs to be introduced if it is yes then the script should not unconditionally try and access application physicalpath but do it in condition otherwise setting app folder to be env temp new keyword is added force true expected results i expected the webapp to be reconfigured actual results the script failed with unclear message it took me several hours to understand what happens manually copying file and running it locally ,1
778159,27305560064.0,IssuesEvent,2023-02-24 07:51:41,openforis/arena,https://api.github.com/repos/openforis/arena,closed,Copy/Clone records from cycle to another,Priority_1,"We assume that schema in the source and target cycles are identical.
If these are not identical, a direct cloning will be not possible.",1.0,"Copy/Clone records from cycle to another - We assume that schema in the source and target cycles are identical.
If these are not identical, a direct cloning will be not possible.",0,copy clone records from cycle to another we assume that schema in the source and target cycles are identical if these are not identical a direct cloning will be not possible ,0
2208,7802987465.0,IssuesEvent,2018-06-10 18:35:44,OpenLightingProject/ola,https://api.github.com/repos/OpenLightingProject/ola,closed,libftdi API update,Component-Plugin Language-C++ Maintainability OpSys-Linux,"Hi,
The Debian maintainer of libftdi filed a [bug](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810374) against ola. I tried the simple fix (""s/libftdi-dev/libftdi1-dev/"" over debian/control), but that results in no FTDI plugin being compiled.
Someone will need to look at the changes that were made in the new FTDI library and update ola accordingly. Mean time, I'll have to still compile against the old library.
",True,"libftdi API update - Hi,
The Debian maintainer of libftdi filed a [bug](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=810374) against ola. I tried the simple fix (""s/libftdi-dev/libftdi1-dev/"" over debian/control), but that results in no FTDI plugin being compiled.
Someone will need to look at the changes that were made in the new FTDI library and update ola accordingly. Mean time, I'll have to still compile against the old library.
",1,libftdi api update hi the debian maintainer of libftdi filed a against ola i tried the simple fix s libftdi dev dev over debian control but that results in no ftdi plugin being compiled someone will need to look at the changes that were made in the new ftdi library and update ola accordingly mean time i ll have to still compile against the old library ,1
166316,14047311892.0,IssuesEvent,2020-11-02 06:53:46,JuanOliveros/git_web_practice,https://api.github.com/repos/JuanOliveros/git_web_practice,opened,Un commit que no sigue la convención de código o FIX a realizar,documentation,"La convención de código a seguir:
- Para los arreglos: `: `
- Para los arreglos con conflictos: `: `
Igualmente, solo hay 3 fixes a realizar. Al realizar uno y completarlo se creará un issue con las instrucciones a realizar para el siguiente.
Para realizar la corrección del mensaje de commit `git commit --amend` y `git commit push -f`
El último commit tiene el siguiente mensaje:
`se cambia el link de la imagen y el texto del titulo p5`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.",1.0,"Un commit que no sigue la convención de código o FIX a realizar - La convención de código a seguir:
- Para los arreglos: `: `
- Para los arreglos con conflictos: `: `
Igualmente, solo hay 3 fixes a realizar. Al realizar uno y completarlo se creará un issue con las instrucciones a realizar para el siguiente.
Para realizar la corrección del mensaje de commit `git commit --amend` y `git commit push -f`
El último commit tiene el siguiente mensaje:
`se cambia el link de la imagen y el texto del titulo p5`
Este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado.",0,un commit que no sigue la convención de código o fix a realizar la convención de código a seguir para los arreglos para los arreglos con conflictos igualmente solo hay fixes a realizar al realizar uno y completarlo se creará un issue con las instrucciones a realizar para el siguiente para realizar la corrección del mensaje de commit git commit amend y git commit push f el último commit tiene el siguiente mensaje se cambia el link de la imagen y el texto del titulo este issue es solo un recordatorio de la convención de comentarios en los commits y puede ser cerrado ,0
730825,25190570690.0,IssuesEvent,2022-11-12 00:01:48,simonbaird/tiddlyhost,https://api.github.com/repos/simonbaird/tiddlyhost,closed,Feature request: Clone wiki,priority,"Shortly after the noob stage, a TW user identifies ""favourite plugins, settings and customizations"" that he wants for all his wikis. I dare say this happens to *all* TW users. And those that go deeper into tiddlyverse probably develop fine tuned recurring setups (e.g *public* vs *private* wikis, *work* vs *non-work* etc). But to manually drag'n drop plugins + modified shadow tids and other tidbits is a rather annoying task.
Therefore, I wonder if TH could feature a simple ""Clone wiki"" feature in the *Your sites* page. It could appear as a menu option in the ""Actions"" button and it can lead to the same page as the ""Create site"" button, i.e to register a new TH site, but instead of an empty wiki it is a clone.
Thoughts?",1.0,"Feature request: Clone wiki - Shortly after the noob stage, a TW user identifies ""favourite plugins, settings and customizations"" that he wants for all his wikis. I dare say this happens to *all* TW users. And those that go deeper into tiddlyverse probably develop fine tuned recurring setups (e.g *public* vs *private* wikis, *work* vs *non-work* etc). But to manually drag'n drop plugins + modified shadow tids and other tidbits is a rather annoying task.
Therefore, I wonder if TH could feature a simple ""Clone wiki"" feature in the *Your sites* page. It could appear as a menu option in the ""Actions"" button and it can lead to the same page as the ""Create site"" button, i.e to register a new TH site, but instead of an empty wiki it is a clone.
Thoughts?",0,feature request clone wiki shortly after the noob stage a tw user identifies favourite plugins settings and customizations that he wants for all his wikis i dare say this happens to all tw users and those that go deeper into tiddlyverse probably develop fine tuned recurring setups e g public vs private wikis work vs non work etc but to manually drag n drop plugins modified shadow tids and other tidbits is a rather annoying task therefore i wonder if th could feature a simple clone wiki feature in the your sites page it could appear as a menu option in the actions button and it can lead to the same page as the create site button i e to register a new th site but instead of an empty wiki it is a clone thoughts ,0
1483,6416006777.0,IssuesEvent,2017-08-08 14:00:21,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,"vca_vapp errors in setting the computer_name to vm_name for API 5.5, 5.1, 1.5",affects_2.1 bug_report cloud vmware waiting_on_maintainer,"I have two cloud providers I use, one that uses API version 5.6, and one that uses 5.5
This module works great on version API version 5.6 but when I attempt to create a vapp on the vcd with API version 5.5 (of 5.1, or 1.5 which are the supported APIs on that cloud provider) -- I receive an error.
........
""failed"": true, ""msg"": ""Error in setting the computer_name to vm_name""}
I've run into this before with pyvcloud and basically just implemented my own 'renaming' method
https://gist.github.com/lasko/9ce419800d115e33a8c2
I'm not sure how this could be incorporated into this module, but it sure would save me a lot of headache.
",True,"vca_vapp errors in setting the computer_name to vm_name for API 5.5, 5.1, 1.5 - I have two cloud providers I use, one that uses API version 5.6, and one that uses 5.5
This module works great on version API version 5.6 but when I attempt to create a vapp on the vcd with API version 5.5 (of 5.1, or 1.5 which are the supported APIs on that cloud provider) -- I receive an error.
........
""failed"": true, ""msg"": ""Error in setting the computer_name to vm_name""}
I've run into this before with pyvcloud and basically just implemented my own 'renaming' method
https://gist.github.com/lasko/9ce419800d115e33a8c2
I'm not sure how this could be incorporated into this module, but it sure would save me a lot of headache.
",1,vca vapp errors in setting the computer name to vm name for api i have two cloud providers i use one that uses api version and one that uses this module works great on version api version but when i attempt to create a vapp on the vcd with api version of or which are the supported apis on that cloud provider i receive an error failed true msg error in setting the computer name to vm name i ve run into this before with pyvcloud and basically just implemented my own renaming method i m not sure how this could be incorporated into this module but it sure would save me a lot of headache ,1
643977,20961708107.0,IssuesEvent,2022-03-27 22:08:28,NerdyNomads/Text-Savvy,https://api.github.com/repos/NerdyNomads/Text-Savvy,closed,Create new endpoints to get user's workspaces and texts,high priority back-end,"Create new endpoints:
- In `persistence/accounts.js`:
- get the list of workspaces for the specified user ID
- In `persistence/workspaces.js`:
- get the list of texts for the specified workspace ID",1.0,"Create new endpoints to get user's workspaces and texts - Create new endpoints:
- In `persistence/accounts.js`:
- get the list of workspaces for the specified user ID
- In `persistence/workspaces.js`:
- get the list of texts for the specified workspace ID",0,create new endpoints to get user s workspaces and texts create new endpoints in persistence accounts js get the list of workspaces for the specified user id in persistence workspaces js get the list of texts for the specified workspace id,0
81777,7802950778.0,IssuesEvent,2018-06-10 18:10:38,Students-of-the-city-of-Kostroma/Student-timetable,https://api.github.com/repos/Students-of-the-city-of-Kostroma/Student-timetable,closed,Разработать сценарии функционального тестирования для Story 4,Functional test Script,Разработать сценарии функционального тестирования для Story #4 ,1.0,Разработать сценарии функционального тестирования для Story 4 - Разработать сценарии функционального тестирования для Story #4 ,0,разработать сценарии функционального тестирования для story разработать сценарии функционального тестирования для story ,0
350024,10477331945.0,IssuesEvent,2019-09-23 20:39:19,avalonmediasystem/avalon,https://api.github.com/repos/avalonmediasystem/avalon,closed,Thumbnail grabbing modal too big for small screens,6.x abandoned low priority wontfix,Thumbnail grabbing modal buttons appear below the fold for small screen sizes.,1.0,Thumbnail grabbing modal too big for small screens - Thumbnail grabbing modal buttons appear below the fold for small screen sizes.,0,thumbnail grabbing modal too big for small screens thumbnail grabbing modal buttons appear below the fold for small screen sizes ,0
890,4553165577.0,IssuesEvent,2016-09-13 03:00:35,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,Support starting stopped EC2 instances by tag,affects_1.7 aws cloud feature_idea waiting_on_maintainer,"##### Issue Type:
Feature Idea
##### Ansible Version:
ansible 1.7.2
##### Environment:
N/A
##### Summary:
Allow user to start all stopped EC2 instances associated with a particular tag.
##### Steps To Reproduce:
Extend the ec2 module to enable, say, starting all stopped instances tagged with key=Name, Value=ExtraPower
```
- local_action:
module: ec2
instance_tags:
Name: ExtraPower
state: running
```
##### Expected Results:
Any stopped instance with the associated tag will be started (any instance already running with the tag would be unaffected).
##### Actual Results:
N/A",True,"Support starting stopped EC2 instances by tag - ##### Issue Type:
Feature Idea
##### Ansible Version:
ansible 1.7.2
##### Environment:
N/A
##### Summary:
Allow user to start all stopped EC2 instances associated with a particular tag.
##### Steps To Reproduce:
Extend the ec2 module to enable, say, starting all stopped instances tagged with key=Name, Value=ExtraPower
```
- local_action:
module: ec2
instance_tags:
Name: ExtraPower
state: running
```
##### Expected Results:
Any stopped instance with the associated tag will be started (any instance already running with the tag would be unaffected).
##### Actual Results:
N/A",1,support starting stopped instances by tag issue type feature idea ansible version ansible environment n a summary allow user to start all stopped instances associated with a particular tag steps to reproduce extend the module to enable say starting all stopped instances tagged with key name value extrapower local action module instance tags name extrapower state running expected results any stopped instance with the associated tag will be started any instance already running with the tag would be unaffected actual results n a,1
3243,12368706966.0,IssuesEvent,2020-05-18 14:13:32,Kashdeya/Tiny-Progressions,https://api.github.com/repos/Kashdeya/Tiny-Progressions,closed,Suggestion: Lamps Texture,Version not Maintainted,"I love the lamps with the glass and torch, however it would be nice if the torch did not render.
When building walls or ceilings out of it looks ugly. Could there be a config option to turn off the rendering of the torch?",True,"Suggestion: Lamps Texture - I love the lamps with the glass and torch, however it would be nice if the torch did not render.
When building walls or ceilings out of it looks ugly. Could there be a config option to turn off the rendering of the torch?",1,suggestion lamps texture i love the lamps with the glass and torch however it would be nice if the torch did not render when building walls or ceilings out of it looks ugly could there be a config option to turn off the rendering of the torch ,1
172165,21040461882.0,IssuesEvent,2022-03-31 11:51:25,samq-ghdemo/SEARCH-NCJIS-nibrs,https://api.github.com/repos/samq-ghdemo/SEARCH-NCJIS-nibrs,opened,CVE-2022-27772 (Medium) detected in multiple libraries,security vulnerability,"## CVE-2022-27772 - Medium Severity Vulnerability
Vulnerable Libraries - spring-boot-2.0.5.RELEASE.jar, spring-boot-1.5.7.RELEASE.jar, spring-boot-2.1.5.RELEASE.jar
Path to dependency file: /tools/nibrs-xmlfile/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar
Path to dependency file: /tools/nibrs-fbi-service/pom.xml
Path to vulnerable library: /tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-boot-1.5.7.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/1.5.7.RELEASE/spring-boot-1.5.7.RELEASE.jar
** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.
Path to dependency file: /tools/nibrs-xmlfile/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/2.0.5.RELEASE/spring-boot-2.0.5.RELEASE.jar
Path to dependency file: /tools/nibrs-fbi-service/pom.xml
Path to vulnerable library: /tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/spring-boot-1.5.7.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/boot/spring-boot/1.5.7.RELEASE/spring-boot-1.5.7.RELEASE.jar
** UNSUPPORTED WHEN ASSIGNED ** spring-boot versions prior to version v2.2.11.RELEASE was vulnerable to temporary directory hijacking. This vulnerability impacted the org.springframework.boot.web.server.AbstractConfigurableWebServerFactory.createTempDir method. NOTE: This vulnerability only affects products and/or versions that are no longer supported by the maintainer.
***
- [ ] Check this box to open an automated fix PR
",0,cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries spring boot release jar spring boot release jar spring boot release jar spring boot release jar spring boot library home page a href path to dependency file tools nibrs xmlfile pom xml path to vulnerable library home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar web nibrs web target nibrs web web inf lib spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy x spring boot release jar vulnerable library spring boot release jar spring boot library home page a href path to dependency file tools nibrs fbi service pom xml path to vulnerable library tools nibrs fbi service target nibrs fbi service web inf lib spring boot release jar home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy x spring boot release jar vulnerable library spring boot release jar spring boot library home page a href path to dependency file tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository org springframework boot spring boot release spring boot release jar dependency hierarchy spring boot starter web release jar root library spring boot starter release jar x spring boot release jar vulnerable library found in base branch master vulnerability details unsupported when assigned spring boot versions prior to version release was vulnerable to temporary directory hijacking this vulnerability impacted the org springframework boot web server abstractconfigurablewebserverfactory createtempdir method note this vulnerability only affects products and or versions that are no longer supported by the maintainer publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework boot spring boot release check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false packagetype java groupid org springframework boot packagename spring boot packageversion release packagefilepaths istransitivedependency false dependencytree org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false packagetype java groupid org springframework boot packagename spring boot packageversion release packagefilepaths istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter release org springframework boot spring boot release isminimumfixversionavailable true minimumfixversion org springframework boot spring boot release isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails unsupported when assigned spring boot versions prior to version release was vulnerable to temporary directory hijacking this vulnerability impacted the org springframework boot web server abstractconfigurablewebserverfactory createtempdir method note this vulnerability only affects products and or versions that are no longer supported by the maintainer vulnerabilityurl ,0
5090,26006280933.0,IssuesEvent,2022-12-20 19:42:58,centerofci/mathesar,https://api.github.com/repos/centerofci/mathesar,opened,Implement custom icons for column extracting and moving,type: enhancement work: frontend status: ready restricted: maintainers,"Our Figma design specifies custom icons for these actions, but we don't have them yet.
| Figma | App |
| -- | -- |
|  |  |
",True,"Implement custom icons for column extracting and moving - Our Figma design specifies custom icons for these actions, but we don't have them yet.
| Figma | App |
| -- | -- |
|  |  |
",1,implement custom icons for column extracting and moving our figma design specifies custom icons for these actions but we don t have them yet figma app ,1
4639,24024636243.0,IssuesEvent,2022-09-15 10:26:13,centerofci/mathesar,https://api.github.com/repos/centerofci/mathesar,closed,Filters not applied when calculating count of items within group,type: bug work: backend status: ready restricted: maintainers,"## Reproduce
1. Go to the Library Management schema.
1. Load the Table Page for the Publications table.
1. Group by ""Publication Year"".
1. Observe the first group, for year 1900, to contain 10 records and to display a ""Count"" of 10. Good.
1. Add a filter condition requiring Title to contain the string ""To"".
1. Observe the first group, for year 1900, to contain 2 records.
1. Expect ""Count"" to display 2.
1. Observe ""Count"" displays 10.

",True,"Filters not applied when calculating count of items within group - ## Reproduce
1. Go to the Library Management schema.
1. Load the Table Page for the Publications table.
1. Group by ""Publication Year"".
1. Observe the first group, for year 1900, to contain 10 records and to display a ""Count"" of 10. Good.
1. Add a filter condition requiring Title to contain the string ""To"".
1. Observe the first group, for year 1900, to contain 2 records.
1. Expect ""Count"" to display 2.
1. Observe ""Count"" displays 10.

",1,filters not applied when calculating count of items within group reproduce go to the library management schema load the table page for the publications table group by publication year observe the first group for year to contain records and to display a count of good add a filter condition requiring title to contain the string to observe the first group for year to contain records expect count to display observe count displays ,1
48786,10278924960.0,IssuesEvent,2019-08-25 18:26:21,Serrin/Celestra,https://api.github.com/repos/Serrin/Celestra,closed,Changes in v3.0.1,CUT closed - done or fixed code documentation type - bug type - enhancement,"1. Documentation and pdf fixes.
2. Add a new AJAX function (`ajax();`) which can replace the existing funtions.
3. Deprecate the old AJAX functions, except the shorthands.
4. Replace a the old AJAX shorthands functions with new functions which use the the `ajax();`
5. Deprecate these functions: `isArray();`, `isInteger();`
6. Replace these functions with new versions: `arrayUnion();`, `arrayIntersection();`, `arrayDifference();`, `arraySymmetricDifference();`, `setUnion();`, `setIntersection();`, `setDifference();`, `setSymmetricDifference();`, `isSuperset();`, `min();`, `minIndex();`, `max();`, `maxIndex();`, `arrayRange();`, `unzip();`, `reverseOf();`, `sortOf();`, `domSiblings();`, `getDoNotTrack();`, `isPrimitive();`, `isArraylike();`
7. Remove the undocumented function `__toArray__();`
",1.0,"Changes in v3.0.1 - 1. Documentation and pdf fixes.
2. Add a new AJAX function (`ajax();`) which can replace the existing funtions.
3. Deprecate the old AJAX functions, except the shorthands.
4. Replace a the old AJAX shorthands functions with new functions which use the the `ajax();`
5. Deprecate these functions: `isArray();`, `isInteger();`
6. Replace these functions with new versions: `arrayUnion();`, `arrayIntersection();`, `arrayDifference();`, `arraySymmetricDifference();`, `setUnion();`, `setIntersection();`, `setDifference();`, `setSymmetricDifference();`, `isSuperset();`, `min();`, `minIndex();`, `max();`, `maxIndex();`, `arrayRange();`, `unzip();`, `reverseOf();`, `sortOf();`, `domSiblings();`, `getDoNotTrack();`, `isPrimitive();`, `isArraylike();`
7. Remove the undocumented function `__toArray__();`
",0,changes in documentation and pdf fixes add a new ajax function ajax which can replace the existing funtions deprecate the old ajax functions except the shorthands replace a the old ajax shorthands functions with new functions which use the the ajax deprecate these functions isarray isinteger replace these functions with new versions arrayunion arrayintersection arraydifference arraysymmetricdifference setunion setintersection setdifference setsymmetricdifference issuperset min minindex max maxindex arrayrange unzip reverseof sortof domsiblings getdonottrack isprimitive isarraylike remove the undocumented function toarray ,0
1749,6574943380.0,IssuesEvent,2017-09-11 14:34:10,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,shell command not escaping double quotes correctly,affects_2.2 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
shell-module
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (detached HEAD 44faad0593) last updated 2016/10/18 10:21:47 (GMT +000)
```
##### OS / ENVIRONMENT
Debian 8.6
Linux machine0 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux
##### SUMMARY
For some formatting reasons, double curly braces are needed in a shell command. This worked in the past. But how, a hashtag is somehow produces by that workaround. See the steps to reproduce.
##### STEPS TO REPRODUCE
```
- name: Get version of current docker-engine
shell: ""/usr/bin/docker version --format '{{ '{{' }}.Client.Version{{ '}}' }}' 2>/dev/null | true""
register: installed_docker_version
- debug: var=installed_docker_version
when: installed_docker_version is defined
```
##### EXPECTED RESULTS
The actual version of docker
##### ACTUAL RESULTS
The result is:
```
ok: [machine_1] => {
""installed_docker_version"": {
""changed"": false,
""cmd"": ""/usr/bin/docker version --format '{#.Client.Version#}' 2>/dev/null | true"",
""delta"": ""0:00:00.012714"",
""end"": ""2016-10-18 15:40:21.770477"",
""rc"": 0,
""start"": ""2016-10-18 15:40:21.757763"",
""stderr"": """",
""stdout"": """",
""stdout_lines"": [],
""warnings"": []
}
}
```
see the part `cmd` of that command. This is clearly wrong.
It is also not just a debugging issue, because the correct command would have a correct result. This one just returns nothing.
",True,"shell command not escaping double quotes correctly - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
shell-module
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (detached HEAD 44faad0593) last updated 2016/10/18 10:21:47 (GMT +000)
```
##### OS / ENVIRONMENT
Debian 8.6
Linux machine0 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux
##### SUMMARY
For some formatting reasons, double curly braces are needed in a shell command. This worked in the past. But how, a hashtag is somehow produces by that workaround. See the steps to reproduce.
##### STEPS TO REPRODUCE
```
- name: Get version of current docker-engine
shell: ""/usr/bin/docker version --format '{{ '{{' }}.Client.Version{{ '}}' }}' 2>/dev/null | true""
register: installed_docker_version
- debug: var=installed_docker_version
when: installed_docker_version is defined
```
##### EXPECTED RESULTS
The actual version of docker
##### ACTUAL RESULTS
The result is:
```
ok: [machine_1] => {
""installed_docker_version"": {
""changed"": false,
""cmd"": ""/usr/bin/docker version --format '{#.Client.Version#}' 2>/dev/null | true"",
""delta"": ""0:00:00.012714"",
""end"": ""2016-10-18 15:40:21.770477"",
""rc"": 0,
""start"": ""2016-10-18 15:40:21.757763"",
""stderr"": """",
""stdout"": """",
""stdout_lines"": [],
""warnings"": []
}
}
```
see the part `cmd` of that command. This is clearly wrong.
It is also not just a debugging issue, because the correct command would have a correct result. This one just returns nothing.
",1,shell command not escaping double quotes correctly issue type bug report component name shell module ansible version ansible detached head last updated gmt os environment debian linux smp debian gnu linux summary for some formatting reasons double curly braces are needed in a shell command this worked in the past but how a hashtag is somehow produces by that workaround see the steps to reproduce steps to reproduce name get version of current docker engine shell usr bin docker version format client version dev null true register installed docker version debug var installed docker version when installed docker version is defined expected results the actual version of docker actual results the result is ok installed docker version changed false cmd usr bin docker version format client version dev null true delta end rc start stderr stdout stdout lines warnings see the part cmd of that command this is clearly wrong it is also not just a debugging issue because the correct command would have a correct result this one just returns nothing ,1
42359,6975511753.0,IssuesEvent,2017-12-12 07:27:06,php-deal/framework,https://api.github.com/repos/php-deal/framework,closed,Library name in Scrutinizer and Packagist,documentation enhancement,"To be consistent, scrutinizer-ci.com/g/**lisachenko/php-deal** and packagist.org/packages/**lisachenko/php-deal** should be renamed or a new project should be created :)",1.0,"Library name in Scrutinizer and Packagist - To be consistent, scrutinizer-ci.com/g/**lisachenko/php-deal** and packagist.org/packages/**lisachenko/php-deal** should be renamed or a new project should be created :)",0,library name in scrutinizer and packagist to be consistent scrutinizer ci com g lisachenko php deal and packagist org packages lisachenko php deal should be renamed or a new project should be created ,0
2743,9769562074.0,IssuesEvent,2019-06-06 08:52:08,zaproxy/zaproxy,https://api.github.com/repos/zaproxy/zaproxy,closed,Provide universal formatter / code formatting guidelines,Maintainability Type-Task,"It would be great if the zap team could provide an eclipse code formatter file which can be imported to eclipse. So everyone has the same code style when doing Code -> Format in eclipse.
No more whitespace issues in pull requests!",True,"Provide universal formatter / code formatting guidelines - It would be great if the zap team could provide an eclipse code formatter file which can be imported to eclipse. So everyone has the same code style when doing Code -> Format in eclipse.
No more whitespace issues in pull requests!",1,provide universal formatter code formatting guidelines it would be great if the zap team could provide an eclipse code formatter file which can be imported to eclipse so everyone has the same code style when doing code format in eclipse no more whitespace issues in pull requests ,1
1548,6572237228.0,IssuesEvent,2017-09-11 00:26:34,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,Bug in error handling Librato Annotation module,affects_2.0 bug_report waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
monitoring/librato_annotation
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
Apparently my Librato credentials are incorrect. But the var 'e' at https://github.com/ansible/ansible-modules-extras/blob/2a0c5e2a8fd7ed3ce6d6eedd08e85e01e1617113/monitoring/librato_annotation.py#L136 seems to be not placed right. I have no knowledge about Python, but seems like a bug to me.
Result:
```
fatal: [IP]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\"", line 3003, in \r\n main()\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\"", line 157, in main\r\n post_annotation(module)\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\"", line 137, in post_annotation\r\n module.fail_json(msg=\""Request Failed\"", reason=e.reason)\r\nNameError: global name 'e' is not defined\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false}
```
##### STEPS TO REPRODUCE
```
- name: Annotate Librato
librato_annotation:
user: ""{{ secret_librato_username }}""
api_key: ""{{ secret_librato_api_key }}""
title: New deploy
name: app-deploys
source: ""{{ application_env }}""
when: '""production"" in group_names'
```
##### EXPECTED RESULTS
I expect a 'normal' error message with ""Request Failed""
##### ACTUAL RESULTS
Got a stacktrace about ""global name 'e' is not defined""
",True,"Bug in error handling Librato Annotation module - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
monitoring/librato_annotation
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
Ubuntu 14.04
##### SUMMARY
Apparently my Librato credentials are incorrect. But the var 'e' at https://github.com/ansible/ansible-modules-extras/blob/2a0c5e2a8fd7ed3ce6d6eedd08e85e01e1617113/monitoring/librato_annotation.py#L136 seems to be not placed right. I have no knowledge about Python, but seems like a bug to me.
Result:
```
fatal: [IP]: FAILED! => {""changed"": false, ""failed"": true, ""module_stderr"": """", ""module_stdout"": ""Traceback (most recent call last):\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\"", line 3003, in \r\n main()\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\"", line 157, in main\r\n post_annotation(module)\r\n File \""/home/ubuntu/.ansible/tmp/ansible-tmp-1463652618.03-36649702854390/librato_annotation\"", line 137, in post_annotation\r\n module.fail_json(msg=\""Request Failed\"", reason=e.reason)\r\nNameError: global name 'e' is not defined\r\n"", ""msg"": ""MODULE FAILURE"", ""parsed"": false}
```
##### STEPS TO REPRODUCE
```
- name: Annotate Librato
librato_annotation:
user: ""{{ secret_librato_username }}""
api_key: ""{{ secret_librato_api_key }}""
title: New deploy
name: app-deploys
source: ""{{ application_env }}""
when: '""production"" in group_names'
```
##### EXPECTED RESULTS
I expect a 'normal' error message with ""Request Failed""
##### ACTUAL RESULTS
Got a stacktrace about ""global name 'e' is not defined""
",1,bug in error handling librato annotation module issue type bug report component name monitoring librato annotation ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides os environment ubuntu summary apparently my librato credentials are incorrect but the var e at seems to be not placed right i have no knowledge about python but seems like a bug to me result fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home ubuntu ansible tmp ansible tmp librato annotation line in r n main r n file home ubuntu ansible tmp ansible tmp librato annotation line in main r n post annotation module r n file home ubuntu ansible tmp ansible tmp librato annotation line in post annotation r n module fail json msg request failed reason e reason r nnameerror global name e is not defined r n msg module failure parsed false steps to reproduce name annotate librato librato annotation user secret librato username api key secret librato api key title new deploy name app deploys source application env when production in group names expected results i expect a normal error message with request failed actual results got a stacktrace about global name e is not defined ,1
5147,26239315361.0,IssuesEvent,2023-01-05 10:09:08,OpenRefine/OpenRefine,https://api.github.com/repos/OpenRefine/OpenRefine,closed,Extensions defining new importing controllers need to import theme.less from OpenRefine,bug UI maintainability extension,"Some extensions define new ""importing controllers"" (for instance the gdata and database extension). This is the expected extension point when implementing an importing mechanism which loads data from other sources.
As part of this, extensions need to define parsing pages which look like the importer preview we have in OpenRefine.
For the styling of those parsing pages, both the gdata and database extensions rely on OpenRefine-internal `.less` files that they explicitly import on their own side.
This looks like this:
https://github.com/OpenRefine/OpenRefine/blob/5746951ec069549345eb368ea74a77c3fec9a912/extensions/database/module/styles/theme.less#L30
While this technique avoids duplicating files from the core to the extensions, this is far from ideal because this mechanism is not available to extensions developed outside this repository.
Intuitively, it should be possible for any extension to use the standard styling of the parsing page, without having to redefine it itself. Of course, we still want extensions to be able to add some custom CSS to this page, should they need to.
Ideally, it would be great if we could fix this in a backwards-compatible way (meaning that extensions which have been built for the current situation would keep working with the new system), although this is likely to be difficult. And it is probably not so important since we can easily fix the extensions in this repository.
@antoine2711 @WaltonG this is an issue you are likely to run into (or have already run into?) for the SPARQL extension, no?",True,"Extensions defining new importing controllers need to import theme.less from OpenRefine - Some extensions define new ""importing controllers"" (for instance the gdata and database extension). This is the expected extension point when implementing an importing mechanism which loads data from other sources.
As part of this, extensions need to define parsing pages which look like the importer preview we have in OpenRefine.
For the styling of those parsing pages, both the gdata and database extensions rely on OpenRefine-internal `.less` files that they explicitly import on their own side.
This looks like this:
https://github.com/OpenRefine/OpenRefine/blob/5746951ec069549345eb368ea74a77c3fec9a912/extensions/database/module/styles/theme.less#L30
While this technique avoids duplicating files from the core to the extensions, this is far from ideal because this mechanism is not available to extensions developed outside this repository.
Intuitively, it should be possible for any extension to use the standard styling of the parsing page, without having to redefine it itself. Of course, we still want extensions to be able to add some custom CSS to this page, should they need to.
Ideally, it would be great if we could fix this in a backwards-compatible way (meaning that extensions which have been built for the current situation would keep working with the new system), although this is likely to be difficult. And it is probably not so important since we can easily fix the extensions in this repository.
@antoine2711 @WaltonG this is an issue you are likely to run into (or have already run into?) for the SPARQL extension, no?",1,extensions defining new importing controllers need to import theme less from openrefine some extensions define new importing controllers for instance the gdata and database extension this is the expected extension point when implementing an importing mechanism which loads data from other sources as part of this extensions need to define parsing pages which look like the importer preview we have in openrefine for the styling of those parsing pages both the gdata and database extensions rely on openrefine internal less files that they explicitly import on their own side this looks like this while this technique avoids duplicating files from the core to the extensions this is far from ideal because this mechanism is not available to extensions developed outside this repository intuitively it should be possible for any extension to use the standard styling of the parsing page without having to redefine it itself of course we still want extensions to be able to add some custom css to this page should they need to ideally it would be great if we could fix this in a backwards compatible way meaning that extensions which have been built for the current situation would keep working with the new system although this is likely to be difficult and it is probably not so important since we can easily fix the extensions in this repository waltong this is an issue you are likely to run into or have already run into for the sparql extension no ,1
5318,26839240214.0,IssuesEvent,2023-02-02 22:23:05,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,"add deploy --outputs-file option, like AWS CDK",type/feature area/deploy stage/pm-review maintainer/need-followup,"### Describe your idea/feature/enhancement
I wish SAM CLI would have an `--outputs-file` optional CLI argument for `sam deploy`, like the one `cdk deploy` has, see [the CDK docs](https://docs.aws.amazon.com/cdk/latest/guide/cli.html#w109aac23b7c33c13). Right now, it only prints the outputs in an ASCII table, mixed in with all the other stuff that goes to stdout, which of course is not machine readable, forcing a user to have to write a separate program to query CloudFormation to get the outputs.",True,"add deploy --outputs-file option, like AWS CDK - ### Describe your idea/feature/enhancement
I wish SAM CLI would have an `--outputs-file` optional CLI argument for `sam deploy`, like the one `cdk deploy` has, see [the CDK docs](https://docs.aws.amazon.com/cdk/latest/guide/cli.html#w109aac23b7c33c13). Right now, it only prints the outputs in an ASCII table, mixed in with all the other stuff that goes to stdout, which of course is not machine readable, forcing a user to have to write a separate program to query CloudFormation to get the outputs.",1,add deploy outputs file option like aws cdk describe your idea feature enhancement i wish sam cli would have an outputs file optional cli argument for sam deploy like the one cdk deploy has see right now it only prints the outputs in an ascii table mixed in with all the other stuff that goes to stdout which of course is not machine readable forcing a user to have to write a separate program to query cloudformation to get the outputs ,1
66410,16609544730.0,IssuesEvent,2021-06-02 09:46:26,Crocoblock/suggestions,https://api.github.com/repos/Crocoblock/suggestions,closed,Problem displaying discount percentage on the single page of the jet woobuilder plugin for products that do not have a discount.,JetWooBuilder,"When we use this widget (https://prnt.sc/1363q97) to display the discount percentage, if the product does not have a discount, the widget will remain empty. https://prnt.sc/1363mf2
Please correct this distance. Thanks
",1.0,"Problem displaying discount percentage on the single page of the jet woobuilder plugin for products that do not have a discount. - When we use this widget (https://prnt.sc/1363q97) to display the discount percentage, if the product does not have a discount, the widget will remain empty. https://prnt.sc/1363mf2
Please correct this distance. Thanks
",0,problem displaying discount percentage on the single page of the jet woobuilder plugin for products that do not have a discount when we use this widget to display the discount percentage if the product does not have a discount the widget will remain empty please correct this distance thanks ,0
1522,6572215715.0,IssuesEvent,2017-09-11 00:09:26,ansible/ansible-modules-extras,https://api.github.com/repos/ansible/ansible-modules-extras,closed,nmcli: type is required for create/modify actions,affects_2.0 bug_report docs_report networking waiting_on_maintainer,"##### Issue Type:
- Bug Report
##### Plugin Name:
nmcli
##### Ansible Version:
```
ansible 2.0.1.0
config file = /root/ansible-boulder/ansible.cfg
configured module search path = /usr/share/ansible/
```
##### Ansible Configuration:
##### Environment:
EL7
##### Summary:
In the docs, type is listed as not required. However, for create/modify actions, if no type is specified run_command is called with empty arguments resulting in a very hard to understand error message. It might also make sense to have a default type of ""ethernet"".
##### Steps To Reproduce:
```
nmcli: state=present conn_name=CORA dhcp_client_id={{ ansible_fqdn }}
```
##### Expected Results:
Error: type= argument needed to modify connection.
##### Actual Results:
```
fatal: [barry.cora.nwra.com]: FAILED! => {""changed"": false, ""cmd"": """", ""failed"": true, ""msg"": ""Traceback (most recent call last):\n File \""/root/.ansible/tmp/ansible-tmp-1457732364.94-222498069462374/nmcli\"", line 2944, in run_command\n cmd = subprocess.Popen(args, **kwargs)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 711, in __init__\n errread, errwrite)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 1207, in _execute_child\n executable = args[0]\nIndexError: list index out of range\n"", ""rc"": 257}
```
",True,"nmcli: type is required for create/modify actions - ##### Issue Type:
- Bug Report
##### Plugin Name:
nmcli
##### Ansible Version:
```
ansible 2.0.1.0
config file = /root/ansible-boulder/ansible.cfg
configured module search path = /usr/share/ansible/
```
##### Ansible Configuration:
##### Environment:
EL7
##### Summary:
In the docs, type is listed as not required. However, for create/modify actions, if no type is specified run_command is called with empty arguments resulting in a very hard to understand error message. It might also make sense to have a default type of ""ethernet"".
##### Steps To Reproduce:
```
nmcli: state=present conn_name=CORA dhcp_client_id={{ ansible_fqdn }}
```
##### Expected Results:
Error: type= argument needed to modify connection.
##### Actual Results:
```
fatal: [barry.cora.nwra.com]: FAILED! => {""changed"": false, ""cmd"": """", ""failed"": true, ""msg"": ""Traceback (most recent call last):\n File \""/root/.ansible/tmp/ansible-tmp-1457732364.94-222498069462374/nmcli\"", line 2944, in run_command\n cmd = subprocess.Popen(args, **kwargs)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 711, in __init__\n errread, errwrite)\n File \""/usr/lib64/python2.7/subprocess.py\"", line 1207, in _execute_child\n executable = args[0]\nIndexError: list index out of range\n"", ""rc"": 257}
```
",1,nmcli type is required for create modify actions issue type bug report plugin name nmcli ansible version ansible config file root ansible boulder ansible cfg configured module search path usr share ansible ansible configuration please mention any settings you ve changed added removed in ansible cfg or using the ansible environment variables environment summary in the docs type is listed as not required however for create modify actions if no type is specified run command is called with empty arguments resulting in a very hard to understand error message it might also make sense to have a default type of ethernet steps to reproduce for bugs please show exactly how to reproduce the problem for new features show how the feature would be used nmcli state present conn name cora dhcp client id ansible fqdn expected results error type argument needed to modify connection actual results fatal failed changed false cmd failed true msg traceback most recent call last n file root ansible tmp ansible tmp nmcli line in run command n cmd subprocess popen args kwargs n file usr subprocess py line in init n errread errwrite n file usr subprocess py line in execute child n executable args nindexerror list index out of range n rc ,1
412969,12058898698.0,IssuesEvent,2020-04-15 18:17:16,AugurProject/augur,https://api.github.com/repos/AugurProject/augur,opened,Transfer Modal Copy upgrade,Needed for V2 launch Priority: High,"The new Transfer button which brings up the Modal still Says Withdraw funds in it
Need to change Withdraw to Transfer",1.0,"Transfer Modal Copy upgrade - The new Transfer button which brings up the Modal still Says Withdraw funds in it
Need to change Withdraw to Transfer",0,transfer modal copy upgrade the new transfer button which brings up the modal still says withdraw funds in it need to change withdraw to transfer,0
1873,6577499468.0,IssuesEvent,2017-09-12 01:20:29,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,setting password when creating user causes job state to always be changed,affects_2.0 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
user module
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing
##### OS / ENVIRONMENT
OSX Yosemite, El Cap
##### SUMMARY
when specifying a user, if a password is specified the user play is always changed unless update_password: on_create is specified. when providing the hash explicitly, vs the plaintext password, this should not be necessary. this behavior does not exist on RHEL5/6/7.
##### STEPS TO REPRODUCE
```
- name: create user user (osx)
user:
name: newuser
state: present
password: ""{{ user_hash }}""
#update_password: on_create
groups: 'admin'
append: yes
shell: /bin/bash
```
##### EXPECTED RESULTS
Similar behavior to rhel application of this play, where the play only reports changed when there is a change to the system state.
##### ACTUAL RESULTS
Play results in state changed unless update_password: on_create is specified.
```
TASK [testuser_account_setup : create testuser user (osx)] *************************
task path: /Users/brad8328/repos/ansible-repo/roles/testuser_account_setup/tasks/main.yml:27
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost2 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746 `"" )'""'""''
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost1 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127 `"" )'""'""''
PUT /var/folders/d9/7d_sb4mj1bz1jywsy8rrpj61n4xnb5/T/tmpBEbtI3 TO /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127/user
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r '[testhost1]'
PUT /var/folders/d9/7d_sb4mj1bz1jywsy8rrpj61n4xnb5/T/tmp8mMX8j TO /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746/user
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r '[testhost2]'
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost1 '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=ixheeyzjjhmvtkxfyujaeglkekyibvqs] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-ixheeyzjjhmvtkxfyujaeglkekyibvqs; /bin/sh -c '""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127/user; rm -rf ""/Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""''""'""'""'""'""'""'""'""''""'""''
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost2 '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=nckvjyuwnrxjapcanzwalcynwzwjeuft] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-nckvjyuwnrxjapcanzwalcynwzwjeuft; /bin/sh -c '""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746/user; rm -rf ""/Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""''""'""'""'""'""'""'""'""''""'""''
changed: [testhost1] => {""append"": true, ""changed"": true, ""comment"": """", ""group"": 4294967295, ""groups"": ""admin"", ""home"": ""/Users/testuser"", ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""admin"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""testuser"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": null, ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on ansibleusers-imac.esri.com"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": null, ""update_password"": ""always""}, ""module_name"": ""user""}, ""move_home"": false, ""name"": ""testuser"", ""password"": ""NOT_LOGGING_PASSWORD"", ""shell"": ""/bin/bash"", ""state"": ""present"", ""uid"": 502}
changed: [testhost2] => {""append"": true, ""changed"": true, ""comment"": """", ""group"": 4294967295, ""groups"": ""admin"", ""home"": ""/Users/testuser"", ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""admin"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""testuser"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": null, ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on nerumoancer-vm.esri.com"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": null, ""update_password"": ""always""}, ""module_name"": ""user""}, ""move_home"": false, ""name"": ""testuser"", ""password"": ""NOT_LOGGING_PASSWORD"", ""shell"": ""/bin/bash"", ""state"": ""present"", ""uid"": 503}
```
",True,"setting password when creating user causes job state to always be changed -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
user module
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing
##### OS / ENVIRONMENT
OSX Yosemite, El Cap
##### SUMMARY
when specifying a user, if a password is specified the user play is always changed unless update_password: on_create is specified. when providing the hash explicitly, vs the plaintext password, this should not be necessary. this behavior does not exist on RHEL5/6/7.
##### STEPS TO REPRODUCE
```
- name: create user user (osx)
user:
name: newuser
state: present
password: ""{{ user_hash }}""
#update_password: on_create
groups: 'admin'
append: yes
shell: /bin/bash
```
##### EXPECTED RESULTS
Similar behavior to rhel application of this play, where the play only reports changed when there is a change to the system state.
##### ACTUAL RESULTS
Play results in state changed unless update_password: on_create is specified.
```
TASK [testuser_account_setup : create testuser user (osx)] *************************
task path: /Users/brad8328/repos/ansible-repo/roles/testuser_account_setup/tasks/main.yml:27
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost2 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746 `"" )'""'""''
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost1 '/bin/sh -c '""'""'( umask 22 && mkdir -p ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127 `"" && echo ""` echo $HOME/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127 `"" )'""'""''
PUT /var/folders/d9/7d_sb4mj1bz1jywsy8rrpj61n4xnb5/T/tmpBEbtI3 TO /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127/user
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r '[testhost1]'
PUT /var/folders/d9/7d_sb4mj1bz1jywsy8rrpj61n4xnb5/T/tmp8mMX8j TO /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746/user
SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r '[testhost2]'
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost1 '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=ixheeyzjjhmvtkxfyujaeglkekyibvqs] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-ixheeyzjjhmvtkxfyujaeglkekyibvqs; /bin/sh -c '""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127/user; rm -rf ""/Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-221933915191127/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""''""'""'""'""'""'""'""'""''""'""''
ESTABLISH SSH CONNECTION FOR USER: ansibleuser
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ansibleuser -o ConnectTimeout=10 -o ControlPath=/Users/brad8328/.ansible/cp/ansible-ssh-%h-%p-%r -tt testhost2 '/bin/sh -c '""'""'sudo -H -S -p ""[sudo via ansible, key=nckvjyuwnrxjapcanzwalcynwzwjeuft] password: "" -u root /bin/sh -c '""'""'""'""'""'""'""'""'echo BECOME-SUCCESS-nckvjyuwnrxjapcanzwalcynwzwjeuft; /bin/sh -c '""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746/user; rm -rf ""/Users/ansibleuser/.ansible/tmp/ansible-tmp-1459977514.94-56562468891746/"" > /dev/null 2>&1'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""'""''""'""'""'""'""'""'""'""''""'""''
changed: [testhost1] => {""append"": true, ""changed"": true, ""comment"": """", ""group"": 4294967295, ""groups"": ""admin"", ""home"": ""/Users/testuser"", ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""admin"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""testuser"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": null, ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on ansibleusers-imac.esri.com"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": null, ""update_password"": ""always""}, ""module_name"": ""user""}, ""move_home"": false, ""name"": ""testuser"", ""password"": ""NOT_LOGGING_PASSWORD"", ""shell"": ""/bin/bash"", ""state"": ""present"", ""uid"": 502}
changed: [testhost2] => {""append"": true, ""changed"": true, ""comment"": """", ""group"": 4294967295, ""groups"": ""admin"", ""home"": ""/Users/testuser"", ""invocation"": {""module_args"": {""append"": true, ""comment"": null, ""createhome"": true, ""expires"": null, ""force"": false, ""generate_ssh_key"": null, ""group"": null, ""groups"": ""admin"", ""home"": null, ""login_class"": null, ""move_home"": false, ""name"": ""testuser"", ""non_unique"": false, ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""remove"": false, ""shell"": ""/bin/bash"", ""skeleton"": null, ""ssh_key_bits"": ""2048"", ""ssh_key_comment"": ""ansible-generated on nerumoancer-vm.esri.com"", ""ssh_key_file"": null, ""ssh_key_passphrase"": null, ""ssh_key_type"": ""rsa"", ""state"": ""present"", ""system"": false, ""uid"": null, ""update_password"": ""always""}, ""module_name"": ""user""}, ""move_home"": false, ""name"": ""testuser"", ""password"": ""NOT_LOGGING_PASSWORD"", ""shell"": ""/bin/bash"", ""state"": ""present"", ""uid"": 503}
```
",1,setting password when creating user causes job state to always be changed issue type bug report component name user module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables nothing os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific osx yosemite el cap summary when specifying a user if a password is specified the user play is always changed unless update password on create is specified when providing the hash explicitly vs the plaintext password this should not be necessary this behavior does not exist on steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create user user osx user name newuser state present password user hash update password on create groups admin append yes shell bin bash expected results similar behavior to rhel application of this play where the play only reports changed when there is a change to the system state actual results play results in state changed unless update password on create is specified task task path users repos ansible repo roles testuser account setup tasks main yml establish ssh connection for user ansibleuser ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansibleuser o connecttimeout o controlpath users ansible cp ansible ssh h p r tt bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp establish ssh connection for user ansibleuser ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansibleuser o connecttimeout o controlpath users ansible cp ansible ssh h p r tt bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders t to users ansibleuser ansible tmp ansible tmp user ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansibleuser o connecttimeout o controlpath users ansible cp ansible ssh h p r put var folders t to users ansibleuser ansible tmp ansible tmp user ssh exec sftp b c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansibleuser o connecttimeout o controlpath users ansible cp ansible ssh h p r establish ssh connection for user ansibleuser ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansibleuser o connecttimeout o controlpath users ansible cp ansible ssh h p r tt bin sh c sudo h s p password u root bin sh c echo become success ixheeyzjjhmvtkxfyujaeglkekyibvqs bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users ansibleuser ansible tmp ansible tmp user rm rf users ansibleuser ansible tmp ansible tmp dev null establish ssh connection for user ansibleuser ssh exec ssh c vvv o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ansibleuser o connecttimeout o controlpath users ansible cp ansible ssh h p r tt bin sh c sudo h s p password u root bin sh c echo become success nckvjyuwnrxjapcanzwalcynwzwjeuft bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users ansibleuser ansible tmp ansible tmp user rm rf users ansibleuser ansible tmp ansible tmp dev null changed append true changed true comment group groups admin home users testuser invocation module args append true comment null createhome true expires null force false generate ssh key null group null groups admin home null login class null move home false name testuser non unique false password value specified in no log parameter remove false shell bin bash skeleton null ssh key bits ssh key comment ansible generated on ansibleusers imac esri com ssh key file null ssh key passphrase null ssh key type rsa state present system false uid null update password always module name user move home false name testuser password not logging password shell bin bash state present uid changed append true changed true comment group groups admin home users testuser invocation module args append true comment null createhome true expires null force false generate ssh key null group null groups admin home null login class null move home false name testuser non unique false password value specified in no log parameter remove false shell bin bash skeleton null ssh key bits ssh key comment ansible generated on nerumoancer vm esri com ssh key file null ssh key passphrase null ssh key type rsa state present system false uid null update password always module name user move home false name testuser password not logging password shell bin bash state present uid ,1
454164,13095836775.0,IssuesEvent,2020-08-03 14:44:30,FAIRsharing/fairsharing.github.io,https://api.github.com/repos/FAIRsharing/fairsharing.github.io,closed,improving login popup style,High priority enhancement,"- [x] changing the background
- [x] changing buttons color
- [x] center Login title",1.0,"improving login popup style - - [x] changing the background
- [x] changing buttons color
- [x] center Login title",0,improving login popup style changing the background changing buttons color center login title,0
650,4163899406.0,IssuesEvent,2016-06-18 12:11:58,Particular/NServiceBus.RabbitMQ,https://api.github.com/repos/Particular/NServiceBus.RabbitMQ,closed,Remove ConnectionManager,Size: S State: In Progress - Maintainer Prio Tag: Maintainer Prio Type: Refactoring,"Currently, we still have both `ConnectionManager` and `ConnectionFactory` classes. As mentioned in https://github.com/Particular/NServiceBus.RabbitMQ/issues/74#issuecomment-201040218, the scope of what `ConnectionManager` is doing has been decreased. At this point it is only being used to manage the publish connection and passing along the creation of the admin connection to the connection factory:
https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Connection/ConnectionManager.cs
Instead of keeping `ConnectionManager` around just to pass it into `ChannelProvider`, I think it makes sense kill `ConnectionManager` and just pass `ConnectionFactory` around directly.
This would mean that `ChannelProvider` would get a `ConnectionFactory` and create its own publish connection when it needs it, and then would be responsible for closing it when the endpoint is stopping.
This approach works because at that point we would only ever need to be ""creating"" connections and the two places that keep connections open (`ChannelProvider` and `MessagePump`) can just deal with the created connection directly.
The `MessagePump` was originally designed this way because it needed to create a specific `ConnectionFactory` to pass in a custom scheduler, but I've cleaned that up a bit, and it no longer requires the custom scheduler. 550664404c2563d01096cec25cca283e038eb95d
Because of this, we could decide on an alternate approach. The `MessagePump` could once again get a ""managed"" connection from `ConnectionManager`, and then it would use that connection to create a channel.
This would put the `ConnectionManager` back in charge of connection lifetimes.
Currently, as a side effect of the `MessagePump` being responsible for creating a connection, each `MessagePump` instance creates its own connection, so each queue being consumed (main, optional instance queue, satellites) has a separate connection in addition to a separate channel. I was able to take advantage of this and the purpose of each connection is set differently instead of having a generic ""consume"" purpose: https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Receiving/MessagePump.cs#L74
This is nice when viewing the connections from the management UI, and we'd lose it if we went back to the `ConnectionManager` being in charge.
Thoughts?
@Particular/rabbitmq-transport-maintainers ",True,"Remove ConnectionManager - Currently, we still have both `ConnectionManager` and `ConnectionFactory` classes. As mentioned in https://github.com/Particular/NServiceBus.RabbitMQ/issues/74#issuecomment-201040218, the scope of what `ConnectionManager` is doing has been decreased. At this point it is only being used to manage the publish connection and passing along the creation of the admin connection to the connection factory:
https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Connection/ConnectionManager.cs
Instead of keeping `ConnectionManager` around just to pass it into `ChannelProvider`, I think it makes sense kill `ConnectionManager` and just pass `ConnectionFactory` around directly.
This would mean that `ChannelProvider` would get a `ConnectionFactory` and create its own publish connection when it needs it, and then would be responsible for closing it when the endpoint is stopping.
This approach works because at that point we would only ever need to be ""creating"" connections and the two places that keep connections open (`ChannelProvider` and `MessagePump`) can just deal with the created connection directly.
The `MessagePump` was originally designed this way because it needed to create a specific `ConnectionFactory` to pass in a custom scheduler, but I've cleaned that up a bit, and it no longer requires the custom scheduler. 550664404c2563d01096cec25cca283e038eb95d
Because of this, we could decide on an alternate approach. The `MessagePump` could once again get a ""managed"" connection from `ConnectionManager`, and then it would use that connection to create a channel.
This would put the `ConnectionManager` back in charge of connection lifetimes.
Currently, as a side effect of the `MessagePump` being responsible for creating a connection, each `MessagePump` instance creates its own connection, so each queue being consumed (main, optional instance queue, satellites) has a separate connection in addition to a separate channel. I was able to take advantage of this and the purpose of each connection is set differently instead of having a generic ""consume"" purpose: https://github.com/Particular/NServiceBus.RabbitMQ/blob/develop/src/NServiceBus.RabbitMQ/Receiving/MessagePump.cs#L74
This is nice when viewing the connections from the management UI, and we'd lose it if we went back to the `ConnectionManager` being in charge.
Thoughts?
@Particular/rabbitmq-transport-maintainers ",1,remove connectionmanager currently we still have both connectionmanager and connectionfactory classes as mentioned in the scope of what connectionmanager is doing has been decreased at this point it is only being used to manage the publish connection and passing along the creation of the admin connection to the connection factory instead of keeping connectionmanager around just to pass it into channelprovider i think it makes sense kill connectionmanager and just pass connectionfactory around directly this would mean that channelprovider would get a connectionfactory and create its own publish connection when it needs it and then would be responsible for closing it when the endpoint is stopping this approach works because at that point we would only ever need to be creating connections and the two places that keep connections open channelprovider and messagepump can just deal with the created connection directly the messagepump was originally designed this way because it needed to create a specific connectionfactory to pass in a custom scheduler but i ve cleaned that up a bit and it no longer requires the custom scheduler because of this we could decide on an alternate approach the messagepump could once again get a managed connection from connectionmanager and then it would use that connection to create a channel this would put the connectionmanager back in charge of connection lifetimes currently as a side effect of the messagepump being responsible for creating a connection each messagepump instance creates its own connection so each queue being consumed main optional instance queue satellites has a separate connection in addition to a separate channel i was able to take advantage of this and the purpose of each connection is set differently instead of having a generic consume purpose this is nice when viewing the connections from the management ui and we d lose it if we went back to the connectionmanager being in charge thoughts particular rabbitmq transport maintainers ,1
125122,17835445853.0,IssuesEvent,2021-09-03 00:02:57,tim-wsdemo/NG2,https://api.github.com/repos/tim-wsdemo/NG2,opened,CVE-2020-7598 (Medium) detected in multiple libraries,security vulnerability,"## CVE-2020-7598 - Medium Severity Vulnerability
Vulnerable Libraries - minimist-0.0.8.tgz, minimist-0.0.10.tgz, minimist-1.2.0.tgz
",0,cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz minimist tgz parse argument options library home page a href dependency hierarchy forever tgz root library mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json dependency hierarchy forever tgz root library optimist tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href dependency hierarchy forever tgz root library forever monitor tgz chokidar tgz fsevents tgz node pre gyp tgz rc tgz x minimist tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree forever mkdirp minimist isminimumfixversionavailable true minimumfixversion minimist packagetype javascript node js packagename minimist packageversion packagefilepaths istransitivedependency true dependencytree forever optimist minimist isminimumfixversionavailable true minimumfixversion minimist packagetype javascript node js packagename minimist packageversion packagefilepaths istransitivedependency true dependencytree forever forever monitor chokidar fsevents node pre gyp rc minimist isminimumfixversionavailable true minimumfixversion minimist basebranches vulnerabilityidentifier cve vulnerabilitydetails minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload vulnerabilityurl ,0
3060,11456912448.0,IssuesEvent,2020-02-06 22:17:56,18F/cg-product,https://api.github.com/repos/18F/cg-product,closed,Debug and fix Logstash pipeline intermittent/changing failures,contractor-2-troubleshooting contractor-3-maintainability operations stale,"Every time the logsearch/logstash deployment pipeline runs, there's a roughly 20-30% chance that it will fail with a non-reproducible error message. This error changes from run to run.
- Typically, one or more services fails to either start or stop. After `ssh`ing in, there's no error message. This may be due to some timeout setting somewhere but the root cause is really unknown at this point.
- Staging seems to perform worse than production or development
- When the deployment gets stalled due to errors, the logsearch deployment blocks the entire deployment process until it's restarted
- Current fix: Rerun the job, maybe it works?
- Last successful staging deployment was August 5
- First failure on staging was August 28
- Last successful production deployment was August 2
- Job reruns when it gets new resources - we have much newer resources in development than staging or production
- Problem seems to be with the deployments rather than the actual software
@bengerman13 has been trying to fast-forward to get the deployment update to date - we consume `logsearch-for-cloud-foundry` and `logsearch-boshrelease`, combine them and create a new artifact, and then deploy that artifact using a dynamically generated bosh manifest.
## Next steps
- Determine the proximate cause
- We need better debug information on what's failing and why
- Pairing/dogpiling to get staging to run successfully more than once in a row
- We diverged from the two upstreams roughly 2 years ago, so there's foundational work to be done to make sure we don't break everything by upgrading (e.g. APIs)
- This is in progress, check with @bengerman13 before starting work on that step
## Acceptance Criteria
- [ ] Upon release of a new stemcell, the deployment pipeline rolls the update through with no manual intervention",True,"Debug and fix Logstash pipeline intermittent/changing failures - Every time the logsearch/logstash deployment pipeline runs, there's a roughly 20-30% chance that it will fail with a non-reproducible error message. This error changes from run to run.
- Typically, one or more services fails to either start or stop. After `ssh`ing in, there's no error message. This may be due to some timeout setting somewhere but the root cause is really unknown at this point.
- Staging seems to perform worse than production or development
- When the deployment gets stalled due to errors, the logsearch deployment blocks the entire deployment process until it's restarted
- Current fix: Rerun the job, maybe it works?
- Last successful staging deployment was August 5
- First failure on staging was August 28
- Last successful production deployment was August 2
- Job reruns when it gets new resources - we have much newer resources in development than staging or production
- Problem seems to be with the deployments rather than the actual software
@bengerman13 has been trying to fast-forward to get the deployment update to date - we consume `logsearch-for-cloud-foundry` and `logsearch-boshrelease`, combine them and create a new artifact, and then deploy that artifact using a dynamically generated bosh manifest.
## Next steps
- Determine the proximate cause
- We need better debug information on what's failing and why
- Pairing/dogpiling to get staging to run successfully more than once in a row
- We diverged from the two upstreams roughly 2 years ago, so there's foundational work to be done to make sure we don't break everything by upgrading (e.g. APIs)
- This is in progress, check with @bengerman13 before starting work on that step
## Acceptance Criteria
- [ ] Upon release of a new stemcell, the deployment pipeline rolls the update through with no manual intervention",1,debug and fix logstash pipeline intermittent changing failures every time the logsearch logstash deployment pipeline runs there s a roughly chance that it will fail with a non reproducible error message this error changes from run to run typically one or more services fails to either start or stop after ssh ing in there s no error message this may be due to some timeout setting somewhere but the root cause is really unknown at this point staging seems to perform worse than production or development when the deployment gets stalled due to errors the logsearch deployment blocks the entire deployment process until it s restarted current fix rerun the job maybe it works last successful staging deployment was august first failure on staging was august last successful production deployment was august job reruns when it gets new resources we have much newer resources in development than staging or production problem seems to be with the deployments rather than the actual software has been trying to fast forward to get the deployment update to date we consume logsearch for cloud foundry and logsearch boshrelease combine them and create a new artifact and then deploy that artifact using a dynamically generated bosh manifest next steps determine the proximate cause we need better debug information on what s failing and why pairing dogpiling to get staging to run successfully more than once in a row we diverged from the two upstreams roughly years ago so there s foundational work to be done to make sure we don t break everything by upgrading e g apis this is in progress check with before starting work on that step acceptance criteria upon release of a new stemcell the deployment pipeline rolls the update through with no manual intervention,1
152988,24049768514.0,IssuesEvent,2022-09-16 11:43:04,aristanetworks/ansible-avd,https://api.github.com/repos/aristanetworks/ansible-avd,reopened,SVI in default routing table,type: enhancement state: accepted role: eos_designs,"### Enhancement summary
In current implementation SVIs have to be configured in a specific VRF.
Is it possible to make AVD able to add SVI interface in default routing table?
### Which component of AVD is impacted
eos_designs
### Use case example
```
interface Vlan110
description Tenant_A_OP_Zone
ip address 172.16.100.2/24
ip virtual-router address 172.16.100.1
```
### Describe the solution you would like
In inventory/group_vars/DC1_TENANTS_NETWORKS.yaml
```yaml
tenants:
# Tenant A Specific Information - VRFs / VLANs
TenantA:
mac_vrf_vni_base: 10000
vrfs:
Tenant_A_OP_Zone: <----- HERE IT SHOULD BE POSSIBLE TO USE ""default"" FOR DEFAULT ROUTING INSTANCE
vrf_vni: 123
svis:
# Service One
100:
name: Service_100
tags: [pub, servers]
enabled: true
ip_virtual_router_address: 172.16.100.1
nodes:
DC1_LEAF1:
ip_address: 172.16.100.2/24
DC1_LEAF2:
ip_address: 172.16.100.3/24
DC1_LEAF3:
ip_address: 172.16.100.4/24
DC1_LEAF4:
ip_address: 172.16.100.5/24
```
### Describe alternatives you have considered
_No response_
### Additional context
_No response_
### Contributing Guide
- [X] I agree to follow this project's Code of Conduct",1.0,"SVI in default routing table - ### Enhancement summary
In current implementation SVIs have to be configured in a specific VRF.
Is it possible to make AVD able to add SVI interface in default routing table?
### Which component of AVD is impacted
eos_designs
### Use case example
```
interface Vlan110
description Tenant_A_OP_Zone
ip address 172.16.100.2/24
ip virtual-router address 172.16.100.1
```
### Describe the solution you would like
In inventory/group_vars/DC1_TENANTS_NETWORKS.yaml
```yaml
tenants:
# Tenant A Specific Information - VRFs / VLANs
TenantA:
mac_vrf_vni_base: 10000
vrfs:
Tenant_A_OP_Zone: <----- HERE IT SHOULD BE POSSIBLE TO USE ""default"" FOR DEFAULT ROUTING INSTANCE
vrf_vni: 123
svis:
# Service One
100:
name: Service_100
tags: [pub, servers]
enabled: true
ip_virtual_router_address: 172.16.100.1
nodes:
DC1_LEAF1:
ip_address: 172.16.100.2/24
DC1_LEAF2:
ip_address: 172.16.100.3/24
DC1_LEAF3:
ip_address: 172.16.100.4/24
DC1_LEAF4:
ip_address: 172.16.100.5/24
```
### Describe alternatives you have considered
_No response_
### Additional context
_No response_
### Contributing Guide
- [X] I agree to follow this project's Code of Conduct",0,svi in default routing table enhancement summary in current implementation svis have to be configured in a specific vrf is it possible to make avd able to add svi interface in default routing table which component of avd is impacted eos designs use case example interface description tenant a op zone ip address ip virtual router address describe the solution you would like in inventory group vars tenants networks yaml yaml tenants tenant a specific information vrfs vlans tenanta mac vrf vni base vrfs tenant a op zone here it should be possible to use default for default routing instance vrf vni svis service one name service tags enabled true ip virtual router address nodes ip address ip address ip address ip address describe alternatives you have considered no response additional context no response contributing guide i agree to follow this project s code of conduct,0
40,2587882312.0,IssuesEvent,2015-02-17 21:16:14,spyder-ide/spyder,https://api.github.com/repos/spyder-ide/spyder,closed,Setup issue autolinking from Bitbucket to Google Code,1 star bug done Easy imported Maintainability,"_From [techtonik@gmail.com](https://code.google.com/u/techtonik@gmail.com/) on 2014-08-25T09:16:08Z_
What steps will reproduce the problem?
Test like "" issue `#1313` "" on Bitbucket is not linked to Google Code tracker
Carlos, you seem to be the only active admin, so can you add this? The process is described here - https://bitbucket.org/techtonik/scons/issue/3/setup-bitbucket-autolinking
_Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1944_",True,"Setup issue autolinking from Bitbucket to Google Code - _From [techtonik@gmail.com](https://code.google.com/u/techtonik@gmail.com/) on 2014-08-25T09:16:08Z_
What steps will reproduce the problem?
Test like "" issue `#1313` "" on Bitbucket is not linked to Google Code tracker
Carlos, you seem to be the only active admin, so can you add this? The process is described here - https://bitbucket.org/techtonik/scons/issue/3/setup-bitbucket-autolinking
_Original issue: http://code.google.com/p/spyderlib/issues/detail?id=1944_",1,setup issue autolinking from bitbucket to google code from on what steps will reproduce the problem test like issue on bitbucket is not linked to google code tracker carlos you seem to be the only active admin so can you add this the process is described here original issue ,1
364164,25482895362.0,IssuesEvent,2022-11-26 01:53:01,Jovenasso/SistemaJudocas2022,https://api.github.com/repos/Jovenasso/SistemaJudocas2022,closed,Documentação - Especificação de requisitos - 4.5 - Requisitos detalhados de casos de uso - não coerente,documentation Ambiguidade,"Tabela de descrição do item 4.5 - Requisitos detalhados de caso de uso incompleto

",1.0,"Documentação - Especificação de requisitos - 4.5 - Requisitos detalhados de casos de uso - não coerente - Tabela de descrição do item 4.5 - Requisitos detalhados de caso de uso incompleto

",0,documentação especificação de requisitos requisitos detalhados de casos de uso não coerente tabela de descrição do item requisitos detalhados de caso de uso incompleto ,0
3725,15434978810.0,IssuesEvent,2021-03-07 06:29:37,diofant/diofant,https://api.github.com/repos/diofant/diofant,closed,Use prod() and isqrt() from stdlib (since 3.8),core maintainability,"See https://docs.python.org/3.8/library/math.html#math.prod
and https://docs.python.org/3.8/library/math.html#math.isqrt.
We have diofant.core.mul.prod and diofant.core.power.isqrt.",True,"Use prod() and isqrt() from stdlib (since 3.8) - See https://docs.python.org/3.8/library/math.html#math.prod
and https://docs.python.org/3.8/library/math.html#math.isqrt.
We have diofant.core.mul.prod and diofant.core.power.isqrt.",1,use prod and isqrt from stdlib since see and we have diofant core mul prod and diofant core power isqrt ,1
86711,10515389477.0,IssuesEvent,2019-09-28 09:24:58,backdrop/backdrop-issues,https://api.github.com/repos/backdrop/backdrop-issues,reopened,[DX] How to specify alternative configuration settings for elements using `system_settings_form()`?,type - documentation type - question,"In https://api.backdropcms.org/api/backdrop/1/search/system_settings_form it mentions how you can specify a different config file for some of the form elements/values via `'#config'`. What do you do though if you want to save to the same top-level `'#config'`, but to a different setting?
So say that your form element is `$form['my']['cool']['element'] = array( ... );` but you want the setting to be saved as `my_cool_element` in the .json? ...is a custom submit handler the only option in that case?
So basically, I understand that you can do this:
```php
$primary_config = config('mymodule.settings');
$secondary_config = config('mymodule.moar.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config' => 'mymodule.moar.settings',
...
);
```
...and that this saves `first_setting` in `mymodule.settings.json`, while `second_setting` is saved in `mymodule.moar.settings.json`.
What I need to do though is something like this:
```php
$config = config('mymodule.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config_setting' => 'call_this_something_else',
...
);
```
...so both settings will be saved in the same `mymodule.settings.json` file. The first one as `""first_setting""`, while the second one as `""call_this_something_else""`. So instead of this:
```json
{
""_config_name"": ""mymodule.settings"",
""_module"": ""mymodule"",
""first_setting"": 123,
""second_setting"": ""abc"",
}
```
...I would instead want to have this:
```json
{
""_config_name"": ""mymodule.settings"",
""_module"": ""mymodule"",
""first_setting"": 123,
""call_this_something_else"": ""abc"",
}
```",1.0,"[DX] How to specify alternative configuration settings for elements using `system_settings_form()`? - In https://api.backdropcms.org/api/backdrop/1/search/system_settings_form it mentions how you can specify a different config file for some of the form elements/values via `'#config'`. What do you do though if you want to save to the same top-level `'#config'`, but to a different setting?
So say that your form element is `$form['my']['cool']['element'] = array( ... );` but you want the setting to be saved as `my_cool_element` in the .json? ...is a custom submit handler the only option in that case?
So basically, I understand that you can do this:
```php
$primary_config = config('mymodule.settings');
$secondary_config = config('mymodule.moar.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config' => 'mymodule.moar.settings',
...
);
```
...and that this saves `first_setting` in `mymodule.settings.json`, while `second_setting` is saved in `mymodule.moar.settings.json`.
What I need to do though is something like this:
```php
$config = config('mymodule.settings');
$form = array('#config' => 'mymodule.settings');
$form['first_setting'] = array( ... );
$form['second_setting'] = array(
...
'#config_setting' => 'call_this_something_else',
...
);
```
...so both settings will be saved in the same `mymodule.settings.json` file. The first one as `""first_setting""`, while the second one as `""call_this_something_else""`. So instead of this:
```json
{
""_config_name"": ""mymodule.settings"",
""_module"": ""mymodule"",
""first_setting"": 123,
""second_setting"": ""abc"",
}
```
...I would instead want to have this:
```json
{
""_config_name"": ""mymodule.settings"",
""_module"": ""mymodule"",
""first_setting"": 123,
""call_this_something_else"": ""abc"",
}
```",0, how to specify alternative configuration settings for elements using system settings form in it mentions how you can specify a different config file for some of the form elements values via config what do you do though if you want to save to the same top level config but to a different setting so say that your form element is form array but you want the setting to be saved as my cool element in the json is a custom submit handler the only option in that case so basically i understand that you can do this php primary config config mymodule settings secondary config config mymodule moar settings form array config mymodule settings form array form array config mymodule moar settings and that this saves first setting in mymodule settings json while second setting is saved in mymodule moar settings json what i need to do though is something like this php config config mymodule settings form array config mymodule settings form array form array config setting call this something else so both settings will be saved in the same mymodule settings json file the first one as first setting while the second one as call this something else so instead of this json config name mymodule settings module mymodule first setting second setting abc i would instead want to have this json config name mymodule settings module mymodule first setting call this something else abc ,0
80439,3561679777.0,IssuesEvent,2016-01-23 23:33:58,Benrnz/BudgetAnalyser,https://api.github.com/repos/Benrnz/BudgetAnalyser,closed,Ledger auto matching possibly not working for savings ledger transactions,bug Priority-high,"Not sure if this was a data problem or a code bug. First noticed in Jan-16.
All Savings ledgers had a budget amount from last month showing up, that should not have been there (they should have been auto matched to last months ledger transactions).",1.0,"Ledger auto matching possibly not working for savings ledger transactions - Not sure if this was a data problem or a code bug. First noticed in Jan-16.
All Savings ledgers had a budget amount from last month showing up, that should not have been there (they should have been auto matched to last months ledger transactions).",0,ledger auto matching possibly not working for savings ledger transactions not sure if this was a data problem or a code bug first noticed in jan all savings ledgers had a budget amount from last month showing up that should not have been there they should have been auto matched to last months ledger transactions ,0
89773,10616618149.0,IssuesEvent,2019-10-12 13:09:49,neutralinojs/neutralinojs,https://api.github.com/repos/neutralinojs/neutralinojs,opened,Add contributors list to README,documentation,"Use a quick tool like https://dev.to/lacolaco/introducing-contributors-img-keep-contributors-in-readme-md-gci
We need to easily update when there are new contributors ",1.0,"Add contributors list to README - Use a quick tool like https://dev.to/lacolaco/introducing-contributors-img-keep-contributors-in-readme-md-gci
We need to easily update when there are new contributors ",0,add contributors list to readme use a quick tool like we need to easily update when there are new contributors ,0
140164,11303466227.0,IssuesEvent,2020-01-17 20:11:25,ni/nimi-python,https://api.github.com/repos/ni/nimi-python,closed,Eliminate the need to run GNU make while executing system tests,priority-medium test,"nimi-bot builds the module installers (wheels) and installs them before it runs system tests. In order to do so, GNU make is involved which means that nimi-bot needs to have mingw installed. We are no longer recommending / supporting [MinGW](http://www.mingw.org). Customers should use WSL.
So one alternative is WSL. But WSL image with 32-bit Python is not easy to find. Also WSL solution would involve 2 different python interpreters (one of Windows and one for WSL).
We think it's a better solution to completely eliminate the need for running GNU make. This will simplify the setup of nimi-bot altogether.
We could have a way to build the installers outside of GNU make, or we could simply install from `generated/` using setup.py with a small loss of test coverage (we would not be running the wheels anymore). Either way it's better than requiring MinGW.",1.0,"Eliminate the need to run GNU make while executing system tests - nimi-bot builds the module installers (wheels) and installs them before it runs system tests. In order to do so, GNU make is involved which means that nimi-bot needs to have mingw installed. We are no longer recommending / supporting [MinGW](http://www.mingw.org). Customers should use WSL.
So one alternative is WSL. But WSL image with 32-bit Python is not easy to find. Also WSL solution would involve 2 different python interpreters (one of Windows and one for WSL).
We think it's a better solution to completely eliminate the need for running GNU make. This will simplify the setup of nimi-bot altogether.
We could have a way to build the installers outside of GNU make, or we could simply install from `generated/` using setup.py with a small loss of test coverage (we would not be running the wheels anymore). Either way it's better than requiring MinGW.",0,eliminate the need to run gnu make while executing system tests nimi bot builds the module installers wheels and installs them before it runs system tests in order to do so gnu make is involved which means that nimi bot needs to have mingw installed we are no longer recommending supporting customers should use wsl so one alternative is wsl but wsl image with bit python is not easy to find also wsl solution would involve different python interpreters one of windows and one for wsl we think it s a better solution to completely eliminate the need for running gnu make this will simplify the setup of nimi bot altogether we could have a way to build the installers outside of gnu make or we could simply install from generated using setup py with a small loss of test coverage we would not be running the wheels anymore either way it s better than requiring mingw ,0
5203,26450064113.0,IssuesEvent,2023-01-16 10:30:04,Pandora-IsoMemo/plotr,https://api.github.com/repos/Pandora-IsoMemo/plotr,closed,Create wiki with installation instruction,Support: IT maintainance,"Move and update installation instructions from
- https://github.com/Pandora-IsoMemo/drat/issues/7
to a new created wiki of this app",True,"Create wiki with installation instruction - Move and update installation instructions from
- https://github.com/Pandora-IsoMemo/drat/issues/7
to a new created wiki of this app",1,create wiki with installation instruction move and update installation instructions from to a new created wiki of this app,1
11,2515070006.0,IssuesEvent,2015-01-15 16:16:33,simplesamlphp/simplesamlphp,https://api.github.com/repos/simplesamlphp/simplesamlphp,opened,Cleanup the SimpleSAML_Utilities class,enhancement maintainability started,"The following must be done:
* Remove the `validateCA()` method.
* Remove the `generateRandomBytesMTrand()` method.
* Remove the `validateXML()` and `validateXMLDocument()` methods. Use a standalone composer module instead.
* Refactor the rest of it to group methods by their functionality in dedicated classes under `lib/SimpleSAML/Utils/`.",True,"Cleanup the SimpleSAML_Utilities class - The following must be done:
* Remove the `validateCA()` method.
* Remove the `generateRandomBytesMTrand()` method.
* Remove the `validateXML()` and `validateXMLDocument()` methods. Use a standalone composer module instead.
* Refactor the rest of it to group methods by their functionality in dedicated classes under `lib/SimpleSAML/Utils/`.",1,cleanup the simplesaml utilities class the following must be done remove the validateca method remove the generaterandombytesmtrand method remove the validatexml and validatexmldocument methods use a standalone composer module instead refactor the rest of it to group methods by their functionality in dedicated classes under lib simplesaml utils ,1
3732,15588496991.0,IssuesEvent,2021-03-18 06:30:42,yast/yast-auth-client,https://api.github.com/repos/yast/yast-auth-client,closed,Change LDAP auth client setup from binddn and bindpwd to rootbinddn and 600 /etc/ldap.secret,other-maintainer,"Hello Team,
I would like to suggest to make this change to improve the security for LDAP Client auth setup, I had a look at the code and does not seem to be so hard to change that, this is related to #70 as well.
The suggestion it is to change the binddn from /etc/ldap.conf to rootbinddn, remove the bindpwd option from ldap.conf and create a file with 0600 as /etc/ldap.secret with the password in clear text.
I would like to start to contribute with some OpenSUSE project and I can give a try if you want.",True,"Change LDAP auth client setup from binddn and bindpwd to rootbinddn and 600 /etc/ldap.secret - Hello Team,
I would like to suggest to make this change to improve the security for LDAP Client auth setup, I had a look at the code and does not seem to be so hard to change that, this is related to #70 as well.
The suggestion it is to change the binddn from /etc/ldap.conf to rootbinddn, remove the bindpwd option from ldap.conf and create a file with 0600 as /etc/ldap.secret with the password in clear text.
I would like to start to contribute with some OpenSUSE project and I can give a try if you want.",1,change ldap auth client setup from binddn and bindpwd to rootbinddn and etc ldap secret hello team i would like to suggest to make this change to improve the security for ldap client auth setup i had a look at the code and does not seem to be so hard to change that this is related to as well the suggestion it is to change the binddn from etc ldap conf to rootbinddn remove the bindpwd option from ldap conf and create a file with as etc ldap secret with the password in clear text i would like to start to contribute with some opensuse project and i can give a try if you want ,1
413,3479973185.0,IssuesEvent,2015-12-29 01:17:15,caskroom/homebrew-cask,https://api.github.com/repos/caskroom/homebrew-cask,closed,Enable Travis caching,awaiting maintainer feedback travis,"As seen on https://docs.travis-ci.com/user/caching#Caching-directories-(Bundler%2C-dependencies)
Looks like the new container-based infrastructure has to be used, which means no sudo.
Not sure if it's feasible, but if so, would speed up Travis checks considerably. ",True,"Enable Travis caching - As seen on https://docs.travis-ci.com/user/caching#Caching-directories-(Bundler%2C-dependencies)
Looks like the new container-based infrastructure has to be used, which means no sudo.
Not sure if it's feasible, but if so, would speed up Travis checks considerably. ",1,enable travis caching as seen on looks like the new container based infrastructure has to be used which means no sudo not sure if it s feasible but if so would speed up travis checks considerably ,1
1367,5895610382.0,IssuesEvent,2017-05-18 07:30:02,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"Ansible calls ""file"" module instead of ""copy"" module if hashes match, gets confused if dest is a symlink",affects_1.9 bug_report waiting_on_maintainer,"##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
copy module
##### ANSIBLE VERSION
ansible 1.9.1
##### SUMMARY
If you create a task item with copy where follow=true and dest is currently a symlink, Ansible gets confused if the target for follow and the name of src aren't the same.
Sample playbook:
```
- hosts: localhost
tasks:
- copy: src=source dest=/tmp/dest follow=true
```
Setup commands:
```
ln -nsf realdest /tmp/dest
touch /tmp/realdest
echo asdf > source
```
First run:
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
```
Second run (and onwards):
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
failed: [localhost] => {""checksum"": ""7d97e98f8af710c7e7fe703abc8f639e0ee507c4"", ""failed"": true, ""gid"": 0, ""group"": ""root"", ""mode"": ""0777"", ""owner"": ""root"", ""path"": ""/tmp/dest"", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 8, ""src"": ""source"", ""state"": ""link"", ""uid"": 0}
msg: src file does not exist, use ""force=yes"" if you really want to create the link: /tmp/source
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/test.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
```
",True,"Ansible calls ""file"" module instead of ""copy"" module if hashes match, gets confused if dest is a symlink - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
copy module
##### ANSIBLE VERSION
ansible 1.9.1
##### SUMMARY
If you create a task item with copy where follow=true and dest is currently a symlink, Ansible gets confused if the target for follow and the name of src aren't the same.
Sample playbook:
```
- hosts: localhost
tasks:
- copy: src=source dest=/tmp/dest follow=true
```
Setup commands:
```
ln -nsf realdest /tmp/dest
touch /tmp/realdest
echo asdf > source
```
First run:
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
changed: [localhost]
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
```
Second run (and onwards):
```
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [copy src=source dest=/tmp/dest follow=true] ****************************
failed: [localhost] => {""checksum"": ""7d97e98f8af710c7e7fe703abc8f639e0ee507c4"", ""failed"": true, ""gid"": 0, ""group"": ""root"", ""mode"": ""0777"", ""owner"": ""root"", ""path"": ""/tmp/dest"", ""secontext"": ""unconfined_u:object_r:user_tmp_t:s0"", ""size"": 8, ""src"": ""source"", ""state"": ""link"", ""uid"": 0}
msg: src file does not exist, use ""force=yes"" if you really want to create the link: /tmp/source
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/test.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
```
",1,ansible calls file module instead of copy module if hashes match gets confused if dest is a symlink issue type bug report component name copy module ansible version ansible summary if you create a task item with copy where follow true and dest is currently a symlink ansible gets confused if the target for follow and the name of src aren t the same sample playbook hosts localhost tasks copy src source dest tmp dest follow true setup commands ln nsf realdest tmp dest touch tmp realdest echo asdf source first run play gathering facts ok task changed play recap localhost ok changed unreachable failed second run and onwards play gathering facts ok task failed checksum failed true gid group root mode owner root path tmp dest secontext unconfined u object r user tmp t size src source state link uid msg src file does not exist use force yes if you really want to create the link tmp source fatal all hosts have already failed aborting play recap to retry use limit root test retry localhost ok changed unreachable failed ,1
2695,9413636759.0,IssuesEvent,2019-04-10 08:18:24,IPVS-AS/MBP,https://api.github.com/repos/IPVS-AS/MBP,opened,Clean up plugin directory,maintainance,"There is a ""plugin"" directory that contains lots of javascript plugins which are not used by the project. I don't see a reason for keeping all these plugins in the repo since they are not required, so this directory should be cleaned up.",True,"Clean up plugin directory - There is a ""plugin"" directory that contains lots of javascript plugins which are not used by the project. I don't see a reason for keeping all these plugins in the repo since they are not required, so this directory should be cleaned up.",1,clean up plugin directory there is a plugin directory that contains lots of javascript plugins which are not used by the project i don t see a reason for keeping all these plugins in the repo since they are not required so this directory should be cleaned up ,1
32075,12061780616.0,IssuesEvent,2020-04-16 00:55:49,dotnet/runtime,https://api.github.com/repos/dotnet/runtime,closed,Use correct OpenSSL libraries for FreeBSD,area-System.Security os-freebsd untriaged,"src/libraries/Native/Unix/System.Security.Cryptography.Native/opensslshim.c
`OpenLibrary()` doesn't currently find the correct libraries on my FreeBSD 11.3 build box.
The version of OpenSSL included in the base 11.3 install is 1.0.2
```
[jason@freebsd11 ~/src/runtime]$ /usr/bin/openssl version
OpenSSL 1.0.2s-freebsd 28 May 2019
[jason@freebsd11 ~/src/runtime]$ ldd /usr/bin/openssl
/usr/bin/openssl:
libssl.so.8 => /usr/lib/libssl.so.8 (0x8008a4000)
libcrypto.so.8 => /lib/libcrypto.so.8 (0x800c00000)
libc.so.7 => /lib/libc.so.7 (0x801076000)
```
OpenSSL 1.1.1 can be installed with the FreeBSD package manager
```
[jason@freebsd11 ~/src/runtime]$ /usr/local/bin/openssl version
OpenSSL 1.1.1f 31 Mar 2020
[jason@freebsd11 ~/src/runtime]$ ldd /usr/local/bin/openssl
/usr/local/bin/openssl:
libssl.so.11 => /usr/local/lib/libssl.so.11 (0x8008b7000)
libcrypto.so.11 => /usr/local/lib/libcrypto.so.11 (0x800c00000)
libthr.so.3 => /lib/libthr.so.3 (0x8010ef000)
libc.so.7 => /lib/libc.so.7 (0x801317000)
```
`OpenLibrary()` needs to look for `libssl.so.11` and `libssl.so.8` to support these versions.
@wfurt ",True,"Use correct OpenSSL libraries for FreeBSD - src/libraries/Native/Unix/System.Security.Cryptography.Native/opensslshim.c
`OpenLibrary()` doesn't currently find the correct libraries on my FreeBSD 11.3 build box.
The version of OpenSSL included in the base 11.3 install is 1.0.2
```
[jason@freebsd11 ~/src/runtime]$ /usr/bin/openssl version
OpenSSL 1.0.2s-freebsd 28 May 2019
[jason@freebsd11 ~/src/runtime]$ ldd /usr/bin/openssl
/usr/bin/openssl:
libssl.so.8 => /usr/lib/libssl.so.8 (0x8008a4000)
libcrypto.so.8 => /lib/libcrypto.so.8 (0x800c00000)
libc.so.7 => /lib/libc.so.7 (0x801076000)
```
OpenSSL 1.1.1 can be installed with the FreeBSD package manager
```
[jason@freebsd11 ~/src/runtime]$ /usr/local/bin/openssl version
OpenSSL 1.1.1f 31 Mar 2020
[jason@freebsd11 ~/src/runtime]$ ldd /usr/local/bin/openssl
/usr/local/bin/openssl:
libssl.so.11 => /usr/local/lib/libssl.so.11 (0x8008b7000)
libcrypto.so.11 => /usr/local/lib/libcrypto.so.11 (0x800c00000)
libthr.so.3 => /lib/libthr.so.3 (0x8010ef000)
libc.so.7 => /lib/libc.so.7 (0x801317000)
```
`OpenLibrary()` needs to look for `libssl.so.11` and `libssl.so.8` to support these versions.
@wfurt ",0,use correct openssl libraries for freebsd src libraries native unix system security cryptography native opensslshim c openlibrary doesn t currently find the correct libraries on my freebsd build box the version of openssl included in the base install is usr bin openssl version openssl freebsd may ldd usr bin openssl usr bin openssl libssl so usr lib libssl so libcrypto so lib libcrypto so libc so lib libc so openssl can be installed with the freebsd package manager usr local bin openssl version openssl mar ldd usr local bin openssl usr local bin openssl libssl so usr local lib libssl so libcrypto so usr local lib libcrypto so libthr so lib libthr so libc so lib libc so openlibrary needs to look for libssl so and libssl so to support these versions wfurt ,0
4985,25593822863.0,IssuesEvent,2022-12-01 14:50:55,precice/precice,https://api.github.com/repos/precice/precice,closed,Change default symbol visibility to hidden,enhancement maintainability,"To reduce the symbol visibility to the actual API, we should change the default visibility to `hidden`.
This has some upsides:
1. We adapt the default on Windows platforms
2. We reduce binary size
3. We reduce shared object load time
Downsides are:
1. We have to explicitly mark API functions (CMake can generate export headers)
2. We cannot use the shared library for the unit tests anymore.
We need to add an object library target that compiles the sources.
References:
* [GCC wiki on visibility](https://gcc.gnu.org/wiki/Visibility)
* [CMake generate export header](https://cmake.org/cmake/help/v3.10/module/GenerateExportHeader.html)
* [CMake visibilty preset](https://cmake.org/cmake/help/v3.10/prop_tgt/LANG_VISIBILITY_PRESET.html)
* [CMake inline visibility preset](https://cmake.org/cmake/help/v3.10/prop_tgt/VISIBILITY_INLINES_HIDDEN.html)
* [CMake add_library(OBJECT)](https://cmake.org/cmake/help/v3.10/command/add_library.html?highlight=add_library#object-libraries)
Related to #200
",True,"Change default symbol visibility to hidden - To reduce the symbol visibility to the actual API, we should change the default visibility to `hidden`.
This has some upsides:
1. We adapt the default on Windows platforms
2. We reduce binary size
3. We reduce shared object load time
Downsides are:
1. We have to explicitly mark API functions (CMake can generate export headers)
2. We cannot use the shared library for the unit tests anymore.
We need to add an object library target that compiles the sources.
References:
* [GCC wiki on visibility](https://gcc.gnu.org/wiki/Visibility)
* [CMake generate export header](https://cmake.org/cmake/help/v3.10/module/GenerateExportHeader.html)
* [CMake visibilty preset](https://cmake.org/cmake/help/v3.10/prop_tgt/LANG_VISIBILITY_PRESET.html)
* [CMake inline visibility preset](https://cmake.org/cmake/help/v3.10/prop_tgt/VISIBILITY_INLINES_HIDDEN.html)
* [CMake add_library(OBJECT)](https://cmake.org/cmake/help/v3.10/command/add_library.html?highlight=add_library#object-libraries)
Related to #200
",1,change default symbol visibility to hidden to reduce the symbol visibility to the actual api we should change the default visibility to hidden this has some upsides we adapt the default on windows platforms we reduce binary size we reduce shared object load time downsides are we have to explicitly mark api functions cmake can generate export headers we cannot use the shared library for the unit tests anymore we need to add an object library target that compiles the sources references related to ,1
1778,6575810265.0,IssuesEvent,2017-09-11 17:24:54,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,"ansible 2.2 setup module ""unsupported parameter for module: gather_timeout""",affects_2.2 bug_report waiting_on_maintainer,"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
setup
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (detached HEAD 44faad0593) last updated 2016/10/04 10:41:35 (GMT +200)
lib/ansible/modules/core: (detached HEAD 17ee1cfaf9) last updated 2016/10/04 10:41:04 (GMT +200)
lib/ansible/modules/extras: (detached HEAD d312f34d9b) last updated 2016/10/04 10:41:05 (GMT +200)
config file = {{custom_path}}/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
[defaults]
inventory = $HOME/inventory/full.inventory
remote_tmp = $HOME/.ansible/tmp
remote_user = root
##### OS / ENVIRONMENT
RHEL 7/RHEL7
##### SUMMARY
When running setup, the setup module complains about not supporting the new gather_timeout parameter:
##### STEPS TO REPRODUCE
Git clone ansible version 2.2.0, and checkout tag v2.2.0.0-0.1.rc1.
Source the hacking/source/env-setup script to use that version.
Run any playbook with minimal config as specified above, and with setup (gather_facts) activated.
```
- hosts: ""localhost""
tasks:
- name: Displaying all groups
debug: var=groups.keys() verbosity=2
```
##### EXPECTED RESULTS
That the new gather_timeout would be taken into account by setup module without error, or ignored if not handled by setup module.
##### ACTUAL RESULTS
```
fatal: [myhost.mydomain]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_args"": {
""gather_subset"": ""all"",
""gather_timeout"": 10
},
""module_name"": ""setup""
},
""msg"": ""unsupported parameter for module: gather_timeout""
}
```
",True,"ansible 2.2 setup module ""unsupported parameter for module: gather_timeout"" -
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
setup
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (detached HEAD 44faad0593) last updated 2016/10/04 10:41:35 (GMT +200)
lib/ansible/modules/core: (detached HEAD 17ee1cfaf9) last updated 2016/10/04 10:41:04 (GMT +200)
lib/ansible/modules/extras: (detached HEAD d312f34d9b) last updated 2016/10/04 10:41:05 (GMT +200)
config file = {{custom_path}}/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
[defaults]
inventory = $HOME/inventory/full.inventory
remote_tmp = $HOME/.ansible/tmp
remote_user = root
##### OS / ENVIRONMENT
RHEL 7/RHEL7
##### SUMMARY
When running setup, the setup module complains about not supporting the new gather_timeout parameter:
##### STEPS TO REPRODUCE
Git clone ansible version 2.2.0, and checkout tag v2.2.0.0-0.1.rc1.
Source the hacking/source/env-setup script to use that version.
Run any playbook with minimal config as specified above, and with setup (gather_facts) activated.
```
- hosts: ""localhost""
tasks:
- name: Displaying all groups
debug: var=groups.keys() verbosity=2
```
##### EXPECTED RESULTS
That the new gather_timeout would be taken into account by setup module without error, or ignored if not handled by setup module.
##### ACTUAL RESULTS
```
fatal: [myhost.mydomain]: FAILED! => {
""changed"": false,
""failed"": true,
""invocation"": {
""module_args"": {
""gather_subset"": ""all"",
""gather_timeout"": 10
},
""module_name"": ""setup""
},
""msg"": ""unsupported parameter for module: gather_timeout""
}
```
",1,ansible setup module unsupported parameter for module gather timeout issue type bug report component name setup ansible version ansible detached head last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file custom path ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables inventory home inventory full inventory remote tmp home ansible tmp remote user root os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific rhel summary when running setup the setup module complains about not supporting the new gather timeout parameter steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used git clone ansible version and checkout tag source the hacking source env setup script to use that version run any playbook with minimal config as specified above and with setup gather facts activated hosts localhost tasks name displaying all groups debug var groups keys verbosity expected results that the new gather timeout would be taken into account by setup module without error or ignored if not handled by setup module actual results fatal failed changed false failed true invocation module args gather subset all gather timeout module name setup msg unsupported parameter for module gather timeout ,1
4661,24097706369.0,IssuesEvent,2022-09-19 20:24:00,aws/aws-sam-cli,https://api.github.com/repos/aws/aws-sam-cli,closed,Sam Package/Deploy --image-repository Behavior,type/feature maintainer/need-followup,"Many of the process I put in place for both open source and in company deploy pipelines take advantage of SAM CLI and the AWS CLI using conventions like AWS_PROFILE. I've been very happy that SAM CLI has followed these patterns. Today when working with the new container features I was surprised by this odd behavior of [sam package](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-package.html) when using the `--image-repository` option. Here is an example of my usage where the new image repo was added to my process.
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository ""lambyc-starter"" \
--s3-bucket ""${CLOUDFORMATION_BUCKET}"" \
--s3-prefix ""lambyc-starter-${RAILS_ENV}""
```
These commands are run as either the default AWS_PROFILE or with specific ENV overrides. Given this was set and that the `--region` was set here, my expectation was this command was going to find and publish to the ECR repo within my AWS account. Instead, it tried to push to docker.io and failed with a user password. Digging into some guides and published SAM examples I can see what you expect folks to do is:
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository ""123456789.dkr.ecr.us-east-1.amazonaws.com/lambyc-starter"" \
--s3-bucket ""${CLOUDFORMATION_BUCKET}"" \
--s3-prefix ""lambyc-starter-${RAILS_ENV}""
```
This feels like the wrong interface to me and against the grain of how the CLI operates given all my previous experiences. I can work around this if y'all disagree by adding more `aws` CLI commands to find the account ID and use the `AWS_DEFAULT_REGION` env and/or look that up as well. But it would cool if SAM did this. Thoughts?",True,"Sam Package/Deploy --image-repository Behavior - Many of the process I put in place for both open source and in company deploy pipelines take advantage of SAM CLI and the AWS CLI using conventions like AWS_PROFILE. I've been very happy that SAM CLI has followed these patterns. Today when working with the new container features I was surprised by this odd behavior of [sam package](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-package.html) when using the `--image-repository` option. Here is an example of my usage where the new image repo was added to my process.
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository ""lambyc-starter"" \
--s3-bucket ""${CLOUDFORMATION_BUCKET}"" \
--s3-prefix ""lambyc-starter-${RAILS_ENV}""
```
These commands are run as either the default AWS_PROFILE or with specific ENV overrides. Given this was set and that the `--region` was set here, my expectation was this command was going to find and publish to the ECR repo within my AWS account. Instead, it tried to push to docker.io and failed with a user password. Digging into some guides and published SAM examples I can see what you expect folks to do is:
```shell
sam package \
--region ${AWS_DEFAULT_REGION} \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository ""123456789.dkr.ecr.us-east-1.amazonaws.com/lambyc-starter"" \
--s3-bucket ""${CLOUDFORMATION_BUCKET}"" \
--s3-prefix ""lambyc-starter-${RAILS_ENV}""
```
This feels like the wrong interface to me and against the grain of how the CLI operates given all my previous experiences. I can work around this if y'all disagree by adding more `aws` CLI commands to find the account ID and use the `AWS_DEFAULT_REGION` env and/or look that up as well. But it would cool if SAM did this. Thoughts?",1,sam package deploy image repository behavior many of the process i put in place for both open source and in company deploy pipelines take advantage of sam cli and the aws cli using conventions like aws profile i ve been very happy that sam cli has followed these patterns today when working with the new container features i was surprised by this odd behavior of when using the image repository option here is an example of my usage where the new image repo was added to my process shell sam package region aws default region template file aws sam build template yaml output template file aws sam build packaged yaml image repository lambyc starter bucket cloudformation bucket prefix lambyc starter rails env these commands are run as either the default aws profile or with specific env overrides given this was set and that the region was set here my expectation was this command was going to find and publish to the ecr repo within my aws account instead it tried to push to docker io and failed with a user password digging into some guides and published sam examples i can see what you expect folks to do is shell sam package region aws default region template file aws sam build template yaml output template file aws sam build packaged yaml image repository dkr ecr us east amazonaws com lambyc starter bucket cloudformation bucket prefix lambyc starter rails env this feels like the wrong interface to me and against the grain of how the cli operates given all my previous experiences i can work around this if y all disagree by adding more aws cli commands to find the account id and use the aws default region env and or look that up as well but it would cool if sam did this thoughts ,1
2362,8415681657.0,IssuesEvent,2018-10-13 17:09:31,ansible/ansible,https://api.github.com/repos/ansible/ansible,closed,Bower module throws KeyError for packages specified via git endpoint,affects_2.5 bug module needs_maintainer support:community traceback,"From @im-denisenko on 2015-08-04T15:13:36Z
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
bower module
##### ANSIBLE VERSION
devel
##### SUMMARY
Hello.
I have following dependency in my `bower.json`:
``` json
{
""dependencies"": {
""css3pie"": ""git://github.com/PepijnSenders/css3pie.git""
}
}
```
When I trying to install it, it always failing:
```
TASK: [fpm | install bower packages] ******************************************
REMOTE_MODULE bower path=""/home/realty/master""
failed: [srv-01] => {""failed"": true, ""parsed"": false}
BECOME-SUCCESS-ydpbtaykmrsparnntxwwkzrpvjjwamay
Traceback (most recent call last):
File ""/tmp/ansible-tmp-1438699847.74-256940805164596/bower"", line 1792, in
main()
File ""/tmp/ansible-tmp-1438699847.74-256940805164596/bower"", line 168, in main
installed, missing, outdated = bower.list()
File ""/tmp/ansible-tmp-1438699847.74-256940805164596/bower"", line 121, in list
elif data['dependencies'][dep]['pkgMeta']['version'] != data['dependencies'][dep]['update']['latest']:
KeyError: 'version'
```
Output of `bower list --json` for this package:
``` json
""css3pie"": {
""endpoint"": {
""name"": ""css3pie"",
""source"": ""git://github.com/PepijnSenders/css3pie.git"",
""target"": ""*""
},
""canonicalDir"": ""/home/realty/master/www/bower_components/css3pie"",
""pkgMeta"": {
""name"": ""css3pie"",
""homepage"": ""https://github.com/PepijnSenders/css3pie"",
""authors"": [
""Pepijn Senders ""
],
""description"": ""Bower package for css3pie http://css3pie.com/"",
""keywords"": [
""pie"",
""css3"",
""PIE"",
""css"",
""css3pie""
],
""license"": ""MIT"",
""ignore"": [
""**/.*"",
""node_modules"",
""bower_components"",
""test"",
""tests""
],
""_release"": ""b5e68ce841"",
""_resolution"": {
""type"": ""branch"",
""branch"": ""master"",
""commit"": ""b5e68ce8414bc0d7b451922901e093b78df32b11""
},
""_source"": ""git://github.com/PepijnSenders/css3pie.git"",
""_target"": ""*"",
""_originalSource"": ""git://github.com/PepijnSenders/css3pie.git""
},
""dependencies"": {},
""nrDependants"": 1,
""versions"": []
}
```
There is no `css3pie\pkgMeta\version` nor `css3pie\update` fields.
I guess [line 121](https://github.com/ansible/ansible-modules-extras/blob/devel/packaging/language/bower.py#L121) should check presence of these keys before trying to access them.
Copied from original issue: ansible/ansible-modules-extras#809
",True,"Bower module throws KeyError for packages specified via git endpoint - From @im-denisenko on 2015-08-04T15:13:36Z
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
bower module
##### ANSIBLE VERSION
devel
##### SUMMARY
Hello.
I have following dependency in my `bower.json`:
``` json
{
""dependencies"": {
""css3pie"": ""git://github.com/PepijnSenders/css3pie.git""
}
}
```
When I trying to install it, it always failing:
```
TASK: [fpm | install bower packages] ******************************************
REMOTE_MODULE bower path=""/home/realty/master""
failed: [srv-01] => {""failed"": true, ""parsed"": false}
BECOME-SUCCESS-ydpbtaykmrsparnntxwwkzrpvjjwamay
Traceback (most recent call last):
File ""/tmp/ansible-tmp-1438699847.74-256940805164596/bower"", line 1792, in
main()
File ""/tmp/ansible-tmp-1438699847.74-256940805164596/bower"", line 168, in main
installed, missing, outdated = bower.list()
File ""/tmp/ansible-tmp-1438699847.74-256940805164596/bower"", line 121, in list
elif data['dependencies'][dep]['pkgMeta']['version'] != data['dependencies'][dep]['update']['latest']:
KeyError: 'version'
```
Output of `bower list --json` for this package:
``` json
""css3pie"": {
""endpoint"": {
""name"": ""css3pie"",
""source"": ""git://github.com/PepijnSenders/css3pie.git"",
""target"": ""*""
},
""canonicalDir"": ""/home/realty/master/www/bower_components/css3pie"",
""pkgMeta"": {
""name"": ""css3pie"",
""homepage"": ""https://github.com/PepijnSenders/css3pie"",
""authors"": [
""Pepijn Senders ""
],
""description"": ""Bower package for css3pie http://css3pie.com/"",
""keywords"": [
""pie"",
""css3"",
""PIE"",
""css"",
""css3pie""
],
""license"": ""MIT"",
""ignore"": [
""**/.*"",
""node_modules"",
""bower_components"",
""test"",
""tests""
],
""_release"": ""b5e68ce841"",
""_resolution"": {
""type"": ""branch"",
""branch"": ""master"",
""commit"": ""b5e68ce8414bc0d7b451922901e093b78df32b11""
},
""_source"": ""git://github.com/PepijnSenders/css3pie.git"",
""_target"": ""*"",
""_originalSource"": ""git://github.com/PepijnSenders/css3pie.git""
},
""dependencies"": {},
""nrDependants"": 1,
""versions"": []
}
```
There is no `css3pie\pkgMeta\version` nor `css3pie\update` fields.
I guess [line 121](https://github.com/ansible/ansible-modules-extras/blob/devel/packaging/language/bower.py#L121) should check presence of these keys before trying to access them.
Copied from original issue: ansible/ansible-modules-extras#809
",1,bower module throws keyerror for packages specified via git endpoint from im denisenko on issue type bug report component name bower module ansible version devel summary hello i have following dependency in my bower json json dependencies git github com pepijnsenders git when i trying to install it it always failing task remote module bower path home realty master failed failed true parsed false become success ydpbtaykmrsparnntxwwkzrpvjjwamay traceback most recent call last file tmp ansible tmp bower line in main file tmp ansible tmp bower line in main installed missing outdated bower list file tmp ansible tmp bower line in list elif data data keyerror version output of bower list json for this package json endpoint name source git github com pepijnsenders git target canonicaldir home realty master www bower components pkgmeta name homepage authors pepijn senders description bower package for keywords pie pie css license mit ignore node modules bower components test tests release resolution type branch branch master commit source git github com pepijnsenders git target originalsource git github com pepijnsenders git dependencies nrdependants versions there is no pkgmeta version nor update fields i guess should check presence of these keys before trying to access them copied from original issue ansible ansible modules extras ,1
2865,10271528735.0,IssuesEvent,2019-08-23 16:19:34,arcticicestudio/arctic,https://api.github.com/repos/arcticicestudio/arctic,opened,GitHub code owners,context-workflow scope-maintainability scope-quality scope-stability type-task,"
The project should adapt to GitHub's [code owners][intro] feature. This will allow to define matching pattern for project paths to automatically add all required reviewers of the core team and contributors to new PRs.
See [GitHub Help][help] for more details.
Sidebar for code owner PR review requests and review stats
Branch protection configuration to enable required code owner review approvals
PR status checks when required code owner review is pending
The project should adapt to GitHub's [code owners][intro] feature. This will allow to define matching pattern for project paths to automatically add all required reviewers of the core team and contributors to new PRs.
See [GitHub Help][help] for more details.
Sidebar for code owner PR review requests and review stats
Branch protection configuration to enable required code owner review approvals
PR status checks when required code owner review is pending
[help]: https://help.github.com/articles/about-codeowners
[intro]: https://github.com/blog/2392-introducing-code-owners",1,github code owners the project should adapt to github s feature this will allow to define matching pattern for project paths to automatically add all required reviewers of the core team and contributors to new prs see for more details sidebar for code owner pr review requests and review stats branch protection configuration to enable required code owner review approvals pr status checks when required code owner review is pending ,1
533,3931810469.0,IssuesEvent,2016-04-25 13:51:55,duckduckgo/zeroclickinfo-spice,https://api.github.com/repos/duckduckgo/zeroclickinfo-spice,closed,Holiday: False trigger,Bug Maintainer Input Requested Triggering,"This IA shouldn't be triggered on the following query:
[when is california primary 2016](https://duckduckgo.com/?q=when+is+california+primary+2016&ia=answer)
As [reported on Twitter](https://twitter.com/xarph/status/717064457227112448).
------
IA Page: http://duck.co/ia/view/holiday
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati",True,"Holiday: False trigger - This IA shouldn't be triggered on the following query:
[when is california primary 2016](https://duckduckgo.com/?q=when+is+california+primary+2016&ia=answer)
As [reported on Twitter](https://twitter.com/xarph/status/717064457227112448).
------
IA Page: http://duck.co/ia/view/holiday
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati",1,holiday false trigger this ia shouldn t be triggered on the following query as ia page sekhavati,1
60652,17023483458.0,IssuesEvent,2021-07-03 02:15:49,tomhughes/trac-tickets,https://api.github.com/repos/tomhughes/trac-tickets,closed,It seems changesets without a bounding box do not show up in a user's list of edits,Component: website Priority: minor Resolution: duplicate Type: defect,"**[Submitted to the original trac issue database at 12.39pm, Friday, 25th September 2009]**
When you browse this user's list of edits page (http://www.openstreetmap.org/user/maning/edits), you don't see this changeset of his: http://www.openstreetmap.org/browse/changeset/2611259 (though this changeset can be reached by following the next/previous changeset links)
It seems that it's not listed because there was no bounding box (the changeset only deleted a relation).",1.0,"It seems changesets without a bounding box do not show up in a user's list of edits - **[Submitted to the original trac issue database at 12.39pm, Friday, 25th September 2009]**
When you browse this user's list of edits page (http://www.openstreetmap.org/user/maning/edits), you don't see this changeset of his: http://www.openstreetmap.org/browse/changeset/2611259 (though this changeset can be reached by following the next/previous changeset links)
It seems that it's not listed because there was no bounding box (the changeset only deleted a relation).",0,it seems changesets without a bounding box do not show up in a user s list of edits when you browse this user s list of edits page you don t see this changeset of his though this changeset can be reached by following the next previous changeset links it seems that it s not listed because there was no bounding box the changeset only deleted a relation ,0
6564,9550086414.0,IssuesEvent,2019-05-02 11:03:45,adaptlearning/adapt_authoring,https://api.github.com/repos/adaptlearning/adapt_authoring,opened,Tenant management alternatives,T: requirements,The purpose to this issue is to explore alternatives to tenant management. Please list any requirements you may have of tenant management.,1.0,Tenant management alternatives - The purpose to this issue is to explore alternatives to tenant management. Please list any requirements you may have of tenant management.,0,tenant management alternatives the purpose to this issue is to explore alternatives to tenant management please list any requirements you may have of tenant management ,0
227730,17397785490.0,IssuesEvent,2021-08-02 15:23:48,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,closed,(Due 7/23) Document instances of 911,documentation vaos,"We have a priority need to review all placements and copy of our references to 911.
## Tasks
- [ ] Identify scenarios, pages, flows, etc when 911 is mentioned
- [ ] Screenshot messaging that includes 911
",1.0,"(Due 7/23) Document instances of 911 - We have a priority need to review all placements and copy of our references to 911.
## Tasks
- [ ] Identify scenarios, pages, flows, etc when 911 is mentioned
- [ ] Screenshot messaging that includes 911
",0, due document instances of we have a priority need to review all placements and copy of our references to tasks identify scenarios pages flows etc when is mentioned screenshot messaging that includes ,0
3540,13932592824.0,IssuesEvent,2020-10-22 07:30:06,pace/bricks,https://api.github.com/repos/pace/bricks,closed,objstore: move healthcheck registration into client creation,EST::Hours S::In Progress T::Maintainance,"# Motivation
Do not register healthchecks if the package is simply being imported but the client not necessarily used, i.e., move them out of the `init()` method into the client creation.",True,"objstore: move healthcheck registration into client creation - # Motivation
Do not register healthchecks if the package is simply being imported but the client not necessarily used, i.e., move them out of the `init()` method into the client creation.",1,objstore move healthcheck registration into client creation motivation do not register healthchecks if the package is simply being imported but the client not necessarily used i e move them out of the init method into the client creation ,1
371185,10962670445.0,IssuesEvent,2019-11-27 17:46:59,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,closed,Kubectl version --server should return the server version,kind/feature priority/awaiting-more-evidence sig/cli,"
**What would you like to be added**:
I would like for `kubectl version --server` to return the server version as `kubectl version --client` returns the client version.
**Why is this needed**:
It will make it easier to write automating scripts for checking the server version.
It will maintain consistency as there already is a `--client` flag that returns the client version.",1.0,"Kubectl version --server should return the server version -
**What would you like to be added**:
I would like for `kubectl version --server` to return the server version as `kubectl version --client` returns the client version.
**Why is this needed**:
It will make it easier to write automating scripts for checking the server version.
It will maintain consistency as there already is a `--client` flag that returns the client version.",0,kubectl version server should return the server version what would you like to be added i would like for kubectl version server to return the server version as kubectl version client returns the client version why is this needed it will make it easier to write automating scripts for checking the server version it will maintain consistency as there already is a client flag that returns the client version ,0
934,4644120046.0,IssuesEvent,2016-09-30 15:26:39,ansible/ansible-modules-core,https://api.github.com/repos/ansible/ansible-modules-core,closed,ec2 module hangs if spot request fails,affects_2.1 aws bug_report cloud P2 waiting_on_maintainer,"##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2 module
##### ANSIBLE VERSION
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
ANSIBLE_HOSTS=/etc/ansible/ec2.py
##### OS / ENVIRONMENT
Ubuntu on Windows 10
##### SUMMARY
When an ec2 spot bid fails, the ansible console hangs (making it seem like it's just taking a while to providion).
##### STEPS TO REPRODUCE
This task failed due to a bad volume name (/dev/sda):
```yml
- name: Provision a set of instances
ec2:
spot_price: 0.65
spot_wait_timeout: 600
key_name: ...
region: us-east-1
group_id: ...
instance_type: g2.2xlarge
image: ami-d05e75b8
wait: true
exact_count: 1
count_tag:
Name: InstanceTag
instance_tags:
Name: InstanceTag
vpc_subnet_id: ...
assign_public_ip: yes
zone: us-east-1d
volumes:
- device_name: /dev/sda
volume_type: gp2
volume_size: 20
```
##### EXPECTED RESULTS
I expected ansible to show me an error when the request failed
##### ACTUAL RESULTS
Ansible hangs
",True,"ec2 module hangs if spot request fails - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2 module
##### ANSIBLE VERSION
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
##### CONFIGURATION
ANSIBLE_HOSTS=/etc/ansible/ec2.py
##### OS / ENVIRONMENT
Ubuntu on Windows 10
##### SUMMARY
When an ec2 spot bid fails, the ansible console hangs (making it seem like it's just taking a while to providion).
##### STEPS TO REPRODUCE
This task failed due to a bad volume name (/dev/sda):
```yml
- name: Provision a set of instances
ec2:
spot_price: 0.65
spot_wait_timeout: 600
key_name: ...
region: us-east-1
group_id: ...
instance_type: g2.2xlarge
image: ami-d05e75b8
wait: true
exact_count: 1
count_tag:
Name: InstanceTag
instance_tags:
Name: InstanceTag
vpc_subnet_id: ...
assign_public_ip: yes
zone: us-east-1d
volumes:
- device_name: /dev/sda
volume_type: gp2
volume_size: 20
```
##### EXPECTED RESULTS
I expected ansible to show me an error when the request failed
##### ACTUAL RESULTS
Ansible hangs
",1, module hangs if spot request fails issue type bug report component name module ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration ansible hosts etc ansible py os environment ubuntu on windows summary when an spot bid fails the ansible console hangs making it seem like it s just taking a while to providion steps to reproduce this task failed due to a bad volume name dev sda yml name provision a set of instances spot price spot wait timeout key name region us east group id instance type image ami wait true exact count count tag name instancetag instance tags name instancetag vpc subnet id assign public ip yes zone us east volumes device name dev sda volume type volume size expected results i expected ansible to show me an error when the request failed actual results ansible hangs ,1
132815,10764995450.0,IssuesEvent,2019-11-01 09:50:44,appium/appium,https://api.github.com/repos/appium/appium,closed,[iOS] Appium returns wrong element attribute value,ThirdParty XCUITest,"## The problem
I am trying to get elements attribute ""value"" value. At first, I print page source (pasting here just one element I am interested in):
```
```
Here I clearly see that value=1. Next, I am calling GetAttribute method to find attributes ""value"" value. I get that value=0. Appium server log's shows:
```
2019-10-23 09:27:02:394 [MJSONWP (d360e76d)] Calling AppiumDriver.getAttribute() with args: [""value"",""32010000-0000-0000-3607-000000000000"",""d360e76d-af5e-4883-87c8-4c6663172511""]
2019-10-23 09:27:02:394 [XCUITest] Executing command 'getAttribute'
2019-10-23 09:27:02:396 [WD Proxy] Matched '/element/32010000-0000-0000-3607-000000000000/attribute/value' to command name 'getAttribute'
2019-10-23 09:27:02:396 [WD Proxy] Proxying [GET /element/32010000-0000-0000-3607-000000000000/attribute/value] to [GET http://localhost:8103/session/1253BD09-66C2-47A6-91DA-FC8EEB3D47E3/element/32010000-0000-0000-3607-000000000000/attribute/value] with no body
2019-10-23 09:27:03:025 [WD Proxy] Got response with status 200: {
2019-10-23 09:27:03:026 [WD Proxy] ""value"" : ""0"",
2019-10-23 09:27:03:026 [WD Proxy] ""sessionId"" : ""1253BD09-66C2-47A6-91DA-FC8EEB3D47E3""
2019-10-23 09:27:03:026 [WD Proxy] }
2019-10-23 09:27:03:026 [MJSONWP (d360e76d)] Responding to client with driver.getAttribute() result: ""0""
```
I have encountered this after updating Appium from 1.14.2 to 1.15.1.
## Environment
* Appium version (or git revision) that exhibits the issue: 1.15.1
* Last Appium version that did not exhibit the issue (if applicable): 1.14.2
* Desktop OS/version used to run Appium: macOS Mojave 10.14.6
* Node.js version (unless using Appium.app|exe): 10.16.3
* Npm or Yarn package manager: 6.9.0
* Mobile platform/version under test: iOS 11.0.3
* Real device or emulator/simulator: real device iPhone 6s+
* Appium CLI or Appium.app|exe: 1.15.1
## Code To Reproduce Issue [ Good To Have ]
Something like this:
```
var state = element.GetAttribute(""value"");
```",1.0,"[iOS] Appium returns wrong element attribute value - ## The problem
I am trying to get elements attribute ""value"" value. At first, I print page source (pasting here just one element I am interested in):
```
```
Here I clearly see that value=1. Next, I am calling GetAttribute method to find attributes ""value"" value. I get that value=0. Appium server log's shows:
```
2019-10-23 09:27:02:394 [MJSONWP (d360e76d)] Calling AppiumDriver.getAttribute() with args: [""value"",""32010000-0000-0000-3607-000000000000"",""d360e76d-af5e-4883-87c8-4c6663172511""]
2019-10-23 09:27:02:394 [XCUITest] Executing command 'getAttribute'
2019-10-23 09:27:02:396 [WD Proxy] Matched '/element/32010000-0000-0000-3607-000000000000/attribute/value' to command name 'getAttribute'
2019-10-23 09:27:02:396 [WD Proxy] Proxying [GET /element/32010000-0000-0000-3607-000000000000/attribute/value] to [GET http://localhost:8103/session/1253BD09-66C2-47A6-91DA-FC8EEB3D47E3/element/32010000-0000-0000-3607-000000000000/attribute/value] with no body
2019-10-23 09:27:03:025 [WD Proxy] Got response with status 200: {
2019-10-23 09:27:03:026 [WD Proxy] ""value"" : ""0"",
2019-10-23 09:27:03:026 [WD Proxy] ""sessionId"" : ""1253BD09-66C2-47A6-91DA-FC8EEB3D47E3""
2019-10-23 09:27:03:026 [WD Proxy] }
2019-10-23 09:27:03:026 [MJSONWP (d360e76d)] Responding to client with driver.getAttribute() result: ""0""
```
I have encountered this after updating Appium from 1.14.2 to 1.15.1.
## Environment
* Appium version (or git revision) that exhibits the issue: 1.15.1
* Last Appium version that did not exhibit the issue (if applicable): 1.14.2
* Desktop OS/version used to run Appium: macOS Mojave 10.14.6
* Node.js version (unless using Appium.app|exe): 10.16.3
* Npm or Yarn package manager: 6.9.0
* Mobile platform/version under test: iOS 11.0.3
* Real device or emulator/simulator: real device iPhone 6s+
* Appium CLI or Appium.app|exe: 1.15.1
## Code To Reproduce Issue [ Good To Have ]
Something like this:
```
var state = element.GetAttribute(""value"");
```",0, appium returns wrong element attribute value the problem i am trying to get elements attribute value value at first i print page source pasting here just one element i am interested in here i clearly see that value next i am calling getattribute method to find attributes value value i get that value appium server log s shows calling appiumdriver getattribute with args executing command getattribute matched element attribute value to command name getattribute proxying to with no body got response with status value sessionid responding to client with driver getattribute result i have encountered this after updating appium from to environment appium version or git revision that exhibits the issue last appium version that did not exhibit the issue if applicable desktop os version used to run appium macos mojave node js version unless using appium app exe npm or yarn package manager mobile platform version under test ios real device or emulator simulator real device iphone appium cli or appium app exe code to reproduce issue something like this var state element getattribute value ,0
2472,8639906096.0,IssuesEvent,2018-11-23 22:34:45,F5OEO/rpitx,https://api.github.com/repos/F5OEO/rpitx,closed,Hardware to clean up and amplify signal. ,V1 related (not maintained),"Hi.
Kinda new to the raspberry pi and using it to transmit with.
As I've read the signal generated is pretty nasty so it should be cleaned up if it's going to be let out into the wild.
So is there any hardware out there that does this?
Preferably with a amplifier as well.
I did find this QRPi, for the 20M band. But it's not available any more?
I know that this kinda doesn't have anything to do with the software, but some ""official"" pointers regarding existing hardware/how to build one your self would be great.
I'm not a hw geek or very good at radio technical stuff, but I know how to use a soldering iron, as many others too probably, if only we had some drawings / guide to follow.
Awesome software, keep up the good work!
",True,"Hardware to clean up and amplify signal. - Hi.
Kinda new to the raspberry pi and using it to transmit with.
As I've read the signal generated is pretty nasty so it should be cleaned up if it's going to be let out into the wild.
So is there any hardware out there that does this?
Preferably with a amplifier as well.
I did find this QRPi, for the 20M band. But it's not available any more?
I know that this kinda doesn't have anything to do with the software, but some ""official"" pointers regarding existing hardware/how to build one your self would be great.
I'm not a hw geek or very good at radio technical stuff, but I know how to use a soldering iron, as many others too probably, if only we had some drawings / guide to follow.
Awesome software, keep up the good work!
",1,hardware to clean up and amplify signal hi kinda new to the raspberry pi and using it to transmit with as i ve read the signal generated is pretty nasty so it should be cleaned up if it s going to be let out into the wild so is there any hardware out there that does this preferably with a amplifier as well i did find this qrpi for the band but it s not available any more i know that this kinda doesn t have anything to do with the software but some official pointers regarding existing hardware how to build one your self would be great i m not a hw geek or very good at radio technical stuff but i know how to use a soldering iron as many others too probably if only we had some drawings guide to follow awesome software keep up the good work ,1
4536,23616490067.0,IssuesEvent,2022-08-24 16:18:16,freedomofpress/securedrop-client,https://api.github.com/repos/freedomofpress/securedrop-client,closed,Identifying test failures in the CI pipeline is attention-consuming,maintainer quality of life :gear: Tooling :beach_umbrella: Summer cleanup,"## Description
When any test fails in the CI pipeline, the `test` job fails. That's expected. However, if you want to determine which test failed (or which requirement wasn't met, it could be a linting issue), you need to scroll through the output of one single long step.
I propose splitting the testing tasks into multiple CI **jobs**.
Current:
- build
- test
- setup
- lint-and-test-all-the-things
Proposed:
- build
- lint, type check, etc.
- setup
- lint, type check, etc.
- unit test
- setup
- test (`make test`)
- integration test
- setup
- test (`make test-integration`)
- functional test
- setup
- test (`make test-functional`)
## Considerations
### Trade-offs
CI builds duration can be measured using ""wall clock time"", and ""CPU time"". One advantage of grouping all tasks as we do currently is keeping the ""CPU time"" minimal. That's good for the planet in terms or energy consumption :earth_africa:, and may be cheaper.
Splitting the tasks into more jobs may (or may not, it depends) result in a decrease in ""wall clock time"", which means people have to wait less for CI results. However it often results in an increase of ""CPU time"" because some setup has to be repeated across jobs (which are typically run independently). It's main value lays in the readability of the results, and a decrease of the time spent looking for answers.
_I don't take increasing CPU time lightly (:earth_americas:), but I would give a try to splitting jobs, see how we like it before assuming that the trade-off it not worth it._
### Impact
Developers could rely on CI builds more effectively to narrow down troubleshooting. It's a day-to-day quality of life improvement.
### Security
There are no implications for the threat model, because we'd still be running the exact same checks, and those are not order-dependent.",True,"Identifying test failures in the CI pipeline is attention-consuming - ## Description
When any test fails in the CI pipeline, the `test` job fails. That's expected. However, if you want to determine which test failed (or which requirement wasn't met, it could be a linting issue), you need to scroll through the output of one single long step.
I propose splitting the testing tasks into multiple CI **jobs**.
Current:
- build
- test
- setup
- lint-and-test-all-the-things
Proposed:
- build
- lint, type check, etc.
- setup
- lint, type check, etc.
- unit test
- setup
- test (`make test`)
- integration test
- setup
- test (`make test-integration`)
- functional test
- setup
- test (`make test-functional`)
## Considerations
### Trade-offs
CI builds duration can be measured using ""wall clock time"", and ""CPU time"". One advantage of grouping all tasks as we do currently is keeping the ""CPU time"" minimal. That's good for the planet in terms or energy consumption :earth_africa:, and may be cheaper.
Splitting the tasks into more jobs may (or may not, it depends) result in a decrease in ""wall clock time"", which means people have to wait less for CI results. However it often results in an increase of ""CPU time"" because some setup has to be repeated across jobs (which are typically run independently). It's main value lays in the readability of the results, and a decrease of the time spent looking for answers.
_I don't take increasing CPU time lightly (:earth_americas:), but I would give a try to splitting jobs, see how we like it before assuming that the trade-off it not worth it._
### Impact
Developers could rely on CI builds more effectively to narrow down troubleshooting. It's a day-to-day quality of life improvement.
### Security
There are no implications for the threat model, because we'd still be running the exact same checks, and those are not order-dependent.",1,identifying test failures in the ci pipeline is attention consuming description when any test fails in the ci pipeline the test job fails that s expected however if you want to determine which test failed or which requirement wasn t met it could be a linting issue you need to scroll through the output of one single long step i propose splitting the testing tasks into multiple ci jobs current build test setup lint and test all the things proposed build lint type check etc setup lint type check etc unit test setup test make test integration test setup test make test integration functional test setup test make test functional considerations trade offs ci builds duration can be measured using wall clock time and cpu time one advantage of grouping all tasks as we do currently is keeping the cpu time minimal that s good for the planet in terms or energy consumption earth africa and may be cheaper splitting the tasks into more jobs may or may not it depends result in a decrease in wall clock time which means people have to wait less for ci results however it often results in an increase of cpu time because some setup has to be repeated across jobs which are typically run independently it s main value lays in the readability of the results and a decrease of the time spent looking for answers i don t take increasing cpu time lightly earth americas but i would give a try to splitting jobs see how we like it before assuming that the trade off it not worth it impact developers could rely on ci builds more effectively to narrow down troubleshooting it s a day to day quality of life improvement security there are no implications for the threat model because we d still be running the exact same checks and those are not order dependent ,1
237749,7763834356.0,IssuesEvent,2018-06-01 18:00:29,RTXteam/RTX,https://api.github.com/repos/RTXteam/RTX,opened,CypherError,bug high priority,"Traceback (most recent call last):
File ""BuildMasterKG.py"", line 125, in
running_time = timeit.timeit(lambda: run_function(), number=1)
File ""/usr/lib/python3.5/timeit.py"", line 213, in timeit
return Timer(stmt, setup, timer, globals).timeit(number)
File ""/usr/lib/python3.5/timeit.py"", line 178, in timeit
timing = self.inner(it, self.timer)
File """", line 6, in inner
File ""BuildMasterKG.py"", line 125, in
running_time = timeit.timeit(lambda: run_function(), number=1)
File ""BuildMasterKG.py"", line 107, in make_master_kg
ob.neo4j_push()
File ""/mnt/data/orangeboard/RTX/code/reasoningtool/kg-construction/Orangeboard.py"", line 551, in neo4j_push
self.neo4j_run_cypher_query(command)
File ""/mnt/data/orangeboard/RTX/code/reasoningtool/kg-construction/Orangeboard.py"", line 476, in neo4j_run_cypher_query
res = session.run(query, parameters)
File ""/usr/local/lib/python3.5/dist-packages/neo4j/v1/api.py"", line 340, in run
self._connection.fetch()
File ""/usr/local/lib/python3.5/dist-packages/neo4j/bolt/connection.py"", line 283, in fetch
return self._fetch()
File ""/usr/local/lib/python3.5/dist-packages/neo4j/bolt/connection.py"", line 323, in _fetch
response.on_failure(summary_metadata or {})
File ""/usr/local/lib/python3.5/dist-packages/neo4j/v1/result.py"", line 69, in on_failure
raise CypherError.hydrate(**metadata)
neo4j.exceptions.ClientError: There already exists an index for label 'Base' on property 'UUID'. A constraint cannot be created until the index has been dropped.",1.0,"CypherError - Traceback (most recent call last):
File ""BuildMasterKG.py"", line 125, in
running_time = timeit.timeit(lambda: run_function(), number=1)
File ""/usr/lib/python3.5/timeit.py"", line 213, in timeit
return Timer(stmt, setup, timer, globals).timeit(number)
File ""/usr/lib/python3.5/timeit.py"", line 178, in timeit
timing = self.inner(it, self.timer)
File """", line 6, in inner
File ""BuildMasterKG.py"", line 125, in
running_time = timeit.timeit(lambda: run_function(), number=1)
File ""BuildMasterKG.py"", line 107, in make_master_kg
ob.neo4j_push()
File ""/mnt/data/orangeboard/RTX/code/reasoningtool/kg-construction/Orangeboard.py"", line 551, in neo4j_push
self.neo4j_run_cypher_query(command)
File ""/mnt/data/orangeboard/RTX/code/reasoningtool/kg-construction/Orangeboard.py"", line 476, in neo4j_run_cypher_query
res = session.run(query, parameters)
File ""/usr/local/lib/python3.5/dist-packages/neo4j/v1/api.py"", line 340, in run
self._connection.fetch()
File ""/usr/local/lib/python3.5/dist-packages/neo4j/bolt/connection.py"", line 283, in fetch
return self._fetch()
File ""/usr/local/lib/python3.5/dist-packages/neo4j/bolt/connection.py"", line 323, in _fetch
response.on_failure(summary_metadata or {})
File ""/usr/local/lib/python3.5/dist-packages/neo4j/v1/result.py"", line 69, in on_failure
raise CypherError.hydrate(**metadata)
neo4j.exceptions.ClientError: There already exists an index for label 'Base' on property 'UUID'. A constraint cannot be created until the index has been dropped.",0,cyphererror traceback most recent call last file buildmasterkg py line in running time timeit timeit lambda run function number file usr lib timeit py line in timeit return timer stmt setup timer globals timeit number file usr lib timeit py line in timeit timing self inner it self timer file line in inner file buildmasterkg py line in running time timeit timeit lambda run function number file buildmasterkg py line in make master kg ob push file mnt data orangeboard rtx code reasoningtool kg construction orangeboard py line in push self run cypher query command file mnt data orangeboard rtx code reasoningtool kg construction orangeboard py line in run cypher query res session run query parameters file usr local lib dist packages api py line in run self connection fetch file usr local lib dist packages bolt connection py line in fetch return self fetch file usr local lib dist packages bolt connection py line in fetch response on failure summary metadata or file usr local lib dist packages result py line in on failure raise cyphererror hydrate metadata exceptions clienterror there already exists an index for label base on property uuid a constraint cannot be created until the index has been dropped ,0
1020,4805063421.0,IssuesEvent,2016-11-02 15:10:25,saarcashflow/spider_reddituser,https://api.github.com/repos/saarcashflow/spider_reddituser,closed,get all comment data. not just some.. spider should get data not calculate shit and stuff,MAINTAINABILITY USEFULNESS,"#4
",True,"get all comment data. not just some.. spider should get data not calculate shit and stuff - #4
",1,get all comment data not just some spider should get data not calculate shit and stuff ,1
113930,17171924002.0,IssuesEvent,2021-07-15 06:22:28,Thanraj/libpng_,https://api.github.com/repos/Thanraj/libpng_,opened,CVE-2019-9936 (High) detected in sqliteversion-3.22.0,security vulnerability,"## CVE-2019-9936 - High Severity Vulnerability