message | diff
---|---|
Remove extra "_tests" suffix
The name of the tests directory in the "getting started" documentation has an extra "_tests" suffix. Presumably this is a copy-and-paste error | @@ -100,7 +100,7 @@ You can specify new Python dependencies in `setup.py`.
Tests can be added in the `my_dagster_project_tests` directory and you can run tests using `pytest`:
```bash
-pytest my_dagster_project_tests_tests
+pytest my_dagster_project_tests
```
### Schedules and sensors
|
client: lona_context: add shortcuts
In most applications Lona has only one window. This patch adds shortcuts to
access this first window as the default window. | @@ -326,4 +326,15 @@ Lona.LonaContext = function(settings) {
_this._ws.close();
});
};
+
+ // shortcuts --------------------------------------------------------------
+ this.get_default_window = function() {
+ return this._windows[1];
+ };
+
+ this.run_view = function(url, post_data) {
+ var window = this.get_default_window();
+
+ return window.run_view(url, post_data);
+ };
};
|
[WIP] Remove unused attribute in FirewallPolicyRuleAssociation
FirewallPolicyRuleAssociation.firewall_rule is not used anywhere.
This patch removes it.
TESTING | @@ -102,7 +102,6 @@ class FirewallPolicyRuleAssociation(model_base.BASEV2):
ondelete="CASCADE"),
primary_key=True)
position = sa.Column(sa.Integer)
- firewall_rule = orm.relationship('FirewallRuleV2')
class FirewallPolicy(model_base.BASEV2, model_base.HasId, HasName,
|
Documentation: Add one 200 response code
The Rest Api Doc Linter requires at least one 200 response code. | @@ -251,6 +251,8 @@ class HeaderRedirector(ErrorHandlingMethodView):
style: simple
required: false
responses:
+ 200:
+ description: OK
303:
description: OK
content:
|
Now adding flags with no arguments to consumed_arg_values. This way completer/choice
functions that receive parsed_args will still know a flag was used. | @@ -143,8 +143,11 @@ class AutoCompleter(object):
# _ArgumentState of the current flag
flag_arg_state = None
+ # Non-reusable flags that we've parsed
matched_flags = []
- consumed_arg_values = {} # dict(action -> [values, ...])
+
+ # Keeps track of arguments we've seen and any tokens they consumed
+ consumed_arg_values = dict() # dict(action -> [values, ...])
def consume_argument(arg_state: AutoCompleter._ArgumentState) -> None:
"""Consuming token as an argument"""
@@ -205,13 +208,20 @@ class AutoCompleter(object):
action = self._flag_to_action[candidates_flags[0]]
if action is not None:
- # Keep track of what flags have already been used
- # Flags with action set to append, append_const, and count can be reused
- if not isinstance(action, (argparse._AppendAction,
+ if isinstance(action, (argparse._AppendAction,
argparse._AppendConstAction,
argparse._CountAction)):
+ # Flags with action set to append, append_const, and count can be reused
+ # Therefore don't erase any tokens already consumed for this flag
+ consumed_arg_values.setdefault(action, [])
+ else:
+ # This flag is not reusable, so mark that we've seen it
matched_flags.extend(action.option_strings)
+ # It's possible we already have consumed values for this flag if it was used
+ # earlier in the command line. Reset them now for this use of it.
+ consumed_arg_values[action] = []
+
new_arg_state = AutoCompleter._ArgumentState(action)
# Keep track of this flag if it can receive arguments
@@ -219,10 +229,6 @@ class AutoCompleter(object):
flag_arg_state = new_arg_state
skip_remaining_flags = flag_arg_state.is_remainder
- # It's possible we already have consumed values for this flag if it was used
- # earlier in the command line. Reset them now for this use of it.
- consumed_arg_values[flag_arg_state.action] = []
-
# Check if we are consuming a flag
elif flag_arg_state is not None:
consume_argument(flag_arg_state)
|
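The hunk above distinguishes reusable flags (append/append_const/count actions) from single-use flags when recording consumed tokens. A minimal sketch, not taken from cmd2 and with invented keys, of why `setdefault` fits the reusable case while plain assignment resets the single-use case:

```python
# Hypothetical dict mirroring consumed_arg_values (flag -> consumed tokens).
consumed_arg_values = {"--tag": ["v1"]}

# Reusable flag seen again: keep tokens consumed by earlier uses.
consumed_arg_values.setdefault("--tag", [])
print(consumed_arg_values["--tag"])      # ['v1']

# Single-use flag seen again: start fresh for this use of it.
consumed_arg_values["--output"] = []
print(consumed_arg_values)               # {'--tag': ['v1'], '--output': []}
```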
[tests] Do not skip tests due to MaxlagTimeoutError
After T242081 has been solved upstream, do not skip MaxlagTimeoutError
anymore to detect such failures again. | @@ -1988,19 +1988,14 @@ class Request(MutableMapping):
param_repr = str(self._params)
pywikibot.log('API Error: query=\n%s'
% pprint.pformat(param_repr))
- pywikibot.log(' response=\n%s'
- % result)
+ pywikibot.log(' response=\n{}'.format(result))
raise APIError(**result['error'])
except TypeError:
raise RuntimeError(result)
- msg = 'Maximum retries attempted due to maxlag without success.'
- if os.environ.get('PYWIKIBOT_TESTS_RUNNING', '0') == '1':
- import unittest
- raise unittest.SkipTest(msg)
-
- raise MaxlagTimeoutError(msg)
+ raise MaxlagTimeoutError(
+ 'Maximum retries attempted due to maxlag without success.')
def wait(self, delay=None):
"""Determine how long to wait after a failed request."""
|
Improvements to find bad pixels
Output an image (png) of the bad pixel mask; make printing bad values optional; use known mask if already encoded | mask = pixels.mask
.type = path
.help = "Output mask file name"
+ png = pixels.png
+ .type = path
+ .help = "Bad pixel mask as image"
+ print_values = False
+ .type = bool
+ .help = "Print bad pixel values"
}
"""
)
@@ -67,6 +73,13 @@ def find_constant_signal_pixels(imageset, images):
for idx in images:
pixels = imageset.get_raw_data(idx - 1)
+
+ # apply known mask
+ for _pixel, _panel in zip(pixels, panels):
+ for f0, s0, f1, s1 in _panel.get_mask():
+ blank = flex.int(flex.grid(s1 - s0, f1 - f0), 0)
+ _pixel.matrix_paste_block_in_place(blank, s0, f0)
+
if len(pixels) == 1:
data = pixels[0]
else:
@@ -195,6 +208,15 @@ def run(args):
nslow, nfast = data.focus()
+ # save the total image as a PNG
+ from PIL import Image
+ import numpy
+
+ view = (~(total > (len(images) // 2))).as_int() * 255
+ view.reshape(flex.grid(data.focus()))
+ image = Image.fromarray(view.as_numpy_array().astype(numpy.uint8), mode="L")
+ image.save(params.output.png)
+
ffff = 0
for h in hot_pixels:
@@ -202,6 +224,8 @@ def run(args):
ffff += 1
continue
print("Pixel %d at %d %d" % (total[h], h // nfast, h % nfast))
+ if not params.output.print_values:
+ continue
if len(set(capricious_pixels[h])) >= len(capricious_pixels[h]) // 2:
print(" ... many possible values")
continue
|
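The PNG-writing hunk above builds its arrays with cctbx/dials `flex` types; the same step with plain NumPy and Pillow, as a rough sketch (array shape and file name invented for illustration):

```python
import numpy
from PIL import Image

mask = numpy.zeros((64, 64), dtype=bool)
mask[10:20, 30:40] = True                    # pretend these are the bad pixels

view = (~mask).astype(numpy.uint8) * 255     # good pixels -> 255, bad -> 0
Image.fromarray(view, mode="L").save("bad_pixel_mask.png")
```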
client: remove swarming.py put-bootstrap and put-bot-config
All servers are now using luci-config. The existence of these commands is
confusing.
The server code will be removed later.
There was no test. :) | @@ -1327,36 +1327,6 @@ def CMDcollect(parser, args):
return 1
[email protected]('[filename]')
-def CMDput_bootstrap(parser, args):
- """Uploads a new version of bootstrap.py."""
- options, args = parser.parse_args(args)
- if len(args) != 1:
- parser.error('Must specify file to upload')
- url = options.swarming + '/api/swarming/v1/server/put_bootstrap'
- path = unicode(os.path.abspath(args[0]))
- with fs.open(path, 'rb') as f:
- content = f.read().decode('utf-8')
- data = net.url_read_json(url, data={'content': content})
- print data
- return 0
-
-
[email protected]('[filename]')
-def CMDput_bot_config(parser, args):
- """Uploads a new version of bot_config.py."""
- options, args = parser.parse_args(args)
- if len(args) != 1:
- parser.error('Must specify file to upload')
- url = options.swarming + '/api/swarming/v1/server/put_bot_config'
- path = unicode(os.path.abspath(args[0]))
- with fs.open(path, 'rb') as f:
- content = f.read().decode('utf-8')
- data = net.url_read_json(url, data={'content': content})
- print data
- return 0
-
-
@subcommand.usage('[method name]')
def CMDquery(parser, args):
"""Returns raw JSON information via an URL endpoint. Use 'query-list' to
|
SVHN dataset images are of size 32x32x3
SVHN dataset image size is 32x32x3 and not 28x28x1. | @@ -14,14 +14,14 @@ Existing Datasets
+----------------------+------------+------------+----------------+------------------+
| **QMNIST** | 10 | 28x28x1 | YES | Images |
+----------------------+------------+------------+----------------+------------------+
-| **SVHN** | 10 | 28x28x1 | YES | Images |
-+----------------------+------------+------------+----------------+------------------+
| **MNIST Fellowship** | 30 | 28x28x1 | YES | Images |
+----------------------+------------+------------+----------------+------------------+
| **Rainbow MNIST** | 10 | 28x28x3 | YES | Images |
+----------------------+------------+------------+----------------+------------------+
| **Colored MNIST** | 2 | 28x28x3 | YES | Images |
+----------------------+------------+------------+----------------+------------------+
+| **SVHN** | 10 | 32x32x3 | YES | Images |
++----------------------+------------+------------+----------------+------------------+
| **Synbols** | 50 | 32x32x3 | YES | Images |
+----------------------+------------+------------+----------------+------------------+
| **CIFAR10** | 10 | 32x32x3 | YES | Images |
|
Fix column in Report DiscoveryProblem
HG--
branch : feature/microservices | @@ -210,7 +210,7 @@ class ReportFilterApplication(SimpleReport):
title=self.title,
columns=[
_("Managed Object"), _("Address"), _("Profile"),
- _("Avail"), _("Last successful discovery"),
+ _("Administrative Domain"), _("Avail"), _("Last successful discovery"),
_("Discovery"), _("Error")
],
data=data)
|
Update the build system syntax file.
This commit
1. Removes the phantoms related exec args for reasons already stated in
the previous commit.
2. Adds the `kill_previous` arg for highlighting purposes. | @@ -133,7 +133,7 @@ contexts:
2: support.function.exec-arg.sublime-build
3: punctuation.definition.string.end.json
set: [expect-regex-string-value, expect-colon]
- - match: (")(shell|quiet|kill|update_phantoms_only|hide_phantoms_only|word_wrap)(")
+ - match: (")(shell|quiet|kill(?:_previous)?|update_annotations_only|word_wrap)(")
scope: meta.mapping.key.json meta.main-key.sublime-build string.quoted.double.json
captures:
1: punctuation.definition.string.begin.json
|
Good spot by Leo on has_calls not being exclusive;
using call_args_list to get an accurate list of args | @@ -215,7 +215,7 @@ def test_will_remove_csv_files_for_jobs_older_than_seven_days(notify_db, notify_
with freeze_time('2016-10-18T10:00:00'):
remove_csv_files()
- s3.remove_job_from_s3.assert_has_calls([call(job_1.service_id, job_1.id), call(job_2.service_id, job_2.id)])
+ assert s3.remove_job_from_s3.call_args_list == [call(job_1.service_id, job_1.id), call(job_2.service_id, job_2.id)]
def test_send_daily_performance_stats_calls_does_not_send_if_inactive(
|
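The reason for the change above, shown with a throwaway mock (names invented): `assert_has_calls` still passes when extra, unexpected calls were made, whereas comparing `call_args_list` is exact.

```python
from unittest.mock import Mock, call

remove_job_from_s3 = Mock()
remove_job_from_s3("service-1", "job-1")
remove_job_from_s3("service-1", "job-2")
remove_job_from_s3("service-1", "job-3")   # an extra call the test should catch

# Passes despite the extra call:
remove_job_from_s3.assert_has_calls([call("service-1", "job-1"),
                                     call("service-1", "job-2")])

# Exact comparison catches it:
print(remove_job_from_s3.call_args_list == [call("service-1", "job-1"),
                                            call("service-1", "job-2")])   # False
```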
Extend js_api param desc from create_window()
The documentation of the js_api parameter did not mention that a promise is returned. | @@ -16,7 +16,7 @@ Create a new _pywebview_ window and returns its instance. Window is not shown un
* `title` - Window title
* `url` - URL to load. If the URL does not have a protocol prefix, it is resolved as a path relative to the application entry point.
* `html` - HTML code to load. If both URL and HTML are specified, HTML takes precedence.
-* `js_api` - Expose a `js_api` class object to the DOM of the current `pywebview` window. Callable functions of `js_api` can be executed using Javascript page via `window.pywebview.api` object.
+* `js_api` - Expose a python object to the DOM of the current `pywebview` window. Methods of the `js_api` object can be executed from javascript by calling `window.pywebview.api.<methodname>(<parameters>)`. Please note that the calling javascript function receives a promise that will contain the return value of the python function. Only basic python objects (like int, str, dict, ...) can be returned to javascript.
* `width` - Window width. Default is 800px.
* `height` - Window height. Default is 600px.
* `x` - Window x coordinate. Default is centered.
|
test(spanner): harden 'test_transaction_batch_update*' systests against partial success + abort
Closes | @@ -24,6 +24,7 @@ import unittest
import uuid
import pytest
+import grpc
from google.rpc import code_pb2
from google.api_core import exceptions
@@ -66,6 +67,10 @@ EXISTING_INSTANCES = []
COUNTERS_TABLE = "counters"
COUNTERS_COLUMNS = ("name", "value")
+_STATUS_CODE_TO_GRPC_STATUS_CODE = {
+ member.value[0]: member for member in grpc.StatusCode
+}
+
class Config(object):
"""Run-time configuration to be modified at set-up.
@@ -785,9 +790,13 @@ class TestSessionAPI(unittest.TestCase, _TestData):
# [END spanner_test_dml_with_mutation]
@staticmethod
- def _check_batch_status(status_code):
- if status_code != code_pb2.OK:
- raise exceptions.from_grpc_status(status_code, "batch_update failed")
+ def _check_batch_status(status_code, expected=code_pb2.OK):
+ if status_code != expected:
+ grpc_status_code = _STATUS_CODE_TO_GRPC_STATUS_CODE[status_code]
+ call = FauxCall(status_code)
+ raise exceptions.from_grpc_status(
+ grpc_status_code, "batch_update failed", errors=[call]
+ )
def test_transaction_batch_update_success(self):
# [START spanner_test_dml_with_mutation]
@@ -906,7 +915,7 @@ class TestSessionAPI(unittest.TestCase, _TestData):
status, row_counts = transaction.batch_update(
[insert_statement, update_statement, delete_statement]
)
- self.assertEqual(status.code, code_pb2.INVALID_ARGUMENT)
+ self._check_batch_status(status.code, code_pb2.INVALID_ARGUMENT)
self.assertEqual(len(row_counts), 1)
self.assertEqual(row_counts[0], 1)
@@ -2190,3 +2199,21 @@ class _ReadAbortTrigger(object):
def handle_abort(self, database):
database.run_in_transaction(self._handle_abort_unit_of_work)
self.handler_done.set()
+
+
+class FauxCall(object):
+ def __init__(self, code, details="FauxCall"):
+ self._code = code
+ self._details = details
+
+ def initial_metadata(self):
+ return {}
+
+ def trailing_metadata(self):
+ return {}
+
+ def code(self):
+ return self._code
+
+ def details(self):
+ return self._details
|
Add additional note about ipv6 during lxd setup
Fixes | @@ -117,7 +117,9 @@ class LXDSetupView(WidgetWrap):
" $ newgrp lxd\n"
" $ lxc finger\n\n"
"If `lxc finger` does not fail with an error you are ready "
- "to re-run conjure-up and continue the installation.")
+ "to re-run conjure-up and continue the installation.\n\n"
+ "NOTE: Do NOT setup a IPv6 subnet when asked, only IPv4 subnets\n"
+ "are supported.")
]
return Pile(items)
|
[cleanup] Always show deprecation warning for -dry option in basic.py
DeprecationWarning is hidden by default.
Show FutureWarning instead which is intended for end users | @@ -114,6 +114,7 @@ class BasicBot(
if 'dry' in kwargs:
issue_deprecation_warning('dry argument',
'pywikibot.config.simulate', 1,
+ warning_class=FutureWarning,
since='20160124')
# use simulate variable instead
pywikibot.config.simulate = True
|
Add tag=True/False option to get_latest_version to allow returning the
tag instead of the version | @@ -389,12 +389,14 @@ class BaseProjectConfig(BaseTaskFlowConfig):
gh = get_github_api(github_config.username, github_config.password)
return gh
- def get_latest_version(self, beta=False):
+ def get_latest_version(self, beta=False, tag=False):
""" Query Github Releases to find the latest production or beta release """
gh = self.get_github_api()
repo = gh.repository(self.repo_owner, self.repo_name)
if not beta:
release = repo.latest_release()
+ if tag:
+ return release.tag_name
version = self.get_version_for_tag(release.tag_name)
if version is not None:
return LooseVersion(version)
@@ -402,6 +404,8 @@ class BaseProjectConfig(BaseTaskFlowConfig):
for release in repo.releases():
if not release.tag_name.startswith(self.project__git__prefix_beta):
continue
+ if tag:
+ return release.tag_name
version = self.get_version_for_tag(release.tag_name)
if version is None:
continue
|
tests: refact flake8 workflow
drop ricardochaves/python-lint action and use `run` steps instead. | name: flake8
-on:
- pull_request:
+on: [pull_request]
jobs:
- flake8:
+ build:
runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ python-version: [ '3.6', '3.7', '3.8' ]
+ name: Python ${{ matrix.python-version }}
steps:
- - uses: actions/checkout@v1
- - uses: ricardochaves/[email protected]
+ - uses: actions/checkout@v2
+ - name: Setup python
+ uses: actions/setup-python@v2
with:
- python-root-list: "./library/"
- use-pylint: false
- use-pycodestyle: false
- use-flake8: true
- use-black: false
- use-mypy: false
- use-isort: false
- extra-pylint-options: ""
- extra-pycodestyle-options: ""
- extra-flake8-options: "--max-line-length 160"
- extra-black-options: ""
- extra-mypy-options: ""
- extra-isort-options: ""
- env:
- GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
\ No newline at end of file
+ python-version: ${{ matrix.python-version }}
+ architecture: x64
+ - run: pip install flake8
+ - run: flake8 --max-line-length 160 ./library/ ./tests/library/
\ No newline at end of file
|
[CrIA] Remove .read permissions.
These have been replaced by .get permissions. | @@ -119,16 +119,12 @@ def db():
role('role/resultdb.reader', [
permission('resultdb.invocations.list'),
permission('resultdb.invocations.get'),
- permission('resultdb.invocations.read'),
permission('resultdb.testResults.list'),
permission('resultdb.testResults.get'),
- permission('resultdb.testResults.read'),
permission('resultdb.artifacts.list'),
permission('resultdb.artifacts.get'),
- permission('resultdb.artifacts.read'),
permission('resultdb.testExonerations.list'),
permission('resultdb.testExonerations.get'),
- permission('resultdb.testExonerations.read'),
])
# Buildbucket permissions and roles. Mostly placeholders for now.
|
Handle non-int `indent_size` in .editorconfig
Editorconfig allows the word `tab` to be used for `indent_size`, which
will result in it using the value from `tab_width`. However, this was
causing a `ValueError` because this condition was not handled. | @@ -179,6 +179,9 @@ def _update_with_config_file(file_path, sections, computed_settings):
if file_path.endswith('.editorconfig'):
indent_style = settings.pop('indent_style', '').strip()
indent_size = settings.pop('indent_size', '').strip()
+ if indent_size == "tab":
+ indent_size = settings.pop('tab_width', '').strip()
+
if indent_style == 'space':
computed_settings['indent'] = ' ' * (indent_size and int(indent_size) or 4)
elif indent_style == 'tab':
|
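A small, self-contained reproduction of the failure the patch above handles (settings dict invented for illustration): `int("tab")` raises `ValueError`, so the value must first be swapped for `tab_width`.

```python
settings = {"indent_size": "tab", "tab_width": "4"}

indent_size = settings.pop("indent_size", "").strip()
if indent_size == "tab":
    indent_size = settings.pop("tab_width", "").strip()

indent = " " * (indent_size and int(indent_size) or 4)
print(repr(indent))   # '    '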
flags: support accessing flag value instances on the class
Fixes | @@ -35,11 +35,16 @@ class flag_value:
self.__doc__ = func.__doc__
def __get__(self, instance, owner):
+ if instance is None:
+ return self
return instance._has_flag(self.flag)
def __set__(self, instance, value):
instance._set_flag(self.flag, value)
+ def __repr__(self):
+ return '<flag_value flag={.flag!r}>'.format(self)
+
def fill_with_flags(*, inverted=False):
def decorator(cls):
cls.VALID_FLAGS = {
|
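The `instance is None` check added above is the usual way to make a descriptor readable on the class itself; a generic sketch (class names invented, not the discord.py implementation):

```python
class flag_value:
    def __init__(self, flag):
        self.flag = flag

    def __get__(self, instance, owner):
        if instance is None:
            return self                      # accessed on the class, e.g. Permissions.read
        return bool(instance.value & self.flag)

class Permissions:
    read = flag_value(0x1)

    def __init__(self, value):
        self.value = value

print(Permissions.read)        # the flag_value descriptor object itself
print(Permissions(0x1).read)   # True
```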
Fix error in Python3
AttributeError: 'range' object has no attribute 'remove' | @@ -91,7 +91,7 @@ class TestTilingMock(unittest.TestCase):
# where O: complete cell, X: incomplete cell
size = 4
nodes_per_cell = 8
- nodes = range(size * size * nodes_per_cell)
+ nodes = list(range(size * size * nodes_per_cell))
nodes.remove(nodes_per_cell * 3)
hardware_graph = dnx.chimera_graph(size, node_list=nodes)
|
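The one-line fix above works because in Python 3 `range()` returns a lazy, immutable sequence instead of a list:

```python
nodes = range(8)
print(hasattr(nodes, "remove"))   # False -> AttributeError if called

nodes = list(range(8))
nodes.remove(3)
print(nodes)                      # [0, 1, 2, 4, 5, 6, 7]
```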
custom fields: Move field-type inputs first in create-field form.
As other field data, such as the field name and hint, are
relative to the field type, this commit moves the field-type
input to the first position in the create-field form in org settings. | <div class="new-profile-field-form wrapper">
<div class="settings-section-title new-profile-field-section-title inline-block">{{t "Add a new profile field" }}</div>
<div class="alert-notification" id="admin-add-profile-field-status"></div>
- <div class="control-group">
- <label for="profile_field_name" class="control-label">{{t "Label" }}</label>
- <input type="text" id="profile_field_name" name="name" autocomplete="off" maxlength="40" />
- </div>
- <div class="control-group">
- <label for="profile_field_hint" class="control-label">{{t "Hint (up to 80 characters)" }}</label>
- <input type="text" id="profile_field_hint" name="hint" autocomplete="off" maxlength="80" />
- <div class="alert" id="admin-profile-field-hint-status"></div>
- </div>
<div class="control-group">
<label for="profile_field_type" class="control-label">{{t "Type" }}</label>
<select id="profile_field_type" name="field_type">
{{/each}}
</select>
</div>
- <div class="control-group" id="profile_field_choices_row">
- <label for="profile_field_choices" class="control-label">{{t "Field choices" }}</label>
- <table class="profile_field_choices_table">
- <tbody id="profile_field_choices" class="profile-field-choices"></tbody>
- </table>
- </div>
<div class="control-group" id="profile_field_external_accounts">
<label for="profile_field_external_accounts_type" class="control-label">{{t "External account type" }}</label>
<select id="profile_field_external_accounts_type" name="external_acc_field_type">
<option value="custom">{{t 'Custom' }}</option>
</select>
</div>
+ <div class="control-group">
+ <label for="profile_field_name" class="control-label">{{t "Label" }}</label>
+ <input type="text" id="profile_field_name" name="name" autocomplete="off" maxlength="40" />
+ </div>
+ <div class="control-group">
+ <label for="profile_field_hint" class="control-label">{{t "Hint (up to 80 characters)" }}</label>
+ <input type="text" id="profile_field_hint" name="hint" autocomplete="off" maxlength="80" />
+ <div class="alert" id="admin-profile-field-hint-status"></div>
+ </div>
+ <div class="control-group" id="profile_field_choices_row">
+ <label for="profile_field_choices" class="control-label">{{t "Field choices" }}</label>
+ <table class="profile_field_choices_table">
+ <tbody id="profile_field_choices" class="profile-field-choices"></tbody>
+ </table>
+ </div>
<div class="control-group" id="custom_external_account_url_pattern">
<label for="custom_field_url_pattern" class="control-label">{{t "URL pattern" }}</label>
<input type="url" id="custom_field_url_pattern" name="url_pattern" autocomplete="off" maxlength="80" />
|
Get the parser tests working again
TODO: Structure tests | @@ -74,10 +74,10 @@ class FunctionDef(Structure):
class Lambda(Structure):
- def __init__(self, arity: int, body3: list[Structure]):
- super().__init__(str(arity), body3)
+ def __init__(self, arity: int, body: list[Structure]):
+ super().__init__(str(arity), body)
self.arity = arity
- self.body = body3
+ self.body = body
class LambdaMap(Lambda):
@@ -113,6 +113,11 @@ class MonadicModifier(Structure):
self.modifier = modifier
self.function_A = branches[0]
+ def __repr__(self):
+ return (
+ f"{type(self).__name__}({repr(self.branches)}, {self.modifier!r})"
+ )
+
class DyadicModifier(Structure):
def __init__(self, modifier: str, *branches: Branch):
@@ -121,6 +126,11 @@ class DyadicModifier(Structure):
self.function_A = branches[0]
self.function_B = branches[1]
+ def __repr__(self):
+ return (
+ f"{type(self).__name__}({repr(self.branches)}, {self.modifier!r})"
+ )
+
class TriadicModifier(Structure):
def __init__(self, modifier: str, *branches: Branch):
@@ -129,3 +139,8 @@ class TriadicModifier(Structure):
self.function_A = branches[0]
self.function_B = branches[1]
self.function_C = branches[2]
+
+ def __repr__(self):
+ return (
+ f"{type(self).__name__}({repr(self.branches)}, {self.modifier!r})"
+ )
|
Use long QID for calendar and globe
Wikibase uses the Wikidata entities for date and globe by default,
in case the user wants to use their own. | @@ -25,8 +25,8 @@ config = {
'MAXLAG': 5,
'PROPERTY_CONSTRAINT_PID': 'P2302',
'DISTINCT_VALUES_CONSTRAINT_QID': 'Q21502410',
- 'COORDINATE_GLOBE_QID': 'Q2',
- 'CALENDAR_MODEL_QID': 'Q1985727',
+ 'COORDINATE_GLOBE_QID': 'http://www.wikidata.org/entity/Q2',
+ 'CALENDAR_MODEL_QID': 'http://www.wikidata.org/entity/Q1985727',
'MEDIAWIKI_API_URL': 'https://www.wikidata.org/w/api.php',
'SPARQL_ENDPOINT_URL': 'https://query.wikidata.org/sparql',
'WIKIBASE_URL': 'http://www.wikidata.org',
|
DOC: Note client dir creation in CLI instructions
Note that the `repo.py --init` call also sets up a client directory.
A student recently ran into some confusion on this point, and it's not properly documented here. | @@ -10,7 +10,9 @@ The CLI requires a few dependencies and C extensions that can be installed with
Create a TUF repository in the current working directory. A cryptographic key
is created and set for each top-level role. The written Targets metadata does
-not sign for any targets, nor does it delegate trust to any roles.
+not sign for any targets, nor does it delegate trust to any roles. The
+`--init` call will also set up a client directory. By default, these
+directories will be `./tufrepo` and `./tufclient`.
```Bash
$ repo.py --init
|
Option "scheduler_default_filters" is deprecated.
The scheduler_default_filters option is deprecated in favor of
the [scheduler]/enabled_filters option. This change updates
the docs to use the enabled_filters option over the deprecated
scheduler_default_filters option.
Closes-Bug: | @@ -1073,15 +1073,16 @@ drives if they need access to faster disk I/O, or access to compute hosts that
have GPU cards to take advantage of GPU-accelerated code.
To configure the scheduler to support host aggregates, the
-``scheduler_default_filters`` configuration option must contain the
-``AggregateInstanceExtraSpecsFilter`` in addition to the other filters used by
-the scheduler. Add the following line to ``/etc/nova/nova.conf`` on the host
-that runs the ``nova-scheduler`` service to enable host aggregates filtering,
+:oslo.config:option:`filter_scheduler.enabled_filters` configuration option must
+contain the ``AggregateInstanceExtraSpecsFilter`` in addition to the other filters
+used by the scheduler. Add the following line to ``/etc/nova/nova.conf`` on the
+host that runs the ``nova-scheduler`` service to enable host aggregates filtering,
as well as the other filters that are typically enabled:
.. code-block:: ini
- scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
+ [filter_scheduler]
+ enabled_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
Example: Specify compute hosts with SSDs
----------------------------------------
|
Adds Category Channels To Error Message
Adds the channels within categories to the failure message of the
command whitelist check. | @@ -250,7 +250,18 @@ def whitelist_check(**default_kwargs: t.Container[int]) -> t.Callable[[Context],
)
# Raise error if the check did not pass
- channels = kwargs.get("channels")
+ channels = set(kwargs.get("channels") or {})
+ categories = kwargs.get("categories")
+
+ # Add all whitelisted category channels
+ if categories:
+ for category_id in categories:
+ category = ctx.guild.get_channel(category_id)
+ if category is None:
+ continue
+
+ [channels.add(channel.id) for channel in category.text_channels]
+
if channels:
channels_str = ', '.join(f"<#{c_id}>" for c_id in channels)
message = f"Sorry, but you may only use this command within {channels_str}."
|
Update contact well support.
Add asymmetric tap overlap support in DRC rules.
Add static nwell and pwell contact in class for measurements. | @@ -43,6 +43,8 @@ class contact(hierarchy_design.hierarchy_design):
self.add_comment("implant type: {}\n".format(implant_type))
self.add_comment("well_type: {}\n".format(well_type))
+ self.is_well_contact = implant_type == well_type
+
self.layer_stack = layer_stack
self.dimensions = dimensions
if directions:
@@ -114,6 +116,11 @@ class contact(hierarchy_design.hierarchy_design):
self.first_layer_minwidth = drc("minwidth_{0}".format(self.first_layer_name))
self.first_layer_enclosure = drc("{0}_enclose_{1}".format(self.first_layer_name, self.via_layer_name))
+ # If there's a different rule for active
+ # FIXME: Make this more elegant
+ if self.is_well_contact and self.first_layer_name == "active" and "tap_extend_contact" in drc.keys():
+ self.first_layer_extend = drc("tap_extend_contact")
+ else:
self.first_layer_extend = drc("{0}_extend_{1}".format(self.first_layer_name, self.via_layer_name))
self.second_layer_minwidth = drc("minwidth_{0}".format(self.second_layer_name))
@@ -232,17 +239,17 @@ class contact(hierarchy_design.hierarchy_design):
well_layer = "{}well".format(self.well_type)
if well_layer in tech.layer:
well_width_rule = drc("minwidth_" + well_layer)
- well_enclose_active = drc(well_layer + "_enclose_active")
- well_width = max(self.first_layer_width + 2 * well_enclose_active,
+ self.well_enclose_active = drc(well_layer + "_enclose_active")
+ self.well_width = max(self.first_layer_width + 2 * self.well_enclose_active,
well_width_rule)
- well_height = max(self.first_layer_height + 2 * well_enclose_active,
+ self.well_height = max(self.first_layer_height + 2 * self.well_enclose_active,
well_width_rule)
center_pos = vector(0.5*self.width, 0.5*self.height)
- well_position = center_pos - vector(0.5*well_width, 0.5*well_height)
+ well_position = center_pos - vector(0.5*self.well_width, 0.5*self.well_height)
self.add_rect(layer=well_layer,
offset=well_position,
- width=well_width,
- height=well_height)
+ width=self.well_width,
+ height=self.well_height)
def analytical_power(self, corner, load):
""" Get total power of a module """
@@ -261,6 +268,21 @@ for layer_stack in tech.layer_stacks:
else:
setattr(module, layer1 + "_via", cont)
+# Set up a static for each well contact for measurements
+if "nwell" in tech.layer:
+ cont = factory.create(module_type="contact",
+ layer_stack=tech.active_stack,
+ implant_type="n",
+ well_type="n")
+ module = sys.modules[__name__]
+ setattr(module, "nwell_contact", cont)
+if "pwell" in tech.layer:
+ cont = factory.create(module_type="contact",
+ layer_stack=tech.active_stack,
+ implant_type="p",
+ well_type="p")
+ module = sys.modules[__name__]
+ setattr(module, "pwell_contact", cont)
|
fix: ignore PUSH_CONTENT_FOCUS_CHANGE
closes | @@ -892,6 +892,7 @@ async def setup_alexa(hass, config_entry, login_obj):
elif command in [
"PUSH_DELETE_DOPPLER_ACTIVITIES", # delete Alexa history
"PUSH_LIST_ITEM_CHANGE", # update shopping list
+ "PUSH_CONTENT_FOCUS_CHANGE", # likely prime related refocus
]:
pass
else:
|
compression value set to 1
bz2 minimum compression value is 1 | @@ -76,8 +76,8 @@ class DevConfig(object):
self.blocks_per_chain_file = 1000
self.chain_read_buffer_size = 1024
self.binary_file_delimiter = '-_-_'
- self.compression_level = 0
- self.version_number = "alpha/0.31a"
+ self.compression_level = 1
+ self.version_number = "alpha/0.32a"
self.chain_file_directory = 'data'
self.db_name = 'state'
|
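A quick check of the constraint mentioned in the commit message above: Python's `bz2` rejects a compression level of 0, so 1 is the minimum.

```python
import bz2

data = b"x" * 10000
try:
    bz2.compress(data, compresslevel=0)
except ValueError as exc:
    print("compresslevel=0 rejected:", exc)

print("compresslevel=1 size:", len(bz2.compress(data, compresslevel=1)))
```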
Add test to verify comparing with deleted version works.
* Add test to verify comparing with deleted version works.
Fixes
* Add comment | @@ -6802,7 +6802,7 @@ class TestReviewAddonVersionCompareViewSet(
}
]
- def test_get_deleted_file(self):
+ def test_compare_with_deleted_file(self):
new_version = version_factory(
addon=self.addon, file_kw={'filename': 'webextension_no_id.xpi'})
@@ -6862,6 +6862,35 @@ class TestReviewAddonVersionCompareViewSet(
result = json.loads(response.content)
assert result['file']['download_url']
+ def test_compare_with_deleted_version(self):
+ new_version = version_factory(
+ addon=self.addon, file_kw={'filename': 'webextension_no_id.xpi'})
+
+ # We need to run extraction first and delete afterwards, otherwise
+ # we'll end up with errors because files don't exist anymore.
+ AddonGitRepository.extract_and_commit_from_version(new_version)
+
+ new_version.delete()
+
+ user = UserProfile.objects.create(username='reviewer')
+ self.grant_permission(user, 'Addons:Review')
+
+ # A reviewer needs the `Addons:ViewDeleted` permission to view and
+ # compare deleted versions
+ self.grant_permission(user, 'Addons:ViewDeleted')
+
+ self.client.login_api(user)
+
+ self.url = reverse_ns('reviewers-versions-compare-detail', kwargs={
+ 'addon_pk': self.addon.pk,
+ 'version_pk': self.version.pk,
+ 'pk': new_version.pk})
+
+ response = self.client.get(self.url)
+ assert response.status_code == 200
+ result = json.loads(response.content)
+ assert result['file']['download_url']
+
class TestDownloadGitFileView(TestCase):
def setUp(self):
|
Save persistent ops as a 6-digit number.
With operations 0-10, they were loading in the order 0,1,10,2,3,4,...
By saving with 6 digits (will a user ever exceed 1 million operations?), order is preserved on reload. | @@ -4444,7 +4444,7 @@ class Elemental(Modifier):
settings.clear_persistent()
for i, op in enumerate(self.ops()):
- op_set = settings.derive(str(i))
+ op_set = settings.derive("%06i" % i)
if not hasattr(op, "settings"):
continue # Might be a function.
sets = op.settings
|
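Why zero-padding restores the order described above: persisted section names are sorted as strings, so lexicographic order diverges from numeric order once the count passes 9.

```python
keys_plain  = [str(i) for i in range(11)]
keys_padded = ["%06i" % i for i in range(11)]

print(sorted(keys_plain)[:4])    # ['0', '1', '10', '2']  -> wrong order on reload
print(sorted(keys_padded)[:4])   # ['000000', '000001', '000002', '000003']
```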
Remove obsolete mention of modeling from docs
It was removed in | @@ -47,12 +47,6 @@ Finally, install ``stingray`` itself: ::
$ pip install -e "."
-Note that this will install the package *without* the optional dependencies required by
-the ``modeling`` subpackage. In order to install ``stingray`` with these dependencies
-(``emcee``, ``corner`` and ``statsmodels``), execute ::
-
- $ pip install -e ".[modeling]"
-
Installing development environment (for new contributors)
---------------------------------------------------------
@@ -78,12 +72,6 @@ Finally, install ``stingray`` itself: ::
$ pip install -e "."
-Note that this will install the package *without* the optional dependencies required by
-the ``modeling`` subpackage. In order to install ``stingray`` with these dependencies
-(``emcee``, ``corner`` and ``statsmodels``), execute ::
-
- $ pip install -e ".[modeling]"
-
.. _testsuite:
Test Suite
|
Enhance condition in _get_size
This fixes the issue | @@ -49,7 +49,7 @@ class BaseIKFile(File):
def _get_size(self):
self._require_file()
- if not self._committed:
+ if not getattr(self, '_committed', False):
return self.file.size
return self.storage.size(self.name)
size = property(_get_size)
|
Ensure that BigQuery queries run until they are complete, instead of until a default timeout
actually make the queries against bq run synchronously
Author: dlovell
Closes from tsdlovell/actually-synchronous and squashes the following commits:
[dlovell] ENH: actually run query synchronously | import re
+import time
import pandas as pd
@@ -83,6 +84,16 @@ class BigQueryAPIProxy(object):
def get_schema(self, table_id, dataset_id):
return self.get_table(table_id, dataset_id).schema
+ def run_sync_query(self, stmt):
+ query = self.client.run_sync_query(stmt)
+ query.use_legacy_sql = False
+ query.run()
+ # run_sync_query is not really synchronous: there's a timeout
+ while not query.job.done():
+ query.job.reload()
+ time.sleep(1)
+ return query
+
class BigQueryDatabase(Database):
pass
@@ -133,9 +144,7 @@ class BigQueryClient(SQLClient):
def _execute(self, stmt, results=True):
# TODO(phillipc): Allow **kwargs in calls to execute
- query = self._proxy.client.run_sync_query(stmt)
- query.use_legacy_sql = False
- query.run()
+ query = self._proxy.run_sync_query(stmt)
return BigQueryCursor(query)
def database(self, name=None):
|
Enable JMX reporting of internal metrics
We need to enable jmx reporting of our internal, dropwizard metrics
so that they can be exposed over prometheus endpoint. | @@ -55,6 +55,7 @@ spec:
- "-Dcom.datastax.driver.FORCE_NIO=true"
- "-DKUBERNETES_MASTER_URL={{openshift_metrics_master_url}}"
- "-DUSER_WRITE_ACCESS={{openshift_metrics_hawkular_user_write_access}}"
+ - "-Dhawkular.metrics.jmx-reporting-enabled"
env:
- name: POD_NAMESPACE
valueFrom:
|
fix: (User Permissions) Allow user to fetch details into mapped doc
Check if "apply strict user permissions" is enabled
and restrict user permissions accordingly | @@ -44,12 +44,18 @@ def map_docs(method, source_names, target_doc):
def get_mapped_doc(from_doctype, from_docname, table_maps, target_doc=None,
postprocess=None, ignore_permissions=False, ignore_child_tables=False):
+ apply_strict_user_permissions = frappe.get_system_settings("apply_strict_user_permissions")
+
# main
if not target_doc:
target_doc = frappe.new_doc(table_maps[from_doctype]["doctype"])
elif isinstance(target_doc, string_types):
target_doc = frappe.get_doc(json.loads(target_doc))
+ if (not apply_strict_user_permissions
+ and not ignore_permissions and not target_doc.has_permission("create")):
+ target_doc.raise_no_permission_to("create")
+
source_doc = frappe.get_doc(from_doctype, from_docname)
if not ignore_permissions:
@@ -111,7 +117,8 @@ def get_mapped_doc(from_doctype, from_docname, table_maps, target_doc=None,
target_doc.set_onload("load_after_mapping", True)
- if not ignore_permissions and not target_doc.has_permission("create"):
+ if (apply_strict_user_permissions
+ and not ignore_permissions and not target_doc.has_permission("create")):
target_doc.raise_no_permission_to("create")
return target_doc
|
fix: Do not set content type for paths ending with .com
To avoid unnecessary download | @@ -103,7 +103,9 @@ def set_content_type(response, data, path):
response.mimetype = 'text/html'
response.charset = 'utf-8'
- if "." in path:
+ # ignore paths ending with .com to avoid unnecessary download
+ # https://bugs.python.org/issue22347
+ if "." in path and not path.endswith('.com'):
content_type, encoding = mimetypes.guess_type(path)
if content_type:
response.mimetype = content_type
|
AC: fix input info for blob
* AC: remove deprecated pipelined mode
* Revert "AC: remove deprecated pipelined mode"
This reverts commit
* AC: fix input info for blob | @@ -797,7 +797,9 @@ class DLSDKLauncher(Launcher):
self.original_outputs = list(self.exec_network.outputs.keys())
has_info = hasattr(self.exec_network, 'input_info')
if has_info:
- ie_input_info = OrderedDict([(name, data.input_data) for name, data in self.network.input_info.items()])
+ ie_input_info = OrderedDict([
+ (name, data.input_data) for name, data in self.exec_network.input_info.items()
+ ])
else:
ie_input_info = self.exec_network.inputs
first_input = next(iter(ie_input_info))
|
add list transform batch to sample mixing
fix previous
fix previous | @@ -397,28 +397,39 @@ class TotalPosterior(Posterior):
self,
n_samples: int = 1,
give_mean: bool = True,
- transform_batch: Optional[int] = None,
+ transform_batch: Optional[Union[int, List[int]]] = None,
) -> np.ndarray:
""" Returns mixing bernoulli parameter for protein negative binomial mixtures (probability background)
:param n_samples: number of samples from posterior distribution
:param sample_protein_mixing: Sample mixing bernoulli, setting background to zero
:param give_mean: bool, whether to return samples along first axis or average over samples
- :param transform_batch: Batches to condition on as integer.
+ :param transform_batch: Batches to condition on.
+ If transform_batch is:
+ - None, then real observed batch is used
+ - int, then batch transform_batch is used
+ - list of int, then values are averaged over provided batches.
:return: array of probability background
"""
py_mixings = []
+ if (transform_batch is None) or (isinstance(transform_batch, int)):
+ transform_batch = [transform_batch]
for tensors in self:
x, _, _, batch_index, label, y = tensors
+ py_mixing = torch.zeros_like(y)
+ if n_samples > 1:
+ py_mixing = torch.stack(n_samples * [py_mixing])
+ for b in transform_batch:
outputs = self.model.inference(
x,
y,
batch_index=batch_index,
label=label,
n_samples=n_samples,
- transform_batch=transform_batch,
+ transform_batch=b,
)
- py_mixing = torch.sigmoid(outputs["py_"]["mixing"])
+ py_mixing += torch.sigmoid(outputs["py_"]["mixing"])
+ py_mixing /= len(transform_batch)
py_mixings += [py_mixing.cpu()]
if n_samples > 1:
# concatenate along batch dimension -> result shape = (samples, cells, features)
|
Fixed the GPU Test summary issue
This issue was causing failure in uploading results to telemetry database | @@ -126,7 +126,7 @@ function Main {
$currentTestResult.TestResult = Get-FinalResultHeader -resultarr "ABORTED"
return $currentTestResult
}
- $CurrentTestResult.TestSummary += New-ResultSummary -metaData "Using nVidia driver: $driver" -testName $CurrentTestData.testName
+ $CurrentTestResult.TestSummary += New-ResultSummary -metaData "Using nVidia driver" -testName $CurrentTestData.testName -testResult $driver
$cmdAddConstants = "echo -e `"driver=$($driver)`" >> constants.sh"
Run-LinuxCmd -username $superuser -password $password -ip $allVMData.PublicIP -port $allVMData.SSHPort `
|
Added check for user permissions.
The permission check is done by creating and dropping a TEST db; this way the main db stays up and running, and we avoid breaking Confluence by dropping its db without having the possibility to re-create it. | @@ -118,7 +118,7 @@ else
fi
echo "Current PostgreSQL version is $(psql -V)"
-echo "Step2: Get DB Host and check DB connection"
+echo "Step2: Get DB Host, check DB connection and user permissions"
DB_HOST=$(sudo su -c "cat ${DB_CONFIG} | grep 'jdbc:postgresql' | cut -d'/' -f3 | cut -d':' -f1")
if [[ -z ${DB_HOST} ]]; then
echo "DataBase URL was not found in ${DB_CONFIG}"
@@ -137,6 +137,15 @@ if [[ $? -ne 0 ]]; then
exit 1
fi
+echo "Check database permissions for user ${CONFLUENCE_DB_USER}"
+PGPASSWORD=${CONFLUENCE_DB_PASS} createdb -U ${CONFLUENCE_DB_USER} -h ${DB_HOST} -T template0 -E "UNICODE" -l "C" TEST
+if [[ $? -ne 0 ]]; then
+ echo "User ${CONFLUENCE_DB_USER} doesn't have permission to create database."
+ exit 1
+else
+ PGPASSWORD=${CONFLUENCE_DB_PASS} dropdb -U ${CONFLUENCE_DB_USER} -h ${DB_HOST} TEST
+fi
+
echo "Step3: Write confluence baseUrl to file"
CONFLUENCE_BASE_URL_FILE="base_url"
if [[ -s ${CONFLUENCE_BASE_URL_FILE} ]];then
|
Make repr tag highlighting greedy
Addressing before this change the tag-matching code was non-greedy,
resulting in an unbalanced match if there were tags within tags. This
change makes this greedy to ensure that there's a better chance of the match
being balanced. | @@ -82,7 +82,7 @@ class ReprHighlighter(RegexHighlighter):
base_style = "repr."
highlights = [
- r"(?P<tag_start><)(?P<tag_name>[-\w.:|]*)(?P<tag_contents>[\w\W]*?)(?P<tag_end>>)",
+ r"(?P<tag_start><)(?P<tag_name>[-\w.:|]*)(?P<tag_contents>[\w\W]*)(?P<tag_end>>)",
r'(?P<attrib_name>[\w_]{1,50})=(?P<attrib_value>"?[\w_]+"?)?',
r"(?P<brace>[][{}()])",
_combine_regex(
|
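The difference between the two patterns above on nested tags, in isolation (sample string invented):

```python
import re

text = "<Segment <Style bold>>"

print(re.search(r"<[\w\W]*?>", text).group())   # '<Segment <Style bold>'  (unbalanced)
print(re.search(r"<[\w\W]*>",  text).group())   # '<Segment <Style bold>>' (balanced)
```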
[OpenCL] Avoid SelectNode ambiguous overloading
* [OpenCL] Avoid SelectNode ambiguous overloading
* Revert "[OpenCL] Avoid SelectNode ambiguous overloading"
This reverts commit
* [OpenCL] Avoid SelectNode ambiguous codegen | @@ -541,12 +541,26 @@ void CodeGenOpenCL::VisitExpr_(const OrNode* op, std::ostream& os) {
}
void CodeGenOpenCL::VisitExpr_(const SelectNode* op, std::ostream& os) {
+ std::ostringstream oss;
os << "select(";
- PrintExpr(op->false_value, os);
+ PrintExpr(op->false_value, oss);
+ os << CastFromTo(oss.str(), op->false_value.dtype(), op->dtype);
+ oss.str("");
os << ", ";
- PrintExpr(op->true_value, os);
+ PrintExpr(op->true_value, oss);
+ os << CastFromTo(oss.str(), op->true_value.dtype(), op->dtype);
+ oss.str("");
os << ", ";
- PrintExpr(op->condition, os);
+ PrintExpr(op->condition, oss);
+ if (op->dtype.is_float()) {
+ if (op->condition.dtype().is_uint() || op->condition.dtype().is_int()) {
+ os << oss.str();
+ } else {
+ os << CastTo(oss.str(), DataType::Int(op->dtype.bits(), op->dtype.lanes()));
+ }
+ } else {
+ os << CastFromTo(oss.str(), op->condition.dtype(), op->dtype);
+ }
os << ")";
}
|
removing python version < 3.9 requirement
TensorFlow added support for python 3.9, so this check can be removed: | import os, sys, setuptools
from setuptools import setup
-# TODO remove this restriction after https://github.com/tensorflow/tensorflow/issues/44485 is resolved
-if sys.version_info >= (3, 9):
- # NOTE if the next line causes a syntax error, you are likely using Python 2, and need to upgrade to Python 3
- sys.exit(
- f"Sorry, Python versions above 3.8 is not supported. You have: {sys.version_info.major}.{sys.version_info.minor}"
- )
-
-
def read(fname):
with open(os.path.join(os.path.dirname(__file__), fname)) as f:
data = f.read()
|
A number of detections are passing locally
but failing in CI/CD. I believe this is because
they are not being given enough time to finish
their data ingest completely. If a search fails,
wait some time and run it a few more times to
see if it will complete. | @@ -11,6 +11,7 @@ from typing import Union
DEFAULT_EVENT_HOST = "ATTACK_DATA_HOST"
DEFAULT_DATA_INDEX = "main"
+FAILURE_SLEEP_INTERVAL_SECONDS = 120
def enable_delete_for_admin(splunk_host:str, splunk_port:int, splunk_password:str)->bool:
try:
@@ -225,7 +226,12 @@ def test_baseline_search(splunk_host, splunk_port, splunk_password, search, pass
-def test_detection_search(splunk_host:str, splunk_port:int, splunk_password:str, search:str, pass_condition:str, detection_name:str, detection_file:str, earliest_time:str, latest_time:str)->dict:
+def test_detection_search(splunk_host:str, splunk_port:int, splunk_password:str, search:str, pass_condition:str,
+ detection_name:str, detection_file:str, earliest_time:str, latest_time:str, attempts_remaining:int=2,
+ failure_sleep_interval_seconds:int=FAILURE_SLEEP_INTERVAL_SECONDS)->dict:
+ #Since this is an attempt, decrement the number of remaining attempts
+ attempts_remaining -= 1
+
if search.startswith('|'):
search = search
else:
@@ -294,6 +300,13 @@ def test_detection_search(splunk_host:str, splunk_port:int, splunk_password:str,
#Should this be 1 for a pass, or should it be greater than 0?
if int(job['resultCount']) != 1:
#print("Test failed for detection: " + detection_name)
+ if attempts_remaining > 0:
+ print(f"Execution of test failed for [{detection_name}]. Sleeping for [{failure_sleep_interval_seconds} seconds] and triyng again...")
+ time.sleep(failure_sleep_interval_seconds)
+ return test_detection_search(splunk_host, splunk_port, splunk_password, search, pass_condition, detection_name, detection_file,
+ earliest_time, latest_time, attempts_remaining=attempts_remaining,
+ failure_sleep_interval_seconds=failure_sleep_interval_seconds)
+ else:
test_results['success'] = False
return test_results
else:
|
validate: fix a typo
introduced a typo.
This commit fixes it. | @@ -111,7 +111,7 @@ class ActionModule(ActionBase):
display.error(msg)
reason = "[{}] Reason: {}".format(host, error.reason)
try:
- if "schema is missing" not str(error):
+ if "schema is missing" not in str(error):
for i in range(0, len(error.path)):
if i == 0:
given = "[{}] Given value for {}".format(
|
Update pythonpackage.yml
Added python 3.5 | @@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: [3.6, 3.7, 3.8]
+ python-version: [3.5, 3.6, 3.7, 3.8]
steps:
- uses: actions/checkout@v2
|
gitignore: Fix format for excluding .cache/.
Comments in gitignore files must be on their own line. | # * Subdirectories with several internal things to ignore get their own
# `.gitignore` files.
#
+# * Comments must be on their own line. (Otherwise they don't work.)
+#
# See `git help ignore` for details on the format.
## Config files for the dev environment
@@ -52,7 +54,8 @@ zulip.kdev4
*.sublime-workspace
.vscode/
*.DS_Store
-.cache/ # Generated by VSCode's test runner
+# .cache/ is generated by VSCode's test runner
+.cache/
## Miscellaneous
# (Ideally this section is empty.)
|
Fix suppressing messages leading to a 400 error
This only makes it so allowed_mentions are passed if the message is
authored by the bot itself. | @@ -1156,7 +1156,7 @@ class Message(Hashable):
try:
allowed_mentions = fields.pop('allowed_mentions')
except KeyError:
- if self._state.allowed_mentions is not None:
+ if self._state.allowed_mentions is not None and self.author.id == self._state.self_id:
fields['allowed_mentions'] = self._state.allowed_mentions.to_dict()
else:
if allowed_mentions is not None:
|
docs: remove duplicate breaking changes block for 0.5.0
resolves | @@ -211,11 +211,6 @@ and others in the Integrations section for more info.
- models depending on `tensorflow` require `CUDA 10.0` to run on GPU instead of `CUDA 9.0`
- scikit-learn models have to be redownloaded or retrained
-**Breaking changes in version 0.5.0**
-- dependencies have to be reinstalled for most pipeline configurations
-- models depending on `tensorflow` require `CUDA 10.0` to run on GPU instead of `CUDA 9.0`
-- scikit-learn models have to be redownloaded or retrained
-
**Breaking changes in version 0.4.0!**
- default target variable name for [neural evolution](https://docs.deeppavlov.ai/en/0.4.0/intro/hypersearch.html#parameters-evolution-for-deeppavlov-models)
was changed from `MODELS_PATH` to `MODEL_PATH`.
|
StandardConnectionGadget : Fix connection drawing glitch
This was fixed for connections drawn by PlugAdders and StandardNodules in but remained in StandardConnectionGadget. This should finally fix
We should probably change the connection rendering API to work with V2f, and then this bug would have been impossible. | @@ -359,7 +359,7 @@ bool StandardConnectionGadget::dragEnter( const DragDropEvent &event )
bool StandardConnectionGadget::dragMove( const DragDropEvent &event )
{
- updateDragEndPoint( event.line.p0, V3f( 0 ) );
+ updateDragEndPoint( V3f( event.line.p0.x, event.line.p0.y, 0.0f ), V3f( 0 ) );
return true;
}
|
Remove mutable default argument
Common pitfall in python - see link below for reasons why this can cause issues. | @@ -10,12 +10,13 @@ from ..util import read_param_file, ResultDict
from matplotlib import pyplot as plt
-def analyze(problem, X, Y, options={}):
- """High-Dimensional Model Representation (HDMR) using B-spline functions
- for variance-based global sensitivity analysis (GSA) with correlated
- and uncorrelated inputs. This function uses as input a N x d matrix
- of N different d-vectors of model inputs (factors/parameters) and a
- N x 1 vector of corresponding model outputs and returns to the user
+def analyze(problem, X, Y, options=None):
+ """High-Dimensional Model Representation (HDMR) using B-spline functions.
+
+ HDMR is used for variance-based global sensitivity analysis (GSA) with
+ correlated and uncorrelated inputs. This function uses as input a N x d
+ matrix of N different d-vectors of model inputs (factors/parameters) and
+ a N x 1 vector of corresponding model outputs and returns to the user
each factor's first, second, and third order sensitivity coefficient
(separated in total, structural and correlative contributions), an
estimate of their 95% confidence intervals (from bootstrap method)
|
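The pitfall referenced in the commit message above, in its smallest form: a mutable default is evaluated once at definition time and then shared across calls, which the `options=None` idiom avoids.

```python
def analyze_bad(x, options={}):
    options["calls"] = options.get("calls", 0) + 1
    return options

print(analyze_bad(1))   # {'calls': 1}
print(analyze_bad(2))   # {'calls': 2}  <- state leaked from the first call

def analyze_good(x, options=None):
    if options is None:
        options = {}
    options["calls"] = options.get("calls", 0) + 1
    return options

print(analyze_good(1))  # {'calls': 1}
print(analyze_good(2))  # {'calls': 1}
```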
Enable test_trainer_ps in dist_autograd_test.py
Summary: Pull Request resolved:
Test Plan: Imported from OSS | @@ -878,7 +878,6 @@ class DistAutogradTest(RpcAgentTestFixture):
#
# These four test ps-trainer groups run on completely separate autograd
# graphs, but they share the same set of underlying RpcAgents.
- @unittest.skip("Test is flaky, see https://github.com/pytorch/pytorch/issues/28874")
@dist_init
def test_trainer_ps(self):
local_grads = None
|
Correct sentence/statement composition
Incorrect statement spotted
*In the simplest example of broadcasting, the scalar ``b`` is stretched to become an array ~of with the same~ shape as ``a`` so the shapes are compatible for element-by-element multiplication.* | @@ -69,7 +69,7 @@ numpy on Windows 2000 with one million element arrays.
*Figure 1*
*In the simplest example of broadcasting, the scalar ``b`` is
- stretched to become an array of with the same shape as ``a`` so the shapes
+ stretched to become an array of same shape as ``a`` so the shapes
are compatible for element-by-element multiplication.*
|
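For reference, the corrected sentence above describes the classic scalar-broadcasting case; a minimal version of that example:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 2.0
# b is conceptually stretched to an array of the same shape as a,
# so the shapes are compatible for element-by-element multiplication.
print(a * b)   # [2. 4. 6.]
```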
Fixed manual-update bug
Changing to a non-manual qubit will now switch the "manual" off. | @@ -734,11 +734,23 @@ class GUI:
def qubit_change(self, change) -> None:
self.plot_output.clear_output()
new_qubit = change["new"]
- if new_qubit in gui_defaults.slow_qubits:
- self.manual_update_and_save_widgets["manual_update_checkbox"].value = True
self.unobserve_ranges()
self.unobserve_widgets()
self.unobserve_plot_refresh()
+ self.manual_update_and_save_widgets["manual_update_checkbox"].unobserve(
+ self.manual_update_checkbox, names="value"
+ )
+ if new_qubit in gui_defaults.slow_qubits:
+ self.manual_update_and_save_widgets["manual_update_checkbox"].value = True
+ self.manual_update_and_save_widgets["update_button"].disabled = False
+ self.manual_update_bool = True
+ else:
+ self.manual_update_and_save_widgets["manual_update_checkbox"].value = False
+ self.manual_update_and_save_widgets["update_button"].disabled = True
+ self.manual_update_bool = False
+ self.manual_update_and_save_widgets["manual_update_checkbox"].observe(
+ self.manual_update_checkbox, names="value"
+ )
self.set_qubit(new_qubit)
self.initialize_tab_widget()
self.observe_ranges()
@@ -941,9 +953,7 @@ class GUI:
def plot_refresh(self, change):
self.update_params()
- if not self.manual_update_and_save_widgets[
- "manual_update_checkbox"
- ].get_interact_value():
+ if not self.manual_update_bool:
self.current_plot_option_refresh(None)
def common_params_dropdown_link_refresh(self, change) -> None:
|
Update .travis.yml
Added pip install for future. Trying pyglow in python 3 | @@ -31,7 +31,7 @@ before_install:
# Useful for debugging any issues with conda
- conda info -a
# Replace dep1 dep2 ... with your dependencies
- - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION atlas numpy scipy matplotlib nose pandas statsmodels coverage netCDF4 ipython xarray python-future
+ - conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION atlas numpy scipy matplotlib nose pandas statsmodels coverage netCDF4 ipython xarray
# command to install dependencies
install:
@@ -39,6 +39,7 @@ install:
# - conda install --yes -c dan_blanchard python-coveralls nose-cov
- source activate test-environment
- pip install coveralls
+ - pip install future
- pip install pysatCDF >/dev/null
# install pyglow, space science models
- if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
@@ -80,15 +81,13 @@ install:
# travis_wait 50 ./pyglow_install.sh >/dev/null;
# cd ../..;
# fi
- - if [[ "$TRAVIS_PYTHON_VERSION" == "2.7" ]]; then
- echo 'cloning pyglow';
- travis_wait 50 git clone https://github.com/timduly4/pyglow.git;
- echo 'installing pyglow';
- cd ./pyglow;
- travis_wait 50 make -C src/pyglow/models source;
- travis_wait 50 python setup.py install --user;
- cd ../pysat;
- fi
+ - echo 'cloning pyglow';
+ - travis_wait 50 git clone https://github.com/timduly4/pyglow.git;
+ - echo 'installing pyglow';
+ - cd ./pyglow;
+ - travis_wait 50 make -C src/pyglow/models source;
+ - travis_wait 50 python setup.py install --user;
+ - cd ../pysat;
# install pysat
- "python setup.py install"
# command to run tests
|
Remove unnecessary variable
Remove unused `buff` from `print_slow_atomic_cache_start_info()` | @@ -540,11 +540,9 @@ const char *get_core_state_name(int core_state)
/* check if device is atomic and print information about potential slow start */
void print_slow_atomic_cache_start_info(const char *device_path)
{
- char buff[MAX_STR_LEN];
struct kcas_cache_check_device cmd_info;
int ret;
- get_dev_path(device_path, buff, sizeof(buff));
ret = _check_cache_device(device_path, &cmd_info);
if (!ret && cmd_info.format_atomic) {
|
Allow more customization for image plots.
More specifically, this commit adds the ability to plot a colorbar for an image
and allows to specify the color range. | @@ -78,10 +78,18 @@ def AddImage(fig,
origin='lower',
suppress_xticks=False,
suppress_yticks=False,
- aspect='auto'):
+ aspect='auto',
+ vmin=None,
+ vmax=None):
"""Convenience function to plot data as an image on the given axes."""
image = axes.imshow(
- data, cmap=cmap, origin=origin, aspect=aspect, interpolation='nearest')
+ data,
+ cmap=cmap,
+ origin=origin,
+ aspect=aspect,
+ interpolation='nearest',
+ vmin=vmin,
+ vmax=vmax)
if show_colorbar:
fig.colorbar(image)
if clim is not None:
@@ -359,7 +367,12 @@ def Image(name, figsize, image, setter=None, **kwargs):
assert image.ndim in (2, 3), '%s' % image.shape
fig = plt.Figure(figsize=figsize, dpi=100, facecolor='white')
axes = fig.add_subplot(1, 1, 1)
- AddImage(fig, axes, image, origin='upper', show_colorbar=False, **kwargs)
+ # Default show_colorbar to False if not explicitly specified.
+ show_colorbar = kwargs.pop('show_colorbar', False)
+ # Default origin to 'upper' if not explicitly specified.
+ origin = kwargs.pop('origin', 'upper')
+ AddImage(
+ fig, axes, image, origin=origin, show_colorbar=show_colorbar, **kwargs)
if setter:
setter(fig, axes)
return FigureToSummary(name, fig)
|
check input for parametric dqn
Summary:
Pull Request resolved:
as titled | @@ -81,6 +81,18 @@ class ParametricDQNTrainer(DQNTrainerMixin, RLTrainerMixin, ReAgentLightningModu
return optimizers
+ def _check_input(self, training_batch: rlt.ParametricDqnInput):
+ assert isinstance(training_batch, rlt.ParametricDqnInput)
+ assert training_batch.not_terminal.dim() == training_batch.reward.dim() == 2
+ assert (
+ training_batch.not_terminal.shape[1] == training_batch.reward.shape[1] == 1
+ )
+ assert (
+ training_batch.action.float_features.dim()
+ == training_batch.next_action.float_features.dim()
+ == 2
+ )
+
@torch.no_grad()
def get_detached_model_outputs(
self, state, action
@@ -91,6 +103,7 @@ class ParametricDQNTrainer(DQNTrainerMixin, RLTrainerMixin, ReAgentLightningModu
return q_values, q_values_target
def train_step_gen(self, training_batch: rlt.ParametricDqnInput, batch_idx: int):
+ self._check_input(training_batch)
reward = training_batch.reward
not_terminal = training_batch.not_terminal.float()
discount_tensor = torch.full_like(reward, self.gamma)
|
SpreadsheetUI : Fix addition of columns to promoted spreadsheets
We were adding the column to the driven plug rather than the driver plug, which automatically broke the connection because the children no longer matched. | @@ -1378,8 +1378,12 @@ def __addColumn( spreadsheet, plug ) :
# plugs are added to spreadsheets.
columnName = Gaffer.Metadata.value( plug, "spreadsheet:columnName" ) or plug.getName()
- columnIndex = spreadsheet["rows"].addColumn( plug, columnName )
- valuePlug = spreadsheet["rows"]["default"]["cells"][columnIndex]["value"]
+ # Rows plug may have been promoted, in which case we need to edit
+ # the source, which will automatically propagate the new column to
+ # the spreadsheet.
+ rowsPlug = spreadsheet["rows"].source()
+ columnIndex = rowsPlug.addColumn( plug, columnName )
+ valuePlug = rowsPlug["default"]["cells"][columnIndex]["value"]
Gaffer.MetadataAlgo.copy( plug, valuePlug, exclude = "spreadsheet:columnName layout:* deletable" )
return columnIndex
|
issue add fix for disconnect cleanup test
Simply listen to RouteMonitor's Context "disconnect" and forget
contexts according to RouteMonitor's rules, rather than duplicate them
(and screw it up). | @@ -297,22 +297,18 @@ class ContextService(mitogen.service.Service):
finally:
self._lock.release()
- def _on_stream_disconnect(self, stream):
+ def _on_context_disconnect(self, context):
"""
- Respond to Stream disconnection by deleting any record of contexts
- reached via that stream. This method runs in the Broker thread and must
- not to block.
+ Respond to Context disconnect event by deleting any record of the no
+ longer reachable context. This method runs in the Broker thread and
+ must not to block.
"""
# TODO: there is a race between creation of a context and disconnection
# of its related stream. An error reply should be sent to any message
# in _latches_by_key below.
self._lock.acquire()
try:
- routes = self.router.route_monitor.get_routes(stream)
- for context in list(self._key_by_context):
- if context.context_id in routes:
- LOG.info('Dropping %r due to disconnect of %r',
- context, stream)
+ LOG.info('Forgetting %r due to stream disconnect', context)
self._forget_context_unlocked(context)
finally:
self._lock.release()
@@ -379,13 +375,10 @@ class ContextService(mitogen.service.Service):
context = method(via=via, unidirectional=True, **spec['kwargs'])
if via and spec.get('enable_lru'):
self._update_lru(context, spec, via)
- else:
- # For directly connected contexts, listen to the associated
- # Stream's disconnect event and use it to invalidate dependent
- # Contexts.
- stream = self.router.stream_by_id(context.context_id)
- mitogen.core.listen(stream, 'disconnect',
- lambda: self._on_stream_disconnect(stream))
+
+ # Forget the context when its disconnect event fires.
+ mitogen.core.listen(context, 'disconnect',
+ lambda: self._on_context_disconnect(context))
self._send_module_forwards(context)
init_child_result = context.call(
|
Update LOCAL_TRAFFIC.yml (FortiOS 5.4)
Update LOCAL_TRAFFIC.yml (FortiOS 5.4) | #
messages:
- error: 'LOCAL_TRAFFIC'
- tag: "subtype=local"
+ tag: "local"
values:
- level: ([^ ]+)
- vd: ([^ ]+)
- srcip: ([^ ]+)
- srcport: ([^ ]+)
- srcintf: ([^ ]+)
- dstip: ([^ ]+)
- dstport: ([^ ]+)
- dstintf: ([^ ]+)
- sessiondId: ([^ ]+)
- protocolId: ([^ ]+)
- action: ([^ ]+)
+ level: (?<=level=)(.*)(?=\s+vd=)
+ vd: (?<=vd=)(.*)(?=\s+srcip=)
+ srcip: (?<=srcip=)(.*)(?=\s+srcport=)
+ srcport: (?<=srcport=)(.*)(?=\s+srcintf=)
+ srcintf: (?<=srcintf=)(.*)(?=\s+dstip=)
+ dstip: (?<=dstip=)(.*)(?=\s+dstport=)
+ dstport: (?<=dstport=)(.*)(?=\s+dstintf=)
+ dstintf: (?<=dstintf=)(.*)(?=\s+sessionid=)
+ sessiondId: (?<=sessionid=)(.*)(?=\s+proto=)
+ protocolId: (?<=proto=)(.*)(?=\s+action=)
+ action: (?<=action=)(.*)(?=\s+policyid=)
miscData: (.*)
- line: '{level} {vd} {srcip} {srcport} {srcintf} {dstip} {dstport} {dstintf} {sessiondId} {protocolId} {action} {miscData}'
+ line: 'level={level} vd={vd} srcip={srcip} srcport={srcport} srcintf={srcintf} dstip={dstip} dstport={dstport} dstintf={dstintf} sessionid={sessiondId} proto={protocolId} action={action} {miscData}'
model: NO_MODEL
mapping:
variables:
|
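The rewritten patterns lean on look-behind/look-ahead pairs to cut each value out of a `key=value` log line. A small Python sketch of the same extraction technique (the sample line is invented, not a real FortiOS log):

import re

line = 'level=notice vd=root srcip=10.0.0.1 srcport=1234 srcintf=port1'

# Capture everything between "srcip=" and the next " srcport=", exactly as the
# YAML pattern (?<=srcip=)(.*)(?=\s+srcport=) does.
match = re.search(r'(?<=srcip=)(.*)(?=\s+srcport=)', line)
if match:
    print(match.group(1))  # -> 10.0.0.1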
Fix incorrect site-packages path breaking app at runtime
The keyboard app crashes at launch right now because it has outdated assumptions about the site-packages folder and tries to list things from there. This changes the path accordingly and avoids a future crash if the path is missing or moved again.
import os
print('imported os')
+import sys
+print('imported sys')
from kivy import platform
if platform == 'android':
- print('contents of ./lib/python2.7/site-packages/ etc.')
- print(os.listdir('./lib'))
- print(os.listdir('./lib/python2.7'))
- print(os.listdir('./lib/python2.7/site-packages'))
+ site_dir_path = './_python_bundle/site-packages'
+ if not os.path.exists(site_dir_path):
+ print('warning: site-packages dir not found: ' + site_dir_path)
+ else:
+ print('contents of ' + site_dir_path)
+ print(os.listdir(site_dir_path))
print('this dir is', os.path.abspath(os.curdir))
print('contents of this dir', os.listdir('./'))
- with open('./lib/python2.7/site-packages/kivy/app.pyo', 'rb') as fileh:
+ if (os.path.exists(site_dir_path) and
+ os.path.exists(site_dir_path + '/kivy/app.pyo')
+ ):
+ with open(site_dir_path + '/kivy/app.pyo', 'rb') as fileh:
print('app.pyo size is', len(fileh.read()))
-import sys
print('pythonpath is', sys.path)
import kivy
|
Fixed Viewport normalized Area Light scale application
PURPOSE
Fix: the Area Light scaling in Viewport doesn't change the normalized intensity.
EFFECT OF CHANGE
fixed: incorrect intensity normalization when scale changed in active viewport rendering mode. | @@ -198,6 +198,11 @@ def sync_update(rpr_context: RPRContext, obj: bpy.types.Object, is_updated_geome
return True
if is_updated_transform:
+ if light.type == 'AREA' and light.rpr.intensity_normalization:
+ # the normalized are light should be recreated to apply scale correctly
+ rpr_context.remove_object(light_key)
+ sync(rpr_context, obj)
+ else:
# updating only light transform
rpr_light.set_transform(object.get_transform(obj))
return True
|
Fix test dependency on what date it's being run.
* Fix test dependency on what date it's being run.
test_dest was unfortunately dependent on when `FileViewer` got initialized. This fixes it and makes the test run on any date.
self.viewer.cleanup()
assert not self.viewer.is_extracted()
- @freeze_time('2017-02-08 02:01:00')
+ @freeze_time('2017-01-08 02:01:00')
def test_dest(self):
- assert self.viewer.dest == os.path.join(
+ viewer = FileViewer(make_file(1, get_file('webextension.xpi')))
+ assert viewer.dest == os.path.join(
settings.TMP_PATH, 'file_viewer',
- '0208', str(self.viewer.file.pk))
+ '0108', str(self.viewer.file.pk))
def test_isbinary(self):
binary = self.viewer._is_binary
|
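The fix works because the object under test is now built inside the frozen-clock window, so any date-derived path component is deterministic. A minimal freezegun sketch of that pattern (standalone toy test, not the addons-server suite):

from datetime import datetime

from freezegun import freeze_time

@freeze_time('2017-01-08 02:01:00')
def test_uses_pinned_date():
    # Everything that reads the clock inside this test sees 2017-01-08,
    # so a path fragment like '0108' no longer depends on the real date.
    assert datetime.now().strftime('%m%d') == '0108'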
changing "and" to "&&" and "or" to "||" (the operators in cpp)
One of the curses from switching between python and cpp, the gcc compiler is happy to compile the words into the proper operators, but windows compiler is not. | @@ -581,7 +581,7 @@ void Tree::set_levels(int_t l_x, int_t l_y, int_t l_z){
int_t min_l = std::min(l_x, l_y);
if(n_dim == 3) min_l = std::min(min_l, l_z);
max_level = min_l;
- if(l_x != l_y or (n_dim==3 and l_y!=l_z)) ++max_level;
+ if(l_x != l_y || (n_dim==3 && l_y!=l_z)) ++max_level;
nx = 2<<l_x;
ny = 2<<l_y;
@@ -591,7 +591,7 @@ void Tree::set_levels(int_t l_x, int_t l_y, int_t l_z){
ny_roots = 1<<(l_y-(max_level-1));
nz_roots = (n_dim==3)? 1<<(l_z-(max_level-1)) : 1;
- if (l_x==l_y and (n_dim==2 or l_y==l_z)){
+ if (l_x==l_y && (n_dim==2 || l_y==l_z)){
--nx_roots;
--ny_roots;
--nz_roots;
@@ -668,7 +668,7 @@ void Tree::initialize_roots(){
ps[7] = points[iz+1][iy+1][ix+1];
}
roots[iz][iy][ix] = new Cell(ps, n_dim, max_level, NULL);
- if (nx==ny and (n_dim==2 or ny==nz)){
+ if (nx==ny && (n_dim==2 || ny==nz)){
roots[iz][iy][ix]->level = 0;
}else{
roots[iz][iy][ix]->level = 1;
@@ -703,14 +703,14 @@ void Tree::insert_cell(double *new_center, int_t p_level){
int_t ix = 0;
int_t iy = 0;
int_t iz = 0;
- while (new_center[0]>=xs[ixs[ix+1]] and ix<nx_roots-1){
+ while (new_center[0]>=xs[ixs[ix+1]] && ix<nx_roots-1){
++ix;
}
- while (new_center[1]>=ys[iys[iy+1]] and iy<ny_roots-1){
+ while (new_center[1]>=ys[iys[iy+1]] && iy<ny_roots-1){
++iy;
}
if(n_dim == 3){
- while(new_center[2]>=zs[izs[iz+1]] and iz<nz_roots-1){
+ while(new_center[2]>=zs[izs[iz+1]] && iz<nz_roots-1){
++iz;
}
}
@@ -1276,14 +1276,14 @@ Cell* Tree::containing_cell(double x, double y, double z){
int_t ix = 0;
int_t iy = 0;
int_t iz = 0;
- while (x>=xs[ixs[ix+1]] and ix<nx_roots-1){
+ while (x>=xs[ixs[ix+1]] && ix<nx_roots-1){
++ix;
}
- while (y>=ys[iys[iy+1]] and iy<ny_roots-1){
+ while (y>=ys[iys[iy+1]] && iy<ny_roots-1){
++iy;
}
if(n_dim == 3){
- while(z>=zs[izs[iz+1]] and iz<nz_roots-1){
+ while(z>=zs[izs[iz+1]] && iz<nz_roots-1){
++iz;
}
}
|
Update 2)key_features.md
Minor grammar/typo fixes. | @@ -19,11 +19,11 @@ You wish to pass custom error messages to the user. To do so, raise a `gr.Error(
In the previous example, you may have noticed the `title=` and `description=` keyword arguments in the `Interface` constructor that helps users understand your app.
-There are three arguments in `Interface` constructor to specify where this content should go:
+There are three arguments in the `Interface` constructor to specify where this content should go:
-* `title`: which accepts text and can displays it at the very top of interface, and also becomes the page title.
+* `title`: which accepts text and can display it at the very top of interface, and also becomes the page title.
* `description`: which accepts text, markdown or HTML and places it right under the title.
-* `article`: which is also accepts text, markdown or HTML and places it below the interface.
+* `article`: which also accepts text, markdown or HTML and places it below the interface.

|
[tests] Fix test_writer
[global] Add newlines | @@ -22,7 +22,7 @@ def test_verilog_writer():
if subckt["name"] in available_cell_generator:
ws = WriteSpice(subckt["graph"],subckt["name"]+block_name_ext , subckt["ports"], subckts)
ws.print_subckt(SP_FP)
- WriteConst(subckt["graph"], pathlib.Path(__file__).parent, subckt["name"], subckt['ports'], pathlib.Path(__file__).parent,[])
+ WriteConst(subckt["graph"], pathlib.Path(__file__).parent, subckt["name"], subckt['ports'],[])
all_array=FindArray(subckt["graph"], pathlib.Path(__file__).parent, subckt["name"] )
WriteCap(subckt["graph"], pathlib.Path(__file__).parent, subckt["name"], unit_cap,all_array)
VERILOG_FP.close()
|
BUG: assign no longer allows inplace
xarray assign no longer allows inplace; use keyword argument assignment instead. Also fixed try/except loop that caught everything.
if kk in el_keys:
try:
good_dir.append(int(kk))
- except:
+ except ValueError:
logger.warning("unknown direction number [{:}]".format(kk))
# Calculate the geodetic latitude and longitude for each direction
@@ -270,9 +270,7 @@ def calc_measurement_loc(self):
# Assigning as data, to ensure that the number of coordinates match
# the number of data dimensions
- self.data = self.data.assign(lat_key=gdlat, lon_key=gdlon)
- self.data.rename({"lat_key": lat_key, "lon_key": lon_key},
- inplace=True)
+ self.data = self.data.assign({lat_key: gdlat, lon_key: gdlon})
# Add metadata for the new data values
bm_label = "Beam {:d} ".format(dd)
|
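Newer xarray drops the `inplace` keyword, so the separate rename step goes away and the computed names are used directly in the assignment. A rough sketch of the pattern with toy data (not the pysat instrument code):

import numpy as np
import xarray as xr

ds = xr.Dataset({'counts': ('time', np.arange(3))})

# The variable names are computed at runtime, so pass a dict to assign()
# instead of assigning fixed keys and renaming with inplace=True afterwards.
lat_key, lon_key = 'gdlat', 'gdlon'
gdlat = xr.DataArray(np.array([10.0, 11.0, 12.0]), dims='time')
gdlon = xr.DataArray(np.array([20.0, 21.0, 22.0]), dims='time')

ds = ds.assign({lat_key: gdlat, lon_key: gdlon})  # returns a new Dataset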
Fix _render_validation_statistics method
handle None values | @@ -118,15 +118,24 @@ class ValidationResultsPageRenderer(Renderer):
@classmethod
def _render_validation_statistics(cls, validation_results):
statistics = validation_results["statistics"]
+ statistics_dict = {
+ "evaluated_expectations": "Evaluated Expectations",
+ "successful_expectations": "Successful Expectations",
+ "unsuccessful_expectations": "Unsuccessful Expectations",
+ "success_percent": "Success Percent"
+ }
+ table_rows = []
+ for key, value in statistics_dict.items():
+ if statistics.get(key) is not None:
+ if key == "success_percent":
+ table_rows.append([value, "{0:.2f}%".format(statistics[key])])
+ else:
+ table_rows.append([value, statistics[key]])
+
return RenderedComponentContent(**{
"content_block_type": "table",
"header": "Statistics",
- "table": [
- ["Evaluated Expectations", statistics["evaluated_expectations"]],
- ["Successful Expectations", statistics["successful_expectations"]],
- ["Unsuccessful Expectations", statistics["unsuccessful_expectations"]],
- ["Success Percent", "{0:.2f}%".format(statistics["success_percent"])],
- ],
+ "table": table_rows,
"styling": {
"classes": ["col-6", "table-responsive"],
"styles": {
|
Removed lowercasing when checking slug validity
This was copied from the function above but doesn't apply here. | @@ -57,7 +57,7 @@ class CustomDataFieldsForm(forms.Form):
@classmethod
def verify_no_profiles_missing_fields(cls, data_fields, profiles):
errors = set()
- slugs = {field['slug'].lower()
+ slugs = {field['slug']
for field in data_fields if 'slug' in field}
for profile in profiles:
for field in json.loads(profile.get('fields', "{}")).keys():
|
Skip TestLoginLogout if it is running on Appveyor
APPVEYOR environment variable may be 'True' or 'true' | @@ -3707,7 +3707,7 @@ class TestLoginLogout(DefaultSiteTestCase):
"""Test for login and logout methods."""
- @unittest.skipIf(os.environ.get('APPVEYOR', 'false') == 'true',
+ @unittest.skipIf(os.environ.get('APPVEYOR', 'false') in ('true', 'True'),
'No user defined for APPVEYOR tests')
def test_login_logout(self):
"""Validate login and logout methods by toggling the state."""
|
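Matching an environment flag against a hand-picked set of capitalisations is brittle; a small sketch of a more tolerant check (helper name and test body are illustrative only):

import os
import unittest

def env_flag(name):
    # Treat '1', 'true', 'True', 'TRUE', 'yes', ... as enabled.
    return os.environ.get(name, '').strip().lower() in ('1', 'true', 'yes')

@unittest.skipIf(env_flag('APPVEYOR'), 'No user defined for APPVEYOR tests')
class TestLoginLogout(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)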
Minor fix to the evaluation formula of PILLOW_VERSION
Enable fillcolor option for affine transformation for Pillow >= 5.0.0 as described | @@ -766,7 +766,7 @@ def affine(img, angle, translate, scale, shear, resample=0, fillcolor=None):
output_size = img.size
center = (img.size[0] * 0.5 + 0.5, img.size[1] * 0.5 + 0.5)
matrix = _get_inverse_affine_matrix(center, angle, translate, scale, shear)
- kwargs = {"fillcolor": fillcolor} if PILLOW_VERSION[0] == '5' else {}
+ kwargs = {"fillcolor": fillcolor} if PILLOW_VERSION[0] >= '5' else {}
return img.transform(output_size, Image.AFFINE, matrix, resample, **kwargs)
|
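Comparing the first character of the version string (`PILLOW_VERSION[0] >= '5'`) covers the 5.x–9.x range but stays lexicographic. A hedged sketch of a numeric check instead (assumes a Pillow recent enough to expose `PIL.__version__`; not what torchvision actually ships):

import PIL

# Parse e.g. "7.2.0" into (7, 2, 0) so the comparison is numeric rather than
# character-by-character.
pillow_version = tuple(int(part) for part in PIL.__version__.split('.')[:3])

kwargs = {}
if pillow_version >= (5, 0, 0):
    kwargs['fillcolor'] = (0, 0, 0)  # fillcolor is honoured from Pillow 5.0.0 on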
Add Taiwan Government Open Data
Taiwan Government Open Data | @@ -465,6 +465,7 @@ API | Description | Auth | HTTPS | Link |
| Open Government, France | French Government Open Data | `apiKey` | Yes | [Go!](https://www.data.gouv.fr/) |
| Open Government, India | Indian Government Open Data | `apiKey` | Yes | [Go!](https://data.gov.in/) |
| Open Government, New Zealand | New Zealand Government Open Data | No | Yes | [Go!](https://www.data.govt.nz/) |
+| Open Government, Taiwan | Taiwan Government Open Data | No | Yes | [Go!](https://data.gov.tw/) |
| Open Government, USA | United States Government Open Data | No | Yes | [Go!](https://www.data.gov/) |
| Outpan | A Database of Everything | `apiKey` | Yes | [Go!](https://outpan.mixnode.com/developers) |
| Prague Opendata | Prague City Open Data | No | No | [Go!](http://opendata.praha.eu/en) |
|
Fixed a simple typo in 'solid' decorator
Fixed a typo in the solid decorator: changed "Optiona" to "Optional".
name (str): Name of solid. Must be unique within any :py:class:`PipelineDefinition`
using the solid.
description (str): Human-readable description of this solid.
- input_defs (Optiona[List[InputDefinition]]):
+ input_defs (Optional[List[InputDefinition]]):
List of input definitions. Inferred from typehints if not provided.
output_defs (Optional[List[OutputDefinition]]):
List of output definitions. Inferred from typehints if not provided.
|
Update Robot program to use the new DataWriter
The new DataWriter works. | @@ -451,10 +451,6 @@ void UART1_Handler(void const * argument)
}
else if(cmdMessage.type == cmdWRITE){
Dynamixel_SetGoalPosition(cmdMessage.motorHandle, cmdMessage.position);
-
- do{
- xTaskNotifyWait(0, NOTIFIED_FROM_ISR, ¬ification, MAX_DELAY_TIME);
- }while((notification & NOTIFIED_FROM_ISR) != NOTIFIED_FROM_ISR);
}
}
/* USER CODE END UART1_Handler */
@@ -495,10 +491,6 @@ void UART2_Handler(void const * argument)
}
else if(cmdMessage.type == cmdWRITE) {
Dynamixel_SetGoalPosition(cmdMessage.motorHandle, cmdMessage.position);
-
- do{
- xTaskNotifyWait(0, NOTIFIED_FROM_ISR, ¬ification, MAX_DELAY_TIME);
- }while((notification & NOTIFIED_FROM_ISR) != NOTIFIED_FROM_ISR);
}
}
/* USER CODE END UART2_Handler */
@@ -539,10 +531,6 @@ void UART3_Handler(void const * argument)
}
else if(cmdMessage.type == cmdWRITE) {
Dynamixel_SetGoalPosition(cmdMessage.motorHandle, cmdMessage.position);
-
- do{
- xTaskNotifyWait(0, NOTIFIED_FROM_ISR, ¬ification, MAX_DELAY_TIME);
- }while((notification & NOTIFIED_FROM_ISR) != NOTIFIED_FROM_ISR);
}
}
/* USER CODE END UART3_Handler */
@@ -583,10 +571,6 @@ void UART4_Handler(void const * argument)
}
else if(cmdMessage.type == cmdWRITE) {
Dynamixel_SetGoalPosition(cmdMessage.motorHandle, cmdMessage.position);
-
- do{
- xTaskNotifyWait(0, NOTIFIED_FROM_ISR, ¬ification, MAX_DELAY_TIME);
- }while((notification & NOTIFIED_FROM_ISR) != NOTIFIED_FROM_ISR);
}
}
/* USER CODE END UART4_Handler */
@@ -627,10 +611,6 @@ void UART6_Handler(void const * argument)
}
else if(cmdMessage.type == cmdWRITE) {
Dynamixel_SetGoalPosition(cmdMessage.motorHandle, cmdMessage.position);
-
- do{
- xTaskNotifyWait(0, NOTIFIED_FROM_ISR, ¬ification, MAX_DELAY_TIME);
- }while((notification & NOTIFIED_FROM_ISR) != NOTIFIED_FROM_ISR);
}
}
/* USER CODE END UART6_Handler */
|
[macOS] Only cleanup directory after upload
Missed it in the previous change enabling the bazel log upload; we should no longer clean the directory up front.
cd "${RAY_DIR}"
-# Cleanup old entries, this is needed in macOS shared environment.
-if [[ "${OSTYPE}" = darwin* ]]; then
- if [[ -n "${BUILDKITE-}" ]]; then
- echo "Cleanup old entries in macOS"
+cleanup() {
+ # Cleanup the directory because macOS file system is shared between builds.
rm -rf /tmp/bazel_event_logs
- fi
-fi
+}
+trap cleanup EXIT
+
mkdir -p /tmp/bazel_event_logs
./ci/build/get_build_info.py > /tmp/bazel_event_logs/metadata.json
|
zap: Adding more PATH variables
Currently the mail spool is receiving messages about not being able to
find `grep`, `sed` and `awk`, due to the PATH overwrite. | @@ -4,5 +4,5 @@ files:
owner: ec2-user
group: ec2-user
content: |
- PATH="/opt/python/run/venv/bin:$PATH"
+ PATH="/opt/python/run/venv/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin"
*/5 * * * * . /opt/python/current/env; cd /opt/python/current/app; ./manage.py worker_health_check > /dev/null 2>&1
|
Bug in stop command when run indexes specified
Index was being applied after filtering by running status. | @@ -1121,6 +1121,7 @@ def _stop_runs(args, ctx):
preview = cmd_impl_support.format_warn("You are about to stop the following runs:")
confirm = "Stop {count} run(s)?"
no_runs_help = "Nothing to stop."
+ if not args.runs:
args.status_running = True
def stop_f(selected):
|
Adding explicit bg definition to iframe in HTML5Renderer.
Default behavior doesn't work reliably across browsers (transparent).
Resolves | <style lang="scss" scoped>
+ @import '~kolibri.styles.definitions';
+
.btn {
position: absolute;
top: 8px;
.iframe {
width: 100%;
height: 100%;
+ background-color: $core-bg-canvas;
}
</style>
|
Fix dagit backfill page when no daemon settings are set
Summary: None doesn't translate to False, graphql is expecting a bool. We should beef up test coverage once we unbreak master.
Test Plan: Load dagit with blank dagster.yaml, launch a backfill
Reviewers: catherinewu, prha | @@ -128,4 +128,4 @@ def resolve_sensorDaemonInterval(self, _graphene_info):
def resolve_daemonBackfillEnabled(self, _graphene_info):
backfill_settings = self._instance.get_settings("backfill") or {}
- return backfill_settings.get("daemon_enabled")
+ return backfill_settings.get("daemon_enabled", False)
|
Update Dockerfile to add requests module
Otherwise tests won't run. | @@ -13,7 +13,7 @@ RUN apt-get install -y locales && \
ENV LC_ALL C.UTF-8
# do we need all of these, maybe remove some of them?
-RUN pip install imageio numpy scipy matplotlib pandas sympy nose decorator tqdm pillow pytest
+RUN pip install imageio numpy scipy matplotlib pandas sympy nose decorator tqdm pillow pytest requests
# install scikit-image after the other deps, it doesn't cause errors this way.
RUN pip install scikit-image sklearn
|
add plotnine package
plotnine is an implementation of a grammar of graphics in Python; it is based on ggplot2
pip install folium && \
pip install scikit-plot && \
pip install dipy && \
+ pip install plotnine && \
##### ^^^^ Add new contributions above here
# clean up pip cache
rm -rf /root/.cache/pip/* && \
|
Clone/Duplicate.
Clone Reference for copy by reference, Duplicate for copy by value.
for i in range(1, 10):
gui.Bind(wx.EVT_MENU, self.menu_duplicate_element_op(node, i),
duplicate_menu_eop.Append(wx.ID_ANY, _("Make %d copies.") % i, "", wx.ITEM_NORMAL))
- menu.AppendSubMenu(duplicate_menu_eop, _("Soft Copy"))
+ menu.AppendSubMenu(duplicate_menu_eop, _("Clone Reference"))
duplicate_menu_eop = wx.Menu()
for i in range(1, 10):
gui.Bind(wx.EVT_MENU, self.menu_duplicate(node, i),
duplicate_menu_eop.Append(wx.ID_ANY, _("Make %d copies.") % i, "", wx.ITEM_NORMAL))
- menu.AppendSubMenu(duplicate_menu_eop, _("Hard Copy"))
+ menu.AppendSubMenu(duplicate_menu_eop, _("Duplicate"))
if t in (NODE_OPERATION, NODE_ELEMENTS_BRANCH, NODE_OPERATION_BRANCH) and len(node) > 1:
gui.Bind(wx.EVT_MENU, self.menu_reverse_order(node),
menu.Append(wx.ID_ANY, _("Reverse Layer Order"), "", wx.ITEM_NORMAL))
|
Forward entity information on .at()
TN: | @@ -604,11 +604,30 @@ def collection_get(self, coll_expr, index_expr, or_null=True):
# 0-based indexes, so there is no need to fiddle indexes here.
index_expr = construct(index_expr, long_type)
- coll_expr = construct(coll_expr, lambda t: t.is_collection)
+ coll_expr = construct(coll_expr)
+ as_entity = coll_expr.type.is_entity_type
+ if as_entity:
+ saved_coll_expr, coll_expr, entity_info = (
+ coll_expr.destructure_entity()
+ )
+
+ check_source_language(
+ coll_expr.type.is_collection,
+ '.at prefix must be a collection: got {} instead'.format(
+ coll_expr.type.name.camel
+ )
+ )
+
or_null = construct(or_null)
- return CallExpr('Get_Result', 'Get', coll_expr.type.element_type,
- [coll_expr, index_expr, or_null],
- abstract_expr=self)
+ result = CallExpr('Get_Result', 'Get', coll_expr.type.element_type,
+ [coll_expr, index_expr, or_null])
+
+ if as_entity:
+ result = SequenceExpr(saved_coll_expr,
+ make_as_entity(result, entity_info))
+
+ result.abstract_expr = self
+ return result
@auto_attr
|
[PY3] Fix haproxyconn unit tests that are failing
``assertItemsEqual`` does not exist in Python 3. It was removed. Since
``assertItemsEqual(a, b)`` is the same thing as asserting that
``sorted(a) == sorted(b)``, we can just update the test in this manner
to work in both Python 2 and 3. | @@ -161,7 +161,10 @@ class HaproxyConnTestCase(TestCase, LoaderModuleMockMixin):
'''
Test listing all frontends
'''
- self.assertItemsEqual(haproxyconn.list_frontends(), ['frontend-alpha', 'frontend-beta', 'frontend-gamma'])
+ self.assertEqual(
+ sorted(haproxyconn.list_frontends()),
+ sorted(['frontend-alpha', 'frontend-beta', 'frontend-gamma'])
+ )
# 'show_backends' function tests: 1
@@ -175,7 +178,10 @@ class HaproxyConnTestCase(TestCase, LoaderModuleMockMixin):
'''
Test listing of all backends
'''
- self.assertItemsEqual(haproxyconn.list_backends(), ['backend-alpha', 'backend-beta', 'backend-gamma'])
+ self.assertEqual(
+ sorted(haproxyconn.list_backends()),
+ sorted(['backend-alpha', 'backend-beta', 'backend-gamma'])
+ )
def test_get_backend(self):
'''
|
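The order-insensitive comparison can be spelled either way; a tiny standalone example (toy data, not the haproxyconn suite):

import unittest

class OrderInsensitiveExample(unittest.TestCase):
    def test_same_elements_any_order(self):
        got = ['frontend-beta', 'frontend-alpha']
        expected = ['frontend-alpha', 'frontend-beta']
        # Portable to both Python 2 and 3, as in the patch.
        self.assertEqual(sorted(got), sorted(expected))
        # Python 3 also provides assertCountEqual for the same check
        # (it was called assertItemsEqual in Python 2.7).
        self.assertCountEqual(got, expected)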
Add files via upload
Open report automatically | @@ -7,6 +7,7 @@ from six.moves.configparser import RawConfigParser
from time import process_time
import tarfile
import shutil
+import webbrowser
from report import *
import PySimpleGUI as sg
@@ -197,6 +198,9 @@ while True:
log.close()
report(reportfolderbase, time, extracttype, pathto)
- locationmessage = ('Report name: '+reportfolderbase)
+ locationmessage = ('Report name: '+reportfolderbase+'index.html')
sg.Popup('Processing completed', locationmessage)
+
+ basep = os.getcwd()
+ webbrowser.open_new_tab('file://'+basep+base+'index.html')
sys.exit()
|
[common] remove duplicated keyword argument
fix
fix | @@ -1265,7 +1265,7 @@ def script_main(download, download_playlist, **kwargs):
download_main(
download, download_playlist,
URLs, args.playlist,
- stream_id=stream_id, output_dir=args.output_dir, merge=not args.no_merge,
+ output_dir=args.output_dir, merge=not args.no_merge,
info_only=info_only, json_output=json_output, caption=caption,
**extra
)
|
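The underlying failure is a keyword supplied both explicitly and inside the unpacked `**extra` dict. A tiny illustration with a simplified, made-up signature:

def download_main(download, playlist, urls, **kwargs):
    return kwargs

extra = {'stream_id': 'hd'}

try:
    # TypeError: download_main() got multiple values for keyword argument 'stream_id'
    download_main(None, None, [], stream_id='hd', **extra)
except TypeError as exc:
    print(exc)

# Passing it only through **extra, as the patch does, is fine.
print(download_main(None, None, [], **extra))  # {'stream_id': 'hd'}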
Fix test under Windows.
The "echo" command is not necessarily available when running on Windows. | import gc
+import sys
import warnings
import weakref
@@ -13,7 +14,7 @@ def test_Command_triggers_no_warnings():
with pytest.warns(None) as record:
# This is essentially how RsyncPublisher runs rsync.
- with Command(["echo"]) as client:
+ with Command([sys.executable, "-c", "print()"]) as client:
for _ in client:
pass
|
instruments/energy_measurement: Add a `keep_raw` parameter
Add a `keep_raw` parameter to control whether raw files should be
deleted during teardown. | @@ -144,7 +144,13 @@ class DAQBackend(EnergyInstrumentBackend):
connector on the DAQ (varies between DAQ models). The default
assumes DAQ 6363 and similar with AI channels on connectors
0-7 and 16-23.
- """)
+ """),
+ Parameter('keep_raw', kind=bool, default=False,
+ description="""
+ If set to ``True``, this will prevent the raw files obtained
+ from the device before processing from being deleted
+ (this is maily used for debugging).
+ """),
]
instrument = DaqInstrument
@@ -189,6 +195,12 @@ class EnergyProbeBackend(EnergyInstrumentBackend):
description="""
Path to /dev entry for the energy probe (it should be /dev/ttyACMx)
"""),
+ Parameter('keep_raw', kind=bool, default=False,
+ description="""
+ If set to ``True``, this will prevent the raw files obtained
+ from the device before processing from being deleted
+ (this is maily used for debugging).
+ """),
]
instrument = EnergyProbeInstrument
@@ -224,6 +236,12 @@ class ArmEnergyProbeBackend(EnergyInstrumentBackend):
description="""
Path to config file of the AEP
"""),
+ Parameter('keep_raw', kind=bool, default=False,
+ description="""
+ If set to ``True``, this will prevent the raw files obtained
+ from the device before processing from being deleted
+ (this is maily used for debugging).
+ """),
]
instrument = ArmEnergyProbeInstrument
@@ -282,6 +300,12 @@ class AcmeCapeBackend(EnergyInstrumentBackend):
description="""
Size of the capture buffer (in KB).
"""),
+ Parameter('keep_raw', kind=bool, default=False,
+ description="""
+ If set to ``True``, this will prevent the raw files obtained
+ from the device before processing from being deleted
+ (this is maily used for debugging).
+ """),
]
# pylint: disable=arguments-differ
@@ -307,7 +331,7 @@ class AcmeCapeBackend(EnergyInstrumentBackend):
for iio_device in iio_devices:
ret[iio_device] = AcmeCapeInstrument(
target, iio_capture=iio_capture, host=host,
- iio_device=iio_device, buffer_size=buffer_size)
+ iio_device=iio_device, buffer_size=buffer_size, keep_raw=keep_raw)
return ret
|
[varLib.varStore] Add NO_VARIATION_INDEX
Part of | @@ -7,6 +7,10 @@ from functools import partial
from collections import defaultdict
+NO_VARIATION_INDEX = 0xFFFFFFFF
+ot.VarStore.NO_VARIATION_INDEX = NO_VARIATION_INDEX
+
+
def _getLocationKey(loc):
return tuple(sorted(loc.items(), key=lambda kv: kv[0]))
|
Update super-admin-new-landing-page-in-facility-plugin.feature
Formatting and adding 'Examples' | @@ -14,6 +14,7 @@ Feature: New landing page for super admins in Facility plugin
And I see a list of facilities on the device
And I see how many classes are in each facility
And I don't see the *Facility* plugin subtabs
+
Scenario: View facility
Given there is more than one facility on the device
And I am on the *Facility > Facilities* page
@@ -42,5 +43,5 @@ Feature: New landing page for super admins in Facility plugin
Examples:
-| ??? | ??? |
-| ?????!?! |
+| facility |
+| MySchool |
|
Fix docstring failures
docstring copied from `AgentNotReady` class | @@ -41,6 +41,7 @@ class InvalidModelError(RasaException):
"""
def __init__(self, message: Text) -> None:
+ """Initialize message attribute."""
self.message = message
super(InvalidModelError, self).__init__(message)
@@ -56,6 +57,7 @@ class UnsupportedModelError(RasaException):
"""
def __init__(self, message: Text) -> None:
+ """Initialize message attribute."""
self.message = message
super(UnsupportedModelError, self).__init__(message)
|
Modify method signature
Internal object creation makes unit testing really hard,
especially when you want to simulate side effects.
It is better to pass these objects as parameters to the method,
where they can then be mocked easily.
# Get workflow details
try:
- self.fetch_workflow_details()
+ self.fetch_workflow_details(nf_core.list.Workflows())
except LookupError:
sys.exit(1)
@@ -77,9 +77,12 @@ class DownloadWorkflow():
self.pull_singularity_image(container)
- def fetch_workflow_details(self):
- """ Fetch details of nf-core workflow to download """
- wfs = nf_core.list.Workflows()
+ def fetch_workflow_details(self, wfs):
+ """ Fetch details of nf-core workflow to download
+
+ params:
+ - wfs A nf_core.list.Workflows object
+ """
wfs.get_remote_workflows()
# Get workflow download details
|
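The point of the new signature is that the collaborator is injected, so tests can hand in a fake instead of letting the method build a real `nf_core.list.Workflows`. A hedged sketch of the pattern using a toy stand-in class (not the real nf-core test suite):

from unittest import mock

class Downloader:
    # Toy stand-in that mirrors the injected-collaborator shape.
    def fetch_workflow_details(self, wfs):
        wfs.get_remote_workflows()
        return wfs.remote_workflows

def test_fetch_workflow_details_with_fake_collaborator():
    fake_wfs = mock.Mock()
    fake_wfs.remote_workflows = ['nf-core/rnaseq']  # canned data, no network call
    result = Downloader().fetch_workflow_details(fake_wfs)
    fake_wfs.get_remote_workflows.assert_called_once_with()
    assert result == ['nf-core/rnaseq']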
Update quickstart.rst
Fix typo on line 11. | @@ -8,7 +8,7 @@ Get going fast! Intended for folks familiar with setting up DevOps environments.
Setting up VirtualBox
~~~~~~~~~~~~~~~~~~~~~~~
- Type "Download VirtualBox for Windows" in the search bar.
-- Click on the websight by Oracle.
+- Click on the website by Oracle.
.. image:: development-guide/images/A1.png
:width: 600
- Download VirtualBox for "Windows hosts".
|
swarming_bot: make task_runner more resilient to partial manifest
This was broken in | @@ -197,7 +197,7 @@ class TaskDetails(object):
}
self.env_prefixes = {
k.encode('utf-8'): [path.encode('utf-8') for path in v]
- for k, v in data['env_prefixes'].iteritems()
+ for k, v in (data.get('env_prefixes') or {}).iteritems()
}
self.grace_period = data['grace_period']
self.hard_timeout = data['hard_timeout']
|
Remove debugging code
Summary: As titled | @@ -50,9 +50,7 @@ poll(
folly::Expected<folly::Unit, Error> proxy(
void *frontend, void *backend, void *capture) {
- LOG(INFO) << "HAHAHA";
auto rc = zmq_proxy(frontend, backend, capture);
- LOG(INFO) << "HOHOHO";
if (rc == 0) {
return folly::Unit();
}
|
[DOCS] Update docs guide: how_to_write_a_how_to_guide
Update how_to_write_a_how_to_guide.rst | @@ -15,8 +15,13 @@ This guide will help you {do something.} {That something is important or useful,
Steps
-----
-1. First, do something bashy, like so: ``something --foo bar``
-2. Next, do something with yml:
+#. First, do something bashy.
+
+ .. code-block:: yaml
+
+ something --foo bar
+
+#. Next, do something with yml.
.. code-block:: yaml
@@ -29,7 +34,7 @@ Steps
class_name: ExpectationSuitePageRenderer
-3. Next, try a python snippet or two:
+#. Next, try a python snippet or two.
.. code-block:: python
|