message: string, length 13–484
diff: string, length 38–4.63k
Clarified mutual exclusivity of filters in /_replicate. The filters are the fields `doc_ids`, `filter`, and `selector`.
:<json object create_target_params: An object that contains parameters to be used when creating the target database. Can include the standard ``q`` and ``n`` parameters. - :<json array doc_ids: Array of document IDs to be synchronized + :<json array doc_ids: Array of document IDs to be synchronized. + ``doc_ids``, ``filter``, and ``selector`` mutually exclusive. :<json string filter: The name of a :ref:`filter function <filterfun>`. + ``doc_ids``, ``filter``, and ``selector`` mutually exclusive. :<json json selector: A :ref:`selector <find/selectors>` to filter - documents for synchronization. + documents for synchronization. Has the same behavior as the + :ref:`selector objects <selectorobj>` in replication documents. + ``doc_ids``, ``filter``, and ``selector`` mutually exclusive. :<json string source_proxy: Address of a proxy server through which replication from the source should occur (protocol can be "http" or "socks5")
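A minimal sketch of the documented constraint, assuming a local CouchDB instance at 127.0.0.1:5984 (the URL, database names, and selector below are made up for illustration): a `/_replicate` request should use at most one of `doc_ids`, `filter`, or `selector`.

```python
import requests

replication = {
    "source": "http://127.0.0.1:5984/source_db",
    "target": "http://127.0.0.1:5984/target_db",
    # Pick at most one of the three mutually exclusive filter fields:
    "selector": {"type": {"$eq": "order"}},
    # "doc_ids": ["order-1", "order-2"],
    # "filter": "ddoc/by_type",
}
response = requests.post("http://127.0.0.1:5984/_replicate", json=replication)
print(response.status_code)
```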
Persist digests before emitting them in `fs_util`. Otherwise they will not have been written to the LMDB store (although they will have been uploaded to any configured remote stores via `ensure_uploaded_to_remote`).
@@ -586,7 +586,10 @@ async fn execute(top_match: &clap::ArgMatches) -> Result<(), ExitError> { ) .await?; - let report = ensure_uploaded_to_remote(&store, store_has_remote, snapshot.digest).await?; + let ((), report) = futures::try_join!( + store.ensure_directory_digest_persisted(snapshot.clone().into()), + ensure_uploaded_to_remote(&store, store_has_remote, snapshot.digest), + )?; print_upload_summary(args.value_of("output-mode"), &report); Ok(())
Fix the typo: remove the unnecessary duplicated 'if'.
@@ -258,7 +258,7 @@ reflected in the view indexes. .. note:: View index rebuilds occur when one view from the same the view group (i.e. all the views defined within a single a design document) has been - determined as needing a rebuild. For example, if if you have a design + determined as needing a rebuild. For example, if you have a design document with different views, and you update the database, all three view indexes within the design document will be updated.
Update gef.py. Quick fix to restore compatibility with the Fedora GDB package, which adds `Fedora ` at the start of the `gdb.VERSION` string. This fix is quick'n'dirty and distro specific; if other distros are found doing the same thing, a better approach would be to use regular expressions instead.
@@ -215,7 +215,7 @@ ___default_aliases___ = { } GDB_MIN_VERSION = (7, 7) -GDB_VERSION_MAJOR, GDB_VERSION_MINOR = [int(_) for _ in gdb.VERSION.split(".")[:2]] +GDB_VERSION_MAJOR, GDB_VERSION_MINOR = [int(_) for _ in gdb.VERSION.replace("Fedora ","").split(".")[:2]] GDB_VERSION = (GDB_VERSION_MAJOR, GDB_VERSION_MINOR) current_elf = None
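The commit message above suggests regular expressions as the better long-term approach. The helper below is a hypothetical sketch of that idea (it is not part of gef, and the `Fedora 8.1-30.fc28` sample string is an assumed format), showing how a regex tolerates arbitrary distro prefixes:

```python
import re

def parse_gdb_version(version_string):
    """Extract (major, minor) from a gdb.VERSION string, ignoring any distro prefix."""
    match = re.search(r"(\d+)\.(\d+)", version_string)
    if match is None:
        raise ValueError("unrecognised GDB version string: {!r}".format(version_string))
    return int(match.group(1)), int(match.group(2))

assert parse_gdb_version("7.12.1") == (7, 12)
assert parse_gdb_version("Fedora 8.1-30.fc28") == (8, 1)
```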
Fix alert signup HTML. You can't have a FORM inside a P, so browsers insert a bunch of extra open/close P tags to try to make it valid, which makes the vertical spacing look odd.
{% load crispy_forms_tags %} {% if not signed_up_for_alert %} -<p> <form method="post" action="{{ alert_preview_action }}"> {% csrf_token %} + <p> {% if alert_type == 'analyse' %} <input type="hidden" name="url" value=""> <input type="hidden" name="name" value=""> We offer a service which emails you about unusual or interesting prescribing at this {{entity_type}} once a month. Subscribe by entering your email address below. {% endif %} You can <input type="submit" class="btn-link" value="preview"></input> the email first. + </p> </form> <form method="post" class="form form-inline"> {% csrf_token %} + <p> {{ form.non_field_errors }} {{ form | crispy }} <input class="btn btn-primary" type="submit" value="Subscribe"> - - </form> </p> +</form> {% endif %}
Docs: Fixed IndexTemplate example. Added the 'template_name' argument to the as_template method call.
@@ -452,7 +452,7 @@ Potential workflow for a set of time based indices governed by a single template return super().save(**kwargs) # once, as part of application setup, during deploy/migrations: - logs = Log._index.as_template() + logs = Log._index.as_template('logs') logs.save() # to perform search across all logs:
django-reversion compatibility documentation Fixes
@@ -20,6 +20,7 @@ Django REST Framework |lt| 3.7.0 |check| Django Allauth |check| Django Simple Captcha |check| Django OAuth Toolkit |check| +Django Reversion |check| ======================= ============= ============ ============ ============== .. |check| unicode:: U+2713 @@ -249,3 +250,25 @@ validator classes to function correctly. 'OAUTH2_VALIDATOR_CLASS': 'example.validators.AxesOAuth2Validator', 'SCOPES': {'read': 'Read scope', 'write': 'Write scope'}, } + + +Integration with Django Reversion +--------------------------------- + +Django Reversion is not designed to work with Axes, +but some users have reported that they have configured +a workaround with a monkeypatch function that functions correctly. + +``example/monkeypatch.py``:: + + from django.urls import resolve + + from reversion import views + + def _request_creates_revision(request): + if resolve(request.path_info).url_name.endswith("login"): + return False + + return request.method not in ["OPTIONS", "GET", "HEAD"] + + views._request_creates_revision = _request_creates_revision
CompoundEditor : Persist Editors following a numeric bookmark in layouts. User-facing changes: when saving a layout, any editors that are currently following a numeric bookmark will continue to follow that bookmark when the layout is restored.
@@ -290,6 +290,15 @@ class CompoundEditor( GafferUI.Editor ) : "driver" : self.__pathToEditor(driver), "driverMode" : mode } + else : + nodeSet = n.getNodeSet() + # NumericBookmarkSet doesn't support repr as we don't want to + # couple the layout-centric serialisation that assumes 'scriptNode' + # is a global into Sets, so we keep it all contained here. + if isinstance( nodeSet, Gaffer.NumericBookmarkSet ) : + state[ self.__pathToEditor(n) ] = { + "nodeSet" : "Gaffer.NumericBookmarkSet( scriptNode, %d )" % nodeSet.getBookmark() + } return state @@ -300,18 +309,29 @@ class CompoundEditor( GafferUI.Editor ) : for path, state in editorState.items() : - if "driver" in state : editor = self.__editorAtPath( path ) - driver = self.__editorAtPath( state["driver"] ) - # The mode may not be registered any more, so make sure - # we fail gracefully here + try : + + if "driver" in state : + + driver = self.__editorAtPath( state["driver"] ) editor.setNodeSetDriver( driver, state["driverMode"] ) + + elif "nodeSet" in state : + + g = { + "scriptNode" : self.scriptNode(), + "Gaffer" : Gaffer + } + nodeSet = eval( state["nodeSet"], g ) + editor.setNodeSet( nodeSet ) + except Exception as e : sys.stderr.write( - "Unable to restore node set driver for {editor}: {error}\n".format( - editor = path, - error = "%s: %s" % ( type(e), e ) + "Unable to restore editor state for {editor}: {error}\n".format( + editor = "%s (%s)" % ( path, type(editor).__name__ ), + error = "%s: %s" % ( type(e).__name__, e ) ) )
Fix Eltex.MES5448.get_lldp_neighbors script HG-- branch : feature/microservices
@@ -21,26 +21,28 @@ class Script(BaseScript): interface = IGetLLDPNeighbors rx_detail = re.compile( - r"^Chassis ID Subtype: (?P<chassis_id_subtype>.+)\n" - r"^Chassis ID: (?P<chassis_id>.+)\n" - r"^Port ID Subtype: (?P<port_id_subtype>.+)\n" - r"^Port ID: (?P<port_id>.+)\n" - r"^System Name:(?P<system_name>.*)\n" - r"^System Description:(?P<system_description>.*)\n" - r"^Port Description:(?P<port_description>.*)\n" - r"^System Capabilities Supported:.*\n" + r"^Chassis ID Subtype: (?P<chassis_id_subtype>.+?)\n" + r"^Chassis ID: (?P<chassis_id>.+?)\n" + r"^Port ID Subtype: (?P<port_id_subtype>.+?)\n" + r"^Port ID: (?P<port_id>.+?)\n" + r"^System Name:(?P<system_name>.*?)\n" + r"^System Description:(?P<system_description>.*?)\n" + r"^Port Description:(?P<port_description>.*?)\n" + r"^System Capabilities Supported:.*?\n" r"^System Capabilities Enabled:(?P<caps>.*?)\n", re.MULTILINE | re.DOTALL ) CAPS_MAP = { "repeater": 2, "bridge": 4, + "WLAN access point": 8, "router": 16 } CHASSIS_SUBTYPE = { "MAC Address": 4 } PORT_SUBTYPE = { + "Interface Alias": 1, "MAC Address": 3, "Interface Name": 5, "Local": 7 @@ -52,17 +54,16 @@ class Script(BaseScript): if not i[1]: continue c = self.cli("show lldp remote-device detail %s" % i[0]) - match = self.rx_detail.search(c) iface = { "local_interface": i[0], "neighbors": [] } + for match in self.rx_detail.finditer(c): cap = 0 for c in match.group("caps").split(","): c = c.strip() if c: cap |= self.CAPS_MAP[c] - n = { "remote_chassis_id": match.group("chassis_id").strip(), "remote_chassis_id_subtype": self.CHASSIS_SUBTYPE[
Workaround fix for YOLOv2 training. Set need_grad=True in get_unlinked_variable to fix the issue. Still, it doesn't fully solve it because there is a bug in nnabla itself; the workaround is just setting need_grad=True again.
@@ -142,8 +142,12 @@ def create_network(batchsize, imheight, imwidth, args): nH = yolo_features.shape[2] nW = yolo_features.shape[3] - output = yolo_features.unlinked() - output = output.reshape((nB, nA, (5+nC), nH, nW)) + output = yolo_features.get_unlinked_variable(need_grad=True) + # TODO: Workaround until v1.0.2. + # Explicitly enable grad since need_grad option above didn't work. + output.need_grad = True + + output = F.reshape(output, (nB, nA, (5 + nC), nH, nW)) output_splitted = F.split(output, 2) x, y, w, h, conf = [v.reshape((nB, nA, nH, nW)) for v in output_splitted[0:5]]
Test that the 2 created libraries are in `all_libraries` ...instead of just checking that `all_libraries` has at least 2 elements.
@@ -22,17 +22,16 @@ class TestGalaxyLibraries(GalaxyTestBase.GalaxyTestBase): self.assertIsNotNone(self.library['id']) def test_get_libraries(self): + library_data = self.gi.libraries.get_libraries(library_id=self.library['id'], deleted=False)[0] + self.assertTrue(library_data['name'] == self.name) deleted_name = 'deleted test library' deleted_library = self.gi.libraries.create_library(deleted_name, description='a deleted library', synopsis='automated test synopsis') self.gi.libraries.delete_library(deleted_library['id']) - # Make sure there's at least two values - the two we created - # - one deleted, one not, with the same IDs provided on creation - all_libraries = self.gi.libraries.get_libraries(deleted=None) - self.assertGreaterEqual(len(all_libraries), 2) - library_data = self.gi.libraries.get_libraries(library_id=self.library['id'], deleted=False)[0] - self.assertTrue(library_data['name'] == self.name) deleted_library_data = self.gi.libraries.get_libraries(library_id=deleted_library['id'], deleted=True)[0] self.assertTrue(deleted_library_data['name'] == deleted_name) + all_libraries = self.gi.libraries.get_libraries(deleted=None) + self.assertTrue(any(l['id'] == self.library['id'] for l in all_libraries)) + self.assertTrue(any(l['id'] == deleted_library['id'] for l in all_libraries)) def test_show_library(self): library_data = self.gi.libraries.show_library(self.library['id'])
[fix] Update env.WhereIs docs [ci skip]. Apply the patch in the issue, and further tweak the wording.
@@ -3650,30 +3650,51 @@ SConscript(dirs='doc', variant_dir='build/doc', duplicate=0) Searches for the specified executable <varname>program</varname>, returning the full path name to the program -if it is found, -and returning None if not. -Searches the specified -<varname>path</varname>, -the value of the calling environment's PATH -(<literal>env['ENV']['PATH']</literal>), -or the user's current external PATH -(<literal>os.environ['PATH']</literal>) -by default. +if it is found, else <literal>None</literal>. +Searches the value of the +<varname>path</varname> keyword argument, +or if <literal>None</literal> (the default) +the value of the calling environment's <envar>PATH</envar> +(<literal>env['ENV']['PATH']</literal>). +If <varname>path</varname> is <literal>None</literal> and +the <literal>env['ENV']['PATH']</literal> key does not exist, +the user's current external <envar>PATH</envar> +(<literal>os.environ['PATH']</literal>) is used as fallback. +</para> +<para> On Windows systems, searches for executable -programs with any of the file extensions -listed in the specified -<varname>pathext</varname>, -the calling environment's PATHEXT -(<literal>env['ENV']['PATHEXT']</literal>) -or the user's current PATHEXT +programs with any of the file extensions listed in the +<varname>pathext</varname> keyword argument, +or if <literal>None</literal> (the default) +the calling environment's <envar>PATHEXT</envar> +(<literal>env['ENV']['PATHEXT']</literal>). +The user's current external <envar>PATHEXT</envar> (<literal>os.environ['PATHEXT']</literal>) -by default. +is used as a fallback if <varname>pathext</varname> is +<literal>None</literal> +and the key <literal>env['ENV']['PATHEXT']</literal> +does not exist. +</para> +<para> Will not select any path name or names in the specified <varname>reject</varname> list, if any. </para> +<note> +<para> +If you would prefer to search +the user's current external <envar>PATH</envar> +(<literal>os.environ['PATH']</literal>) +by default, +consider using the function <literal>SCons.Util.WhereIs</literal> instead. +Note that <literal>SCons.Util.WhereIs</literal> +does not expand environment variables automatically +(no implicit <literal>env.subst</literal> for its arguments). +</para> +</note> + </summary> </scons_function>
Clarify TypeError message. * Clarify TypeError message * Fix a typo * Wrap type() in repr() so that the type checker stops complaining * Even better message; this incidentally speeds things up, since all() potentially checked every element and we no longer do.
@@ -56,7 +56,12 @@ def make_grid( """ if not torch.jit.is_scripting() and not torch.jit.is_tracing(): _log_api_usage_once(make_grid) - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): + if not torch.is_tensor(tensor): + if isinstance(tensor, list): + for t in tensor: + if not torch.is_tensor(t): + raise TypeError(f"tensor or list of tensors expected, got a list containing {type(t)}") + else: raise TypeError(f"tensor or list of tensors expected, got {type(tensor)}") if "range" in kwargs.keys():
Viewport geometry corruption in isometric view. Hair did not render correctly in isometric view; fixed by setting MAX_ORTHO_DEPTH=200, which is much better suited. Also a small improvement in tile export.
@@ -12,6 +12,12 @@ from rprblender.utils import logging log = logging.Log(tag='export.camera') +# Core has issues with drawing faces in orthographic camera view with big +# ortho depth (far_clip_plane - near_clip_plane). +# Experimentally found quite suited value = 200 +MAX_ORTHO_DEPTH = 200.0 + + @dataclass(init=False, eq=True) class CameraData: """ Comparable dataclass which holds all camera settings """ @@ -89,6 +95,7 @@ class CameraData: (camera.ortho_scale * ratio, camera.ortho_scale) data.ortho_size = tuple(data.ortho_size[i] * size[i] for i in (0, 1)) + data.clip_plane = (camera.clip_start, min(camera.clip_end, MAX_ORTHO_DEPTH + camera.clip_start)) elif camera.type == 'PANO': # TODO: Recheck parameters for PANO camera @@ -132,7 +139,8 @@ class CameraData: data.mode = pyrpr.CAMERA_MODE_ORTHOGRAPHIC ortho_size = context.region_data.view_distance * VIEWPORT_SENSOR_SIZE / context.space_data.lens data.lens_shift = (0.0, 0.0) - data.clip_plane = (-context.space_data.clip_end * 0.5, context.space_data.clip_end * 0.5) + ortho_depth = min(context.space_data.clip_end, MAX_ORTHO_DEPTH) + data.clip_plane = (-ortho_depth * 0.5, ortho_depth * 0.5) data.ortho_size = (ortho_size, ortho_size / ratio) if ratio > 1.0 else \ (ortho_size * ratio, ortho_size) @@ -168,7 +176,7 @@ class CameraData: :param tile: tuple of tile position <tuple> and tile size <tuple> normalized to 1 """ - pos, size = tile + tile_pos, tile_size = tile rpr_camera.set_mode(self.mode) rpr_camera.set_clip_plane(*self.clip_plane) @@ -176,16 +184,16 @@ class CameraData: # following formula is used: # lens_shift = lens_shift * resolution / tile_size + (center - resolution/2) / tile_size # where: center = tile_pos + tile_size/2 - lens_shift = tuple(self.lens_shift[i] / size[i] + (pos[i] + size[i] * 0.5 - 0.5) / size[i] for i in (0, 1)) + lens_shift = tuple((self.lens_shift[i] + tile_pos[i] + tile_size[i] * 0.5 - 0.5) / tile_size[i] for i in (0, 1)) rpr_camera.set_lens_shift(*lens_shift) if self.mode == pyrpr.CAMERA_MODE_PERSPECTIVE: - sensor_size = tuple(self.sensor_size[i] * size[i] for i in (0, 1)) + sensor_size = tuple(self.sensor_size[i] * tile_size[i] for i in (0, 1)) rpr_camera.set_sensor_size(*sensor_size) rpr_camera.set_focal_length(self.focal_length) elif self.mode == pyrpr.CAMERA_MODE_ORTHOGRAPHIC: - ortho_size = tuple(self.ortho_size[i] * size[i] for i in (0, 1)) + ortho_size = tuple(self.ortho_size[i] * tile_size[i] for i in (0, 1)) rpr_camera.set_ortho(*ortho_size) elif self.mode == pyrpr.CAMERA_MODE_LATITUDE_LONGITUDE_360:
Add Serverless Practitioners Summit 2020 Presentation Add link to Youtube recording of "Serverless Machine Learning Inference with KFServing - Clive Cox, Seldon & Yuzhui Liu, Bloomberg"
@@ -14,4 +14,5 @@ This page contains a list of KFServing presentations and demos.If you'd like to | [Anchor MLOps Podcast: Serving Models with KFServing](https://anchor.fm/mlops/episodes/MLOps-Coffee-Sessions-1-Serving-Models-with-Kubeflow-efbht0) | David Aponte, Demetrios Brinkmann| | [Kubeflow 101: What is KFserving?](https://www.youtube.com/watch?v=lj_X2ND2BBI) | Stephanie Wong | | [ICML 2020, Workshop on Challenges in Deploying and Monitoring Machine Learning Systems : Serverless inferencing on Kubernetes](https://slideslive.com/38931706/serverless-inferencing-on-kubernetes?ref=account-folder-55868-folders) | Clive Cox | +| [Serverless Practitioners Summit 2020: Serverless Machine Learning Inference with KFServing](https://www.youtube.com/watch?v=HlKOOgY5OyA) | Clive Cox, Yuzhui Liu|
tests: allow defining an arbitrary number of OSDs. Some tests might want to set this, since the number of devices will not necessarily map to the number of OSDs.
@@ -91,6 +91,9 @@ def node(host, request): num_devices = len(ansible_vars.get("devices", [])) if not num_devices: num_devices = len(ansible_vars.get("lvm_volumes", [])) + # If number of devices doesn't map to number of OSDs, allow tests to define + # that custom number, defaulting it to ``num_devices`` + num_osds = ansible_vars.get('num_osds', num_devices) cluster_name = ansible_vars.get("cluster", "ceph") conf_path = "/etc/ceph/{}.conf".format(cluster_name) if "osds" in group_names: @@ -116,6 +119,7 @@ def node(host, request): osd_ids=osd_ids, num_mons=num_mons, num_devices=num_devices, + num_osds=num_osds, cluster_name=cluster_name, conf_path=conf_path, cluster_address=cluster_address,
test_upload: Use assertLogs in upload tests to verify logs. This will avoid spam in test-backend output.
@@ -343,7 +343,11 @@ class FileUploadTest(UploadSerializeMixin, ZulipTestCase): # dummy_2 should not exist in database or the uploads folder do_delete_old_unclaimed_attachments(2) self.assertTrue(not Attachment.objects.filter(path_id = d2_path_id).exists()) + with self.assertLogs(level='WARNING') as warn_log: self.assertTrue(not delete_message_image(d2_path_id)) + self.assertEqual(warn_log.output, [ + 'WARNING:root:dummy_2.txt does not exist. Its entry in the database will be removed.' + ]) def test_attachment_url_without_upload(self) -> None: hamlet = self.example_user("hamlet") @@ -393,7 +397,10 @@ class FileUploadTest(UploadSerializeMixin, ZulipTestCase): # Then, try having a user who didn't receive the message try to publish it, and fail body = f"Illegal message ...[zulip.txt](http://{host}/user_uploads/" + d1_path_id + ")" + with self.assertLogs(level='WARNING') as warn_log: self.send_stream_message(self.example_user("cordelia"), "Denmark", body, "test") + self.assertTrue('WARNING:root:User 8 tried to share upload' in warn_log.output[0] + and 'but lacks permission' in warn_log.output[0]) self.assertEqual(Attachment.objects.get(path_id=d1_path_id).messages.count(), 1) self.assertFalse(Attachment.objects.get(path_id=d1_path_id).is_realm_public) @@ -1580,7 +1587,11 @@ class LocalStorageTest(UploadSerializeMixin, ZulipTestCase): self.assertEqual(expected_url, uri) # Delete the tarball. + with self.assertLogs(level='WARNING') as warn_log: self.assertIsNone(delete_export_tarball('not_a_file')) + self.assertEqual(warn_log.output, [ + 'WARNING:root:not_a_file does not exist. Its entry in the database will be removed.' + ]) path_id = urllib.parse.urlparse(uri).path self.assertEqual(delete_export_tarball(path_id), path_id) @@ -1635,7 +1646,11 @@ class S3Test(ZulipTestCase): @use_s3_backend def test_message_image_delete_when_file_doesnt_exist(self) -> None: + with self.assertLogs(level='WARNING') as warn_log: self.assertEqual(False, delete_message_image('non-existant-file')) + self.assertEqual(warn_log.output, [ + 'WARNING:root:non-existant-file does not exist. Its entry in the database will be removed.' + ]) @use_s3_backend def test_file_upload_authed(self) -> None: @@ -1877,7 +1892,11 @@ class S3Test(ZulipTestCase): self.assertEqual(uri, expected_url) # Delete the tarball. + with self.assertLogs(level='WARNING') as warn_log: self.assertIsNone(delete_export_tarball('not_a_file')) + self.assertEqual(warn_log.output, [ + 'WARNING:root:not_a_file does not exist. Its entry in the database will be removed.' + ]) path_id = urllib.parse.urlparse(uri).path self.assertEqual(delete_export_tarball(path_id), path_id)
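For readers unfamiliar with the pattern used throughout this diff, here is a self-contained illustration of `assertLogs` using plain `unittest` (no Zulip fixtures; the logged message is made up):

```python
import logging
import unittest

class AssertLogsExample(unittest.TestCase):
    def test_warning_is_captured(self) -> None:
        with self.assertLogs(level="WARNING") as warn_log:
            logging.warning("dummy_2.txt does not exist.")
        # Captured records are formatted as "LEVEL:logger_name:message".
        self.assertEqual(warn_log.output, ["WARNING:root:dummy_2.txt does not exist."])

if __name__ == "__main__":
    unittest.main()
```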
Tutorial updates: punctuation, stylistic issues.
@@ -151,7 +151,7 @@ The following example demonstrates this computation in SciPy >>> A.dot(linalg.inv(A)) #double check array([[ 1.00000000e+00, -1.11022302e-16, -5.55111512e-17], [ 3.05311332e-16, 1.00000000e+00, 1.87350135e-16], - [ 2.22044605e-16, -1.11022302e-16, 1.00000000e+00]]). + [ 2.22044605e-16, -1.11022302e-16, 1.00000000e+00]]) Solving a linear system ^^^^^^^^^^^^^^^^^^^^^^^ @@ -244,7 +244,7 @@ In SciPy, this is computed as shown in this example: array([[1, 2], [3, 4]]) >>> linalg.det(A) - -2.0. + -2.0 Computing norms
Add "is_diff" to the metric name in EvalResults. Support default candidate model when no model_name is provided when loading eval_result.
@@ -121,6 +121,7 @@ def load_and_deserialize_metrics( path: Text, model_name: Optional[Text] = None) -> List[Tuple[slicer.SliceKeyType, Any]]: """Loads metrics from the given location and builds a metric map for it.""" + # TODO(b/150413770): support metrics from multiple candidates. result = [] for record in tf.compat.v1.python_io.tf_record_iterator(path): metrics_for_slice = metrics_for_slice_pb2.MetricsForSlice.FromString(record) @@ -133,6 +134,7 @@ def load_and_deserialize_metrics( } } + default_model_name = None if metrics_for_slice.metric_keys_and_values: for kv in metrics_for_slice.metric_keys_and_values: current_model_name = kv.key.model_name @@ -148,24 +150,33 @@ def load_and_deserialize_metrics( kv.key.sub_key) if kv.key.HasField('sub_key') else '' if sub_key_id not in sub_key_metrics_map: sub_key_metrics_map[sub_key_id] = {} + if kv.key.is_diff: + if default_model_name is None: + default_model_name = current_model_name + elif default_model_name != current_model_name: + # Setting '' to trigger no match found ValueError below. + default_model_name = '' + metric_name = '{}_diff'.format(kv.key.name) + else: metric_name = kv.key.name sub_key_metrics_map[sub_key_id][ metric_name] = json_format.MessageToDict(kv.value) metrics_map = None keys = list(model_metrics_map.keys()) - if model_name in model_metrics_map: + tmp_model_name = model_name or default_model_name + if tmp_model_name in model_metrics_map: # Use the provided model name if there is a match. - metrics_map = model_metrics_map[model_name] + metrics_map = model_metrics_map[tmp_model_name] # Add model-independent (e.g. example_count) metrics to all models. - if model_name and '' in model_metrics_map: + if tmp_model_name and '' in model_metrics_map: for output_name, output_dict in model_metrics_map[''].items(): for sub_key_id, sub_key_dict in output_dict.items(): for name, value in sub_key_dict.items(): metrics_map.setdefault(output_name, {}).setdefault(sub_key_id, {})[name] = value - elif not model_name and len(keys) == 1: + elif not tmp_model_name and len(keys) == 1: # Show result of the only model if no model name is specified. metrics_map = model_metrics_map[keys[0]] else:
Update alias command: added an alias command for Windows.
@@ -36,6 +36,9 @@ Check the GitHub releases for the most stable release versions. > **IMPORTANT**: The core system relies on plugins (git submodules). If you are unfamiliar with this concept and want to run the bleeding-edge code, a "git pull" on this code will likely not be sufficient. You will also need to update the submodules to ensure all plugins are current. One way to do this is by using an alias, such as: ```alias tig="git reset --hard origin/master && git checkout master && git reset --hard origin/master && git pull && git submodule foreach git checkout master && git submodule foreach git pull"``` +Windows +'''set "tig=git reset --hard origin/master && git checkout master && git reset --hard origin/master && git pull && git submodule foreach git checkout master && git submodule foreach git pull"''' + > *NOTE*: The functionality and schema used by the first release of CALDERA is now stored within the *ADVERSARY* plugin. This plugin is loaded automatically with the rest of the submodules, but will not be loaded in
Added geostationary orbit creation method. The method is supposed to be called with: an attractor; the attractor's rotational velocity or period; and the Hill radius (optional). Issue:
@@ -297,6 +297,46 @@ class Orbit(object): attractor, a, ecc, inc, raan, argp, arglat, epoch, plane ) + @classmethod + @u.quantity_input(angular_velocity=u.rad / u.s, period=u.s, hill_radius=u.m) + def geostationary( + cls, attractor, angular_velocity=None, period=None, hill_radius=None + ): + """Return the geostationary orbit for the given attractor and its rotational speed. + + Parameters + ---------- + attractor : Body + Main attractor. + angular_velocity : ~astropy.units.Quantity + Rotational angular velocity of the attractor. + period : ~astropy.units.Quantity + Attractor's rotational period, ignored if angular_velocity is passed. + hill_radius : ~astropy.units.Quantity + Radius of Hill sphere of the attractor (optional). Gravitational sphere of + influence of parent body is ignored if hill_radius is not provided. + """ + + if angular_velocity is None and period is None: + raise ValueError( + "At least one among angular_velocity or period must be passed" + ) + + if angular_velocity is None: + angular_velocity = 2 * np.pi / period + + # Find out geostationary radius using r = cube_root(GM/(angular velocity)^2) + with u.set_enabled_equivalencies(u.dimensionless_angles()): + geo_radius = np.cbrt(attractor.k / np.square(angular_velocity.to(1 / u.s))) + + if hill_radius is not None and geo_radius > hill_radius: + raise RuntimeError( + "Geostationary orbit for the given parameters doesn't exist" + ) + + altitude = geo_radius - attractor.R + return cls.circular(attractor, altitude) + @classmethod @u.quantity_input(p=u.m, inc=u.rad, raan=u.rad, argp=u.rad, nu=u.rad) def parabolic(
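A minimal usage sketch of the new method, under stated assumptions: the `poliastro.bodies.Earth` and `poliastro.twobody.Orbit` import paths and the use of one sidereal day as the rotational period are assumptions, not taken from the diff.

```python
from astropy import units as u
from poliastro.bodies import Earth
from poliastro.twobody import Orbit

# One sidereal day as Earth's rotational period (assumed value).
geo = Orbit.geostationary(Earth, period=23.9345 * u.hour)
print(geo.a.to(u.km))  # semi-major axis, roughly 42 164 km for Earth
```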
Use the library short name instead of "mdl" in the playground script TN:
## vim: filetype=makopython import argparse + from IPython import embed from IPython.terminal.ipapp import load_default_config -import ${module_name} as mdl + +import ${module_name} +import ${module_name} as ${ctx.short_name.lower if ctx.short_name else 'mdl'} HEADER = """ -- @@ -18,7 +21,7 @@ there are multiple. Enjoy! """.strip() -ctx = mdl.AnalysisContext('utf-8') +ctx = ${module_name}.AnalysisContext('utf-8') parser = argparse.ArgumentParser( description="${module_name} playground. Analyze files passed as arguments."
MAINT: attempt to fix a sporadic optimiser test failure. [CHANGED] The test that hitting max evaluations triggers an exit fails rarely, but still fails. All I've done in this patch is check whether the number of evaluations done is greater than *or equal to* the set limit...
@@ -93,7 +93,7 @@ class OptimiserTestCase(TestCase): f, last, evals = MakeF() x, e = quiet(maximise, f, xinit=[1.0], bounds=([-10, 10]), return_eval_count=True) - self.assertTrue(e > 500) + self.assertGreaterEqual(e, 500) def test_checkpointing(self): filename = 'checkpoint.tmp.pickle'
RPM: Do not define unused variables on RHEL8. * It seems OBS recently started hating these because they do not define concrete versions, and it therefore fails; however, these variables are not used, so let's just guard their definition.
+%if 0%{?rhel} < 8 # detect python site-packages path, use get_python_lib(0) as nuitka using %global python_sitearch %(%{__python} -c "import sys, distutils.sysconfig; sys.stdout.write(distutils.sysconfig.get_python_lib(0))") - %global python3_sitearch %(%{__python3} -c "import sys, distutils.sysconfig; sys.stdout.write(distutils.sysconfig.get_python_lib(0))") +%endif %global _python_bytecompile_errors_terminate_build 0
Enable hub tests on MacOS Summary: fix This was broken by a bad openssl release in conda. Should be fixed now. Testing... Pull Request resolved:
@@ -14,7 +14,7 @@ from torch.utils.checkpoint import checkpoint, checkpoint_sequential import torch.hub as hub from torch.autograd._functions.utils import prepare_onnx_paddings from torch.autograd._functions.utils import check_onnx_broadcast -from common_utils import skipIfRocm, load_tests, IS_MACOS +from common_utils import skipIfRocm, load_tests # load_tests from common_utils is used to automatically filter tests for # sharding on sandcastle. This line silences flake warnings @@ -519,7 +519,6 @@ def sum_of_model_parameters(model): SUM_OF_PRETRAINED_RESNET18_PARAMS = -12703.992365 [email protected](IS_MACOS, 'Broken on macOS; see #26032') class TestHub(TestCase): @classmethod def setUpClass(cls):
[tests] Make tests run when env is passed to subprocess fixes
@@ -75,7 +75,7 @@ class MockPopen(object): self.mock.returncode = 0 def assert_call(self, cmd): - self.mock.popen.assert_any_call(shlex.split(cmd), stdout=subprocess.PIPE, stderr=subprocess.STDOUT) + self.mock.popen.assert_any_call(shlex.split(cmd), env=mock.ANY, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) def cleanup(self): self._patch.stop()
Add a diagnostic for missing dynvar bindings in Bind's conv_prop TN:
@@ -7,8 +7,8 @@ from langkit.compiled_types import ( from langkit.diagnostics import check_multiple, check_source_language from langkit.expressions.base import ( - AbstractExpression, CallExpr, ComputingExpr, LiteralExpr, PropertyDef, - aggregate_expr, auto_attr, construct, render + AbstractExpression, CallExpr, ComputingExpr, DynamicVariable, LiteralExpr, + PropertyDef, aggregate_expr, auto_attr, construct, render ) @@ -172,6 +172,10 @@ class Bind(AbstractExpression): )), ]) + DynamicVariable.check_call_bindings( + self.conv_prop, "In Bind's conv_prop {prop}" + ) + # Those checks are run in construct, because we need the eq_prop to be # prepared already, which is not certain in do_prepare (order # dependent).
Re-enable dataflow tests. tfx-bsl 0.25.0 is released and we can run these tests again.
@@ -306,23 +306,22 @@ class TaxiTemplateKubeflowE2ETest(test_utils.BaseEndToEndTest): self._update_pipeline() self._run_pipeline() - # TODO(b/170163019) Re-enable Dataflow tests after tfx-bsl 0.25.0 release. - # # Enable Dataflow - # self._comment('kubeflow_dag_runner.py', [ - # 'beam_pipeline_args=configs\n', - # '.BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS', - # ]) - # self._uncomment('kubeflow_dag_runner.py', [ - # 'beam_pipeline_args=configs.DATAFLOW_BEAM_PIPELINE_ARGS', - # ]) - # logging.info('Added Dataflow to pipeline.') - # self._update_pipeline() - # self._run_pipeline() - - # # Enable CAIP extension. - # self._comment('kubeflow_dag_runner.py', [ - # 'beam_pipeline_args=configs.DATAFLOW_BEAM_PIPELINE_ARGS', - # ]) + # Enable Dataflow + self._comment('kubeflow_dag_runner.py', [ + 'beam_pipeline_args=configs\n', + '.BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS', + ]) + self._uncomment('kubeflow_dag_runner.py', [ + 'beam_pipeline_args=configs.DATAFLOW_BEAM_PIPELINE_ARGS', + ]) + logging.info('Added Dataflow to pipeline.') + self._update_pipeline() + self._run_pipeline() + + # Enable CAIP extension. + self._comment('kubeflow_dag_runner.py', [ + 'beam_pipeline_args=configs.DATAFLOW_BEAM_PIPELINE_ARGS', + ]) self._uncomment('kubeflow_dag_runner.py', [ 'ai_platform_training_args=configs.GCP_AI_PLATFORM_TRAINING_ARGS,', 'ai_platform_serving_args=configs.GCP_AI_PLATFORM_SERVING_ARGS,',
Update response.py: fix status code not propagating from response.stream to response.StreamingHTTPResponse.
@@ -331,7 +331,7 @@ def stream( :param headers: Custom Headers. """ return StreamingHTTPResponse( - streaming_fn, headers=headers, content_type=content_type, status=200) + streaming_fn, headers=headers, content_type=content_type, status=status) def redirect(to, headers=None, status=302,
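A hedged illustration of the behaviour after this fix, assuming the usual `sanic.response` import path and a no-op streaming function (both assumptions, not taken from the diff):

```python
from sanic.response import stream

async def noop_streaming_fn(response):
    pass  # a real handler would write to the response here

streaming_response = stream(noop_streaming_fn, status=404, content_type="text/plain")
assert streaming_response.status == 404  # previously this was always 200
```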
Mention the minimum TF version for Tensorforce. The sampling_body method in replay.py uses tf.while_loop and is given a parameter named maximum_iterations. This parameter was introduced in Tensorflow version 1.5. If Tensorflow 1.4 is used for DQN and similar agents, we get an unknown args 'maximum_iteration' error.
@@ -102,7 +102,7 @@ pip install -e . ``` TensorForce is built on [Google's Tensorflow](https://www.tensorflow.org/). The installation command assumes -that you have `tensorflow` or `tensorflow-gpu` installed. +that you have `tensorflow` or `tensorflow-gpu` installed. Tensorforce requires Tensorflow version 1.5 or later Alternatively, you can use the following commands to install the tensorflow dependency.
Changed logging method: PypeLogger is obsolete.
import os from enum import Enum from abc import abstractmethod - import attr from openpype.lib.path_tools import sha256sum -from openpype.lib import PypeLogger from openpype.lib.file_handler import RemoteFileHandler - -log = PypeLogger().get_logger(__name__) +from openpype.lib import Logger class UrlType(Enum): @@ -28,6 +25,7 @@ class AddonInfo(object): class AddonDownloader: + log = Logger.get_logger(__name__) def __init__(self): self._downloaders = {} @@ -101,7 +99,7 @@ class HTTPAddonDownloader(AddonDownloader): @classmethod def download(cls, addon_url, destination): - log.debug(f"Downloading {addon_url} to {destination}") + cls.log.debug(f"Downloading {addon_url} to {destination}") file_name = os.path.basename(destination) _, ext = os.path.splitext(file_name) if (ext.replace(".", '') not @@ -113,6 +111,7 @@ class HTTPAddonDownloader(AddonDownloader): return os.path.join(destination, file_name) + def get_addons_info(): """Returns list of addon information from Server""" # TODO temp @@ -145,6 +144,10 @@ def update_addon_state(addon_infos, destination_folder, factory): factory (AddonDownloader): factory to get appropriate downloader per addon type """ + from openpype.lib import Logger + + log = Logger.get_logger(__name__) + for addon in addon_infos: full_name = "{}_{}".format(addon.name, addon.version) addon_dest = os.path.join(destination_folder, full_name)
Replace some malloc+memset pairs with calloc. Summary: Pull Request resolved:
@@ -33,8 +33,7 @@ THCCudaResourcesPerDevice* THCState_getDeviceResourcePtr( THCState* THCState_alloc(void) { - THCState* state = (THCState*) malloc(sizeof(THCState)); - memset(state, 0, sizeof(THCState)); + THCState* state = (THCState*) calloc(1, sizeof(THCState)); return state; } @@ -55,8 +54,7 @@ void THCudaInit(THCState* state) THCudaCheck(cudaGetDevice(&device)); state->resourcesPerDevice = (THCCudaResourcesPerDevice*) - malloc(numDevices * sizeof(THCCudaResourcesPerDevice)); - memset(state->resourcesPerDevice, 0, numDevices * sizeof(THCCudaResourcesPerDevice)); + calloc(numDevices, sizeof(THCCudaResourcesPerDevice)); state->deviceProperties = (struct cudaDeviceProp*)malloc(numDevices * sizeof(struct cudaDeviceProp)); @@ -69,14 +67,12 @@ void THCudaInit(THCState* state) // "-1" (unknown). // Currently the max number of gpus in P2P group is 8, so if there are more // we enable P2P in groups of 8 - state->p2pAccessEnabled = (int**) malloc(sizeof(int*) * numDevices); + state->p2pAccessEnabled = (int**) calloc(numDevices, sizeof(int*)); for (int i = 0; i < numDevices; ++i) { - state->p2pAccessEnabled[i] = (int*) malloc(sizeof(int) * numDevices); + state->p2pAccessEnabled[i] = (int*) calloc(numDevices, sizeof(int)); for (int j = 0; j < numDevices; ++j) if (i == j) state->p2pAccessEnabled[i][j] = 1; - else if (j / THC_CUDA_MAX_PEER_SIZE != i / THC_CUDA_MAX_PEER_SIZE) - state->p2pAccessEnabled[i][j] = 0; else state->p2pAccessEnabled[i][j] = -1; }
Fix center marker shifting fixes
@@ -133,6 +133,6 @@ export default createComponent({ left: 0; z-index: 2; - transform: translate(calc(50vw - 12.5px), calc(50vh - 12.5px + 47px)); + transform: translate(calc(50vw - 12.5px), calc(50vh - 12.5px + 47px - 25px)); } </style>
Cannot execute sample code. I tried to execute the sample code for parse() on Python 2.7 and 3.0, but it raised a JSONDecodeError because the "pattern" value of this sample JSON is described as multi-line. To ensure that the sample code works, this value should be single-line.
@@ -60,8 +60,7 @@ To parse a STIX JSON string into a Python STIX object, use "malicious-activity" ], "name": "File hash for malware variant", - "pattern": "[file:hashes.md5 = - 'd41d8cd98f00b204e9800998ecf8427e']", + "pattern": "[file:hashes.md5 ='d41d8cd98f00b204e9800998ecf8427e']", "valid_from": "2017-09-26T23:33:39.829952Z" }""") print(indicator)
Use our stored version of the crowdin-cli So it'll never change under our feet.
@@ -102,7 +102,7 @@ compilemessages: syncmessages: ensurecrowdinclient uploadmessages downloadmessages distributefrontendmessages ensurecrowdinclient: - ls -l crowdin-cli.jar || wget https://crowdin.com/downloads/crowdin-cli.jar # make sure we have the official crowdin cli client + ls -l crowdin-cli.jar || wget https://storage.googleapis.com/le-downloads/crowdin-cli/crowdin-cli.jar # make sure we have the official crowdin cli client uploadmessages: java -jar crowdin-cli.jar upload sources -b `git symbolic-ref HEAD | xargs basename`
Prevent incorrect tuple size on get_extra_info errors. According to get_extra_info fails by returning None. This is an attempt at normalizing the response in the cases of AF_INET, AF_INET6, and erroneous return values.
@@ -143,7 +143,7 @@ class Request(dict): @property def ip(self): if not hasattr(self, '_ip'): - self._ip = self.transport.get_extra_info('peername') + self._ip = self.transport.get_extra_info('peername') or (None, None) return self._ip @property
Update install docs to reflect new minimum dependencies [ci skip]
@@ -34,42 +34,29 @@ Dependencies Mandatory dependencies ^^^^^^^^^^^^^^^^^^^^^^ -- `numpy <http://www.numpy.org/>`__ (>= 1.9.3) +- `numpy <http://www.numpy.org/>`__ (>= 1.10.4) -- `scipy <https://www.scipy.org/>`__ (>= 0.14.0) +- `scipy <https://www.scipy.org/>`__ (>= 0.17.1) -- `matplotlib <https://matplotlib.org>`__ (>= 1.4.3) +- `matplotlib <https://matplotlib.org>`__ (>= 1.5.3) -- `pandas <https://pandas.pydata.org/>`__ (>= 0.15.2) +- `pandas <https://pandas.pydata.org/>`__ (>= 0.17.1) Recommended dependencies ^^^^^^^^^^^^^^^^^^^^^^^^ -- `statsmodels <https://www.statsmodels.org/>`__ (>= 0.5.0) - -Testing -~~~~~~~ - -To test seaborn, run ``make test`` in the root directory of the source -distribution. This runs the unit test suite (using ``pytest``, but many older -tests use ``nose`` asserts). It also runs the example code in function -docstrings to smoke-test a broader and more realistic range of example usage. - -The full set of tests requires an internet connection to download the example -datasets (if they haven't been previously cached), but the unit tests should -be possible to run offline. - +- `statsmodels <https://www.statsmodels.org/>`__ (>= 0.8.0) Bugs ~~~~ Please report any bugs you encounter through the github `issue tracker <https://github.com/mwaskom/seaborn/issues/new>`_. It will be most helpful to -include a reproducible example on one of the example datasets (accessed through -:func:`load_dataset`). It is difficult debug any issues without knowing the -versions of seaborn and matplotlib you are using, as well as what `matplotlib -backend <https://matplotlib.org/faq/usage_faq.html#what-is-a-backend>`__ you -are using to draw the plots, so please include those in your bug report. +include a reproducible example on synthetic data or one of the example datasets +(accessed through :func:`load_dataset`). It is difficult debug any issues +without knowing the versions of seaborn and matplotlib you are using, as well +as what `matplotlib backend +<https://matplotlib.org/faq/usage_faq.html#what-is-a-backend>`__ you are have active, so please include those in your bug report. .. raw:: html
Updates for InvenTree serializer classes Catch and re-throw errors correctly
@@ -167,6 +167,18 @@ class InvenTreeModelSerializer(serializers.ModelSerializer): return self.instance + def update(self, instance, validated_data): + """ + Catch any django ValidationError, and re-throw as a DRF ValidationError + """ + + try: + instance = super().update(instance, validated_data) + except (ValidationError, DjangoValidationError) as exc: + raise ValidationError(detail=serializers.as_serializer_error(exc)) + + return instance + def run_validation(self, data=empty): """ Perform serializer validation. @@ -188,7 +200,10 @@ class InvenTreeModelSerializer(serializers.ModelSerializer): # Update instance fields for attr, value in data.items(): + try: setattr(instance, attr, value) + except (ValidationError, DjangoValidationError) as exc: + raise ValidationError(detail=serializers.as_serializer_error(exc)) # Run a 'full_clean' on the model. # Note that by default, DRF does *not* perform full model validation! @@ -219,20 +234,10 @@ class InvenTreeAttachmentSerializer(InvenTreeModelSerializer): filename = serializers.CharField( label=_('Filename'), required=False, - source='get_filename', + source='basename', + allow_blank=False, ) - def update(self, instance, validated_data): - """ - Filename can only be edited on "update" - """ - - instance = super().update(instance, validated_data) - - print(validated_data) - - return instance - class InvenTreeAttachmentSerializerField(serializers.FileField): """
installer pip changes
@@ -82,7 +82,6 @@ case "$ID" in PACKAGE_MGR=$(command -v apt-get) PYTHON_PREIN="git patch" PYTHON_DEPS="python3 python3-pip python3-dev python3-setuptools python3-zmq python3-tornado python3-cryptography python3-simplejson python3-requests gcc g++ libssl-dev swig python3-yaml wget" - PYTHON_PIPS="m2crypto" BUILD_TOOLS="build-essential libtool automake pkg-config m4 libgcrypt20-dev uthash-dev autoconf autoconf-archive libcurl4-gnutls-dev gnulib doxygen libdbus-1-dev" NEED_BUILD_TOOLS=1 $PACKAGE_MGR update @@ -96,7 +95,6 @@ case "$ID" in NEED_EPEL=1 PYTHON_PREIN="python36 python36-devel python36-setuptools python36-pip git wget patch openssl" PYTHON_DEPS="gcc gcc-c++ openssl-devel swig python36-PyYAML python36-tornado python36-simplejson python36-cryptography python36-requests python36-zmq yaml-cpp-devel" - PYTHON_PIPS="m2crypto" BUILD_TOOLS="openssl-devel file libtool make automake m4 libgcrypt-devel autoconf autoconf-archive libcurl-devel libstdc++-devel uriparser-devel dbus-devel gnulib-devel doxygen" NEED_BUILD_TOOLS=1 CENTOS7_TSS_FLAGS="--enable-esapi=no --disable-doxygen-doc" @@ -106,8 +104,7 @@ case "$ID" in PACKAGE_MGR=$(command -v dnf) NEED_EPEL=1 PYTHON_PREIN="python3 python3-devel python3-setuptools python3-pip" - PYTHON_DEPS="gcc gcc-c++ openssl-devel python3-yaml python3-requests swig python3-cryptography wget git" - PYTHON_PIPS="tornado==5.0.2 pyzmq m2crypto simplejson" + PYTHON_DEPS="gcc gcc-c++ openssl-devel python3-yaml python3-requests swig python3-cryptography wget git python3-tornado python3-zmq python3-simplejson" BUILD_TOOLS="git wget patch libyaml openssl-devel libtool make automake m4 libgcrypt-devel autoconf libcurl-devel libstdc++-devel dbus-devel" #TPM2_TOOLS_PKGS="tpm2-tss tpm2-tools tpm2-abrmd" TODO: still on 3.1.1 tpm2_tools NEED_BUILD_TOOLS=1
Update GitHub Action workflows to use micromamba. Replace conda setup with micromamba; reduce fetch depth for checkout; fetch tags for version inference; install pvlib from source before testing. Closes
@@ -8,7 +8,6 @@ on: jobs: test: - strategy: fail-fast: false # don't cancel other matrix jobs when one fails matrix: @@ -31,16 +30,23 @@ jobs: runs-on: ${{ matrix.os }} steps: - - uses: actions/checkout@v3 + # We check out only a limited depth and then pull tags to save time + - name: Checkout source + uses: actions/checkout@v3 + with: + fetch-depth: 100 + + - name: Get tags + run: git fetch --depth=1 origin +refs/tags/*:refs/tags/* - - name: Set up conda environment + - name: Install Conda environment with Micromamba if: matrix.environment-type == 'conda' - uses: conda-incubator/setup-miniconda@v2 + uses: mamba-org/provision-with-micromamba@v12 with: - activate-environment: test_env environment-file: ${{ env.REQUIREMENTS }} - python-version: ${{ matrix.python-version }} - auto-activate-base: false + cache-downloads: true + extra-specs: | + python=${{ matrix.python-version }} env: # build requirement filename. First replacement is for the python # version, second is to add "-min" if needed @@ -49,7 +55,7 @@ jobs: - name: List installed package versions (conda) if: matrix.environment-type == 'conda' shell: bash -l {0} # necessary for conda env to be active - run: conda list + run: micromamba list - name: Install bare Python ${{ matrix.python-version }}${{ matrix.suffix }} if: matrix.environment-type == 'bare' @@ -57,6 +63,11 @@ jobs: with: python-version: ${{ matrix.python-version }} + - name: Install pvlib + if: matrix.environment-type == 'conda' + shell: bash -l {0} + run: python -m pip install --no-deps . + - name: Set up bare environment if: matrix.environment-type == 'bare' run: |
External Schema/Services credential type rendering bug. When 'name.' + <value> is not found in the translation resources, it displays as name.<value>; just display <value> instead.
@@ -35,6 +35,7 @@ import { UtilModule } from './util/util.module'; import { environment } from '../environments/environment'; const ROUTE_PREFIX: string = 'ROUTES.'; +const NAME_PREFIX: string = 'name.'; export const appInitializerFn = (appConfig: AppConfigService) => { return () => { @@ -69,6 +70,11 @@ export class MyMissingTranslationHandler implements MissingTranslationHandler { if (params.key.substring(0, ROUTE_PREFIX.length) === ROUTE_PREFIX) { return params.key.substring(ROUTE_PREFIX.length); } + // when credential type name is not in resources, + // remove the resource prefix 'name.' and render incoming value + if (params.key.substring(0, NAME_PREFIX.length) === NAME_PREFIX) { + return params.key.substring(NAME_PREFIX.length); + } // highlight missing translation strings in development mode if (!environment.production && !~params.key.indexOf('??')) { console.warn('missing translation: ' + params.key);
[otBase] Actually call conv.writeArray(). Huh. Somehow writeArray() was never wired up. We lose the failing array index in the exception, but that is fine by me.
@@ -728,12 +728,11 @@ class BaseTable(object): # conv.repeat is a propagated count writer[conv.repeat].setValue(countValue) values = value - for i, value in enumerate(values): try: - conv.write(writer, font, table, value, i) + conv.writeArray(writer, font, table, values) except Exception as e: name = value.__class__.__name__ if value is not None else conv.name - e.args = e.args + (name+'['+str(i)+']',) + e.args = e.args + (name+'[]',) raise elif conv.isCount: # Special-case Count values.
Bump docutils to 0.18.1 (Not 0.19, which is not compatible with sphinx-rtd-theme yet)
@@ -21,9 +21,9 @@ Babel==2.11.0 \ --hash=sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe \ --hash=sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6 # docutils is required by Sphinx -docutils==0.17.1 \ - --hash=sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61 \ - --hash=sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125 +docutils==0.18.1 \ + --hash=sha256:23010f129180089fbcd3bc08cfefccb3b890b0050e1ca00c867036e9d161b98c \ + --hash=sha256:679987caf361a7539d76e584cbeddc311e3aee937877c87346f31debc63e9d0 # imagesize is required by Sphinx imagesize==1.4.1 \ --hash=sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b \
[DOC] clarifications in the Deseasonalizer docstring Adds some clarifications in the docstring of `Deseasonalizer`.
@@ -20,10 +20,14 @@ from sktime.utils.validation.forecasting import check_sp class Deseasonalizer(BaseTransformer): """Remove seasonal components from a time series. - Fit computes :term:`seasonal components <Seasonality>` and - stores them in `seasonal_`. + Applies `statsmodels.tsa.seasonal.seasonal_compose` and removes the `seasonal` + component in `transform`. Adds seasonal component back again in `inverse_transform`. + Seasonality removal can be additive or multiplicative. - Transform aligns seasonal components stored in `_seasonal` with + `fit` computes :term:`seasonal components <Seasonality>` and + stores them in `seasonal_` attribute. + + `transform` aligns seasonal components stored in `seasonal_` with the time index of the passed :term:`series <Time series>` and then substracts them ("additive" model) from the passed :term:`series <Time series>` or divides the passed series by them ("multiplicative" model).
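To make the `fit`/`transform`/`inverse_transform` behaviour described above concrete, here is a minimal sketch assuming sktime's public import paths and its bundled airline dataset (both are assumptions, not taken from the diff):

```python
from sktime.datasets import load_airline
from sktime.transformations.series.detrend import Deseasonalizer

y = load_airline()  # monthly data, so seasonal periodicity sp=12
transformer = Deseasonalizer(model="multiplicative", sp=12)

y_deseasonalized = transformer.fit_transform(y)               # seasonal component removed
y_restored = transformer.inverse_transform(y_deseasonalized)  # seasonal component added back
```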
TrivialFix: replace list comprehension with 'for'. Creating a list here is useless. As I understand it, such a 'cycle' was added just to use fewer lines of code, but creating this list allocates memory for it.
@@ -1266,7 +1266,8 @@ def numa_get_constraints(flavor, image_meta): nodes, flavor, cpu_list, mem_list) # We currently support same pagesize for all cells. - [setattr(c, 'pagesize', pagesize) for c in numa_topology.cells] + for c in numa_topology.cells: + setattr(c, 'pagesize', pagesize) cpu_policy = _get_cpu_policy_constraints(flavor, image_meta) cpu_thread_policy = _get_cpu_thread_policy_constraints(flavor, image_meta)
Make reason a required input to bb watch Resolves
@@ -220,9 +220,14 @@ class BigBrother: """ Relay messages sent by the given `user` to the `#big-brother-logs` channel - If a `reason` is specified, a note is added for `user` + A `reason` for watching is required, which is added for the user to be watched as a + note (aka: shadow warning) """ + if not reason: + await ctx.send(":x: A reason for watching this user is required") + return + channel_id = Channels.big_brother_logs post_data = { @@ -251,8 +256,7 @@ class BigBrother: reason = data.get('error_message', "no message provided") await ctx.send(f":x: the API returned an error: {reason}") - # Add a note (shadow warning) if a reason is specified - if reason: + # Add a note (shadow warning) with the reason for watching reason = "bb watch: " + reason # Prepend for situational awareness await post_infraction(ctx, user, type="warning", reason=reason, hidden=True)
Remove dead function Summary: Pull Request resolved: This wasn't called from anywhere (confirmed by grep) ghstack-source-id: Test Plan: waitforsandcastle
@@ -913,30 +913,6 @@ def create_generic(top_env, declarations): return broadcast_actuals - def emit_nn_body(option): - # type: (FunctionOption) -> Union[str, List[str]] - # Concrete definition on Type.cpp for NN functions. Delegates to the - # xxx_forward variant variant after creating any necessary buffers. - actuals = option['actuals'] - base_name = option['name'][:-1] if option['inplace'] else option['name'] - fwd_name = option['api_name'].replace(base_name, base_name + '_forward') - - if len(option['buffers']) == 0: - return 'return {}({});'.format(fwd_name, ', '.join(actuals)) - - body = [] # type: List[str] - if option['api_name'].endswith('_out'): - # _out variants must create buffers and insert them in the - # arguments list between output and input arguments - for buffer in option['buffers']: - body.append('Tensor {} = at::empty({{0}}, this->options());'.format(buffer['name'])) - actuals = [arg['name'] for arg in option['arguments'] if arg.get('output')] - actuals += [buffer['name'] for buffer in option['buffers']] - actuals += [arg['name'] for arg in option['arguments'] if not arg.get('output')] - - body.append('return std::get<0>({}({}));'.format(fwd_name, ', '.join(actuals))) - return body - def process_option(option): # type: (FunctionOption) -> None option['inplace'] = re.search(
Add r in front of error message string to make this a raw string and avoid 'anomalous backslash in string' codacy and travis errors.
@@ -184,7 +184,9 @@ class Test_correct_collapsed_coordinates(IrisTest): new_cube.add_dim_coord(DimCoord([0, 1, 2], "forecast_period", units="hours"), 0) - message = "Require data with shape \(3,\), got \(2,\)\." + # r added in front of error message string to make this a raw string + # and avoid 'anomalous backslash in string' codacy and travis errors. + message = r"Require data with shape \(3,\), got \(2,\)\." with self.assertRaisesRegexp(ValueError, message): self.plugin.correct_collapsed_coordinates(orig_cube, new_cube, ['forecast_period'])
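A quick self-contained illustration of why the `r` prefix matters here; the pattern is taken from the diff, while the `re.match` call is only for demonstration.

```python
import re

# Without the r prefix, sequences such as "\(" are anomalous escape sequences that
# linters flag, while the regex engine needs the literal backslashes to escape the
# parentheses and the final dot.
message = r"Require data with shape \(3,\), got \(2,\)\."
assert re.match(message, "Require data with shape (3,), got (2,).")
```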
Make aoc_name a keyword argument to accept spaces. Makes `aoc_name` in the link command a keyword-only argument. This allows users to link accounts with spaces in the name without having to use quotes.
@@ -183,7 +183,7 @@ class AdventOfCode(commands.Cog): brief="Tie your Discord account with your Advent of Code name." ) @whitelist_override(channels=AOC_WHITELIST) - async def aoc_link_account(self, ctx: commands.Context, aoc_name: str = None) -> None: + async def aoc_link_account(self, ctx: commands.Context, *, aoc_name: str = None) -> None: """ Link your Discord Account to your Advent of Code name.
Fix error: make --csv and --input-column optional when running in interactive mode.
-from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser +from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentError import csv import os import time @@ -31,6 +31,8 @@ class AugmentCommand(TextAttackCommand): if not args.interactive: textattack.shared.utils.set_seed(args.random_seed) start_time = time.time() + if not (args.csv and args.input_column): + raise ArgumentError("The following arguments are required: --csv, --input-column/--i") # Validate input/output paths. if not os.path.exists(args.csv): raise FileNotFoundError(f"Can't find CSV at location {args.csv}") @@ -86,7 +88,7 @@ class AugmentCommand(TextAttackCommand): f"Wrote {len(output_rows)} augmentations to {args.outfile} in {time.time() - start_time}s." ) else: - print("Running in interactive mode") + print("\nRunning in interactive mode") print("----------------------------") while True: print( @@ -130,14 +132,15 @@ class AugmentCommand(TextAttackCommand): formatter_class=ArgumentDefaultsHelpFormatter, ) parser.add_argument( - "--csv", help="input csv file to augment", type=str, required=True + "--csv", help="input csv file to augment", type=str, required=False, default=None ) parser.add_argument( "--input-column", "--i", help="csv input column to be augmented", type=str, - required=True, + required=False, + default=None ) parser.add_argument( "--recipe",
Checksum the colors attribute of paths Fixes
@@ -821,6 +821,9 @@ def geometry_hash(geometry): if hasattr(geometry, 'visual'): # if visual properties are defined h += str(geometry.visual.crc()) + elif hasattr(geometry, 'colors'): + # paths do not use the visual attribute + h += str(caching.crc32(geometry.colors.tobytes())) return h
infra: update firewall rules, add cluster_network for osds At the moment, all daemons accept connections from 0.0.0.0. We should at least restrict to public_network and add cluster_network for OSDs. Closes:
firewalld: service: ceph-mon zone: "{{ ceph_mon_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: service: ceph zone: "{{ ceph_mgr_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: service: ceph zone: "{{ ceph_osd_firewall_zone }}" + source: "{{ item }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled + with_items: + - "{{ public_network }}" + - "{{ cluster_network }}" notify: restart firewalld when: - osd_group_name is defined firewalld: port: "{{ radosgw_frontend_port }}/tcp" zone: "{{ ceph_rgw_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: service: ceph zone: "{{ ceph_mds_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: service: nfs zone: "{{ ceph_nfs_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: port: "111/tcp" zone: "{{ ceph_nfs_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: port: "{{ restapi_port }}/tcp" zone: "{{ ceph_restapi_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: service: ceph zone: "{{ ceph_rbdmirror_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled firewalld: port: "5001/tcp" zone: "{{ ceph_iscsi_firewall_zone }}" + source: "{{ public_network }}" permanent: true immediate: false # if true then fails in case firewalld is stopped state: enabled
bugfix: set eval_model_params.is_training to 1 to enable training evaluate_model() references the model.cost, but model.cost is only set if is_training is set to True.
@@ -131,7 +131,7 @@ def load_dataset(data_dir, model_params, inference_mode=False): eval_model_params.use_input_dropout = 0 eval_model_params.use_recurrent_dropout = 0 eval_model_params.use_output_dropout = 0 - eval_model_params.is_training = 0 + eval_model_params.is_training = 1 sample_model_params = sketch_rnn_model.copy_hparams(eval_model_params) sample_model_params.batch_size = 1 # only sample one at a time
[fix]: updated the minor fix Github Issue:
@@ -18,7 +18,7 @@ class GrowthTrackerExport(ExportableMixin, IcdsSqlData): return data_dict[case_id][column] if case_id in data_dict.keys() else "N/A" def _fetch_data(filters, order_by, case_by_grouping=False): - query_set = ChildHealthMonthlyView.objects.filter(filters).order_by(order_by) + query_set = ChildHealthMonthlyView.objects.filter(**filters).order_by(*order_by) data_month = query_set.values('state_name', 'district_name', 'block_name', 'supervisor_name', 'awc_name', 'awc_site_code', 'person_name', 'dob', 'mother_name', 'mother_phone_number', 'pse_days_attended', 'lunch_count',
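A minimal sketch of the unpacking this fix relies on (the helper names and filter keys below are made up, not the Django ORM API): filter() expects keyword arguments and order_by() expects positional field names, so the dict and the list have to be expanded with ** and *.

def filter_rows(**conditions):
    return conditions

def order_rows(*fields):
    return fields

filters = {"month": "2019-01-01", "age_tranche__lte": 72}
order_by = ["awc_name", "person_name"]

print(filter_rows(**filters))  # {'month': '2019-01-01', 'age_tranche__lte': 72}
print(order_rows(*order_by))   # ('awc_name', 'person_name')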
Use uncompiled version of pep508checker This ensures that it will work when running across different Python versions.
@@ -546,7 +546,7 @@ def check(): click.echo(crayons.yellow('Checking PEP 508 requirements...')) # Run the PEP 508 checker in the virtualenv. - c = delegator.run('{0} {1}'.format(which('python'), pep508checker.__file__)) + c = delegator.run('{0} {1}'.format(which('python'), pep508checker.__file__.rstrip('cd'))) results = json.loads(c.out) # Load the pipfile.
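For context, a module imported from compiled bytecode can have a __file__ ending in .pyc (or .pyd), and str.rstrip("cd") trims those trailing letters back to the .py source path. A tiny illustration with a made-up path:

# The path below is illustrative only.
compiled = "/venv/lib/python2.7/site-packages/pipenv/vendor/pep508checker.pyc"
print(compiled.rstrip("cd"))
# /venv/lib/python2.7/site-packages/pipenv/vendor/pep508checker.py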
Integ tests: do not add user_properties on retries When retrying tests on failure user_properties were added twice causing an inconsistency in the tests result count.
@@ -144,7 +144,7 @@ def _setup_custom_logger(log_file): def _add_properties_to_report(item): for dimension in DIMENSIONS_MARKER_ARGS: value = item.funcargs.get(dimension) - if value: + if value and (dimension, value) not in item.user_properties: item.user_properties.append((dimension, value))
fix: handle ebook which returns no progress info closes
@@ -584,7 +584,6 @@ class AlexaClient(MediaPlayerDevice): ) if self._session.get("state"): self._media_player_state = self._session["state"] - self._media_pos = self._session.get("progress", {}).get("mediaProgress") self._media_title = self._session.get("infoText", {}).get("title") self._media_artist = self._session.get("infoText", {}).get("subText1") self._media_album_name = self._session.get("infoText", {}).get( @@ -595,8 +594,15 @@ class AlexaClient(MediaPlayerDevice): if self._session.get("mainArt") else None ) - self._media_duration = self._session.get("progress", {}).get( - "mediaLength" + self._media_pos = ( + self._session.get("progress", {}).get("mediaProgress") + if self._session.get("progress") + else None + ) + self._media_duration = ( + self._session.get("progress", {}).get("mediaLength") + if self._session.get("progress") + else None ) if not self._session.get("lemurVolume"): self._media_is_muted = (
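A sketch of the failure mode this guards against (the payload shape is assumed): when the session carries "progress": None, dict.get("progress", {}) returns None rather than the default, so the chained .get() raises AttributeError; checking the value first avoids that.

# Assumed session payload for an ebook with no progress info.
session = {"infoText": {"title": "Some audiobook"}, "progress": None}

media_pos = (
    session.get("progress", {}).get("mediaProgress")
    if session.get("progress")
    else None
)
print(media_pos)  # None, instead of an AttributeError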
adding create_backup, get_backups, remove_backup(s) Fixes
@@ -1279,11 +1279,17 @@ class Model: """ raise NotImplementedError() - def get_backups(self): + async def get_backups(self): """Retrieve metadata for backups in this model. + :return [dict]: List of metadata for the stored backups """ - raise NotImplementedError() + backups_facade = client.BackupsFacade.from_connection(self.connection()) + _backups_metadata = await backups_facade.List() + backups_metadata = _backups_metadata.serialize() + if 'list' not in backups_metadata: + raise JujuAPIError("Unexpected response metadata : %s" % backups_metadata) + return backups_metadata['list'] def block(self, *commands): """Add a new block to this model. @@ -1310,15 +1316,20 @@ class Model: """ raise NotImplementedError() - def create_backup(self, note=None, no_download=False): + async def create_backup(self, notes=None, keep_copy=False, no_download=False): """Create a backup of this model. :param str note: A note to store with the backup + :param bool keep_copy: Keep a copy of the archive on the controller :param bool no_download: Do not download the backup archive - :return str: Path to downloaded archive + :return dict: Metadata for the created backup (id, checksum, notes, filename, etc.) """ - raise NotImplementedError() + backups_facade = client.BackupsFacade.from_connection(self.connection()) + results = await backups_facade.Create(notes=notes, keep_copy=keep_copy, no_download=no_download) + if results is None: + raise JujuAPIError("Couldn't create a backup") + return results.serialize() def create_storage_pool(self, name, provider_type, **pool_config): """Create or define a storage pool. @@ -1679,7 +1690,7 @@ class Model: return await app_facade.DestroyUnits(unit_names=list(unit_names)) destroy_units = destroy_unit - def get_backup(self, archive_id): + def download_backup(self, archive_id): """Download a backup archive file. :param str archive_id: The id of the archive to download @@ -1822,13 +1833,23 @@ class Model: """ raise NotImplementedError() - def remove_backup(self, backup_id): + async def remove_backup(self, backup_id): """Delete a backup. :param str backup_id: The id of the backup to remove """ - raise NotImplementedError() + backups_facade = client.BackupsFacade.from_connection(self.connection()) + return await backups_facade.Remove([backup_id]) + + async def remove_backups(self, backup_ids): + """Delete the given backups. + + :param [str] backup_ids: The list of ids of the backups to remove + + """ + backups_facade = client.BackupsFacade.from_connection(self.connection()) + return await backups_facade.Remove(backup_ids) def remove_cached_images(self, arch=None, kind=None, series=None): """Remove cached OS images.
Update CONTRIBUTING.md Fixed typo to resolve LICENSE link.
@@ -56,6 +56,6 @@ If you discover a potential security issue in this project we ask that you notif ## Licensing -See the [LICENSE](https://github.com/aws/aws-parallelcluster/blob/develop/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution. +See the [LICENSE](https://github.com/aws/aws-parallelcluster/blob/develop/LICENSE.txt) file for our project's licensing. We will ask you to confirm the licensing of your contribution. We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
Fixed login/logout for Django 2.1. In django.contrib.auth.views, the login and logout funcs are removed as of Django 2.1. Helpdesk's urls.py needs to be updated to use the LoginView and LogoutView classes instead, which were introduced in Django 1.11.
@@ -185,13 +185,14 @@ urlpatterns += [ urlpatterns += [ url(r'^login/$', - auth_views.login, - {'template_name': 'helpdesk/registration/login.html'}, + auth_views.LoginView.as_view( + template_name='helpdesk/registration/login.html'), name='login'), url(r'^logout/$', - auth_views.logout, - {'template_name': 'helpdesk/registration/login.html', 'next_page': '../'}, + auth_views.LogoutView.as_view( + template_name='helpdesk/registration/login.html', + next_page='../'), name='logout'), ]
ci: fix gha deprecations for setup-deps action update actions/setup-go to v3 updates actions/setup-python to v4 remove usage of BSFishy/pip-action install pip packages directly remove usages of deprecated ::set-output
@@ -9,21 +9,15 @@ runs: run: | sudo apt-get update -y sudo apt-get install -y libarchive-tools - - name: "Install Python requirements with pip" - uses: BSFishy/pip-action@v1 - with: - packages: | - awscli - packaging # Go: Do this first because `Makefile` checks that the `go` version is correct. - name: "Get Go version from builder container" id: step-detect-go shell: bash run: | make "$PWD/build-aux/go-version.txt" - echo "::set-output name=go_version::$(cat "$PWD/build-aux/go-version.txt")" + echo "go_version=$(cat "$PWD/build-aux/go-version.txt")" >> $GITHUB_OUTPUT - name: "Install Go (${{ steps.step-detect-go.outputs.go_version }})" - uses: actions/setup-go@v2 + uses: actions/setup-go@v3 with: go-version: "${{ steps.step-detect-go.outputs.go_version }}" # Python @@ -32,8 +26,12 @@ runs: shell: bash run: | make "$PWD/build-aux/py-version.txt" - echo "::set-output name=py_version::$(cat "$PWD/build-aux/py-version.txt")" + echo "py_version=$(cat "$PWD/build-aux/py-version.txt")" >> $GITHUB_OUTPUT - name: "Install Py (${{ steps.step-detect-py.outputs.py_version }})" - uses: actions/setup-python@v2 + uses: actions/setup-python@v4 with: python-version: "${{ steps.step-detect-py.outputs.py_version }}" + - name: "Install Python requirements with pip" + shell: bash + run: python -m pip install awscli packaging +
update PETScNonlinearSolver to return the same status information as Newton Manually set the solution from the update in case the KSP did not converge
@@ -493,8 +493,21 @@ class PETScNonlinearSolver(NonlinearSolver): from petsc4py import PETSc as petsc + converged_reasons = {} + for key, val in six.iteritems(petsc.SNES.ConvergedReason.__dict__): + if isinstance(val, int): + converged_reasons[val] = key + + ksp_converged_reasons = {} + for key, val in six.iteritems(petsc.KSP.ConvergedReason.__dict__): + if isinstance(val, int): + ksp_converged_reasons[val] = key + NonlinearSolver.__init__(self, conf, petsc=petsc, - pmtx=pmtx, prhs=prhs, comm=comm, **kwargs) + pmtx=pmtx, prhs=prhs, comm=comm, + converged_reasons=converged_reasons, + ksp_converged_reasons=ksp_converged_reasons, + **kwargs) def __call__(self, vec_x0, conf=None, fun=None, fun_grad=None, lin_solver=None, iter_hook=None, status=None, @@ -536,11 +549,44 @@ class PETScNonlinearSolver(NonlinearSolver): snes.setMaxFunctionEvaluations(conf.if_max) snes.setFromOptions() + fun(snes, psol, prhs) + err0 = prhs.norm() + snes.solve(prhs.duplicate(), psol) if status is not None: status['time_stats'] = time.clock() - tt + if snes.reason in self.converged_reasons: + reason = 'snes: %s' % self.converged_reasons[snes.reason] + + else: + reason = 'ksp: %s' % self.ksp_converged_reasons[snes.reason] + + output('%s convergence: %s (%s, %d iterations, %d function evaluations)' + % (snes.getType(), snes.reason, reason, + snes.getIterationNumber(), snes.getFunctionEvaluations()), + verbose=conf.verbose) + + converged = snes.reason >= 0 + + if not converged: + # PETSc does not update the solution if KSP have not converged. + dpsol = snes.getSolutionUpdate() + psol -= dpsol + + fun(snes, psol, prhs) + err = prhs.norm() + + else: + err = snes.getFunctionNorm() + + if status is not None: + status['err0'] = err0 + status['err'] = err + status['n_iter'] = snes.getIterationNumber() + status['condition'] = 0 if converged else -1 + if isinstance(vec_x0, self.petsc.Vec): sol = psol
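The lookup tables above are built by scanning the petsc4py reason classes for integer attributes. A stand-in sketch of the same pattern (the class and codes here are made up, not petsc4py's):

class ConvergedReason:
    CONVERGED_FNORM_ABS = 2
    CONVERGED_SNORM_RELATIVE = 4
    DIVERGED_MAX_IT = -2

# Map each integer code to its symbolic name; the isinstance check skips
# dunder attributes such as __module__ and __doc__.
converged_reasons = {
    code: name
    for name, code in vars(ConvergedReason).items()
    if isinstance(code, int)
}
print(converged_reasons[-2])  # DIVERGED_MAX_IT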
Add IOBase.read() and write() These methods are required on IOBase-derived classes, even if they are not a formal part of the protocol. For more information, see Closes:
@@ -34,12 +34,14 @@ class IOBase: def flush(self) -> None: ... def isatty(self) -> bool: ... def readable(self) -> bool: ... + read: Callable[..., Any] def readlines(self, __hint: int = ...) -> List[bytes]: ... def seek(self, __offset: int, __whence: int = ...) -> int: ... def seekable(self) -> bool: ... def tell(self) -> int: ... def truncate(self, __size: Optional[int] = ...) -> int: ... def writable(self) -> bool: ... + write: Callable[..., Any] def writelines(self, __lines: Iterable[ReadableBuffer]) -> None: ... def readline(self, __size: Optional[int] = ...) -> bytes: ... def __del__(self) -> None: ...
Devirtualize StorageImpl destructor Summary: Further align at::StorageImpl with caffe2::StorageImpl Pull Request resolved:
@@ -21,7 +21,7 @@ struct Type; struct AT_API StorageImpl : public c10::intrusive_ptr_target { public: StorageImpl() = delete; - virtual ~StorageImpl() {}; + ~StorageImpl() {}; StorageImpl( at::DataType data_type, ptrdiff_t size,
Fixing a few typos I induced by find-replacing "node_" Now the StreamPower component also works.
@@ -159,7 +159,7 @@ class StreamPowerEroder(Component): 7. , 0. , 7. , 7. , 7. ]) >>> mg2 = RasterModelGrid((3, 7), 1.) - >>> z = np.array(mg2.x**2.) + >>> z = np.array(mg2.node_x**2.) >>> z = mg2.add_field('node', 'topographic__elevation', z) >>> mg2.status_at_node[mg2.nodes_at_left_edge] = FIXED_VALUE_BOUNDARY >>> mg2.status_at_node[mg2.nodes_at_top_edge] = CLOSED_BOUNDARY @@ -175,13 +175,13 @@ class StreamPowerEroder(Component): 13.29039716, 18.44367965, 36. ]) >>> mg3 = RasterModelGrid((5, 5), 2.) - >>> z = mg.x/100. + >>> z = mg.node_x/100. >>> z = mg3.add_field('node', 'topographic__elevation', z) >>> mg3.status_at_node[mg3.nodes_at_left_edge] = FIXED_VALUE_BOUNDARY >>> mg3.status_at_node[mg3.nodes_at_top_edge] = CLOSED_BOUNDARY >>> mg3.status_at_node[mg3.nodes_at_bottom_edge] = CLOSED_BOUNDARY >>> mg3.status_at_node[mg3.nodes_at_right_edge] = CLOSED_BOUNDARY - >>> mg3.at_node['water__unit_flux_in'] = mg3.y + >>> mg3.at_node['water__unit_flux_in'] = mg3.node_y >>> fr3 = FlowRouter(mg3) >>> Q = mg3.at_node['surface_water__discharge'] >>> sp3 = StreamPowerEroder(mg3, K_sp=1., sp_type='Unit', a_sp=1., @@ -651,7 +651,7 @@ class StreamPowerEroder(Component): x = brenth(erode_fn, 0.0, 1.0, - args=(alpha_param, beta_param, n), + args=(alpha_param, beta_param, self._n), maxiter=200) # just in case, if x>0:
Update deprecation_warning.py Users have the option to turn off the (deprecation) warnings by setting 'params.verbose=False'.
import sys from plantcv.plantcv import _version - +from plantcv.plantcv import params def deprecation_warning(warning): """Print out deprecation warning @@ -14,4 +14,5 @@ def deprecation_warning(warning): """ version = _version.get_versions() warning_msg = f"DeprecationWarning: {warning} Current PlantCV version: {version['version']} released on {version['date']}" + if params.verbose is True: print(warning_msg, file=sys.stderr)
Update CAIP Prediction compatibility table to use TF 2.9. See for the full version list.
@@ -33,8 +33,8 @@ _TF_COMPATIBILITY_OVERRIDE = { # CAIP pusher. See: # https://cloud.google.com/ai-platform/prediction/docs/runtime-version-list '2.0': '1.15', - # TODO(b/168249383) Update this once CAIP model support TF 2.8 runtime. - '2.8': '2.7', + # TODO(b/168249383) Update this once CAIP model support TF 2.9 runtime. + '2.9': '2.8', } # Google Cloud AI Platform's ModelVersion resource path format.
[IMPR] Revise GeneratorsMixin's search deprecation Making the changes discussed in CR 693515 [1]... * Deprecating any use of 'titles' in favor of 'title'. Previously its usage was only deprecated when the CirrusSearch extension isn't present. * Only deprecate CirrusSearch's where usage when our family is a WikimediaFamily. [1]
@@ -1353,20 +1353,22 @@ class GeneratorsMixin: if where not in where_types: raise Error("search: unrecognized 'where' value: {}".format(where)) if where in ('title', 'titles'): - if self.has_extension('CirrusSearch'): + if where == 'titles': + issue_deprecation_warning("where='titles'", "where='title'", + since='20160224') + where = 'title' + + if self.has_extension('CirrusSearch') and \ + isinstance(self.family, pywikibot.family.WikimediaFamily): # 'title' search was disabled, use intitle instead searchstring = 'intitle:' + searchstring issue_deprecation_warning( "where='{}'".format(where), "searchstring='{}'".format(searchstring), since='20160224') + where = None # default - else: - if where == 'titles': - issue_deprecation_warning("where='titles'", - "where='title'", - since='20160224') - where = 'title' + if not namespaces and namespaces != 0: namespaces = [ns_id for ns_id in self.namespaces if ns_id >= 0] srgen = self._generator(api.PageGenerator, type_arg='search',
Reactivate watcher dashboard plugin in devstack/local.conf.controller Since watcher dashboard can now be successfully installed by devstack, we should enable this again. Many of us get the local.conf from here, so this change is necessary; it enables the watcher dashboard plugin by default.
@@ -28,7 +28,7 @@ ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron enable_service n-cauth # Enable the Watcher Dashboard plugin -# enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard +enable_plugin watcher-dashboard git://git.openstack.org/openstack/watcher-dashboard # Enable the Watcher plugin enable_plugin watcher git://git.openstack.org/openstack/watcher
updating the file path for newer version of Windows Filesystem.file_path!="C:\\Users\\*\\AppData\\Local\\Microsoft\\Outlook*"
@@ -35,7 +35,7 @@ detect: search: '| tstats `security_content_summariesonly` count values(Filesystem.file_path) as file_path min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Filesystem where (Filesystem.file_name=*.dll OR Filesystem.file_name=*.ost) Filesystem.file_path - != "C:\\Users\\*\\My Documents\\Outlook Files\\*" by Filesystem.action Filesystem.process_id + != "C:\\Users\\*\\My Documents\\Outlook Files\\*" Filesystem.file_path!="C:\\Users\\*\\AppData\\Local\\Microsoft\\Outlook*" by Filesystem.action Filesystem.process_id Filesystem.file_name Filesystem.dest | `drop_dm_object_name("Filesystem")` | `security_content_ctime(firstTime)` | `security_content_ctime(lastTime)`' suppress:
fix: use file name for backups to Google Drive Currently, backups to Google Drive are uploaded with the absolute path as the filenames. This fix changes that. [skip ci]
@@ -169,7 +169,7 @@ def upload_system_backup_to_google_drive(): if not fileurl: continue - file_metadata = {"name": fileurl, "parents": [account.backup_folder_id]} + file_metadata = {"name": os.path.basename(fileurl), "parents": [account.backup_folder_id]} try: media = MediaFileUpload(
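os.path.basename() keeps only the final path component, which is what ends up as the file's name in Google Drive. A quick illustration with a made-up backup path:

import os

fileurl = "/home/frappe/backups/20200101_120000-site-database.sql.gz"
print(os.path.basename(fileurl))  # 20200101_120000-site-database.sql.gz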
Added helper function `News.sync_maillists` The function syncs the maillists listing with the API, which holds the IDs of messages that have news. PEPs are handled over RSS, so they are added manually in this function.
@@ -2,12 +2,35 @@ from discord.ext.commands import Cog from bot.bot import Bot +MAIL_LISTS = [ + "python-ideas", + "python-announce-list", + "pypi-announce" +] + class News(Cog): """Post new PEPs and Python News to `#python-news`.""" def __init__(self, bot: Bot): self.bot = bot + self.bot.loop.create_task(self.sync_maillists()) + + async def sync_maillists(self) -> None: + """Sync currently in-use maillists with API.""" + # Wait until guild is available to avoid running before API is ready + await self.bot.wait_until_guild_available() + + response = await self.bot.api_client.get("bot/bot-settings/news") + for mail in MAIL_LISTS: + if mail not in response["data"]: + response["data"][mail] = [] + + # Because we are handling PEPs differently, we don't include it to mail lists + if "pep" not in response["data"]: + response["data"]["pep"] = [] + + await self.bot.api_client.put("bot/bot-settings/news", json=response) def setup(bot: Bot) -> None:
Disable warnings display In case this is hiding something in travis
[pytest] DJANGO_SETTINGS_MODULE = config.settings python_files = tests.py test_*.py *_tests.py -addopts = --strict -l -p no:cacheprovider +addopts = --strict --showlocals -p no:cacheprovider --disable-warnings markers = integration: integration tests
Add documentation for Linux Bridge and OVS ingress QoS Added documentation reference for the ingress bandwidth limit QoS rule for Open vSwitch and Linux Bridge backends. Closes-Bug: Closes-Bug:
@@ -43,7 +43,7 @@ traffic directions (from the VM point of view). ==================== ================ ================ ================ Rule \ back end Open vSwitch SR-IOV Linux bridge ==================== ================ ================ ================ - Bandwidth limit Egress Egress (1) Egress + Bandwidth limit Egress\Ingress Egress (1) Egress\Ingress Minimum bandwidth - Egress - DSCP marking Egress - Egress ==================== ================ ================ ================
Bugfix: Avoid the use of `locals()` in the reminder plugin It makes it less clear where variables are used and harder to follow the code.
@@ -15,7 +15,7 @@ class RemindPlugin(WillPlugin): "reminder_text": reminder_text, } self.schedule_say(formatted_reminder_text, parsed_time, message=message) - self.say("%(reminder_text)s %(natural_datetime)s. Got it." % locals(), message=message) + self.say("%s %s. Got it." % (reminder_text, natural_datetime), message=message) @respond_to("remind (?P<reminder_recipient>(?!me).*?) to (?P<reminder_text>.*?) (at|on|in) (?P<remind_time>.*)") def remind_somebody_at(self, message, reminder_recipient=None, reminder_text=None, remind_time=None): @@ -31,4 +31,4 @@ class RemindPlugin(WillPlugin): } self.schedule_say(formatted_reminder_text, parsed_time, message=message) - self.say("%(reminder_text)s %(natural_datetime)s. Got it." % locals(), message=message) + self.say("%s %s. Got it." % (reminder_text, natural_datetime), message=message)
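A small side-by-side of the two formatting styles (values made up): the locals()-based template pulls names out of the surrounding namespace implicitly, while the explicit tuple makes the data flow visible at the call site.

reminder_text, natural_datetime = "water the plants", "in 2 hours"

# Implicit: the template reaches into the local namespace for its values.
print("%(reminder_text)s %(natural_datetime)s. Got it." % locals())

# Explicit: the formatted values are passed at the call site.
print("%s %s. Got it." % (reminder_text, natural_datetime))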
STY: whitespace Change whitespace to remove differences between old and new files.
import importlib + class Constellation(object): """Manage and analyze data from multiple pysat Instruments. @@ -136,4 +137,3 @@ class Constellation(object): for instrument in self.instruments: instrument.load(*args, **kwargs) -
Redirect sign out to current page Closes
My Challenges</a> <div class="dropdown-divider"></div> <a class="dropdown-item" - href="{% url 'userena_signout' %}?next=/"> + href="{% url 'userena_signout' %}?next={{ subdomain_absolute_uri }}"> Sign out</a> </div> </li>
Fix typo python-pixel -> python-pyxel
@@ -79,7 +79,7 @@ Install the required packages in a way appropriate for each distribution. [glfw] **Arch:** -Install [`python-pixel`](https://aur.archlinux.org/packages/python-pyxel/) by using your favorite AUR helper: +Install [`python-pyxel`](https://aur.archlinux.org/packages/python-pyxel/) by using your favorite AUR helper: ```sh yay -S python-pyxel
models: Rename 'Jitsi' to 'Jitsi Meet' in Realm model. Fixes
@@ -292,7 +292,7 @@ class Realm(models.Model): VIDEO_CHAT_PROVIDERS = { 'jitsi_meet': { - 'name': u"Jitsi", + 'name': u"Jitsi Meet", 'id': 1 }, 'google_hangouts': {
Replace more calls to get_repository_definition with get_external_repository Summary: Simply getting rid of a few more calls to `get_repository_definition` where it is unnecessary Test Plan: unit Reviewers: schrockn, alangenfeld
@@ -285,24 +285,24 @@ def resolve_attempts_count(self, graphene_info): # https://github.com/dagster-io/dagster/issues/228 def resolve_logs_path(self, graphene_info): instance = graphene_info.context.instance - repository = graphene_info.context.get_repository_definition() - return instance.log_path_for_schedule(repository.name, self._schedule.name) + external_repository = graphene_info.context.get_external_repository() + return instance.log_path_for_schedule(external_repository.name, self._schedule.name) def resolve_stats(self, graphene_info): - repository = graphene_info.context.get_repository_definition() + external_repository = graphene_info.context.get_external_repository() stats = graphene_info.context.instance.get_schedule_tick_stats_by_schedule( - repository.name, self._schedule.name + external_repository.name, self._schedule.name ) return graphene_info.schema.type_named('ScheduleTickStatsSnapshot')(stats) def resolve_ticks(self, graphene_info, limit=None): - repository = graphene_info.context.get_repository_definition() + external_repository = graphene_info.context.get_external_repository() # TODO: Add cursor limit argument to get_schedule_ticks_by_schedule # https://github.com/dagster-io/dagster/issues/2291 ticks = graphene_info.context.instance.get_schedule_ticks_by_schedule( - repository.name, self._schedule.name + external_repository.name, self._schedule.name ) if not limit: @@ -321,9 +321,9 @@ def resolve_ticks(self, graphene_info, limit=None): ] def resolve_ticks_count(self, graphene_info): - repository = graphene_info.context.get_repository_definition() + external_repository = graphene_info.context.get_external_repository() ticks = graphene_info.context.instance.get_schedule_ticks_by_schedule( - repository.name, self._schedule.name + external_repository.name, self._schedule.name ) return len(ticks)
[4.0] remove UnicodeType and Python 2 related code Also use tempfile.mkstemp instead of tempfile.mktemp which is deprecated since Python 2.3
@@ -31,12 +31,10 @@ The following generators and filters are supported: &params; """ # -# (C) Pywikibot team, 2008-2019 +# (C) Pywikibot team, 2008-2020 # # Distributed under the terms of the MIT license. # -from __future__ import absolute_import, division, unicode_literals - import os import pipes import tempfile @@ -46,7 +44,6 @@ import pywikibot from pywikibot import pagegenerators from pywikibot.bot import (MultipleSitesBot, ExistingPageBot, NoRedirectPageBot, AutomaticTWSummaryBot) -from pywikibot.tools import UnicodeType # This is required for the text that is shown when you run this script # with the parameter -help. @@ -71,38 +68,34 @@ class PiperBot(MultipleSitesBot, ExistingPageBot, NoRedirectPageBot, self.availableOptions.update({ 'filters': [], }) - super(PiperBot, self).__init__(generator=generator, **kwargs) + super().__init__(generator=generator, **kwargs) @property - def summary_parameters(self): + def summary_parameters(self) -> dict: """Return the filter parameter.""" return {'filters': ', '.join(self.getOption('filters'))} - def pipe(self, program, text): + def pipe(self, program: str, text: str) -> str: """Pipe a given text through a given program. @return: processed text after piping - @rtype: str """ - if not isinstance(text, str): # py2-py3 compatibility - text = text.encode('utf-8') pipe = pipes.Template() - pipe.append(str(program), '--') # py2-py3 compatibility + pipe.append(program, '--') # Create a temporary filename to save the piped stuff to - temp_filename = '%s.%s' % (tempfile.mktemp(), 'txt') + file, temp_filename = tempfile.mkstemp(suffix='.txt') + file.close() with pipe.open(temp_filename, 'w') as file: file.write(text) # Now retrieve the munged text with open(temp_filename, 'r') as file: - unicode_text = file.read() - if not isinstance(unicode_text, UnicodeType): # py2-py3 compatibility - unicode_text = unicode_text.decode('utf-8') + text = file.read() # clean up os.unlink(temp_filename) - return unicode_text + return text def treat_page(self): """Load the given page, do some changes, and save it."""
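Unlike the deprecated tempfile.mktemp(), tempfile.mkstemp() actually creates and opens the file, returning an OS-level file descriptor plus the path. A minimal usage sketch:

import os
import tempfile

fd, temp_filename = tempfile.mkstemp(suffix=".txt")
os.close(fd)  # mkstemp returns a raw file descriptor, not a file object

with open(temp_filename, "w") as f:
    f.write("piped text")
with open(temp_filename, "r") as f:
    print(f.read())  # piped text

os.unlink(temp_filename)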
AnimationEditor : Refactor away `__visiblePlugs` member data It was never used outside of `__expansionChanged()`, and we weren't using it to carry state from one invocation to the next.
@@ -165,7 +165,6 @@ class AnimationEditor( GafferUI.NodeSetEditor ) : self.__splitter.setSizes( betterSize ) # set initial state - self.__visiblePlugs = None self.__editablePlugs = None self._updateFromSet() self._updateFromContext( [ "frame" ] ) @@ -192,27 +191,24 @@ class AnimationEditor( GafferUI.NodeSetEditor ) : paths = pathListing.getExpandedPaths() - plugList = [] - + visiblePlugs = set() for path in paths: graphComponent = self.__scriptNode.descendant( str( path ).replace( '/', '.' ) ) for child in graphComponent.children() : if isinstance( child, Gaffer.ValuePlug ) and Gaffer.Animation.isAnimated( child ) : - plugList.append( child ) - - self.__visiblePlugs = set( plugList ) + visiblePlugs.add( child ) visible = self.__animationGadget.visiblePlugs() editable = self.__animationGadget.editablePlugs() visible.clear() - for plug in plugList : + for plug in visiblePlugs : visible.add( self.__sourceCurvePlug( plug ) ) with Gaffer.BlockedConnection( self.__editablePlugAddedConnection ) : editable.clear() - for plug in ( self.__editablePlugs or set() ) & self.__visiblePlugs : + for plug in ( self.__editablePlugs or set() ) & visiblePlugs : editable.add( self.__sourceCurvePlug( plug ) ) def __selectionChanged( self, pathListing ) :
conftest.py: Provide all individual columns to get Instead of simply passing ['*'], get the list of individual columns and pass that instead. This is useful as a way to get suppressed fields such as mackey, which we need in order to verify the written data.
@@ -72,11 +72,10 @@ def _get_table_data_cols(table: str, datadir: str, columns: List[str], cfgfile = create_dummy_config_file(datadir=datadir) - if columns is None: - # the test_parsing rouiines were written without needing to specify - # columns - columns = ['*'] - df = get_sqobject(table)(config_file=cfgfile).get(columns=columns) + sqobj = get_sqobject(table)(config_file=cfgfile) + cols = sqobj.schema.get_display_fields(columns) + + df = sqobj.get(columns=cols) if not add_os_col: return df
Fix flake8 errors in viskit/frontend.py These are blocking PRs in the CI. It's possible a flake8 update changed the error behavior.
@@ -224,8 +224,10 @@ def summary_name(exp, selector=None): # if len(rest_params) > 0: # name = "" # for k in rest_params: - # name += "%s=%s;" % (k.split(".")[-1], - # str(exp.flat_params.get(k, "")).split(".")[-1]) + # name += "%s=%s;" % ( + # k.split(".")[-1], + # str(exp.flat_params.get(k, "")).split(".")[-1] + # ) # return name return exp.params["exp_name"] @@ -260,7 +262,11 @@ def get_plot_instruction(plot_key, else: selector = core.Selector(exps_data) if legend_post_processor is None: - legend_post_processor = lambda x: x + + def default_legend_post_processor(x): + return x + + legend_post_processor = default_legend_post_processor if filters is None: filters = dict() for k, v in filters.items(): @@ -291,7 +297,7 @@ def get_plot_instruction(plot_key, group_selectors = [core.Selector(list(x[1])) for x in splitted] group_legends = [x[0] for x in splitted] else: - if group_key and group_key is not "exp_name": + if group_key and group_key != "exp_name": vs = [vs for k, vs in distinct_params if k == group_key][0] group_selectors = [ split_selector.where(group_key, v) for v in vs @@ -316,11 +322,12 @@ def get_plot_instruction(plot_key, filtered_data = group_selector.extract() if filtered_data: - if only_show_best or only_show_best_final or only_show_best_sofar: + if (only_show_best or only_show_best_final + or only_show_best_sofar): # Group by seed and sort. # ----------------------- filtered_params = core.extract_distinct_params( - filtered_data, l=0) + filtered_data, l=0) # noqa: E741 filtered_params2 = [p[1] for p in filtered_params] filtered_params_k = [p[0] for p in filtered_params] product_space = list(itertools.product(*filtered_params2)) @@ -376,7 +383,8 @@ def get_plot_instruction(plot_key, best_regret = regret best_progress = progresses data_best_regret = data - kv_string_best_regret = distinct_params_kv_string + kv_string_best_regret = \ + distinct_params_kv_string print(group_selector._filters) print('best regret: {}'.format(best_regret)) @@ -386,7 +394,8 @@ def get_plot_instruction(plot_key, exp.progress.get(plot_key, np.array([np.nan])) for exp in data_best_regret ] - # progresses = [progress[:500] for progress in progresses ] + # progresses = \ + # [progress[:500] for progress in progresses] sizes = list(map(len, progresses)) # more intelligent: max_size = max(sizes) @@ -434,8 +443,8 @@ def get_plot_instruction(plot_key, stds = np.nanstd(progresses, axis=0) if normalize_error: # and len(progresses) > 0: stds /= np.sqrt( - np.sum( - (1. - np.isnan(progresses)), axis=0)) + np.sum((1. - np.isnan(progresses)), + axis=0)) if smooth_curve: means = sliding_mean(means, window=window_size) stds = sliding_mean(stds, window=window_size)
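On the `is not` -> `!=` change: `is` compares object identity, not value, so a string built at runtime is not guaranteed to match an equal literal (and CPython 3.8+ emits a SyntaxWarning for `is` with a literal). A tiny illustration:

group_key = "".join(["exp", "_", "name"])

print(group_key == "exp_name")  # True  - value equality
print(group_key is "exp_name")  # usually False - identity comparison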
Update permission_manager_help.html translation tag
<li>{%= __("Permissions at level 0 are Document Level permissions, i.e. they are primary for access to the document.") %}</li> <li>{%= __("If a Role does not have access at Level 0, then higher levels are meaningless.") %}</li> <li>{%= __("Permissions at higher levels are Field Level permissions. All Fields have a Permission Level set against them and the rules defined at that permissions apply to the field. This is useful in case you want to hide or make certain field read-only for certain Roles.") %}</li> - <li>{%= __("You can use Customize Form to set levels on fields.") %} <a href="#Form/Customize Form">Setup > Customize Form</a></li> + <li>{%= __("You can use Customize Form to set levels on fields.") %} <a href="#Form/Customize Form">{%= __("Setup > Customize Form") %}</a></li> </ol> <hr> <h4>{%= __("User Permissions") %}:</h4>
BUG: fixed bad list Fixed bad list formatting in installation dependencies.
@@ -35,16 +35,17 @@ pysat itself may be installed from a terminal command line via:: pip install pysat There are a few packages that pysat depends on that will be installed as -needed by the installer - - * dask - * netCDF4 - * numpy - * pandas - * portalocker - * scipy - * toolz - * xarray +needed by the installer: + +#. dask +#. netCDF4 +#. numpy +#. pandas +#. portalocker +#. pytest +#. scipy +#. toolz +#. xarray .. _inst-dev:
DOC: special: Remove heaviside from "functions not in special" in the tutorial. heaviside is now in numpy, so it is no longer of a good example of a simple function that is not in scipy.special.
@@ -246,12 +246,7 @@ The `binary entropy function`_:: def binary_entropy(x): return -(sc.xlogy(x, x) + sc.xlog1py(1 - x, -x))/np.log(2) -The `Heaviside step function`_:: - - def heaviside(x): - return 0.5*(np.sign(x) + 1) - -A similar idea can also be used to get a step function on [0, 1]:: +A rectangular step function on [0, 1]:: def step(x): return 0.5*(np.sign(x) + np.sign(1 - x)) @@ -270,6 +265,4 @@ The `ramp function`_:: .. _`binary entropy function`: https://en.wikipedia.org/wiki/Binary_entropy_function -.. _`Heaviside step function`: https://stackoverflow.com/questions/15121048/does-a-heaviside-step-function-exist - .. _`ramp function`: https://en.wikipedia.org/wiki/Ramp_function
Fix predictString output length translate_back_locations and translate_back output lengths differed for some reason. use translate_back_locations in predictString instead.
@@ -89,7 +89,7 @@ class ClstmSeqRecognizer(kraken.lib.lstm.SeqRecognizer): self.rnn.inputs.aset(line.astype('float32')) self.rnn.forward() self.outputs = self.rnn.outputs.array().reshape(line.shape[0], self.rnn.noutput()) - codes = kraken.lib.lstm.translate_back(self.outputs) + codes = [x[0] for x in kraken.lib.lstm.translate_back_locations(self.outputs)] cls = clstm.Classes() cls.resize(len(codes)) for i, v in enumerate(codes):
Add comments for the 'route' data
// These are all the anchors to show in the TOC. // They are objects with the "hash" and "label" properties. anchors: [], + + // This will be auto-populated to the current route. route: '' }; },
DOC: fix indentation level in list [skip azp] [skip actions]
@@ -32,9 +32,9 @@ def spdiags(data, diags, m, n, format=None): Matrix diagonals stored row-wise diags : sequence of int or an int Diagonals to set: - - k = 0 the main diagonal - - k > 0 the kth upper diagonal - - k < 0 the kth lower diagonal + * k = 0 the main diagonal + * k > 0 the kth upper diagonal + * k < 0 the kth lower diagonal m, n : int Shape of the result format : str, optional
Fixed 'NameError: name 'adata' is not defined' Fixed 'NameError: name 'adata' is not defined' at line 768, in diffusion_conn: changed all occurrences of 'data' to 'adata' in diffusion_conn()
@@ -741,7 +741,7 @@ def select_hvg(adata, select=True): return adata ### diffusion for connectivites matrix extension -def diffusion_conn(data, min_k=50, copy=True, max_iterations=20): +def diffusion_conn(adata, min_k=50, copy=True, max_iterations=20): ''' This function performs graph diffusion on the connectivities matrix until a minimum number `min_k` of entries per row are non-zero. @@ -755,20 +755,20 @@ def diffusion_conn(data, min_k=50, copy=True, max_iterations=20): with the diffusion-enhanced connectivities matrix is in `adata.uns["neighbors"]["conectivities"]` ''' - if isinstance(data, anndata.AnnData): + if isinstance(adata, anndata.AnnData): - if 'neighbors' not in data.uns: + if 'neighbors' not in adata.uns: raise ValueError('`neighbors` not in adata object. ' 'Please compute a neighbourhood graph!') - if 'connectivities' not in data.uns['neighbors']: + if 'connectivities' not in adata.uns['neighbors']: raise ValueError('`connectivities` not in `adata.uns["neighbors"]`. ' 'Please pass an object with connectivities computed!') T = adata.uns['neighbors']['connectivities'] else: - T = data + T = adata M = T @@ -786,7 +786,7 @@ def diffusion_conn(data, min_k=50, copy=True, max_iterations=20): M.setdiag(0) - if isinstance(data, anndata.AnnData): + if isinstance(adata, anndata.AnnData): if copy: adata_tmp = adata.copy() adata_tmp.uns['neighbors'].update({'diffusion_connectivities': M})
Wildfire getReport bug fix * getReport bug fix getReport bug fix * Added empty RN * Improved implementation
@@ -155,11 +155,15 @@ script: throw 'Invalid hash. Only SHA256 and MD5 are supported.'; } var bodyXML = 'apikey='+TOKEN+'&format=xml&hash='+hash; - var resXML = sendRequest(reportUrl, bodyXML, DEFAULT_HEADERS).Body; + var resXML = sendRequest(reportUrl, bodyXML, DEFAULT_HEADERS); if(!resXML){ - return 'No results yet'; + return {error: 'Report not found'}; } - var result = JSON.parse(x2j(resXML)); + var resXMLBody = resXML.Body; + if(!resXMLBody){ + return {error: 'No results yet'}; + } + var result = JSON.parse(x2j(resXMLBody)); var report = dq(result, 'wildfire.task_info.report'); var file_info = dq(result, 'wildfire.file_info'); var returnObj = { @@ -262,6 +266,9 @@ script: case 'file': case 'wildfire-report': var reportResponse = getReport(args.md5, args.hash, args.file); + if (reportResponse.error){ + return reportResponse.error; + } return createReport(reportResponse, args.format, args.verbose); case 'wildfire-upload': var response = uploadFile(args.upload); @@ -409,3 +416,4 @@ script: description: Optional - request a structured report (formatted as XML/PDF) description: Detonate file hosted on a website through WildFire hidden: false +releaseNotes: "-" \ No newline at end of file
removing installs dups from requirements.txt Clean up the Dockerfile to not reinstall the pip modules from requirements.txt
@@ -37,9 +37,6 @@ RUN apt-get update && \ COPY setup/requirements.txt /requirements.txt RUN easy_install pip && \ pip install -r /requirements.txt && \ - pip install psycopg2==2.6.2 && \ - pip install gunicorn==19.6.0 && \ - pip install setproctitle && \ rm /requirements.txt && \ update-rc.d -f postgresql remove && \ update-rc.d -f nginx remove && \
Fix an exception error For shrink, if instance_id is not specified it should throw "No instances specified for shrink operation."
@@ -307,7 +307,7 @@ class MongoDbCluster(models.Cluster): """ if not len(instances) > 0: raise exception.TroveError( - _('Not instances specified for grow operation.') + _('No instances specified for grow operation.') ) self._prep_resize() self._check_quotas(self.context, instances) @@ -339,7 +339,7 @@ class MongoDbCluster(models.Cluster): """ if not len(instance_ids) > 0: raise exception.TroveError( - _('Not instances specified for grow operation.') + _('No instances specified for shrink operation.') ) self._prep_resize()
Remove `_assert_computation_duration_of_dispatch_is_reasonable` It should not happen in real life and highly depends on the host, the current cpu load, the number of workers, users and user classes. If there are performance issues, people will probably raise issue anyway. Then, we can run profiling.
@@ -203,31 +203,11 @@ class UsersDispatcher(Iterator): if user_count_in_current_dispatch == self._user_count_per_dispatch: break - self._assert_computation_duration_of_dispatch_is_reasonable(duration=time.perf_counter() - ts_dispatch) - return { worker_node_id: dict(sorted(user_classes_count.items(), key=itemgetter(0))) for worker_node_id, user_classes_count in sorted(self._dispatched_users.items(), key=itemgetter(0)) } - def _assert_computation_duration_of_dispatch_is_reasonable(self, duration: float) -> None: - # Safeguard against unforeseen performance issues. Ideally, - # we want each dispatch loop to be as short as possible to compute, but with - # a large amount of workers/user classes, it can take longer to come up with the dispatch solution. - # If the assertion is raised, then it could be a sign that the code needs to be optimized for the - # situation that caused the assertion to be raised. - assert duration < ( - 0.5 - if self._number_of_workers < 100 - else 1 - if self._number_of_workers < 250 - else 1.5 - if self._number_of_workers < 350 - else 3 - ), "Dispatch iteration took too much time: {}s (len(workers) = {}, len(user_classes) = {})".format( - duration, self._number_of_workers, len(self._user_classes_count) - ) - @property def _number_of_workers(self) -> int: return len(self._users_left_to_assigned)
Update README.rst Remove repeated information in demo README, along with broken links to the archived toga-demo repo.
@@ -18,32 +18,10 @@ and then run it:: This will pop up a GUI window. -If you have cloned the toga-demo repository, you can run the demo like this:: +If you have cloned the toga repository, navigate to the demo directory and run it like this:: $ pip install toga $ python -m toga_demo -Community ---------- - -Toga Demo is part of the `BeeWare suite`_. You can talk to the community through: - -* `@pybeeware on Twitter`_ - -* The `beeware/general`_ channel on Gitter. - -Contributing ------------- - -If you experience problems with Toga Demo, `log them on GitHub`_. If you -want to contribute code, please `fork the code`_ and `submit a pull request`_. - -.. _BeeWare suite: http://beeware.org -.. _Read The Docs: http://toga-demo.readthedocs.org .. _Toga widget toolkit: http://beeware.org/toga .. _toga repository on GitHub: https://github.com/beeware/toga -.. _@pybeeware on Twitter: https://twitter.com/pybeeware -.. _beeware/general: https://gitter.im/beeware/general -.. _log them on Github: https://github.com/beeware/toga-demo/issues -.. _fork the code: https://github.com/beeware/toga-demo -.. _submit a pull request: https://github.com/beeware/toga-demo/pulls
Temporarily disable pyright in BK ### Summary & Motivation Pending fix to venv build with rust dependency. ### How I Tested These Changes BK
@@ -28,7 +28,7 @@ def build_repo_wide_steps() -> List[BuildkiteStep]: return [ *build_repo_wide_black_steps(), *build_repo_wide_check_manifest_steps(), - *build_repo_wide_pyright_steps(), + # *build_repo_wide_pyright_steps(), *build_repo_wide_ruff_steps(), ]
Remove unnecessary code in BaseOutputHandler Closes
@@ -105,7 +105,7 @@ class BaseOutputHandler(BaseHandler): if not isinstance(output_dict, dict): output_dict = {"output": output_dict} - metrics_state_attrs.update({name: value for name, value in output_dict.items()}) + metrics_state_attrs.update(output_dict) if self.state_attributes is not None: metrics_state_attrs.update({name: getattr(engine.state, name, None) for name in self.state_attributes})
[ideep] Add IDEEP fallbacks for Faster-RCNN ops TSIA
#include <caffe2/ideep/operators/operator_fallback_ideep.h> #include <caffe2/ideep/utils/ideep_operator.h> +#include <caffe2/operators/bbox_transform_op.h> +#include <caffe2/operators/box_with_nms_limit_op.h> #include <caffe2/operators/channel_shuffle_op.h> +#include <caffe2/operators/collect_and_distribute_fpn_rpn_proposals_op.h> #include <caffe2/operators/conv_transpose_op.h> #include <caffe2/operators/cross_entropy_op.h> #include <caffe2/operators/dropout_op.h> +#include <caffe2/operators/elementwise_ops.h> #include <caffe2/operators/filler_op.h> #include <caffe2/operators/flatten_op.h> +#include <caffe2/operators/generate_proposals_op.h> #include <caffe2/operators/given_tensor_fill_op.h> #include <caffe2/operators/load_save_op.h> #include <caffe2/operators/loss_op.h> #include <caffe2/operators/reshape_op.h> +#include <caffe2/operators/roi_align_op.h> #include <caffe2/operators/softmax_op.h> #include <caffe2/operators/transpose_op.h> #include <caffe2/operators/utility_ops.h> // can add more non-IDEEP operators if needed namespace caffe2 { +namespace { + +struct SigmoidCPUFunctor { + template <typename T> + bool operator()(const int n, const T* x, T* y, CPUContext* /* context */) + const { + ConstEigenVectorArrayMap<T> xM(x, n); + EigenVectorArrayMap<T>(y, n) = 1. / (1. + (-xM).exp()); + return true; + } +}; + +} // namespace REGISTER_IDEEP_OPERATOR(Softmax, IDEEPFallbackOp<SoftmaxOp<float, CPUContext>>); REGISTER_IDEEP_OPERATOR( @@ -57,4 +76,27 @@ REGISTER_IDEEP_OPERATOR( REGISTER_IDEEP_OPERATOR(Load, IDEEPFallbackOp<LoadOp<CPUContext>>); REGISTER_IDEEP_OPERATOR(Save, IDEEPFallbackOp<SaveOp<CPUContext>>); +REGISTER_IDEEP_OPERATOR( + Sigmoid, + IDEEPFallbackOp< + UnaryElementwiseOp<TensorTypes<float>, CPUContext, SigmoidCPUFunctor>>); +REGISTER_IDEEP_OPERATOR( + RoIAlign, + IDEEPFallbackOp<RoIAlignOp<float, CPUContext>>); +REGISTER_IDEEP_OPERATOR( + GenerateProposals, + IDEEPFallbackOp<GenerateProposalsOp<CPUContext>>); +REGISTER_IDEEP_OPERATOR( + GenerateProposalsCPP, + IDEEPFallbackOp<GenerateProposalsOp<CPUContext>>); +REGISTER_IDEEP_OPERATOR( + CollectAndDistributeFpnRpnProposals, + IDEEPFallbackOp<CollectAndDistributeFpnRpnProposalsOp<CPUContext>>); +REGISTER_IDEEP_OPERATOR( + BoxWithNMSLimit, + IDEEPFallbackOp<BoxWithNMSLimitOp<CPUContext>>); +REGISTER_IDEEP_OPERATOR( + BBoxTransform, + IDEEPFallbackOp<BBoxTransformOp<float, CPUContext>>); + } // namespace caffe2
api/nxtdevices/ColorSensor: add rgb and light also, remove detection of brown color, because this is not supported
@@ -38,6 +38,24 @@ NXT Light Sensor NXT Color Sensor ^^^^^^^^^^^^^^^^ .. autoclass:: pybricks.nxtdevices.ColorSensor + :no-members: + + .. automethod:: pybricks.nxtdevices.ColorSensor.color + + .. automethod:: pybricks.nxtdevices.ColorSensor.ambient + + .. automethod:: pybricks.nxtdevices.ColorSensor.reflection + + .. automethod:: pybricks.nxtdevices.ColorSensor.rgb + + .. rubric:: Built-in light + + This sensor has a built-in light. You can make it red, green, blue, or turn + it off. + + .. automethod:: pybricks.nxtdevices::ColorSensor.light.on + + .. automethod:: pybricks.nxtdevices::ColorSensor.light.off NXT Ultrasonic Sensor ^^^^^^^^^^^^^^^^^^^^^