message | diff
---|---|
Update Manifest.toml
Manifest.toml file now points to REopt.jl#newboiler | @@ -517,7 +517,9 @@ uuid = "3fa0cd96-eef1-5676-8a61-b3b8758bbffb"
[[deps.REopt]]
deps = ["ArchGDAL", "Dates", "DelimitedFiles", "HTTP", "JSON", "JuMP", "LinDistFlow", "Logging", "MathOptInterface", "Roots", "Shapefile", "TestEnv"]
-path = "C:\\Users\\BRATHOD\\.julia\\dev\\REopt"
+git-tree-sha1 = "6f608cb544fdd6b380c5803e72df346c8d93b8dd"
+repo-rev = "newboiler"
+repo-url = "https://github.com/NREL/REopt.jl.git"
uuid = "d36ad4e8-d74a-4f7a-ace1-eaea049febf6"
version = "0.15.1"
|
Improve debuggability of free/reagent
Summary:
Pull Request resolved:
update README and add ForkedPdb in reagent | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
-
import logging
+import pdb
+import sys
from collections import defaultdict
from typing import List, Dict
@@ -86,3 +87,18 @@ class lazy_property(object):
value = self._fget(obj)
setattr(obj, self.__name__, value)
return value
+
+
+class ForkedPdb(pdb.Pdb):
+ """A Pdb subclass that may be used
+ from a forked multiprocessing child
+
+ """
+
+ def interaction(self, *args, **kwargs):
+ _stdin = sys.stdin
+ try:
+ sys.stdin = open("/dev/stdin") # noqa
+ pdb.Pdb.interaction(self, *args, **kwargs)
+ finally:
+ sys.stdin = _stdin
|
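As context for the diff above, a `ForkedPdb`-style debugger is normally dropped into worker code as a breakpoint. A minimal sketch, assuming a Unix system with `/dev/stdin`; the `worker` function and its argument are hypothetical:

```python
import multiprocessing
import pdb
import sys


class ForkedPdb(pdb.Pdb):
    """A Pdb subclass that may be used from a forked multiprocessing child."""

    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            sys.stdin = open("/dev/stdin")  # reattach the child to the terminal
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin


def worker(x):  # hypothetical worker for illustration
    ForkedPdb().set_trace()  # interactive pdb prompt inside the child process
    return x * 2


if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, args=(21,))
    p.start()
    p.join()
```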
concatenate docs style
* concatenate docs style
added and highlighted keywords as in the chapters on merge and combine.
* Update doc/user-guide/combining.rst
* Update doc/user-guide/combining.rst | @@ -22,10 +22,10 @@ Combining data
Concatenate
~~~~~~~~~~~
-To combine arrays along existing or new dimension into a larger array, you
-can use :py:func:`~xarray.concat`. ``concat`` takes an iterable of ``DataArray``
-or ``Dataset`` objects, as well as a dimension name, and concatenates along
-that dimension:
+To combine :py:class:`~xarray.Dataset`s / :py:class:`~xarray.DataArray`s along an existing or new dimension
+into a larger object, you can use :py:func:`~xarray.concat`. ``concat``
+takes an iterable of ``DataArray`` or ``Dataset`` objects, as well as a
+dimension name, and concatenates along that dimension:
.. ipython:: python
|
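A minimal sketch of the `concat` behaviour described in the rewritten paragraph, assuming xarray and NumPy are installed; the array contents and dimension names are made up:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(3), dims="x")
b = xr.DataArray(np.arange(3, 6), dims="x")

combined = xr.concat([a, b], dim="x")    # concatenate along an existing dimension
stacked = xr.concat([a, b], dim="run")   # concatenate along a new dimension

print(dict(combined.sizes))  # {'x': 6}
print(dict(stacked.sizes))   # {'run': 2, 'x': 3}
```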
Fix docs on normalization of AiryDisk2DKernel
[ci skip] | @@ -697,7 +697,7 @@ class AiryDisk2DKernel(Kernel2D):
2D Airy disk kernel.
This kernel models the diffraction pattern of a circular aperture. This
- kernel is normalized to a peak value of 1.
+ kernel is normalized so that it sums to 1.
Parameters
----------
|
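The corrected statement is easy to verify numerically. A quick check, assuming astropy is installed; the radius value is arbitrary:

```python
from astropy.convolution import AiryDisk2DKernel

kernel = AiryDisk2DKernel(radius=10)
print(kernel.array.sum())  # close to 1.0: the kernel is normalized to unit sum
print(kernel.array.max())  # well below 1.0: the peak value is not 1
```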
when BUILD_CAFFE2_OPS is OFF, torch-python needs a direct dep on nccl
Summary:
tracks supporting this with CI
Pull Request resolved: | @@ -692,6 +692,7 @@ if (BUILD_PYTHON)
${TORCH_SRC_DIR}/csrc/cuda/nccl.cpp
${TORCH_SRC_DIR}/csrc/cuda/python_nccl.cpp)
list(APPEND TORCH_PYTHON_COMPILE_DEFINITIONS USE_NCCL)
+ list(APPEND TORCH_PYTHON_LINK_LIBRARIES __caffe2_nccl)
if (USE_SYSTEM_NCCL)
endif()
endif()
|
chore: typo
missed out in last commit | @@ -1311,7 +1311,7 @@ Object.assign(frappe.utils, {
result = no_of_decimals > max_no_of_decimals
? result.toFixed(max_no_of_decimals)
: result;
- return result + ' ' + symbol;
+ return result + ' ' + map.symbol;
}
}
|
Mistakenly lost interpolation variable
Dropped in | @@ -35,7 +35,9 @@ def get_cli_endpoint() -> Endpoint:
# ensure that configs exist
if not os.path.exists(endpoint.funcx_dir):
- log.info("No existing configuration found at %s. Initializing...")
+ log.info(
+ "No existing configuration found at %s. Initializing...", endpoint.funcx_dir
+ )
endpoint.init_endpoint()
return endpoint
|
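The bug fixed above is the usual missing-argument case with %-style logging: without the extra positional argument, the literal `%s` ends up in the log. A small sketch, with a placeholder logger name and path:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("example")   # placeholder logger name
funcx_dir = "/home/user/.funcx"      # placeholder path

# Buggy form: no argument is supplied, so the raw "%s" is logged.
log.info("No existing configuration found at %s. Initializing...")

# Fixed form: the path is interpolated lazily by the logging module.
log.info("No existing configuration found at %s. Initializing...", funcx_dir)
```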
stop the tutorial build for docs deploy
Removed: QISKIT_DOCS_BUILD_TUTORIALS: 'always' | @@ -24,6 +24,5 @@ jobs:
env:
encrypted_rclone_key: ${{ secrets.encrypted_rclone_key }}
encrypted_rclone_iv: ${{ secrets.encrypted_rclone_iv }}
- QISKIT_DOCS_BUILD_TUTORIALS: 'always'
run: |
tools/deploy_documentation.sh
|
Remove deprecated ironic-agent element
ironic-agent is deprecated. ironic-python-agent-ramdisk is the new
element to build a ramdisk with ironic-python-agent | @@ -1611,7 +1611,7 @@ DIB support for Proliant Hardware Manager
To create an agent ramdisk with ``Proliant Hardware Manager``,
use the ``proliant-tools`` element in DIB::
- disk-image-create -o proliant-agent-ramdisk ironic-agent fedora proliant-tools
+ disk-image-create -o proliant-agent-ramdisk ironic-python-agent-ramdisk fedora proliant-tools
Disk Erase Support
^^^^^^^^^^^^^^^^^^
@@ -1641,7 +1641,7 @@ enabling/disabling a clean step.
To create an agent ramdisk with ``Proliant Hardware Manager``, use the
``proliant-tools`` element in DIB::
- disk-image-create -o proliant-agent-ramdisk ironic-agent fedora proliant-tools
+ disk-image-create -o proliant-agent-ramdisk ironic-python-agent-ramdisk fedora proliant-tools
See the `proliant-tools`_ for more information on creating agent ramdisk with
``proliant-tools`` element in DIB.
|
Explain emissions related to storage
Hope this is a reasonable public explanation of what was discussed in issue. Also see issue | @@ -72,6 +72,10 @@ const faq = {
question: 'Do you take into account imports and exports of electricity?',
answer: 'Yes, we do. Imports and exports can be see on the map as small arrows between different areas [link to question about arrows]. Detailed information can be seen in the charts shown when you click on an area.',
},
+ emissionsOfStored: {
+ question: 'What about emissions generating electricity which is then stored in batteries or used to fill reservoirs?',
+ answer: 'As currently only a small proportion of electricity is stored, inaccuracies in the modelling of these emissions should not make much difference to the total at present. However given the increasing importance of storage we hope to take it into account more fully in our model shortly.',
+ },
guaranteesOfOrigin: {
question: 'What are green certificates, and how are they taken into account?',
answer: 'When producers of renewable energy produce electricity, they can create Guarantees of Origin (or Renewable Energy Certificates) - a proof that renewable electricity has been produced and distributed into the power grid. These guarantees can then be sold, giving other non-renewable electricity producers a way to "compensate" for their emissions and the right to claim that the electricity they produce comes from renewable sources, regardless of its actual, physical source.<br><br> This means that electricity consumers can be told that their electricity is green, when this is not true in the physical sense. This problematic, as it removes consumer incentives to consume electricity at the best time (for instance when the amount of renewable electricity in the grid is at its highest). This map therefore excludes Guarantees of Origin, instead offering a location-based, physical picture of the power grid, in order to responsibilize consumers.',
|
Change session engine to db
Since we dropped memcache support, we need to store user sessions
in the database. | @@ -125,7 +125,7 @@ LOGIN_REDIRECT_URL = '/home'
# Sessions
# ---------------------------------------------------------
-SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+SESSION_ENGINE = 'django.contrib.sessions.backends.db'
SESSION_SAVE_EVERY_REQUEST = True
|
Replace add-cache with init-cache for testing data tier devices
These tests check that adding devices, which are already in
the cache tier, to the data tier will cause an error.
init-cache causes the same effect as add-cache in this case. | @@ -103,7 +103,9 @@ class AddDataTestCase1(SimTestCase):
an exception.
"""
devices = _DEVICE_STRATEGY()
- command_line = ["--propagate", "pool", "add-cache"] + [self._POOLNAME] + devices
+ command_line = (
+ ["--propagate", "pool", "init-cache"] + [self._POOLNAME] + devices
+ )
RUNNER(command_line)
self.check_error(
StratisCliInUseOtherTierError,
@@ -117,7 +119,9 @@ class AddDataTestCase1(SimTestCase):
an exception.
"""
devices = _DEVICE_STRATEGY_2()
- command_line = ["--propagate", "pool", "add-cache"] + [self._POOLNAME] + devices
+ command_line = (
+ ["--propagate", "pool", "init-cache"] + [self._POOLNAME] + devices
+ )
RUNNER(command_line)
self.check_error(
StratisCliInUseOtherTierError,
|
Update gradescope.md
Added missing word 'gen' to 'otter gen' example usages | @@ -71,19 +71,19 @@ otter gen data.csv
If we needed the requirements in `requirements.txt`, we would add
```
-otter -r requirements.txt data.csv
+otter gen -r requirements.txt data.csv
```
Now let's say that we maintained to different directories of tests: `tests` with public versions of tests and `hidden-tests` with hidden versions. Because I want to grade with the hidden tests, my call then becomes
```
-otter -t hidden-tests -r requirements.txt data.csv
+otter gen -t hidden-tests -r requirements.txt data.csv
```
Now let's say that I need some functions defined in `utils.py`; then I would add this to the last part of my `otter gen` call:
```
-otter -t hidden-tests -r requirements.txt data.csv utils.py
+otter gen -t hidden-tests -r requirements.txt data.csv utils.py
```
**An important note about relative imports:** Because of the way that the Gradescope autograder is structured and in what directories files are executed, Otter only supports imports from a file called `utils.py` and this import *must* be of the form `from utils import *` in the notebook, otherwise the import will fail in the Gradescope autograder.
@@ -95,7 +95,7 @@ The Gradescope generator supports providing a pass/fail threshold. A threshold i
The threshold is specified with the `--threshold` flag:
```
-otter -t hidden-tests -r requirements.txt data.csv --threshold 0.75
+otter gen -t hidden-tests -r requirements.txt data.csv --threshold 0.75
```
For example, if a student passes a 2- and 1- point test but fails a 4-point test (a 43%) on a 25% threshold, they will get all 7 points. If they only pass the 1-point test (a 14%), they will get 0 points.
@@ -109,7 +109,7 @@ For example, if a student passes a 2- and 1- point test but fails a 4-point test
As an example, the command below scales the number of points to 3:
```
-otter -t hidden-tests -r requirements.txt data.csv --points 3
+otter gen -t hidden-tests -r requirements.txt data.csv --points 3
```
#### Showing Autograder Results
@@ -119,7 +119,7 @@ The generator lastly allows intructors to specify whether or not the stdout of t
This behavior is turned off by default and can be turned on by passing the `--show-results` flag to `otter gen`.
```
-otter -t hidden-tests -r requirements.txt data.csv --show-results
+otter gen -t hidden-tests -r requirements.txt data.csv --show-results
```
If `--show-results` is passed, the stdout will be made available to students _only after grades are published on Gradescope_. The [next section](#gradescope-results) details more about what is included in the stdout.
|
Corrected Aliases.
Aliases now run correctly again. These are dynamically created by the alias functionality. | @@ -702,7 +702,6 @@ class BindAlias(Modifier):
except KeyError:
pass
return
-
self.context.register('command/bind', bind)
def alias(command, *args):
@@ -724,9 +723,18 @@ class BindAlias(Modifier):
return
context.alias[args[0]] = ' '.join(args[1:])
return
-
self.context.register('command/alias', alias)
+ def alias_execute(command, *args):
+ context = self.context
+ if command in self.alias:
+ aliased_command = self.alias[command]
+ for cmd in aliased_command.split(';'):
+ context.console("%s\n" % cmd)
+ else:
+ raise ValueError # This is not an alias.
+ self.context.register('command_re/.*', alias_execute)
+
def server_console(command, *args):
_ = self.context._kernel.translation
port = 23
@@ -2051,14 +2059,17 @@ class Kernel:
yield "--- System Commands ---"
yield 'loop \t- loop <command>'
yield 'end \t- end <commmand>'
- yield 'timer<?> \t- timer<?> <duration> <iterations>'
- yield 'device \t- device [<value>]'
+ yield 'timer.* \t- timer<?> <duration> <iterations>'
+ yield 'register \t- register'
+ yield 'context \t- context'
yield 'set \t- set [<key> <value>]'
yield 'control \t- control [<executive>]'
yield 'module \t- module [(open|close) <module_name>]'
yield 'modifier \t- modifier [(open|close) <module_name>]'
yield 'schedule \t- schedule'
yield 'channel \t- channel [(open|close|save) <channel_name>]'
+ yield 'device \t- device [<value>]'
+ yield 'flush \t- flush'
yield 'shutdown \t- shutdown'
return
# +- controls.
@@ -2098,7 +2109,6 @@ class Kernel:
except ValueError:
yield _("Syntax Error: timer<name> <times> <interval> <command>")
return
-
# Kernel Element commands.
elif command == 'register':
if len(args) == 0:
@@ -2362,23 +2372,33 @@ class Kernel:
for line in command(command_name, *args):
yield line
except TypeError:
- pass
+ pass # Command match is non-generating.
return # Command matched context command.
for command_re in self.match('%s/command_re/.*' % active_context._path):
cmd_re = command_re.split('/')[-1]
match = re.compile(cmd_re)
if match.match(command):
command_funct = self.registered[command_re]
+ try:
for line in command_funct(command, *args):
yield line
+ except TypeError:
+ pass # Command match is non-generating.
+ except ValueError:
+ continue # command match rejected.
return # Command matched context command_re
for command_re in self.match('command_re/.*'):
cmd_re = command_re.split('/')[-1]
match = re.compile(cmd_re)
if match.match(command):
command_funct = self.registered[command_re]
+ try:
for line in command_funct(command, *args):
yield line
+ except TypeError:
+ pass # Command match is non-generating.
+ except ValueError:
+ continue # If the command_re raised a value error it rejected the match.
return # Context matched global command_re.
try: # Command matches global command.
for line in self.registered['command/%s' % command](command, *args):
@@ -2386,7 +2406,7 @@ class Kernel:
except KeyError:
yield _('Error. Command Unrecognized: %s') % command
except TypeError:
- pass # Did not yield anything. Still success. But, not a generator.
+ pass # Command match is non-generating.
class Job:
|
Remove map element id attribute
new vue api <3 | <template>
- <div ref="root" id="map" />
+ <div ref="root" class="map" />
</template>
<script lang="ts">
@@ -178,7 +178,7 @@ export default createComponent({
</script>
<style lang="scss" scoped>
-#map {
+.map {
position: absolute;
top: 0;
bottom: 0;
|
transform_code_ada.mako: replace .typ with .get_type()
TN: | @@ -7,8 +7,8 @@ ${parser.parser.generate_code()}
if ${parser.pos_var} /= No_Token_Index then
## Create the transform wrapper node
- ${parser.res_var} := ${parser.typ.name()}
- (${parser.typ.name()}_Alloc.Alloc (Parser.Mem_Pool));
+ ${parser.res_var} := ${parser.get_type().name()}
+ (${parser.get_type().name()}_Alloc.Alloc (Parser.Mem_Pool));
## Compute and set the sloc range for this AST node. Reminders:
## * start_pos the name for the position of the lexer before this parser
@@ -24,7 +24,7 @@ if ${parser.pos_var} /= No_Token_Index then
then No_Token_Index
else ${parser.pos_var} - 1);
- % for field, arg in zip(parser.typ.get_parse_fields(), args):
+ % for field, arg in zip(parser.get_type().get_parse_fields(), args):
## Set children fields into the created node
${parser.res_var}.${field.name} :=
% if field.type.is_ast_node:
|
Remove non-working log statements
The monkeypatching done by this module is executed too early, before
settings are fully configured, so the logging does not work; we end
up with `No handlers could be found for logger "z.files.utils"`. | """
Monkey patch and defuse all stdlib xml packages and lxml.
"""
-import logging
import sys
patched_modules = (
@@ -17,11 +16,6 @@ if any(module in sys.modules for module in patched_modules):
from defusedxml import defuse_stdlib # noqa
-log = logging.getLogger('z.files.utils')
-log.warn(
- 'Calling defusedxml.defuse_stdlib to patch known xml '
- 'security vulnerabilities')
-
defuse_stdlib()
import lxml # noqa
@@ -46,11 +40,9 @@ def create_rdf_parser_without_externals(target, store):
See https://bugzilla.mozilla.org/show_bug.cgi?id=1306954
"""
parser = _rdfxml_create_parser(target, store)
- log.warn('Using a custom XML parser without external entities or params')
parser.setFeature(feature_external_ges, 0)
parser.setFeature(feature_external_pes, 0)
return parser
-log.warn('Patching rdfxml create_parser() to disable external entities/params')
rdfxml.create_parser = create_rdf_parser_without_externals
|
Removed Yahoo Weather API
As of Jan. 3rd 2019 Yahoo Weather API has been retired. | @@ -865,4 +865,3 @@ API | Description | Auth | HTTPS | CORS |
| [OpenWeatherMap](http://openweathermap.org/api) | Weather | `apiKey` | No | Unknown |
| [Storm Glass](https://stormglass.io/) | Global marine weather from multiple sources | `apiKey` | Yes | Yes |
| [Weatherbit](https://www.weatherbit.io/api) | Weather | `apiKey` | Yes | Unknown |
-| [Yahoo! Weather](https://developer.yahoo.com/weather/) | Weather | `apiKey` | Yes | Unknown |
|
Update SlowFast_FasterRCNN_en.md
Add SlowFast_FasterRCNN mAP and model (English version) | @@ -65,6 +65,12 @@ python main.py --test \
-c configs/detection/ava/ava.yaml
```
+
+| architecture | depth | Pretrain Model | frame length x sample rate | MAP | AVA version | model |
+| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |------------- |
+| SlowFast | R50 | [Kinetics 400](https://videotag.bj.bcebos.com/PaddleVideo/SlowFast/SlowFast_8*8.pdparams) | 8 x 8 | 23.2 | 2.1 | [`link`](https://videotag.bj.bcebos.com/PaddleVideo-release2.2/SlowFastRCNN_AVA.pdparams) |
+
+
## Inference
The action detection of this project is divided into two stages. In the first stage, humans' proposals are obtained, and then input into the SlowFast+FasterRCNN model for action recognition.
|
Fix `'AppFuture' object does not support indexing`
Fixes | @@ -35,7 +35,7 @@ def run_test():
outputs=['{0}/hello.txt'.format(outdir),
'{0}/this.txt'.format(outdir),
'{0}/cat.txt'.format(outdir)])
- print(f[0].result())
+ print(f.result())
time.sleep(0.1)
assert 'hello.txt' in os.listdir(outdir), "hello.txt is missing"
|
langkit.expressions: handle function annotations in
This is necessary to start adding type hints to the functions that are
decorated with and
TN: | @@ -435,27 +435,35 @@ class DocumentedExpression:
elif not inspect.isfunction(func):
return 'expr', ['???']
- args, varargs, keywords, defaults = inspect.getargspec(func)
+ params = list(inspect.signature(func).parameters.values())
+ varargs: Opt[str] = None
+ kwargs: Opt[str] = None
+ if params and params[-1].kind == inspect.Parameter.VAR_KEYWORD:
+ kwargs = params.pop().name
+ if params and params[-1].kind == inspect.Parameter.VAR_POSITIONAL:
+ varargs = params.pop().name
# If present, discard the first argument (self), which is irrelevant
# for documentation purposes. The second argument is the prefix for the
# attribute expression.
- argspec = list(args)
if first_arg_is_self:
- argspec.pop(0)
- prefix_name = argspec.pop(0) if self.is_attribute else None
+ params.pop(0)
+ prefix_name = params.pop(0).name if self.is_attribute else None
# Remove positional and keyword arguments which are already provided by
# partial evaluation.
- argspec = argspec[len(self.args):]
+ params = params[len(self.args):]
+ params_dict = {p.name: p for p in params}
for kw in self.kwargs:
- argspec.remove(kw)
+ params_dict.pop(kw)
+
+ argspec: Opt[List[str]] = [p.name for p in params_dict.values()]
# Describe variadic constructors as such
if varargs:
argspec.append(r'\*' + varargs)
- if keywords:
- argspec.append(r'\*\*' + keywords)
+ if kwargs:
+ argspec.append(r'\*\*' + kwargs)
if self.parameterless:
argspec = None
@@ -956,10 +964,11 @@ def auto_attr_custom(name, *partial_args, **partial_kwargs):
def construct(self):
return fn(self, *self.sub_expressions, **self.kwargs)
- nb_args = len(inspect.getargspec(fn).args)
-
+ # "fn" is supposed to take at least one positional argument: the "self"
+ # expression. If there are more arguments, make it use the call syntax,
+ # otherwise make it an attribute.
+ nb_args = len(inspect.signature(fn).parameters)
assert nb_args > 1
-
decorator = (attr_expr if nb_args == 2 else attr_call)
decorator(attr_name, *partial_args, doc=doc, **partial_kwargs)(type(
|
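The diff above replaces the removed `inspect.getargspec` with `inspect.signature`. A small sketch of how the same information (positional names, `*args`, `**kwargs`) is read from the new API; the sample function is made up:

```python
import inspect


def sample(self, prefix, a, b=1, *rest, **options):
    pass


params = list(inspect.signature(sample).parameters.values())

kwargs_name = None
varargs_name = None
if params and params[-1].kind == inspect.Parameter.VAR_KEYWORD:
    kwargs_name = params.pop().name       # "options"
if params and params[-1].kind == inspect.Parameter.VAR_POSITIONAL:
    varargs_name = params.pop().name      # "rest"

print([p.name for p in params])   # ['self', 'prefix', 'a', 'b']
print(varargs_name, kwargs_name)  # rest options
```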
expressions.envs.is_visible_from: fix typo in docstring
TN: minor | @@ -287,9 +287,9 @@ def is_visible_from(referenced_env, base_env):
Expression that will return whether an env's associated compilation unit is
visible from another env's compilation unit.
- TODO: This is mainly exposed on envs because the CompilationUnit type is
- not exposed in the DSL yet. We might want to change that eventually if
- there are other compelling reasons to do it.
+ TODO: This is mainly exposed on envs because the AnalysisUnit type is not
+ exposed in the DSL yet. We might want to change that eventually if there
+ are other compelling reasons to do it.
:param AbstractExpression base_env: The environment from which we want
to check visibility.
|
add collection of keywords per rule
In reference to issue, this commit aims to add an initial set of keywords per rule.
These keywords will be used later in the "rule" bot command in order to make rule identification easier | @@ -124,35 +124,44 @@ class RulesView(APIView):
return Response([
(
- f"Follow the {pydis_coc}."
+ f"Follow the {pydis_coc}.",
+ {"coc", "conduct", "code"}
),
(
- f"Follow the {discord_community_guidelines} and {discord_tos}."
+ f"Follow the {discord_community_guidelines} and {discord_tos}.",
+ {"guidelines", "discord_tos"}
),
(
- "Respect staff members and listen to their instructions."
+ "Respect staff members and listen to their instructions.",
+ {"staff", "instructions"}
),
(
"Use English to the best of your ability. "
- "Be polite if someone speaks English imperfectly."
+ "Be polite if someone speaks English imperfectly.",
+ {"english", "language"}
),
(
"Do not provide or request help on projects that may break laws, "
- "breach terms of services, or are malicious or inappropriate."
+ "breach terms of services, or are malicious or inappropriate.",
+ {"infraction", "tos", "breach", "malicious", "inappropriate"}
),
(
- "Do not post unapproved advertising."
+ "Do not post unapproved advertising.",
+ {"ads", "advertising"}
),
(
"Keep discussions relevant to the channel topic. "
- "Each channel's description tells you the topic."
+ "Each channel's description tells you the topic.",
+ {"off-topic", "topic", "relevance"}
),
(
"Do not help with ongoing exams. When helping with homework, "
- "help people learn how to do the assignment without doing it for them."
+ "help people learn how to do the assignment without doing it for them.",
+ {"exams", "assignment", "assignments", "homework"}
),
(
- "Do not offer or ask for paid work of any kind."
+ "Do not offer or ask for paid work of any kind.",
+ {"work", "money"}
),
])
|
Remove safeKeywords from VISAAdapter
Implemented by | @@ -57,14 +57,6 @@ class VISAAdapter(Adapter):
resource_name = "GPIB0::%d::INSTR" % resource_name
self.resource_name = resource_name
self.manager = pyvisa.ResourceManager(visa_library)
- safeKeywords = [
- 'resource_name', 'timeout', 'chunk_size', 'lock', 'query_delay', 'send_end',
- 'read_termination', 'write_termination'
- ]
- kwargsCopy = copy.deepcopy(kwargs)
- for key in kwargsCopy:
- if key not in safeKeywords:
- kwargs.pop(key)
self.connection = self.manager.open_resource(
resource_name,
**kwargs
|
Minor typo in IP address.
127.0.01 replaced by 127.0.0.1 | @@ -67,7 +67,7 @@ You can start the Datasette process running using the following::
You can confirm that Datasette is running on port 8000 like so::
- curl 127.0.01:8000/-/versions.json
+ curl 127.0.0.1:8000/-/versions.json
# Should output JSON showing the installed version
Datasette will not be accessible from outside the server because it is listening on ``127.0.0.1``. You can expose it by instead listening on ``0.0.0.0``, but a better way is to set up a proxy such as ``nginx``.
|
Update TensorFlow version in development_guide.rst
Change TensorFlow 1.3 references to TensorFlow 2.0. Remove instructions for installing TensorFlow 1.3 with older version of Python. | @@ -42,39 +42,20 @@ importing StrawberryFields in Python.
TensorFlow support
------------------
-To use Strawberry Fields with TensorFlow, version 1.3 of
-TensorFlow is required. This can be installed alongside Strawberry Fields
+To use Strawberry Fields with TensorFlow, version 2.0 of
+TensorFlow (or higher) is required. This can be installed alongside Strawberry Fields
as follows:
.. code-block:: console
- pip install strawberryfields tensorflow==1.3
+ pip install strawberryfields tensorflow
Or, to install Strawberry Fields and TensorFlow with GPU and CUDA support:
.. code-block:: console
- pip install strawberryfields tensorflow-gpu==1.3
+ pip install strawberryfields tensorflow-gpu
-Note that TensorFlow version 1.3 is only supported on Python versions
-less than 3.7. You can use the following command to check your
-Python version:
-
-.. code-block:: console
-
- python --version
-
-If the above prints out 3.7, and you still want to use TensorFlow, you will need
-to install Python 3.6. The recommended method is to install
-`Anaconda for Python 3 <https://www.anaconda.com/download/>`_.
-
-Once installed, you can then create a Python 3.6 Conda environment:
-
-.. code-block:: console
-
- conda create --name sf_tensorflow_env python=3.6
- conda activate sf_tensorflow_env
- pip install strawberryfields tensorflow==1.3
Software tests
--------------
|
Update documentation of 3D Slicer extension
Fixes | medical image informatics, image processing,
and three-dimensional visualization.
-You can download and install Slicer 4.11 from
-`their download website <https://download.slicer.org/>`_ or, if you are on
-macOS, using `Homebrew <https://docs.brew.sh/>`_:
-``brew cask install slicer-nightly``.
-
TorchIO provides a 3D Slicer extension for quick experimentation and
visualization of the package features without any coding.
The TorchIO extension can be easily installed using the
-`Extensions Manager <https://www.slicer.org/wiki/Documentation/4.10/SlicerApplication/ExtensionsManager>`_.
+`Extensions Manager <https://slicer.readthedocs.io/en/latest/user_guide/extensions_manager.html>`_.
The code and installation instructions are available on
`GitHub <https://github.com/fepegar/SlicerTorchIO>`_.
+.. note:: The Preview version (built nightly) is recommended. You can download
+ and install Slicer from `their download website <https://download.slicer.org/>`_
+ or, if you are on macOS, using `Homebrew <https://docs.brew.sh/>`_::
+
+ brew tap homebrew/cask-versions && brew cask install slicer-preview
+
TorchIO Transforms
------------------
@@ -62,7 +63,7 @@ Hovering the mouse over the transforms will show tooltips extracted from the
TorchIO documentation.
.. image:: https://raw.githubusercontent.com/fepegar/SlicerTorchIO/master/Screenshots/usage_4.png
- :alt: Select volume nodes
+ :alt: Apply transform
You can click on the ``Toggle volumes`` button to switch between input and output
|
Fix python client backcompat tests
Summary: These were using the wrong folder, so were never generating any snapshots
Test Plan: BK, now fails this test
Reviewers: alangenfeld, sidkmenon, rexledesma | from typing import Iterator
import pytest
+from dagster.utils import file_relative_path
from dagster_graphql.schema import create_schema
from gql import Client, gql
@@ -12,7 +13,7 @@ def get_validator_client() -> Client:
def generate_legacy_query_strings() -> Iterator[str]:
- for (dir_name, _, file_names) in os.walk("./query_snapshots"):
+ for (dir_name, _, file_names) in os.walk(file_relative_path(__file__, "./query_snapshots")):
for file_name in file_names:
if file_name.endswith(".graphql"):
with open(os.path.join(dir_name, file_name)) as f:
|
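The fix above works because a relative path given to `os.walk` is resolved against the current working directory rather than the test file, so the walk silently yields nothing when pytest is started from another directory. A simplified sketch of the idea, with a stand-in for `dagster.utils.file_relative_path`:

```python
import os


def file_relative_path(dunderfile, relative_path):
    # simplified stand-in for dagster.utils.file_relative_path
    return os.path.join(os.path.dirname(dunderfile), relative_path)


snapshot_dir = file_relative_path(__file__, "./query_snapshots")
for dir_name, _, file_names in os.walk(snapshot_dir):
    print(dir_name, file_names)
```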
Separate cache key for ContentNodeSlim ancestors
Not sure if cache entries would collide with the ones from the
ContentNodeViewset, so just to make sure, let's separate these into two
separate keys, one for each of the Viewsets. | @@ -519,7 +519,7 @@ class ContentNodeSlimViewset(viewsets.ReadOnlyModelViewSet):
@detail_route(methods=['get'])
def ancestors(self, request, **kwargs):
- cache_key = 'contentnode_ancestors_{pk}'.format(pk=kwargs.get('pk'))
+ cache_key = 'contentnode_slim_ancestors_{pk}'.format(pk=kwargs.get('pk'))
if cache.get(cache_key) is not None:
return Response(cache.get(cache_key))
|
[varLib.merger] Handle missing PairPos format1/2 subtables in AligningMerger
Fixes
The Format2 is still failing in my test case. Investigating. | @@ -280,7 +280,11 @@ def _PairPosFormat1_merge(self, lst, merger):
# Merge everything else; makes sure Format is the same.
merger.mergeObjects(self, lst,
exclude=('Coverage',
- 'PairSet', 'PairSetCount'))
+ 'PairSet', 'PairSetCount',
+ 'ValueFormat1', 'ValueFormat2'))
+
+ self.ValueFormat1 = reduce(int.__or__, [l.ValueFormat1 for l in lst], 0)
+ self.ValueFormat2 = reduce(int.__or__, [l.ValueFormat2 for l in lst], 0)
empty = ot.PairSet()
empty.PairValueRecord = []
@@ -435,7 +439,11 @@ def _PairPosFormat2_merge(self, lst, merger):
exclude=('Coverage',
'ClassDef1', 'Class1Count',
'ClassDef2', 'Class2Count',
- 'Class1Record'))
+ 'Class1Record',
+ 'ValueFormat1', 'ValueFormat2'))
+
+ self.ValueFormat1 = reduce(int.__or__, [l.ValueFormat1 for l in lst], 0)
+ self.ValueFormat2 = reduce(int.__or__, [l.ValueFormat2 for l in lst], 0)
# Align coverages
glyphs, _ = _merge_GlyphOrders(merger.font,
|
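The merge above ORs the per-master value formats so the combined subtable advertises every field that any master carries, with a missing subtable contributing 0. A tiny sketch of the bit arithmetic; the flag constants follow the OpenType ValueFormat bit assignments:

```python
from functools import reduce

X_PLACEMENT = 0x0001
X_ADVANCE = 0x0004

# one entry per master; 0 stands for a master without the subtable
formats = [X_ADVANCE, X_PLACEMENT | X_ADVANCE, 0]
merged = reduce(int.__or__, formats, 0)
print(hex(merged))  # 0x5 -> XPlacement | XAdvance
```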
Update ATen doc with optional syntax
Summary:
Update the readme to reflect the recent optional syntax change.
Pull Request resolved: | @@ -81,6 +81,21 @@ signature.
- `*` is a special sentinel argument, which doesn't translate into an actual
argument, but indicates that in the Python bindings, any subsequent arguments
must be specified as keyword arguments (and cannot be provided positionally).
+- `?` is trailing question mark that annotate an argument to be an optional type, grep for
+ `optional` to find some example usages. In general, most functions will not need to use
+ this, but there are some cases that we want to use optional for the different types:
+ - You want to pass in a `None` to a ATen function/method from Python, and handles the
+ None type in the C++ side. For example, `clamp(Tensor self, Scalar? min=None, Scalar? max=None)`
+ can take `None` for its `min` and `max` parameter, and do dispatch to different
+ backend if one of the parameters is `None`. Optional type can accept a `None` type
+ (`nullopt` in C++) from Python and use the [C++ Optional class](https://en.cppreference.com/w/cpp/utility/optional) to interact with the parameters.
+ - You want a default value which is fine in Python but would cause ambiguity in C++.
+ For example, `norm(Tensor self, Scalar p=2, int64_t dim, bool keepdim=false)` would
+ cause ambiguity in C++ since it default args must be adjacent and `p` could not
+ have a default value when `dim` does not. Therefore, we need to make `p` as a
+ optional Scalar, and make `p=2` when `p` is not passed in (nullopt).
+ - You want a value to default to the same value as another argument (this cannot be
+ expressed in C++ default arguments).
Functions with no tensor inputs are called *factory functions*, and
are handled specially by code generation. If your function is behaving
@@ -163,29 +178,6 @@ to unconditionally dispatch to a native function whose name is different than
the name in the public ATen API, but this is generally frowned upon (just name
them the same thing!)
-### `python_default_init`
-
-```
-python_default_init:
- argument_name: initializing_expression
-```
-
-A map from argument names to default initializing expressions written in C++. Such default
-expressions will only be used in Python API (in the C++ API, these arguments are
-mandatory).
-
-There are a few situations where you might like to use this functionality:
-
-- You want a default value which is fine in Python but would cause ambiguity in C++.
- For example, `norm(Tensor self, real p=2, int64_t dim=1)` would cause ambiguity
- with long tensors in C++. Therefore, we need to make `p=2` a python only default
- initialization value.
-
-- You want a value to default to the same value as another argument (this cannot
- be expressed in C++ default arguments).
-
-If you grep for `python_default_init`, you can find examples of this being used;
-in general, most functions will not need to use this.
## Writing an implementation in C++
@@ -290,7 +282,7 @@ Tensor& s_add_(Tensor& self, const Tensor& other) {
By default, `Tensor` arguments to ATen functions are always defined, unless
you explicitly specified that an undefined tensor was permissible by writing
-`Tensor?` or `Tensor x={}`.
+`Tensor?` or `Tensor? x={}`, the latter one is needed when you have to assign a default value in C++ (e.g. in the middle of other parameters with default values).
The rules for returning undefined Tensors are a bit more subtle, but there
is only one case you have to remember:
|
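Seen from Python, the optional (`?`) annotation described above is what lets a call pass `None` for arguments such as `clamp`'s `min` and `max`. A small illustration, assuming PyTorch is installed:

```python
import torch

x = torch.tensor([-2.0, 0.5, 3.0])
print(torch.clamp(x, min=None, max=1.0))  # tensor([-2.0000,  0.5000,  1.0000])
print(torch.clamp(x, min=0.0))            # tensor([0.0000, 0.5000, 3.0000])
```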
correct way to remove prefix
Tested-by: Build Bot | @@ -36,10 +36,9 @@ class DesignDocumentNamespace(Enum):
@classmethod
def unprefix(cls, name):
- # could have _design/...
- name = name.lstrip('_design/')
- # could have _dev
- return name.lstrip('_dev')
+ for prefix in ('_design/', '_dev'):
+ name = name[name.startswith(prefix) and len(prefix):]
+ return name
|
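The fix above matters because `str.lstrip` removes any leading *characters* from the given set rather than a literal prefix, so names starting with letters drawn from `"_design/"` were being over-trimmed. A short sketch of the corrected idiom (the sample names are made up; on Python 3.9+ `str.removeprefix` does the same job):

```python
def unprefix(name):
    for prefix in ("_design/", "_dev"):
        # `name.startswith(prefix) and len(prefix)` evaluates to len(prefix)
        # when the prefix matches and to False (i.e. 0) otherwise, so the
        # slice drops the prefix only when it is actually present.
        name = name[name.startswith(prefix) and len(prefix):]
    return name


print(unprefix("_design/_dev_users"))        # "_users"
print(unprefix("users"))                     # "users"
print("_design/ddlview".lstrip("_design/"))  # "lview": lstrip strips characters, not a prefix
```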
REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf.example: fix old url link
fix old url link from
to | # This ruleset allows you to control how ModSecurity will handle traffic
# originating from Authorized Vulnerability Scanning (AVS) sources. See
# related blog post -
-# http://blog.spiderlabs.com/2010/12/advanced-topic-of-the-week-handling-authorized-scanning-traffic.html
+# https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/updated-advanced-topic-of-the-week-handling-authorized-scanning-traffic/
#
# Allow List ASV network block (no blocking or logging of AVS traffic) Update
# IP network block as appropriate for your AVS traffic
|
skip the _count() for get() operations too.
We'll leave the full counting to getz() operations for now. | @@ -162,8 +162,10 @@ class DiskQueue(OKTypesMixin):
with open(fname, 'rb') as fh:
ret = self.decompress(fh.read())
ret = ret, self.read_meta(fname)
+ sz = os.stat(fname).st_size
self.unlink_(fname)
- self._count()
+ self.cn -= 1
+ self.sz -= sz
return ret
def getz(self, sz=SPLUNK_MAX_MSG):
|
Blocks-Backend-Events
add change event to TabItem | @@ -87,6 +87,16 @@ class TabItem(BlockContext):
def get_template_context(self):
return {"label": self.label}
+ def change(self, fn: Callable, inputs: List[Component], outputs: List[Component]):
+ """
+ Parameters:
+ fn: Callable function
+ inputs: List of inputs
+ outputs: List of outputs
+ Returns: None
+ """
+ self.set_event_trigger("change", fn, inputs, outputs)
+
class Blocks(Launchable, BlockContext):
def __init__(self, theme="default"):
|
Update test_dwavecliquesampler.py
Add test_qubit_coupling_range | @@ -133,6 +133,15 @@ class TestDWaveCliqueSampler(unittest.TestCase):
with self.assertRaises(ValueError):
chimera_sampler.sample(bqm)
+ def test_qubit_coupling_range(self):
+ n = pegasus_sampler.largest_clique_size
+
+ bqm = dimod.BinaryQuadraticModel({},
+ {(u, v): -2 for u in range(n) for v in range(u+1, n)}, 'SPIN')
+
+ pegasus_sampler.sample(bqm, chain_strength=-0.5).resolve()
+
+
class TestFailover(unittest.TestCase):
@unittest.mock.patch('dwave.system.samplers.clique.DWaveSampler',
|
Add self.sf and self.tooling to the CumulusCI robot library for access
to Salesforce REST API's | @@ -4,6 +4,7 @@ from cumulusci.core.exceptions import TaskNotFoundError
from cumulusci.core.exceptions import TaskOptionsError
from cumulusci.core.config import TaskConfig
from robot.libraries.BuiltIn import BuiltIn
+from simple_salesforce import Salesforce
class CumulusCI(object):
""" Library for accessing CumulusCI for the local git project
@@ -28,6 +29,8 @@ class CumulusCI(object):
def __init__(self, org_name):
self.org_name = org_name
+ self.sf = self._init_api()
+ self.tooling = self._init_api('tooling/')
@property
def config(self):
@@ -47,6 +50,12 @@ class CumulusCI(object):
"""
BuiltIn().set_suite_variable('${LOGIN_URL}', self.org.start_url)
+ def get_org_info(self):
+ """ Returns a dictionary of the org information for the current target
+ Salesforce org
+ """
+ return self.org.config
+
def login_url(self, org=None):
""" Returns the login url which will automatically log into the target
Salesforce org. By default, the org_name passed to the library
@@ -92,6 +101,21 @@ class CumulusCI(object):
task_class, task_config = self._init_task(class_path, options, task_config)
return self._run_task(task_class, task_config)
+ def _init_api(self, base_url=None):
+ if self.api_version:
+ api_version = self.api_version
+ else:
+ api_version = self.config.project_config.project__package__api_version
+
+ rv = Salesforce(
+ instance=self.org.instance_url.replace('https://', ''),
+ session_id=self.org.access_token,
+ version=api_version,
+ )
+ if base_url is not None:
+ rv.base_url += base_url
+ return rv
+
def _init_config(self):
config = CliConfig()
return config
|
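A hypothetical illustration of what the new `self.sf` handle enables inside a robot keyword: direct REST calls through simple_salesforce. The instance URL, session id, API version, and SOQL query below are placeholders, not values from the project:

```python
from simple_salesforce import Salesforce

sf = Salesforce(
    instance="example.my.salesforce.com",  # placeholder instance URL
    session_id="00D...example-token...",   # placeholder access token
    version="43.0",                        # placeholder API version
)
records = sf.query("SELECT Id, Name FROM Account LIMIT 5")
print(records["totalSize"])
```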
Fix generic export corner case
Resolves issue where exporting a generic pipeline shows no
options when no runtime configurations are defined | @@ -111,9 +111,15 @@ const getAllPaletteNodes = (palette: any): any[] => {
return nodes;
};
-const isRuntimeTypeAvailable = (data: IRuntimeData, type: string): boolean => {
- const configs = data.platforms.find(p => p.id === type)?.configs ?? [];
- return configs.length > 0;
+const isRuntimeTypeAvailable = (data: IRuntimeData, type?: string): boolean => {
+ for (const p of data.platforms) {
+ if (type === undefined || p.id === type) {
+ if (p.configs.length > 0) {
+ return true;
+ }
+ }
+ }
+ return false;
};
const getDisplayName = (
@@ -534,7 +540,7 @@ const PipelineWrapper: React.FC<IProps> = ({
});
let title = `${actionType} pipeline`;
- if (type !== undefined) {
+ if (actionType === 'export' || type !== undefined) {
title = `${actionType} pipeline for ${runtimeDisplayName}`;
if (!isRuntimeTypeAvailable(runtimeData, type)) {
|
(ambassador-ratelimit) Make cross-compilation work
Originally-Committed-To:
Originally-Committed-As: | @@ -19,7 +19,7 @@ $(bins): %: FORCE vendor
$(GO) install $(pkg)/cmd/$@
$(bins:%=image/%): %: FORCE vendor
- $(IMAGE_GO) install $(pkg)/cmd/${@:image/%=%}
+ $(IMAGE_GO) build -o ${@} $(pkg)/cmd/${@:image/%=%}
.SECONDARY:
# The only reason .DELETE_ON_ERROR is off by default is for historical
|
vagrant: remove centos/8 workaround
The CentOS 8 vagrant box has finally been updated [1] with a recent
version (the latest one, 2011, which means CentOS 8.3).
We don't need to download the vagrant libvirt box with a direct url
anymore from the CentOS infrastructure.
[1] | #!/bin/bash
-vagrant box remove --force --provider libvirt --box-version 1905.1 centos/8 || true
-vagrant box remove --force --provider libvirt --box-version 0 centos/8 || true
-vagrant box add --provider libvirt --name centos/8 https://cloud.centos.org/centos/8/vagrant/x86_64/images/CentOS-8-Vagrant-8.3.2011-20201204.2.x86_64.vagrant-libvirt.box || true
-
retries=0
until [ $retries -ge 5 ]
do
|
Update apt_unclassified.txt
> apt_tinyscouts | @@ -1573,9 +1573,3 @@ am.my-zo.org
# Reference: https://www.virustotal.com/gui/file/c07a332b932a211c5477d3a9941c5ee308aa3463eb3ed3dd1ddba09987261aba/detection
watchcartoon-live.org
-
-# Reference: https://twitter.com/ShadowChasing1/status/1552595370961944576
-# Reference: https://www.virustotal.com/gui/file/fb92611e3260e372be7799d17dd03109f5d0882efa3838923787ca8e16e31e06/detection
-# Reference: https://www.virustotal.com/gui/file/5b229e1a2a86f59258d007385cf167760c3bb3377de41cf69c9ead4256c4fc45/detection
-
-http://164.92.205.182
|
some small fixes to edge case failures of elemental generator. Move dataframe
concatenation to base generator evaluate method | @@ -130,13 +130,14 @@ class ElementalFeatureGenerator(BaseGenerator):
def __init__(self, composition_df, feature_types=None, remove_constant_columns=False):
super(BaseGenerator, self).__init__()
self.composition_df = composition_df
+ if type(self.composition_df) == pd.Series:
+ self.composition_df = pd.DataFrame(self.composition_df)
self.feature_types = feature_types
self.remove_constant_columns = remove_constant_columns
if self.feature_types is None:
self.feature_types = ['composition_avg', 'arithmetic_avg', 'max', 'min', 'difference']
def fit(self, X, y=None):
- # TODO: consider changing df to X throughout this to be simpler
self.df = X
self.y = y
self.original_features = self.df.columns
@@ -146,9 +147,6 @@ class ElementalFeatureGenerator(BaseGenerator):
df = self.generate_magpie_features()
- # remove original features. Don't necessarily want to do this?
- #df = df.drop(self.original_features, axis=1)
-
# delete missing values, generation makes a lot of garbage.
df = DataframeUtilities().clean_dataframe(df)
df = df.select_dtypes(['number']).dropna(axis=1)
@@ -156,7 +154,6 @@ class ElementalFeatureGenerator(BaseGenerator):
if self.remove_constant_columns is True:
df = DataframeUtilities().remove_constant_columns(dataframe=df)
- #assert self.composition_df[self.composition_df.columns[0]] not in df.columns
df = df[sorted(df.columns.tolist())]
return df, self.y
@@ -523,7 +520,7 @@ class ElementalFeatureGenerator(BaseGenerator):
if 'Site2Site3' in self.feature_types:
magpiedata_dict_list_toinclude.append(magpiedata_dict_list[29])
- df = self.df # Initialize the final df as initial df then add magpie features to it
+ #df = self.df # Initialize the final df as initial df then add magpie features to it
for magpiedata_dict in magpiedata_dict_list_toinclude:
df_magpie = pd.DataFrame.from_dict(data=magpiedata_dict, orient='index')
# Need to reorder compositions in new dataframe to match input dataframe
@@ -532,9 +529,9 @@ class ElementalFeatureGenerator(BaseGenerator):
df_magpie.index.name = self.composition_df.columns[0]
df_magpie.reset_index(inplace=True)
# Merge magpie feature dataframe with originally supplied dataframe
- df = DataframeUtilities().merge_dataframe_columns(dataframe1=df, dataframe2=df_magpie)
+ #df = DataframeUtilities().merge_dataframe_columns(dataframe1=df, dataframe2=df_magpie)
- return df
+ return df_magpie
def _get_computed_magpie_features(self, composition, data_path, site_dict=None):
magpiedata_composition_average = {}
|
Fix some spelling and grammar in README.md
Fix some spelling and grammar in the readme up to and including "Time Estimate to Set Up a RaspiBlitz". (includes minor subjective improvements) | 
-**The RaspiBlitz is a do-it-yourself Lightning Node based on LND running together with a Bitcoin-Fullnode on a RaspberryPi 3/4 - with a HDD/SSD and an nice display for easy setup & monitoring.**
+**The RaspiBlitz is a do-it-yourself Lightning Node based on LND running together with a Bitcoin-Fullnode on a RaspberryPi 3/4 - with an HDD/SSD and a nice display for easy setup & monitoring.**
RaspiBlitz is mainly targeted for learning how to run your own node decentralized from home - because: Not your Node, Not your Rules. Discover & develop the growing ecosystem of the Lightning Network by becoming a full part of it. Build it as part of a [workshop](WORKSHOP.md) or as a weekend project yourself.
@@ -45,18 +45,18 @@ You can connect the following Wallet-Apps to your RaspiBlitz:
* **Fully Noded** (iOS) [details](https://apps.apple.com/us/app/fully-noded/id1436425586)
* **SendMany** (Android) [details](https://github.com/fusion44/sendmany/blob/master/README.md)
-Also much more features like Touchscreen, Autopilot, DynDNS, SSH-Tunneling, UPS Support, ...
+Also many more features like Touchscreen, Autopilot, DynDNS, SSH-Tunneling, UPS Support, ...
## DeepDive Video (July 2020)
<img src="pictures/raspiblitz-deepdive.png" alt="DeepDive Video" width="400">
--watch--> https://www.youtube.com/watch?v=QXUGg45CWLo
-## Time Estimate to Setup a RaspiBlitz
+## Time Estimate to Set Up a RaspiBlitz
-The RaspiBlitz is optimized for being setup during a workshop at a hackday or conference (see [detailed workshop tutorial](WORKSHOP.md)). When it comes ready assembled together with a up-to-date synced blockchain it's possible to have it ready in about 2 to 3 hours - most is waiting time.
+The RaspiBlitz is optimized for being setup during a workshop at a hackday or conference (see [detailed workshop tutorial](WORKSHOP.md)). When it comes fully assembled with an up-to-date synced blockchain, it's possible to have it ready in about 2 to 3 hours - most of it is waiting time.
-If you start at home ordering the parts from Amazon (see shopping list below) then it's a weekend project with a lot of download and syncing time where you can do other stuff while checking on the progress from time to time.
+If you start at home ordering the parts from Amazon (see shopping list below) then it's a weekend project with a lot of downloading and syncing time where you can do other stuff while checking on the progress from time to time.
## Hardware Needed
|
Remove unique constraint before using `GenericForeignKey` in `WorkflowState`
Prevent migrations crash on SQLite before landed in Django | @@ -10,6 +10,10 @@ class Migration(migrations.Migration):
]
operations = [
+ migrations.RemoveConstraint(
+ model_name="workflowstate",
+ name="unique_in_progress_workflow",
+ ),
migrations.AlterField(
model_name="workflowstate",
name="page",
@@ -54,10 +58,6 @@ class Migration(migrations.Migration):
name="workflowstate_base_ct_id_idx",
),
),
- migrations.RemoveConstraint(
- model_name="workflowstate",
- name="unique_in_progress_workflow",
- ),
migrations.AddConstraint(
model_name="workflowstate",
constraint=models.UniqueConstraint(
|
allow skipping warnings
It would be useful to not show these warning messages if there are more than
one valid configuration method. | @@ -3244,7 +3244,7 @@ def get_cloud_config_value(name, vm_, opts, default=None, search_global=True):
return value
-def is_provider_configured(opts, provider, required_keys=()):
+def is_provider_configured(opts, provider, required_keys=(), log_message=True):
'''
Check and return the first matching and fully configured cloud provider
configuration.
@@ -3257,6 +3257,7 @@ def is_provider_configured(opts, provider, required_keys=()):
return False
for key in required_keys:
if opts['providers'][alias][driver].get(key, None) is None:
+ if log_message is True:
# There's at least one require configuration key which is not
# set.
log.warning(
@@ -3279,6 +3280,7 @@ def is_provider_configured(opts, provider, required_keys=()):
skip_provider = False
for key in required_keys:
if provider_details.get(key, None) is None:
+ if log_message is True:
# This provider does not include all necessary keys,
# continue to next one.
log.warning(
|
Simplify PR template
This PR simplifies the Pull request template to be identical to the new simpler template used by cuDF, adding in #rapidsai/cudf#10774
Closes
Authors:
- Mark Harris (https://github.com/harrism)
Approvers:
- AJ Schmidt (https://github.com/ajschmidt8)
- Bradley Dice (https://github.com/bdice)
URL: | -<!--
-
-Thank you for contributing to cuSpatial :)
-
-Here are some guidelines to help the review process go smoothly.
-
-1. Please write a description in this text box of the changes that are being
- made.
-
-2. Please ensure that you have written units tests for the changes made/features
- added.
-
-3. If you are closing an issue please use one of the automatic closing words as
- noted here: https://help.github.com/articles/closing-issues-using-keywords/
-
-4. If your pull request is not ready for review but you want to make use of the
- continuous integration testing facilities please label it with `[WIP]`.
-
-5. If your pull request is ready to be reviewed without requiring additional
- work on top of it, then remove the `[WIP]` label (if present) and replace
- it with `[REVIEW]`. If assistance is required to complete the functionality,
- for example when the C/C++ code of a feature is complete but Python bindings
- are still required, then add the label `[HELP-REQ]` so that others can triage
- and assist. The additional changes then can be implemented on top of the
- same PR. If the assistance is done by members of the rapidsAI team, then no
- additional actions are required by the creator of the original PR for this,
- otherwise the original author of the PR needs to give permission to the
- person(s) assisting to commit to their personal fork of the project. If that
- doesn't happen then a new PR based on the code of the original PR can be
- opened by the person assisting, which then will be the PR that will be
- merged.
-
-6. Once all work has been done and review has taken place please do not add
- features or make changes out of the scope of those requested by the reviewer
- (doing this just add delays as already reviewed code ends up having to be
- re-reviewed/it is hard to tell what is new etc!). Further, please do not
- rebase your branch on master/force push/rewrite history, doing any of these
- causes the context of any comments made by reviewers to be lost. If
- conflicts occur against master they should be resolved by merging master
- into the branch used for making the pull request.
-
-Many thanks in advance for your cooperation!
-
--->
+## Description
+<!-- Provide a standalone description of changes in this PR. -->
+<!-- Reference any issues closed by this PR with "closes #1234". -->
+<!-- Note: The pull request title will be included in the CHANGELOG. -->
+
+## Checklist
+- [ ] I am familiar with the [Contributing Guidelines](https://github.com/rapidsai/cuspatial/blob/HEAD/CONTRIBUTING.md).
+- [ ] New or existing tests cover these changes.
+- [ ] The documentation is up to date with these changes.
|
Fix return URL when deleting a peering session.
After deleting a peering session the user should be redirected to the
IX the peering session belonged to. | @@ -240,4 +240,6 @@ class PeeringSessionEdit(AddOrEditView):
class PeeringSessionDelete(DeleteView):
model = PeeringSession
- # return redirect('peering:ix_details', slug=peering_session.internet_exchange.slug)
+
+ def get_return_url(self, obj):
+ return obj.internet_exchange.get_absolute_url()
|
dashboard: if no host is available, let's just skip these plays.
If there is no host available, let's just skip these plays.
Closes: | status: "Complete"
end: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
-- hosts: "{{ groups[mgr_group_name] | default(groups[mon_group_name]) }}"
+# using groups[] here otherwise it can't fallback to the mon if there's no mgr group.
+# adding an additional | default(omit) in case where no monitors are present (external ceph cluster)
+- hosts: "{{ groups[mgr_group_name] | default(groups[mon_group_name]) | default(omit) }}"
become: true
pre_tasks:
- name: set ceph dashboard install 'In Progress'
|
ready for PR
fixed argparser | @@ -123,7 +123,7 @@ def plot_histogram(hist_data): # pragma: no cover
plt.xlabel('Confidence')
plt.ylabel('Number of Samples')
fig = plt.gcf()
- fig.set_size_inches(15, 15)
+ fig.set_size_inches(10, 10)
fig.savefig(cmdline_args.histogram, bbox_inches='tight')
@@ -154,7 +154,7 @@ def get_evaluation_metrics(targets, predictions): # pragma: no cover
return report, precision, f1, accuracy
-def remove_empty_intent_examples(targets, predictions):
+def remove_empty_intent_examples(targets, predictions, messages, confidences):
"""Removes those examples without intent."""
targets = np.array(targets)
@@ -195,14 +195,15 @@ def show_nlu_errors(targets, preds, messages, conf): # pragma: no cover
"""Log messages which result in wrong predictions and save them to file"""
errors = {}
- errors['nlu'] = [
+ # it could also be interesting to include entity-errors later, therefore we start with a "intent_errors" key
+ errors['intent_errors'] = [
{"text": messages[i],
"true_label": targets[i],
"prediction": preds[i],
"confidence": conf[i]}
for i in range(len(messages)) if not (targets[i] == preds[i])]
- if errors['nlu']:
+ if errors['intent_errors']:
errors = json.dumps(errors, indent=4)
logger.info("\n\nThese intent examples could not be classified "
"correctly \n{}".format(errors))
@@ -242,7 +243,7 @@ def evaluate_intents(targets, preds, messages, conf): # pragma: no cover
title='Intent Confusion matrix')
# save confusion matrix to file before showing it
fig = plt.gcf()
- fig.set_size_inches(15, 15)
+ fig.set_size_inches(10, 10)
fig.savefig(cmdline_args.confmat, bbox_inches='tight')
plt.show()
@@ -772,8 +773,6 @@ def return_entity_results(results, dataset_name):
def main():
- parser = create_argument_parser()
- cmdline_args = parser.parse_args()
utils.configure_colored_logging(cmdline_args.loglevel)
@@ -812,4 +811,6 @@ def main():
if __name__ == '__main__': # pragma: no cover
+ parser = create_argument_parser()
+ cmdline_args = parser.parse_args()
main()
|
project_file.mako: refactor variables for compiler switches
This refactoring introduces new variables to hold compiler switches.
This makes it possible in language extensions to conveniently use the
appropriate set of options for hand written source files.
TN: | @@ -134,12 +134,14 @@ library project ${lib_name} is
null;
end case;
- for Default_Switches ("Ada") use
- Mode_Args & Ada_Mode_Args & Generated_Ada_Cargs;
- for Default_Switches ("C") use Mode_Args & C_Mode_Args;
+ Common_Ada_Cargs := Mode_Args & Ada_Mode_Args;
+ Common_C_Cargs := Mode_Args & C_Mode_Args;
+
+ for Default_Switches ("Ada") use Common_Ada_Cargs & Generated_Ada_Cargs;
+ for Default_Switches ("C") use Common_C_Cargs;
% for f in extra_source_files:
- for Switches ("${f}") use Mode_Args & Ada_Mode_Args & Manual_Ada_Cargs;
+ for Switches ("${f}") use Common_Ada_Cargs & Manual_Ada_Cargs;
% endfor
case Build_Mode is
|
fix minor error: GCNLayerSAGE->GraphSAGELayer
fix minor error: GCNLayerSAGE->GraphSAGELayer | @@ -78,15 +78,15 @@ class GraphSAGE(nn.Module):
self.layers = nn.ModuleList()
# input layer
- self.layers.append(GCNLayerSAGE(in_feats, n_hidden, activation=activation,
+ self.layers.append(GraphSAGELayer(in_feats, n_hidden, activation=activation,
dropout=dropout, use_pp=use_pp, use_lynorm=True))
# hidden layers
for i in range(n_layers - 1):
self.layers.append(
- GCNLayerSAGE(n_hidden, n_hidden, activation=activation, dropout=dropout,
+ GraphSAGELayer(n_hidden, n_hidden, activation=activation, dropout=dropout,
use_pp=False, use_lynorm=True))
# output layer
- self.layers.append(GCNLayerSAGE(n_hidden, n_classes, activation=None,
+ self.layers.append(GraphSAGELayer(n_hidden, n_classes, activation=None,
dropout=dropout, use_pp=False, use_lynorm=False))
def forward(self, g):
|
Two qubit tomo cardinal
Finished work on the two qubit cardinal tomo. Needs to be tested. | @@ -47,26 +47,61 @@ def two_qubit_off_on(q0, q1, RO_target='all'):
def two_qubit_tomo_cardinal(cardinal,
q0,
q1,
- RO_target,
- timings_dict,
- verbose=False):
- # TODO: docstring
+ RO_target):
+ '''
+ Cardinal tomography for two qubits.
+
+ Args:
+ cardinal (int) : index of prep gate
+ q0, q1 (str) : target qubits for the sequence
+ RO_target (str) : target for the RO, can be a qubit name or 'all'
+ '''
tomo_pulses = ['I ', 'X180 ', 'Y90 ', 'mY90 ', 'X90 ', 'mX90 ']
for tp in tomo_pulses:
- tomo_list_q0 = [tp + q0]
- tomo_list_q1 = [tp + q1]
+ tomo_list_q0 = [tp + q0 + '\n']
+ tomo_list_q1 = [tp + q1 + '\n']
+
+ prep_index_q0 = int(cardinal % len(tomo_list_q0))
+ prep_index_q1 = int(((cardinal - prep_index_q0) / len(tomo_list_q0) %
+ len(tomo_list_q1)))
- prep_pulse_q0 = tomo_list_q0[int(cardinal % len(tomo_list_q0))]
- prep_pulse_q1 = tomo_list_q1[int(cardinal % len(tomo_list_q1))]
+ prep_pulse_q0 = tomo_list_q0[prep_index_q0]
+ prep_pulse_q1 = tomo_list_q1[prep_index_q1]
filename = join(base_qasm_path, 'two_qubit_tomo_cardinal.qasm')
qasm_file = mopen(filename, mode='w')
qasm_file.writelines('qubit {} \nqubit {} \n'.format(q0, q1))
- for p_q0 in tomo_list_q0:
+ # Tomography pulses
for p_q1 in tomo_list_q1:
+ for p_q0 in tomo_list_q0:
+ qasm_file.writelines('\ninit_all\n')
+ qasm_file.writelines(prep_pulse_q0)
+ qasm_file.writelines(prep_pulse_q1)
+ qasm_file.writelines(p_q0)
+ qasm_file.writelines(p_q1)
+ qasm_file.writelines('RO ' + RO_target + ' \n')
+
+ # Calibration pulses
+ cal_points = [['I ', 'I '],
+ ['X180 ', 'I '],
+ ['I ', 'X180 '],
+ ['X180 ', 'X180 ']]
+ cal_pulses = []
+ # every calibration point is repeated 7 times. This is copied from the
+ # script for Tektronix driven qubits. I do not know if this repetition
+ # is important or even necessary here.
+ for seq in cal_points:
+ cal_pulses += [[seq[0] + q0 + '\n', seq[1] +
+ q1 + '\n', 'RO ' + RO_target + '\n']] * 7
+
+ for seq in cal_pulses:
qasm_file.writelines('\ninit_all\n')
- qasm_file.writelines(prep_pulse_q0 + '\n')
+ for p in seq:
+ qasm_file.writelines(p)
+
+ qasm_file.close()
+ return qasm_file
def two_qubit_AllXY(q0, q1, RO_target='all',
|
Receive MPP: Use persisted payment status to decide whether to
fulfill HTLCs. Without this commit, we might timeout a part of
a payment if the client is shut down before all parts are
fulfilled. | @@ -1614,27 +1614,26 @@ class LNWallet(LNWorker):
def add_received_htlc(self, short_channel_id, htlc: UpdateAddHtlc, expected_msat: int) -> Optional[bool]:
""" return MPP status: True (accepted), False (expired) or None """
payment_hash = htlc.payment_hash
- mpp_status, htlc_set = self.received_htlcs.get(payment_hash, (None, set()))
+ is_accepted = (self.get_payment_status(payment_hash) == PR_PAID)
+ is_expired, htlc_set = self.received_htlcs.get(payment_hash, (False, set()))
key = (short_channel_id, htlc)
if key not in htlc_set:
htlc_set.add(key)
- if mpp_status is None:
+ if not is_accepted and not is_expired:
total = sum([_htlc.amount_msat for scid, _htlc in htlc_set])
first_timestamp = min([_htlc.timestamp for scid, _htlc in htlc_set])
- expired = time.time() - first_timestamp > MPP_EXPIRY
- if expired:
- mpp_status = False
- elif total == expected_msat:
- mpp_status = True
+ is_expired = time.time() - first_timestamp > MPP_EXPIRY
+ if not is_expired and total == expected_msat:
+ is_accepted = True
self.set_payment_status(payment_hash, PR_PAID)
util.trigger_callback('request_status', self.wallet, payment_hash.hex(), PR_PAID)
- if mpp_status is not None:
+ if is_accepted or is_expired:
htlc_set.remove(key)
if len(htlc_set) > 0:
- self.received_htlcs[payment_hash] = mpp_status, htlc_set
+ self.received_htlcs[payment_hash] = is_expired, htlc_set
elif payment_hash in self.received_htlcs:
self.received_htlcs.pop(payment_hash)
- return mpp_status
+ return True if is_accepted else (False if is_expired else None)
def get_payment_status(self, payment_hash):
info = self.get_payment_info(payment_hash)
|
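A simplified model of the acceptance logic described in the commit above (illustrative only, not Electrum's actual implementation): parts are accepted once their sum reaches the expected amount, they expire after a fixed window, and a persisted paid flag short-circuits the check so a restart cannot time out an already fulfilled payment.

import time

MPP_EXPIRY = 120  # seconds; illustrative value only

def mpp_status(parts, expected_msat, already_paid):
    # parts: iterable of (amount_msat, timestamp) tuples for one payment hash.
    # Returns True (accept), False (expired) or None (keep waiting for more parts).
    if already_paid:
        return True  # persisted payment status wins, even across restarts
    total = sum(amount for amount, _ in parts)
    first_timestamp = min(ts for _, ts in parts)
    if time.time() - first_timestamp > MPP_EXPIRY:
        return False
    return True if total == expected_msat else None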
Fix PyPI uploading by selecting a particular CPython version
Thanks to | @@ -32,8 +32,8 @@ jobs:
- name: Make packages
run: python setup.py sdist bdist_wheel
- name: Publish package on PyPI
- if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags')
- uses: pypa/gh-action-pypi-publish@f91f98d65eb3eb032447201d64f2c25d67c28efe
+ if: matrix.python-version == 3.9 && github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags')
+ uses: pypa/[email protected]
with:
user: __token__
password: ${{ secrets.PYPI_API_TOKEN }}
|
Use QubitId in stepresult DM.
fixes . | @@ -781,26 +781,26 @@ class StepResult:
and non-zero floats of the specified accuracy."""
return wave_function.dirac_notation(self.state(), decimals)
- def density_matrix(self, indices: Iterable[int] = None) -> np.ndarray:
+ def density_matrix(self, qubits: List[ops.QubitId] = None) -> np.ndarray:
"""Returns the density matrix of the wavefunction.
- Calculate the density matrix for the system on the given qubit
- indices, with the qubits not in indices that are present in self.state
- traced out. If indices is None the full density matrix for self.state
+ Calculate the density matrix for the system on the list, qubits.
+ Any qubits not in the list that are present in self.state will be
+ traced out. If qubits is None the full density matrix for self.state
is returned, given self.state follows standard Kronecker convention
of numpy.kron.
For example:
self.state = np.array([1/np.sqrt(2), 1/np.sqrt(2)],
dtype=np.complex64)
- indices = None
+ qubits = None
gives us \rho = \begin{bmatrix}
0.5 & 0.5
0.5 & 0.5
\end{bmatrix}
Args:
- indices: list containing indices for qubits that you would like
+ qubits: list containing qubit IDs that you would like
to include in the density matrix (i.e.) qubits that WON'T
be traced out.
@@ -813,17 +813,19 @@ class StepResult:
corresponding to the state.
"""
return wave_function.density_matrix_from_state_vector(
- self.state(), indices)
+ self.state(),
+ [self.qubit_map[q] for q in qubits] if qubits is not None else None
+ )
- def bloch_vector(self, index: int) -> np.ndarray:
+ def bloch_vector(self, qubit: ops.QubitId) -> np.ndarray:
"""Returns the bloch vector of a qubit.
- Calculates the bloch vector of the qubit at index
- in the wavefunction given by self.state. Given that self.state
+ Calculates the bloch vector of the given qubit
+ in the wavefunction given by self.state, given that self.state
follows the standard Kronecker convention of numpy.kron.
Args:
- index: index of qubit who's bloch vector we want to find.
+ qubit: qubit who's bloch vector we want to find.
Returns:
A length 3 numpy array representing the qubit's bloch vector.
@@ -834,4 +836,4 @@ class StepResult:
corresponding to the state.
"""
return wave_function.bloch_vector_from_state_vector(
- self.state(), index)
+ self.state(), self.qubit_map[qubit])
|
Bump the lambda version
So it matches the version on AWS | @@ -58,7 +58,7 @@ DD_SOURCE = "ddsource"
DD_CUSTOM_TAGS = "ddtags"
DD_SERVICE = "service"
DD_HOST = "host"
-DD_FORWARDER_VERSION = "1.0.2"
+DD_FORWARDER_VERSION = "1.2.1"
# Pass custom tags as environment variable, ensure comma separated, no trailing comma in envvar!
DD_TAGS = os.environ.get("DD_TAGS", "")
|
Fix broken line spacing in Paragraph
The line_spacing kwarg was missing when creating Text mobjects; this adds it. | @@ -442,7 +442,7 @@ class Paragraph(VGroup):
VGroup.__init__(self, **config)
lines_str = "\n".join(list(text))
- self.lines_text = Text(lines_str, **config)
+ self.lines_text = Text(lines_str, line_spacing=line_spacing, **config)
lines_str_list = lines_str.split("\n")
self.chars = self.gen_chars(lines_str_list)
|
Update deployment.rst
User feedback change | @@ -160,7 +160,6 @@ Upgrade Mattermost
Downgrade Mattermost Server </upgrade/downgrading-mattermost-server>
Version archive </upgrade/version-archive>
-
Stay up to date with the latest features and improvements.
* :doc:`Upgrade Mattermost Server </upgrade/upgrading-mattermost-server>` - Learn the basics of upgrading your Mattermost server to the latest version.
@@ -173,7 +172,6 @@ Stay up to date with the latest features and improvements.
* :doc:`Downgrade Mattermost Server </upgrade/downgrading-mattermost-server>` - Find out how to roll back to older versions of Mattermost.
* :doc:`Version archive </upgrade/version-archive>` - Download binaries for every release.
-
Scale Mattermost
----------------
.. toctree::
@@ -204,7 +202,6 @@ Troubleshooting guides
Troubleshooting mobile applications </deploy/mobile-troubleshoot>
MySQL installation troubleshooting </install/trouble_mysql>
-
* :doc:`General troubleshooting </install/troubleshooting>`
* :doc:`Troubleshooting mobile applications </deploy/mobile-troubleshoot>`
* :doc:`MySQL installation troubleshooting </install/trouble_mysql>`
@@ -229,8 +226,8 @@ Changelogs
* :doc:`Desktop Apps </install/desktop-app-changelog>`
* :doc:`Deprecated features </install/deprecated-features>`
-Other resources
----------------
+Additional server install guides
+--------------------------------
.. toctree::
:maxdepth: 1
:hidden:
@@ -242,7 +239,6 @@ Other resources
Install Mattermost Team Edition in GitLab Helm Chart </install/installing-team-edition-helm-chart>
Open source components </upgrade/open-source-components>
-
* :doc:`Install on Ubuntu 18.04 LTS </install/installing-ubuntu-1804-LTS>`
* :doc:`Install on RHEL 7 </install/install-rhel-7>`
* :doc:`Deploy Mattermost on Bitnami </install/deploying-team-edition-on-bitnami>`
|
Add 3 novel sources
Add 3 novel sources
- https://novelraw.blogspot.com
- https://volarenovels.com
- https://webnovel.online
+- https://wordexcerpt.com/
- https://wuxiaworld.online
- https://www.asianhobbyist.com
- https://www.idqidian.us
- https://www.jieruihao.cn/
- https://www.machine-translation.org
+- https://www.mtlnovel.com/
- https://www.novelall.com
- https://www.novelspread.com
- https://www.qidian.com
@@ -47,6 +49,7 @@ List of supported sites are given below.
- https://www.royalroad.com
- https://www.scribblehub.com
- https://www.tapread.com
+- https://www.translateindo.com/
- https://www.wattpad.com
- https://www.webnovel.com
- https://www.worldnovel.online
|
sep: drop height_percent compatibility
This has been deprecated for quite some time. | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
-from libqtile.log_utils import logger
from libqtile.widget import base
@@ -41,14 +40,7 @@ class Sep(base._Widget):
),
]
- def __init__(self, height_percent=None, **config):
- # 'height_percent' was replaced by 'size_percent' since the widget can
- # be installed in vertical bars
- if height_percent is not None:
- logger.warning('height_percent kwarg or positional argument is '
- 'deprecated. Please use size_percent.')
- config["size_percent"] = height_percent
-
+ def __init__(self, **config):
length = config.get("padding", 2) * 2 + config.get("linewidth", 1)
base._Widget.__init__(self, length, **config)
self.add_defaults(Sep.defaults)
|
Upgrade Node to v14
Fix | -deb https://deb.nodesource.com/node_12.x buster main
-deb-src https://deb.nodesource.com/node_12.x buster main
+deb https://deb.nodesource.com/node_14.x buster main
+deb-src https://deb.nodesource.com/node_14.x buster main
|
Update deafrica-crop-extent.yaml
added pub | @@ -64,6 +64,9 @@ DataAtWork:
URL: https://www.africageoportal.com/pages/digital-earth-africa
AuthorName: Digital Earth Africa Contributors
Publications:
+ - Title: "Cropland Extent is now available for the entire African continent"
+ URL: https://www.digitalearthafrica.org/media-center/blog/cropland-extent-now-available-entire-african-continent
+ AuthorName: Digital Earth Africa Contributors
- Title: "Co-Production of a 10-m Cropland Extent Map for Continental Africa using Sentinel-2, Cloud Computing, and the Open-Data-Cube"
URL: https://agu2021fallmeeting-agu.ipostersessions.com/Default.aspx?s=BA-E4-89-84-15-50-25-89-AE-A8-EF-C5-32-7D-19-EE
AuthorName: Chad Burton, Fang Yuan, Chong Ee-Faye, Meghan Halabisky, David Ongo, Fatou Mar, Victor Addabor, Bako Mamane and Sena Adimou
|
restart_policy can not be None
security_groups is a dictionary variable, but the way to judge is
if container.restart_policy is not None
eg.
(Pdb) container.restart_policy
{}
(Pdb) container.restart_policy is not None
True
Closes-Bug: | @@ -181,7 +181,7 @@ class DockerDriver(driver.ContainerDriver):
if container.cpu is not None:
host_config['cpu_quota'] = int(100000 * container.cpu)
host_config['cpu_period'] = 100000
- if container.restart_policy is not None:
+ if container.restart_policy:
count = int(container.restart_policy['MaximumRetryCount'])
name = container.restart_policy['Name']
host_config['restart_policy'] = {'Name': name,
|
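The fix above hinges on the difference between an identity check and a truthiness check on an empty dict; a tiny standalone illustration:

restart_policy = {}                 # what the driver can receive instead of None

print(restart_policy is not None)   # True  -> the old `is not None` check falls through
print(bool(restart_policy))         # False -> a plain truthiness check skips empty dicts

if restart_policy:                  # the fixed condition
    print("apply restart policy")
else:
    print("nothing to apply")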
Always add unique sort keys when sorting
This avoids the logs being filled with warnings from
the sort utils about unstable sorting order.
"Unique keys not in sort_keys. The sorting order may be unstable." | @@ -222,8 +222,7 @@ class CommonDbMixin(object):
if sorts:
sort_keys = db_utils.get_and_validate_sort_keys(sorts, model)
sort_dirs = db_utils.get_sort_dirs(sorts, page_reverse)
- if limit:
- # we always want deterministic results for limit subqueries
+ # we always want deterministic results for sorted queries
# so add unique keys to limit queries when present.
# (http://docs.sqlalchemy.org/en/latest/orm/
# loading_relationships.html#subqueryload-ordering)
|
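A rough standalone sketch of the idea (not Neutron's actual sort utilities; the row data is made up): appending a unique column such as id to the sort keys makes the ordering deterministic, so rows that tie on the requested keys can no longer come back in an unstable order.

def deterministic_sort(rows, sort_keys, unique_key="id"):
    # Append the unique key so ties on the requested keys are always broken the same way.
    keys = list(sort_keys)
    if unique_key not in keys:
        keys.append(unique_key)
    return sorted(rows, key=lambda row: tuple(row[k] for k in keys))

rows = [{"id": 2, "name": "b"}, {"id": 1, "name": "b"}, {"id": 3, "name": "a"}]
print(deterministic_sort(rows, ["name"]))
# [{'id': 3, 'name': 'a'}, {'id': 1, 'name': 'b'}, {'id': 2, 'name': 'b'}]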
Readme Update 2
* Update README.rst
Adds the instructions to install pycrypto-2.6.1 and fixes spelling errors.
* Update README.rst
* Update README.rst
multiple packages with one call | @@ -231,10 +231,10 @@ could lead to version conflicts.
::
- # uninstall pycryto & pycryptodome | reinstall pycryptodome-3.6.1
+ # uninstall pycrypto & pycryptodome | reinstall pycrypto-2.6.1 & pycryptodome-3.6.1
- pip uninstall pycryto pycryptodome
- pip install pycryptodome==3.6.1
+ pip uninstall pycrypto pycryptodome
+ pip install pycrypto==2.6.1 pycryptodome==3.6.1
Running
-------
|
Fix replacing targets in add_targets()
Match the pattern in add_target() where if the filepath already exists
in roleinfo['paths'] it is updated to replace the existing entry with
the new fileinfo. | @@ -2027,10 +2027,10 @@ class Targets(Metadata):
for relative_target in relative_list_of_targets:
if relative_target not in roleinfo['paths']:
logger.debug('Adding new target: ' + repr(relative_target))
- roleinfo['paths'].update({relative_target: {}})
else:
logger.debug('Replacing target: ' + repr(relative_target))
+ roleinfo['paths'].update({relative_target: {}})
tuf.roledb.update_roleinfo(self.rolename, roleinfo,
repository_name=self._repository_name)
|
modified proper families
renderLocal is legacy, should be removed in the future | @@ -12,7 +12,7 @@ class ExtractLocalRender(openpype.api.Extractor):
order = openpype.api.Extractor.order - 0.47
label = "Extract Local Render"
hosts = ["aftereffects"]
- families = ["render"]
+ families = ["renderLocal", "render.local"]
def process(self, instance):
stub = get_stub()
|
Per OVS config advice set long max-idle time.
ovs-vsctl --no-wait set Open_vSwitch . other_config:max-idle=50000 | @@ -21,6 +21,7 @@ ping -c 1 127.0.0.1 || exit 1
echo "========== Starting OVS ========================="
/usr/local/share/openvswitch/scripts/ovs-ctl start || exit 1
ovs-vsctl show || exit 1
+ovs-vsctl --no-wait set Open_vSwitch . other_config:max-idle=50000
# enable fast reuse of ports.
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=10
|
User agent was not correctly resolved.
This is because of the ``'useragent' in Config.items('network')`` check:
Config.items('network') returns a list of tuples, so 'useragent' is never in the list.
Corrected by using .has_option instead.
else:
# read from internet
request = urllib_request.Request(filename)
- if (
- Config.has_section('network')
- and 'useragent' in Config.items('network')
- ):
+ if Config.has_option('network', 'useragent'):
useragent = Config.get('network', 'useragent')
if useragent:
request.add_header('User-Agent', useragent)
|
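The root cause is easy to reproduce with the standard-library ConfigParser (the commit notes that Kivy's Config.items() behaves the same way); the section contents here are hypothetical:

import configparser

config = configparser.ConfigParser()
config.read_string("[network]\nuseragent = MyApp/1.0\n")

print('useragent' in config.items('network'))                 # False: items() yields (key, value) pairs
print(('useragent', 'MyApp/1.0') in config.items('network'))  # True: only the full pair matches
print(config.has_option('network', 'useragent'))              # True: checks the key directly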
[microNPU] Add relu6 relu_n1_to_1 test cases for Ethos-U
Tests are extended with cases with activations relu6 and relu_n1_to_1. Test cases contain conv2d operation + activation because separate activation is not offloaded to NPU. | @@ -1132,6 +1132,65 @@ def test_tflite_leaky_relu(accel_type, ifm_shape, alpha):
)
+# conv2d + relu_n1_to_1 is used because separate activation is not offloaded to NPU.
+def test_tflite_relu_n1_to_1():
+ np.random.seed(0)
+ accel_type = "ethos-u55-256"
+ ifm_shape = (1, 55, 34, 3)
+ kernel_shape = (3, 2)
+ strides = (1, 1)
+
+ @tf.function
+ def conv2d_relu_n1_to_1(x):
+ tf_strides = [1, strides[0], strides[1], 1]
+ weight_shape = [kernel_shape[0], kernel_shape[1], ifm_shape[3], 3]
+ weight = tf.constant(np.random.uniform(size=weight_shape), dtype=tf.float32)
+ op = tf.nn.conv2d(
+ x,
+ weight,
+ strides=tf_strides,
+ padding="VALID",
+ )
+ # The specific pattern will be replaced into RELU_N1_TO_1 by tflite.
+ return tf.math.maximum(-1.0, tf.math.minimum(op, 1.0))
+
+ infra.compare_tvm_with_tflite(
+ conv2d_relu_n1_to_1,
+ [ifm_shape],
+ accel_type,
+ enable_cascader=True,
+ )
+
+
+# conv2d + relu6 is used because separate activation is not offloaded to NPU.
+def test_tflite_relu6():
+ np.random.seed(0)
+ accel_type = "ethos-u55-256"
+ ifm_shape = (1, 55, 34, 3)
+ kernel_shape = (3, 2)
+ strides = (1, 1)
+
+ @tf.function
+ def conv2d_relu6(x):
+ tf_strides = [1, strides[0], strides[1], 1]
+ weight_shape = [kernel_shape[0], kernel_shape[1], ifm_shape[3], 3]
+ weight = tf.constant(np.random.uniform(size=weight_shape), dtype=tf.float32)
+ op = tf.nn.conv2d(
+ x,
+ weight,
+ strides=tf_strides,
+ padding="VALID",
+ )
+ return tf.nn.relu6(op)
+
+ infra.compare_tvm_with_tflite(
+ conv2d_relu6,
+ [ifm_shape],
+ accel_type,
+ enable_cascader=True,
+ )
+
+
@pytest.mark.parametrize("accel_type", ACCEL_TYPES)
@pytest.mark.parametrize("ifm_shape", [(1, 14), (1, 151)])
@pytest.mark.parametrize("ofm_channels", [32, 64])
|
fix test
Summary: Pull Request resolved: | @@ -6237,9 +6237,7 @@ a")
m = M()
graph = str(m.graph)
- print(graph)
- return
- self.assertTrue(graph.count("aten::add") == 4)
+ self.assertTrue(graph.count("aten::add") == 5)
self.assertTrue("python" not in graph)
def test_script_nested_mod_list(self):
|
Update run-tests.yml
allow tests to proceed even if linting doesn't return 0 | @@ -70,7 +70,11 @@ jobs:
echo "source $(pwd)/gef.py" > ~/.gdbinit
gdb -q -ex 'gef missing' -ex 'gef help' -ex 'gef config' -ex start -ex continue -ex quit /bin/pwd
- - name: Run Tests
+ - name: Run linter
+ continue-on-error: true
run: |
make lint
+
+ - name: Run Tests
+ run: |
make test
|
Change file staging to use decorator
App is deprecated with a warning, which means that before this commit, an
App deprecation warning was issued when using staging.
from parsl.data_provider.scheme import GlobusScheme
from parsl.executors.base import ParslExecutor
from parsl.data_provider.globus import get_globus
-from parsl.app.app import App
+from parsl.app.app import python_app
logger = logging.getLogger(__name__)
@@ -175,19 +175,19 @@ class DataManager(ParslExecutor):
raise Exception('Staging in with unknown file scheme {} is not supported'.format(file.scheme))
def _file_stage_in_app(self):
- return App("python", executors=['data_manager'])(self._file_stage_in)
+ return python_app(executors=['data_manager'])(self._file_stage_in)
def _file_stage_in(self, outputs=[]):
pass
def _ftp_stage_in_app(self, executor):
- return App("python", executors=[executor])(_ftp_stage_in)
+ return python_app(executors=[executor])(_ftp_stage_in)
def _http_stage_in_app(self, executor):
- return App("python", executors=[executor])(_http_stage_in)
+ return python_app(executors=[executor])(_http_stage_in)
def _globus_stage_in_app(self):
- return App("python", executors=['data_manager'])(self._globus_stage_in)
+ return python_app(executors=['data_manager'])(self._globus_stage_in)
def _globus_stage_in(self, globus_ep, outputs=[]):
file = outputs[0]
@@ -228,13 +228,13 @@ class DataManager(ParslExecutor):
raise Exception('Staging out with unknown file scheme {} is not supported'.format(file.scheme))
def _file_stage_out_app(self):
- return App("python", executors=['data_manager'])(self._file_stage_out)
+ return python_app(executors=['data_manager'])(self._file_stage_out)
def _file_stage_out(self):
pass
def _globus_stage_out_app(self):
- return App("python", executors=['data_manager'])(self._globus_stage_out)
+ return python_app(executors=['data_manager'])(self._globus_stage_out)
def _globus_stage_out(self, globus_ep, inputs=[]):
file = inputs[0]
|
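For context, a toy decorator factory (purely illustrative, not Parsl's real python_app) showing why the replacement keeps the same call shape: applying a parametrized decorator programmatically as factory(args)(func) is exactly what the @factory(args) syntax would do at definition time.

def python_app_like(executors):
    # Toy stand-in for a parametrized decorator such as python_app(executors=[...]).
    def decorator(func):
        def wrapper(*args, **kwargs):
            print(f"submitting {func.__name__} to executors {executors}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

def file_stage_in():
    return "staged"

# Same shape as python_app(executors=['data_manager'])(self._file_stage_in):
staged = python_app_like(executors=['data_manager'])(file_stage_in)
print(staged())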
Add docs on permanent session storage
Hopefully this clarifies a couple of recent issues. | @@ -40,6 +40,20 @@ An example usage to store a users colour preference would be,
session['colour'] = colour
return redirect(url_for('index'))
+Permanent Sessions
+------------------
+
+The cookies used by default are not set to be permanent (deleted when
+the browser's session ends) to have permanent cookies
+``session.permanent`` must be ``True`` when the session is
+modified. To set this as the default use this snippet,
+
+.. code-block:: python
+
+ @app.before_request
+ def make_session_permanent():
+ session.permanent = True
+
WebSockets
----------
|
Fix half-float conversion ops to handle tensors larger than 2B of params
Summary:
Pull Request resolved:
As desc. | @@ -12,7 +12,7 @@ bool FloatToHalfOp<CPUContext>::RunOnDevice() {
at::Half* out = output->template mutable_data<at::Half>();
auto N = input.numel();
- for (auto i = 0; i < N; i++) {
+ for (size_t i = 0; i < N; i++) {
out[i] = data[i];
}
@@ -28,7 +28,7 @@ bool HalfToFloatOp<CPUContext>::RunOnDevice() {
float* out = output->template mutable_data<float>();
auto N = input.numel();
- for (auto i = 0; i < N; i++) {
+ for (size_t i = 0; i < N; i++) {
out[i] = data[i];
}
return true;
|
[Tweets] Clarify rate limiting error
Clean up the loop code further to avoid trying to create the autotweet loop when no credentials are set or the API breaks for some reason.
async def start_stream(self):
await self.bot.wait_until_ready()
while self is self.bot.get_cog("Tweets"):
- if self.mystream is None:
- api = await self.authenticate()
+ if not await self.config.api.consumer_key():
+ # Don't run the loop until tokens are set
+ await asyncio.sleep(300)
+ continue
tweet_list = [str(x["twitter_id"]) for x in await self.config.accounts()]
+ try:
+ api = await self.authenticate()
+ except:
+ # Just continue on our own until this works
+ # No sense trying to restart the stream if we can't authenticate
+ await asyncio.sleep(300)
+ continue
+ if self.mystream is None:
if tweet_list != []:
try:
stream_start = TweetListener(api, self.bot)
@@ -92,8 +102,6 @@ class Tweets(getattr(commands, "Cog", object)):
except Exception as e:
print(e)
if not getattr(self.mystream, "running", False):
- api = await self.authenticate()
- tweet_list = [str(x["twitter_id"]) for x in await self.config.accounts()]
try:
stream_start = TweetListener(api, self.bot)
self.mystream = tw.Stream(api.auth,
@@ -141,7 +149,8 @@ class Tweets(getattr(commands, "Cog", object)):
"<https://developer.twitter.com/en/docs/basics/response-codes.html>")
await channel.send(str(error) + help_msg)
if "420" in error:
- await channel.send(_("Maybe you should unload the cog for a while..."))
+ await channel.send(_("You're being rate limited. Maybe you "
+ "should unload the cog for a while..."))
return
async def build_tweet_embed(self, status):
|
[CircleCI] Make PR build manager dependent on a few worker jobs (#1908)
Just a proof of concept change, shows how it could be done to avoid wasting `build-manager` cycles waiting for the job to finish. | @@ -448,6 +448,10 @@ workflows:
ignore:
- master
- pytorch_tutorial_pr_build_manager:
+ requires:
+ - pytorch_tutorial_pr_build_worker_17
+ - pytorch_tutorial_pr_build_worker_18
+ - pytorch_tutorial_pr_build_worker_19
filters:
branches:
ignore:
|
Add a docs status badge
Always passing... travis doesn't have a specific deploy badge | @@ -8,6 +8,7 @@ A collection of environments for *highway driving* and tactical decision-making
</p>
[](https://travis-ci.org/eleurent/highway-env)
+[](https://eleurent.github.io/highway-env/)
## Installation
|
Update README.md
Summary:
Clarify that Redis Cluster is not supported. Also see
Closes | @@ -79,7 +79,7 @@ To run a benchmark:
1. Copy the benchmark tool to all participating machines
2. Start a Redis server on any host (either a client machine or one of
- the machines participating in the test).
+ the machines participating in the test). Note that Redis Cluster is **not** supported.
3. Determine some unique ID for the benchmark run (e.g. the `uuid`
tool or some number).
|
Fix format cache.rst
Fix format in doc/source/admin/cache.rst :
Fix indentation.
Add two missing bullet points. | @@ -121,7 +121,7 @@ To queue an image for prefetching, you can use one of the following methods:
you may call ``PUT /queued-images/<IMAGE_ID>`` to queue the image with
identifier ``<IMAGE_ID>``
- Alternately, you can use the ``glance-cache-manage`` program to queue the
+* Alternately, you can use the ``glance-cache-manage`` program to queue the
image. This program may be run from a different host than the host
containing the image cache. Example usage::
@@ -144,7 +144,7 @@ following methods:
mappings that show cached images, the number of cache hits on each image,
the size of the image, and the times they were last accessed.
- Alternately, you can use the ``glance-cache-manage`` program. This program
+* Alternately, you can use the ``glance-cache-manage`` program. This program
may be run from a different host than the host containing the image cache.
Example usage::
|
[IR] Remove shadowing in IRSubstituteWithDataTypeLegalization
Previously, the `IRSubstituteWithDataTypeLegalization` class
implemented some virtual functions of `DataTypeLegalizer`, but not
all. As a result, some compilers gave warnings that the base class
methods were being shadowed. This commit adds the `using` declaration
to avoid shadowing. | @@ -814,6 +814,9 @@ class IRSubstituteWithDataTypeLegalization : public DataTypeLegalizer {
explicit IRSubstituteWithDataTypeLegalization(std::function<Optional<PrimExpr>(const Var&)> vmap)
: vmap_(vmap) {}
+ using DataTypeLegalizer::VisitExpr_;
+ using DataTypeLegalizer::VisitStmt_;
+
PrimExpr VisitExpr_(const VarNode* op) final {
Var var = GetRef<Var>(op);
auto ret = vmap_(var);
|
Swaps the base address for g4 IWDG and WWDG.
The two addresses were incorrectly reversed.
_add:
WWDG:
description: System window watchdog
- baseAddress: 0x40003000
+ baseAddress: 0x40002C00
addressBlock:
offset: 0x0
size: 0x4
@@ -58,7 +58,7 @@ _add:
bitWidth: 1
IWDG:
description: WinWATCHDOG
- baseAddress: 0x40002C00
+ baseAddress: 0x40003000
addressBlock:
offset: 0x0
size: 0x400
|
Framework/Workload: Utilize package_names during package resolution
Iterate through available package names when resolving an apk file from
the host. | @@ -562,6 +562,7 @@ class PackageHandler(object):
msg = 'Cannot Resolve package; No package name(s) specified'
raise WorkloadError(msg)
+ if self.package_name:
self.apk_file = context.resolver.get(ApkFile(self.owner,
variant=self.variant,
version=self.version,
@@ -569,6 +570,21 @@ class PackageHandler(object):
exact_abi=self.exact_abi,
supported_abi=self.supported_abi),
strict=self.strict)
+ else:
+ available_packages = []
+ for package in self.owner.package_names:
+ available_packages.append(context.resolver.get(ApkFile(self.owner,
+ variant=self.variant,
+ version=self.version,
+ package=package,
+ exact_abi=self.exact_abi,
+ supported_abi=self.supported_abi),
+ strict=self.strict))
+ if len(available_packages) == 1:
+ self.apk_file = available_packages[0]
+ elif len(available_packages) > 1:
+ msg = 'Multiple matching packages found for "{}" on host: {}'
+ raise WorkloadError(msg.format(self.owner, available_packages))
if self.apk_file:
self.apk_info = ApkInfo(self.apk_file)
if self.version:
|
Ensure SQLite version is right for upsert
Linux distributions can differ in versions
table_columns = table[1]
# Doing many sqlite operations at the same makes the performance much worse (especially on Kodi 18)
# The use of 'executemany' and 'transaction' can improve performance up to about 75% !!
- if G.PY_IS_VER2:
+ if common.is_less_version(sql.sqlite_version, '3.24.0'):
query = 'INSERT OR REPLACE INTO {} ({}, {}) VALUES (?, ?)'.format(table_name,
table_columns[0],
table_columns[1])
records_values = [(key, common.convert_to_string(value)) for key, value in iteritems(dict_values)]
else:
- # sqlite UPSERT clause exists only on sqlite >= 3.24.0 (not available on Kodi 18)
+ # sqlite UPSERT clause exists only on sqlite >= 3.24.0
query = ('INSERT INTO {tbl_name} ({tbl_col1}, {tbl_col2}) VALUES (?, ?) '
'ON CONFLICT({tbl_col1}) DO UPDATE SET {tbl_col2} = ? '
'WHERE {tbl_col1} = ?').format(tbl_name=table_name,
|
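A rough illustration of the version gate (simplified; not the add-on's exact queries): the SQLite UPSERT clause only exists from library version 3.24.0, so older runtimes have to fall back to INSERT OR REPLACE. The standard-library sqlite3 module exposes the linked SQLite version directly.

import sqlite3

def upsert_query(table, key_col, val_col):
    if sqlite3.sqlite_version_info < (3, 24, 0):
        # Older SQLite: no ON CONFLICT ... DO UPDATE support.
        return f"INSERT OR REPLACE INTO {table} ({key_col}, {val_col}) VALUES (?, ?)"
    return (f"INSERT INTO {table} ({key_col}, {val_col}) VALUES (?, ?) "
            f"ON CONFLICT({key_col}) DO UPDATE SET {val_col} = excluded.{val_col}")

print(sqlite3.sqlite_version, "->", upsert_query("cache", "k", "v"))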
Update centos_bootstrap.sh
* Update centos_bootstrap.sh
added changes to make yeti run on centos.
* Update to centos_bootstrap.sh
This is a patch for the CentOS bootstrap script; please test using CentOS. I also added a fix for the uwsgi service.
* Update centos_bootstrap.sh | cat << EOF > /etc/yum.repos.d/mongodb-org-4.0.repo
[mongodb-org-4.0]
name=MongoDB Repository
-baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
+baseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
@@ -21,14 +21,24 @@ yum update -y && yum upgrade -y
### Install the YETI Dependencies
yum groupinstall "Development Tools" -y
yum install epel-release
-yum install python-pip git mongodb-org python-devel libxml2-devel libxslt-devel zlib-devel redis firewalld yarn -y
+yum install python-pip git mongodb-org python-devel libxml2-devel libxslt-devel zlib-devel redis firewalld yarn vim curl wget net-tools nginx uwsgi -y
pip install --upgrade pip
+pip install uwsgi
### Install YETI
+cd /opt
git clone https://github.com/yeti-platform/yeti.git
+sudo chown -R yeti:yeti /opt/yeti
cd yeti
pip install -r requirements.txt
yarn install
+PWD1=`pwd`
+
+sudo chmod +x $PWD1/extras/systemd/*
+sed -i s'/\/usr\/local\/bin\/uwsgi/\/usr\/bin\/uwsgi\ --plugin\ python/g' $PWD1/extras/systemd/yeti_uwsgi.service
+sed -i s'/\/usr\/local\/bin/\/usr\/bin/g' $PWD1/extras/systemd/yeti_uwsgi.service
+sed -i s'/\/usr\/local\/bin/\/bin/g' $PWD1/extras/systemd/*
+sudo ln -s $PWD1/extras/systemd/* /lib/systemd/system/
### Secure your instance
# Add firewall rules for YETI
@@ -42,5 +52,20 @@ systemctl enable mongod
systemctl start mongod
# Launch Yeti
-cd yeti
-./yeti.py webserver
+sudo systemctl enable yeti_web.service
+sudo systemctl enable yeti_analytics.service
+sudo systemctl enable yeti_beat.service
+sudo systemctl enable yeti_exports.service
+sudo systemctl enable yeti_feeds.service
+sudo systemctl enable yeti_oneshot.service
+sudo systemctl enable yeti_uwsgi.servic
+sudo systemctl enable redis
+
+sudo systemctl start yeti_web.service
+sudo systemctl start yeti_analytics.service
+sudo systemctl start yeti_beat.service
+sudo systemctl start yeti_exports.service
+sudo systemctl start yeti_feeds.service
+sudo systemctl start yeti_oneshot.service
+sudo systemctl start yeti_uwsgi.service
+sudo systemctl start redis
|
BUG: fixed netcdf `pandas_format`
Fixed setting of the `pandas_format` attribute so that it is done during initialization.
# ----------------------------------------------------------------------------
# Instrument methods
-def init(self):
+def init(self, pandas_format=True):
"""Initialize the Instrument object with instrument specific values."""
self.acknowledgements = "Acknowledgements missing from file"
self.references = "References missing from file"
+ self.pandas_format = pandas_format
+
return
|
slightly better error message for unconnected input
Test Plan: eyes
Reviewers: prha | @@ -447,7 +447,8 @@ def _validate_inputs(dependency_structure, solid_dict):
):
raise DagsterInvalidDefinitionError(
'Input "{input_name}" in solid "{solid_name}" is not connected to '
- 'any outputs and can not be hydrated from configuration, creating an impossible to execute pipeline. '
+ 'the output of a previous solid and can not be hydrated from configuration, '
+ 'creating an impossible to execute pipeline. '
'Posible solutions are:\n'
' * add a input_hydration_config for the type "{runtime_type}"\n'
' * connect "{input_name}" to the output of another solid\n'.format(
|
Info: simplify channel redirection for the user command
`in_whitelist_check` is a convenient utility that does the same thing as
the previous code. | @@ -15,7 +15,7 @@ from bot import constants
from bot.bot import Bot
from bot.decorators import in_whitelist, with_role
from bot.pagination import LinePaginator
-from bot.utils.checks import InWhitelistCheckFailure, cooldown_with_role_bypass, with_role_check
+from bot.utils.checks import cooldown_with_role_bypass, in_whitelist_check, with_role_check
from bot.utils.time import time_since
log = logging.getLogger(__name__)
@@ -201,13 +201,9 @@ class Information(Cog):
await ctx.send("You may not use this command on users other than yourself.")
return
- # Non-staff may only do this in #bot-commands
- if not with_role_check(ctx, *constants.STAFF_ROLES):
- if not ctx.channel.id == constants.Channels.bot_commands:
- raise InWhitelistCheckFailure(constants.Channels.bot_commands)
-
+ # Will redirect to #bot-commands if it fails.
+ if in_whitelist_check(ctx, roles=constants.STAFF_ROLES):
embed = await self.create_user_embed(ctx, user)
-
await ctx.send(embed=embed)
async def create_user_embed(self, ctx: Context, user: Member) -> Embed:
|
stylechecks: implement check for __future__ imports
TN: | @@ -386,6 +386,9 @@ class PythonLang(LanguageChecker):
'(?P<remaining>.*)')
from_import_re = re.compile('^from (?P<name>[a-zA-Z0-9_.]+) import.*')
+ future_expected = {'absolute_import', 'division', 'print_function',
+ 'unicode_literals'}
+
def check(self, report, filename, content, parse):
self.custom_check(report, filename, content, parse)
if os.path.exists(filename):
@@ -475,6 +478,12 @@ class PythonLang(LanguageChecker):
except (SyntaxError, TypeError) as exc:
report.add('Could not parse: {}'.format(exc))
else:
+
+ def node_lineno(node):
+ return getattr(node, 'lineno', 0) + 1
+
+ future_seen = set()
+
for node in ast.walk(root):
try:
docstring = ast.get_docstring(node)
@@ -483,10 +492,23 @@ class PythonLang(LanguageChecker):
else:
if docstring:
check_text(report, filename, self,
- getattr(node, 'lineno', 0) + 1,
+ node_lineno(node),
docstring,
False)
+ if (isinstance(node, ast.ImportFrom)
+ and node.module == '__future__'
+ ):
+ future_seen.update(alias.name for alias in node.names)
+
+ report.set_context(filename, 1)
+ if not future_seen:
+ report.add('Missing __future__ imports')
+ elif future_seen != self.future_expected:
+ report.add('Missing __future__ imports: {}'.format(
+ ', '.join(sorted(name for name in
+ self.future_expected - future_seen))
+ ))
class MakoLang(LanguageChecker):
comment_start = '##'
|
fix(stock_info_sh_name_code): fix stock_info_sh_name_code interface
fix stock_info_sh_name_code interface | @@ -695,7 +695,7 @@ if __name__ == "__main__":
print(futures_zh_realtime_df)
futures_zh_minute_sina_df = futures_zh_minute_sina(
- symbol="V2201", period="5"
+ symbol="TF2009", period="1"
)
print(futures_zh_minute_sina_df)
|
Remove assumption of Gil in CandidateBlock
The GIL may not always be acquired during processing in CandidateBlock.
}
pub fn previous_block_id(&self) -> String {
- let py = unsafe { cpython::Python::assume_gil_acquired() };
+ let gil = cpython::Python::acquire_gil();
+ let py = gil.python();
self.block_builder
.getattr(py, "previous_block_id")
.expect("BlockBuilder has no attribute 'previous_block_id'")
@@ -147,7 +148,8 @@ impl CandidateBlock {
committed_txn_cache: &TransactionCommitCache,
) -> bool {
committed_txn_cache.contains(txn.header_signature.as_str()) || {
- let py = unsafe { cpython::Python::assume_gil_acquired() };
+ let gil = cpython::Python::acquire_gil();
+ let py = gil.python();
self.block_store
.call_method(py, "has_batch", (txn.header_signature.as_str(),), None)
.expect("Blockstore has no method 'has_batch'")
@@ -159,7 +161,8 @@ impl CandidateBlock {
fn batch_is_already_committed(&self, batch: &Batch) -> bool {
self.pending_batch_ids
.contains(batch.header_signature.as_str()) || {
- let py = unsafe { cpython::Python::assume_gil_acquired() };
+ let gil = cpython::Python::acquire_gil();
+ let py = gil.python();
self.block_store
.call_method(py, "has_batch", (batch.header_signature.as_str(),), None)
.expect("Blockstore has no method 'has_batch'")
@@ -179,8 +182,6 @@ impl CandidateBlock {
let inject_list = poller(injector);
if !inject_list.is_empty() {
for b in inject_list {
- match b.extract(py) {
- Ok(b) => batches.push(b),
Err(err) => pylogger::exception(py, "During batch injection", err),
}
}
@@ -289,7 +290,8 @@ impl CandidateBlock {
}
pub fn sign_block(&self, block_builder: &cpython::PyObject) {
- let py = unsafe { cpython::Python::assume_gil_acquired() };
+ let gil = cpython::Python::acquire_gil();
+ let py = gil.python();
let header_bytes = block_builder
.getattr(py, "block_header")
.expect("BlockBuilder has no attribute 'block_header'")
|
DOC: Instrument docstrings
Added missing docstrings to Instrument `__repr__` and `__str__` methods. | @@ -1052,7 +1052,7 @@ class Instrument(object):
self._password_req = False
def __repr__(self):
- # Print the basic Instrument properties
+ """ Print the basic Instrument properties"""
out_str = "".join(["Instrument(platform='", self.platform, "', name='",
self.name, "', sat_id='", self.sat_id,
"', clean_level='", self.clean_level,
@@ -1062,6 +1062,8 @@ class Instrument(object):
return out_str
def __str__(self):
+ """ Descriptively print the basic Instrument properties"""
+
# Get the basic Instrument properties
output_str = 'pysat Instrument object\n'
output_str += '-----------------------\n'
|
Fixed invalid property access
Supposed to be `__data__` not `_data`. | @@ -5124,7 +5124,7 @@ class Model(with_metaclass(ModelBase, Node)):
self.__rel__.get(foreign_key) is not None)
if conditions:
setattr(self, foreign_key, getattr(self, foreign_key))
- field_dict[foreign_key] = self._data[foreign_key]
+ field_dict[foreign_key] = self.__data__[foreign_key]
def save(self, force_insert=False, only=None):
field_dict = self.__data__.copy()
|
Update README.md
ipfs bootstrap node | @@ -30,6 +30,11 @@ You need to have a running go IPFS instance running and linked in the configurat
PubSub should be active and configured to use GossipSub.
More info there: https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#ipfs-pubsub
+You can add our bootstrap node and connect to it on your ipfs node to be connected to the aleph network faster:
+
+`$ ipfs bootstrap add /dnsaddr/bootstrap.aleph.im/ipfs/QmPR8m8WCmYKuuxg5Qnadd4LbnTCD2L93cV2zPW5XGVHTG`
+`$ ipfs swarm connect /dnsaddr/bootstrap.aleph.im/ipfs/QmPR8m8WCmYKuuxg5Qnadd4LbnTCD2L93cV2zPW5XGVHTG`
+
### NULS
If you want to run with a local NULS instance (and not light client mode), you need to have a local fully synced NULS blockchain instance.
|
Updated CONTRIBUTING.md
clarified the 100-character limit on Descriptions.
* Continue to follow the alphabetical ordering that is in place per section.
* Each table column should be padded with one space on either side.
+* The Description should not exceed 100 characters.
* If an API seems to fall into multiple categories, please place the listing within the section most in line with the services offered through the API. For example, the Instagram API is listed under `Social` since it is mainly a social network, even though it could also apply to `Photography`.
* Add one link per Pull Request.
* Make sure the PR title is in the format of `Add Api-name API` *for e.g.*: `Add Blockchain API`
|
Update release workflow
Update AWS secrets
Rework condition for Slack notification | @@ -178,8 +178,8 @@ jobs:
nightly_release: ${{ inputs.nightly_release }}
secrets:
- AWS_ACCESS_KEY_ID: ${{ secrets.PRODUCTION_AWS_ACCESS_KEY_ID }}
- AWS_SECRET_ACCESS_KEY: ${{ secrets.PRODUCTION_AWS_SECRET_ACCESS_KEY }}
+ AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
github-release:
name: GitHub Release
@@ -212,7 +212,7 @@ jobs:
slack-notification:
name: Slack Notification
- if: ${{ failure() }}
+ if: ${{ failure() && (!inputs.test_run || inputs.nightly_release) }}
needs:
[
|
timestamp_format e.g. "%Y-%m-%d-%H-%M-%S-%f" raises an exception
ValueError('Unknown string format:', '2019-10-24-20-25-53-460007')
in dateutil.parser.parse during collection of submissions.
This is due to the dependency on dateutil.parser.parse in parse_utc().
To avoid this during collection of submissions, this code will
validate the timestamp_format against dateutil's parsing algorithm
and raise a TraitError if a ValueError occurs.
help="Format string for timestamps"
).tag(config=True)
+ @validate('timestamp_format')
+ def _valid_timestamp_format(self, proposal):
+ try:
+ from dateutil.parse import parse
+ import datetime
+ ts = datetime.datetime.now().strftime(proposal['value'])
+ ts = parse(ts)
+ except ValueError:
+ raise TraitError('Invalid timestamp_format: {} - could not be parsed by dateutil'.format(proposal['value']))
+ return proposal['value']
+
root = Unicode(
"/srv/nbgrader/exchange",
help="The nbgrader exchange directory writable to everyone. MUST be preexisting."
|
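A small standalone version of the round-trip check (importing from dateutil.parser; the format strings are just examples): format "now" with the candidate format and ask dateutil to parse the result back, so a format like "%Y-%m-%d-%H-%M-%S-%f" is rejected exactly as described above.

import datetime
from dateutil.parser import parse

def timestamp_format_is_parseable(fmt):
    sample = datetime.datetime.now().strftime(fmt)
    try:
        parse(sample)
    except ValueError:
        return False
    return True

print(timestamp_format_is_parseable("%Y-%m-%d %H:%M:%S.%f"))  # True
print(timestamp_format_is_parseable("%Y-%m-%d-%H-%M-%S-%f"))  # False: 'Unknown string format'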
[Fix] Pins KeyError
Fixed a KeyError that popped up regularly when trying to add a pin
for an IPFS file. The "Pins" key is not present in the dictionary
returned by the IPFS client, instead of being present with a null
value. | @@ -81,7 +81,7 @@ async def handle_new_storage(message: Dict, content: Dict):
is_folder = stats["Type"] == "directory"
async for status in pin_api.pin.add(item_hash):
timer += 1
- if timer > 30 and status["Pins"] is None:
+ if timer > 30 and "Pins" not in status:
return None # Can't retrieve data now.
do_standard_lookup = False
|
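A tiny illustration of why the membership test is needed (the dictionary shapes are hypothetical): when the key is missing entirely, subscripting raises KeyError before any `is None` comparison can run.

status_without_pins = {"Progress": 12}
status_with_pins = {"Pins": ["Qm..."]}

try:
    status_without_pins["Pins"] is None        # old check: raises before comparing
except KeyError as exc:
    print("KeyError:", exc)

print("Pins" not in status_without_pins)       # True  -> not pinned yet, keep waiting
print("Pins" not in status_with_pins)          # False -> pin progress reported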
Use local imports for Gdk in handle move aspect
This way Gtk/Gdk are not required when you only load the module. | +from __future__ import annotations
+
import logging
from functools import singledispatch
from operator import itemgetter
-from typing import Iterable, Optional, Sequence, Tuple
-
-from gi.repository import Gdk, Gtk
+from typing import TYPE_CHECKING, Iterable, Sequence
from gaphas.connector import ConnectionSink, ConnectionSinkType, Connector
from gaphas.handle import Handle
from gaphas.item import Element, Item
from gaphas.types import Pos
+
+if TYPE_CHECKING:
from gaphas.view import GtkView
log = logging.getLogger(__name__)
@@ -58,7 +60,7 @@ class ItemHandleMove:
def glue(
self, pos: Pos, distance: int = GLUE_DISTANCE
- ) -> Optional[ConnectionSinkType]:
+ ) -> ConnectionSinkType | None:
"""Glue to an item near a specific point.
Returns a ConnectionSink or None.
@@ -120,7 +122,7 @@ class ElementHandleMove(ItemHandleMove):
def __init__(self, item: Item, handle: Handle, view: GtkView):
super().__init__(item, handle, view)
- self.cursor: Optional[Gdk.Cursor] = None
+ self.cursor = None
def start_move(self, pos: Pos) -> None:
super().start_move(pos)
@@ -131,6 +133,8 @@ class ElementHandleMove(ItemHandleMove):
super().stop_move(pos)
def set_cursor(self) -> None:
+ from gi.repository import Gdk, Gtk
+
index = self.item.handles().index(self.handle)
if index < 4:
display = self.view.get_display()
@@ -144,6 +148,8 @@ class ElementHandleMove(ItemHandleMove):
self.view.set_cursor(cursor)
def reset_cursor(self) -> None:
+ from gi.repository import Gtk
+
self.view.get_window().set_cursor(
self.cursor
) if Gtk.get_major_version() == 3 else self.view.set_cursor(self.cursor)
@@ -154,7 +160,7 @@ def item_distance(
pos: Pos,
distance: float = 0.5,
exclude: Sequence[Item] = (),
-) -> Iterable[Tuple[float, Item]]:
+) -> Iterable[tuple[float, Item]]:
"""Return the topmost item located at ``pos`` (x, y).
Parameters:
|
removed duplicate sentence
"We recommend using [Anaconda](https://store.continuum.io/cshop/anaconda/), which bundles together most of the required packages. " x 2 | # Installing NILMTK
-We recommend using [Anaconda](https://store.continuum.io/cshop/anaconda/), which bundles together most of the required packages. We recommend using [Anaconda](https://www.anaconda.com/distribution/), which bundles togther most of the required packages. NILMTK requires Python 3.6+ due to the module it depends upon.
+We recommend using [Anaconda](https://store.continuum.io/cshop/anaconda/), which bundles together most of the required packages. NILMTK requires Python 3.6+ due to the module it depends upon.
After Anaconda has been installed, open up the terminal (Unix) or Anaconda prompt (Windows):
|