message (string, 13-484 chars) | diff (string, 38-4.63k chars)
---|---|
Updated example to be consistent with Secrets Manager Docs
I updated the descriptions to be more consistent with the terminology found in the Secrets Manager documentation. | .. _aws-boto3-secrets-manager:
###############################################
-Retrieving Your Secret from AWS Secrets Manager
+Retrieving a Secret from AWS Secrets Manager
###############################################
-This Python example shows you how to retrieve an AWS Secrets Manager decoded secret that was created in the Secrets Manager console.
+This Python example shows you how to retrieve the decrypted secret value from an AWS Secrets Manager secret. The secret could be created using either the Secrets Manager console or the CLI/SDK.
-The code uses the AWS SDK for Python to retrieve a decoded secret.
+The code uses the AWS SDK for Python to retrieve a decrypted secret value.
For more information about using an Amazon Secrets Manager, see
`Tutorial: Storing and Retrieving a Secret <https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html>`_
@@ -32,12 +32,12 @@ To set up and run this example, you must first set up the following:
* Configure your AWS credentials, as described in :doc:`quickstart`.
* Create a Secret with the AWS Secrets Manager, as described in the `AWS Secrets Manager Developer Guide <https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html>`_
-Get the Current Bucket Website Configuration
+Retrieve the Secret Value
=============================================
The example below shows how to:
-* Retrieve a secret using
+* Retrieve a secret value using
`get_secret_value <https://boto3.readthedocs.io/en/latest/reference/services/secretsmanager.html#SecretsManager.Client.get_secret_value>`_.
Example
@@ -73,8 +73,8 @@ Example
elif e.response['Error']['Code'] == 'InvalidParameterException':
print("The request had invalid params:", e)
else:
- # Decrypted secret using the associated KMS CMK
- # Depending on whether the secret was a string or binary, one of these fields will be populated
+ # Secrets Manager decrypts the secret value using the associated KMS CMK
+ # Depending on whether the secret was a string or binary, only one of these fields will be populated
if 'SecretString' in get_secret_value_response:
text_secret_data = get_secret_value_response['SecretString']
else:
|
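A minimal standalone sketch of the retrieval flow this change documents. It assumes boto3 is installed and AWS credentials are configured; the secret name and region are placeholders, not values from the diff.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder values: substitute your own secret name and region.
client = boto3.client("secretsmanager", region_name="us-east-1")

try:
    response = client.get_secret_value(SecretId="MyTestSecret")
except ClientError as error:
    print("Could not retrieve secret:", error)
else:
    # Secrets Manager decrypts the value using the associated KMS key; only one
    # of these fields is populated, depending on how the secret was stored.
    if "SecretString" in response:
        secret = response["SecretString"]
    else:
        secret = response["SecretBinary"]
```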
Start Gaphor as a Gtk.Application
For one, this greys out the Quit option in the MacOS menu | @@ -125,9 +125,17 @@ class _Application(object):
return self.init_service(name)
def run(self):
- from gi.repository import Gtk
+ from gi.repository import Gio, Gtk
+ app = Gtk.Application(application_id="org.gaphor.gaphor",
+ flags=Gio.ApplicationFlags.FLAGS_NONE)
- Gtk.main()
+ def app_activate(app):
+ main_window = self.get_service("main_window")
+ app.add_window(main_window.window)
+
+ app.connect("activate", app_activate)
+
+ app.run()
def shutdown(self):
for name, srv in self.component_registry.get_utilities(IService):
|
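A standalone sketch of the same pattern, assuming PyGObject with GTK 3 is available; the application id and window title are placeholders. Registering windows with a Gtk.Application (instead of calling Gtk.main()) is what lets the platform menu integration, such as the macOS Quit item, behave correctly.

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gio, Gtk

def app_activate(app):
    # Windows added to the application keep it alive while they are open.
    window = Gtk.ApplicationWindow(application=app, title="demo")
    window.show_all()

app = Gtk.Application(application_id="org.example.demo",
                      flags=Gio.ApplicationFlags.FLAGS_NONE)
app.connect("activate", app_activate)
app.run()
```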
make sure cmd is not run when npm isn't installed
apparently the skipIf condition on the functions still gets evaluated, even if the function is
going to be skipped based on a skipIf on the class. | @@ -54,7 +54,8 @@ class NpmStateTest(integration.ModuleCase, integration.SaltReturnAssertsMixIn):
ret = self.run_state('npm.installed', name=None, pkgs=['pm2', 'grunt'])
self.assertSaltTrueReturn(ret)
- @skipIf(LooseVersion(cmd.run('npm -v')) >= LooseVersion(MAX_NPM_VERSION), 'Skip with npm >= 5.0.0 until #41770 is fixed')
+ @skipIf(salt.utils.which('npm') and LooseVersion(cmd.run('npm -v')) >= LooseVersion(MAX_NPM_VERSION),
+ 'Skip with npm >= 5.0.0 until #41770 is fixed')
@destructiveTest
def test_npm_cache_clean(self):
'''
|
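A small standalone illustration of why the guard matters; the helper and version check below are simplified stand-ins, not the project's code. The decorator's condition is evaluated when the class body is defined, so it must short-circuit on the presence of the binary before shelling out to npm.

```python
import shutil
import subprocess
import unittest

def npm_version() -> str:
    # Raises FileNotFoundError if npm is absent, which is exactly what the guard avoids.
    return subprocess.check_output(["npm", "-v"], text=True).strip()

class NpmCacheTest(unittest.TestCase):
    # The condition runs at class-definition time even if the whole class is
    # skipped elsewhere, so check for the binary first and rely on `and`
    # short-circuiting to keep npm_version() from being called at all.
    @unittest.skipIf(bool(shutil.which("npm")) and npm_version().startswith("5."),
                     "skip with npm 5.x until the upstream bug is fixed")
    def test_cache_clean(self):
        self.assertTrue(True)
```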
(refactor) liquid connector get_tracking_pairs
changed logger info to avoid user confusion | @@ -234,7 +234,7 @@ class LiquidAPIOrderBookDataSource(OrderBookTrackerDataSource):
retval[trading_pair] = LiquidOrderBookTrackerEntry(trading_pair, snapshot_timestamp, order_book)
self.logger().info(f"Initialized order book for {trading_pair}. "
- f"{index*1}/{number_of_pairs} completed")
+ f"{index+1}/{number_of_pairs} completed")
# Each 1000 limit snapshot costs ?? requests and Liquid rate limit is ?? requests per second.
await asyncio.sleep(1.0) # Might need to be changed
except Exception:
|
Update exposed-gitignore.yaml
New conditions to avoid false positives. | @@ -24,7 +24,7 @@ requests:
- type: dsl
dsl:
- - '!contains(tolower(body), "<html")'
+ - '!contains(tolower(body), "<html") && !contains(tolower(body), "<!doctype") && !contains(tolower(body), "<script")'
- type: dsl
dsl:
|
Fix centipede linker flags
Mirrors [the fixes from
OSS-Fuzz](https://github.com/google/oss-fuzz/pull/9427):
1. Use [`-Wl` on the linker
flags](https://github.com/google/oss-fuzz/pull/9427#issuecomment-1385205488).
2. Use
[`LDFLAGS`](https://github.com/google/oss-fuzz/pull/9427#issuecomment-1385375441). | @@ -24,11 +24,8 @@ def build():
san_cflags = ['-fsanitize-coverage=trace-loads']
link_cflags = [
- '-Wno-error=unused-command-line-argument',
- '-ldl',
- '-lrt',
- '-lpthread',
- '/lib/weak.o',
+ '-Wno-unused-command-line-argument',
+ '-Wl,-ldl,-lrt,-lpthread,/lib/weak.o'
]
# TODO(Dongge): Build targets with sanitizers.
@@ -41,6 +38,7 @@ def build():
cflags = san_cflags + centipede_cflags + link_cflags
utils.append_flags('CFLAGS', cflags)
utils.append_flags('CXXFLAGS', cflags)
+ utils.append_flags('LDFLAGS', ['/lib/weak.o'])
os.environ['CC'] = '/clang/bin/clang'
os.environ['CXX'] = '/clang/bin/clang++'
|
Update readme.md for Mac
Mono will generate a path '/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/Library/Frameworks/Mono.framework/Versions/Current/lib/pkgconfig' for different mono versions, thus we do not need to care about which version we are using. | @@ -93,7 +93,7 @@ To enable usage of a GPU, additional packages need to be installed. The followin
For macOS Catalina, open the configuration of zsh via the terminal:
* Type in `cd` to navigate to the home directory.
* Type `nano ~/.zshrc` to open the configuration of the terminal
-* Add the path to your mono installation: `export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/Library/Frameworks/Mono.framework/Versions/6.12.0/lib/pkgconfig:$PKG_CONFIG_PATH`. Make sure that the Path matches to your version (Here 6.12.0)
+* Add the path to your mono installation: `export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/Library/Frameworks/Mono.framework/Versions/Current/lib/pkgconfig:$PKG_CONFIG_PATH`.
* Save everything and execute `. ~/.zshrc`
4. Navigate to the alphapept folder and install the package with `pip install .` (default users) or with `pip install -e .` to enable developers mode.
|
BUG/DOC: Description of k_posdef
Closes
Corrects documentation to note that k_posdef is the dimension of the state innovation. | @@ -66,7 +66,7 @@ class KalmanFilter(Representation):
The dimension of the unobserved state process.
k_posdef : int, optional
The dimension of a guaranteed positive definite covariance matrix
- describing the shocks in the measurement equation. Must be less than
+ describing the shocks in the transition equation. Must be less than
or equal to `k_states`. Default is `k_states`.
loglikelihood_burn : int, optional
The number of initial periods during which the loglikelihood is not
|
Add examples for view_* functions
Minimal examples of an airplane model from different views | @@ -1980,6 +1980,18 @@ class Renderer(_vtk.vtkRenderer):
negative : bool, optional
View from the opposite direction.
+ Examples
+ --------
+ View the XY plane of a built-in mesh example.
+
+ >>> from pyvista import examples
+ >>> import pyvista as pv
+ >>> airplane = examples.load_airplane()
+ >>> pl = pv.Plotter()
+ >>> _ = pl.add_mesh(airplane)
+ >>> pl.view_xy()
+ >>> pl.show()
+
"""
vec = np.array([0,0,1])
viewup = np.array([0,1,0])
@@ -1995,6 +2007,18 @@ class Renderer(_vtk.vtkRenderer):
negative : bool, optional
View from the opposite direction.
+ Examples
+ --------
+ View the YX plane of a built-in mesh example.
+
+ >>> from pyvista import examples
+ >>> import pyvista as pv
+ >>> airplane = examples.load_airplane()
+ >>> pl = pv.Plotter()
+ >>> _ = pl.add_mesh(airplane)
+ >>> pl.view_yx()
+ >>> pl.show()
+
"""
vec = np.array([0,0,-1])
viewup = np.array([1,0,0])
@@ -2010,6 +2034,18 @@ class Renderer(_vtk.vtkRenderer):
negative : bool, optional
View from the opposite direction.
+ Examples
+ --------
+ View the XZ plane of a built-in mesh example.
+
+ >>> from pyvista import examples
+ >>> import pyvista as pv
+ >>> airplane = examples.load_airplane()
+ >>> pl = pv.Plotter()
+ >>> _ = pl.add_mesh(airplane)
+ >>> pl.view_xz()
+ >>> pl.show()
+
"""
vec = np.array([0,-1,0])
viewup = np.array([0,0,1])
@@ -2025,6 +2061,18 @@ class Renderer(_vtk.vtkRenderer):
negative : bool, optional
View from the opposite direction.
+ Examples
+ --------
+ View the ZX plane of a built-in mesh example.
+
+ >>> from pyvista import examples
+ >>> import pyvista as pv
+ >>> airplane = examples.load_airplane()
+ >>> pl = pv.Plotter()
+ >>> _ = pl.add_mesh(airplane)
+ >>> pl.view_zx()
+ >>> pl.show()
+
"""
vec = np.array([0,1,0])
viewup = np.array([1,0,0])
@@ -2040,6 +2088,18 @@ class Renderer(_vtk.vtkRenderer):
negative : bool, optional
View from the opposite direction.
+ Examples
+ --------
+ View the YZ plane of a built-in mesh example.
+
+ >>> from pyvista import examples
+ >>> import pyvista as pv
+ >>> airplane = examples.load_airplane()
+ >>> pl = pv.Plotter()
+ >>> _ = pl.add_mesh(airplane)
+ >>> pl.view_yz()
+ >>> pl.show()
+
"""
vec = np.array([1,0,0])
viewup = np.array([0,0,1])
@@ -2055,6 +2115,18 @@ class Renderer(_vtk.vtkRenderer):
negative : bool, optional
View from the opposite direction.
+ Examples
+ --------
+ View the ZY plane of a built-in mesh example.
+
+ >>> from pyvista import examples
+ >>> import pyvista as pv
+ >>> airplane = examples.load_airplane()
+ >>> pl = pv.Plotter()
+ >>> _ = pl.add_mesh(airplane)
+ >>> pl.view_zy()
+ >>> pl.show()
+
"""
vec = np.array([-1,0,0])
viewup = np.array([0,1,0])
|
refactored still image creator
Not tested yet as it is not working in regular develop either. | -from collections import OrderedDict
-
import nuke
-from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import create_write_node
+from openpype.hosts.nuke.plugins.create import create_write_render
-class CreateWriteStill(plugin.OpenPypeCreator):
+class CreateWriteStill(create_write_render.CreateWriteRender):
# change this to template preset
name = "WriteStillFrame"
label = "Create Write Still Image"
@@ -23,77 +21,8 @@ class CreateWriteStill(plugin.OpenPypeCreator):
def __init__(self, *args, **kwargs):
super(CreateWriteStill, self).__init__(*args, **kwargs)
- data = OrderedDict()
-
- data["family"] = self.family
- data["families"] = self.n_class
-
- for k, v in self.data.items():
- if k not in data.keys():
- data.update({k: v})
-
- self.data = data
- self.nodes = nuke.selectedNodes()
- self.log.debug("_ self.data: '{}'".format(self.data))
-
- def process(self):
-
- inputs = []
- outputs = []
- instance = nuke.toNode(self.data["subset"])
- selected_node = None
-
- # use selection
- if (self.options or {}).get("useSelection"):
- nodes = self.nodes
-
- if not (len(nodes) < 2):
- msg = ("Select only one node. "
- "The node you want to connect to, "
- "or tick off `Use selection`")
- self.log.error(msg)
- nuke.message(msg)
- return
-
- if len(nodes) == 0:
- msg = (
- "No nodes selected. Please select a single node to connect"
- " to or tick off `Use selection`"
- )
- self.log.error(msg)
- nuke.message(msg)
- return
-
- selected_node = nodes[0]
- inputs = [selected_node]
- outputs = selected_node.dependent()
-
- if instance:
- if (instance.name() in selected_node.name()):
- selected_node = instance.dependencies()[0]
-
- # if node already exist
- if instance:
- # collect input / outputs
- inputs = instance.dependencies()
- outputs = instance.dependent()
- selected_node = inputs[0]
- # remove old one
- nuke.delete(instance)
-
- # recreate new
- write_data = {
- "nodeclass": self.n_class,
- "families": [self.family],
- "avalon": self.data
- }
-
- # add creator data
- creator_data = {"creator": self.__class__.__name__}
- self.data.update(creator_data)
- write_data.update(creator_data)
-
- self.log.info("Adding template path from plugin")
+ def _create_write_node(self, selected_node, inputs, outputs, write_data):
+ # explicitly reset template to 'renders', not same as other 2 writes
write_data.update({
"fpath_template": (
"{work}/renders/nuke/{subset}/{subset}.{ext}")})
@@ -118,16 +47,9 @@ class CreateWriteStill(plugin.OpenPypeCreator):
farm=False,
linked_knobs=["channels", "___", "first", "last", "use_limit"])
- # relinking to collected connections
- for i, input in enumerate(inputs):
- write_node.setInput(i, input)
-
- write_node.autoplace()
-
- for output in outputs:
- output.setInput(0, write_node)
+ return write_node
- # link frame hold to group node
+ def _modify_write_node(self, write_node):
write_node.begin()
for n in nuke.allNodes():
# get write node
|
MAINT: Remove duplicate cond check from assert_array_compare
We are already in the "if not cond" branch of the code, so we don't need to check it again | @@ -773,7 +773,6 @@ def chk_same_position(x_id, y_id, hasval='nan'):
+ '\n(mismatch %s%%)' % (match,),
verbose=verbose, header=header,
names=('x', 'y'), precision=precision)
- if not cond:
raise AssertionError(msg)
except ValueError:
import traceback
|
Stop referencing langkit.compiled_types.logic_var_type
TN: | @@ -5,8 +5,7 @@ from itertools import izip_longest
import funcy
from langkit import names
-from langkit.compiled_types import (Argument, T, equation_type, logic_var_type,
- no_compiled_type)
+from langkit.compiled_types import Argument, T, equation_type, no_compiled_type
from langkit.diagnostics import check_multiple, check_source_language
from langkit.expressions.base import (
AbstractExpression, CallExpr, ComputingExpr, DynamicVariable, LiteralExpr,
@@ -246,7 +245,7 @@ class Bind(AbstractExpression):
expr.create_result_var('Ent')
check_source_language(
- expr.type == logic_var_type
+ expr.type == T.LogicVarType
or expr.type.matches(T.root_node.entity),
'Operands to a logic bind operator should be either'
@@ -298,7 +297,7 @@ def domain(self, logic_var_expr, domain):
Define the domain of a logical variable. Several important properties about
this expression:
- This is the entry point into the logic DSL. A ``logic_var_type`` variable
+ This is the entry point into the logic DSL. A ``LogicVarType`` variable
*must* have a domain defined in the context of an equation. If it doesn't,
its solution set is empty, and thus the only possible value for it is
undefined.
@@ -341,7 +340,7 @@ def domain(self, logic_var_expr, domain):
return DomainExpr(
construct(domain, lambda d: d.is_collection, "Type given "
"to LogicVar must be collection type, got {expr_type}"),
- construct(logic_var_expr, logic_var_type),
+ construct(logic_var_expr, T.LogicVarType),
abstract_expr=self,
)
@@ -426,14 +425,14 @@ class Predicate(AbstractExpression):
# Separate logic variable expressions from extra argument expressions
exprs = [construct(e) for e in self.exprs]
logic_var_exprs, closure_exprs = funcy.split_by(
- lambda e: e.type == logic_var_type, exprs
+ lambda e: e.type == T.LogicVarType, exprs
)
check_source_language(
len(logic_var_exprs) > 0, "Predicate instantiation should have at "
"least one logic variable expression"
)
check_source_language(
- all(e.type != logic_var_type for e in closure_exprs),
+ all(e.type != T.LogicVarType for e in closure_exprs),
'Logic variable expressions should be grouped at the beginning,'
' and should not appear after non logic variable expressions'
)
@@ -467,7 +466,7 @@ class Predicate(AbstractExpression):
)
)
- if expr.type == logic_var_type:
+ if expr.type == T.LogicVarType:
check_source_language(
arg.type.matches(T.root_node.entity),
"Argument #{} of predicate "
@@ -532,7 +531,7 @@ def get_value(self, logic_var):
rtype = T.root_node.entity
- logic_var_expr = construct(logic_var, logic_var_type)
+ logic_var_expr = construct(logic_var, T.LogicVarType)
logic_var_ref = logic_var_expr.create_result_var('Logic_Var_Value')
return If.Expr(
|
Resolve - Regular division for memory calculation
* Resolve
This commit attempts to solve the problem of LSF being given "-R
select[mem > 0] rusage[mem=0]", as well as the problem of overloading the behaviour of the --doubleMem parameters
* Resolve - regular division for memory calculation | @@ -249,7 +249,7 @@ class LSFBatchSystem(AbstractGridEngineBatchSystem):
mem_resource = parse_memory_resource(mem)
mem_limit = parse_memory_limit(mem)
else:
- mem = float(mem) // 1024**3
+ mem = float(mem) / 1024**3
mem_resource = parse_memory_resource(mem)
mem_limit = parse_memory_limit(mem)
|
Fixed some warnings
Applied type casts for warnings which pointed out 8-bit and 16-bit int
conversions | #include <cinttypes>
#include <stdexcept>
-#define DEF(METHOD) .def_static(#METHOD, &JaggedArraySrc::METHOD<std::int64_t>)\
+#define DEF(METHOD) def_static(#METHOD, &JaggedArraySrc::METHOD<std::int64_t>)\
.def_static(#METHOD, &JaggedArraySrc::METHOD<std::uint64_t>)\
.def_static(#METHOD, &JaggedArraySrc::METHOD<std::int32_t>)\
.def_static(#METHOD, &JaggedArraySrc::METHOD<std::uint32_t>)\
namespace py = pybind11;
struct JaggedArraySrc {
+private:
+
+ /*template <typename T>
+ static void set_native_endian(py::array_t<T> input) {
+ if (!input.dtype().isnative()) {
+ input = input.byteswap().newbyteorder();
+ }
+ }*/
+
public:
+ template <typename T>
+ static auto testEndian(py::array_t<T> input) {
+ return py::array::ensure(input).dtype();
+ }
+
template <typename T>
static py::array_t<T> offsets2parents(py::array_t<T> offsets) {
py::buffer_info offsets_info = offsets.request();
@@ -35,7 +49,7 @@ public:
size_t k = -1;
for (size_t i = 0; i < (size_t)offsets_info.size; i++) {
while (j < (size_t)offsets_ptr[i]) {
- parents_ptr[j] = k;
+ parents_ptr[j] = (T)k;
j += 1;
}
k += 1;
@@ -90,7 +104,7 @@ public:
for (size_t i = 0; i < (size_t)starts_info.size; i++) {
for (size_t j = (size_t)starts_ptr[i]; j < (size_t)stops_ptr[i]; j++) {
- parents_ptr[j] = i;
+ parents_ptr[j] = (T)i;
}
}
@@ -220,9 +234,10 @@ public:
PYBIND11_MODULE(_jagged, m) {
py::class_<JaggedArraySrc>(m, "JaggedArraySrc")
.def(py::init<>())
- DEF(offsets2parents)
- DEF(counts2offsets)
- DEF(startsstops2parents)
- DEF(parents2startsstops)
- DEF(uniques2offsetsparents);
+ .DEF(testEndian)
+ .DEF(offsets2parents)
+ .DEF(counts2offsets)
+ .DEF(startsstops2parents)
+ .DEF(parents2startsstops)
+ .DEF(uniques2offsetsparents);
}
|
Add 2 plants to AR.py
Added VMA2TG01 to TG04 (gas) and NESPDI02 (oil). | @@ -291,6 +291,7 @@ power_plant_type = {
'NECOTV02': 'gas',
'NECOTV03': 'gas',
'NECOTV04': 'gas',
+ 'NESPDI02': 'oil',
'NIH1HI': 'hydro',
'NIH4HI': 'hydro',
'NOMODI01': 'gas',
@@ -450,6 +451,10 @@ power_plant_type = {
'VGESTG16': 'gas',
'VGESTG18': 'gas',
'VIALDI01': 'oil',
+ 'VMA2TG01': 'gas',
+ 'VMA2TG02': 'gas',
+ 'VMA2TG03': 'gas',
+ 'VMA2TG04': 'gas',
'VMARTG01': 'gas',
'VMARTG02': 'gas',
'VMARTG03': 'gas',
|
fix Tagging.tags usage properly
Fixes | @@ -2393,7 +2393,7 @@ class Minio: # pylint: disable=too-many-public-methods
"GET", bucket_name, query_params={"tagging": ""},
)
tagging = unmarshal(Tagging, response.data.decode())
- return tagging.tags()
+ return tagging.tags
except S3Error as exc:
if exc.code != "NoSuchTagSet":
raise
@@ -2470,7 +2470,7 @@ class Minio: # pylint: disable=too-many-public-methods
query_params=query_params,
)
tagging = unmarshal(Tagging, response.data.decode())
- return tagging.tags()
+ return tagging.tags
except S3Error as exc:
if exc.code != "NoSuchTagSet":
raise
|
comment verify stage
we need to fix package first | @@ -11,8 +11,8 @@ stages:
- name: test
- name: publish
if: branch = master AND tag =~ ^v.*
-- name: verify
- if: branch = master
+#- name: verify
+# if: branch = master
jobs:
include:
@@ -36,10 +36,10 @@ jobs:
tags: true
repo: Mirantis/kqueen
- - stage: verify
- before_install:
- - docker-compose up -d
- install:
- - pip3 install kqueen
- script:
- - kqueen
+# - stage: verify
+# before_install:
+# - docker-compose up -d
+# install:
+# - pip3 install kqueen
+# script:
+# - kqueen
|
downloader: unpack archives to the directory they are in
This makes a lot more sense than unpacking them to the model directory.
I think the only reason we haven't done it this way before is because
we never downloaded any archives to subdirectories of the model directory. | @@ -446,7 +446,7 @@ class PostprocUnpackArchive(Postproc):
reporter.print_section_heading('Unpacking {}', postproc_file)
- shutil.unpack_archive(str(postproc_file), str(output_dir), self.format)
+ shutil.unpack_archive(str(postproc_file), str(output_dir / postproc_file.parent), self.format)
postproc_file.unlink() # Remove the archive
Postproc.types['unpack_archive'] = PostprocUnpackArchive
|
Add new is_causal flag introduced by nn.Transformer API
Summary: Add new is_causal flag introduced by nn.Transformer API | @@ -149,7 +149,7 @@ class PETransformerEncoderLayer(nn.Module):
state["activation"] = F.relu
super(PETransformerEncoderLayer, self).__setstate__(state)
- def forward(self, src, src_mask=None, src_key_padding_mask=None):
+ def forward(self, src, src_mask=None, src_key_padding_mask=None, is_causal=False):
encoded_src = self.pos_encoder(src)
query = self.qk_residual(encoded_src)
# do not involve pos_encoding info into the value
|
[nixio] Write new objects found in ChannelIndex subtree
If new objects are found in the ChannelIndex substructure (determined by
the lack of nix_name annotation), they are created on the parent Block
without being attached to a Group.
Fixes | @@ -999,8 +999,15 @@ class NixIO(BaseIO):
"""
for chx in neoblock.channel_indexes:
- signames = [sig.annotations["nix_name"] for sig in
- chx.analogsignals + chx.irregularlysampledsignals]
+ signames = []
+ for asig in chx.analogsignals:
+ if "nix_name" not in asig.annotations:
+ self._write_analogsignal(asig, nixblock, None)
+ signames.append(asig.annotations["nix_name"])
+ for isig in chx.irregularlysampledsignals:
+ if "nix_name" not in isig.annotations:
+ self._write_irregularlysampledsignal(isig, nixblock, None)
+ signames.append(isig.annotations["nix_name"])
chxsource = nixblock.sources[chx.annotations["nix_name"]]
for name in signames:
for da in self._signal_map[name]:
@@ -1009,6 +1016,8 @@ class NixIO(BaseIO):
for unit in chx.units:
unitsource = chxsource.sources[unit.annotations["nix_name"]]
for st in unit.spiketrains:
+ if "nix_name" not in st.annotations:
+ self._write_spiketrain(st, nixblock, None)
stmt = nixblock.multi_tags[st.annotations["nix_name"]]
stmt.sources.append(chxsource)
stmt.sources.append(unitsource)
|
Small typo correction on CONTRIBUTING.md
* Update CONTRIBUTING.md
Small typo correction.
* Update .github/CONTRIBUTING.md | @@ -112,8 +112,8 @@ In case you adding new dependencies, make sure that they are compatible with the
### Coding Style
-1. Use f-strings for output formation (except logging when we stay with lazy `logging.info("Hello %s!`, name);
-2. Black code formatter is used using `pre-commit` hook.
+1. Use f-strings for output formation (except logging when we stay with lazy `logging.info("Hello %s!", name)`.
+2. Black code formatter is used using a `pre-commit` hook.
### Documentation
|
names.Name: remove dead code
TN: | @@ -35,15 +35,9 @@ class Name(object):
def __eq__(self, other):
return isinstance(other, Name) and self.base_name == other.base_name
- def __ne__(self, other):
- return not (self == other)
-
def __lt__(self, other):
return self.base_name < other.base_name
- def __gt__(self, other):
- return self.base_name > other.base_name
-
@property
def camel_with_underscores(self):
"""
|
Change the quantizer to match the behavior of the FBGEMM implementation
Summary:
Pull Request resolved:
FBGEMM uses 64 bit values. Need to change our implementation to match | @@ -134,13 +134,13 @@ T quantize_val(float scale, int32_t zero_point, float value) {
// cases away from zero, and can be consistent with SIMD implementations for
// example in x86 using _mm512_cvtps_epi32 or mm512_round_ps with
// _MM_FROUND_CUR_DIRECTION option that also follow the current rounding mode.
- int32_t qvalue;
+ int64_t qvalue;
constexpr int32_t qmin = std::numeric_limits<typename T::underlying>::min();
constexpr int32_t qmax = std::numeric_limits<typename T::underlying>::max();
checkZeroPoint<typename T::underlying>("quantize_val", zero_point);
- qvalue = static_cast<int32_t>(std::nearbyint(value / scale + zero_point));
- qvalue = std::max(qvalue, qmin);
- qvalue = std::min(qvalue, qmax);
+ qvalue = static_cast<int64_t>(std::nearbyint(value / scale + zero_point));
+ qvalue = std::max<int64_t>(qvalue, qmin);
+ qvalue = std::min<int64_t>(qvalue, qmax);
return static_cast<T>(qvalue);
}
|
TST: Add test of new `parametrize` decorator.
The new decorator was added to numpy.testing in order to facilitate the
transition to using pytest. | +"""
+Test the decorators from ``testing.decorators``.
+
+"""
from __future__ import division, absolute_import, print_function
import warnings
@@ -13,6 +17,7 @@ def slow_func(x, y, z):
assert_(slow_func.slow)
+
def test_setastest():
@dec.setastest()
def f_default(a):
@@ -30,6 +35,7 @@ def f_isnottest(a):
assert_(f_istest.__test__)
assert_(not f_isnottest.__test__)
+
class DidntSkipException(Exception):
pass
@@ -182,5 +188,13 @@ def deprecated_func3():
assert_raises(AssertionError, deprecated_func3)
[email protected]('base, power, expected',
+ [(1, 1, 1),
+ (2, 1, 2),
+ (2, 2, 4)])
+def test_parametrize(base, power, expected):
+ assert_(base**power == expected)
+
+
if __name__ == '__main__':
run_module_suite()
|
Add webargs-quart to the extensions list
This is a useful library to parse and validate arguments. | @@ -10,3 +10,5 @@ here,
Resource Sharing (access control) support.
- `Quart-OpenApi <https://github.com/factset/quart-openapi/>`_ RESTful
API building.
+- `Webargs-Quart <https://github.com/esfoobar/webargs-quart>`_ Webargs
+ parsing for Quart.
|
Add bold to unread host/surfing request messages
Added styling to messages in both the hosting and surfing tab to be in
bold when the last message sent is unread. | @@ -7,6 +7,7 @@ import {
} from "@material-ui/core";
import { makeStyles } from "@material-ui/core/styles";
import { Skeleton } from "@material-ui/lab";
+import classNames from "classnames";
import Avatar from "components/Avatar";
import TextBody from "components/TextBody";
import useAuthStore from "features/auth/useAuthStore";
@@ -31,6 +32,7 @@ const useStyles = makeStyles((theme) => ({
hostStatusIcon: {
marginInlineEnd: theme.spacing(1),
},
+ unread: { fontWeight: "bold" },
}));
export interface HostRequestListItemProps {
@@ -48,7 +50,8 @@ export default function HostRequestListItem({
const { data: otherUser, isLoading: isOtherUserLoading } = useUser(
isHost ? hostRequest.fromUserId : hostRequest.toUserId
);
-
+ const isUnread =
+ hostRequest.lastSeenMessageId !== hostRequest.latestMessage?.messageId;
//define the latest message author's name and
//control message target to use in short message preview
const authorName =
@@ -105,7 +108,10 @@ export default function HostRequestListItem({
true
)}`}
</Typography>
- <TextBody noWrap>
+ <TextBody
+ noWrap
+ className={classNames({ [classes.unread]: isUnread })}
+ >
{isOtherUserLoading ? (
<Skeleton width={100} />
) : (
|
attributions
Adjusted attribution wording to direct more users to the docs page. | @@ -28,6 +28,6 @@ of the code can be found
### Attribution
-Please cite [Speagle (2019)](https://arxiv.org/abs/1904.02180) if you find the
-package useful in your research, along with any relevant papers on the
-[citations page](https://dynesty.readthedocs.io/en/latest/index.html#citations).
+If you find the package useful in your research, please see the
+[documentation](https://dynesty.readthedocs.io) for papers you
+should cite.
|
Update dynamics-krylov.rst
fixed inherited imports | @@ -42,8 +42,11 @@ function for master-equation evolution, except that the initial state must be a
Let's solve a simple example using the algorithm in QuTiP to get familiar with the method.
.. plot::
- :context:
+ :context: reset
+ from qutip import jmat, rand_ket, krylovsolve
+ import numpy as np
+ import matplotlib.pyplot as plt
dim = 100
e_ops = [jmat((dim - 1) / 2.0, "x"), jmat((dim - 1) / 2.0, "y"), jmat((dim - 1) / 2.0, "z")]
H = .5*jmat((dim - 1) / 2.0, "z") + .5*jmat((dim - 1) / 2.0, "x")
|
Last Connection date for user in admin panel
* Last Connection date for user in admin panel
display last connection for each user in user admin panel.
add the date at the end of the tab
* Update admin.py
* Update admin.py | @@ -208,7 +208,7 @@ class InvenTreeUserAdmin(UserAdmin):
(And it's confusing!)
"""
-
+ list_display = ('username', 'email', 'first_name', 'last_name', 'is_staff', 'last_login') # display last connection for each user in user admin panel.
fieldsets = (
(None, {'fields': ('username', 'password')}),
(_('Personal info'), {'fields': ('first_name', 'last_name', 'email')}),
|
robot-simulator: updated tests to v3.0.0
Updated tests to v3.0.0, added Exception tests for invalid directions/instructions as well as tests for each direction the robot can go
Closes | @@ -3,7 +3,7 @@ import unittest
from robot_simulator import Robot, NORTH, EAST, SOUTH, WEST
-# Tests adapted from `problem-specifications//canonical-data.json` @ v2.2.0
+# Tests adapted from `problem-specifications//canonical-data.json` @ v3.0.0
class RobotSimulatorTest(unittest.TestCase):
def test_init(self):
@@ -17,16 +17,36 @@ class RobotSimulatorTest(unittest.TestCase):
self.assertEqual(robot.bearing, SOUTH)
def test_turn_right(self):
- robot = Robot()
- for direction in [EAST, SOUTH, WEST, NORTH]:
+ dirA = [EAST, SOUTH, WEST, NORTH]
+ dirB = [SOUTH, WEST, NORTH, EAST]
+ for x in range(len(dirA)):
+ robot = Robot(dirA[x], 0, 0)
robot.turn_right()
- self.assertEqual(robot.bearing, direction)
+ self.assertEqual(robot.bearing, dirB[x])
+
+ def test_change_direction_right(self):
+ A = [NORTH, EAST, SOUTH, WEST]
+ B = [EAST, SOUTH, WEST, NORTH]
+ for x in range(len(A)):
+ robot = Robot(A[x], 0, 0)
+ robot.simulate("R")
+ self.assertEqual(robot.bearing, B[x])
+
+ def test_change_direction_left(self):
+ A = [NORTH, WEST, SOUTH, EAST]
+ B = [WEST, SOUTH, EAST, NORTH]
+ for x in range(len(A)):
+ robot = Robot(A[x], 0, 0)
+ robot.simulate("L")
+ self.assertEqual(robot.bearing, B[x])
def test_turn_left(self):
- robot = Robot()
- for direction in [WEST, SOUTH, EAST, NORTH]:
+ dirA = [EAST, SOUTH, WEST, NORTH]
+ dirB = [NORTH, EAST, SOUTH, WEST]
+ for x in range(len(dirA)):
+ robot = Robot(dirA[x], 0, 0)
robot.turn_left()
- self.assertEqual(robot.bearing, direction)
+ self.assertEqual(robot.bearing, dirB[x])
def test_advance_positive_north(self):
robot = Robot(NORTH, 0, 0)
|
add monster girl doctor
monster musume no Oisha-san, absolutely nothing to do with Monster Musume from before, hopefully it's a fun show | @@ -103,6 +103,32 @@ streams:
amazon|Amazon US: ''
amazon_uk|Amazon UK: ''
primevideo|Prime Video International: ''
+---
+title: 'Monster Musume no Oisha-san'
+alias: ['Monster Girl Doctor']
+has_source: true
+info:
+ mal: 'https://myanimelist.net/anime/40708'
+ anilist: 'https://anilist.co/anime/113286'
+ anidb: 'https://anidb.net/anime/15266'
+ kitsu: 'https://kitsu.io/anime/monster-musume-no-oishasan'
+ animeplanet: 'https://www.anime-planet.com/anime/monster-girl-doctor'
+ official: 'https://mon-isha-anime.com/'
+ subreddit: '/r/MonsterGirlDoctor'
+streams:
+ crunchyroll: 'https://www.crunchyroll.com/monster-girl-doctor'
+ museasia: 'https://www.youtube.com/playlist?list=PLwLSw1_eDZl3LmQOuDIveKBPJvUcapoq5'
+ funimation|Funimation: ''
+ wakanim|Wakanim: ''
+ hidive: ''
+ animelab|AnimeLab: ''
+ crunchyroll_nsfw|Crunchyroll: ''
+ vrv|VRV: 'https://vrv.co/series/G60XNJW4R'
+ hulu|Hulu: ''
+ youtube: ''
+ amazon|Amazon US: ''
+ amazon_uk|Amazon UK: ''
+ primevideo|Prime Video International: ''
#---
#title: ''
#alias: ['']
|
Update the homepage in pyproject.toml
It was still pointing to the older location within Will's GitHub, but now it
lives under the Textualize organisation. | [tool.poetry]
name = "rich"
-homepage = "https://github.com/willmcgugan/rich"
+homepage = "https://github.com/Textualize/rich"
documentation = "https://rich.readthedocs.io/en/latest/"
version = "13.0.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
|
fs2bs: use match filter in selectattr()
changed the filter in
selectattr() from 'match' to 'equalto', but due to an incompatibility with
the Jinja2 version for Python 2.7 on el7 we must stick to using the 'match'
filter. | set_fact:
osd_ids: "{{ osd_ids | default([]) | union(item) }}"
with_items:
- - "{{ ((osd_tree.stdout | default('{}') | trim | from_json).nodes | selectattr('name', 'equalto', inventory_hostname) | map(attribute='children') | list) }}"
+ - "{{ ((osd_tree.stdout | default('{}') | trim | from_json).nodes | selectattr('name', 'match', '^' + inventory_hostname + '$') | map(attribute='children') | list) }}"
- name: get osd metadata
command: "{{ ceph_cmd }} --cluster {{ cluster }} osd metadata osd.{{ item }} -f json"
set_fact:
osd_ids: "{{ osd_ids | default([]) + [item] }}"
with_items:
- - "{{ ((osd_tree.stdout | default('{}') | from_json).nodes | selectattr('name', 'equalto', inventory_hostname) | map(attribute='children') | list) }}"
+ - "{{ ((osd_tree.stdout | default('{}') | from_json).nodes | selectattr('name', 'match', '^' + inventory_hostname + '$') | map(attribute='children') | list) }}"
- name: purge osd(s) from the cluster
ceph_osd:
|
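Why the pattern is anchored with '^' and '$': a start-of-string match alone would also select hostnames that merely begin with the target name. A plain `re` illustration with made-up hostnames:

```python
import re

names = ["osd-node1", "osd-node12"]
host = "osd-node1"

print([n for n in names if re.match(host, n)])              # ['osd-node1', 'osd-node12']
print([n for n in names if re.match("^" + host + "$", n)])  # ['osd-node1']
```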
(fix) Fixes pre-commit gh CI step
In a recent [PR](https://github.com/hummingbot/hummingbot/pull/5219),
I have inadvertently stopped the pre-commit hooks from running on
the gh CI pipeline. This PR fixes that issue. | @@ -54,13 +54,13 @@ jobs:
run: yarn --cwd ./gateway install
# Compile and run tests if code has changed
- - name: Run Flake8 and eslint
+ - name: Run pre-commit hooks on diff
shell: bash
if: steps.program-changes.outputs.cache-hit != 'true' || steps.conda-dependencies.outputs.cache-hit != 'true'
run: |
source $CONDA/etc/profile.d/conda.sh
conda activate hummingbot
- pre-commit run
+ pre-commit run --files $(git diff --name-only origin/$GITHUB_BASE_REF)
- name: Run stable tests and calculate coverage
if: steps.program-changes.outputs.cache-hit != 'true' || steps.conda-dependencies.outputs.cache-hit != 'true'
|
Fix nightly build failures
Summary: See:
Test Plan: n/a
Reviewers: max, alangenfeld, schrockn | @@ -80,11 +80,17 @@ def construct_publish_comands(additional_steps=None, nightly=False):
'''The modules managed by this script.'''
-MODULE_NAMES = ['dagster', 'dagit', 'dagster-graphql', 'dagstermill', 'dagster-airflow']
+MODULE_NAMES = [
+ 'dagster',
+ 'dagit',
+ 'dagster-graphql',
+ 'dagstermill',
+ 'dagster-airflow',
+ 'dagster-dask',
+]
LIBRARY_MODULES = [
'dagster-aws',
- 'dagster-dask',
'dagster-datadog',
'dagster-gcp',
'dagster-ge',
|
Attempt to fix API error
Based on server errors, this part of the code seems to be the issue. | @@ -96,9 +96,13 @@ class DataFile(models.Model):
return str("{0}:{1}:{2}").format(self.user, self.source, self.file)
def download_url(self, request):
- key = self.generate_key(request)
+ # 20201222 MPB: commenting these out because generate_key is producing "out of range" errors;
+ # unclear why, but this was an incomplete logging effort we ended up not using.
+ #
+ # key = self.generate_key(request)
url = full_url(reverse("data-management:datafile-download", args=(self.id,)))
- return "{0}?key={1}".format(url, key)
+ # return "{0}?key={1}".format(url, key)
+ return url
@property
def file_url_as_attachment(self):
|
[BUG] `plot_cluster_algorithm`: fix error "`predict_series is undefined`" if `X` is passed as `np.ndarray`
Currently the `plot_cluster_algorithm` function in `sktime/clustering/utils/plotting/_plot_partitions.py` throws an error if `X` is passed as a `numpy` array. The exception is thrown because `predict_series` is undefined | @@ -74,6 +74,7 @@ def plot_cluster_algorithm(model: TimeSeriesLloyds, X: TimeSeriesInstances, k: i
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
+ predict_series = X
if isinstance(X, pd.DataFrame):
predict_series = convert_to(X, "numpy3D")
plt.figure(figsize=(5, 10))
|
[cli] import survey locally
Two advantages:
* It does not crash in pytest where stdin / stdout don't point to files
* The daemon does not need it. | @@ -14,7 +14,6 @@ from typing import (
)
import click
-import survey
if TYPE_CHECKING:
from datetime import datetime
@@ -562,6 +561,8 @@ focus_color = f"{response_color}{bold}"
def prompt(message: str, default: str = "", validate: Optional[Callable] = None) -> str:
+ import survey
+
styled_default = _syle_hint(default)
styled_message = _style_message(message)
@@ -585,12 +586,18 @@ def prompt(message: str, default: str = "", validate: Optional[Callable] = None)
def confirm(message: str, default: Optional[bool] = True) -> bool:
+
+ import survey
+
styled_message = _style_message(message)
+
return survey.confirm(styled_message, default=default, color=response_color)
def select(message: str, options: Sequence[str], hint="") -> int:
+ import survey
+
try:
styled_hint = _syle_hint(hint)
styled_message = _style_message(message)
@@ -611,6 +618,8 @@ def select(message: str, options: Sequence[str], hint="") -> int:
def select_multiple(message: str, options: Sequence[str], hint="") -> List[int]:
+ import survey
+
try:
styled_hint = _syle_hint(hint)
styled_message = _style_message(message)
@@ -656,6 +665,8 @@ def select_path(
import os
+ import survey
+
styled_default = _syle_hint(f"[{default}]")
styled_message = _style_message(message)
|
AbstractEventLoop exception handler is optional
Closes
* get_exception_handler() is only available in 3.5 | @@ -180,9 +180,10 @@ class AbstractEventLoop(metaclass=ABCMeta):
def remove_signal_handler(self, sig: int) -> None: ...
# Error handlers.
@abstractmethod
- def set_exception_handler(self, handler: _ExceptionHandler) -> None: ...
+ def set_exception_handler(self, handler: Optional[_ExceptionHandler]) -> None: ...
+ if sys.version_info >= (3, 5):
@abstractmethod
- def get_exception_handler(self) -> _ExceptionHandler: ...
+ def get_exception_handler(self) -> Optional[_ExceptionHandler]: ...
@abstractmethod
def default_exception_handler(self, context: _Context) -> None: ...
@abstractmethod
|
Update hclu.py
last fix for layout | @@ -57,7 +57,8 @@ class HighConfidenceLowUncertainty(Attack):
:param max_val: maximal value any feature can take, defaults to 1.0
:type max_val: :float:
"""
- super(HighConfidenceLowUncertainty, self).__init__(classifier=classifier)
+ super(HighConfidenceLowUncertainty, self).__init__(
+ classifier=classifier)
if not isinstance(classifier, GPyGaussianProcessClassifier):
raise TypeError('Model must be a GPy Gaussian Process classifier!')
params = {'conf': conf,
@@ -92,7 +93,8 @@ class HighConfidenceLowUncertainty(Attack):
return (pred - args['conf']).reshape(-1)
def constraint_unc(x, args): # constraint for uncertainty
- cur_unc = (args['classifier'].predict_uncertainty(x.reshape(1, -1))).reshape(-1)
+ cur_unc = (args['classifier'].predict_uncertainty(
+ x.reshape(1, -1))).reshape(-1)
return (args['max_uncertainty'] - cur_unc)[0]
bounds = []
@@ -103,16 +105,20 @@ class HighConfidenceLowUncertainty(Attack):
# get properties for attack
max_uncertainty = self.unc_increase * self.classifier.predict_uncertainty(
x_adv[i].reshape(1, -1))
- class_zero = not self.classifier.predict(x_adv[i].reshape(1, -1))[0, 0] < 0.5
+ class_zero = not self.classifier.predict(
+ x_adv[i].reshape(1, -1))[0, 0] < 0.5
init_args = {'classifier': self.classifier, 'class_zero': class_zero,
'max_uncertainty': max_uncertainty, 'conf': self.conf}
- constr_conf = {'type': 'ineq', 'fun': constraint_conf, 'args': (init_args,)}
- constr_unc = {'type': 'ineq', 'fun': constraint_unc, 'args': (init_args,)}
+ constr_conf = {'type': 'ineq',
+ 'fun': constraint_conf, 'args': (init_args,)}
+ constr_unc = {'type': 'ineq',
+ 'fun': constraint_unc, 'args': (init_args,)}
args = {'args': init_args, 'orig': x[i].reshape(-1)}
# finally, run optimization
x_adv[i] = minimize(minfun, x_adv[i], args=args, bounds=bounds,
constraints=[constr_conf, constr_unc])['x']
- logger.info('Success rate of HCLU attack: %.2f%%', 100 * compute_success(self.classifier, x, y, x_adv))
+ logger.info('Success rate of HCLU attack: %.2f%%', 100 *
+ compute_success(self.classifier, x, y, x_adv))
return x_adv
def set_params(self, **kwargs):
@@ -125,7 +131,8 @@ class HighConfidenceLowUncertainty(Attack):
"""
super(HighConfidenceLowUncertainty, self).set_params(**kwargs)
if self.conf <= 0.5 or self.conf > 1.0:
- raise ValueError("Confidence value has to be a value between 0.5 and 1.0.")
+ raise ValueError(
+ "Confidence value has to be a value between 0.5 and 1.0.")
if self.unc_increase <= 0.0:
raise ValueError("Value to increase uncertainty must be positive.")
if self.min_val > self.max_val:
|
Update http_server.cc
Change metrics http content type from "text/plain" to "text/plain; charset=utf-8". | @@ -144,6 +144,9 @@ HTTPMetricsServer::Handle(evhtp_request_t* req)
}
evhtp_res res = EVHTP_RES_BADREQ;
+ evhtp_headers_add_header(
+ req->headers_out,
+ evhtp_header_new(kContentTypeHeader, "text/plain; charset=utf-8", 1, 1));
// Call to metric endpoint should not have any trailing string
if (RE2::FullMatch(std::string(req->uri->path->full), api_regex_)) {
|
Remove invalid param for flatpak_update_dockerfile
The plugin no longer takes compose_ids from user params, instead it
takes compose info from the resolve_composes result. | @@ -21,7 +21,7 @@ from atomic_reactor.plugins.flatpak_create_dockerfile import (
FLATPAK_CLEANUPSCRIPT_FILENAME,
FLATPAK_INCLUDEPKGS_FILENAME,
)
-from atomic_reactor.util import is_flatpak_build, map_to_user_params
+from atomic_reactor.util import is_flatpak_build
from atomic_reactor.utils.flatpak_util import FlatpakUtil
@@ -29,8 +29,6 @@ class FlatpakUpdateDockerfilePlugin(Plugin):
key = "flatpak_update_dockerfile"
is_allowed_to_fail = False
- args_from_user_params = map_to_user_params("compose_ids")
-
def __init__(self, workflow):
"""
constructor
|
Standalone: Make "main" inside "site.py" do nothing.
* This fixes standalone for at least Anaconda, where it was otherwise
adding paths that should not be there. | @@ -127,11 +127,18 @@ __file__ = (__nuitka_binary_dir + '%s%s') if '__nuitka_binary_dir' in dict(__bui
source_code
)
+ # Debian stretch site.py
source_code = source_code.replace(
"PREFIXES = [sys.prefix, sys.exec_prefix]",
"PREFIXES = []"
)
+ # Anaconda3 4.1.2 site.py
+ source_code = source_code.replace(
+ "def main():",
+ "def main():return\n\nif 0:\n def _unused():",
+ )
+
debug(
"Freezing module '%s' (from '%s').",
module_name,
|
[Doc] messaging -> messagingv2
Nova sends notifications using the 2.0 messaging format. | @@ -424,7 +424,7 @@ to Watcher receives Nova notifications in ``watcher_notifications`` as well.
into which Nova services will publish events ::
[oslo_messaging_notifications]
- driver = messaging
+ driver = messagingv2
topics = notifications,watcher_notifications
* Restart the Nova services.
|
Fix double parsing of 'fmt' in Pomodoro widget
By using a variable called 'fmt' which is also used by
`base._TextBox`, the string formatting happens twice.
This can be fixed by removing the formatting within
Pomodoro and just letting the base class handle it.
Closes | @@ -29,7 +29,6 @@ class Pomodoro(base.ThreadPoolText):
"""Pomodoro technique widget"""
orientations = base.ORIENTATION_HORIZONTAL
defaults = [
- ("fmt", "{}", "fmt"),
("num_pomodori", 4, "Number of pomodori to do in a cycle"),
("length_pomodori", 25, "Length of one pomodori in minutes"),
("length_short_break", 5, "Length of a short break in minutes"),
@@ -171,4 +170,4 @@ class Pomodoro(base.ThreadPoolText):
send_notification("Pomodoro", message, urgent=urgent)
def poll(self):
- return self.fmt.format(self._get_text())
+ return self._get_text()
|
bool(restore_id_element) == False for some reason
Very strange. I introduced this in
actually - got past
testing because dropping "is not None" was a last minute response to PR
feedback. | @@ -197,7 +197,7 @@ class AdminRestoreView(TemplateView):
@staticmethod
def get_stats_from_xml(xml_payload):
restore_id_element = xml_payload.find('{{{0}}}Sync/{{{0}}}restore_id'.format(SYNC_XMLNS))
- restore_id = restore_id_element.text if restore_id_element else None
+ restore_id = restore_id_element.text if restore_id_element is not None else None
cases = xml_payload.findall('{http://commcarehq.org/case/transaction/v2}case')
num_cases = len(cases)
|
Cabana: use deque::resize() instead of pop_back in loop
use resize instead of pop_back in loop | @@ -71,8 +71,8 @@ void CANMessages::process(QHash<QString, std::deque<CanData>> *messages) {
msgs = std::move(new_msgs);
} else {
msgs.insert(msgs.begin(), std::make_move_iterator(new_msgs.begin()), std::make_move_iterator(new_msgs.end()));
- while (msgs.size() >= settings.can_msg_log_size) {
- msgs.pop_back();
+ if (msgs.size() > settings.can_msg_log_size) {
+ msgs.resize(settings.can_msg_log_size);
}
}
}
|
Keep one msvs debug msg, test expects it
Investigate whether it should expect such output at a later date,
for now, just let the test pass. | @@ -607,8 +607,8 @@ class _DSPGenerator:
config.platform = 'Win32'
self.configs[variant] = config
- # DEBUG
- # print("Adding '" + self.name + ' - ' + config.variant + '|' + config.platform + "' to '" + str(dspfile) + "'")
+ # DEBUG: leave enabled, test/MSVS/CPPPATH-dirs.py expects this
+ print("Adding '" + self.name + ' - ' + config.variant + '|' + config.platform + "' to '" + str(dspfile) + "'")
for i in range(len(variants)):
AddConfig(self, variants[i], buildtarget[i], outdir[i], runfile[i], cmdargs[i], cppdefines[i], cpppaths[i], cppflags[i])
|
Update ek_spelevo.txt
Root forms to hold more potential subdomains for this EK. | @@ -85,3 +85,61 @@ world.italyalemanes.top
# Reference: https://otx.alienvault.com/pulse/5d40766ecabf3f345b3811db
shark.denizprivatne.top
+
+# Misc.
+
+aphroditedrink.top
+armlessdance.top
+awesomeablam.top
+barbiereallity.top
+beestkilroys.top
+belarusapple.top
+bloggerlolicon.top
+bridgettepromise.top
+brunetbebitas.top
+caballerosricky.top
+carmanexteme.top
+carmellanightelf.top
+cartoonseverinin.top
+chabertcigarette.top
+clearnubile.top
+clothedcalcutta.top
+cosbyfunnies.top
+dailysexpress.top
+damitahustler.top
+denizprivatne.top
+denudaskalani.top
+emblemliterotica.top
+fantasygisselle.top
+fightingsatan.top
+flavorideal.top
+freakylanguage.top
+galeriebeths.top
+ghanaiansorority.top
+graffitoandnot.top
+gratuitekrystal.top
+guerradanger.top
+ingyenesrusian.top
+italyalemanes.top
+leilanihardcord.top
+mancicdreadlock.top
+militarymagyar.top
+minorikeibler.top
+motoribyron.top
+natachafetish.top
+natashayoungster.top
+nylonsneak.top
+nylontruth.top
+periodherstory.top
+playingactive.top
+preitymutter.top
+sandeerugrat.top
+santarough.top
+satanicenanos.top
+shark.denizprivatne.top
+teannapostales.top
+teasingfreehome.top
+tranniefotologs.top
+unicornbrune.top
+vediocorset.top
+virusemoticonos.top
|
Change NodeJS 12->14, include apt-utils
fix: nodejs 12 not building in docker
fix: apt-utils warning | @@ -10,14 +10,14 @@ ENV APT_INSTALL="apt-get -y install --no-install-recommends"
ENV APT_UPDATE="apt-get -y update"
ENV PIP_INSTALL="python3 -m pip install"
-ADD https://deb.nodesource.com/setup_12.x /tmp
+ADD https://deb.nodesource.com/setup_14.x /tmp
ADD https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb /tmp
ADD https://packages.microsoft.com/config/ubuntu/21.04/packages-microsoft-prod.deb /tmp
COPY dist/bzt*whl /tmp
WORKDIR /tmp
# add node repo and call 'apt-get update'
-RUN bash ./setup_12.x && $APT_INSTALL build-essential python3-pip python3.9-dev net-tools
+RUN bash ./setup_14.x && $APT_INSTALL build-essential python3-pip python3.9-dev net-tools apt-utils
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
|
Added more useful output on overwrite fail
ListifyRobot fails on non-empty page for output, adding a more
helpful error message. | @@ -944,8 +944,10 @@ class CategoryListifyRobot(object):
else:
listString += "*[[:%s]]\n" % article.title()
if self.list.exists() and not self.overwrite:
- pywikibot.output(u'Page %s already exists, aborting.'
- % self.list.title())
+ pywikibot.output(
+ 'Page {} already exists, aborting.\n'
+ 'Use -overwrite option to overwrite the output page.'
+ .format(self.list.title()))
else:
self.list.put(listString, summary=self.editSummary)
|
Fix the problem that BlockVerifier did not choose proper tx hash
generator | from typing import TYPE_CHECKING
from . import BlockBuilder
from .. import BlockVerifier as BaseBlockVerifier
-from ... import TransactionVerifier
+from ... import TransactionVerifier, TransactionVersions
if TYPE_CHECKING:
from . import BlockHeader, BlockBody
@@ -51,8 +51,9 @@ class BlockVerifier(BaseBlockVerifier):
return invoke_result
def verify_transactions(self, block: 'Block', blockchain=None):
+ tx_versions = TransactionVersions()
for tx in block.body.transactions.values():
- tv = TransactionVerifier.new(tx.version, 0)
+ tv = TransactionVerifier.new(tx.version, tx_versions.get_hash_generator_version(tx.version))
tv.verify(tx, blockchain)
def verify_by_prev_block(self, block: 'Block', prev_block: 'Block'):
|
Use the ingress cert for IBM Cloud deployments
Use the ingress cert for IBM Cloud deployments | @@ -1185,7 +1185,10 @@ def obc_io_create_delete(mcg_obj, awscli_pod, bucket_factory):
def retrieve_verification_mode():
- if config.ENV_DATA["platform"].lower() == "ibm_cloud":
+ if (
+ config.ENV_DATA["platform"] == constants.IBMCLOUD_PLATFORM
+ and config.ENV_DATA["deployment_type"] == "managed"
+ ):
verify = True
elif config.DEPLOYMENT.get("use_custom_ingress_ssl_cert"):
verify = get_root_ca_cert()
|
Ctypes: Make "is" and "is not" compararison target C type aware.
* This now generates the needed target type from the values
given to be identical. | @@ -80,6 +80,36 @@ def generateComparisonExpressionCode(to_name, expression, emit, context):
)
)
+ return
+ elif comparator == "Is":
+ emit(
+ to_name.getCType().getAssignmentCodeFromBoolCondition(
+ to_name = to_name,
+ condition = "%s == %s" % (left_name, right_name)
+ )
+ )
+
+ getReleaseCodes(
+ release_names = (left_name, right_name),
+ emit = emit,
+ context = context
+ )
+
+ return
+ elif comparator == "IsNot":
+ emit(
+ to_name.getCType().getAssignmentCodeFromBoolCondition(
+ to_name = to_name,
+ condition = "%s != %s" % (left_name, right_name)
+ )
+ )
+
+ getReleaseCodes(
+ release_names = (left_name, right_name),
+ emit = emit,
+ context = context
+ )
+
return
if to_name.c_type == "nuitka_bool":
@@ -122,34 +152,6 @@ def generateComparisonExpressionCode(to_name, expression, emit, context):
)
context.addCleanupTempName(to_name)
- elif comparator == "Is":
- emit(
- "%s = BOOL_FROM( %s == %s );" % (
- to_name,
- left_name,
- right_name
- )
- )
-
- getReleaseCodes(
- release_names = (left_name, right_name),
- emit = emit,
- context = context
- )
- elif comparator == "IsNot":
- emit(
- "%s = BOOL_FROM( %s != %s );" % (
- to_name,
- left_name,
- right_name
- )
- )
-
- getReleaseCodes(
- release_names = (left_name, right_name),
- emit = emit,
- context = context
- )
elif comparator == "exception_match":
needs_check = expression.mayRaiseExceptionBool(BaseException)
@@ -217,42 +219,6 @@ def getComparisonExpressionBoolCode(to_name, expression, left_name, right_name,
condition = "%s == 1" % (
operator_res_name,
)
- elif comparator == "Is":
- operator_res_name = context.allocateTempName("is", "bool")
-
- emit(
- "%s = ( %s == %s );" % (
- operator_res_name,
- left_name,
- right_name
- )
- )
-
- getReleaseCodes(
- release_names = (left_name, right_name),
- emit = emit,
- context = context
- )
-
- condition = operator_res_name
- elif comparator == "IsNot":
- operator_res_name = context.allocateTempName("isnot", "bool")
-
- emit(
- "%s = ( %s != %s );" % (
- operator_res_name,
- left_name,
- right_name
- )
- )
-
- getReleaseCodes(
- release_names = (left_name, right_name),
- emit = emit,
- context = context
- )
-
- condition = operator_res_name
elif comparator == "exception_match":
needs_check = expression.mayRaiseExceptionBool(BaseException)
|
Fix optimised_pow2_inplace() on Python 3.10
Fix optimised_pow2_inplace() doctest on Python 3.10 because the error message changed.
Python 3.9 error message:
unsupported operand type(s) for ** or pow(): 'int' and 'str'
Python 3.10 error message:
unsupported operand type(s) for **=: 'int' and 'str' | @@ -153,9 +153,9 @@ def optimised_pow2_inplace(n):
0.5
>>> optimised_pow2_inplace(0.5) == 2 ** 0.5
True
- >>> optimised_pow2_inplace('test')
+ >>> optimised_pow2_inplace('test') #doctest: +ELLIPSIS
Traceback (most recent call last):
- TypeError: unsupported operand type(s) for ** or pow(): 'int' and 'str'
+ TypeError: unsupported operand type(s) for ...: 'int' and 'str'
"""
x = 2
x **= n
|
Fix ordering of tf and tflite installs in ci_cpu
The recently merged 8306 PR introduced a dependency
for tflite installation that tf must be installed first.
However, that PR did not correct the ordering in ci_cpu which
does not have that ordering. | @@ -79,14 +79,14 @@ RUN bash /install/ubuntu_install_sbt.sh
COPY install/ubuntu_install_verilator.sh /install/ubuntu_install_verilator.sh
RUN bash /install/ubuntu_install_verilator.sh
-# TFLite deps
-COPY install/ubuntu_install_tflite.sh /install/ubuntu_install_tflite.sh
-RUN bash /install/ubuntu_install_tflite.sh
-
# TensorFlow deps
COPY install/ubuntu_install_tensorflow.sh /install/ubuntu_install_tensorflow.sh
RUN bash /install/ubuntu_install_tensorflow.sh
+# TFLite deps
+COPY install/ubuntu_install_tflite.sh /install/ubuntu_install_tflite.sh
+RUN bash /install/ubuntu_install_tflite.sh
+
# Compute Library
COPY install/ubuntu_download_arm_compute_lib_binaries.sh /install/ubuntu_download_arm_compute_lib_binaries.sh
RUN bash /install/ubuntu_download_arm_compute_lib_binaries.sh
|
notifications: Reformat 0003 migration
To make it easier to add more choices here. | @@ -15,6 +15,35 @@ class Migration(migrations.Migration):
migrations.AlterField(
model_name='notificationtemplate',
name='type',
- field=models.CharField(choices=[('reservation_requested', 'Reservation requested'), ('reservation_requested_official', 'Reservation requested official'), ('reservation_cancelled', 'Reservation cancelled'), ('reservation_confirmed', 'Reservation confirmed'), ('reservation_denied', 'Reservation denied'), ('reservation_created_with_access_code', 'Reservation created with access code'), ('catering_order_created', 'Catering order created'), ('catering_order_modified', 'Catering order modified'), ('catering_order_deleted', 'Catering order deleted'), ('reservation_comment_created', 'Reservation comment created'), ('catering_order_comment_created', 'Catering order comment created')], db_index=True, max_length=100, unique=True, verbose_name='Type'),
+ field=models.CharField(
+ choices=[
+ ('reservation_requested',
+ 'Reservation requested'),
+ ('reservation_requested_official',
+ 'Reservation requested official'),
+ ('reservation_cancelled',
+ 'Reservation cancelled'),
+ ('reservation_confirmed',
+ 'Reservation confirmed'),
+ ('reservation_denied',
+ 'Reservation denied'),
+ ('reservation_created_with_access_code',
+ 'Reservation created with access code'),
+ ('catering_order_created',
+ 'Catering order created'),
+ ('catering_order_modified',
+ 'Catering order modified'),
+ ('catering_order_deleted',
+ 'Catering order deleted'),
+ ('reservation_comment_created',
+ 'Reservation comment created'),
+ ('catering_order_comment_created',
+ 'Catering order comment created'),
+ ],
+ db_index=True,
+ max_length=100,
+ unique=True,
+ verbose_name='Type',
+ ),
),
]
|
osd autodiscovery mode: fix holders detection
Small fix for (probably copy&paste) issue from | - ansible_devices is defined
- item.0.item.value.removable == "0"
- item.0.item.value.partitions|count == 0
- - item.value.holders|count == 0
+ - item.0.item.value.holders|count == 0
- item.0.rc != 0
- name: check if a partition named 'ceph' exists (autodiscover disks)
|
enable docker layer caching in circleci
remove "Download docker images for cache" step as we don't need it any more | @@ -12,9 +12,7 @@ jobs:
pre-commit run --all-files
- setup_remote_docker:
version: 17.10.0-ce
- - run:
- name: Download docker images for cache
- command: make pull
+ docker_layer_caching: true
- run:
name: Build docker images
command: make build-ci
|
Fix finding interpreters if file matches spec
Before this fix, when running `pdm use python` if there was a file or folder named `python` then the code would not look for other installed interpreters. | @@ -632,7 +632,6 @@ class Project:
python = find_python_in_path(python_spec)
if python:
yield PythonInfo.from_path(python)
- else:
python = shutil.which(python_spec)
if python:
yield PythonInfo.from_path(python)
|
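As an aside, the fallback pattern used here, trying the spec as a direct path and then falling back to a PATH lookup, can be sketched with only the standard library. The find_interpreter helper below is hypothetical and not part of pdm:

import shutil
from pathlib import Path

def find_interpreter(spec):
    # Hypothetical helper: try the spec as a direct file path first.
    candidate = Path(spec)
    if candidate.is_file():
        return candidate
    # Always fall back to a PATH lookup too, so a stray file named "python"
    # in the working directory does not hide installed interpreters.
    found = shutil.which(spec)
    return Path(found) if found else None

print(find_interpreter("python3"))
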
BUG: When filling an array from the cache, store original for objects
For object dtypes when the dimension limit is reached, we should prefer
to store the original object, even if we have a converted array available. | @@ -632,6 +632,7 @@ PyArray_AssignFromCache_Recursive(
PyArrayObject *self, const int ndim, coercion_cache_obj **cache)
{
/* Consume first cache element by extracting information and freeing it */
+ PyObject *original_obj = (*cache)->converted_obj;
PyObject *obj = (*cache)->arr_or_sequence;
Py_INCREF(obj);
npy_bool sequence = (*cache)->sequence;
@@ -651,7 +652,8 @@ PyArray_AssignFromCache_Recursive(
if (PyArray_ISOBJECT(self)) {
assert(ndim != 0); /* guaranteed by PyArray_AssignFromCache */
assert(PyArray_NDIM(self) == 0);
- if (PyArray_Pack(PyArray_DESCR(self), PyArray_BYTES(self), obj) < 0) {
+ if (PyArray_Pack(PyArray_DESCR(self), PyArray_BYTES(self),
+ original_obj) < 0) {
goto fail;
}
Py_DECREF(obj);
|
Try kill -9ing apt-get
Summary:
Pull Request resolved:
Test Plan: Imported from OSS | @@ -37,6 +37,9 @@ sudo apt-get purge -y unattended-upgrades
cat /etc/apt/sources.list
+# For the bestest luck, kill -9 now
+sudo pkill -9 apt-get || true
+
# Bail out early if we detect apt/dpkg is stuck
ps auxfww | (! grep '[a]pt')
ps auxfww | (! grep '[d]pkg')
|
Remove right frame flip
Gives better results | @@ -101,7 +101,6 @@ with dai.Device() as device:
rightFrame = qRight.get().getFrame()
disparityFrame = qDisparity.get().getFrame()
- rightFrame = cv2.flip(rightFrame, flipCode=1)
cv2.imshow("rectified right", rightFrame)
cv2.imshow("disparity", disparityFrame)
|
Update solaris core grains test
The check for zpool grains was moved out of core grains and into
zfs grains. The mock for the call to the zpool grains function was failing.
We also need to update any calls to the salt.utils file to use the new
paths. | @@ -27,6 +27,7 @@ from tests.support.mock import (
# Import Salt Libs
import salt.utils.files
import salt.utils.platform
+import salt.utils.path
import salt.grains.core as core
# Import 3rd-party libs
@@ -938,15 +939,15 @@ SwapTotal: 4789244 kB'''
path_isfile_mock = MagicMock(side_effect=lambda x: x in ['/etc/release'])
with patch.object(platform, 'uname',
MagicMock(return_value=('SunOS', 'testsystem', '5.11', '11.3', 'sunv4', 'sparc'))):
- with patch.object(salt.utils, 'is_proxy',
+ with patch.object(salt.utils.platform, 'is_proxy',
MagicMock(return_value=False)):
- with patch.object(salt.utils, 'is_linux',
+ with patch.object(salt.utils.platform, 'is_linux',
MagicMock(return_value=False)):
- with patch.object(salt.utils, 'is_windows',
+ with patch.object(salt.utils.platform, 'is_windows',
MagicMock(return_value=False)):
- with patch.object(salt.utils, 'is_smartos',
+ with patch.object(salt.utils.platform, 'is_smartos',
MagicMock(return_value=False)):
- with patch.object(salt.utils, 'which_bin',
+ with patch.object(salt.utils.path, 'which_bin',
MagicMock(return_value=None)):
with patch.object(os.path, 'isfile', path_isfile_mock):
with salt.utils.files.fopen(os.path.join(OS_RELEASE_DIR, "solaris-11.3")) as os_release_file:
@@ -960,13 +961,11 @@ SwapTotal: 4789244 kB'''
'cpu_flags': []})):
with patch.object(core, '_memdata',
MagicMock(return_value={'mem_total': 16384})):
- with patch.object(core, '_zpool_data',
- MagicMock(return_value={})):
with patch.object(core, '_virtual',
MagicMock(return_value={})):
with patch.object(core, '_ps',
MagicMock(return_value={})):
- with patch.object(salt.utils, 'which',
+ with patch.object(salt.utils.path, 'which',
MagicMock(return_value=True)):
sparc_return_mock = MagicMock(return_value=prtdata)
with patch.dict(core.__salt__, {'cmd.run': sparc_return_mock}):
|
Temporary workaround to bypass the proxy until the reason for the
chameleon crash is found. | @@ -24,7 +24,7 @@ services:
- --consul=consul:8500
- --fluentd=fluentd:24224
- --rest-port=8881
- - --grpc-endpoint=voltha:50555
+ - --grpc-endpoint=vcore:50556
- --instance-id-is-container-name
networks:
- voltha-net
|
Fix typo on _merge_url
seperator -> separator | @@ -370,7 +370,7 @@ class BaseClient:
if merge_url.is_relative_url:
# To merge URLs we always append to the base URL. To get this
# behaviour correct we always ensure the base URL ends in a '/'
- # seperator, and strip any leading '/' from the merge URL.
+ # separator, and strip any leading '/' from the merge URL.
#
# So, eg...
#
|
Refactor infraction_edit and infraction_append
This refactors the infraction_edit and infraction_append commands to
utilize the Infraction converter. | @@ -10,7 +10,7 @@ from discord.utils import escape_markdown
from bot import constants
from bot.bot import Bot
-from bot.converters import Expiry, Snowflake, UserMention, allowed_strings, proxy_user
+from bot.converters import Expiry, Infraction, Snowflake, UserMention, allowed_strings, proxy_user
from bot.exts.moderation.infraction.infractions import Infractions
from bot.exts.moderation.modlog import ModLog
from bot.pagination import LinePaginator
@@ -49,7 +49,7 @@ class ModManagement(commands.Cog):
async def infraction_append(
self,
ctx: Context,
- infraction_id: t.Union[int, allowed_strings("l", "last", "recent")], # noqa: F821
+ infraction: Infraction, # noqa: F821
duration: t.Union[Expiry, allowed_strings("p", "permanent"), None], # noqa: F821
*,
reason: str = None
@@ -73,29 +73,21 @@ class ModManagement(commands.Cog):
Use "p" or "permanent" to mark the infraction as permanent. Alternatively, an ISO 8601
timestamp can be provided for the duration.
"""
- if isinstance(infraction_id, str):
- old_infraction = await self.get_latest_infraction(ctx.author.id)
-
- if old_infraction is None:
- await ctx.send(
- ":x: Couldn't find most recent infraction; you have never given an infraction."
- )
+ if not infraction:
return
- infraction_id = old_infraction["id"]
-
- else:
- old_infraction = await self.bot.api_client.get(f"bot/infractions/{infraction_id}")
-
- reason = fr"{old_infraction['reason']} **\|\|** {reason}"
-
- await self.infraction_edit(infraction_id=infraction_id, duration=duration, reason=reason)
+ await self.infraction_edit(
+ ctx=ctx,
+ infraction=infraction,
+ duration=duration,
+ reason=fr"{infraction['reason']} **\|\|** {reason}",
+ )
@infraction_group.command(name='edit')
async def infraction_edit(
self,
ctx: Context,
- infraction_id: t.Union[int, allowed_strings("l", "last", "recent")], # noqa: F821
+ infraction: Infraction, # noqa: F821
duration: t.Union[Expiry, allowed_strings("p", "permanent"), None], # noqa: F821
*,
reason: str = None
@@ -123,20 +115,11 @@ class ModManagement(commands.Cog):
# Unlike UserInputError, the error handler will show a specified message for BadArgument
raise commands.BadArgument("Neither a new expiry nor a new reason was specified.")
- # Retrieve the previous infraction for its information.
- if isinstance(infraction_id, str):
- old_infraction = await self.get_latest_infraction(ctx.author.id)
-
- if old_infraction is None:
- await ctx.send(
- ":x: Couldn't find most recent infraction; you have never given an infraction."
- )
+ if not infraction:
return
- infraction_id = old_infraction["id"]
-
- else:
- old_infraction = await self.bot.api_client.get(f"bot/infractions/{infraction_id}")
+ old_infraction = infraction
+ infraction_id = infraction["id"]
request_data = {}
confirm_messages = []
|
Set negative scale factor reflections with the excluded flag
as opposed to outlier flag. | @@ -278,9 +278,9 @@ def remove_bad_data(self):
for table in self.reflections:
bad_sf = table["inverse_scale_factor"] < 0.001
n += bad_sf.count(True)
- table.set_flags(bad_sf, table.flags.outlier_in_scaling)
+ table.set_flags(bad_sf, table.flags.excluded_for_scaling)
if n > 0:
- logger.info("%s reflections set as outliers: scale factor < 0.001", n)
+ logger.info("%s reflections excluded: scale factor < 0.001", n)
@Subject.notify_event(event="merging_statistics")
def calculate_merging_stats(self):
@@ -322,10 +322,12 @@ def finish(self):
n_neg = (good_sel & sel).count(True)
if n_neg > 0:
logger.warning(
- "%s non-excluded reflections were assigned scale factors < 0.001 during scaling. These will be set as outliers in the reflection table. It may be best to rerun scaling from this point for an improved model.",
+ """%s non-excluded reflections were assigned scale factors < 0.001 during scaling.
+These will be excluded in the output reflection table. It may be best to rerun
+scaling from this point for an improved model.""",
n_neg,
)
- joint_table.set_flags(sel, joint_table.flags.outlier_in_scaling)
+ joint_table.set_flags(sel, joint_table.flags.excluded_for_scaling)
to_del = [
"variance",
|
Changes wildcard optimizer to Nelder-Mead.
After realizing that the landscape seems particularly challenging,
as the L-BFGS-B and CG methods fail to find good optima. | @@ -1623,7 +1623,9 @@ def get_wildcard_budget(model, ds, circuitsToUse, parameters, evaltree_cache, co
a, b = _wildcard_objective_firstTerms(Wv), eta * _np.linalg.norm(Wv, ord=1)
print('wildcard: misfit + L1_reg = %.3g + %.3g = %.3g' % (a,b,a+b),Wv)
soln = _spo.minimize(_wildcard_objective, Wvec_init,
- method='L-BFGS-B', callback=callbackF, tol=1e-6)
+ method='Nelder-Mead', callback=callbackF, tol=1e-6)
+ if not soln.success:
+ _warnings.warn("Nelder-Mead optimization failed to converge!")
Wvec = soln.x
firstTerms = _wildcard_objective_firstTerms(Wvec)
#printer.log(" Firstterms value = %g" % firstTerms)
|
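A minimal, self-contained sketch of the same optimizer swap with SciPy, using a toy objective in place of the wildcard objective from the diff (the objective below is an assumption for illustration):

import warnings
import numpy as np
from scipy.optimize import minimize

def objective(w):
    # Toy stand-in for the wildcard objective: misfit plus an L1 penalty.
    return np.sum((w - 1.0) ** 2) + 0.1 * np.linalg.norm(w, ord=1)

soln = minimize(objective, x0=np.zeros(3), method="Nelder-Mead", tol=1e-6)
if not soln.success:
    warnings.warn("Nelder-Mead optimization failed to converge!")
print(soln.x)
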
improve hosts template
HG--
branch : feature/microservices | # {{ ansible_managed }}
-127.0.0.1 localhost
-::1 localhost ip6-localhost ip6-loopback
+127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
+::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# The following lines are desirable for IPv6 capable hosts.
fe00::0 ip6-localnet
|
ceph-defaults: fix containerized osd restarts
This needs to check `containerized_deployment` because
socket_osd_container is undefined otherwise. | when:
# We do not want to run these checks on initial deployment (`socket_osd_container.results[n].rc == 0`)
# except when a crush location is specified. ceph-disk will start the osds before the osd crush location is specified
+ - containerized_deployment
- ((crush_location is defined and crush_location) or item.get('rc') == 0)
- handler_health_osd_check
# See https://github.com/ceph/ceph-ansible/issues/1457 for the condition below
|
isAlive removed in Python 3.9, change to is_alive
See for more details | @@ -502,7 +502,7 @@ def start(host, tlsport, port):
# keep the main thread active, so it can process the signals and gracefully shutdown
while True:
- if not any([thread.isAlive() for thread in threads]):
+ if not any([thread.is_alive() for thread in threads]):
# All threads have stopped
break
# Some threads are still going
|
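For reference, the renamed method is used exactly like the old camelCase alias. A minimal standard-library sketch:

import threading
import time

def worker():
    time.sleep(0.1)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

# is_alive() replaces the camelCase isAlive(), which was removed in Python 3.9.
while any(t.is_alive() for t in threads):
    time.sleep(0.01)
print("all threads finished")
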
fix!: use repeatable read isolation level
RR isolation is the default in MariaDB; for the sake of consistency, use the same
isolation level in postgres | @@ -3,7 +3,7 @@ from typing import List, Tuple, Union
import psycopg2
import psycopg2.extensions
-from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
+from psycopg2.extensions import ISOLATION_LEVEL_REPEATABLE_READ
from psycopg2.errorcodes import STRING_DATA_RIGHT_TRUNCATION
import frappe
@@ -69,7 +69,7 @@ class PostgresDatabase(Database):
conn = psycopg2.connect("host='{}' dbname='{}' user='{}' password='{}' port={}".format(
self.host, self.user, self.user, self.password, self.port
))
- conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) # TODO: Remove this
+ conn.set_isolation_level(ISOLATION_LEVEL_REPEATABLE_READ)
return conn
|
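A hedged, standalone sketch of setting the isolation level on a psycopg2 connection; the connection parameters are placeholders, not the project's configuration:

import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_REPEATABLE_READ

# Placeholder connection parameters (assumption for illustration).
conn = psycopg2.connect(host="localhost", dbname="test", user="test", password="test")
# REPEATABLE READ matches MariaDB's default, so both backends behave consistently.
conn.set_isolation_level(ISOLATION_LEVEL_REPEATABLE_READ)
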
race host added
subscribe_to_races function added | @@ -26,6 +26,7 @@ class BetfairStream(object):
HOSTS = collections.defaultdict(
lambda: 'stream-api.betfair.com',
integration='stream-api-integration.betfair.com',
+ race='sports-data-stream-api.betfair.com',
)
def __init__(self, unique_id, listener, app_key, session_token, timeout, buffer_size, description, host):
@@ -163,6 +164,15 @@ class BetfairStream(object):
self._send(message)
return unique_id
+ def subscribe_to_races(self):
+ unique_id = self.new_unique_id()
+ message = {
+ 'op': 'raceSubscription',
+ 'id': unique_id,
+ }
+ self._send(message)
+ return unique_id
+
def new_unique_id(self):
self._unique_id += 1
return self._unique_id
|
Update accuracy-check.yml
* Update accuracy-check.yml
Corrected inputs name for onnx_runtime framework.
* Update accuracy-check.yml
Removed inputs section for onnx_runtime framework | @@ -5,10 +5,6 @@ models:
- framework: onnx_runtime
model: resnet-v1-50.onnx
adapter: classification
- inputs:
- - name: data
- type: INPUT
- shape: 1,3,224,224
datasets:
- name: imagenet_1000_classes
|
Enforce usage of urllib2 instead of urllib for updates
urllib's SSL implementation is broken in some situations (for instance when using specific web proxies...). | @@ -14,7 +14,7 @@ import sqlite3
import subprocess
import sys
import time
-import urllib
+import urllib2
import urlparse
sys.dont_write_bytecode = True
@@ -270,10 +270,9 @@ def update_ipcat(force=False):
print "[i] updating ipcat database..."
try:
- if PROXIES:
- urllib.URLopener(PROXIES).urlretrieve(IPCAT_URL, IPCAT_CSV_FILE)
- else:
- urllib.urlretrieve(IPCAT_URL, IPCAT_CSV_FILE)
+ with file(IPCAT_CSV_FILE,'wb') as fp:
+ data = urllib2.urlopen(IPCAT_URL)
+ fp.write(data.read())
except Exception, ex:
print "[x] something went wrong during retrieval of '%s' ('%s')" % (IPCAT_URL, ex)
|
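The replacement pattern, opening the URL and writing the response body to a local file, looks like this in Python 3's urllib.request. This is only an illustrative equivalent of the Python 2 urllib2 code in the diff; the URL and filename are placeholders:

import urllib.request

# Placeholder URL and filename (assumptions for illustration).
url = "https://example.com/datacenters.csv"
with urllib.request.urlopen(url) as response, open("datacenters.csv", "wb") as fp:
    fp.write(response.read())
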
Pontoon: Update Indonesian (id) localization of AMO
Localization authors:
eljuno
Kiki
Reinhart Previano K. | @@ -1806,9 +1806,8 @@ msgstr ""
msgid "Comment on {addon} {version}."
msgstr "Komentar untuk {addon} {version}."
-#, fuzzy
msgid "Commented"
-msgstr "dikomentari"
+msgstr "Dikomentari"
msgid "{tag} added to {addon}."
msgstr "{tag} ditambahkan ke {addon}."
@@ -7905,7 +7904,6 @@ msgstr "Pengaya Mozilla telah dipindahkan ke Firefox Accounts untuk masuk. Lanju
msgid "User Login"
msgstr "Masuk"
-#, fuzzy
msgid "Log in with Firefox Accounts"
msgstr "Masuk dengan Firefox Accounts"
|
CONTRIBUTING.md: Note on protoc
I had to do this on my fresh installation of Ubuntu 20.04 | @@ -59,6 +59,8 @@ pushd <your_source_dir>
pip install -e .[all] # the [all] suffix includes additional packages for test
```
+Note that you have to have protocol buffer compiler `protoc` installed in order to be able to install the requirements. Download the latest version [here](https://github.com/protocolbuffers/protobuf/releases) and follow the instructions in the readme.
+
## Running Unit Tests
At this point all unit tests are safe to run externaly. We are working on
|
fix skipped_backward tests to return as PASS
If skip_backward/skip_double_back is True, the test
is marked as skipped, even when the forward pass is checked
and passes successfully. This change will make sure that
even these tests are marked as PASS by Pytest | import inspect
import sys
+import unittest
import numpy
import pytest
@@ -79,6 +80,27 @@ class OpTest(chainer.testing.function_link.FunctionTestBase):
raise NotImplementedError(
'Op test implementation must override `forward_chainerx`.')
+ def run_test_forward(self, backend_config):
+ # Skipping Forward -> Test Skipped
+ if self.skip_forward_test:
+ raise unittest.SkipTest('skip_forward_test is set')
+
+ super(OpTest, self).run_test_forward
+
+ def run_test_backward(self, backend_config):
+ # Skipping Backward -> Test PASS
+ if self.skip_backward_test:
+ return
+
+ super(OpTest, self).run_test_backward(backend_config)
+
+ def run_test_double_backward(self, backend_config):
+ # Skipping Double Backward -> Test PASS
+ if self.skip_double_backward_test:
+ return
+
+ super(OpTest, self).run_test_backward(backend_config)
+
class ChainerOpTest(OpTest):
|
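The distinction the diff relies on, raising unittest.SkipTest reports a test as skipped while returning early reports it as passed, can be seen in a minimal example:

import unittest

class Example(unittest.TestCase):
    def test_reported_as_skipped(self):
        # Raising SkipTest (or calling self.skipTest) marks the test SKIPPED.
        raise unittest.SkipTest("backward test disabled")

    def test_reported_as_passed(self):
        # Returning early runs no assertions, so the test is reported as PASSED.
        return

if __name__ == "__main__":
    unittest.main()
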
Fix tf2 lite nano camera support
Fix tflite input/output conversion. | import os
+import numpy as np
import tensorflow as tf
from donkeycar.parts.keras import KerasPilot
@@ -55,7 +56,7 @@ class TFLitePilot(KerasPilot):
self.input_shape = self.input_details[0]['shape']
def inference(self, img_arr, other_arr):
- input_data = img_arr.reshape(self.input_shape)
+ input_data = np.float32(img_arr.reshape(self.input_shape))
self.interpreter.set_tensor(self.input_details[0]['index'], input_data)
self.interpreter.invoke()
@@ -66,9 +67,9 @@ class TFLitePilot(KerasPilot):
output_data = self.interpreter.get_tensor(tensor['index'])
outputs.append(output_data[0][0])
- steering = outputs[0]
+ steering = float(outputs[0])
if len(outputs) > 1:
- throttle = outputs[1]
+ throttle = float(outputs[1])
return steering, throttle
|
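The fix boils down to feeding the TFLite interpreter float32 data and converting its outputs back to plain Python floats. A standalone sketch of that flow; the model path is a placeholder, the zero input is only for illustration, and a float32 input model is assumed:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="pilot.tflite")  # placeholder path
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

img_arr = np.zeros(input_details[0]["shape"], dtype=np.uint8)
# TFLite expects float32 input here, so cast before setting the tensor.
interpreter.set_tensor(input_details[0]["index"], np.float32(img_arr))
interpreter.invoke()
# Cast outputs to plain Python floats before handing them to downstream parts.
steering = float(interpreter.get_tensor(output_details[0]["index"])[0][0])
print(steering)
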
[Test] rename to test_brevitas_trained_lfc_w1a1_pytorch
since w1a2 is coming | @@ -15,15 +15,15 @@ from finn.core.modelwrapper import ModelWrapper
export_onnx_path = "test_output_lfc.onnx"
# TODO get from config instead, hardcoded to Docker path for now
-trained_lfc_checkpoint = (
+trained_lfc_w1a1_checkpoint = (
"/workspace/brevitas_cnv_lfc/pretrained_models/LFC_1W1A/checkpoints/best.tar"
)
-def test_brevitas_trained_lfc_pytorch():
+def test_brevitas_trained_lfc_w1a1_pytorch():
# load pretrained weights into LFC-w1a1
lfc = LFC(weight_bit_width=1, act_bit_width=1, in_bit_width=1).eval()
- checkpoint = torch.load(trained_lfc_checkpoint, map_location="cpu")
+ checkpoint = torch.load(trained_lfc_w1a1_checkpoint, map_location="cpu")
lfc.load_state_dict(checkpoint["state_dict"])
# load one of the test vectors
raw_i = get_data("finn", "data/onnx/mnist-conv/test_data_set_0/input_0.pb")
@@ -49,9 +49,9 @@ def test_brevitas_trained_lfc_pytorch():
assert np.isclose(produced, expected, atol=1e-4).all()
-def test_brevitas_to_onnx_export_and_exec():
+def test_brevitas_to_onnx_export_and_exec_lfc_w1a1():
lfc = LFC(weight_bit_width=1, act_bit_width=1, in_bit_width=1)
- checkpoint = torch.load(trained_lfc_checkpoint, map_location="cpu")
+ checkpoint = torch.load(trained_lfc_w1a1_checkpoint, map_location="cpu")
lfc.load_state_dict(checkpoint["state_dict"])
bo.export_finn_onnx(lfc, (1, 1, 28, 28), export_onnx_path)
model = ModelWrapper(export_onnx_path)
|
Add comment formatting to NCL_conOncon_1.py
A line beginning with 20 or more '#' characters tells sphinx-gallery to
create a text cell containing the following comment lines instead of a
code block when generating Jupyter notebooks from Python scripts. | @@ -5,20 +5,32 @@ conOncon_1
Plots/Contours/Lines
"""
+################################################################################
+#
+# import modules
+#
import numpy as np
import xarray as xr
-
import matplotlib.pyplot as plt
import matplotlib.ticker as tic
from matplotlib.ticker import ScalarFormatter
-
from pprint import pprint
+
+
+################################################################################
+#
+# open data file and extract variables
+#
ds = xr.open_dataset('../../data/netcdf_files/mxclim.nc')
U = ds.U[0,:,:]
V = ds.V[0,:,:]
+################################################################################
+#
+# create plot
+#
plt.rcParams['figure.figsize'] = [20, 20]
fig, ax = plt.subplots()
fig.suptitle('Ensemble Average 1987-89', fontsize=22, fontweight='bold', y=0.94)
@@ -43,17 +55,20 @@ plt.title('') # Someone (xarray?) generates their own title
+################################################################################
#
# Hard code the y-axis (pressure) level tic locations. Necessary?
#
ax.yaxis.set_minor_locator(plt.FixedLocator([30,50,70, 150, 200, 250, 300, 400, 500, 700, 850]))
+################################################################################
#
# Change formatter or else we tick values formatted in exponential form
#
ax.yaxis.set_major_formatter(ScalarFormatter())
ax.yaxis.set_minor_formatter(ScalarFormatter())
+################################################################################
#
# Tweak label sizes, etc.
#
@@ -62,6 +77,7 @@ ax.xaxis.label.set_size(20)
ax.tick_params('both', length=20, width=2, which='major', labelsize=20)
ax.tick_params('both', length=10, width=1, which='minor', labelsize=20)
+################################################################################
#
# This is how we get the y-axis on the right side plotted to show geopotential height.
# Currently we're using bogus values for height 'cause we haven't figured out how to make this work.
@@ -74,6 +90,7 @@ axRHS.set_ylabel('Height (km)')
axRHS.yaxis.label.set_size(20)
+################################################################################
#
# add a title to the plot axes. What happens if xarray data set doesn't have long_name and units?
#
|
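The convention described in the message, a comment line of 20 or more '#' characters starting a new text cell, is easiest to see in a stripped-down gallery script. Everything below is a generic sketch rather than the actual example file:

"""
Example gallery script
======================

The module docstring becomes the first text cell of the generated notebook.
"""
import numpy as np

################################################################################
# Lines following a run of 20+ '#' characters are rendered as a text cell by
# sphinx-gallery; the code that follows goes into a new code cell.
x = np.linspace(0, 1, 5)

################################################################################
# Another text cell, followed by another code cell.
print(x.sum())
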
api/iodevices/Ev3devSensor: add device index
This makes it easy to find the sensor to access additional features through the ev3dev lego-sensor and lego-port classes. | @@ -35,7 +35,13 @@ class LUMPDevice():
class Ev3devSensor():
- """Read values with an ev3dev-compatible sensor."""
+ """Read values of an ev3dev-compatible sensor."""
+
+ sensor_index = 0
+ """Index of the ev3dev sysfs `lego-sensor`_ class."""
+
+ port_index = 0
+ """Index of the ev3dev sysfs `lego-port`_ class."""
def __init__(self, port):
"""
@@ -49,7 +55,7 @@ class Ev3devSensor():
"""Read values at a given mode.
Arguments:
- mode (``str``): Mode name.
+ mode (``str``): `Mode name`_.
Returns:
``tuple``: Values read from the sensor.
|
Fix link to custom tutorial
Missing an s | @@ -30,7 +30,7 @@ Ubuntu 18.04. We have a bunch of tutorials to get you started.
tutorials/jetstream
tutorials/google
-- :ref:`tutorial/custom`.
+- :ref:`tutorials/custom`.
You should use this if your cloud provider does not already have a direct tutorial,
or if you have experience setting up servers.
|
TST: Add test for 1d hetrd
This test could not be run previously due to a numpy bug (see
as the minimum numpy version (1.14) required by SciPy now includes the
fix, we are able to add the required test. | @@ -709,21 +709,12 @@ class TestHetrd(object):
@pytest.mark.parametrize('real_dtype,complex_dtype',
zip(REAL_DTYPES, COMPLEX_DTYPES))
- def test_hetrd(self, real_dtype, complex_dtype):
- n = 3
+ @pytest.mark.parametrize('n', (1, 3))
+ def test_hetrd(self, n, real_dtype, complex_dtype):
A = np.zeros((n, n), dtype=complex_dtype)
hetrd, hetrd_lwork = \
get_lapack_funcs(('hetrd', 'hetrd_lwork'), (A,))
- # Tests for n = 1 currently fail with
- # ```
- # ValueError: failed to create intent(cache|hide)|optional array--
- # must have defined dimensions but got (0,)
- # ```
- # This is a NumPy issue
- # <https://github.com/numpy/numpy/issues/9617>.
- # TODO Once the minimum NumPy version is past 1.14, test for n=1
-
# some upper triangular array
A[np.triu_indices_from(A)] = (
np.arange(1, n*(n+1)//2+1, dtype=real_dtype)
|
Griewank function
description added; bug fixes | @@ -32,7 +32,7 @@ class MyBenchmark(object):
for i in range(10):
Algorithm = DifferentialEvolutionAlgorithm(
- 10, 40, 10000, 0.5, 0.9, 'whitley')
+ 10, 40, 10000, -32.768, 32.768, 'griewank')
Best = Algorithm.run()
logger.info(Best)
|
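For context, the Griewank benchmark referenced above is the standard test function f(x) = 1 + sum(x_i^2 / 4000) - prod(cos(x_i / sqrt(i))), with its global minimum of 0 at the origin. The library selects its own built-in implementation via the 'griewank' keyword; the sketch below is just the textbook definition:

import math

def griewank(x):
    # f(x) = 1 + sum(x_i^2 / 4000) - prod(cos(x_i / sqrt(i))), minimum 0 at x = 0.
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return 1.0 + s - p

print(griewank([0.0] * 10))  # -> 0.0
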
Fix the signal handling to work on Windows
This includes handling the SIGBREAK signal, which is raised on Windows,
resulting in the correct graceful shutdown. | @@ -1371,11 +1371,9 @@ class Quart(Scaffold):
def _signal_handler(*_: Any) -> None:
shutdown_event.set()
- try:
- loop.add_signal_handler(signal.SIGTERM, _signal_handler)
- loop.add_signal_handler(signal.SIGINT, _signal_handler)
- except (AttributeError, NotImplementedError):
- pass
+ for signal_name in {"SIGINT", "SIGTERM", "SIGBREAK"}:
+ if hasattr(signal, signal_name):
+ signal.signal(getattr(signal, signal_name), _signal_handler)
server_name = self.config.get("SERVER_NAME")
sn_host = None
|
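The portable pattern from the diff, registering a handler for every signal name that exists on the current platform, in self-contained form:

import signal
import threading

shutdown_event = threading.Event()

def _signal_handler(signum, frame):
    shutdown_event.set()

# SIGBREAK only exists on Windows, so guard each registration with hasattr
# instead of assuming every signal name is available everywhere.
for signal_name in ("SIGINT", "SIGTERM", "SIGBREAK"):
    if hasattr(signal, signal_name):
        signal.signal(getattr(signal, signal_name), _signal_handler)
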
Add FW versions for 2018 Lexus NX Hybrid
New Ecu Engine and Ecu Esp for NX Hybrid My18 European Edition, I'm From Italy | @@ -1225,10 +1225,12 @@ FW_VERSIONS = {
(Ecu.engine, 0x7e0, None): [
b'\x0237882000\x00\x00\x00\x00\x00\x00\x00\x00A4701000\x00\x00\x00\x00\x00\x00\x00\x00',
b'\x0237841000\x00\x00\x00\x00\x00\x00\x00\x00A4701000\x00\x00\x00\x00\x00\x00\x00\x00',
+ b'\x0237886000\x00\x00\x00\x00\x00\x00\x00\x00A4701000\x00\x00\x00\x00\x00\x00\x00\x00',
],
(Ecu.esp, 0x7b0, None): [
b'F152678160\x00\x00\x00\x00\x00\x00',
b'F152678170\x00\x00\x00\x00\x00\x00',
+ b'F152678171\x00\x00\x00\x00\x00\x00',
],
(Ecu.dsu, 0x791, None): [
b'881517804300\x00\x00\x00\x00',
|
[validation] Replace required_openstack in neutron/network.py
Replaces old required_openstack decorator with new validation.add
in neutron/network.py. | @@ -156,7 +156,7 @@ class CreateAndUpdateSubnets(utils.NeutronScenario):
@validation.number("subnets_per_network", minval=1, integer_only=True)
@validation.required_services(consts.Service.NEUTRON)
[email protected]_openstack(users=True)
[email protected]("required_platform", platform="openstack", users=True)
@scenario.configure(context={"cleanup": ["neutron"]},
name="NeutronNetworks.create_and_show_subnets")
class CreateAndShowSubnets(utils.NeutronScenario):
@@ -305,7 +305,7 @@ class CreateAndDeleteRouters(utils.NeutronScenario):
@validation.required_services(consts.Service.NEUTRON)
[email protected]_openstack(users=True)
[email protected]("required_platform", platform="openstack", users=True)
@scenario.configure(context={"cleanup": ["neutron"]},
name="NeutronNetworks.set_and_clear_router_gateway")
class SetAndClearRouterGateway(utils.NeutronScenario):
|
Fix mapped dimensions in simple initiator
Note: Unsure why it was not done this way beforehand. | @@ -119,8 +119,8 @@ class SimpleMeasurementInitiator(GaussianInitiator):
prior_state_vector = self.prior_state.state_vector.copy()
prior_covar = self.prior_state.covar.copy()
- mapped_dimensions, _ = np.nonzero(
- model_matrix.T @ np.ones((model_matrix.shape[0], 1)))
+ mapped_dimensions = measurement_model.mapping
+
prior_state_vector[mapped_dimensions, :] = 0
prior_covar[mapped_dimensions, :] = 0
C0 = inv_model_matrix @ model_covar @ inv_model_matrix.T
|
Scheduler: drop _task suffix from method names
It's redundant. After all, this scheduler cannot schedule anything else. | @@ -18,7 +18,7 @@ class Scheduler:
"""Return True if a task with the given `task_id` is currently scheduled."""
return task_id in self._scheduled_tasks
- def schedule_task(self, task_id: t.Hashable, task: t.Awaitable) -> None:
+ def schedule(self, task_id: t.Hashable, task: t.Awaitable) -> None:
"""Schedule the execution of a task."""
self._log.trace(f"Scheduling task #{task_id}...")
@@ -32,7 +32,7 @@ class Scheduler:
self._scheduled_tasks[task_id] = task
self._log.debug(f"Scheduled task #{task_id} {id(task)}.")
- def cancel_task(self, task_id: t.Hashable) -> None:
+ def cancel(self, task_id: t.Hashable) -> None:
"""Unschedule the task identified by `task_id`. Log a warning if the task doesn't exist."""
self._log.trace(f"Cancelling task #{task_id}...")
@@ -51,7 +51,7 @@ class Scheduler:
self._log.debug("Unscheduling all tasks")
for task_id in self._scheduled_tasks.copy():
- self.cancel_task(task_id)
+ self.cancel(task_id)
def _task_done_callback(self, task_id: t.Hashable, done_task: asyncio.Task) -> None:
"""
|
Add json "strict" parameter to CoreNLP
This allows the (optional) processing of text control characters without raising errors. | @@ -48,6 +48,7 @@ class CoreNLPServer:
java_options=None,
corenlp_options=None,
port=None,
+ strict_json=True,
):
if corenlp_options is None:
@@ -98,6 +99,7 @@ class CoreNLPServer:
self.corenlp_options = corenlp_options
self.java_options = java_options or ["-mx2g"]
+ self.strict_json = strict_json
def start(self, stdout="devnull", stderr="devnull"):
"""Starts the CoreNLP server
@@ -246,7 +248,7 @@ class GenericCoreNLPParser(ParserI, TokenizerI, TaggerI):
response.raise_for_status()
- return response.json()
+ return response.json(strict=self.strict_json)
def raw_parse_sents(
self, sentences, verbose=False, properties=None, *args, **kwargs
|
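The underlying behaviour comes from the json module: with the default strict=True, raw control characters inside strings raise an error, while strict=False lets them through. A small check:

import json

payload = '{"text": "line one\\nliteral tab:\t"}'  # contains a raw tab control character

try:
    json.loads(payload)                 # strict=True (default) rejects raw control chars
except json.JSONDecodeError as exc:
    print("strict parse failed:", exc)

print(json.loads(payload, strict=False))  # lenient parse succeeds
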
Azure : Run Nightly builds
Azure will now run nightly builds on `master` as long as changes have
been made to the repo since the last build. | trigger:
- master
+schedules:
+- cron: "0 23 * * *"
+ displayName: Nightly
+ always: false
+ branches:
+ include:
+ - master
+
jobs :
# We build on linux using a Docker container generated by GafferHQ/build.
|
[DOC] Update .all-contributorsrc with doc contribution by arampuria19
Update .all-contributorsrc with doc contribution by arampuria19 | "code",
"ideas"
]
+ },
+ {
+ "login": "arampuria19",
+ "name": "Akshat Rampuria",
+ "profile": "https://github.com/arampuria19",
+ "contributions": [
+ "doc"
+ ]
}
]
}
|
[BUG FIX] Fix: ray version check failed due to extra output in STDOUT
Use files to transfer information across processes | @@ -5,6 +5,7 @@ import logging
import os
import shutil
import sys
+import tempfile
from typing import Dict, List, Optional, Tuple
from ray._private.async_compat import asynccontextmanager, create_task, get_running_loop
@@ -167,10 +168,22 @@ class PipProcessor:
"""
async def _get_ray_version_and_path() -> Tuple[str, str]:
+ with tempfile.TemporaryDirectory(
+ prefix="check_ray_version_tempfile"
+ ) as tmp_dir:
+ ray_version_path = os.path.join(tmp_dir, "ray_version.txt")
check_ray_cmd = [
python,
"-c",
- "import ray; print(ray.__version__, ray.__path__[0])",
+ """
+import ray
+with open(r"{ray_version_path}", "wt") as f:
+ f.write(ray.__version__)
+ f.write(" ")
+ f.write(ray.__path__[0])
+ """.format(
+ ray_version_path=ray_version_path
+ ),
]
if _WIN32:
env = os.environ.copy()
@@ -179,6 +192,11 @@ class PipProcessor:
output = await check_output_cmd(
check_ray_cmd, logger=logger, cwd=cwd, env=env
)
+ logger.info(
+ f"try to write ray version information in: {ray_version_path}"
+ )
+ with open(ray_version_path, "rt") as f:
+ output = f.read()
# print after import ray may have [0m endings, so we strip them by *_
ray_version, ray_path, *_ = [s.strip() for s in output.split()]
return ray_version, ray_path
|
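The general trick, having the child process write its answer to a temporary file instead of stdout so stray prints from imports cannot corrupt it, can be sketched independently of Ray:

import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp_dir:
    result_path = os.path.join(tmp_dir, "version.txt")
    # The child may print anything it likes to stdout; only the file matters.
    code = (
        "import sys\n"
        f"with open(r'{result_path}', 'wt') as f:\n"
        "    f.write(sys.version.split()[0])\n"
    )
    subprocess.check_call([sys.executable, "-c", code])
    with open(result_path, "rt") as f:
        print("child reported:", f.read())
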
Documentation added for the two new setting variables
BLOG_ABSTRACT_CKEDITOR and BLOG_POST_TEXT_CKEDITOR | @@ -97,7 +97,8 @@ Global Settings
* BLOG_PLUGIN_TEMPLATE_FOLDERS: (Sub-)folder from which the plugin templates are loaded. The default folder is ``plugins``. It goes into the ``djangocms_blog`` template folder (or, if set, the folder named in the app hook). This allows, e.g., different templates for showing a post list as tables, columns, ... . New templates have the same names as the standard templates in the ``plugins`` folder (``latest_entries.html``, ``authors.html``, ``tags.html``, ``categories.html``, ``archive.html``). Default behavior corresponds to this setting being ``( ("plugins", _("Default template") )``. To add new templates add to this setting, e.g., ``('timeline', _('Vertical timeline') )``.
* BLOG_META_DESCRIPTION_LENGTH: Maximum length for the Meta description field (default: ``320``)
* BLOG_META_TITLE_LENGTH: Maximum length for the Meta title field (default: ``70``)
-
+* BLOG_ABSTRACT_CKEDITOR: Configuration for the CKEditor of the abstract field (as per https://github.com/divio/djangocms-text-ckeditor/#customizing-htmlfield-editor)
+* BLOG_POST_TEXT_CKEDITOR: Configuration for the CKEditor of the post content field
******************
Read-only settings
|
tests: don't install s3cmd on containerized setup
The s3cmd package should only be installed on non-containerized
deployment. | vars:
s3cmd_cmd: "s3cmd --no-ssl --access_key={{ system_access_key }} --secret_key={{ system_secret_key }} --host={{ rgw_multisite_endpoint_addr }}:8080 --host-bucket={{ rgw_multisite_endpoint_addr }}:8080"
tasks:
-
- - name: check if it is Atomic host
- stat: path=/run/ostree-booted
- register: stat_ostree
- check_mode: no
-
- - name: set fact for using Atomic host
- set_fact:
- is_atomic: '{{ stat_ostree.stat.exists }}'
-
- name: install s3cmd
package:
name: s3cmd
state: present
register: result
until: result is succeeded
- when: not is_atomic | bool
+ when: not containerized_deployment | default(false) | bool
- name: generate and upload a random 10Mb file - containerized deployment
command: >
|
improve installation instructions
Added separate subsections for each OS (OS X, Linux, Windows) to the installation guide. | @@ -51,26 +51,45 @@ Received file written to README.md
```$ pip install magic-wormhole```
-Or on macOS with `homebrew`: `$ brew install magic-wormhole`
-Or on Debian 9 and Ubuntu 17.04+ with `apt`:
+### OS X
+
+On OS X, you may need to install `pip` and
+run `$ xcode-select --install` to get GCC.
+
+Or with `homebrew`:
+
+`$ brew install magic-wormhole`
+
+### Linux
+
+On Debian 9 and Ubuntu 17.04+ with `apt`:
+
```$ sudo apt install magic-wormhole```
On previous versions of the Debian/Ubuntu systems, or if you want to install
-the latest version, you may first need `apt-get install python-pip
-build-essential python-dev libffi-dev libssl-dev` before running `pip`. On
-Fedora it's `dnf install python-pip python-devel libffi-devel openssl-devel
-gcc-c++ libtool redhat-rpm-config`. On OS-X, you may need to install `pip`
-and run `xcode-select --install` to get GCC. On Windows, python2 may work
-better than python3. On older systems, `pip install --upgrade pip` may be
-necessary to get a version that can compile all the dependencies.
-
-If you get errors like `fatal error: sodium.h: No such file or directory` on
+the latest version, you may first need:
+
+`$ apt-get install python-pip build-essential python-dev libffi-dev libssl-dev`.
+
+On Fedora:
+
+`$ dnf install python-pip python-devel libffi-devel openssl-devel gcc-c++
+libtool redhat-rpm-config`.
+
+Note: If you get errors like `fatal error: sodium.h: No such file or directory` on
Linux, either use `SODIUM_INSTALL=bundled pip install magic-wormhole`, or try
installing the `libsodium-dev` / `libsodium-devel` package. These work around
a bug in pynacl which gets confused when the libsodium runtime is installed
(e.g. `libsodium13`) but not the development package.
+### Windows
+
+On Windows, python2 may work
+better than python3. On older systems, `$ pip install --upgrade pip` may
+be necessary to get a version that can compile all the dependencies.
+
+
Developers can clone the source tree and run `tox` to run the unit tests on
all supported (and installed) versions of python: 2.7, 3.4, 3.5, and 3.6.
|
ceph-nfs: fix ceph_nfs_ceph_user variable
The ceph_nfs_ceph_user variable is a string for the ceph-nfs role but a
list in the ceph-client role.
introduced a confusion between both variable types in the ceph-nfs
role for external ceph with ganesha.
Closes: | - name: copy rgw keyring when deploying internal ganesha with external ceph cluster
copy:
- src: "/etc/ceph/{{ cluster }}.{{ ceph_nfs_ceph_user.name }}.keyring"
+ src: "/etc/ceph/{{ cluster }}.{{ ceph_nfs_ceph_user }}.keyring"
dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_hostname }}/keyring"
mode: '0600'
owner: "{{ ceph_uid if containerized_deployment else 'ceph' }}"
|
tests/http_tests.py: Fix unittest.skipTest calls
'unittest.skipTest' does not exist. Replace it with `self.skipTest`. | @@ -652,8 +652,8 @@ class QueryStringParamsTestCase(HttpbinTestCase):
"""Test fetch method with no parameters."""
r = http.fetch(uri=self.url, params={})
if r.status == 503: # T203637
- unittest.skipTest('503: Service currently not available for '
- + self.url)
+ self.skipTest(
+ '503: Service currently not available for ' + self.url)
self.assertEqual(r.status, 200)
content = json.loads(r.text)
@@ -668,8 +668,8 @@ class QueryStringParamsTestCase(HttpbinTestCase):
"""
r = http.fetch(uri=self.url, params={'fish&chips': 'delicious'})
if r.status == 503: # T203637
- unittest.skipTest('503: Service currently not available for '
- + self.url)
+ self.skipTest(
+ '503: Service currently not available for ' + self.url)
self.assertEqual(r.status, 200)
content = json.loads(r.text)
@@ -684,8 +684,8 @@ class QueryStringParamsTestCase(HttpbinTestCase):
"""
r = http.fetch(uri=self.url, params={'fish%26chips': 'delicious'})
if r.status == 503: # T203637
- unittest.skipTest('503: Service currently not available for '
- + self.url)
+ self.skipTest(
+ '503: Service currently not available for ' + self.url)
self.assertEqual(r.status, 200)
content = json.loads(r.text)
|