message | diff
---|---
fix dagit download mocks script
Summary: Not really sure how this changed in the last week, but...
Test Plan: Ran `make download-mocks`, saw toy mocks return valid pipeline snapshots
Reviewers: dgibson, sashank | @@ -47,7 +47,7 @@ MOCKS.push(
variables: {
pipelineSelector: {
pipelineName: name,
- repositoryLocationName: "<<in_process>>",
+ repositoryLocationName: "toys_repository",
repositoryName: "toys_repository"
},
rootHandleID: "",
|
Handle non-numeric features in ``fast_predict()``.
Also add logger creation that was missing before. | @@ -99,7 +99,15 @@ def fast_predict(
predictions for the input features. It contains the following
columns: "raw", "scale", "raw_trim", "scale_trim", "raw_trim_round",
and "scale_trim_round".
+
+ Raises
+ ------
+ ValueError
+ If ``input_features`` contains any non-numeric features.
"""
+ # initialize a logger if none provided
+ logger = logger if logger else logging.getLogger(__name__)
+
# instantiate a feature preprocessor
preprocessor = FeaturePreprocessor(logger=logger)
@@ -108,7 +116,12 @@ def fast_predict(
df_input_features["spkitemid"] = "RESPONSE"
# preprocess the input features so that they match what the model expects
- df_processed_features, _ = preprocessor.preprocess_new_data(df_input_features, df_feature_info)
+ try:
+ df_processed_features, _ = preprocessor.preprocess_new_data(
+ df_input_features, df_feature_info
+ )
+ except ValueError:
+ raise ValueError("Input features must not contain non-numeric values.") from None
# read the post-processing parameters
trim_min = df_postprocessing_params["trim_min"].values[0]
|
Remove log translation function calls from ironic.db
Remove all calls to logging functions from the db directory of ironic.
In this instance, there were very few, and all were to _LW.
Partial-bug: | @@ -34,7 +34,7 @@ from sqlalchemy.orm import joinedload
from sqlalchemy import sql
from ironic.common import exception
-from ironic.common.i18n import _, _LW
+from ironic.common.i18n import _
from ironic.common import states
from ironic.conf import CONF
from ironic.db import api
@@ -783,8 +783,8 @@ class Connection(api.Connection):
if nodes:
nodes = ', '.join(nodes)
LOG.warning(
- _LW('Cleared reservations held by %(hostname)s: '
- '%(nodes)s'), {'hostname': hostname, 'nodes': nodes})
+ 'Cleared reservations held by %(hostname)s: '
+ '%(nodes)s', {'hostname': hostname, 'nodes': nodes})
@oslo_db_api.retry_on_deadlock
def clear_node_target_power_state(self, hostname):
@@ -802,9 +802,9 @@ class Connection(api.Connection):
if nodes:
nodes = ', '.join(nodes)
LOG.warning(
- _LW('Cleared target_power_state of the locked nodes in '
+ 'Cleared target_power_state of the locked nodes in '
'powering process, their power state can be incorrect: '
- '%(nodes)s'), {'nodes': nodes})
+ '%(nodes)s', {'nodes': nodes})
def get_active_driver_dict(self, interval=None):
query = model_query(models.Conductor)
|
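The removal above relies on `LOG.warning` accepting a format string plus a mapping and deferring the interpolation itself, so no `_LW` translation wrapper is needed. A minimal standalone sketch of that idiom (the logger name and values are illustrative, not from the patch):

```python
import logging

logging.basicConfig(level=logging.WARNING)
LOG = logging.getLogger("ironic.db.example")  # illustrative logger name

hostname, nodes = "conductor-1", "node-a, node-b"

# logging interpolates the %(...)s placeholders lazily from the mapping,
# so the plain format string can be passed directly without _LW().
LOG.warning('Cleared reservations held by %(hostname)s: %(nodes)s',
            {'hostname': hostname, 'nodes': nodes})
```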
TST: Update `travis-test.sh` for C99
Most of this was already done, but we were still raising an error for
declaration after a statement because the Windows Python 2.7 compiler
did not allow it. We can fix this now as NumPy >= 1.17 has dropped
Python 2.7 support. | @@ -25,8 +25,7 @@ if [ -n "$PYTHON_OPTS" ]; then
fi
# make some warnings fatal, mostly to match windows compilers
-werrors="-Werror=declaration-after-statement -Werror=vla "
-werrors+="-Werror=nonnull -Werror=pointer-arith"
+werrors="-Werror=vla -Werror=nonnull -Werror=pointer-arith"
# build with c99 by default
|
Update opioidper1000.json
removed oxycodone 30mg MR as no longer considered high dose in Opioids Aware | "twice daily (120mg daily dose), whereas MST 30mg are not, as the daily dose is 60mg. ",
"We have not included preparations used for breakthrough pain, e.g. Oramorph, or opioid injections which tend to be ",
"used more commonly in palliative care. We have calculated morphine equivalencies using ",
- "the <a href='https://www.rcoa.ac.uk/faculty-of-pain-medicine/opioids-aware/structured-approach-to-prescribing/dose-equivalents-and-changing-opioids'> ",
+ "the updated August 2020 <a href='https://www.rcoa.ac.uk/faculty-of-pain-medicine/opioids-aware/structured-approach-to-prescribing/dose-equivalents-and-changing-opioids'> ",
"tables available </a> from the Faculty of Pain Medicine, Royal College of Anaesthetists."
],
"tags": [
"0407020AD%AI # Oxycodone HCl_Tab 80mg M/R (brands and generic)",
"0407020AD%AP # Oxycodone HCl_Tab 120mg M/R (brands and generic)",
"0407020AD%AQ # Oxycodone HCl_Tab 60mg M/R (brands and generic)",
- "0407020AD%AR # Oxycodone HCl_Tab 30mg M/R (brands and generic)",
"0407020AF%AD # Oxycodone HCl/NaloxoneHCl_Tab 40/20mgM/R (brands and generic)",
"0407020AG%AF # Tapentadol HCl_Tab 200mg M/R (brands and generic)",
"0407020AG%AG # Tapentadol HCl_Tab 250mg M/R (brands and generic)",
|
Add OpenAPIHub to Development
PR inactive
resolve | @@ -581,6 +581,7 @@ API | Description | Auth | HTTPS | CORS |
| [OneSignal](https://documentation.onesignal.com/docs/onesignal-api) | Self-serve customer engagement solution for Push Notifications, Email, SMS & In-App | `apiKey` | Yes | Unknown |
| [OOPSpam](https://oopspam.com/) | Multiple spam filtering service | No | Yes | Yes |
| [Open Page Rank](https://www.domcop.com/openpagerank/) | API for calculating and comparing metrics of different websites using Page Rank algorithm | `apiKey` | Yes | Unknown |
+| [OpenAPIHub](https://hub.openapihub.com/) | The All-in-one API Platform | `X-Mashape-Key` | Yes | Unknown |
| [OpenGraphr](https://opengraphr.com/docs/1.0/overview) | Really simple API to retrieve Open Graph data from an URL | `apiKey` | Yes | Unknown |
| [oyyi](https://oyyi.xyz/docs/1.0) | API for Fake Data, image/video conversion, optimization, pdf optimization and thumbnail generation | No | Yes | Yes |
| [PageCDN](https://pagecdn.com/docs/public-api) | Public API for javascript, css and font libraries on PageCDN | `apiKey` | Yes | Yes |
|
Check read-after-write consistency
Perform a read right after a write to check if the consistency is respected when read-after-write is performed from the same host | @@ -67,7 +67,9 @@ def verify_directory_correctly_shared(remote_command_executor, mount_dir, schedu
head_node_file = random_alphanumeric()
logging.info(f"Writing HeadNode File: {head_node_file}")
remote_command_executor.run_remote_command(
- "touch {mount_dir}/{head_node_file}".format(mount_dir=mount_dir, head_node_file=head_node_file)
+ "touch {mount_dir}/{head_node_file} && cat {mount_dir}/{head_node_file}".format(
+ mount_dir=mount_dir, head_node_file=head_node_file
+ )
)
# Submit a "Write" job to each partition
@@ -78,7 +80,9 @@ def verify_directory_correctly_shared(remote_command_executor, mount_dir, schedu
for partition in partitions:
compute_file = "{}-{}".format(partition, random_alphanumeric())
logging.info(f"Writing Compute File: {compute_file} from {partition}")
- job_command = "touch {mount_dir}/{compute_file}".format(mount_dir=mount_dir, compute_file=compute_file)
+ job_command = "touch {mount_dir}/{compute_file} && cat {mount_dir}/{compute_file}".format(
+ mount_dir=mount_dir, compute_file=compute_file
+ )
result = scheduler_commands.submit_command(job_command, partition=partition)
job_id = scheduler_commands.assert_job_submitted(result.stdout)
scheduler_commands.wait_job_completed(job_id)
|
Update kombu to 4.2.2.post1
Fixes | @@ -252,9 +252,9 @@ isodate==0.6.0 \
jmespath==0.9.3 \
--hash=sha256:f11b4461f425740a1d908e9a3f7365c3d2e569f6ca68a2ff8bc5bcd9676edd63
# kombu is required by celery
-kombu==4.2.2 \
- --hash=sha256:9bf7d37b93249b76a03afb7bbcf7149a358b6079ca2431e725414b1caa10922c \
- --hash=sha256:52763f41077e25fe7e2f17b8319d8a7b7ab953a888c49d9e4e0464fceb716896
+kombu==4.2.2.post1 \
+ --hash=sha256:1ef049243aa05f29e988ab33444ec7f514375540eaa8e0b2e1f5255e81c5e56d \
+ --hash=sha256:3c9dca2338c5d893f30c151f5d29bfb81196748ab426d33c362ab51f1e8dbf78
# lxml is required by pyquery
lxml==4.2.5 \
--hash=sha256:fa39ea60d527fbdd94215b5e5552f1c6a912624521093f1384a491a8ad89ad8b \
|
Only retrieve released pcluster AMIs.
This change only applies to integration tests for released versions.
In the develop code, we should conditionally retrieve AMI depending if the test is running as released test or develop test. | @@ -42,7 +42,7 @@ OS_TO_REMARKABLE_AMI_NAME_OWNER_MAP = {
}
# Get official pcluster AMIs or get from dev account
-PCLUSTER_AMI_OWNERS = ["amazon", "self"]
+PCLUSTER_AMI_OWNERS = ["amazon"]
# Pcluster AMIs are latest ParallelCluster official AMIs that align with cli version
OS_TO_PCLUSTER_AMI_NAME_OWNER_MAP = {
"alinux2": {"name": "amzn2-hvm-*-*", "owners": PCLUSTER_AMI_OWNERS},
|
Get rid of "sending ack" in peer logging
In one case this isn't accurate anymore, and in the other cases it
isn't interesting. | @@ -40,8 +40,8 @@ class GetPeersRequestHandler(Handler):
def handle(self, connection_id, message_content):
request = GetPeersRequest()
request.ParseFromString(message_content)
- LOGGER.debug("got peers request message "
- "from %s. sending ack", connection_id)
+
+ LOGGER.debug("Got peers request message from %s", connection_id)
self._gossip.send_peers(connection_id)
@@ -61,8 +61,8 @@ class GetPeersResponseHandler(Handler):
def handle(self, connection_id, message_content):
response = GetPeersResponse()
response.ParseFromString(message_content)
- LOGGER.debug("got peers response message "
- "from %s. sending ack", connection_id)
+
+ LOGGER.debug("Got peers response message from %s", connection_id)
LOGGER.debug("PEERS RESPONSE ENDPOINTS: %s", response.peer_endpoints)
@@ -78,8 +78,8 @@ class PeerRegisterHandler(Handler):
def handle(self, connection_id, message_content):
request = PeerRegisterRequest()
request.ParseFromString(message_content)
- LOGGER.debug("got peer register message "
- "from %s. sending ack", connection_id)
+
+ LOGGER.debug("Got peer register message from %s", connection_id)
ack = NetworkAcknowledgement()
try:
@@ -101,8 +101,9 @@ class PeerUnregisterHandler(Handler):
def handle(self, connection_id, message_content):
request = PeerUnregisterRequest()
request.ParseFromString(message_content)
- LOGGER.debug("got peer unregister message "
- "from %s. sending ack", connection_id)
+
+ LOGGER.debug("Got peer unregister message from %s", connection_id)
+
self._gossip.unregister_peer(connection_id)
ack = NetworkAcknowledgement()
ack.status = ack.OK
|
[syncBN]
test update to resolve
Using identical learning rate for both DDP with sync BN and single process BN.
The previous configuration leaves the impression that sync BN requires adjusting lr
in the script, which is not true. | @@ -92,6 +92,8 @@ inp_bn = inp_t.clone().requires_grad_()
grad_bn = grad_output_t.clone().detach()
out_bn = bn(inp_bn)
out_bn.backward(grad_bn)
+for param in bn.parameters():
+ param.grad = param.grad / args.world_size
bn_opt = optim.SGD(bn.parameters(), lr=1.0)
sbn = apex.parallel.SyncBatchNorm(feature_size).cuda()
@@ -103,7 +105,7 @@ if args.fp16:
if args.fp64:
sbn.double()
sbn = DDP(sbn)
-sbn_opt = optim.SGD(sbn.parameters(), lr=1.0*args.world_size)
+sbn_opt = optim.SGD(sbn.parameters(), lr=1.0)
inp_sbn = inp_t.clone().requires_grad_()
grad_sbn = grad_output_t.clone().detach()
out_sbn = sbn(inp_sbn[start:finish])
@@ -159,11 +161,7 @@ sbn_opt.step()
if args.local_rank == 0:
compare("comparing bn vs sbn bias: ", bn.bias, sbn.module.bias, error)
- compare("comparing bn vs ref bias: ", bn.bias, bias_r.view(-1) - grad_bias_r, error)
- sbn_result = compare("comparing sbn vs ref bias: ", sbn.module.bias, bias_r.view(-1) - grad_bias_r, error) and sbn_result
compare("comparing bn vs sbn weight: ", bn.weight, sbn.module.weight, error)
- compare("comparing bn vs ref weight: ", bn.weight, (weight_r.view(-1) - grad_weight_r), error)
- sbn_result = compare("comparing sbn vs ref weight: ", sbn.module.weight, (weight_r.view(-1) - grad_weight_r), error) and sbn_result
if sbn_result:
|
Fix for bug in loading dotted modules from the command line
for when you are trying to do something like:
$ ginga --loglevel=40 --stderr --plugins=stginga.plugins.DQInspect | @@ -16,23 +16,28 @@ __all__ = ['ModuleManager']
def my_import(name, path=None):
"""Return imported module for the given name."""
- #mod = __import__(name)
- if path is None:
- fp, path, description = imp.find_module(name)
+ if path is not None:
+ description = ('.py', 'r', imp.PY_SOURCE)
+
+ with open(path, 'r') as fp:
+ mod = imp.load_module(name, fp, path, description)
else:
- fp = open(path, 'r')
- description = ('.py', 'r', imp.PY_SOURCE)
+ components = name.split('.')
+ for comp in components:
+ fp, path, description = imp.find_module(comp, path=path)
try:
- mod = imp.load_module(name, fp, path, description)
+ mod = imp.load_module(comp, fp, path, description)
+ if hasattr(mod, '__path__'):
+ path = mod.__path__
+ else:
+ path = mod.__file__
finally:
+ if fp is not None:
fp.close()
- components = name.split('.')
- for comp in components[1:]:
- mod = getattr(mod, comp)
return mod
|
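For context, the walk over `name.split('.')` above is what resolves a spec such as `stginga.plugins.DQInspect` one package at a time. A rough modern sketch of the same idea using `importlib` instead of the long-deprecated `imp` module; the `getattr` fallback is an assumption about handling a trailing attribute name, not part of the patch:

```python
import importlib

def load_dotted(spec):
    """Resolve a dotted spec like 'stginga.plugins.DQInspect'."""
    try:
        # import_module walks the parent packages itself.
        return importlib.import_module(spec)
    except ImportError:
        # If the last component is an attribute (e.g. a class) rather than
        # a submodule, import the parent and pull the attribute off it.
        parent, _, attr = spec.rpartition(".")
        module = importlib.import_module(parent)
        return getattr(module, attr)

print(load_dotted("os.path"))
```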
Shut up pylint some more
since it kept suggesting to add an import that is not needed nor used.
If you add the import and then lint using .pylintrc, it then complains about the unused import. | '''
Tests for loop state(s)
'''
-
# Import Python Libs
from __future__ import absolute_import, print_function, unicode_literals
+# Disable pylint complaining about incompatible python3 code and suggesting
+# from salt.ext.six.moves import range
+# which is not needed (nor used) as the range is used as iterable for both py2 and py3.
+# pylint: disable=incompatible-py3-code
+
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase
|
fix typo in comment
fix typo | @@ -394,7 +394,7 @@ class HfArgumentParser(ArgumentParser):
def parse_yaml_file(self, yaml_file: str, allow_extra_keys: bool = False) -> Tuple[DataClass, ...]:
"""
- Alternative helper method that does not use `argparse` at all, instead loading a json file and populating the
+ Alternative helper method that does not use `argparse` at all, instead loading a yaml file and populating the
dataclass types.
Args:
|
Added protocol change for cip03
Fixed some errors in compose format when reset was not enabled | "testnet_block_index": 2287021
},
+ "cip03": {
+ "minimum_version_major": 9,
+ "minimum_version_minor": 59,
+ "minimum_version_revision": 6,
+ "block_index": 753000,
+ "testnet_block_index": 2288000
+ },
"issuance_asset_serialization_format": {
"mainnet":{
"1":{
"value":">QQ??If"
},
- "1000000":{
+ "753000":{
"value":">QQ???If"
}
},
"1":{
"value":">QQ??If"
},
- "2196579":{
+ "2288000":{
"value":">QQ???If"
}
}
"1":{
"value":26
},
- "1000000":{
+ "753000":{
"value":27
}
},
"1":{
"value":26
},
- "2196579":{
+ "2288000":{
"value":27
}
}
"1":{
"value":">QQ?B"
},
- "1000000":{
+ "753000":{
"value":">QQ??B"
}
},
"1":{
"value":">QQ?B"
},
- "2196579":{
+ "2288000":{
"value":">QQ??B"
}
}
"1":{
"value":18
},
- "1000000":{
+ "753000":{
"value":19
}
},
"1":{
"value":18
},
- "2196579":{
+ "2288000":{
"value":19
}
}
|
lightbox: update help text for `v` shortcut.
New behavior of the `v` shortcut updated in documentation.
Follow up to | </tr>
<tr>
<td class="hotkey">v</td>
- <td class="definition">{% trans %}Show images in message{% endtrans %}</td>
+ <td class="definition">{% trans %}Show images in thread{% endtrans %}</td>
</tr>
<tr id="edit-message-hotkey-help">
<td class="hotkey">i then Enter</td>
|
Adds argument validation for group size argument
Also renames the class variable for the iterable to match other built-ins. | @@ -314,12 +314,18 @@ class groupsof{object}:
"""groupsof(n, iterable) returns an iterator that returns groups of length n.
If the length of the iterable is not divisible by n, the last group may be of size < n.
"""
- __slots__ = ("_grp_size", "_iterable")
+ __slots__ = ("_grp_size", "_iter")
def __init__(self, n, iterable):
- self._grp_size, self._iterable = n, iterable
+ self._iter = iterable
+ try:
+ self._grp_size = int(n)
+ except ValueError:
+ raise TypeError("group size must be an int: %s" % (n, ))
+ if self._grp_size <= 0:
+ raise ValueError("group size (%s) must be positive" % (self._grp_size, ))
def __iter__(self):
loop = True
- iterator = iter(self._iterable)
+ iterator = iter(self._iter)
while loop:
group = [next(iterator)]
for _ in range(1, self._grp_size):
@@ -330,13 +336,13 @@ def __iter__(self):
break
yield tuple(group)
def __len__(self):
- return _coconut.len(self._iterable)
+ return _coconut.len(self._iter)
def __repr__(self):
- return "groupsof(%r)" % (_coconut.repr(self._iterable), )
+ return "groupsof(%r)" % (_coconut.repr(self._iter), )
def __reduce__(self):
- return (self.__class__, (self._grp_size, self._iterable))
+ return (self.__class__, (self._grp_size, self._iter))
def __copy__(self):
- return self.__class__(self._grp_size, _coconut.copy.copy(self._iterable))
+ return self.__class__(self._grp_size, _coconut.copy.copy(self._iter))
def __fmap__(self, func):
return _coconut_map(func, self)
def recursive_iterator(func):
|
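A standalone sketch of the same grouping behaviour and argument validation in plain Python, using `itertools.islice`; the function name is illustrative and not the compiled `groupsof` class itself:

```python
from itertools import islice

def groups_of(n, iterable):
    """Yield tuples of length n; the final group may be shorter."""
    try:
        size = int(n)
    except ValueError:
        raise TypeError("group size must be an int: %r" % (n,))
    if size <= 0:
        raise ValueError("group size (%s) must be positive" % size)
    it = iter(iterable)
    while True:
        group = tuple(islice(it, size))
        if not group:
            return
        yield group

print(list(groups_of(3, range(8))))  # [(0, 1, 2), (3, 4, 5), (6, 7)]
```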
Small fix of the Depthwise Convolution example in python3
* fix for python3
fix for python3
* Update depthwise_conv2d_map_test.py
remove sys.append | @@ -78,12 +78,14 @@ def test_depthwise_conv2d_map():
index_w = pad_left_scipy - pad_left_tvm
for i in range(batch):
for j in range(out_channel):
- depthwise_conv2d_scipy[i,j,:,:] = signal.convolve2d(input_np[i,j/channel_multiplier,:,:], np.rot90(filter_np[j/channel_multiplier,j%channel_multiplier,:,:], 2),
+ depthwise_conv2d_scipy[i,j,:,:] = signal.convolve2d(input_np[i,j // channel_multiplier,:,:],
+ np.rot90(filter_np[j // channel_multiplier,j%channel_multiplier,:,:], 2),
mode='same')[index_h:in_height:stride_h, index_w:in_width:stride_w]
if padding == 'VALID':
for i in range(batch):
for j in range(out_channel):
- depthwise_conv2d_scipy[i,j,:,:] = signal.convolve2d(input_np[i,j/channel_multiplier,:,:], np.rot90(filter_np[j/channel_multiplier,j%channel_multiplier,:,:], 2),
+ depthwise_conv2d_scipy[i,j,:,:] = signal.convolve2d(input_np[i,j // channel_multiplier,:,:],
+ np.rot90(filter_np[j // channel_multiplier,j%channel_multiplier,:,:], 2),
mode='valid')[0:(in_height - filter_height + 1):stride_h, 0:(in_width - filter_height + 1):stride_w]
for c in range(out_channel):
scale_shift_scipy[:,c,:,:] = depthwise_conv2d_scipy[:,c,:,:] * scale_np[c] + shift_np[c]
@@ -132,7 +134,7 @@ def test_depthwise_conv2d_map():
np.testing.assert_allclose(depthwise_conv2d_tvm.asnumpy(), depthwise_conv2d_scipy, rtol=1e-5)
np.testing.assert_allclose(scale_shift_tvm.asnumpy(), scale_shift_scipy, rtol=1e-5)
np.testing.assert_allclose(relu_tvm.asnumpy(), relu_scipy, rtol=1e-5)
- print "success"
+ print("success")
with tvm.build_config(auto_unroll_max_step=32,
auto_unroll_min_depth=0,
|
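The core of the fix is the division operator: in Python 2, `/` on two integers floors, while in Python 3 it returns a float, which then breaks array indexing. A two-line illustration:

```python
j, channel_multiplier = 5, 2
print(j / channel_multiplier)   # 2.5 in Python 3, a float, unusable as an index
print(j // channel_multiplier)  # 2, integer floor division on both Python 2 and 3
```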
Fixing issue # 40167 with file.replace where the diff output does not
display correctly. | @@ -2231,8 +2231,8 @@ def replace(path,
check_perms(path, None, pre_user, pre_group, pre_mode)
if show_changes:
- orig_file_as_str = ''.join([salt.utils.to_str(x) for x in orig_file])
- new_file_as_str = ''.join([salt.utils.to_str(x) for x in new_file])
+ orig_file_as_str = [salt.utils.to_str(x) for x in orig_file]
+ new_file_as_str = [salt.utils.to_str(x) for x in new_file]
return ''.join(difflib.unified_diff(orig_file_as_str, new_file_as_str))
return has_changes
|
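`difflib.unified_diff` expects two sequences of lines; joining everything into a single string first makes it diff character by character, which is the display problem being fixed. A small sketch of the corrected usage (the sample lines are invented):

```python
import difflib

orig_lines = ["alpha\n", "beta\n", "gamma\n"]
new_lines = ["alpha\n", "BETA\n", "gamma\n"]

# Passing the lists of lines (not joined strings) yields a normal unified diff.
print(''.join(difflib.unified_diff(orig_lines, new_lines,
                                   fromfile='before', tofile='after')))
```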
2 step re-execution test
Summary: had to prove to myself this worked as expected
Test Plan: it's a test
Reviewers: max, prha, nate | @@ -620,6 +620,41 @@ def add_one(num):
assert reexecution_result.result_for_solid('add_one').output_value() == 2
+def test_two_step_reexecution():
+ @lambda_solid
+ def return_one():
+ return 1
+
+ @lambda_solid
+ def add_one(num):
+ return num + 1
+
+ @pipeline
+ def two_step_reexec():
+ add_one(add_one(return_one()))
+
+ instance = DagsterInstance.ephemeral()
+ pipeline_result = execute_pipeline(
+ two_step_reexec, environment_dict={'storage': {'filesystem': {}}}, instance=instance
+ )
+ assert pipeline_result.success
+ assert pipeline_result.result_for_solid('add_one_2').output_value() == 3
+
+ reexecution_result = execute_pipeline(
+ two_step_reexec,
+ environment_dict={'storage': {'filesystem': {}}},
+ run_config=RunConfig(
+ previous_run_id=pipeline_result.run_id,
+ step_keys_to_execute=['add_one.compute', 'add_one_2.compute'],
+ ),
+ instance=instance,
+ )
+
+ assert reexecution_result.success
+ assert reexecution_result.result_for_solid('return_one').output_value() == None
+ assert reexecution_result.result_for_solid('add_one_2').output_value() == 3
+
+
def test_optional():
@solid(output_defs=[OutputDefinition(Int, 'x'), OutputDefinition(Int, 'y', is_optional=True)])
def return_optional(_context):
|
fix test_incremental_load_hidden_core
Cache state after load with missing cores should be "incomplete"
rather than "running". | @@ -110,8 +110,11 @@ def test_incremental_load_hidden_core():
with TestRun.step("Load cache"):
cache = casadm.load_cache(cache_dev)
- if cache.get_status() is not CacheStatus.running:
- TestRun.fail(f"Cache {cache.cache_id} should be running but is {cache.get_status()}.")
+ if cache.get_status() is not CacheStatus.incomplete:
+ TestRun.fail(
+ f"Cache {cache.cache_id} should be incomplete but is "
+ f"{cache.get_status()}."
+ )
for core in cache.get_core_devices():
if core.get_status() is not CoreStatus.active:
TestRun.fail(f"Core {core.core_id} should be Active but is {core.get_status()}.")
|
Fix asyncio create_task syntax for compatibility with python < 3.7
The `asyncio.create_task` method was introduced with python 3.7. To be
compatible with older version, we should use
`asyncio.get_event_loop().create_task` | @@ -153,7 +153,10 @@ class IOCache:
raise e
queue.task_done()
- tasks = [asyncio.create_task(worker(queue)) for _ in range(n_workers)]
+ tasks = [
+ asyncio.get_event_loop().create_task(worker(queue))
+ for _ in range(n_workers)
+ ]
for job in jobs:
for f in chain(job.input, job.expanded_output):
|
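A small self-contained sketch of the compatibility issue: `asyncio.create_task()` only exists on Python 3.7+, while scheduling through the loop object also works on 3.5/3.6. The worker coroutine and values are illustrative:

```python
import asyncio
import sys

async def worker(n):
    await asyncio.sleep(0)
    return n * 2

async def main():
    loop = asyncio.get_event_loop()
    if sys.version_info >= (3, 7):
        tasks = [asyncio.create_task(worker(i)) for i in range(3)]
    else:
        # Pre-3.7 equivalent used by the patch above.
        tasks = [loop.create_task(worker(i)) for i in range(3)]
    print(await asyncio.gather(*tasks))  # [0, 2, 4]

asyncio.get_event_loop().run_until_complete(main())
```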
Incidents: define allowed roles and emoji
These serve as whitelists, i.e. any reaction using an emoji not
explicitly allowed, or from a user not specifically allowed,
will be rejected. Such reactions will be removed by the bot. | @@ -4,7 +4,7 @@ from enum import Enum
from discord.ext.commands import Cog
from bot.bot import Bot
-from bot.constants import Emojis
+from bot.constants import Emojis, Roles
log = logging.getLogger(__name__)
@@ -17,6 +17,10 @@ class Signal(Enum):
INVESTIGATING = Emojis.incident_investigating
+ALLOWED_ROLES: t.Set[int] = {Roles.moderators, Roles.admins, Roles.owners}
+ALLOWED_EMOJI: t.Set[str] = {signal.value for signal in Signal}
+
+
class Incidents(Cog):
"""Automation for the #incidents channel."""
|
modules/nilrt_ip.py: Add default value for gateway
If connman cannot provide a default gateway, this should
have a default value (0.0.0.0) to keep consistency between the various ip
modules. | @@ -156,7 +156,9 @@ def _get_service_info(service):
state = service_info.get_property('State')
if state == 'ready' or state == 'online':
data['up'] = True
- data['ipv4'] = {}
+ data['ipv4'] = {
+ 'gateway': '0.0.0.0'
+ }
ipv4 = 'IPv4'
if service_info.get_property('IPv4')['Method'] == 'manual':
ipv4 += '.Configuration'
|
Adding the decode('utf-8') method
This will fix the problem when trying to send the
lintrc file to celery because the JSON serializer
expects a valid JSON string type instead of bytes
when Python 3.6+ is used.
Ref: | @@ -56,7 +56,7 @@ def get_lintrc(repo, ref):
"""
log.info('Fetching lintrc file')
response = repo.file_contents('.lintrc', ref)
- return response.decoded
+ return response.decoded.decode('utf-8')
def register_hook(repo, hook_url):
|
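The underlying issue is that on Python 3 the fetched file contents arrive as `bytes`, and a JSON serializer only accepts `str`. A quick illustration; the payload is made up:

```python
import json

payload = b'{"fixers": ["flake8"]}'   # e.g. file contents fetched as bytes

# json.dumps on the raw bytes would raise TypeError: bytes are not JSON
# serializable. Decoding first gives a plain str the serializer accepts.
print(json.dumps({"lintrc": payload.decode("utf-8")}))
```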
Replace Send Option with SocketCommand
Replace the use of Option for messages sent, with a SocketCommand. This
makes the action performed explicit, particularly with respect to the
Shutdown command. | @@ -63,13 +63,18 @@ impl MessageConnection<ZmqMessageSender> for ZmqMessageConnection {
}
}
+#[derive(Debug)]
+enum SocketCommand {
+ Send(Message),
+ Shutdown
+}
#[derive(Clone)]
pub struct ZmqMessageSender {
context: zmq::Context,
address: String,
inbound_router: InboundRouter,
- outbound_sender: Option<SyncSender<Option<Message>>>,
+ outbound_sender: Option<SyncSender<SocketCommand>>,
}
impl ZmqMessageSender {
@@ -119,7 +124,7 @@ impl MessageSender for ZmqMessageSender {
let future = MessageFuture::new(self.inbound_router.expect_reply(
String::from(correlation_id)));
- sender.send(Some(msg)).unwrap();
+ sender.send(SocketCommand::Send(msg)).unwrap();
Ok(future)
} else {
@@ -137,7 +142,7 @@ impl MessageSender for ZmqMessageSender {
msg.set_correlation_id(String::from(correlation_id));
msg.set_content(Vec::from(contents));
- match sender.send(Some(msg)) {
+ match sender.send(SocketCommand::Send(msg)) {
Ok(_) => Ok(()),
Err(_) => Err(SendError::UnknownError)
}
@@ -148,7 +153,7 @@ impl MessageSender for ZmqMessageSender {
fn close(&mut self) {
if let Some(ref sender) = self.outbound_sender.take() {
- sender.send(None).unwrap();
+ sender.send(SocketCommand::Shutdown).unwrap();
}
}
}
@@ -190,7 +195,7 @@ impl InboundRouter {
struct SendReceiveStream {
address: String,
socket: zmq::Socket,
- outbound_recv: Receiver<Option<Message>>,
+ outbound_recv: Receiver<SocketCommand>,
inbound_router: InboundRouter
}
@@ -198,7 +203,7 @@ const POLL_TIMEOUT: i64 = 10;
impl SendReceiveStream {
fn new(context: zmq::Context, address: &str,
- outbound_recv: Receiver<Option<Message>>,
+ outbound_recv: Receiver<SocketCommand>,
inbound_router: InboundRouter)
-> Self
{
@@ -240,12 +245,12 @@ impl SendReceiveStream {
match self.outbound_recv.recv_timeout(
Duration::from_millis(POLL_TIMEOUT as u64))
{
- Ok(Some(msg)) => {
+ Ok(SocketCommand::Send(msg)) => {
let message_bytes = protobuf::Message::write_to_bytes(&msg).unwrap();
trace!("Sending {} bytes", message_bytes.len());
self.socket.send(&message_bytes, 0).unwrap();
}
- Ok(None) => {
+ Ok(SocketCommand::Shutdown) => {
trace!("Shutdown Signal Received");
break;
}
|
Make sure environment settings are merged in for requests
Fixes | @@ -67,12 +67,13 @@ class RequestsHttpConnection(Connection):
url = '%s?%s' % (url, urlencode(params or {}))
start = time.time()
- try:
request = requests.Request(method=method, url=url, data=body)
prepared_request = self.session.prepare_request(request)
- response = self.session.send(
- prepared_request,
- timeout=timeout or self.timeout)
+ settings = self.session.merge_environment_settings(prepared_request.url, {}, None, None, None)
+ send_kwargs = {'timeout': timeout or self.timeout}
+ send_kwargs.update(settings)
+ try:
+ response = self.session.send(prepared_request, **send_kwargs)
duration = time.time() - start
raw_data = response.text
except Exception as e:
|
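The point of the change is that `Session.send()` skips the environment merging that `Session.request()` normally does, so proxy settings, `REQUESTS_CA_BUNDLE`, and similar are ignored unless merged in explicitly. A minimal sketch of the pattern outside the connection class; the URL and timeout are illustrative:

```python
import requests

session = requests.Session()
prepared = session.prepare_request(
    requests.Request(method="GET", url="https://example.com"))

# Pick up proxies / verify / cert from the environment before sending.
settings = session.merge_environment_settings(prepared.url, {}, None, None, None)
send_kwargs = {"timeout": 10}
send_kwargs.update(settings)

response = session.send(prepared, **send_kwargs)
print(response.status_code)
```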
fix header delete profile
moves the "Delete profile" header inside the if-block | <input type="submit" name="cancel" value="{% trans 'Cancel' %}" class="btn" />
</form>
- <h2>{% trans "Delete profile" %}</h2>
-
{% if settings.PROFILE_DELETE %}
+ <h2>{% trans "Delete profile" %}</h2>
+
<p>
{% trans 'If you want to remove all your account information please proceed by clicking the button below.' %}
</p>
|
ci: remove the composer image test
This test, which compiles and compares image-info from manifests, is
redundant with the tests from manifest-db. | @@ -55,27 +55,6 @@ RPM:
- aws/rhel-9.1-nightly-aarch64
INTERNAL_NETWORK: "true"
-Composer Tests:
- stage: test
- extends: .terraform
- script:
- - schutzbot/deploy.sh
- - /usr/libexec/tests/osbuild-composer/image_tests.sh
- parallel:
- matrix:
- - RUNNER:
- - aws/fedora-35-x86_64
- - aws/fedora-35-aarch64
- - aws/fedora-36-x86_64
- - aws/fedora-36-aarch64
- - aws/rhel-8.6-ga-x86_64
- - aws/rhel-8.6-ga-aarch64
- - aws/rhel-8.7-nightly-x86_64
- - aws/rhel-8.7-nightly-aarch64
- - aws/rhel-9.1-nightly-x86_64
- - aws/rhel-9.1-nightly-aarch64
- INTERNAL_NETWORK: "true"
-
OSTree Images:
stage: test
extends: .terraform
|
Support HTTP/2 Server Push via ASGI
This is only attempted (and possible) if the server supports server
push and announces so in the extensions dictionary. Currently Hypercorn
is the only ASGI server to support this extension. | @@ -62,6 +62,15 @@ class ASGIHTTPConnection:
'status': response.status_code,
'headers': headers,
})
+
+ if 'http.response.push' in self.scope.get('extensions', {}):
+ for path in response.push_promises:
+ await send({
+ 'type': 'http.response.push',
+ 'path': path,
+ 'headers': [],
+ })
+
async for data in response.response:
await send({
'type': 'http.response.body',
|
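For context, a stripped-down ASGI application showing the same guard: the push message is only sent when the server advertises the `http.response.push` extension in the connection scope. The path, headers, and body here are illustrative:

```python
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/html")],
    })
    # Only attempt a server push if the ASGI server says it supports it.
    if "http.response.push" in scope.get("extensions", {}):
        await send({
            "type": "http.response.push",
            "path": "/static/app.css",
            "headers": [],
        })
    await send({"type": "http.response.body", "body": b"<h1>hello</h1>"})
```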
Add proxy support for rutracker plugin
[new] rutracker: Added proxy support | @@ -16,7 +16,6 @@ from flexget.event import event
from flexget.db_schema import versioned_base
from flexget.plugin import PluginError
from flexget.manager import Session
-from requests import Session as RSession
from requests.auth import AuthBase
from requests.utils import dict_from_cookiejar
from requests.exceptions import RequestException
@@ -67,13 +66,11 @@ class RutrackerAuth(AuthBase):
if you pass cookies (CookieJar) to constructor then authentication will be bypassed and cookies will be just set
"""
- @staticmethod
- def update_base_url():
+ def update_base_url(self):
url = None
for mirror in MIRRORS:
try:
- s = RSession()
- response = s.get(mirror, timeout=2)
+ response = self.requests.get(mirror, timeout=2)
if response.ok:
url = mirror
break
@@ -87,15 +84,15 @@ class RutrackerAuth(AuthBase):
def try_authenticate(self, payload):
for _ in range(5):
- s = RSession()
- s.post('{}/forum/login.php'.format(self.base_url), data=payload)
- if s.cookies and len(s.cookies) > 0:
- return s.cookies
+ self.requests.post('{}/forum/login.php'.format(self.base_url), data=payload)
+ if self.requests.cookies and len(self.requests.cookies) > 0:
+ return self.requests.cookies
else:
sleep(3)
raise PluginError('unable to obtain cookies from rutracker')
- def __init__(self, login, password, cookies=None, db_session=None):
+ def __init__(self, requests, login, password, cookies=None, db_session=None):
+ self.requests = requests
self.base_url = self.update_base_url()
if cookies is None:
log.debug('rutracker cookie not found. Requesting new one')
@@ -156,7 +153,7 @@ class RutrackerUrlrewrite(object):
cookies = self.try_find_cookie(db_session, username)
if username not in self.auth_cache:
auth_handler = RutrackerAuth(
- username, config['password'], cookies, db_session)
+ task.requests, username, config['password'], cookies, db_session)
self.auth_cache[username] = auth_handler
else:
auth_handler = self.auth_cache[username]
|
do not integrate thumbnail
Storing thumbnail representation in the DB doesn't make sense.
Eventually there will be a pre-integrator that could allow this with profiles usage. | @@ -164,7 +164,7 @@ class ExtractReview(publish.Extractor):
"ext": "jpg",
"files": os.path.basename(thumbnail_path),
"stagingDir": staging_dir,
- "tags": ["thumbnail"]
+ "tags": ["thumbnail", "delete"]
})
def _check_and_resize(self, processed_img_names, source_files_pattern,
|
validate: fail in check_devices at the right task
see for details.
Fixes: | - name: devices validation
when: devices is defined
block:
- - name: validate devices is actually a device
+ - name: get devices information
parted:
device: "{{ item }}"
unit: MiB
register: devices_parted
+ failed_when: False
with_items: "{{ devices }}"
- name: fail if one of the devices is not a device
fail:
msg: "{{ item }} is not a block special file!"
- when: item.failed
+ when: item.rc is defined
with_items: "{{ devices_parted.results }}"
|
doc/common/extensions: Basic styling.
Turns it into a table with images, using builtin RTD classes for CSS. | :mod:`umath <umath>` -- Math functions
============================================================
+This MicroPython module is similar to the `math module`_ in Python.
+It is available on these hubs:
+
.. pybricks-requirements:: stm32-extra stm32-float
-This MicroPython module is similar to the `math module`_ in Python.
.. module:: umath
|
Remove duplicate fields from the registration model
[#PLAT-1061] | @@ -15,7 +15,7 @@ from website import settings
from website.archiver import ARCHIVER_INITIATED
from osf.models import (
- OSFUser, RegistrationSchema, RegistrationApproval,
+ OSFUser, RegistrationSchema,
Retraction, Embargo, DraftRegistrationApproval,
EmbargoTerminationApproval,
)
@@ -40,11 +40,6 @@ class Registration(AbstractNode):
registered_schema = models.ManyToManyField(RegistrationSchema)
registered_meta = DateTimeAwareJSONField(default=dict, blank=True)
- # TODO Add back in once dependencies are resolved
- registration_approval = models.ForeignKey(RegistrationApproval, null=True, blank=True, on_delete=models.CASCADE)
- retraction = models.ForeignKey(Retraction, null=True, blank=True, on_delete=models.CASCADE)
- embargo = models.ForeignKey(Embargo, null=True, blank=True, on_delete=models.CASCADE)
-
registered_from = models.ForeignKey('self',
related_name='registrations',
on_delete=models.SET_NULL,
|
Add redirect to notion privacy location.
Since this is a backwards compatibility redirect, the page should
redirect the user rather than rely on the cloudflare worker. | @@ -5,3 +5,8 @@ icon: fab fa-discord
---
You should be redirected. If you are not, [please click here](https://www.notion.so/pythondiscord/Python-Discord-Privacy-ee2581fea4854ddcb1ebc06c1dbb9fbd).
+
+<script>
+ // Redirect visitor to the privacy page
+ window.location.href = "https://www.notion.so/pythondiscord/Python-Discord-Privacy-ee2581fea4854ddcb1ebc06c1dbb9fbd";
+</script>
|
fix: ignore unpicklable hooks
If any custom app uses an import statement in hooks.py, everything breaks.
hooks.py, while being a Python file, is still only supposed to be used for
configuration.
This PR ignores unpicklable members of hooks.py | @@ -1432,6 +1432,8 @@ def get_doc_hooks():
@request_cache
def _load_app_hooks(app_name: str | None = None):
+ import types
+
hooks = {}
apps = [app_name] if app_name else get_installed_apps(sort=True)
@@ -1447,9 +1449,13 @@ def _load_app_hooks(app_name: str | None = None):
if not request:
raise SystemExit
raise
- for key in dir(app_hooks):
+
+ def _is_valid_hook(obj):
+ return not isinstance(obj, (types.ModuleType, types.FunctionType, type))
+
+ for key, value in inspect.getmembers(app_hooks, predicate=_is_valid_hook):
if not key.startswith("_"):
- append_hook(hooks, key, getattr(app_hooks, key))
+ append_hook(hooks, key, value)
return hooks
|
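The trick in the patch is the `inspect.getmembers` predicate: modules, functions, and classes that leak in through `import` statements are filtered out, so only plain configuration values are collected. A self-contained sketch of that filter; the fake hooks container and its attributes are invented for illustration:

```python
import inspect
import types

class FakeHooks:
    """Stand-in for an app's hooks.py module (illustration only)."""
    import os                          # would break pickling; should be ignored
    app_name = "example_app"
    doc_events = {"User": {"on_update": "example_app.handlers.on_update"}}
    def helper():                      # functions should be ignored too
        return None

def _is_valid_hook(obj):
    # Skip modules, functions and classes; keep plain config values.
    return not isinstance(obj, (types.ModuleType, types.FunctionType, type))

hooks = {key: value
         for key, value in inspect.getmembers(FakeHooks, predicate=_is_valid_hook)
         if not key.startswith("_")}
print(hooks)   # only app_name and doc_events survive
```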
Update __init__.py
Import CienaSAOSDriver | @@ -51,6 +51,7 @@ from Exscript.protocols.drivers.vxworks import VxworksDriver
from Exscript.protocols.drivers.ericsson_ban import EricssonBanDriver
from Exscript.protocols.drivers.rios import RIOSDriver
from Exscript.protocols.drivers.eos import EOSDriver
+from Exscript.protocols.drivers.cienasaos import CienaSAOSDriver
driver_classes = []
drivers = []
|
Fix user deletion
An improper check causes problems when trying to delete a user. This fixes that error. | @@ -1098,9 +1098,9 @@ def admin_manageuser():
data = jdata['data']
if jdata['action'] == 'delete_user':
- if username == current_user.username:
- return make_response(jsonify( { 'status': 'error', 'msg': 'You cannot delete yourself.' } ), 400)
user = User(username=data)
+ if user.username == current_user.username:
+ return make_response(jsonify( { 'status': 'error', 'msg': 'You cannot delete yourself.' } ), 400)
result = user.delete()
if result:
history = History(msg='Delete username {0}'.format(data), created_by=current_user.username)
|
read res with CovMat regul
Load Pst with a *.res file that contains rows for CovMat regularization observation groups.
loads res file, only keeping columns that match headers
strips Cov, Mat. and na strings
sets cols to float | @@ -253,8 +253,12 @@ def read_resfile(resfile):
header = line.lower().strip().split()
break
res_df = pd.read_csv(
- f, header=None, names=header, sep=r"\s+", converters=converters
+ f, header=None, names=header, sep=r"\s+", converters=converters,
+ usecols=header #on_bad_lines='skip'
)
+ # strip the "Cov.", "Mat." and "na" strings that PEST records in the *.res file; make float
+ float_cols = [x for x in res_df.columns if x not in ['name','group']]
+ res_df[float_cols] = res_df[float_cols].replace(['Cov.', 'Mat.', 'na'], np.nan).astype(float)
res_df.index = res_df.name
f.close()
return res_df
|
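The cleanup step can be seen in isolation with a tiny frame that mimics a `*.res` file carrying `Cov.` / `Mat.` / `na` placeholders in otherwise numeric columns; the sample data is invented:

```python
import numpy as np
import pandas as pd

res_df = pd.DataFrame({
    "name": ["obs1", "obs2", "regul_cov"],
    "group": ["head", "head", "regul"],
    "measured": ["1.2", "3.4", "Cov."],
    "residual": ["0.1", "na", "Mat."],
})

# Replace the PEST placeholder strings with NaN, then cast to float.
float_cols = [c for c in res_df.columns if c not in ["name", "group"]]
res_df[float_cols] = (res_df[float_cols]
                      .replace(["Cov.", "Mat.", "na"], np.nan)
                      .astype(float))
print(res_df.dtypes)
```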
create-project.py: fix a Python3 syntax error
In Python3, octal notation for integer literals requires the "0o"
prefix.
TN: | @@ -38,7 +38,7 @@ def generate(lang_name):
with open(filename, 'w') as f:
f.write(template.format(**template_args))
- os.chmod('manage.py', 0755)
+ os.chmod('manage.py', 0o755)
MANAGE_TEMPLATE = '''#! /usr/bin/env python
|
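For reference, the old `0755` literal is a `SyntaxError` under Python 3; only the notation changes, not the value:

```python
import stat

# 0o755 == rwxr-xr-x == 493 decimal; "0755" without the "o" no longer parses.
mode = stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH
print(oct(mode), mode == 0o755)   # 0o755 True

# os.chmod('manage.py', 0o755) is what the generator script above runs.
```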
fix(docs): format docs
### Summary & Motivation
### How I Tested These Changes | @@ -57,11 +57,9 @@ Let's get started by downloading the [`tutorial_dbt_dagster` example](https://gi
<a href="https://docs.getdbt.com/reference/warehouse-setups/bigquery-setup">
BigQuery
</a>
- ,{" "}
- <a href="https://docs.getdbt.com/reference/warehouse-setups/redshift-setup">
+ , <a href="https://docs.getdbt.com/reference/warehouse-setups/redshift-setup">
Redshift
- </a>
- ,{" "}
+ </a>,{" "}
<a href="https://docs.getdbt.com/reference/warehouse-setups/snowflake-setup">
Snowflake
</a>
|
Update ESP32_Code.ino
Including the chip ID for the ESP32 (48 bits long or 6 bytes). Avoiding the use of String for the stored ID. Avoiding the use of the String class at all in the future would be great. | @@ -76,6 +76,8 @@ volatile int wifiStatus = 0;
volatile int wifiPrev = WL_CONNECTED;
volatile bool OTA_status = false;
+volatile char ID[23]; // DUCO MCU ID
+
void WiFireconnect( void * pvParameters ) {
int n = 0;
unsigned long previousMillis = 0;
@@ -266,7 +268,7 @@ void Task1code( void * pvParameters ) {
Serial.println(F("CORE1 Reconnection successful."));
}
client1.flush();
- client1.print(String(iJob1) + "," + String(HashRate1) + ",ESP32 CORE1 Miner v2.3," + String(rigname)); // Send result to server
+ client1.print(String(iJob1) + "," + String(HashRate1) + ",ESP32 CORE1 Miner v2.3," + String(rigname) + "," + String((char*)ID)); // Send result to server
Serial.println(F("CORE1 Posting result and waiting for feedback."));
while(!client1.available()){
if (!client1.connected()) {
@@ -420,7 +422,7 @@ void Task2code( void * pvParameters ) {
Serial.println(F("CORE2 Reconnection successful."));
}
client.flush();
- client.print(String(iJob) + "," + String(HashRate) + ",ESP32 CORE2 Miner v2.3," + String(rigname)); // Send result to server
+ client.print(String(iJob) + "," + String(HashRate) + ",ESP32 CORE2 Miner v2.3," + String(rigname) + "," + String((char*)ID)); // Send result to server
Serial.println(F("CORE2 Posting result and waiting for feedback."));
while(!client.available()){
if (!client.connected()) {
@@ -457,10 +459,15 @@ void setup() {
//disableCore1WDT();
Serial.begin(500000); // Start serial connection
Serial.println("\n\nDuino-Coin ESP32 Miner v2.3");
+
WiFi.mode(WIFI_STA); // Setup ESP in client mode
btStop();
WiFi.begin(ssid, password); // Connect to wifi
+ uint64_t chipid = ESP.getEfuseMac(); // Getting chip ID
+ uint16_t chip = (uint16_t)(chipid >> 32); // Preparing for printing a 64 bit value (it's actually 48 bits long) into a char array
+ snprintf((char*)ID, 23, "DUCOID%04X%08X", chip, (uint32_t)chipid); // Storing the 48 bit chip ID into a char array.
+
OTA_status = false;
// Port defaults to 3232
|
Update Alien 3 (USA, Europe) (Action Replay).cht
Change 99 to 63 hex. 99 results in 0, and the weapon, grenade, and flame are not usable. 63 shows visually the same as 99 but works and can shoot. | cheats = 9
cheat0_desc = "Infinite Pulse Rifle Ammo"
-cheat0_code = "FF0844:99"
+cheat0_code = "FF0844:63"
cheat0_enable = false
cheat1_desc = "Infinite Time"
@@ -9,15 +9,15 @@ cheat1_code = "FF0866:60"
cheat1_enable = false
cheat2_desc = "Infinite Fuel"
-cheat2_code = "FF0846:99"
+cheat2_code = "FF0846:63"
cheat2_enable = false
cheat3_desc = "Infinite Grenades"
-cheat3_code = "FF0848:99"
+cheat3_code = "FF0848:63"
cheat3_enable = false
cheat4_desc = "Infinite Hand Grenades"
-cheat4_code = "FF084A:99"
+cheat4_code = "FF084A:63"
cheat4_enable = false
cheat5_desc = "Infinite Energy"
|
libcurl/7.71.1 : fix wolfssl path not specified to configure
Similar to openssl, the wolfssl path needs to be specified when
executing the configure script for libcurl's with-wolfssl option | @@ -199,7 +199,8 @@ class LibcurlConan(ConanFile):
openssl_path = self.deps_cpp_info["openssl"].rootpath.replace("\\", "/")
params.append("--with-ssl=%s" % openssl_path)
elif self.options.with_wolfssl:
- params.append("--with-wolfssl")
+ wolfssl_path = self.deps_cpp_info["wolfssl"].rootpath.replace("\\", "/")
+ params.append("--with-wolfssl=%s" % wolfssl_path)
params.append("--without-ssl")
else:
params.append("--without-ssl")
|
Minor cleanup
* Use consistent quoting in verbose optimization output for the
hard import stuff. | @@ -665,7 +665,7 @@ class ExpressionImportModuleHard(
return (
new_node,
"new_raise",
- "Hard module %s attribute missing %r pre-computed."
+ "Hard module '%s' attribute missing '%s* pre-computed."
% (self.value_name, attribute_name),
)
else:
@@ -691,14 +691,14 @@ class ExpressionImportModuleHard(
return (
result,
"new_expression",
- "Attribute lookup %s of hard module %r becomes hard module name import."
+ "Attribute lookup '%s* of hard module *%s* becomes hard module name import."
% (self.value_name, attribute_name),
)
trace_collection.onExceptionRaiseExit(ImportError)
onMissingTrust(
- "Hard module %s attribute %r missing trust config for existing value.",
+ "Hard module '%s' attribute '%s' missing trust config for existing value.",
lookup_node.getSourceReference(),
self.value_name,
attribute_name,
@@ -714,7 +714,7 @@ class ExpressionImportModuleHard(
user_provided=True,
),
"new_constant",
- "Hard module %s imported %r pre-computed to constant value."
+ "Hard module '%s' imported '%s' pre-computed to constant value."
% (self.value_name, attribute_name),
)
elif trust is trust_node:
@@ -725,7 +725,7 @@ class ExpressionImportModuleHard(
return (
result,
"new_expression",
- "Attribute lookup %s of hard module %r becomes node %r."
+ "Attribute lookup '%s' of hard module '%s' becomes node '%s'."
% (self.value_name, attribute_name, result.kind),
)
else:
@@ -739,7 +739,7 @@ class ExpressionImportModuleHard(
return (
result,
"new_expression",
- "Attribute lookup %s of hard module %r becomes hard module name import."
+ "Attribute lookup '%s' of hard module '%s' becomes hard module name import."
% (self.value_name, attribute_name),
)
|
Convert positions to list
h5py returns tuples, so enforcing a list here. | @@ -68,7 +68,7 @@ class MdaInputExtractor(InputExtractor):
geom=np.zeros((M,nd))
for ii in range(len(channel_ids)):
info0=input_extractor.getChannelInfo(channel_ids[ii])
- geom[ii,:]=info0['location']
+ geom[ii,:]=list(info0['location'])
if not os.path.exists(output_dirname):
os.mkdir(output_dirname)
mdaio.writemda32(raw,output_dirname+'/raw.mda')
|
MAINT: Fix unused IgnoreException in nose_tools/utils.py
It did not have `pass` in the definition. It appears unused, so should
be removed at some point. | @@ -1849,6 +1849,7 @@ def _gen_alignment_data(dtype=float32, type='binary', max_size=24):
class IgnoreException(Exception):
"Ignoring this exception due to disabled feature"
+ pass
@contextlib.contextmanager
|
[docs] Clarify that import supports a list
Ref - documentation is misleading, it's now possible to
have a rule import multiple additional configuration files, ie:
```yaml
# my-rule.yml
name: my-rule
import:
- $HOME/conf.d/base.yml
- $HOME/conf.d/slack-alerter.yml
``` | @@ -315,7 +315,8 @@ import
``import``: If specified includes all the settings from this yaml file. This allows common config options to be shared. Note that imported files that aren't
complete rules should not have a ``.yml`` or ``.yaml`` suffix so that ElastAlert 2 doesn't treat them as rules. Filters in imported files are merged (ANDed)
-with any filters in the rule. You can only have one import per rule, though the imported file can import another file or multiple files, recursively.
+with any filters in the rule. You can have one import per rule (value is string) or several imports per rule (value is a list of strings).
+The imported file can import another file or multiple files, recursively.
The filename can be an absolute path or relative to the rules directory. (Optional, string or array of strings, no default)
use_ssl
|
readme: fix windows binary build step
fixes: | @@ -51,9 +51,10 @@ If you want to build your own Windows binaries:
1. Install [cx_freeze](https://anthony-tuininga.github.io/cx_Freeze/)
3. Follow the steps listed under [From source](#from-source)
4. cd path\to\svtplay-dl && mkdir build
-5. `python setversion.py` # this will change the version string to a more useful one
-5. `python %PYTHON%\\Scripts\\cxfreeze --include-modules=cffi,queue,idna.idnadata --target-dir=build bin/svtplay-dl`
-6. Find binary in build folder. you need `svtplay-dl.exe` and `pythonXX.dll` from that folder to run `svtplay-dl.exe`
+5. `pip install -e .`
+6. `python setversion.py` # this will change the version string to a more useful one
+7. `python %PYTHON%\\Scripts\\cxfreeze --include-modules=cffi,queue,idna.idnadata --target-dir=build bin/svtplay-dl`
+8. Find binary in build folder. you need `svtplay-dl.exe` and `pythonXX.dll` from that folder to run `svtplay-dl.exe`
### Other systems with python
|
Makefile: fix sh issue
`[[` is a bash builtin, not available in sh. | @@ -231,7 +231,7 @@ define update_pin
$(eval new_ver := $(call get_remote_version,$(2),$(3)))
$(DOCKER_RUN) -i $(CALICO_BUILD) sh -c '\
- if [[ ! -z "$(new_ver)" ]]; then \
+ if [ ! -z "$(new_ver)" ]; then \
go get $(1)@$(new_ver); \
go mod download; \
fi'
@@ -244,7 +244,7 @@ define update_replace_pin
$(eval new_ver := $(call get_remote_version,$(2),$(3)))
$(DOCKER_RUN) -i $(CALICO_BUILD) sh -c '\
- if [[ ! -z "$(new_ver)" ]]; then \
+ if [ ! -z "$(new_ver)" ]; then \
go mod edit -replace $(1)=$(2)@$(new_ver); \
go mod download; \
fi'
|
Fix dependency-installation bug in Java MLflow model scoring server
Fixed a dependency-installation bug that prevented running the Java MLflow model scoring server against a docker image built via mlflow sagemaker build-and-push-container. | @@ -71,7 +71,10 @@ def _get_mlflow_install_step(dockerfile_context_dir, mlflow_home):
"RUN mvn "
" --batch-mode dependency:copy"
" -Dartifact=org.mlflow:mlflow-scoring:{version}:jar"
- " -DoutputDirectory=/opt/java/jars"
+ " -DoutputDirectory=/opt/java/jars\n"
+ "RUN cp /opt/java/mlflow-scoring-{version}.pom /opt/java/pom.xml\n"
+ "RUN cd /opt/java && mvn "
+ "--batch-mode dependency:copy-dependencies -DoutputDirectory=/opt/java/jars\n"
).format(version=mlflow.version.VERSION)
|
navbar_alerts: Change HTML ordering for obvious tab order.
Fixes | <div id="panels">
<div data-process="notifications" class="alert alert-info">
- <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
<div data-step="1">
{% trans %}Zulip needs your permission to
<a class="request-desktop-notifications alert-link" role="button" tabindex=0>enable desktop notifications.</a>
<a class="alert-link reject-notifications" role="button" tabindex=0>{{ _('Never ask on this computer') }}</a>
</span>
</div>
+ <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
</div>
<div data-process="email-server" class="alert alert-info red">
- <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
<div data-step="1">
{% trans %}Zulip needs to send email to confirm users' addresses and send notifications.{% endtrans %}
<a class="alert-link" href="https://zulip.readthedocs.io/en/latest/production/email.html" target="_blank" rel="noopener noreferrer">
{% trans %}See how to configure email.{% endtrans %}
</a>
</div>
+ <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
</div>
<div data-process="profile-incomplete" class="alert alert-info">
- <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
<div data-step="2">
{% trans %}
Complete the
</a> to brand and explain the purpose of this Zulip organization.
{% endtrans %}
</div>
+ <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
</div>
<div data-process="insecure-desktop-app" class="alert alert-info red">
- <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
<div data-step="1">
{% trans apps_page_link="https://zulip.com/apps" %}
You are using an old version of the Zulip desktop app with known security bugs.
</a>
{% endtrans %}
</div>
+ <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
</div>
<div data-process="bankruptcy" class="alert alert-info brankruptcy">
- <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
<div data-step="1">
{% trans count=page_params.unread_msgs.count %}
Welcome back! You have <span class="bankruptcy_unread_count">{{ count }}</span> unread messages. Do you want to mark them all as read?
</span>
{% endtrans %}
</div>
+ <span class="close" data-dismiss="alert" aria-label="{{ _('Close') }}" role="button" tabindex=0>×</span>
</div>
<div class="alert alert-info bankruptcy-loader" style="display: none;">
{% trans %}
|
[swarming] comment out default reboot in on_bot_idle added in
We don't want this to happen by default. It interferes with smoke tests. | @@ -305,10 +305,10 @@ def on_bot_idle(bot, since_last_action):
bot has been idle.
"""
# Don't try this if running inside docker.
- if sys.platform != 'linux2' or not platforms.linux.get_inside_docker():
- uptime = os_utilities.get_uptime()
- if uptime > 12*60*60 * (1. + bot.get_pseudo_rand(0.2)):
- bot.host_reboot('Periodic reboot after %ds' % uptime)
+ #if sys.platform != 'linux2' or not platforms.linux.get_inside_docker():
+ # uptime = os_utilities.get_uptime()
+ # if uptime > 12*60*60 * (1. + bot.get_pseudo_rand(0.2)):
+ # bot.host_reboot('Periodic reboot after %ds' % uptime)
### Setup
|
Fix dquery.
add `long` type as python int
add `[NOT] LIKE` predicate
add `[NOT] IN` predicate | @@ -156,6 +156,7 @@ class Type(Token):
'string':STR,
'int':INT,
'float':FLOAT,
+ 'long':INT,
}
reverse_mapping = dict(reversed(i) for i in mappping.items())
@classmethod
@@ -198,11 +199,11 @@ class SpecialChar(Token):
KEYWORDS = r'select|from|where|like|having|order|not|and|or|group|by|desc|asc|'\
r'as|limit|in|sum|count|avg|max|min|adcount|outfile|into|drop|show|create|'\
- r'table|if|exists|all|distinct|tables|inner|left|right|outer|join|using'
+ r'table|if|exists|all|distinct|tables|inner|left|right|outer|join|using|long'
lexer = re.Scanner([
(r'\b(' + KEYWORDS + r')\b', lambda _,t: Keyword(t)),
- (r'\b(int|float|string|char|varchar)\b', lambda _,t: Type(t)),
+ (r'\b(int|long|float|string|char|varchar)\b', lambda _,t: Type(t)),
(r'`(\\`|\\\\|[^\\`])+`', lambda _,t:Identity(t[1:-1])),
(r'\b([_a-z][.\w]*)\b', lambda _,t:Identity(t)),
(r'(\d+(\.\d*)?|\.\d+)(e[+-]?\d+)?', lambda _,t:Number(t)),
@@ -499,6 +500,15 @@ class Expr(object):
def not_(self):
return combine('NOT', operator.not_, self)
+ def like(self, pattern):
+ return combine('LIKE', lambda x,y: re.match(y, x) is not None, self, pattern)
+
+ def in_(self, values):
+ def _in(x, *y):
+ return any((operator.eq(x, i) for i in y))
+ return combine('IN', _in, self, *values)
+
+
class CombineExpr(Expr):
def __init__(self, rep, fun, *args):
self.fun = fun
@@ -539,6 +549,7 @@ class ColumnRefExpr(Expr):
def __repr__(self):
return self.value
+
class NativeExpr(Expr):
def __init__(self, x):
self.value = unquote(x, '"').replace('\n', ' ')
@@ -551,6 +562,7 @@ class NativeExpr(Expr):
def __repr__(self):
return '$"%s"' % self.value
+
class SetExpr(Expr):
def __call__(self, schema):
mapper = schema.get_mapper(self)
@@ -929,7 +941,6 @@ def comparison_predicate():
)
)
-
def like_predicate():
return bind(string_value_expression(), lambda x:
bind(seq(plus(Keyword.sat('NOT'), result(None)),Keyword.sat('LIKE')), lambda n__:
@@ -1541,7 +1552,7 @@ Use Python expression:
if __name__ == '__main__':
from dpark import optParser
- optParser.set_default('master', 'flet6')
+ optParser.set_default('master', 'mesos')
optParser.add_option('-e', '--query', type='string', default='',
help='execute the SQL qeury then exit')
optParser.add_option('-s', '--script', type='string', default='',
|
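Stripped of the parser machinery, the two new predicates reduce to a regex match for `LIKE` and an equality scan for `IN`, as the `combine()` callables above show. A tiny standalone illustration; the function names and sample values exist only for this sketch:

```python
import operator
import re

def like(value, pattern):
    # Mirrors the LIKE callable: a regex match against the value.
    return re.match(pattern, value) is not None

def in_(value, *candidates):
    # Mirrors the IN callable: equality against any of the candidates.
    return any(operator.eq(value, c) for c in candidates)

print(like("dpark", "dp.*"))   # True
print(in_(3, 1, 2, 3))         # True
print(in_("x", "a", "b"))      # False
```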
Fix, python flag "no_warnings" was not working on all platforms.
* According to the docs, this function must be called before
the interpreter is initialized. It seems it had no effect
on macOS then, although it worked on Linux. | @@ -508,6 +508,21 @@ int main(int argc, char **argv) {
char const *old_env = getenv("PYTHONHASHSEED");
setenv("PYTHONHASHSEED", "0", 1);
#endif
+
+ /* Disable CPython warnings if requested to. */
+#if NO_PYTHON_WARNINGS
+ {
+#if PYTHON_VERSION >= 0x300
+ wchar_t ignore[] = L"ignore";
+#else
+ char ignore[] = "ignore";
+#endif
+
+ PySys_ResetWarnOptions();
+ PySys_AddWarnOption(ignore);
+ }
+#endif
+
/* Initialize the embedded CPython interpreter. */
NUITKA_PRINT_TIMING("main(): Calling Py_Initialize to initialize interpreter.");
Py_Initialize();
@@ -717,28 +732,20 @@ int main(int argc, char **argv) {
/* Enable meta path based loader. */
setupMetaPathBasedLoader();
+ /* Initialize warnings module. */
_PyWarnings_Init();
- /* Disable CPython warnings if requested to. */
-#if NO_PYTHON_WARNINGS
- {
-#if PYTHON_VERSION >= 0x300
- wchar_t ignore[] = L"ignore";
-#else
- char ignore[] = "ignore";
-#endif
-
- PySys_AddWarnOption(ignore);
-
-#if PYTHON_VERSION >= 0x342 && defined(_NUITKA_FULL_COMPAT)
+#if NO_PYTHON_WARNINGS && PYTHON_VERSION >= 0x342 && defined(_NUITKA_FULL_COMPAT)
// For full compatibility bump the warnings registry version,
// otherwise modules "__warningsregistry__" will mismatch.
PyObject *warnings_module = PyImport_ImportModule("warnings");
PyObject *meth = PyObject_GetAttrString(warnings_module, "_filters_mutated");
+ CALL_FUNCTION_NO_ARGS(meth);
+#if PYTHON_VERSION < 0x380
+ // Two times, so "__warningregistry__" version matches.
CALL_FUNCTION_NO_ARGS(meth);
#endif
- }
#endif
#if PYTHON_VERSION >= 0x300
|
Fix syntax problem in Action doc [ci skip]
A recent change introduced an XML problem which prevents the
docs from validating or building; transforming the text so it builds now. | @@ -65,9 +65,9 @@ is set to <literal>2</literal> or higher,
then that number of entries in the command
string will be scanned for relative or absolute
paths. The count will reset after any
-<literal>&&</literal> entries are found.
+<literal>&&</literal> entries are found.
The first command in the action string and
-the first after any <literal>&&</literal>
+the first after any <literal>&&</literal>
entries will be found by searching the
<varname>PATH</varname> variable in the
<varname>ENV</varname> environment used to
@@ -88,7 +88,7 @@ with that construction environment.
not be added to the targets built with that
construction environment. The first command
in the action string and the first after any
-<literal>&&</literal> entries will be found
+<literal>&&</literal> entries will be found
by searching the <varname>PATH</varname>
variable in the <varname>ENV</varname>
environment used to execute the command.
|
remove cache from gitlab ci
gitlab ci's cache servers are currently broken | @@ -13,12 +13,12 @@ stages:
.basetest: &testbaseanchor
stage: basic-tests
- cache:
- key: tavern-project-cache
- paths:
- - .cache/pip
- - .tox
- policy: pull
+ # cache:
+ # key: tavern-project-cache
+ # paths:
+ # - .cache/pip
+ # - .tox
+ # policy: pull
before_script:
- pip install tox
@@ -68,11 +68,11 @@ py36:
# Make this job push the cache. It doesn't hugely matter if the cache is out
# of date, so just do it in one job.
- cache:
- key: tavern-project-cache
- paths:
- - .cache/pip
- - .tox
+ # cache:
+ # key: tavern-project-cache
+ # paths:
+ # - .cache/pip
+ # - .tox
py27lint:
<<: *basictest
|
fix soft_nms_cpu call in BoxWithNMSLimit
Summary:
Pull Request resolved:
introduces a bug of misaligned arguments. | @@ -99,6 +99,7 @@ bool BoxWithNMSLimitOp<CPUContext>::RunOnDevice() {
nms_thres_,
soft_nms_min_score_thres_,
soft_nms_method_,
+ -1, /* topN */
legacy_plus_one_);
} else {
std::sort(
|
fix(report): correction of logo present in the patient discharge summary
fix(report): updated images in patient discharge report | </head>
<body class="max-w-5xl mx-10 mt-4 text-sm">
<div class="bg-white overflow-hidden sm:rounded-lg m-6">
- <div class="mx-auto flex justify-center">
- <img class='h-28' src="https://cdn.coronasafe.network/kgovtlogo.png"/>
- </div>
</div>
<div class="mt-2 text-center w-full font-bold text-3xl">
{{patient.facility.name}}
{% endif %}
</div>
</div>
- <img class='h-12' src="https://cdn.coronasafe.network/coronaSafeLogo.png"/>
+ <img class='h-12' src="https://cdn.coronasafe.network/black-logo.svg"/>
</div>
<div class="px-4 py-5 sm:px-6">
<dl class="">
|
Submitting with the run-appinspect step; however,
we are currently missing an appinspect
username and password stored as secrets, so
we know it will fail. Testing whether it will fail on
validation or during the test run.
- uses: actions/download-artifact@v2
with:
name: content-pack-build
- path: build/
#This explicitly uses a different version of python (2.7)
- uses: actions/setup-python@v2
@@ -214,49 +213,50 @@ jobs:
build/DA-ESS-ContentUpdate-latest.tar.gz
build/DA-ESS_AmazonWebServices_Content-latest.tar.gz
+ run-appinspect:
+ runs-on: ubuntu-latest
+ steps:
- # - name: persist_to_workspace
- # uses: actions/upload-artifact@v2
- # with:
- # name: content_tars
- # path: |
- # ~/build/DA-ESS-ContentUpdate-latest.tar.gz
- # ~/build/DA-ESS_AmazonWebServices_Content-latest.tar.gz
-
-
- # run-appinspect:
- # runs-on: ubuntu-latest
- # steps:
- # - name: Install System Packages
- # run: |
- # sudo apt update -qq
- # sudo apt install jq -qq
-
- # - name: Checkout Repo
- # uses: actions/checkout@v2
-
- # - name: Submit ESCU Package to AppInspect API
- # env:
- # APPINSPECT_USERNAME: ${{ secrets.AppInspectUsername }}
- # APPINSPECT_PASSWORD: ${{ secrets.AppInspectPassword }}
- # run: |
- # cd security-content/bin
- # #Enclose in quotes in case there are any special characters in the username/password
- # #Better not to pass these arguments on the command line, if possible
- # ./appinspect.sh ~/ DA-ESS-ContentUpdate-latest.tar.gz "$APPINSPECT_USERNAME" "$APPINSPECT_PASSWORD"
- # - name: Submit SAAWS Package to AppInspect API
- # env:
- # APPINSPECT_USERNAME: ${{ secrets.AppInspectUsername }}
- # APPINSPECT_PASSWORD: ${{ secrets.AppInspectPassword }}
- # run: |
- # cd security-content/bin
- # ./appinspect.sh ~/ DA-ESS_AmazonWebServices_Content-latest.tar.gz $APPINSPECT_USERNAME $APPINSPECT_PASSWORD
- # - name: store_artifacts
- # uses: actions/upload-artifacts@v2
- # with:
- # name: report/
- # path: |
- # ~/report
+ - name: Checkout Repo
+ uses: actions/checkout@v2
+
+ #Download the artifacts we want to check
+ - name: Restore Content-Pack Artifacts for AppInspect testing
+ uses: actions/download-artifact@v2
+ with:
+ name: content-latest
+
+
+
+ - name: Install System Packages
+ run: |
+ sudo apt update -qq
+ sudo apt install jq -qq
+
+
+
+ - name: Submit ESCU Package to AppInspect API
+ env:
+ APPINSPECT_USERNAME: ${{ secrets.AppInspectUsername }}
+ APPINSPECT_PASSWORD: ${{ secrets.AppInspectPassword }}
+ run: |
+ cd bin
+ #Enclose in quotes in case there are any special characters in the username/password
+ #Better not to pass these arguments on the command line, if possible
+ ./appinspect.sh ../ DA-ESS-ContentUpdate-latest.tar.gz "$APPINSPECT_USERNAME" "$APPINSPECT_PASSWORD"
+ - name: Submit SAAWS Package to AppInspect API
+ env:
+ APPINSPECT_USERNAME: ${{ secrets.AppInspectUsername }}
+ APPINSPECT_PASSWORD: ${{ secrets.AppInspectPassword }}
+ run: |
+ cd bin
+ ./appinspect.sh ../ DA-ESS_AmazonWebServices_Content-latest.tar.gz "$APPINSPECT_USERNAME" "$APPINSPECT_PASSWORD"
+ - name: store_artifacts
+ uses: actions/upload-artifacts@v2
+ with:
+ name: report
+ path: |
+ report/
|
Update installation guide
The link to the Red Hat Certificate System release notes is out of date; replace it with
the latest link. Also specify the absolute path of barbican.conf to follow the convention.
Simple Crypto Plugin
^^^^^^^^^^^^^^^^^^^^
-This crypto plugin is configured by default in barbican.conf. This plugin
+This crypto plugin is configured by default in ``/etc/barbican/barbican.conf``. This plugin
is completely insecure and is only suitable for development testing.
.. warning::
@@ -42,10 +42,10 @@ is completely insecure and is only suitable for development testing.
THIS PLUGIN IS NOT SUITABLE FOR PRODUCTION DEPLOYMENTS.
This plugin uses single symmetric key (kek - or 'key encryption key')
-- which is stored in plain text in the ``barbican.conf`` file to encrypt
+- which is stored in plain text in the ``/etc/barbican/barbican.conf`` file to encrypt
and decrypt all secrets.
-The configuration for this plugin in ``barbican.conf`` is as follows:
+The configuration for this plugin in ``/etc/barbican/barbican.conf`` is as follows:
.. code-block:: ini
@@ -72,7 +72,7 @@ using the PKCS#11 protocol.
Secrets are encrypted (and decrypted on retrieval) by a project specific
Key Encryption Key (KEK), which resides in the HSM.
-The configuration for this plugin in ``barbican.conf`` with settings shown for
+The configuration for this plugin in ``/etc/barbican/barbican.conf`` with settings shown for
use with a SafeNet HSM is as follows:
.. code-block:: ini
@@ -115,7 +115,7 @@ secret's location for later retrieval.
The plugin can be configured to authenticate to the KMIP device using either
a username and password, or using a client certificate.
-The configuration for this plugin in ``barbican.conf`` is as follows:
+The configuration for this plugin in ``/etc/barbican/barbican.conf`` is as follows:
.. code-block:: ini
@@ -135,7 +135,7 @@ The configuration for this plugin in ``barbican.conf`` is as follows:
Dogtag Plugin
-------------
-Dogtag is the upstream project corresponding to the Red Hat Certificate System.
+Dogtag is the upstream project corresponding to the Red Hat Certificate System,
a robust, full-featured PKI solution that contains a Certificate Manager (CA)
and a Key Recovery Authority (KRA) which is used to securely store secrets.
@@ -148,7 +148,7 @@ those deployments that do not require or cannot afford an HSM. This is the only
current plugin to provide this option.
The KRA communicates with HSMs using PKCS#11. For a list of certified HSMs,
-see the latest `release notes <https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System/9/html/Release_Notes/Release_Notes-Deployment_Notes.html>`_. Dogtag and the KRA meet all the relevant Common Criteria and FIPS specifications.
+see the latest `release notes <https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System/9/html/Release_Notes/>`_. Dogtag and the KRA meet all the relevant Common Criteria and FIPS specifications.
The KRA is a component of FreeIPA. Therefore, it is possible to configure the plugin
with a FreeIPA server. More detailed instructions on how to set up Barbican with FreeIPA
@@ -158,7 +158,7 @@ The plugin communicates with the KRA using a client certificate for a trusted KR
That certificate is stored in an NSS database as well as a PEM file as seen in the
configuration below.
-The configuration for this plugin in ``barbican.conf`` is as follows:
+The configuration for this plugin in ``/etc/barbican/barbican.conf`` is as follows:
.. code-block:: ini
|
Enables Ansible logs OCP multimaster plan
When installing OCP using the multimaster plan, Ansible logs are lost.
Enable them in /etc/ansible/ansible.cfg so that, in case of failure during the installation,
there is a log to check.
By default they are written to /var/log/ansible.log.
ssh-keyscan -H node01.karmalabs.local >> ~/.ssh/known_hosts
ssh-keyscan -H node02.karmalabs.local >> ~/.ssh/known_hosts
export IP=`ip a l eth0 | grep 'inet ' | cut -d' ' -f6 | awk -F'/' '{ print $1}'`
+sed -i "s/#log_path/log_path/" /etc/ansible/ansible.cfg
sed -i "s/openshift_master_default_subdomain=.*/openshift_master_default_subdomain=$IP.xip.io/" /root/hosts
ansible-playbook -i /root/hosts /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
for i in 1 2 3 ; do
|
Add description to policies in certificates.py
blueprint: policy-docs | @@ -25,12 +25,26 @@ certificates_policies = [
policy.RuleDefault(
name=POLICY_ROOT % 'discoverable',
check_str=base.RULE_ANY),
- policy.RuleDefault(
- name=POLICY_ROOT % 'create',
- check_str=base.RULE_ADMIN_OR_OWNER),
- policy.RuleDefault(
- name=POLICY_ROOT % 'show',
- check_str=base.RULE_ADMIN_OR_OWNER),
+ base.create_rule_default(
+ POLICY_ROOT % 'create',
+ base.RULE_ADMIN_OR_OWNER,
+ "Create a root certificate. This API is deprecated.",
+ [
+ {
+ 'method': 'POST',
+ 'path': '/os-certificates'
+ }
+ ]),
+ base.create_rule_default(
+ POLICY_ROOT % 'show',
+ base.RULE_ADMIN_OR_OWNER,
+ "Show details for a root certificate. This API is deprecated.",
+ [
+ {
+ 'method': 'GET',
+ 'path': '/os-certificates/root'
+ }
+ ])
]
|
tests: when running test set USE_RHCS=true to install set ceph_rhcs=true
When invoking the tests, if USE_RHCS=true is set, then all tests will be
run with ceph_rhcs=True.
[purge]
commands=
cp {toxinidir}/infrastructure-playbooks/purge-cluster.yml {toxinidir}/purge-cluster.yml
- ansible-playbook -vv -i {changedir}/hosts {toxinidir}/purge-cluster.yml --extra-vars="ireallymeanit=yes fetch_directory={changedir}/fetch"
+ ansible-playbook -vv -i {changedir}/hosts {toxinidir}/purge-cluster.yml \
+ --extra-vars="ireallymeanit=yes fetch_directory={changedir}/fetch ceph_rhcs={env:USE_CEPH_RHCS:false}"
# set up the cluster again
- ansible-playbook -vv -i {changedir}/hosts {toxinidir}/site.yml.sample --extra-vars="fetch_directory={changedir}/fetch"
+ ansible-playbook -vv -i {changedir}/hosts {toxinidir}/site.yml.sample \
+ --extra-vars="fetch_directory={changedir}/fetch ceph_rhcs={env:USE_CEPH_RHCS:false}"
# test that the cluster can be redeployed in a healthy state
testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts {toxinidir}/tests/functional/tests
@@ -21,7 +23,8 @@ commands=
[update]
commands=
cp {toxinidir}/infrastructure-playbooks/rolling_update.yml {toxinidir}/rolling_update.yml
- ansible-playbook -vv -i {changedir}/hosts {toxinidir}/rolling_update.yml --extra-vars="ceph_stable_release=kraken ireallymeanit=yes fetch_directory={changedir}/fetch"
+ ansible-playbook -vv -i {changedir}/hosts {toxinidir}/rolling_update.yml \
+ --extra-vars="ceph_stable_release=kraken ireallymeanit=yes fetch_directory={changedir}/fetch ceph_rhcs={env:USE_CEPH_RHCS:false}"
testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts {toxinidir}/tests/functional/tests
@@ -68,7 +71,8 @@ commands=
vagrant up --no-provision {posargs:--provider=virtualbox}
bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}
- ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars="fetch_directory={changedir}/fetch"
+ ansible-playbook -vv -i {changedir}/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} \
+ --extra-vars="fetch_directory={changedir}/fetch ceph_rhcs={env:USE_CEPH_RHCS:false}"
ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/setup.yml
testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts {toxinidir}/tests/functional/tests
|
doc: give more attention to Catalina issues doc
It's easy to miss the Catalina issues doc when reading the readme. Since
it can be a common issue among developers, it makes sense to give more
attention to it in the readme.
PR-URL: | @@ -37,10 +37,11 @@ Depending on your operating system, you will need to install:
### On macOS
+**ATTENTION**: If your Mac has been _upgraded_ to macOS Catalina (10.15), please read [macOS_Catalina.md](macOS_Catalina.md).
+
* Python v2.7, v3.5, v3.6, v3.7, or v3.8
* [Xcode](https://developer.apple.com/xcode/download/)
* You also need to install the `XCode Command Line Tools` by running `xcode-select --install`. Alternatively, if you already have the full Xcode installed, you can find them under the menu `Xcode -> Open Developer Tool -> More Developer Tools...`. This step will install `clang`, `clang++`, and `make`.
- * If your Mac has been _upgraded_ to macOS Catalina (10.15), please read [macOS_Catalina.md](macOS_Catalina.md).
### On Windows
|
TST: removed unknown keyword unit test
Since the nan_policy keyword is now a parameter, we don't need to test for unknown keyword argument input.
assert_raises(ValueError, stats.circstd, x, high=360,
nan_policy='foobar')
- def test_bad_keyword(self):
- x = [355, 5, 2, 359, 10, 350, np.nan]
- assert_raises(TypeError, stats.circmean, x, high=360, foo="foo")
- assert_raises(TypeError, stats.circvar, x, high=360, foo="foo")
- assert_raises(TypeError, stats.circstd, x, high=360, foo="foo")
-
def test_circmean_scalar(self):
x = 1.
M1 = x
|
Changed syntax from v2 to v3
client-certs = v2 syntax --> --set client_certs=value = v3 syntax
cadir = v2 syntax --> --set cadir=value = v3 syntax | @@ -143,14 +143,14 @@ mitmproxy --cert *.example.com=cert.pem
By default, mitmproxy will use `~/.mitmproxy/mitmproxy-ca.pem` as the
certificate authority to generate certificates for all domains for which
no custom certificate is provided (see above). You can use your own
-certificate authority by passing the `--cadir DIRECTORY` option to
+certificate authority by passing the `--set cadir=DIRECTORY` option to
mitmproxy. Mitmproxy will then look for `mitmproxy-ca.pem` in the
specified directory. If no such file exists, it will be generated
automatically.
## Using a client side certificate
-You can use a client certificate by passing the `--client-certs DIRECTORY|FILE`
+You can use a client certificate by passing the `--set client_certs=DIRECTORY|FILE`
option to mitmproxy. Using a directory allows certs to be selected based on
hostname, while using a filename allows a single specific certificate to be used
for all SSL connections. Certificate files must be in the PEM format and should
@@ -158,7 +158,7 @@ contain both the unencrypted private key and the certificate.
### Multiple client certificates
-You can specify a directory to `--client-certs`, in which case the matching
+You can specify a directory to `--set client_certs=DIRECTORY`, in which case the matching
certificate is looked up by filename. So, if you visit example.org, mitmproxy
looks for a file named `example.org.pem` in the specified directory and uses
this as the client cert.
|
Add filter "user_id" for cluster receiver list
This patch adds "user_id" to cluster receiver's query map,
so that user can get the required result when doing cluster
receiver list.
Partial-Bug: | @@ -27,7 +27,8 @@ class Receiver(resource.Resource):
allow_delete = True
_query_mapping = resource.QueryParameters(
- 'name', 'type', 'cluster_id', 'action', 'sort', 'global_project')
+ 'name', 'type', 'cluster_id', 'action', 'sort', 'global_project',
+ user_id='user')
# Properties
#: The name of the receiver.
|
[IMPR] Check for missing generator after setup() call
Bot.setup may create the generator.
Therefore check for it after the setup()
call, before the loop.
@raise AssertionError: "page" is not a pywikibot.page.BasePage object
"""
self._start_ts = pywikibot.Timestamp.now()
+ self.setup()
+
if not hasattr(self, 'generator'):
raise NotImplementedError('Variable %s.generator not set.'
% self.__class__.__name__)
-
if PY2:
# Python 2 does not clear previous exceptions and method `exit`
# relies on sys.exc_info returning exceptions occurring in `run`.
sys.exc_clear()
- self.setup()
try:
for item in self.generator:
# preprocessing of the page
|
[internal] fix non-empty __init__.py
Not sure how I ended up copying the contents of a register.py into this `__init__.py`. Fix by clearing out the file.
[ci skip-rust]
[ci skip-build-wheels] | -# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
-# Licensed under the Apache License, Version 2.0 (see LICENSE).
-
-from pants.backend.go.lint.vet import skip_field
-from pants.backend.go.lint.vet.rules import rules as go_vet_rules
-
-
-def rules():
- return (
- *go_vet_rules(),
- *skip_field.rules(),
- )
|
fix(stock_all_pb.py): fix stock_a_all_pb interface
fix stock_a_all_pb interface | @@ -336,7 +336,6 @@ def stock_a_all_pb() -> pd.DataFrame:
temp_df = pd.DataFrame(data_json["data"])
temp_df['date'] = pd.to_datetime(
temp_df["date"], unit="ms", utc=True).dt.tz_convert("Asia/Shanghai").dt.date
- del temp_df['marketId']
del temp_df['weightingAveragePB']
return temp_df
|
Fix Shellcheck SC2064: Use single quotes on traps
Use single quotes, otherwise this expands now rather than when signalled. | @@ -57,7 +57,7 @@ case "$DEB_BUILD_OPTIONS" in
# Copy tests to a temporary directory so that we can put them on the
# PYTHONPATH without putting the uninstalled synapse on the pythonpath.
tmpdir=`mktemp -d`
- trap "rm -r $tmpdir" EXIT
+ trap 'rm -r $tmpdir' EXIT
cp -r tests "$tmpdir"
|
Fix comparison to None
The comparison should be made using the 'is' keyword instead of '=='.
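For context, a minimal standalone sketch (not part of the patch) of why 'is None' is safer than '== None': '==' dispatches to a class's __eq__, which can claim equality with anything, whereas 'is' checks object identity and cannot be overridden.

class AlwaysEqual:
    # Illustrative class whose __eq__ answers True for any comparison.
    def __eq__(self, other):
        return True

obj = AlwaysEqual()
print(obj == None)   # True -- __eq__ hijacks the comparison
print(obj is None)   # False -- the identity check cannot be fooled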
pip.main(['install', '--upgrade', '-r', requirements])
-if which('unzip') == None:
+if which('unzip') is None:
print('The following executables are needed and were not found: unzip')
print('Downloading datasets (this might take several minutes depending on your internet connection)')
|
issue426-timeout-no-error
settings.py add boto3 timeouts for aws info lookups | @@ -7,6 +7,7 @@ import base64
import os
import boto3
+import botocore.config
import logging
import re
@@ -181,23 +182,27 @@ SCRUBBING_RULE_CONFIGS = [
INCLUDE_AT_MATCH = get_env_var("INCLUDE_AT_MATCH", default=None)
EXCLUDE_AT_MATCH = get_env_var("EXCLUDE_AT_MATCH", default=None)
+# Set boto3 timeout
+boto3_config = botocore.config.Config(
+ connect_timeout=5, read_timeout=5, retries={"max_attempts": 2}
+)
# DD API Key
if "DD_API_KEY_SECRET_ARN" in os.environ:
SECRET_ARN = os.environ["DD_API_KEY_SECRET_ARN"]
logger.debug(f"Fetching the Datadog API key from SecretsManager: {SECRET_ARN}")
- DD_API_KEY = boto3.client("secretsmanager").get_secret_value(SecretId=SECRET_ARN)[
- "SecretString"
- ]
+ DD_API_KEY = boto3.client("secretsmanager", config=boto3_config).get_secret_value(
+ SecretId=SECRET_ARN
+ )["SecretString"]
elif "DD_API_KEY_SSM_NAME" in os.environ:
SECRET_NAME = os.environ["DD_API_KEY_SSM_NAME"]
logger.debug(f"Fetching the Datadog API key from SSM: {SECRET_NAME}")
- DD_API_KEY = boto3.client("ssm").get_parameter(
+ DD_API_KEY = boto3.client("ssm", config=boto3_config).get_parameter(
Name=SECRET_NAME, WithDecryption=True
)["Parameter"]["Value"]
elif "DD_KMS_API_KEY" in os.environ:
ENCRYPTED = os.environ["DD_KMS_API_KEY"]
logger.debug(f"Fetching the Datadog API key from KMS: {ENCRYPTED}")
- DD_API_KEY = boto3.client("kms").decrypt(
+ DD_API_KEY = boto3.client("kms", config=boto3_config).decrypt(
CiphertextBlob=base64.b64decode(ENCRYPTED)
)["Plaintext"]
if type(DD_API_KEY) is bytes:
|
Remove broken link from resources.md
I am recommending the removal of the following link as it does not seem to be working and a quick search did not turn any results for the project.
Text in question:
## Tools
1. [cntr](https://github.com/nsgonultas/cntr): A command line day counter to track your progress easily | ## Other resources
1. [CodeNewbie - #100DaysOfCode Slack Channel](https://codenewbie.typeform.com/to/uwsWlZ)
-## Tools
-1. [cntr](https://github.com/nsgonultas/cntr): A command line day counter to track your progress easily
-
## Books (both coding and non-coding)
### Non-Coding
|
use 'ip route get' over 'ip addr' for interface check
use ip route get over ip addr for interface check | @@ -85,7 +85,7 @@ else
fi
# get name of active interface (eth0 or wlan0)
-network_active_if=$(ip addr | grep -v "lo:" | grep 'state UP' | tr -d " " | cut -d ":" -f2 | head -n 1)
+network_active_if=$(ip route get 255.255.255.255 | awk -- '{print $4}' | head -n 1)
# get network traffic
# ifconfig does not show eth0 on Armbian or in a VM - get first traffic info
|
stream_stats: Add a column representing type of stream.
This adds a column which represents whether a stream is public or
private.
Fixes | @@ -40,13 +40,18 @@ class Command(BaseCommand):
print("%10s %d public streams and" % ("(", public_count), end=' ')
print("%d private streams )" % (private_count,))
print("------------")
- print("%25s %15s %10s" % ("stream", "subscribers", "messages"))
+ print("%25s %15s %10s %12s" % ("stream", "subscribers", "messages", "type"))
for stream in streams:
+ if stream.invite_only:
+ stream_type = 'private'
+ else:
+ stream_type = 'public'
print("%25s" % (stream.name,), end=' ')
recipient = Recipient.objects.filter(type=Recipient.STREAM, type_id=stream.id)
print("%10d" % (len(Subscription.objects.filter(recipient=recipient,
active=True)),), end=' ')
num_messages = len(Message.objects.filter(recipient=recipient))
- print("%12d" % (num_messages,))
+ print("%12d" % (num_messages,), end=' ')
+ print("%15s" % (stream_type,))
print("")
|
[metricbeat] remove unused /var/lib/docker/container mount
This mount doesn't seem to be used by metricbeat, as we don't use the `add_docker_metadata` processor.
hostPath:
path: {{ .Values.hostPathRoot }}/{{ template "metricbeat.fullname" . }}-{{ .Release.Namespace }}-data
type: DirectoryOrCreate
- - name: varlibdockercontainers
- hostPath:
- path: /var/lib/docker/containers
- name: varrundockersock
hostPath:
path: /var/run/docker.sock
@@ -142,9 +139,6 @@ spec:
{{- end }}
- name: data
mountPath: /usr/share/metricbeat/data
- - name: varlibdockercontainers
- mountPath: /var/lib/docker/containers
- readOnly: true
# Necessary when using autodiscovery; avoid mounting it otherwise
# See: https://www.elastic.co/guide/en/beats/metricbeat/master/configuration-autodiscover.html
- name: varrundockersock
|
used openstack cli in magnum devstack plugin
Currently magnum CI tests (magnum/tests/contrib/post_test_hook.sh) use
python clients (nova, neutron, glance) for openstack operations.
We should start using the openstack client instead.
Closes-Bug: | @@ -43,8 +43,7 @@ function create_test_data {
# cf. https://bugs.launchpad.net/ironic/+bug/1596421
echo "alter table ironic.nodes modify instance_info LONGTEXT;" | mysql -uroot -p${MYSQL_PASSWORD} ironic
# NOTE(yuanying): Ironic instances need to connect to Internet
- neutron subnet-update private-subnet --dns-nameserver 8.8.8.8
-
+ openstack subnet set private-subnet --dns-nameserver 8.8.8.8
local container_format="ami"
else
local image_name="atomic"
@@ -56,12 +55,14 @@ function create_test_data {
# setting, it allows to perform testing on custom images.
image_name=${MAGNUM_IMAGE_NAME:-$image_name}
- export NIC_ID=$(neutron net-show public | awk '/ id /{print $4}')
+ export NIC_ID=$(openstack network show public -f value -c id)
# We need to filter by container_format to get the appropriate
# image. Specifically, when we provide kernel and ramdisk images
# we need to select the 'ami' image. Otherwise, when we have
# qcow2 images, the format is 'bare'.
+ # NOTE(prameswar): openstack cli not giving container format in
+ # command 'openstack image list' once it start supporting we have to add.
export IMAGE_ID=$(glance --os-image-api-version 1 image-list | grep $container_format | grep -i $image_name | awk '{print $2}')
#Get magnum_url
@@ -108,7 +109,7 @@ EOF
# Create a keypair for use in the functional tests.
echo_summary "Generate a key-pair"
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
- nova keypair-add --pub-key ~/.ssh/id_rsa.pub default
+ openstack keypair create --public-key ~/.ssh/id_rsa.pub default
}
function add_flavor {
@@ -122,8 +123,8 @@ function add_flavor {
# Create magnum specific flavor for use in functional tests.
echo_summary "Create a flavor"
- nova flavor-create m1.magnum 100 1024 10 1
- nova flavor-create s1.magnum 200 512 10 1
+ openstack flavor create m1.magnum --id 100 --ram 1024 --disk 10 --vcpus 1
+ openstack flavor create s1.magnum --id 200 --ram 512 --disk 10 --vcpus 1
}
if ! function_exists echo_summary; then
@@ -207,13 +208,13 @@ EXIT_CODE=$?
# Delete the keypair used in the functional test.
echo_summary "Running keypair-delete"
-nova keypair-delete default
+openstack keypair delete default
if [[ "-ironic" != "$special" ]]; then
# Delete the flavor used in the functional test.
echo_summary "Running flavor-delete"
- nova flavor-delete m1.magnum
- nova flavor-delete s1.magnum
+ openstack flavor delete m1.magnum
+ openstack flavor delete s1.magnum
fi
# Save functional testing log
|
Superstarify: use user mentions in mod logs
`format_user` isn't used in the apply mod log because it already shows
both the old and new nicknames elsewhere.
from bot.bot import Bot
from bot.converters import Expiry
from bot.utils.checks import with_role_check
+from bot.utils.messages import format_user
from bot.utils.time import format_infraction
from . import utils
from .scheduler import InfractionScheduler
@@ -181,8 +182,8 @@ class Superstarify(InfractionScheduler, Cog):
title="Member achieved superstardom",
thumbnail=member.avatar_url_as(static_format="png"),
text=textwrap.dedent(f"""
- Member: {member.mention} (`{member.id}`)
- Actor: {ctx.message.author}
+ Member: {member.mention}
+ Actor: {ctx.message.author.mention}
Expires: {expiry_str}
Old nickname: `{old_nick}`
New nickname: `{forced_nick}`
@@ -221,7 +222,7 @@ class Superstarify(InfractionScheduler, Cog):
)
return {
- "Member": f"{user.mention}(`{user.id}`)",
+ "Member": format_user(user),
"DM": "Sent" if notified else "**Failed**"
}
|
Fix a bug in import of shape keys
The shape key index is overwritten by the inner loop:
io_scene_gltf2/blender/imp/gltf2_blender_mesh.py", line 135, in set_mesh
if i >= len(prim.targets):
TypeError: '>=' not supported between instances of 'tuple' and 'int' | @@ -109,21 +109,21 @@ class BlenderMesh():
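A minimal illustration of the failure mode described above (hypothetical code, not the add-on itself): when an inner loop reuses the outer index name, the index is left holding a tuple once the inner loop finishes.

# After the inner loop, i is no longer the outer integer index.
for i in range(3):
    for i in enumerate("ab"):      # rebinds i to tuples such as (0, 'a')
        pass
    print(type(i))                 # <class 'tuple'>
    # A comparison like `i >= some_length` now raises:
    # TypeError: '>=' not supported between instances of 'tuple' and 'int'
# Renaming one of the loop variables (the patch uses `sk`) avoids the clash.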
obj.shape_key_add(name="Basis")
current_shapekey_index = 0
- for i in range(max_shape_to_create):
+ for sk in range(max_shape_to_create):
# Check if this target has POSITION
- if 'POSITION' not in prim.targets[i].keys():
- gltf.shapekeys[i] = None
+ if 'POSITION' not in prim.targets[sk].keys():
+ gltf.shapekeys[sk] = None
continue
# Check if glTF file has some extras with targetNames
shapekey_name = None
if pymesh.extras is not None:
- if 'targetNames' in pymesh.extras.keys() and i < len(pymesh.extras['targetNames']):
- shapekey_name = pymesh.extras['targetNames'][i]
+ if 'targetNames' in pymesh.extras.keys() and sk < len(pymesh.extras['targetNames']):
+ shapekey_name = pymesh.extras['targetNames'][sk]
if shapekey_name is None:
- shapekey_name = "target_" + str(i)
+ shapekey_name = "target_" + str(sk)
obj.shape_key_add(name=shapekey_name)
current_shapekey_index += 1
@@ -132,16 +132,16 @@ class BlenderMesh():
for prim in pymesh.primitives:
if prim.targets is None:
continue
- if i >= len(prim.targets):
+ if sk >= len(prim.targets):
continue
bm = bmesh.new()
bm.from_mesh(mesh)
shape_layer = bm.verts.layers.shape[current_shapekey_index]
- gltf.shapekeys[i] = current_shapekey_index
+ gltf.shapekeys[sk] = current_shapekey_index
- original_pos = BinaryData.get_data_from_accessor(gltf, prim.targets[i]['POSITION'])
+ original_pos = BinaryData.get_data_from_accessor(gltf, prim.targets[sk]['POSITION'])
tmp_indices = {}
tmp_idx = 0
|
client: report exception in archive_files_to_storage
so that we can track how often this happens. | @@ -1664,6 +1664,8 @@ def archive_files_to_storage(storage, files, blacklist, verify_push=False):
if backoff > 100:
raise
+ on_error.report('error before %d second backoff' % backoff)
+
logging.exception(
'failed to run _archive_files_to_storage_internal,'
' will retry after %d seconds', backoff)
|
gdbserver: delete FlashLoader on exception during commit.
This ensures that there is no stale data in a reused FlashLoader
instance, in case the user tries another load. (Which may encounter the
same exception that happened the first time, but at least it's not
adding another failure to the mix.) | @@ -714,9 +714,10 @@ class GDBServer(threading.Thread):
elif b'FlashDone' in ops :
# Only program if we received data.
if self.flash_loader is not None:
+ try:
# Write all buffered flash contents.
self.flash_loader.commit()
-
+ finally:
# Set flash loader to None so that on the next flash command a new
# object is used.
self.flash_loader = None
|
Raise TypeError when scalar value is passed to add_column.
Closes | @@ -1942,9 +1942,13 @@ class Table:
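A hedged usage sketch of the behaviour this change introduces (assuming astropy's Table API; the column names are illustrative): scalars still broadcast when the table has rows, but assigning a scalar column to an empty table now raises TypeError.

from astropy.table import Table

t = Table({"a": [1, 2, 3]})
t["b"] = 5              # scalar broadcast to [5, 5, 5] -- unchanged behaviour

empty = Table()
try:
    empty["b"] = 5      # no rows to broadcast the scalar over
except TypeError as exc:
    print(exc)          # prints the new error message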
col = self._convert_data_to_col(col, name=name, copy=copy,
default_name=default_name)
+ # Assigning a scalar column to an empty table should result in an
+ # exception (see #3811).
+ if col.shape == () and len(self) == 0:
+ raise TypeError("Empty table cannot have column set to scalar value.")
# Make col data shape correct for scalars. The second test is to allow
# broadcasting an N-d element to a column, e.g. t['new'] = [[1, 2]].
- if (col.shape == () or col.shape[0] == 1) and len(self) > 0:
+ elif (col.shape == () or col.shape[0] == 1) and len(self) > 0:
new_shape = (len(self),) + getattr(col, 'shape', ())[1:]
if isinstance(col, np.ndarray):
col = np.broadcast_to(col, shape=new_shape,
@@ -1953,6 +1957,7 @@ class Table:
col = col._apply(np.broadcast_to, shape=new_shape,
subok=True)
+
# broadcast_to() results in a read-only array. Apparently it only changes
# the view to look like the broadcasted array. So copy.
col = col_copy(col)
|
Update working-at-mattermost.rst
Added MatterCon 2019 video
Updated "country" to be "region/country" per things everyone must know | @@ -39,6 +39,7 @@ This gives us tremendous advantages:
Also, we have Meetups around the world to deepen and broaden our relationships and build the future of our products together:
+* Take a look at the `MatterCon 2019 (held in Punta Cana, Dominican Republic) video <https://youtu.be/pMySvCfy7Bw>`_.
* Check out `MatterCon 2018 in Lisbon, Portugal <https://www.youtube.com/watch?v=CZXaYttz3NA&feature=youtu.be>`__.
* Watch highlights of `our Community Meetup in Toronto, Canada <https://www.youtube.com/watch?v=5c9oJdbXrMU>`__.
* See `our Core Commiter Summit in Las Vegas, Nevada, USA <https://www.youtube.com/watch?v=_RpmrM-5UFY>`__.
@@ -113,7 +114,7 @@ There is no limit to how much time-off you can take when your work is meeting or
2) Holidays
~~~~~~~~~~~
-Please take off holidays relevant to your culture, resident country and preferences. When doing so, please follow the time-off process in 1) above.
+Please take off holidays relevant to your culture, region/country and preferences. When doing so, please follow the time-off process in 1) above.
We're headquartered in the US and have a large Canadian contingent, so below are holidays we're expecting people from those countries to take off:
|
add test for fori_loop index batching
fixes | @@ -312,6 +312,13 @@ class LaxControlFlowTest(jtu.JaxTestCase):
expected = (onp.array([10, 11]), onp.array([20, 20]))
self.assertAllClose(ans, expected, check_dtypes=False)
+ def testForiLoopBatchedIssue1190(self):
+ f = lambda x: lax.fori_loop(0, 4, lambda _, x: x + 1, x)
+ jaxpr = api.make_jaxpr(api.vmap(f))(np.arange(3))
+ eqn = jaxpr.eqns[0]
+ self.assertIs(eqn.primitive, lax.while_p)
+ self.assertEqual(eqn.params['cond_jaxpr'].in_avals[0].shape, ())
+
def testForiLoopBasic(self):
def body_fun(i, tot):
return lax.add(tot, i)
|
(docs) restrict PrevNext to only work within top-level sections
Test Plan: docs
Reviewers: yuhan, sashank | import { useRouter } from 'next/router';
-import Link from 'next/link';
-import { flatten } from 'utils/treeOfContents/flatten';
+import { flatten, TreeLink } from 'utils/treeOfContents/flatten';
import { useTreeOfContents } from 'hooks/useTreeOfContents';
import { VersionedLink } from './VersionedComponents';
@@ -8,23 +7,33 @@ export const PrevNext = () => {
const treeOfContents = useTreeOfContents();
const router = useRouter();
- const allLinks = flatten(Object.values(treeOfContents), true)
+ const allLinks = Object.values(treeOfContents)
+ // Flatten links within each section
+ .map((val) =>
+ flatten([val], true)
.filter(({ isExternal }) => !isExternal)
- .filter(({ ignore }) => !ignore);
+ .filter(({ ignore }) => !ignore),
+ )
+ // Only keep top-level section which contains the current path
+ .find((section) => section.some((t) => router.asPath.includes(t.path)));
+
+ if (!allLinks || allLinks.length === 0) {
+ return <nav></nav>;
+ }
const selectedItems = allLinks.filter((l) => router.asPath.includes(l.path));
const selectedItem = selectedItems[selectedItems.length - 1];
const currentIndex = allLinks.indexOf(selectedItem);
- let prev = 0;
- let next = 0;
+ let prev: number | null;
+ let next: number | null;
if (currentIndex === 0) {
- prev = allLinks.length - 1;
+ prev = null;
next = currentIndex + 1;
} else if (currentIndex === allLinks.length - 1) {
prev = currentIndex - 1;
- next = 0;
+ next = null;
} else {
prev = currentIndex - 1;
next = currentIndex + 1;
@@ -32,7 +41,7 @@ export const PrevNext = () => {
return (
<nav className="mt-8 mb-4 border-t border-gray-200 px-4 flex items-center justify-between sm:px-0">
- {allLinks[prev] && (
+ {prev !== null && allLinks[prev] && (
<div className="w-0 flex-1 flex">
<VersionedLink href={allLinks[prev].path}>
<a className="-mt-px border-t-2 border-transparent pt-4 pr-1 inline-flex items-center text-sm leading-5 font-medium text-gray-500 hover:text-gray-700 hover:border-gray-300 focus:outline-none focus:text-gray-700 focus:border-gray-400 transition ease-in-out duration-150">
@@ -53,7 +62,7 @@ export const PrevNext = () => {
</div>
)}
- {allLinks[next] && (
+ {next !== null && allLinks[next] && (
<div className="w-0 flex-1 flex justify-end">
<VersionedLink href={allLinks[next].path}>
<a className="-mt-px border-t-2 border-transparent pt-4 pl-1 inline-flex items-center text-sm leading-5 font-medium text-gray-500 hover:text-gray-700 hover:border-gray-300 focus:outline-none focus:text-gray-700 focus:border-gray-400 transition ease-in-out duration-150">
|
don't return None
Returning None means 'go to next router' | @@ -43,9 +43,12 @@ class MonolithRouter(object):
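A minimal sketch of why the boolean matters for a Django database router (illustrative names, not the project's actual module): Django treats None from allow_migrate as "no opinion" and consults the next router in DATABASE_ROUTERS, whereas an explicit False blocks the migration on that database.

class ReportsRouter:
    # Returning None defers the decision to the next configured router;
    # returning False explicitly forbids migrating this app on this database.
    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label == "icds_reports":      # illustrative app label
            return db == "icds-ucr"          # always a bool, never None
        return None                          # no opinion for other apps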
def allow_migrate(db, app_label):
+ """
+ :return: Must return a boolean value, not None.
+ """
if app_label == ICDS_REPORTS_APP:
db_alias = get_icds_ucr_db_alias()
- return db_alias and db_alias == db
+ return bool(db_alias and db_alias == db)
elif app_label == SYNCLOGS_APP:
return db == settings.SYNCLOGS_SQL_DB_ALIAS
|
test-run-dev: Delete commented-out code.
We don't disable code by commenting it out -- that leaves a mess.
We delete it. Remembering what the code was is what source control
is for.
This fixes "test-run-dev: Disable Nagios check."
from a few weeks ago. | @@ -36,38 +36,11 @@ def start_server(logfile_name: str) -> Tuple[bool, str]:
return failure, ''.join(datalog)
-def test_nagios(nagios_logfile):
- # type: (IO[str]) -> bool
- ZULIP_DIR = os.path.join(TOOLS_DIR, '..')
- API_DIR = os.path.join(ZULIP_DIR, 'api')
- os.chdir(API_DIR)
- subprocess.call(['./setup.py', 'install'])
- PUPPET_DIR = os.path.join(ZULIP_DIR, 'puppet')
- os.chdir(ZULIP_DIR)
- my_env = os.environ.copy()
- my_env['PYTHONPATH'] = ZULIP_DIR
- args = [
- os.path.join(PUPPET_DIR,
- 'zulip/files/nagios_plugins/zulip_app_frontend/check_send_receive_time'),
- '--nagios',
- '--site=http://127.0.0.1:9991',
- ]
- result = subprocess.check_output(args, env=my_env, universal_newlines=True,
- stderr=nagios_logfile)
-
- if result.startswith("OK:"):
- return True
-
- print(result)
- return False
-
-
if __name__ == '__main__':
print("Testing development server start!")
logfile_name = '/tmp/run-dev-output'
logfile = open(logfile_name, 'wb', buffering=0)
- # nagios_logfile = open('/tmp/test-nagios-output', 'w+')
args = ["{}/run-dev.py".format(TOOLS_DIR)]
STDOUT = subprocess.STDOUT
@@ -75,24 +48,18 @@ if __name__ == '__main__':
try:
failure, log = start_server(logfile_name)
- # We have moved API to a separate repo. `test_nagios` and
- # `check_send_receive_time` need to be updated.
- # failure = failure or not test_nagios(nagios_logfile)
finally:
logfile.close()
- # nagios_log = close_and_get_content(nagios_logfile)
run_dev.send_signal(signal.SIGINT)
run_dev.wait()
- if not failure and 'Traceback' in log: # + ' ' + nagios_log:
+ if not failure and 'Traceback' in log:
failure = True
if failure:
print("Development server is not working properly:")
print(log)
- # print("check_send_receive_time output:")
- # print(nagios_log)
sys.exit(1)
else:
print("Development server is working properly.")
|
encode for py3
Fixes | @@ -30,6 +30,9 @@ import signal
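A minimal standalone illustration of the underlying issue (independent of the code changed below): on Python 3, os.write expects bytes, so str data has to be encoded first.

import os
import sys

data = "echo hello\n"
fd = sys.stdout.fileno()
os.write(fd, data.encode("utf-8"))   # fine on Python 3
# os.write(fd, data)                 # TypeError: a bytes-like object is
                                     # required, not 'str'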
import select
import logging
+# Import salt libs
+from salt.ext import six
+
mswindows = (sys.platform == "win32")
try:
@@ -566,6 +569,9 @@ class Terminal(object):
try:
if self.stdin_logger:
self.stdin_logger.log(self.stdin_logger_level, data)
+ if six.PY3:
+ written = os.write(self.child_fd, data.encode(__salt_system_encoding__))
+ else:
written = os.write(self.child_fd, data)
except OSError as why:
if why.errno == errno.EPIPE: # broken pipe
|
Added link to wduco repository
I've added a direct link to wDUCO | @@ -244,7 +244,7 @@ Hashrate Calculators for AVR/ESP platforms are available in the [Useful tools br
* [@Tech1k](https://github.com/Tech1k/) - [email protected]
* **Contributors:**
- * [@ygboucherk](https://github.com/ygboucherk) (wDUCO dev)
+ * [@ygboucherk](https://github.com/ygboucherk) ([wDUCO](https://github.com/ygboucherk/wrapped-duino-coin-v2) dev)
* [@kyngs](https://github.com/kyngs)
* [@httsmvkcom](https://github.com/httsmvkcom)
* [@Nosh-Ware](https://github.com/Nosh-Ware)
|
Wait for element to be visible before getting text
The original version of these tests appeared to be racey. | @@ -77,7 +77,7 @@ class SmallListTest(SeleniumTestCase):
"&denom=total_list_size&selectedTab=summary"
)
)
- warning = self.find_by_xpath("//div[contains(@class, 'toggle')]/a")
+ warning = self.find_visible_by_xpath("//div[contains(@class, 'toggle')]/a")
self.assertIn("Remove", warning.text)
xlabels = self.find_by_xpath("//*[contains(@class, 'highcharts-xaxis-labels')]")
self.assertIn("GREEN", xlabels.text)
@@ -114,7 +114,7 @@ class AnalyseSummaryTotalsTest(SeleniumTestCase):
id="js-summary-totals", classname=classname
)
)
- element = self.find_by_xpath(selector)
+ element = self.find_visible_by_xpath(selector)
self.assertTrue(
element.is_displayed(), ".{} is not visible".format(classname)
)
|
fix: changed logger.error --> logger.exception in the hope
of fixing missing tracebacks
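A minimal illustration of the difference (standalone sketch, not the project's code): inside an except block, logger.exception logs at ERROR level and appends the active traceback automatically, while logger.error only includes it when exc_info=True is passed.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("demo")

try:
    1 / 0
except ZeroDivisionError:
    logger.error("failed, no traceback recorded")
    logger.exception("failed, traceback recorded")  # stack trace is appended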
self.logger.info(f"Running '{self.name}'", extra={"action": "run"})
def log_failure(self):
- self.logger.error(f"Task '{self.name}' failed", exc_info=True, extra={"action": "fail"})
+ self.logger.exception(f"Task '{self.name}' failed", extra={"action": "fail"})
def log_success(self):
self.logger.info(f"Task '{self.name}' succeeded", extra={"action": "success"})
|
added route_linewidth additional parameter
add to plot_graph_routes the possibility to pass route_linewidth as a list of different linewidths, one for each route to be plotted
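A hedged usage sketch of the new parameter (assumes osmnx with this patch applied; the place name, node choices and widths are arbitrary examples):

import networkx as nx
import osmnx as ox

G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
nodes = list(G.nodes)
routes = [
    nx.shortest_path(G, nodes[0], nodes[-1], weight="length"),
    nx.shortest_path(G, nodes[1], nodes[-2], weight="length"),
]
# One colour and one linewidth per route, matched by position.
fig, ax = ox.plot_graph_routes(
    G, routes, route_colors=["r", "b"], route_linewidth=[2, 6]
)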
return fig, ax
-def plot_graph_routes(G, routes, route_colors="r", **pgr_kwargs):
+def plot_graph_routes(G, routes, route_colors="r", route_linewidth=4, **pgr_kwargs):
"""
Plot several routes along a graph.
@@ -331,6 +331,8 @@ def plot_graph_routes(G, routes, route_colors="r", **pgr_kwargs):
routes as a list of lists of node IDs
route_colors : string or list
if string, 1 color for all routes. if list, the colors for each route.
+ route_linewidth : int or list
+ if int, 1 linewidth for all routes. if list, the linewidth for each route.
pgr_kwargs
keyword arguments to pass to plot_graph_route
@@ -348,14 +350,19 @@ def plot_graph_routes(G, routes, route_colors="r", **pgr_kwargs):
route_colors = [route_colors] * len(routes)
if len(routes) != len(route_colors): # pragma: no cover
raise ValueError("route_colors list must have same length as routes")
+ if isinstance(route_linewidth, int):
+ route_linewidth = [route_linewidth] * len(routes)
+ if len(routes) != len(route_linewidth): # pragma: no cover
+ raise ValueError("route_linewidth list must have same length as routes")
# plot the graph and the first route
- override = {"route", "route_color", "show", "save", "close"}
+ override = {"route", "route_color", "route_linewidth", "show", "save", "close"}
kwargs = {k: v for k, v in pgr_kwargs.items() if k not in override}
fig, ax = plot_graph_route(
G,
route=routes[0],
route_color=route_colors[0],
+ route_linewidth=route_linewidth[0]
show=False,
save=False,
close=False,
@@ -365,11 +372,12 @@ def plot_graph_routes(G, routes, route_colors="r", **pgr_kwargs):
# plot the subsequent routes on top of existing ax
override.update({"ax"})
kwargs = {k: v for k, v in pgr_kwargs.items() if k not in override}
- for route, route_color in zip(routes[1:], route_colors[1:]):
+ for route, route_color, route_linewidth in zip(routes[1:], route_colors[1:], route_linewidth[1:]):
fig, ax = plot_graph_route(
G,
route=route,
route_color=route_color,
+ route_linewidth=route_linewidth,
show=False,
save=False,
close=False,
|
[Chore] Fix ubuntu rules file for systemd
Problem: Supplied systemd service should be disabled by default.
However, for some reason, they are enabled automatically after
installation for baker, accuser and tx-node packages.
Solution: Be very explicit about systemd installation in debhelper rules. | @@ -195,8 +195,8 @@ def mk_dh_flags(package):
def gen_systemd_rules_contents(package, binaries_dir=None):
- override_dh_install_init = "override_dh_installinit:\n"
package_name = package.name.lower()
+ units = set()
for systemd_unit in package.systemd_units:
if systemd_unit.instances is None:
if systemd_unit.suffix is not None:
@@ -208,20 +208,41 @@ def gen_systemd_rules_contents(package, binaries_dir=None):
unit_name = f"{package_name}-{systemd_unit.suffix}@"
else:
unit_name = f"{package_name}@"
- override_dh_install_init += f" dh_installinit --name={unit_name}\n"
+ units.add(unit_name)
+ override_dh_install_init = "override_dh_installinit:\n" + "\n".join(
+ f" dh_installinit --name={unit_name}" for unit_name in units
+ )
+ override_dh_auto_install = (
+ "override_dh_auto_install:\n"
+ + " dh_auto_install\n"
+ + "\n".join(
+ f" dh_installsystemd --no-enable --no-start --name={unit_name} {unit_name}.service"
+ for unit_name in units
+ )
+ )
+ splice_if = lambda cond: lambda string: string if cond else ""
+ pybuild_splice = splice_if(package.buildfile == "setup.py")
rules_contents = f"""#!/usr/bin/make -f
-{"export DEB_BUILD_OPTIONS=nostrip" if binaries_dir is not None else ""}
-export DEB_CFLAGS_APPEND=-fPIC
# Disable usage of instructions from the ADX extension to avoid incompatibility
# with old CPUs, see https://gitlab.com/dannywillems/ocaml-bls12-381/-/merge_requests/135/
export BLST_PORTABLE=yes
-{f"export PYBUILD_NAME={package_name}" if package.buildfile == "setup.py" else ""}
+{splice_if(binaries_dir)("export DEB_BUILD_OPTIONS=nostrip")}
+{pybuild_splice(f"export PYBUILD_NAME={package_name}")}
+export DEB_CFLAGS_APPEND=-fPIC
%:
dh $@ {mk_dh_flags(package)}
+
+override_dh_systemd_enable:
+ dh_systemd_enable {pybuild_splice("-O--buildsystem=pybuild")} --no-enable
+
override_dh_systemd_start:
- dh_systemd_start --no-start
-{override_dh_install_init if len(package.systemd_units) > 1 else ""}"""
+ dh_systemd_start {pybuild_splice("-O--buildsystem=pybuild")} --no-start
+
+{override_dh_auto_install if len(package.systemd_units) > 1 else ""}
+
+{override_dh_install_init if len(package.systemd_units) > 1 else ""}
+"""
return rules_contents
|
AbstractNodeData.arguments: update docstring
TN: | @@ -771,15 +771,8 @@ class AbstractNodeData(object):
which take at least a mandatory Self argument and return the
corresponding data.
- This is a list that describes all other arguments. For each argument,
- this contains a tuple for:
-
- * the name of the argument;
- * its type;
- * its default value as a string, or None if there is no default
- value.
-
- Note that only Property instances accept other arguments.
+ This is a list that describes all other arguments. Note that only
+ Property instances accept other arguments.
:type: list[Argument]
"""
|
BUG: fixed bug in extracting type
Fixed bug to extract the underlying type instead of the numpy `dtype` class, which wraps `type`.
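A minimal illustration of the distinction the fix relies on: ndarray.dtype is a numpy.dtype wrapper object, while ndarray.dtype.type is the underlying scalar type that the comparison in the patched code expects.

import numpy as np

data = np.array([1.0, 2.0])
print(data.dtype)        # float64 -- a numpy.dtype instance
print(data.dtype.type)   # <class 'numpy.float64'> -- the wrapped type object
print(data.dtype.type != np.dtype('O'))   # True; the check the patch performs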
"""
# Get the data type
- data_type = data.dtype
+ data_type = data.dtype.type
# Check for object type
if data_type != np.dtype('O'):
|
Temporarily pause Python 3.10 CI tests due to scikit-learn issues with Windows
Scikit-learn is planning to add Python 3.10 support in the middle of December 2021, according to scikit-learn/scikit-learn#21882 | @@ -76,7 +76,7 @@ jobs:
needs: [cache_nltk_data, cache_third_party]
strategy:
matrix:
- python-version: ['3.7', '3.8', '3.9', '3.10']
+ python-version: ['3.7', '3.8', '3.9']
os: [ubuntu-latest, macos-latest, windows-latest]
fail-fast: false
runs-on: ${{ matrix.os }}
|
fix: Add Currency Yemeni Rial
Closes issue | },
"Yemen": {
"code": "ye",
+ "currency": "YER",
"currency_fraction": "Fils",
"currency_fraction_units": 100,
+ "smallest_currency_fraction_value": 0.01,
+ "currency_name": "Yemeni Rial",
"currency_symbol": "\ufdfc",
"number_format": "#,###.##",
"timezones": [
|