message | diff
---|---|
small update on readme.md
Should clone the repository before running the bash install script | @@ -43,9 +43,9 @@ Installation from source is the only currently supported format. ```deepchem```
You can install deepchem in a new conda environment using the conda commands in scripts/install_deepchem_conda.sh
```bash
+git clone https://github.com/deepchem/deepchem.git # Clone deepchem source code from GitHub
bash scripts/install_deepchem_conda.sh deepchem
pip install tensorflow-gpu==1.0.1 # If you want GPU support
-git clone https://github.com/deepchem/deepchem.git # Clone deepchem source code from GitHub
cd deepchem
python setup.py install # Manual install
nosetests -v deepchem --nologcapture # Run tests
|
qt: follow-up changing light/dark theme at runtime
follow-up | @@ -963,8 +963,7 @@ class ColorScheme:
@staticmethod
def update_from_widget(widget, force_dark=False):
- if force_dark or ColorScheme.has_dark_background(widget):
- ColorScheme.dark_scheme = True
+ ColorScheme.dark_scheme = bool(force_dark or ColorScheme.has_dark_background(widget))
class AcceptFileDragDrop:
|
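A minimal Python sketch (hypothetical names, not the project's actual Qt code) of why the one-liner above matters for runtime theme switching: the old branch only ever set the flag to True, so it could never be reset when the user switched back to a light theme.

```python
class ColorScheme:
    dark_scheme = False

def update_from_widget(has_dark_background, force_dark=False):
    # Assign the boolean result unconditionally instead of only setting True.
    ColorScheme.dark_scheme = bool(force_dark or has_dark_background)

update_from_widget(True)
print(ColorScheme.dark_scheme)   # True
update_from_widget(False)
print(ColorScheme.dark_scheme)   # False again; the old code would have stayed True
```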
ci(pre-commit): Correct end-of-file-fixer exclude
The existing regex had numerous bugs and boiled down to a very complex
way of matching any file whatsoever in the tests directory. Correct the
syntax issues, and make a best guess at the original intent. | @@ -5,7 +5,7 @@ repos:
hooks:
- id: check-vcs-permalinks
- id: end-of-file-fixer
- exclude: "tests/[test_*|data|commands/tests_*]/*"
+ exclude: "tests/((commands|data)/|test_).+"
- id: trailing-whitespace
args: [--markdown-linebreak-ext=md]
- id: debug-statements
|
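A small sketch of what the fix above changes, assuming pre-commit's documented behaviour of matching `exclude` against each file path with `re.search`; the paths are made up for illustration.

```python
import re

# Hypothetical file paths; a match anywhere in the path excludes the file.
paths = [
    "tests/test_parser.py",
    "tests/data/sample.json",
    "tests/commands/tests_run.py",
    "tests/conftest.py",   # should NOT be excluded by the corrected pattern
    "src/module.py",
]

old = re.compile(r"tests/[test_*|data|commands/tests_*]/*")  # [...] is a character class by accident
new = re.compile(r"tests/((commands|data)/|test_).+")

for p in paths:
    print(f"{p:30} old={bool(old.search(p))} new={bool(new.search(p))}")
```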
specified a version for pypiwin32 and moved it to install_requires
removed white spaces | @@ -26,6 +26,7 @@ setup(
"eth-hash[pycryptodome]",
"requests>=2.16.0,<3.0.0",
"websockets>=4.0.1",
+ "pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
python_requires='>=3.5, <4',
@@ -39,9 +40,6 @@ setup(
"flake8==3.4.1",
"isort>=4.2.15,<5",
],
- 'platform_system=="Windows"': [
- 'pypiwin32' # TODO: specify a version number, move under install_requires
- ],
},
py_modules=['web3', 'ens'],
license="MIT",
|
adt: use ADT_FOREACH macros internally
This fixes some bugs too. | @@ -78,12 +78,7 @@ const struct adt_property *adt_get_property_namelen(const void *adt, int offset,
{
dprintf("adt_get_property_namelen(%p, %d, \"%s\", %u)\n", adt, offset, name, namelen);
- for (offset = adt_first_property_offset(adt, offset); (offset >= 0);
- (offset = adt_next_property_offset(adt, offset))) {
- const struct adt_property *prop;
-
- prop = adt_get_property_by_offset(adt, offset);
-
+ ADT_FOREACH_PROPERTY(adt, offset, prop) {
dprintf(" off=0x%x name=\"%s\"\n", offset, prop->name);
if (_adt_string_eq(prop->name, name, namelen))
return prop;
@@ -179,19 +174,13 @@ int adt_next_sibling_offset(const void *adt, int offset)
int adt_subnode_offset_namelen(const void *adt, int offset, const char *name, size_t namelen)
{
- const struct adt_node_hdr *node = ADT_NODE(adt, offset);
-
ADT_CHECK_HEADER(adt);
- offset = adt_first_child_offset(adt, offset);
-
- for (u32 i = 0; i < node->child_count; i++) {
+ ADT_FOREACH_CHILD(adt, offset) {
const char *cname = adt_get_name(adt, offset);
if (_adt_nodename_eq(cname, name, namelen))
return offset;
-
- offset = adt_next_sibling_offset(adt, offset);
}
return -ADT_ERR_NOTFOUND;
|
Update detection-testing.yml
Updated to use pip cache | @@ -59,6 +59,7 @@ jobs:
with:
python-version: '3.9' #Available versions here - https://github.com/actions/python-versions/releases easy to change/make a matrix/use pypy
architecture: 'x64' # optional x64 or x86. Defaults to x64 if not specified
+ cache: 'pip'
- name: Install Python Dependencies
run: |
@@ -152,6 +153,7 @@ jobs:
with:
python-version: '3.9' #Available versions here - https://github.com/actions/python-versions/releases easy to change/make a matrix/use pypy
architecture: 'x64' # optional x64 or x86. Defaults to x64 if not specified
+ cache: 'pip'
- name: Install Python Dependencies
run: |
@@ -255,6 +257,7 @@ jobs:
with:
python-version: '3.9' #Available versions here - https://github.com/actions/python-versions/releases easy to change/make a matrix/use pypy
architecture: 'x64' # optional x64 or x86. Defaults to x64 if not specified
+ cache: 'pip'
- name: Install Python Dependencies
run: |
|
fix(docs): make it official to pass a domain during sign-up
The functionality is used by Nextcloud VM and we're now officially
advertising it. | @@ -103,9 +103,6 @@ human-readable error message that may look like::
Zone Creation during Account Registration
*****************************************
-**Note:** The following functionality is intended for internal deSEC use only.
-Availability of this functionality may change without notice.
-
Along with your account creation request, you can provide a domain name as
follows::
|
Update setup.py
unrelease and change author | @@ -111,9 +111,9 @@ def generate_long_description() -> str:
setup(
name="pyjanitor",
- version="0.20.11",
+ version="0.20.10",
description="Tools for cleaning pandas DataFrames",
- author="Eric J. Ma",
+ author="pyjanitor devs",
author_email="[email protected]",
url="https://github.com/ericmjl/pyjanitor",
license="MIT",
|
Update whitelist.txt
::1 -- IPv6 localhost added.
1.1.1.1 is Cloudflare public DNS. Added in PR Remove Bogus group.
symcd.com -- added to "triggering suspicious http request" group. | localhost
127.0.0.1
+::1
local
localdomain
@@ -287,6 +288,7 @@ sanasecurity.com
sim-unlock.net
sqlzoo.net
symcb.com
+symcd.com
victronenergy.com
# found as false positive in otx.alienvault.com
@@ -686,10 +688,6 @@ scorecardresearch.com
216.58.192.0/19
-# Bogus (though, valid DNS server)
-
-1.1.1.1
-
# Teamviewer
37.252.227.51
|
test exposing bug that we don't send warcprox-meta
when pushing stitched-up video with WARCPROX_WRITE_RECORD | @@ -801,7 +801,10 @@ def test_ydl_stitching(httpd):
rr = doublethink.Rethinker('localhost', db='brozzler')
frontier = brozzler.RethinkDbFrontier(rr)
site = brozzler.Site(rr, {
- 'seed': 'http://localhost:%s/site10/' % httpd.server_port})
+ 'seed': 'http://localhost:%s/site10/' % httpd.server_port,
+ 'warcprox_meta': {
+ 'warc-prefix': 'test_ydl_stitching',
+ 'captures-table-extra-fields': {'test_id':test_id}}})
brozzler.new_site(frontier, site)
# the site should be brozzled fairly quickly
@@ -816,11 +819,21 @@ def test_ydl_stitching(httpd):
assert len(pages) == 1
page = pages[0]
assert len(page.videos) == 6
+ stitched_url = 'youtube-dl:00001:http://localhost:%s/site10/' % httpd.server_port
assert {
'blame': 'youtube-dl',
'content-length': 267900,
'content-type': 'video/mp4',
'response_code': 204,
- 'url': 'youtube-dl:00001:http://localhost:%s/site10/' % httpd.server_port,
+ 'url': stitched_url,
} in page.videos
+ time.sleep(2) # in case warcprox hasn't finished processing urls
+ # take a look at the captures table
+ captures = list(rr.table('captures').filter({'test_id':test_id}).run())
+ l = [c for c in captures if c['url'] == stitched_url]
+ assert len(l) == 1
+ c = l[0]
+ assert c['filename'].startswith('test_ydl_stitching')
+ assert c['content_type'] == 'video/mp4'
+ assert c['http_method'] == 'WARCPROX_WRITE_RECORD'
|
Bumped `mesos-dns`.
Added capability in `mesos-dns` of providing DC/OS version it was
validated against. | "single_source" : {
"kind": "git",
"git": "https://github.com/mesosphere/mesos-dns.git",
- "ref": "be539256550ae317c5f3ba440256f04c44d68320",
- "ref_origin": "master"
+ "ref": "114915e5ce05376be2b237b5016d6832d273ea6f",
+ "ref_origin": "dcos/1.11"
},
"username": "dcos_mesos_dns"
}
|
Make changes to readability
Add `\displaystyle` directive to fractions to make them more readable esp. on smaller screens.
Indent some formulae in true score section.
Move the textual descriptions early on in `evaluation.rst` to make it clear that they aren't part of the formula but rather what's being computed. | @@ -744,7 +744,7 @@ def quadratic_weighted_kappa(y_true_observed, y_pred, ddof=0):
The formula is as follows:
- :math:`QWK=\\frac{2*Cov(M,H)}{Var(H)+Var(M)+(\\bar{M}-\\bar{H})^2}`, where
+ :math:`QWK=\\displaystyle\\frac{2*Cov(M,H)}{Var(H)+Var(M)+(\\bar{M}-\\bar{H})^2}`, where
- :math:`Cov` - covariance with normalization by :math:`N` (the total number of observations given)
- :math:`H` - the human score
|
Change ordering of lock acquisition in `put`
The three locks -- the references_by_block_id lock, block_by_block_id, and
blockstore_by_name -- must, if they ever overlap, be acquired in that
order, or there can be a deadlock. | @@ -179,6 +179,10 @@ impl BlockManagerState {
.references_by_block_id
.write()
.expect("Acquiring reference write lock; lock poisoned");
+ let mut block_by_block_id = self
+ .block_by_block_id
+ .write()
+ .expect("Acquiring block pool write lock; lock poisoned");
let blockstore_by_name = self
.blockstore_by_name
.read()
@@ -227,10 +231,6 @@ impl BlockManagerState {
last_block.previous_block_id.clone(),
),
);
- let mut block_by_block_id = self
- .block_by_block_id
- .write()
- .expect("Acquiring block pool write lock; lock poisoned");
block_by_block_id.insert(last_block.header_signature.clone(), last_block.clone());
blocks_with_references.into_iter().for_each(|block| {
@@ -248,12 +248,7 @@ impl BlockManagerState {
COLLECTOR
.gauge("BlockManager.pool_size", None, None)
- .set_value(
- self.block_by_block_id
- .read()
- .expect("Acquiring block pool read lock; lock poisoned")
- .len(),
- );
+ .set_value(block_by_block_id.len());
Ok(())
}
|
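The rule being enforced, sketched in Python rather than the project's Rust (lock names mirror the diff): every code path takes the three locks in one fixed order, so no two threads can each hold one lock while waiting on another.

```python
import threading

references_by_block_id = threading.Lock()
block_by_block_id = threading.Lock()
blockstore_by_name = threading.Lock()

def put(block):
    # Always references -> block pool -> blockstore; never a different order.
    with references_by_block_id, block_by_block_id, blockstore_by_name:
        pass  # ... mutate the shared maps here ...
```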
Change decoder to deal with cuts and confidences
Codecs have to be aware of cuts and confidences on the decoder side. | @@ -20,6 +20,7 @@ graphemes.
"""
import regex
+import numpy as np
from torch import IntTensor
@@ -79,23 +80,37 @@ class PytorchCodec(object):
l.extend(self.c2l[c])
return IntTensor(l)
- def decode(self, l):
+ def decode(self, labels):
"""
- Decodes a labelling into a string.
+ Decodes a labelling.
+
+ Given a labelling with cuts and confidences returns a string with the
+ cuts and confidences aggregated across label-code point
+ correspondences. When decoding multilabels to code points the resulting
+ cuts are min/max, confidences are averaged.
Args:
- l (torch.IntTensor): Input vector containing the label sequence.
+ labels (list): Input containing tuples (label, start, end,
+ confidence).
Returns:
- (list) decoded sequence of unicode code points.
+ list: A list of tuples (code point, start, end, confidence)
"""
# map into unicode space
- l = ''.join(unichr(v) for v in l)
+ l = ''.join(unichr(v) for v, _, _, _ in labels)
+ start = [x for _, x, _, _ in labels]
+ end = [x for _, _, x, _ in labels]
+ con = [x for _, _, _, x in labels]
splits = self._greedy_split(l, self.l2c_regex)
- c = []
+ decoded = []
+ idx = 0
for i in splits:
- c.append(self.l2c[i])
- return c
+ decoded.extend([(c, s, e, u) for c, s, e, u in zip(self.l2c[i],
+ len(i) * [start[idx]],
+ len(i) * [end[idx + len(i) - 1]],
+ len(i) * [np.mean(con[idx:idx + len(i)])])])
+ idx += len(i)
+ return decoded
def _greedy_split(self, input, re):
"""
|
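A small helper written against the docstring above rather than the real codec internals, showing the aggregation rule for one multilabel group: cuts become min/max, confidences are averaged.

```python
import numpy as np

def collapse(group):
    """Collapse (label, start, end, confidence) tuples that decode to a single
    code point: min/max the cuts, average the confidences."""
    starts = [s for _, s, _, _ in group]
    ends = [e for _, _, e, _ in group]
    confs = [c for _, _, _, c in group]
    return min(starts), max(ends), float(np.mean(confs))

# Hypothetical multilabel: two labels that together decode to one code point.
print(collapse([(5, 0, 3, 0.9), (7, 3, 6, 0.7)]))  # roughly (0, 6, 0.8)
```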
Set default_auto_field in wagtail.contrib.postgres_search AppConfig
Add default_auto_field = 'django.db.models.AutoField' | @@ -6,6 +6,7 @@ from .utils import get_postgresql_connections, set_weights
class PostgresSearchConfig(AppConfig):
name = 'wagtail.contrib.postgres_search'
+ default_auto_field = 'django.db.models.AutoField'
def ready(self):
@register(Tags.compatibility, Tags.database)
|
Fix installation with setup.py
Installing using setup.py was failing due to a couple of files having been
moved in the repository. | @@ -86,7 +86,7 @@ the moment it can generate Python, C++, Perl, Lisp and XRC (wxWidgets'
XML resources) code."""
text_files = ['CHANGES.txt', 'CONTRIBUTING.txt', 'CREDITS.txt',
- 'LICENSE.txt', 'NEWS.txt', 'README.txt', 'TODO.txt', ]
+ 'LICENSE.txt', 'NEWS.txt', 'README.txt', 'docs/Todo.txt',]
data_files = [
['share/wxglade/icons', glob('icons/*.*')],
@@ -98,7 +98,7 @@ data_files = [
['share/doc/wxglade/tutorial/img', glob('docs/img/*.*')],
['share/doc/wxglade/manual_html', glob('docs/html/*.*')],
['share/doc/wxglade/manual_pdf', glob('docs/pdf/*.pdf')],
- ['share/man/man1', ['docs/man/wxglade.1']],
+ ['share/man/man1', ['docs_old/man/wxglade.1']],
]
packages = ['wxglade.%s' % pkg for pkg in find_packages(exclude=['tests'])]
|
fix ocs_operator_storage_cluster_cr url
the file ocs_v1alpha1_storagecluster_cr.yaml was renamed to
ocs_v1_storagecluster_cr.yaml
fixes | @@ -41,7 +41,7 @@ DEPLOYMENT:
# opened issue here about the version in the name of OLM
# https://github.com/openshift/ocs-operator/issues/50
ocs_csv_version: "v0.0.1"
- ocs_operator_storage_cluster_cr: "https://raw.githubusercontent.com/openshift/ocs-operator/master/deploy/crds/ocs_v1alpha1_storagecluster_cr.yaml"
+ ocs_operator_storage_cluster_cr: "https://raw.githubusercontent.com/openshift/ocs-operator/master/deploy/crds/ocs_v1_storagecluster_cr.yaml"
ocs_operator_nodes_to_label: 3
# This is described as a WA for minimal configuration 3/3 worker/master
# nodes. See: https://github.com/openshift/ocs-operator
|
Fixed reconfigure compute message in audit | @@ -21,7 +21,7 @@ package com.epam.dlab.rest.contracts;
public interface ComputationalAPI {
String AUDIT_MESSAGE = "Notebook name %s";
- String AUDIT_COMPUTATIONAL_RECONFIGURE_MESSAGE = "Reconfigure compute <%s>, requested for notebook <%s>";
+ String AUDIT_COMPUTATIONAL_RECONFIGURE_MESSAGE = "Reconfigure compute %s, requested for notebook %s";
String LIBRARY = "library/";
String COMPUTATIONAL = "computational";
String COMPUTATIONAL_CREATE = COMPUTATIONAL + "/create";
|
Remove exception catcher in PresentationConnector
Connections should succeed, or the transaction should be reverted. | @@ -37,7 +37,6 @@ class PresentationConnector(ItemConnector):
item = self.item
cinfo = self.connections.get_connection(handle)
- try:
if cinfo and cinfo.connected is sink.item:
# reconnect only constraint - leave model intact
log.debug("performing reconnect constraint")
@@ -73,8 +72,6 @@ class PresentationConnector(ItemConnector):
# adapter requires both ends to be connected.
connect(handle, sink.port)
item.handle(ItemConnected(item, handle, sink.item, sink.port))
- except Exception:
- log.error("Error during connect", exc_info=True)
def connect_handle(self, sink):
callback = DisconnectHandle(self.item, self.handle, self.connections)
|
Updated install docs with supported django versions
Updated docs to reflect that django CMS supports Django 1.9 and 1.10 in addition to Django 1.8 | @@ -8,7 +8,7 @@ We'll get started by setting up our environment.
Requirements
************
-django CMS requires Django 1.8, and Python 2.7, 3.3 or 3.4.
+django CMS requires Django 1.8, 1.9 or 1.10 and Python 2.7, 3.3 or 3.4.
************************
Your working environment
|
Drop mention of long-removed 'policy' object.
Closes | @@ -5,9 +5,7 @@ Subscribing to messages is handled through the
:class:`~.pubsub_v1.subscriber.client.Client` class (aliased as
``google.cloud.pubsub.SubscriberClient``). This class provides a
:meth:`~.pubsub_v1.subscriber.client.Client.subscribe` method to
-attach to subscriptions on existing topics, and (most importantly) a
-:meth:`~.pubsub_v1.subscriber.policy.thread.Policy.open` method that
-consumes messages from Pub/Sub.
+attach to subscriptions on existing topics.
Instantiating a subscriber client is straightforward:
|
check if hasattr before accessing it
Summary: Related to . Just need to check hasattr before accessing the _console field. | @@ -624,7 +624,8 @@ class SSHCommandSession(CliCommandSession):
# TODO: We need to resolve the console hostnames to ip addresses.
# Ignoring consoles for now.
- if not self._console and self._devinfo.connect_using_proxy(host):
+ if (not getattr(self, '_console', False) and
+ self._devinfo.connect_using_proxy(host)):
host = self.service.get_http_proxy_url(host)
self.logger.info("Connecting to: %s: %d", host, port)
|
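A tiny illustration of the guard above (hypothetical class, not the real session object): `getattr` with a default never raises, so the check is safe even when `__init__` never set `_console`.

```python
class Session:
    pass

s = Session()
# False instead of AttributeError when the attribute was never assigned.
print(getattr(s, "_console", False))
```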
Update density fitting mcscf example.
mf._cderi requires the h5py file object ("file1"), not the
dataset object ("file1['j3c']"). Obsolete API? | @@ -41,7 +41,7 @@ with h5py.File(ftmp.name, 'r') as file1:
# Note, here the integral object file1['j3c'] are not loaded in memory.
# It is still the HDF5 array object held on disk. The HDF5 array can be used
# the same way as the regular numpy ndarray stored in memory.
- mf.with_df._cderi = file1['j3c']
+ mf.with_df._cderi = file1
mf.kernel()
# Note the mc object must be put inside the "with" statement block because it
|
Actually fill in string template in many events pipeline
Summary: Missed this one
Test Plan: Look at event log
Reviewers: #ft, natekupp | def create_raw_file_solid(name):
def do_expectation(_context, _value):
return ExpectationResult(
- success=True, label='output_table_exists', description='Checked {name} exists'
+ success=True,
+ label='output_table_exists',
+ description='Checked {name} exists'.format(name=name),
)
@solid(
|
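A minimal illustration of the bug being fixed above (the solid name is made up): without `.format()`, the literal placeholder ends up in the expectation description.

```python
name = "raw_file_users"  # hypothetical solid name
print('Checked {name} exists')                    # -> Checked {name} exists
print('Checked {name} exists'.format(name=name))  # -> Checked raw_file_users exists
```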
Issue : pbar persists due to specific rule in tqdm (notebook) when n < total
* Issue
pbar persists in notebook due to specific rules when n < total
* closing pbar doesn't raise the danger bar
* fix when pbar.total is None | @@ -199,6 +199,11 @@ class ProgressBar(BaseLogger):
def _close(self, engine):
if self.pbar is not None:
+ # https://github.com/tqdm/notebook.py#L240-L250
+ # issue #1115 : notebook backend of tqdm checks if n < total (error or KeyboardInterrupt)
+ # and the bar persists in 'danger' mode
+ if self.pbar.total is not None:
+ self.pbar.n = self.pbar.total
self.pbar.close()
self.pbar = None
|
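A short sketch of the workaround outside the handler shown above: bump `n` up to `total` before closing so tqdm's notebook backend does not leave the bar in its red "danger" state, while guarding against bars created with `total=None`.

```python
from tqdm import tqdm

pbar = tqdm(total=10)
pbar.update(4)          # pretend the loop was interrupted here (n < total)
if pbar.total is not None:
    pbar.n = pbar.total  # mark the bar as complete before closing
pbar.close()
```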
minor changes
requested changes are addressed | @@ -55,6 +55,7 @@ def logistic_reg(alpha,X,y,max_iterations=70000):
if iterations== max_iterations:
print("Maximum iterations exceeded!")
+ print("Minimal cost function J=",J)
converged=True
return theta
|
auth: log details in case of user auth issue
When a local SaltClient sends a job as "root" while the Master runs as a non-root user, make it more clear why the message `Authentication failure of type "user" occurred.` is logged. | @@ -347,6 +347,7 @@ class LoadAuth(object):
load["user"] == self.opts.get("user", "root") or load["user"] == "root"
):
if auth_key != key[self.opts.get("user", "root")]:
+ log.warning('Master runs as "{}", but user in payload is "{}"'.format(self.opts.get('user', 'root'), load['user']))
log.warning(error_msg)
return False
elif auth_user.is_running_user():
|
[docs] Add instructions for uploading CI resources to S3
These were missing the final step to use the uploaded resources | @@ -148,12 +148,14 @@ server can go down or be slow), so try to avoid using the network at all during
this isn't a reasonable proposition (e.g. the docs tutorials which need to download models).
In these cases you can re-host files in S3 for fast access in CI. A committer can upload a file,
-specified by a name, hash, and path in S3, using the `workflow_dispatch` event on `the
+specified by a name, hash, and path in S3, using the ``workflow_dispatch`` event on `the
upload_ci_resource.yml GitHub Actions workflow
<https://github.com/apache/tvm/actions/workflows/upload_ci_resource.yml>`_. The sha256 must match
the file or it will not be uploaded. The upload path is user-defined so it can be any path (no
trailing or leading slashes allowed) but be careful not to collide with existing resources on
-accident.
+accident. Once uploaded you should send a PR to update the ``URL_MAP`` in
+`request_hook.py <https://github.com/apache/tvm/blob/main/tests/scripts/request_hook/request_hook.py>`_
+with the new URL.
Handle Integer Constant Expression
|
Bump setuptools & setuptools_scm
Just as a precaution; lower versions were causing some weirdness
for scikit-misc. | # Reference https://github.com/pydata/xarray/blob/main/pyproject.toml
[build-system]
requires = [
- "setuptools>=45",
- "setuptools_scm[toml]>=6.2",
+ "setuptools>=59",
+ "setuptools_scm[toml]>=6.4",
"wheel",
]
build-backend = "setuptools.build_meta"
|
Update Jenkinsfile
- To use full GIT SHA in img
- To use latest jenkins pipeline lib. | -@Library('github.com/mozmeao/[email protected]')
+@Library('github.com/mozmeao/[email protected]')
def config
def docker_image
def dc_name
@@ -26,7 +26,7 @@ conduit {
stage("Build") {
if (!dockerImageExists(docker_image)) {
- sh "GIT_SHA=${GIT_COMMIT_SHORT} LOCALE_ENV=production ./docker/bin/build-docker-images.sh"
+ sh "GIT_SHA=${GIT_COMMIT} LOCALE_ENV=production ./docker/bin/build-docker-images.sh"
}
else {
echo "Image ${docker_image} already exists."
|
poll_widget: Use e.key instead of deprecated e.keyCode.
Tested by making sure Enter and Escape work for editing poll title
and adding new poll options. | @@ -125,12 +125,12 @@ export function activate({
elem.find("input.poll-question").on("keydown", (e) => {
e.stopPropagation();
- if (e.keyCode === 13) {
+ if (e.key === "Enter") {
submit_question();
return;
}
- if (e.keyCode === 27) {
+ if (e.key === "Escape") {
abort_edit();
return;
}
@@ -159,12 +159,12 @@ export function activate({
elem.find("input.poll-option").on("keydown", (e) => {
e.stopPropagation();
- if (e.keyCode === 13) {
+ if (e.key === "Enter") {
submit_option();
return;
}
- if (e.keyCode === 27) {
+ if (e.key === "Escape") {
$("input.poll-option").val("");
return;
}
|
Update pythonpackage.yml
Try to fix CI with mongo install | @@ -11,8 +11,6 @@ jobs:
matrix:
python-version: [3.7]
os: [ubuntu-latest, macOS-latest]
- mongodb-version: ["4.2"]
-
steps:
- uses: actions/checkout@v1
@@ -28,16 +26,17 @@ jobs:
eval "$(ssh-agent -s)"
ssh-add - <<< "${CHIA_MACHINE_SSH_KEY}"
git submodule update --init --recursive
- brew update && brew install gmp docker
+ brew tap mongodb/brew
+ sudo apt-get install --no-install-recommends mongodb-org=4.2.1
+ brew update && brew install gmp [email protected]
python3 -m venv .venv
. .venv/bin/activate
pip install wheel # For building blspy
pip install -e .
pip install -r requirements.txt
- - name: Launch MongoDB
- uses: wbari/[email protected]
- with:
- mongoDBVersion: ${{ matrix.mongodb-version }}
+ - name: Start mongodb
+ run: |
+ mongod --dbpath ./db/ &
- name: Lint with flake8
run: |
./.venv/bin/flake8 src
|
Update config.yml
Moved another to the broken category | @@ -49,8 +49,8 @@ jobs:
make DESIGN_NAME=switched_capacitor_filter ALIGN_docker
make DESIGN_NAME=cascode_current_mirror_ota ALIGN_docker
make DESIGN_NAME=current_mirror_ota ALIGN_docker
- make DESIGN_NAME=five_transistor_ota ALIGN_docker
# Currently failing
+# make DESIGN_NAME=five_transistor_ota ALIGN_docker
# make DESIGN_NAME=sc_dc_dc_converter ALIGN_docker
build-tally:
|
remove object/init boilerplate
This is unused -- it would affect how __init__ is called in the presence of subclasses, but this class is not designed for subclassing/multiple inheritance | @@ -5,10 +5,7 @@ from parsl.executors.base import ParslExecutor
from parsl.providers.provider_base import JobStatus, JobState
-class JobErrorHandler(object):
- def __init__(self):
- pass
-
+class JobErrorHandler:
def run(self, status: List[ExecutorStatus]):
for es in status:
self._check_irrecoverable_executor(es)
|
Add support for Vault KV API v2
This adds the ability to target KV API v1 or v2. | @@ -37,6 +37,17 @@ class VaultDestinationPlugin(DestinationPlugin):
'validation': '^https?://[a-zA-Z0-9.:-]+$',
'helpMessage': 'Valid URL to Hashi Vault instance'
},
+ {
+ 'name': 'vaultKvApiVersion',
+ 'type': 'select',
+ 'value': '2',
+ 'available': [
+ '1',
+ '2'
+ ],
+ 'required': True,
+ 'helpMessage': 'Version of the Vault KV API to use'
+ },
{
'name': 'vaultAuthTokenFile',
'type': 'str',
@@ -98,17 +109,20 @@ class VaultDestinationPlugin(DestinationPlugin):
path = self.get_option('vaultPath', options)
bundle = self.get_option('bundleChain', options)
obj_name = self.get_option('objectName', options)
+ api_version = self.get_option('vaultKvApiVersion', options)
with open(token_file, 'r') as file:
token = file.readline().rstrip('\n')
client = hvac.Client(url=url, token=token)
+ client.secrets.kv.default_kv_version = api_version
+
if obj_name:
path = '{0}/{1}'.format(path, obj_name)
else:
path = '{0}/{1}'.format(path, cname)
- secret = get_secret(url, token, mount, path)
+ secret = get_secret(client, mount, path)
secret['data'][cname] = {}
if bundle == 'Nginx' and cert_chain:
@@ -123,8 +137,9 @@ class VaultDestinationPlugin(DestinationPlugin):
if isinstance(san_list, list):
secret['data'][cname]['san'] = san_list
try:
- client.secrets.kv.v1.create_or_update_secret(
- path=path, mount_point=mount, secret=secret['data'])
+ client.secrets.kv.create_or_update_secret(
+ path=path, mount_point=mount, secret=secret['data']
+ )
except ConnectionError as err:
current_app.logger.exception(
"Exception uploading secret to vault: {0}".format(err), exc_info=True)
@@ -144,12 +159,14 @@ def get_san_list(body):
return san_list
-def get_secret(url, token, mount, path):
+def get_secret(client, mount, path):
""" retreiive existing data from mount path and return dictionary """
result = {'data': {}}
try:
- client = hvac.Client(url=url, token=token)
+ if client.secrets.kv.default_kv_version == '1':
result = client.secrets.kv.v1.read_secret(path=path, mount_point=mount)
+ else:
+ result = client.secrets.kv.v2.read_secret_version(path=path, mount_point=mount)
except ConnectionError:
pass
finally:
|
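A hedged sketch assembled from the calls in the diff above; the URL, token, mount point and path are placeholders. The KV version is set once on the client and reads go through the matching accessor.

```python
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")
client.secrets.kv.default_kv_version = "2"  # or "1", as configured in the plugin

if client.secrets.kv.default_kv_version == "1":
    secret = client.secrets.kv.v1.read_secret(path="certs/example.org", mount_point="secret")
else:
    secret = client.secrets.kv.v2.read_secret_version(path="certs/example.org", mount_point="secret")
```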
docker_mock: fix remember_tags functionality
Previously tagging an image without an explicit tag would
trick this stub into thinking the image was not present,
as inspection always uses an explicit tag. | @@ -193,6 +193,7 @@ mock_inspect_container = {
def _find_image(img, ignore_registry=False):
global mock_images
+ img = ImageName.parse(img).to_str(explicit_tag=True)
for im in mock_images:
im_name = im['RepoTags'][0]
if im_name == img:
|
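A hypothetical helper (the real fix uses the project's `ImageName.parse`) showing the normalisation the stub relies on: give every reference an explicit tag so `busybox` and `busybox:latest` resolve to the same entry.

```python
def with_explicit_tag(image: str) -> str:
    # Only look at the last path segment so a registry port is not mistaken for a tag.
    last = image.rsplit("/", 1)[-1]
    return image if ":" in last else image + ":latest"

print(with_explicit_tag("busybox"))            # busybox:latest
print(with_explicit_tag("registry:5000/app"))  # registry:5000/app:latest
print(with_explicit_tag("app:1.2"))            # app:1.2
```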
Fix whitespace errors
reduces diff against devel | @@ -230,6 +230,7 @@ class IntegratorMechanism(ProcessingMechanism_Base):
name=name,
prefs=prefs,
context=self)
+
# IMPLEMENT: INITIALIZE LOG ENTRIES, NOW THAT ALL PARTS OF THE MECHANISM HAVE BEEN INSTANTIATED
# MODIFIED 6/2/17 NEW:
|
[util/cli] make sure language is set to "C"
to make parsing of CLI output reliable (i.e. always in english), set
LC_ALL accordingly.
see | @@ -30,6 +30,11 @@ def execute(
"""
args = cmd if shell else shlex.split(cmd)
logging.debug(cmd)
+
+ if not env:
+ env = os.environ.copy()
+ env["LC_ALL"] = "C"
+
try:
proc = subprocess.Popen(
args,
|
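A minimal sketch of the same idea in isolation: copy the current environment and force `LC_ALL=C` so the tool always prints English, keeping the output stable to parse.

```python
import os
import shlex
import subprocess

def execute(cmd, env=None):
    if not env:
        env = os.environ.copy()
    env["LC_ALL"] = "C"  # force the C locale so output is always English
    return subprocess.run(shlex.split(cmd), capture_output=True, text=True, env=env)

print(execute("date").stdout.strip())
```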
Update Nintendo - Super Nintendo Entertainment System.dat
The CRC32 is right, but the MD5 and SHA1 hashes for Hyper Metroid by RealRed version (1.0) are wrong. | @@ -179,7 +179,7 @@ game (
name "Hyper Metroid [Hack by RealRed]"
description "Hyper Metroid by RealRed version (1.0)"
homepage "http://metroidconstruction.com/hack.php?id=294"
- rom ( name "Super Metroid (Japan, USA) (En,Ja).sfc" size 4194304 crc d4d38230 md5 6b3c722165b3c566eda465386b585b51 sha1 727f8763983753e014e706520bb958bb4752e8b7 )
+ rom ( name "Super Metroid (Japan, USA) (En,Ja).sfc" size 4194304 crc d4d38230 md5 51c91e0372d47207a325e8eca40e0587 sha1 d8a37ef21a73d6be0f8907090a33d96531358cc0 )
)
game (
name "Mario & Luigi - Kola Kingdom Quest [Hack by Gamma V]"
|
SceneView : Draw overscan region
Crop window and Overscan are generally mutually exclusive. We don't
reflect this in `StandardOptions` though. As such, this implementation
simply draws the overscan region when set, regardless of the crop window
configuration.
Improvements
- Viewer : Added overscan display when looking through a suitably
configured render camera. | @@ -674,8 +674,6 @@ class SceneView::Gnomon : public boost::signals::trackable
namespace
{
-/// \todo If we made CropWindowTool::Rectangle public, we
-/// could ditch this class.
class CameraOverlay : public GafferUI::Gadget
{
@@ -740,6 +738,22 @@ class CameraOverlay : public GafferUI::Gadget
return m_cropWindow;
}
+ // left, top, right, bottom
+ void setOverscan( const V4f &overscan )
+ {
+ if( overscan == m_overscan )
+ {
+ return;
+ }
+ m_overscan = overscan;
+ requestRender();
+ }
+
+ const V4f &getOverscan() const
+ {
+ return m_overscan;
+ }
+
void setCaption( const std::string &caption )
{
if( caption == m_caption )
@@ -843,6 +857,26 @@ class CameraOverlay : public GafferUI::Gadget
glColor4f( 0, 0.25, 0, 1.0f );
style->renderRectangle( m_resolutionGate );
+ if( m_overscan[0] != 0.0f || m_overscan[1] != 0.0f || m_overscan[2] != 0.0f || m_overscan[3] != 0.0f )
+ {
+ glLineStipple( 2, 0x3333 );
+ glEnable( GL_LINE_STIPPLE );
+
+ const V2f gateSize = m_resolutionGate.size();
+ style->renderRectangle( Box2f(
+ V2f(
+ m_resolutionGate.min.x - ( m_overscan[0] * gateSize.x ),
+ m_resolutionGate.min.y - ( m_overscan[1] * gateSize.y )
+ ),
+ V2f(
+ m_resolutionGate.max.x + ( m_overscan[2] * gateSize.x ),
+ m_resolutionGate.max.y + ( m_overscan[3] * gateSize.y )
+ )
+ ) );
+
+ glDisable( GL_LINE_STIPPLE );
+ }
+
if( !m_icon.empty() )
{
IECoreGL::ConstTexturePtr texture = ImageGadget::loadTexture( m_icon );
@@ -878,6 +912,7 @@ class CameraOverlay : public GafferUI::Gadget
Box2f m_resolutionGate;
Box2f m_apertureGate;
Box2f m_cropWindow;
+ V4f m_overscan;
std::string m_caption;
std::string m_icon;
@@ -1249,6 +1284,18 @@ class SceneView::Camera : public boost::signals::trackable
{
m_overlay->setCropWindow( Box2f( V2f( 0 ), V2f( 1 ) ) );
}
+ if( isCamera && m_lookThroughCamera->getOverscan() )
+ {
+ const float left = m_lookThroughCamera->getOverscanLeft();
+ const float top = m_lookThroughCamera->getOverscanTop();
+ const float right = m_lookThroughCamera->getOverscanRight();
+ const float bottom = m_lookThroughCamera->getOverscanBottom();
+ m_overlay->setOverscan( V4f( left, top, right, bottom ) );
+ }
+ else
+ {
+ m_overlay->setOverscan( V4f( 0.0f ) );
+ }
if( errorMessage.empty() )
{
|
[cleanup] update site_detect_tests
not look like a mw site
is no longer a wiki but redirects to
instead
is no longer a wiki but redirects to
is no longer available
is no longer reachable | @@ -114,29 +114,15 @@ class Pre114SiteTestCase(SiteDetectionTestCase):
"""Test pre 1.14 sites which should be detected as unsupported."""
- def test_livepedia(self):
- """Test detection of MediaWiki sites for www.livepedia.gr."""
- self.assertNoSite(
- 'http://www.livepedia.gr/index.php?title=$1') # v1.12
-
def test_wikifon(self):
"""Test detection of MediaWiki sites for www.wikifon.org."""
self.assertNoSite('http://www.wikifon.org/$1') # v1.11
- def test_reuters(self):
- """Test detection of MediaWiki sites for glossary.reuters.com."""
- self.assertNoSite(
- 'http://glossary.reuters.com/index.php?title=$1') # v1.11
-
def test_wikitree(self):
"""Test detection of MediaWiki sites for wikitree.org."""
# v1.11, with no query module
self.assertNoSite('http://wikitree.org/index.php?title=$1')
- def test_wikinvest(self):
- """Test detection of MediaWiki sites for www.wikinvest.com."""
- self.assertNoSite('http://www.wikinvest.com/$1') # v1.9
-
class PreAPISiteTestCase(SiteDetectionTestCase):
@@ -150,10 +136,6 @@ class PreAPISiteTestCase(SiteDetectionTestCase):
"""Test detection of MediaWiki sites for www.thelemapedia.org."""
self.assertNoSite('http://www.thelemapedia.org/index.php/$1')
- def test_blahus(self):
- """Test detection of MediaWiki sites for esperanto.blahus.cz."""
- self.assertNoSite('http://esperanto.blahus.cz/cxej/vikio/index.php/$1')
-
def test_werelate(self):
"""Test detection of MediaWiki sites for www.werelate.org."""
self.assertNoSite('http://www.werelate.org/wiki/$1')
@@ -222,10 +204,6 @@ class NoSiteTestCase(SiteDetectionTestCase):
"""Test detection of MediaWiki sites for www.ecyrd.com."""
self.assertNoSite('http://www.ecyrd.com/JSPWiki/Wiki.jsp?page=$1')
- def test_operawiki(self):
- """Test detection of MediaWiki sites for operawiki.info."""
- self.assertNoSite('http://operawiki.info/$1')
-
def test_tvtropes(self):
"""Test detection of MediaWiki sites for www.tvtropes.org."""
self.assertNoSite('http://www.tvtropes.org/pmwiki/pmwiki.php/Main/$1')
|
Remove unicode from notifier
Implements: blueprint remove-unicode | @@ -84,18 +84,18 @@ def _alarm_request(data, state):
severity = severity_translation.get(data.get(VProps.SEVERITY), 'low')
return dict(
name=aodh_alarm_name,
- description=u'Vitrage deduced alarm',
+ description='Vitrage deduced alarm',
event_rule=dict(query=[
dict(
- field=u'resource_id',
+ field='resource_id',
type='',
- op=u'eq',
+ op='eq',
value=affected_resource_id),
dict(
- field=u'vitrage_id',
+ field='vitrage_id',
type='',
- op=u'eq',
+ op='eq',
value=data.get(VProps.VITRAGE_ID))]),
severity=severity,
state=state,
- type=u'event')
+ type='event')
|
Update ray.md
The title for the images appears in the view/preview and we do not need it, so I removed it. | @@ -25,7 +25,7 @@ TTC = Instrument("TTC", 8, "TensorTrade Coin")
Now let us look at the curve we will be using to define our price.
-
+
Ideally, what we will be expecting from our agent is that it will be able to sell at the peaks and buy at the troughs. We will move on to defining our action scheme. The `ActionScheme` we are going to build is going to be extremely simple. There are only 3 states that our agent can be in which are `buy`, `sell`, and `hold`. We will make use of a new function in the library, `proportion_order`. This function enables the user to make an order that can take a percentage of funds at a particular wallet and send it to another. Therefore, we want to structure a way to only have two actions in our scheme and use them as indicators to move our funds to the opposite wallet.
@@ -361,7 +361,7 @@ while not done:
env.render()
```
-
+
From the rendering, you can see that the agent is making near optimal decisions on the environment. Now we want to put the agent in an environment it is not used to and see what kind of decisions it makes. We will use for our price, an `order` 5 Fourier series fitted to a randomly generated Geometric Brownian Motion (GBM). We will use the `symfit` library to do this.
@@ -512,10 +512,10 @@ while not done:
The following are a few examples of the rendering that came from the evaluation environment.
-
-
-
-
+
+
+
+
As you can see, the agent has been able to make correct decisions on some of the price curves, but not all of them. The last curve shows some of the shortcomings of the agent. The agent seemed to have stop making decisions to move its wealth to capitalize on the changes in price. In the first three, however, it was able to make those decisions. The reason for this is most likely the change in the frequency of the curve. The last chart has a price curve that is volatile, containing many local maxima and minima as opposed to the first three charts.
|
Terminate gpg-agent after each test.
This is relevant for gnupg 2.1, which requires the agent and starts it
automatically. But since each test creates a new keystore, tests start to
fail after the first one starts gpg-agent. | @@ -237,6 +237,19 @@ class DecryptGPGPillarTest(ModuleCase):
@classmethod
def tearDownClass(cls):
+ cmd = ['gpg-connect-agent', '--homedir', GPG_HOMEDIR]
+ try:
+ log.debug('Killing gpg-agent using: %s', cmd)
+ output = subprocess.Popen(cmd,
+ stdin=subprocess.PIPE,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ shell=False).communicate(input=six.b('KILLAGENT'))[0]
+ log.debug('Result:\n%s', output)
+ except OSError:
+ log.debug('No need to kill: old gnupg doesn\'t start the agent.')
+ pass
+
if cls.created_gpg_homedir:
try:
shutil.rmtree(GPG_HOMEDIR)
|
Don't wrap small number of node changes in delay mptt updates
improves performance roughly 5x | @@ -333,7 +333,7 @@ def move_nodes(channel_id, target_parent_id, nodes, min_order, max_order, task_o
percent_per_node = math.ceil(total_percent / len(nodes))
percent_done = 0.0
- with ContentNode.objects.delay_mptt_updates():
+ with transaction.atomic():
for n in nodes:
min_order = min_order + float(max_order - min_order) / 2
node = ContentNode.objects.get(pk=n['id'])
|
Update About page
updated citation and where we get Rx data from | <h3>Citation</h3>
-<p>You are welcome to use data or graphs from this site in your academic output with attribution. Our methods paper will be published shortly, until then please cite "OpenPrescribing.net, EBM DataLab, University of Oxford, 2017" as the source for academic attribution.</p>
+<p>You are welcome to use data or graphs from this site in your academic output with attribution. Our methods paper will be published shortly, until then please cite "OpenPrescribing.net, EBM DataLab, University of Oxford, 2019" as the source for academic attribution.</p>
<p>If you use data or images from this site online or in a report, please link back to us. Your readers will then be able to see live updates to the data you are interested in, and explore other queries for themselves.</p>
@@ -37,7 +37,7 @@ cross-platform testing.</p>
<h3 id="sources">Data sources</h3>
-<p><strong>Prescribing data</strong> is from the monthly files published by <a href="https://digital.nhs.uk/practice-level-prescribing-summary">NHS Digital</a>, used under the terms of the Open Government Licence.</p>
+<p><strong>Prescribing data</strong> is from the monthly files published by <a href="https://www.nhsbsa.nhs.uk/information-services-portal-isp">NHS Business Service Authority</a>, used under the terms of the Open Government Licence.</p>
<p><strong>Practice list sizes</strong> for August 2010 - September 2016 are from the <a href="https://apps.nhsbsa.nhs.uk/infosystems/welcome">NHS Business Service Authority's Information Portal</a>, used under the terms of the Open Government Licence. From October 2016, practice list sizes are from <a href="http://content.digital.nhs.uk/article/2021/Website-Search?q=number+of+patients+registered+at+a+gp+practice&go=Go&area=both">NHS Digital</a>, used under the terms of the Open Government Licence. ASTRO-PU and STAR-PUs are calculated from list sizes, based on standard formulas.</p>
|
bumps version
+ incorporates IUPAC to N, to uppercase
+ adds tmpdir to bbnorm | -__version__ = "1.0.30"
+__version__ = "1.0.31"
TAX_LEVELS = ["superkingdom", "phylum", "class", "order", "family", "genus", "species"]
BLAST6 = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen", "qstart",
"qend", "sstart", "send", "evalue", "bitscore"]
|
Project Description : Update readme and copyrights
Readme: minor copy and formatting edits
Readme: update/add links
Readme: add Gaffer logo
Readme: update copyrights
License: update copyrights | -Copyright (c) 2011-2016, John Haddon. All rights reserved.
-Copyright (c) 2011-2016 Image Engine Design Inc.
+Copyright (c) 2011-2018 John Haddon. All rights reserved.
+Copyright (c) 2011-2018 Image Engine Design Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
|
Fix an error on pypy3
Fix | @@ -75,10 +75,16 @@ def md5_digest(data):
if platform.python_implementation() == 'PyPy':
def create_readline_wrapper(fh):
fh.recv = fh.read
+ if is_py2:
if not hasattr(fh, '_drop'):
fh._drop = lambda: None
fh._reuse = lambda: None
- return socket._fileobject(fh, close=True)
+ ans = socket._fileobject(fh, close=True)
+ else:
+ fh.recv_into = fh.readinto
+ fh._decref_socketios = lambda: None
+ ans = BufferedReader(socket.SocketIO(fh, 'r'))
+ return ans
else:
def create_readline_wrapper(fh):
fh.recv = fh.read
|
dart: Only allocate a max of 2 L2 tables
This is what iBoot normally does, and >2 will panic macOS | @@ -188,8 +188,8 @@ int dart_setup_pt_region(dart_dev_t *dart, const char *path, int device)
printf("dart: dart %s ignoring large pt-region-0, %lu L2 tables\n", path, tbl_count);
return -1;
}
- /* first index is the l1 table */
- tbl_count -= 1;
+ /* first index is the l1 table, cap at 2 or else macOS hates it */
+ tbl_count = min(2, tbl_count - 1);
u64 l2_start = region[0] + SZ_16K;
for (u64 index = 0; index < tbl_count; index++) {
int ttbr = index >> 11;
@@ -204,6 +204,7 @@ int dart_setup_pt_region(dart_dev_t *dart, const char *path, int device)
off, l2tbl);
continue;
} else {
+ printf("dart: allocating L2 tbl at %d, %d to 0x%lx\n", ttbr, idx, l2tbl);
memset((void *)l2tbl, 0, SZ_16K);
}
|
Fix for 'on_order' calculation
Handle null results | @@ -237,6 +237,9 @@ class SupplierPart(models.Model):
@property
def manufacturer_string(self):
+ """ Format a MPN string for this SupplierPart.
+ Concatenates manufacture name and part number
+ """
items = []
@@ -315,7 +318,16 @@ class SupplierPart(models.Model):
totals = self.open_orders().aggregate(Sum('quantity'), Sum('received'))
- return totals['quantity__sum'] - totals['received__sum']
+ # Quantity on order
+ q = totals.get('quantity__sum', 0)
+
+ # Quantity received
+ r = totals.get('received__sum', 0)
+
+ if q is None or r is None:
+ return 0
+ else:
+ return q - r
def purchase_orders(self):
|
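A small sketch of the guard above, with the aggregate result modelled as a plain dict: Django's `aggregate()` yields `None` for a `Sum` over an empty queryset, so both totals are checked before subtracting.

```python
def on_order(totals):
    q = totals.get('quantity__sum', 0)  # quantity ordered
    r = totals.get('received__sum', 0)  # quantity received
    if q is None or r is None:
        return 0
    return q - r

print(on_order({'quantity__sum': None, 'received__sum': None}))  # 0
print(on_order({'quantity__sum': 10, 'received__sum': 4}))       # 6
```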
Fixed grouping
It now groups, but the function needs improvement.
To finish the implementation I have to modify `spreadsheep.py` | @@ -278,6 +278,7 @@ def group_parts(components, fields_merge):
value = SGROUP_SEPRTR.join( [collapse_refs(r) + SEPRTR + ' ' + t for t,r in ocurrences.items()] )
for r in grp.refs:
components[r][f] = value
+ #print('++++++++++++++',len(new_component_groups))
#for grp in new_component_groups:
# print(grp.refs)
# for r in grp.refs:
@@ -288,10 +289,16 @@ def group_parts(components, fields_merge):
logger.log(DEBUG_OVERVIEW, 'Propagating field values to identical components...')
for grp in new_component_groups:
grp_fields = {}
+ qty = []
for ref in grp.refs:
for key, val in list(components[ref].items()):
if key == 'manf#_qty':
- #grp_fields['manf#_qty'] #TODO
+ try:
+ for i in range(len(val)):
+ grp_fields['manf#_qty'][i] += '+' + val[i] # DUMMY way and need improvement to realy do arithmetic and not string cat. #TODO
+ val[i] = grp_fields['manf#_qty'][i] # Make the firt values take also equal.
+ except:
+ grp_fields['manf#_qty'] = val
continue
if val is None: # Field with no value...
continue # so ignore it.
@@ -306,6 +313,8 @@ def group_parts(components, fields_merge):
#print('------------')
#for grp in new_component_groups:
# print(grp.refs)
+ # for r in grp.refs:
+ # print(r, components[r])
#exit(1)
return new_component_groups
|
[ENH] better error message on transform output check fail
This PR improves the error message if the transform output check fails, by pointing to the estimator type and printing the output. | @@ -1021,11 +1021,18 @@ class BaseTransformer(BaseEstimator):
# we cannot convert back to pd.Series, do pd.DataFrame instead then
# this happens only for Series, not Panel
if X_input_scitype == "Series":
- _, _, metadata = check_is_mtype(
+ valid, msg, metadata = check_is_mtype(
Xt,
["pd.DataFrame", "pd.Series", "np.ndarray"],
return_metadata=True,
)
+ if not valid:
+ raise TypeError(
+ f"_transform output of {type(self)} does not comply "
+ "with sktime mtype specifications. See datatypes.MTYPE_REGISTER"
+ " for mtype specifications. Returned error message:"
+ f" {msg}. Returned object: {Xt}"
+ )
if not metadata["is_univariate"] and X_input_mtype == "pd.Series":
X_output_mtype = "pd.DataFrame"
|
prioritise bundled LCB include path
Tested-by: Build Bot | @@ -303,7 +303,10 @@ class CBuildCommon(build_ext):
ext.extra_compile_args += lcb_api_flags
compiler = self.compiler # type: CCompiler
lcb_include = os.path.join(self.build_temp, "install", "include")
- compiler.add_include_dir(lcb_include)
+ try:
+ compiler.set_include_dirs([lcb_include]+compiler.include_dirs)
+ except:
+ compiler.add_include_dirs([lcb_include])
lib_dirs = [self.info.pkg_data_dir] + self.info.get_lcb_dirs()
try:
existing_lib_dirs = compiler.library_dirs
|
Qt5 migration script : Tidy up
Use argparse to add argument and help text
Remove commented code
Fix bug that caused the QtWidget substitutions to be omitted
Default to running in ./, since not all projects have a root ./python directory | +#! /usr/bin/env python
+
import os
import re
+import inspect
+import argparse
import functools
import Qt
+parser = argparse.ArgumentParser(
+ description = inspect.cleandoc(
+ """
+ Attempts to modify Python source files to assist
+ in the migration from Qt4 to Qt5, via Qt.py :
+
+ - Replaces QtGui with QtWidgets where
+ necessary
+ - Replaces `GafferUI._qtImport( "X" )` calls with
+ `from Qt import X`
+
+ This is a rough and (hopefully) ready script that does
+ very little validation. It is recommended that you run
+ it in a clean source repository and use `git diff` to
+ manually verify the changes that have been made.
+ """ ),
+ formatter_class = argparse.RawTextHelpFormatter
+)
+
+parser.add_argument(
+ "source-directory",
+ help = "A directory containing python files. This will be searched recursively.",
+ nargs = "?",
+ default = "./",
+)
+
def convert( fileName ) :
with open( fileName ) as f :
text = "".join( f.readlines() )
- #print text
# Substitute QtWidgets for QtGui where needed
@@ -54,13 +83,15 @@ def convert( fileName ) :
newText = re.sub(
r'(Qt\w*)\s*=\s*GafferUI._qtImport\(\s*["\'](Qt.*)["\']\s*\)',
r'from Qt import \2',
- text
+ newText
)
with open( fileName, "w" ) as f :
f.write( newText )
-for root, dirs, files in os.walk( "./python" ) :
+args = parser.parse_args()
+directory = vars( args )["source-directory"]
+for root, dirs, files in os.walk( directory ) :
for file in files :
if os.path.splitext( file )[1] == ".py" :
convert( os.path.join( root, file ) )
|
build: use `$CIRRUS_CPU` to determine cpu count on CI
related | @@ -129,7 +129,13 @@ if [ -n "$GCC_TRIPLET_BUILD" ] ; then
fi
export GCC_STRIP_BINARIES="${GCC_STRIP_BINARIES:-0}"
+
+if [ -n "$CIRRUS_CPU" ] ; then
+ # special-case for CI. see https://github.com/cirruslabs/cirrus-ci-docs/issues/1115
+ export CPU_COUNT="$CIRRUS_CPU"
+else
export CPU_COUNT="$(nproc 2> /dev/null || sysctl -n hw.ncpu)"
+fi
info "Found $CPU_COUNT CPUs, which we might use for building."
|
Moves break
Ref | <div id=pageContainer>{{ currentpage.raw_html }}</div>
- <br/>
{% get_obj_perms user for site as "site_perms" %}
{% if not currentpage.is_error_page %}
{% block editlink %}
{% if currentpage.pk %}
{% if "change_challenge" in site_perms %}
+ <br>
<a class="editPageLink"
href="{% url 'pages:update' currentpage.challenge.short_name currentpage.title %}">Edit
this
page</a>
-
- {% else %}
-
{% endif %}
{% endif %}
{% endblock %}
|
Update appshell_extensions.js
Fixing the comment | @@ -871,12 +871,12 @@ if (!appshell.app) {
};
/**
- * Get hash of the machine based on various
+ * Get hash of the machine based on various parameters like
+ * network interfaces, CPU ID, volume serial number e.t.c
*
+ * @param {none}
*
- * @param {number}
- *
- * @return none.
+ * @return None. This is an asynchronous call that sends all return information to the callback.
*/
native function GetMachineHash();
appshell.app.getMachineHash = function (callback) {
|
Setting header if auto_change is true.
This is overall not a great solution, but the better solution
requires some real rework. This is meant as a temporary fix until
we get time to do a real fix for this. | @@ -244,6 +244,15 @@ class TokenListAPI(BaseHandler):
if verified:
tokens = generate_tokens(principal, self.REFRESH_COOKIE_EXP)
+
+ # This is a semi-done solution. To really do this, we cannot give them
+ # a token, instead we should return an error, indicating they need to
+ # update their password, and then login again. In the short term, this
+ # will be enough. This is really meant only to work for our UI so
+ # backwards compatibility is not a concern.
+ if principal.metadata.get('auto_change') and not principal.metadata.get('changed'):
+ self.set_header('change_password_required', 'true')
+
if parsed_body.get("remember_me", False):
self.set_secure_cookie(
self.REFRESH_COOKIE_NAME,
|
remove unused environment variables
both of these were changed to default to false in flask-sqlalchemy 3.0
anyway | @@ -138,8 +138,6 @@ class Config(object):
AWS_REGION = "eu-west-1"
INVITATION_EXPIRATION_DAYS = 2
NOTIFY_APP_NAME = "api"
- SQLALCHEMY_RECORD_QUERIES = False
- SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_POOL_SIZE = int(os.environ.get("SQLALCHEMY_POOL_SIZE", 5))
SQLALCHEMY_POOL_TIMEOUT = 30
SQLALCHEMY_POOL_RECYCLE = 300
|
Fix outdated package notice
When I install this package I get a notice that:
`us 1.0.0 has requirement jellyfish==0.5.6, but you'll have jellyfish 0.5.1 which is incompatible.`
Let's fix it :-) | @@ -58,7 +58,7 @@ setup(name='docassemble.base',
url='https://docassemble.org',
download_url='https://download.docassemble.org/docassemble-base.tar.gz',
namespace_packages = ['docassemble'],
- install_requires = ['docassemble==0.2.78', '3to2', 'astunparse', 'babel', 'bcrypt', 'blinker', 'cffi', 'fdfgen', 'guess-language-spirit', 'httplib2', 'itsdangerous', 'jellyfish==0.5.1', 'jinja2', 'lxml', 'mako', 'markdown', 'markupsafe', 'mdx-smartypants', 'namedentities==1.5.2', 'passlib', 'pdfminer', 'pillow', 'pip', 'pycparser', 'pycrypto', 'geopy', 'pygments', 'pyjwt', 'pypdf', 'pypdftk', 'PyPDF2', 'python-dateutil', 'pytz', 'pyyaml', 'ruamel.yaml', 'qrcode', 'six', 'titlecase', 'wheel', 'pattern', 'tzlocal', 'us', 'phonenumbers', 'pycountry', 'ua-parser', 'user-agents', 'textstat', 'twine', 'docxtpl', 'qrtools'],
+ install_requires = ['docassemble==0.2.78', '3to2', 'astunparse', 'babel', 'bcrypt', 'blinker', 'cffi', 'fdfgen', 'guess-language-spirit', 'httplib2', 'itsdangerous', 'jellyfish==0.5.6', 'jinja2', 'lxml', 'mako', 'markdown', 'markupsafe', 'mdx-smartypants', 'namedentities==1.5.2', 'passlib', 'pdfminer', 'pillow', 'pip', 'pycparser', 'pycrypto', 'geopy', 'pygments', 'pyjwt', 'pypdf', 'pypdftk', 'PyPDF2', 'python-dateutil', 'pytz', 'pyyaml', 'ruamel.yaml', 'qrcode', 'six', 'titlecase', 'wheel', 'pattern', 'tzlocal', 'us', 'phonenumbers', 'pycountry', 'ua-parser', 'user-agents', 'textstat', 'twine', 'docxtpl', 'qrtools'],
packages=find_packages(),
zip_safe = False,
package_data=find_package_data(where='docassemble/base/', package='docassemble.base'),
|
clean up 'setup test' in 'test_registries'
Some tests in 'test_registries' were not cleaned up correctly.
In particular, the current working directory was not restored,
causing the HTML report of pytest-cov to be saved into a temporary
folder. | @@ -45,10 +45,10 @@ class TestProtocolRegistry:
cls.patch = unittest.mock.patch.object(aea.registries.base.logger, 'exception')
cls.mocked_logger = cls.patch.__enter__()
+ cls.oldcwd = os.getcwd()
cls.agent_name = "agent_dir_test"
cls.t = tempfile.mkdtemp()
cls.agent_folder = os.path.join(cls.t, cls.agent_name)
- cls.oldcwd = os.getcwd()
shutil.copytree(os.path.join(CUR_PATH, "data", "dummy_aea"), cls.agent_folder)
os.chdir(cls.agent_folder)
@@ -116,12 +116,12 @@ class TestResources:
cls._patch_logger()
# create temp agent folder
+ os.chdir(cls.agent_folder)
cls.agent_name = "agent_dir_test"
cls.t = tempfile.mkdtemp()
cls.agent_folder = os.path.join(cls.t, cls.agent_name)
cls.oldcwd = os.getcwd()
shutil.copytree(os.path.join(CUR_PATH, "data", "dummy_aea"), cls.agent_folder)
- os.chdir(cls.agent_folder)
# make fake skill
cls.fake_skill_id = "fake"
@@ -197,3 +197,5 @@ class TestResources:
def teardown_class(cls):
"""Tear the tests down."""
cls._unpatch_logger()
+ shutil.rmtree(cls.t, ignore_errors=True)
+ os.chdir(cls.oldcwd)
|
Update homophones.py
Correct comment on capture. | @@ -139,8 +139,8 @@ def canonical(m) -> str:
"Returns a single string"
@mod.capture
-def selection(m) -> int:
- "Returns a single integer (1-based)"
+def selection(m) -> str:
+ "Returns the selected homophone"
@mod.action_class
class Actions:
|
Make permutations not join sublists when given list of strings
Closes | @@ -4098,7 +4098,7 @@ def permutations(lhs, ctx):
lhs = iterable(lhs, ctx=ctx)
return LazyList(
map(
- lambda x: "".join(x) if all(isinstance(y, str) for y in x) else x,
+ lambda x: "".join(x) if all(isinstance(y, str) for y in x) and isinstance(lhs, str) else x,
itertools.permutations(
iterable(lhs, number_type=range, ctx=ctx), len(lhs)
),
|
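A simplified model of the behaviour after the fix (not the project's actual element implementation): permutations are joined back into strings only when the input itself is a string, so a list of strings keeps its sublists.

```python
import itertools

def permutations_like(lhs):
    # Join only when the input is a string; otherwise keep list elements intact.
    return [
        "".join(p) if isinstance(lhs, str) else list(p)
        for p in itertools.permutations(lhs)
    ]

print(permutations_like("ab"))          # ['ab', 'ba']
print(permutations_like(["ab", "cd"]))  # [['ab', 'cd'], ['cd', 'ab']]
```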
Avoid converting back and forth between pixel and data coordinates
Remove reuse of x|yClosest for pixel values, so they can be used straightaway. | @@ -172,17 +172,15 @@ class PositionInfo(qt.QWidget):
closestInPixels = self.plot.dataToPixel(
xClosest, yClosest, axis=activeCurve.getYAxis())
if closestInPixels is not None:
- xClosest, yClosest = closestInPixels
xPixel, yPixel = event['xpixel'], event['ypixel']
- if (abs(xClosest - xPixel) < 5 and
- abs(yClosest - yPixel) < 5):
+ if (abs(closestInPixels[0] - xPixel) < 5 and
+ abs(closestInPixels[1] - yPixel) < 5):
# Update label style sheet
styleSheet = "color: rgb(0, 0, 0);"
# if close enough, wrap to data point coords
- x, y = self.plot.pixelToData(
- xClosest, yClosest, axis=activeCurve.getYAxis())
+ x, y = xClosest, yClosest
for label, name, func in self._fields:
label.setStyleSheet(styleSheet)
|
sql: add 8003 error code description
Via: | @@ -16,6 +16,7 @@ TiDB is compatible with the error codes in MySQL, and in most cases returns the
| ---- | ------- | --------- |
| 8001 | The memory used by the request exceeds the threshold limit for the TiDB memory usage. | Increase the value of the system variable with the `tidb_mem_quota` prefix. |
| 8002 | To guarantee consistency, a transaction with the `SELECT FOR UPDATE` statement cannot be retried when it encounters a commit conflict. TiDB rolls back the transaction and returns this error. | Retry the failed transaction. |
+| 8003 | If the data in a row is not consistent with the index when executing the `ADMIN CHECK TABLE` command, TiDB returns this error. |
| 9001 | The PD request timed out. | Check the state/monitor/log of the PD server and the network between the TiDB server and the PD server. |
| 9002 | The TiKV request timed out. | Check the state/monitor/log of the TiKV server and the network between the TiDB server and the TiKV server. |
| 9003 | The TiKV server is busy and this usually occurs when the workload is too high. | Check the state/monitor/log of the TiKV server. |
|
Oklahoma - new entry
Reviewed info (OK) and fixed a typo | @@ -12,7 +12,7 @@ id: ok-oklahomacity-1
### Law enforcement uses tear-gas several times | May 30th
-Four different video, in what is believed to be chronological order, show law enforcement using tear-gas on protestors that are standing.
+Four different videos, in what is believed to be chronological order, show law enforcement using tear-gas on protestors that are standing.
id: ok-oklahomacity-2
|
Update README.md
include in a table to center | @@ -3,7 +3,9 @@ SMA Conversion/Tagging Automation Script.
**Automatically converts media files downloaded by various programs to a standardized format, and tags them with the appropriate metadata from TMDB if the container supports tagging.**
-
+|  |
+| :--: |
+
Works on Windows, OSX, and Linux. Despite the name works with much more than just Sickbeard and handles more than MP4
|
Update mock_oauth_provider Blueprint for flask 2.0.1 changes:
> Show an error when a blueprint name contains a dot. The . has special
> meaning, it is used to separate (nested) blueprint names and the
> endpoint name.
Ref: | @@ -12,7 +12,7 @@ from tests.utils.mock_oauth_provider.models import OAuth2Client, User, db
from tests.utils.mock_oauth_provider.oauth2 import authorization, require_oauth
logger = logging.getLogger(__name__)
-bp = Blueprint(__name__, "home")
+bp = Blueprint("mock-oauth-provider", "home")
def current_user():
|
[IMPR] Simplify code and use subTest
Setup data in setUp and use subTest to reduce similar codes | # -*- coding: utf-8 -*-
"""Test Interwiki Graph functionality."""
#
-# (C) Pywikibot team, 2015-2018
+# (C) Pywikibot team, 2015-2020
#
# Distributed under the terms of the MIT license.
#
@@ -47,45 +47,36 @@ class TestWiktionaryGraph(SiteAttributeTestCase):
'pl': DryPage(cls.plwikt, 'origin'),
}
- def test_simple_graph(self):
- """Test that GraphDrawer.createGraph does not raise exception."""
+ def setUp(self):
+ """Setup interwiki_graph data."""
+ super(TestWiktionaryGraph, self).setUp()
data = interwiki_graph.Subject(self.pages['en'])
-
data.found_in[self.pages['en']] = [self.pages['fr'], self.pages['pl']]
data.found_in[self.pages['fr']] = [self.pages['en'], self.pages['pl']]
data.found_in[self.pages['pl']] = [self.pages['en'], self.pages['fr']]
+ self.data = data
- drawer = interwiki_graph.GraphDrawer(data)
-
+ def test_simple_graph(self):
+ """Test that GraphDrawer.createGraph does not raise exception."""
+ drawer = interwiki_graph.GraphDrawer(self.data)
drawer.createGraph()
def test_octagon(self):
"""Test octagon nodes."""
- data = interwiki_graph.Subject(self.pages['en'])
-
- data.found_in[self.pages['en']] = [self.pages['fr'], self.pages['pl']]
- data.found_in[self.pages['en2']] = [self.pages['fr']]
- data.found_in[self.pages['fr']] = [self.pages['en'], self.pages['pl']]
- data.found_in[self.pages['pl']] = [self.pages['en'], self.pages['fr']]
-
- drawer = interwiki_graph.GraphDrawer(data)
+ self.data.found_in[self.pages['en2']] = [self.pages['fr']]
+ drawer = interwiki_graph.GraphDrawer(self.data)
self.assertEqual({self.pages['en'].site}, drawer._octagon_site_set())
drawer.createGraph()
-
nodes = drawer.graph.obj_dict['nodes']
- self.assertEqual(
- nodes['"pl:origin"'][0]['attributes']['shape'],
- 'rectangle')
-
- self.assertEqual(
- nodes['"fr:origin"'][0]['attributes']['shape'],
- 'rectangle')
+ for node, shape in [('"pl:origin"', 'rectangle'),
+ ('"fr:origin"', 'rectangle'),
+ ('"en:origin"', 'octagon')]:
+ with self.subTest(node=node):
self.assertEqual(
- nodes['"en:origin"'][0]['attributes']['shape'],
- 'octagon')
+ nodes[node][0]['attributes']['shape'], shape)
if __name__ == '__main__': # pragma: no cover
|
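For reference, a self-contained sketch of the `subTest` pattern the diff adopts, with a plain dict standing in for the pydot node attributes (the dict is an assumption for illustration):
```python
import unittest

class OctagonShapeTest(unittest.TestCase):
    def test_node_shapes(self):
        # Stand-in for drawer.graph.obj_dict['nodes'] in the real test.
        shapes = {'"pl:origin"': "rectangle",
                  '"fr:origin"': "rectangle",
                  '"en:origin"': "octagon"}
        for node, shape in [('"pl:origin"', "rectangle"),
                            ('"fr:origin"', "rectangle"),
                            ('"en:origin"', "octagon")]:
            with self.subTest(node=node):   # each failing case is reported separately
                self.assertEqual(shapes[node], shape)

if __name__ == "__main__":
    unittest.main()
```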
rolling_update.yml: force ceph-volume scan on osds
The rolling_update.yml playbook fails when scanning ceph-disk osds while
deploying nautilus. The --force flag is required to scan existing osds
and rewrite their json metadata. | when: containerized_deployment | bool
- name: scan ceph-disk osds with ceph-volume if deploying nautilus
- command: "ceph-volume --cluster={{ cluster }} simple scan"
+ command: "ceph-volume --cluster={{ cluster }} simple scan --force"
environment:
CEPH_VOLUME_DEBUG: 1
when:
|
Consolidate links in JAX documentation
Move notes down | @@ -15,7 +15,7 @@ parallelize, Just-In-Time compile to GPU/TPU, and more.
notebooks/Common_Gotchas_in_JAX
.. toctree::
- :maxdepth: 2
+ :maxdepth: 1
jax-101/index
@@ -54,20 +54,9 @@ parallelize, Just-In-Time compile to GPU/TPU, and more.
notebooks/xmap_tutorial
multi_process
-.. toctree::
- :maxdepth: 1
- :caption: Notes
-
- api_compatibility
- deprecation
- concurrency
- gpu_memory_allocation
- profiling
- device_memory_profiling
- rank_promotion_warning
.. toctree::
- :maxdepth: 2
+ :maxdepth: 1
:caption: Developer documentation
contributing
@@ -77,11 +66,23 @@ parallelize, Just-In-Time compile to GPU/TPU, and more.
jep/index
.. toctree::
- :maxdepth: 3
+ :maxdepth: 1
:caption: API documentation
jax
+.. toctree::
+ :maxdepth: 1
+ :caption: Notes
+
+ api_compatibility
+ deprecation
+ concurrency
+ gpu_memory_allocation
+ profiling
+ device_memory_profiling
+ rank_promotion_warning
+
Indices and tables
==================
|
OOB access in GLM SoftMax
closes
Authors:
- Divye Gala (@divyegala)
Approvers:
- William Hicks (@wphicks)
- John Zedlewski (@JohnZed)
URL: | /*
- * Copyright (c) 2018-2020, NVIDIA CORPORATION.
+ * Copyright (c) 2018-2021, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -59,7 +59,11 @@ __global__ void logSoftmaxKernel(T *out, T *dZ, const T *in, const T *labels,
bool delta = false;
// TODO is there a better way to read this?
if (getDerivative && threadIdx.x == 0) {
+ if (y < N) {
shm.sh_val[threadIdx.y] = labels[y];
+ } else {
+ shm.sh_val[threadIdx.y] = std::numeric_limits<T>::lowest();
+ }
}
__syncthreads();
T label = shm.sh_val[threadIdx.y];
|
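The added branch keeps a padded thread row `y >= N` from reading `labels[y]` out of bounds and substitutes the lowest representable value instead. A host-side Python analogy of that guard (the sample values are made up; the real fix is the CUDA change above):
```python
import numpy as np

labels = np.array([2.0, 0.0, 1.0], dtype=np.float32)   # N = 3 valid labels
N = len(labels)

def load_label(y):
    # Mirror of the kernel guard: out-of-range rows get a sentinel value
    # instead of an out-of-bounds read.
    return labels[y] if y < N else np.finfo(np.float32).min

print([load_label(y) for y in range(4)])   # last entry is the sentinel
```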
Fix outdated link to guide for contributors
'dev' branch was deleted.
Link to ReadTheDocs version,
as done elsewhere in the file. | # PlantCV: Plant phenotyping using computer vision
-Please use, cite, and [contribute to](https://github.com/danforthcenter/plantcv/blob/dev/CONTRIBUTING.md) PlantCV!
+Please use, cite, and [contribute to](http://plantcv.readthedocs.io/en/latest/CONTRIBUTING/) PlantCV!
If you have questions, please submit them via the
[GitHub issues page](https://github.com/danforthcenter/plantcv/issues).
Follow us on twitter [@plantcv](https://twitter.com/plantcv).
|
Bugfix correct mimetype result
It should always return a value (to match Werkzeug) rather than
returning None if there is no Content-Type in the headers. | @@ -182,12 +182,9 @@ class _BaseRequestResponse:
self.headers = CIMultiDict(headers)
@property
- def mimetype(self) -> Optional[str]:
+ def mimetype(self) -> str:
"""Returns the mimetype parsed from the Content-Type header."""
- if 'Content-Type' in self.headers:
- return parse_header(self.headers.get('Content-Type'))[0]
- else:
- return None
+ return parse_header(self.headers.get('Content-Type', ''))[0]
@mimetype.setter
def mimetype(self, value: str) -> None:
@@ -204,7 +201,7 @@ class _BaseRequestResponse:
@property
def mimetype_params(self) -> Dict[str, str]:
"""Returns the params parsed from the Content-Type header."""
- return parse_header(self.headers.get('Content-Type'))[1]
+ return parse_header(self.headers.get('Content-Type', ''))[1]
async def get_data(self, raw: bool=True) -> AnyStr:
raise NotImplemented()
|
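The before/after behaviour in a standalone sketch; the tiny `parse_header` below is a stand-in for the header parser the class imports, and the dict stands in for `self.headers`:
```python
def parse_header(value):
    # Minimal stand-in: the mimetype is the part before any ';' parameters.
    return value.split(";", 1)[0].strip(), {}

headers = {}  # request/response with no Content-Type header

# Old behaviour: returned None when the header was missing.
mimetype_old = (parse_header(headers["Content-Type"])[0]
                if "Content-Type" in headers else None)

# New behaviour (matches Werkzeug): always a string, '' when the header is absent.
mimetype_new = parse_header(headers.get("Content-Type", ""))[0]

print(mimetype_old, repr(mimetype_new))   # None ''
```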
Disable printing of the histogram when dumping
Summary:
Pull Request resolved:
Disable printing of the histogram when dumping to make the log cleaner.
Test Plan: CI | @@ -142,7 +142,7 @@ class HistogramNetObserver final : public NetObserver {
string delimiter = " ");
~HistogramNetObserver();
void DumpHistogramFile() {
- DumpAndReset_(out_file_name_, true);
+ DumpAndReset_(out_file_name_, false);
}
private:
|
Fixed bug with 'Host name' input on git credential
Fixed bug with 'Host name' input on git credential | @@ -37,7 +37,7 @@ export class ManageUngitComponent implements OnInit {
currentEditableItem: AccountCredentials = null;
mail_validity_pattern = /^([A-Za-z0-9_\-\.])+\@([A-Za-z0-9_\-\.])+\.([A-Za-z]{2,63})$/;
- hostname_validity_pattern = /^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])+\.[a-z\.]+\S$/;
+ hostname_validity_pattern = /^([a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+.*)$/;
login_acceptance_pattern = '[[email protected]]+';
acceptance_pattern = '[-_ a-zA-Z0-9]+';
|
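The old and new host-name patterns compared in plain Python (`re` behaves the same as the TypeScript regexes here; the sample host names are assumptions):
```python
import re

old = re.compile(r"^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*"
                 r"([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])+\.[a-z\.]+\S$")
new = re.compile(r"^([a-zA-Z0-9]+(\.[a-zA-Z0-9]+)+.*)$")

for host in ["gitlab.example.com", "gitlab.example.com:8080"]:
    print(host, bool(old.match(host)), bool(new.match(host)))
# The relaxed pattern still accepts a plain FQDN and stops rejecting a host
# name followed by a port or similar suffix.
```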
Updated API key documentation.
The documentation for requesting an API key was updated to reference
HTML5 API and the default of a 100k requests per day limit. | @@ -10,6 +10,24 @@ well as reporting a location based on IP addresses, cell or WiFi networks.
New client developments should use the :ref:`api_region_latest`,
:ref:`api_geolocate_latest` or :ref:`api_geosubmit_latest` APIs.
+Requesting an API Key
+=====================
+
+The key has a daily usage limit of about 100,000 requests. As we aren't
+offering a commercial service, please note that we do not make any
+guarantees about the accuracy of the results or the availability of
+the service.
+
+Please make sure that you actually need the raw API access to
+perform geolocation lookups. If you just need to get location data
+from your web application, you can directly use the
+`HTML5 API
+<https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation>`_.
+
+To apply for an API key, please email [email protected] with
+the number of requests you're planning on making per day and how you
+are planning on using the location service. This will help us plan
+out how we allocate resources to keep the service responsive.
API Access Keys
===============
|
api/builtins/DCMotor: drop settings
These settings are control properties, so they cannot be altered on DC Motors. | @@ -72,9 +72,6 @@ DC Motor
.. automethod:: pybricks.builtins.DCMotor.dc
:noindex:
- .. automethod:: pybricks.builtins.DCMotor.set_dc_settings
- :noindex:
-
Other ev3dev sensors
^^^^^^^^^^^^^^^^^^^^^
|
Commander: fix elf being passed to commander.
Fix for submitted by | from __future__ import print_function
import logging
+import os
import traceback
from ..core.helpers import ConnectHelper
@@ -215,7 +216,7 @@ class PyOCDCommander(object):
# Set elf file if provided.
if self.args.elf:
- self.target.elf = os.path.expanduser(self.args.elf)
+ self.session.target.elf = os.path.expanduser(self.args.elf)
# Handle a device with flash security enabled.
if not self.args.no_init and self.session.target.is_locked():
|
Deletion of a trailing '#'.
Please note that I have added that '#' by mistake. | @@ -1177,7 +1177,7 @@ def domain_to_idna(line):
else:
splited_line[1] = splited_line[1] \
.encode("IDNA") \
- .decode("UTF-8") + '#'
+ .decode("UTF-8")
else:
splited_line[1] = splited_line[1] \
.encode("IDNA") \
|
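What the corrected branch produces, using Python's built-in `idna` codec (the sample domain is an illustration):
```python
domain = "bücher.example"

converted = domain.encode("IDNA").decode("UTF-8")
print(converted)   # xn--bcher-kva.example
# The bug appended a literal '#' in this branch; the fix drops it, so the
# result is no longer "xn--bcher-kva.example#".
```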
Added missing pytest decorator
Message-Id:
Message-Id: | @@ -27,18 +27,6 @@ class ClassesTest(unittest.TestCase):
self.assertEqual(3, alien.health, msg=error)
- # Test class variables are identical across instances
- def test_alien_class_variable(self):
- alien_one = Alien(0, 2)
- alien_two = Alien(-6, -1)
- Alien.total_aliens_created = -2
-
- error_one = "Expected the total_aliens_created variable to be identical."
- error_two = "Expected the health variable to be identical."
-
- self.assertEqual(alien_two.total_aliens_created, alien_one.total_aliens_created, msg=error_one)
- self.assertEqual(alien_two.health, alien_one.health, msg=error_two)
-
# Test instance variables are unique to specific instances
@pytest.mark.task(taskno=1)
def test_alien_instance_variables(self):
@@ -99,6 +87,19 @@ class ClassesTest(unittest.TestCase):
self.assertIsNone(alien.collision_detection(Alien(7, 2)), msg=error)
+ # Test class variables are identical across instances
+ @pytest.mark.task(taskno=6)
+ def test_alien_class_variable(self):
+ alien_one = Alien(0, 2)
+ alien_two = Alien(-6, -1)
+ Alien.total_aliens_created = -2
+
+ error_one = "Expected the total_aliens_created variable to be identical."
+ error_two = "Expected the health variable to be identical."
+
+ self.assertEqual(alien_two.total_aliens_created, alien_one.total_aliens_created, msg=error_one)
+ self.assertEqual(alien_two.health, alien_one.health, msg=error_two)
+
# Test total_aliens_created increments upon object instantiation
@pytest.mark.task(taskno=6)
def test_alien_total_aliens_created(self):
|
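The behaviour the relocated test exercises, in a self-contained form (this `Alien` is a simplified stand-in for the exercise class):
```python
class Alien:
    total_aliens_created = 0            # class variable, shared by every instance

    def __init__(self, x_coordinate, y_coordinate):
        self.x_coordinate = x_coordinate    # instance variables, unique per object
        self.y_coordinate = y_coordinate
        self.health = 3
        Alien.total_aliens_created += 1

alien_one = Alien(0, 2)
alien_two = Alien(-6, -1)
Alien.total_aliens_created = -2         # rebinding on the class is visible to both
assert alien_one.total_aliens_created == alien_two.total_aliens_created == -2
assert alien_one.health == alien_two.health == 3
```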
Transformer_ABC_Meta was confused
class name/transformer name confusion
metaclass initialization/class initialization confusion | @@ -260,14 +260,16 @@ class Transformer_ABC_Meta(abc.ABCMeta):
metaclass for the backend objects
takes care of registering all the backend subclasses
"""
- def __init__(self, name, bases, dict_):
- if not hasattr(self, 'transformers'):
- self.transformers = {}
- else:
- name = getattr(self, 'transformer_name', None)
- if name and name not in ['Transformer']:
- self.transformers[name] = self
- super(Transformer_ABC_Meta, self).__init__(name, bases, dict_)
+ def __init__(cls, name, bases, dict_):
+ if not hasattr(cls, 'transformers'):
+ # First possible transformer class sets things up
+ cls.transformers = {}
+
+ # If this transformer has a transformer_name, register it
+ transformer_name = getattr(cls, 'transformer_name', None)
+ if transformer_name is not None:
+ cls.transformers[transformer_name] = cls
+ super(Transformer_ABC_Meta, cls).__init__(name, bases, dict_)
class Transformer(with_metaclass(Transformer_ABC_Meta, object)):
|
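The registry idea behind the corrected metaclass, reduced to a standalone sketch (the subclass names here are invented; the real code registers backend transformer classes keyed by `transformer_name`):
```python
import abc

class RegistryMeta(abc.ABCMeta):
    def __init__(cls, name, bases, dict_):
        if not hasattr(cls, "transformers"):
            cls.transformers = {}                 # created once, on the base class
        transformer_name = getattr(cls, "transformer_name", None)
        if transformer_name is not None:
            cls.transformers[transformer_name] = cls   # register named subclasses
        super().__init__(name, bases, dict_)

class Transformer(metaclass=RegistryMeta):
    pass                                          # base class has no transformer_name

class CpuTransformer(Transformer):
    transformer_name = "cpu"

print(Transformer.transformers)                   # {'cpu': <class '...CpuTransformer'>}
```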
Updates and fixes SpamTable: figures have box labels & evals are not repeated.
Eigenvalue columns could be repeated b/c of same confidence region view.
This is fixed now. Also, the matrix plots look nicer. | @@ -113,6 +113,7 @@ class SpamTable(WorkspaceTable):
rhoMx_real = rhoMx.hermitian_to_real()
v = rhoMx_real.get_value()
fig = _wp.GateMatrixPlot(self.ws, v, colorbar=False,
+ boxLabels=True, prec='compacthp',
mxBasis=None) #no basis labels
rowData.append( fig )
rowFormatters.append('Figure')
@@ -121,8 +122,10 @@ class SpamTable(WorkspaceTable):
for gateset in gatesets:
+ cri = confidenceRegionInfo if confidenceRegionInfo and \
+ (confidenceRegionInfo.gateset.frobeniusdist(gateset) < 1e-6) else None
evals = _ev(_reportables.Vec_as_stdmx_eigenvalues(gateset, lbl, "prep"),
- confidenceRegionInfo)
+ cri)
rowData.append( evals )
rowFormatters.append('Brackets')
@@ -154,6 +157,7 @@ class SpamTable(WorkspaceTable):
EMx_real = EMx.hermitian_to_real()
v = EMx_real.get_value()
fig = _wp.GateMatrixPlot(self.ws, v, colorbar=False,
+ boxLabels=True, prec='compacthp',
mxBasis=None) #no basis labels
rowData.append( fig )
rowFormatters.append('Figure')
@@ -161,8 +165,10 @@ class SpamTable(WorkspaceTable):
raise ValueError("Invalid 'display_as' argument: %s" % display_as)
for gateset in gatesets:
+ cri = confidenceRegionInfo if confidenceRegionInfo and \
+ (confidenceRegionInfo.gateset.frobeniusdist(gateset) < 1e-6) else None
evals = _ev(_reportables.Vec_as_stdmx_eigenvalues(gateset, lbl, "effect"),
- confidenceRegionInfo)
+ cri)
rowData.append( evals )
rowFormatters.append('Brackets')
|
utilities: handle nonuniform objects better when converting to numpy arrays
previously, it would flatten some arrays - e.g. [0, [0]] -> [0, 0] | @@ -1109,7 +1109,7 @@ def convert_all_elements_to_np_array(arr, cast_from=None, cast_to=None):
return np.asarray(arr, dtype=cast_to)
if not isinstance(arr, collections.Iterable) or isinstance(arr, str):
- return np.asarray(arr)
+ return np.array(arr)
if isinstance(arr, np.matrix):
if arr.dtype == object:
@@ -1118,9 +1118,12 @@ def convert_all_elements_to_np_array(arr, cast_from=None, cast_to=None):
return arr
subarr = [convert_all_elements_to_np_array(x, cast_from, cast_to) for x in arr]
- try:
- return np.array(subarr)
- except ValueError:
+
+ if all([subarr[i].shape == subarr[0].shape for i in range(1, len(subarr))]):
+ # the elements are all uniform in shape, so we can use numpy's standard behavior
+ return np.asarray(subarr)
+ else:
+ # the elements are nonuniform, so create an array that just wraps them individually
# numpy cannot easily create arrays with subarrays of certain dimensions, workaround here
# https://stackoverflow.com/q/26885508/3131666
len_subarr = len(subarr)
|
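The uniform/non-uniform split the patch introduces, shown directly with numpy (the sample arrays are assumptions):
```python
import numpy as np

# Uniform element shapes: numpy's default stacking is the desired behaviour.
uniform = [np.array([1, 2]), np.array([3, 4])]
print(np.asarray(uniform).shape)          # (2, 2)

# Non-uniform shapes such as [0, [0]]: stacking would flatten or fail, so the
# workaround wraps each element individually in a 1-D object array.
ragged = [np.array(0), np.array([0])]
wrapped = np.empty(len(ragged), dtype=object)
for i, sub in enumerate(ragged):
    wrapped[i] = sub
print(wrapped)                            # [array(0) array([0])]
```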
[DOC] Fix broken fbprophet hyperlink in README.md
fixes broken fbprophet hyperlink in features section of README.md | @@ -70,7 +70,7 @@ For **deep learning**, see our companion package: [sktime-dl](https://github.com
[statsmodels]: https://www.statsmodels.org/stable/index.html
[tsfresh]: https://tsfresh.readthedocs.io/en/latest/
[pyod]: https://pyod.readthedocs.io/en/latest/
-[prophet]: https://facebook.github.io/prophet/
+[fbprophet]: https://facebook.github.io/prophet/
| Module | Status | Links |
|---|---|---|
|
bi, hierarchy query
HG--
branch : feature/microservices | @@ -222,6 +222,28 @@ class BIAPI(API):
:param params:
:return:
"""
+ def search_parent(node, p_id):
+ if p_id is None:
+ return node
+ if node and node["id"] == p_id:
+ return node
+ else:
+ if node and "nodes" in node.keys():
+ for child in node["nodes"]:
+ _searched = search_parent(child, p_id)
+ if _searched:
+ return _searched
+ else:
+ return None
+
+ def sort_nodes(node):
+ if "nodes" not in node.keys():
+ return
+ else:
+ node["nodes"] = sorted(node["nodes"], key=lambda x: x["text"])
+ for n in node["nodes"]:
+ sort_nodes(n)
+
if "datasource" not in params:
raise APIError("No datasource")
if "dic_name" not in params:
@@ -277,18 +299,24 @@ class BIAPI(API):
}
result = model.query(query, self.handler.current_user)
- parents = {}
- r = []
+ tree = {}
for row in result["result"]:
- names = map((lambda z: z), row[0].strip("[] ").split(","))
- ids = map((lambda z: int(z)), row[1].strip("[] ").split(","))
- x = 1
- while x < len(ids) - 1:
- parents[ids[x]] = {"name": names[x], "id": ids[x], "p_id": ids[x + 1]}
- x += 1
- if len(ids) > 1:
- r.append({"name": names[0], "id": ids[0], "p_id": ids[1]})
- parents['root'] = {"name": names[-1], "id": ids[-1], "p_id": "null"}
- for k in parents:
- r.append(parents[k])
- return r
+ names = map(lambda x: x[1:-1], row[0][1:-1].split(","))
+ ids = map(lambda x: int(x), row[1][1:-1].split(","))
+ ids.reverse()
+ names.reverse()
+ parent_id = None
+ for col in zip(ids, names):
+ searched = search_parent(tree, parent_id)
+ parent_id = col[0]
+ if searched:
+ if searched["id"] != col[0]:
+ if "nodes" not in searched.keys():
+ searched["nodes"] = []
+ if not col[0] in map(lambda x: x["id"], searched["nodes"]):
+ searched["nodes"].append({"id": col[0], "text": col[1]})
+ else: # start point
+ tree = {"id": col[0], "text": col[1], "nodes": []}
+
+ sort_nodes(tree)
+ return tree
|
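A standalone sketch of what the rewritten handler does with the query result: each row carries a leaf-to-root name/id path, and the rows are folded into one nested `{'id', 'text', 'nodes'}` tree (the sample rows are invented, not real datasource output):
```python
rows = [("[child-a,parent,root]", "[3,2,1]"),
        ("[child-b,parent,root]", "[4,2,1]")]

tree = {}
for names_str, ids_str in rows:
    names = names_str[1:-1].split(",")[::-1]            # root first
    ids = [int(x) for x in ids_str[1:-1].split(",")][::-1]
    node = None
    for node_id, text in zip(ids, names):
        if node is None:                                 # (re)start at the root
            if not tree:
                tree = {"id": node_id, "text": text, "nodes": []}
            node = tree
        else:
            children = node.setdefault("nodes", [])
            child = next((c for c in children if c["id"] == node_id), None)
            if child is None:
                child = {"id": node_id, "text": text}
                children.append(child)
            node = child

# Children can then be sorted by text at every level, as sort_nodes() does.
print(tree["text"], [c["text"] for c in tree["nodes"][0]["nodes"]])
```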
fix: Comment to docstring and minor changes
added description for args with example
'yield' might not be the best idea, barely any improvement, so changed back to 'return' | @@ -31,9 +31,12 @@ def make_mapped_doc(method, source_name, selected_children=None, args=None):
@frappe.whitelist()
def map_docs(method, source_names, target_doc, args: str = None):
- # args => "{ 'supplier': 100 }"
''' Returns the mapped document calling the given mapper method
- with each of the given source docs on the target doc'''
+ with each of the given source docs on the target doc
+
+ :param args: Args to pass to the mapper method
+ E.g. args: "{ 'supplier': 'XYZ' }" '''
+
method = frappe.get_attr(method)
if method not in frappe.whitelisted:
raise frappe.PermissionError
@@ -41,7 +44,7 @@ def map_docs(method, source_names, target_doc, args:str=None):
for src in json.loads(source_names):
_args = (src, target_doc, json.loads(args)) if args else (src, target_doc)
target_doc = method(*_args)
- yield target_doc
+ return target_doc
def get_mapped_doc(from_doctype, from_docname, table_maps, target_doc=None,
postprocess=None, ignore_permissions=False, ignore_child_tables=False):
|
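Why `yield` was reverted, in a minimal standalone form — a function containing `yield` returns a generator and never runs its body until iterated, so the caller would not get the final mapped document (the string formatting stands in for the real mapper call):
```python
def map_docs_with_yield(source_names):
    target_doc = None
    for src in source_names:
        target_doc = f"mapped:{src}"
        yield target_doc          # the mere presence of yield makes this a generator

def map_docs_with_return(source_names):
    target_doc = None
    for src in source_names:
        target_doc = f"mapped:{src}"
    return target_doc             # runs eagerly and returns the last mapped doc

print(map_docs_with_yield(["a", "b"]))    # <generator object ...> – body not run yet
print(map_docs_with_return(["a", "b"]))   # mapped:b
```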
fixing a bug in MigrationManager._run_add_columns
The Table metaclass wasn't being called on columns, so the ForeignKeyMeta wasn't being set up correctly on ForeignKey columns. | @@ -511,8 +511,15 @@ class MigrationManager:
AddColumnClass
] = self.add_columns.for_table_class_name(table_class_name)
+ # Define the table, with the columns, so the metaclass
+ # sets up the columns correctly.
_Table: t.Type[Table] = type(
- add_columns[0].table_class_name, (Table,), {}
+ add_columns[0].table_class_name,
+ (Table,),
+ {
+ add_column.column._meta.name: add_column.column
+ for add_column in add_columns
+ },
)
_Table._meta.tablename = add_columns[0].tablename
|
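The Python behaviour the fix relies on: attributes passed in the class dict handed to `type()` are visible to the metaclass of the base, while attributes assigned afterwards are not (the metaclass below is a toy stand-in for Piccolo's table metaclass):
```python
class Meta(type):
    def __new__(mcls, name, bases, dict_):
        cls = super().__new__(mcls, name, bases, dict_)
        # Record what the metaclass could see at class-creation time.
        cls.seen_at_creation = [k for k in dict_ if not k.startswith("_")]
        return cls

class Table(metaclass=Meta):
    pass

Late = type("Late", (Table,), {})
Late.name_column = "Varchar()"            # too late – the metaclass never saw it
print(Late.seen_at_creation)              # []

Early = type("Early", (Table,), {"name_column": "Varchar()"})
print(Early.seen_at_creation)             # ['name_column']
```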
langkit_support-adalog-solver.adb: remove obsolete pragma
Now that the solver no longer uses 'Update attributes, disabling the
corresponding warning is useless.
TN: | @@ -31,14 +31,6 @@ with GNATCOLL.Strings; use GNATCOLL.Strings;
with Langkit_Support.Images;
-pragma Warnings (Off, "attribute Update");
--- Attribute update is obsolescent in Ada 2022, but we don't yet want to use
--- delta aggregates because they won't be supported on old compilers, so just
--- silence the warning.
---
--- TODO??? Remove this and consistently use delta aggregates once the oldest
--- GNAT supported decently supports them.
-
package body Langkit_Support.Adalog.Solver is
----------------------
|
Add `**kwargs` to `optimize_acqf_mixed` and `optimize_acqf_discrete`
Summary: Add `**kwargs` to `optimize_acqf_mixed` and `optimize_acqf_discrete` to avoid errors when `Acquisition` passes default arguments | @@ -417,6 +417,7 @@ def optimize_acqf_mixed(
equality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
post_processing_func: Optional[Callable[[Tensor], Tensor]] = None,
batch_initial_conditions: Optional[Tensor] = None,
+ **kwargs: Any,
) -> Tuple[Tensor, Tensor]:
r"""Optimize over a list of fixed_features and returns the best solution.
@@ -518,6 +519,7 @@ def optimize_acqf_discrete(
choices: Tensor,
max_batch_size: int = 2048,
unique: bool = True,
+ **kwargs: Any,
) -> Tuple[Tensor, Tensor]:
r"""Optimize over a discrete set of points using batch evaluation.
|
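The pattern being applied, in isolation: accepting `**kwargs` lets a caller pass one common set of optional arguments to several optimizers without a `TypeError` from those that do not use them (the option names below are invented):
```python
def optimize_strict(acq_function, q, num_restarts):
    return q

def optimize_lenient(acq_function, q, num_restarts, **kwargs):
    return q                        # extra keyword arguments are simply ignored

shared_options = {"num_restarts": 20, "retry_on_failure": True}

# optimize_strict(None, q=2, **shared_options)    # TypeError: unexpected keyword
print(optimize_lenient(None, q=2, **shared_options))   # 2
```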
Correct background tasks documentation
run_in_executor is available on the loop, not the asyncio module. | @@ -29,7 +29,7 @@ a separate thread via the ``run_in_executor`` function.
asyncio.ensure_future(io_background_task())
# Runs on another thread
- asyncio.run_in_executor(None, cpu_background_task())
+ asyncio.get_running_loop().run_in_executor(None, cpu_background_task())
return 'Success'
These background tasks will not have access to the request or app
|
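The corrected call in runnable form — `run_in_executor` is a method of the running event loop and takes the callable itself, uncalled (shown here outside Quart; the task function is a placeholder):
```python
import asyncio

def cpu_background_task():
    return sum(range(100_000))          # placeholder CPU-bound work

async def main():
    loop = asyncio.get_running_loop()
    # The callable is passed without calling it; the executor runs it in a thread.
    result = await loop.run_in_executor(None, cpu_background_task)
    print(result)

asyncio.run(main())
```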
Bump required python version.
Also improved error description | @@ -3,8 +3,8 @@ The synapse distributed key-value hypergraph analysis framework.
'''
import sys
-if (sys.version_info.major, sys.version_info.minor) < (3, 6): # pragma: no cover
- raise Exception('synapse is not supported on Python versions < 3.6')
+if (sys.version_info.major, sys.version_info.minor) < (3, 7): # pragma: no cover
+ raise Exception('synapse is not supported on Python versions >= 3.7')
# checking maximum *signed* integer size to determine the interpreter arch
if sys.maxsize < 9223372036854775807: # pragma: no cover
@@ -12,7 +12,7 @@ if sys.maxsize < 9223372036854775807: # pragma: no cover
import lmdb
if tuple([int(x) for x in lmdb.__version__.split('.')]) < (0, 94): # pragma: no cover
- raise Exception('synapse is only supported on version 0.94 of the lmdb python module')
+ raise Exception('synapse is only supported on version >= 0.94 of the lmdb python module')
import multiprocessing
|
Support coercing pandas Series into numpy arrays
TODO: add support for datetime arrays in validation and in widget serialization | @@ -8,6 +8,7 @@ import io
from copy import deepcopy
import numpy as np
+import pandas as pd
import re
# Utility functions
@@ -19,6 +20,7 @@ def copy_to_contiguous_readonly_numpy_array(v, dtype=None, force_numeric=False):
# If dtype was not specified then it will be passed to the numpy array constructor as None and the data type
# will be inferred automatically
+ # TODO: support datetime dtype here and in widget serialization
numeric_kinds = ['u', 'i', 'f']
if not isinstance(v, np.ndarray):
@@ -55,7 +57,9 @@ def copy_to_contiguous_readonly_numpy_array(v, dtype=None, force_numeric=False):
def is_array(v):
- return isinstance(v, (list, tuple)) or (isinstance(v, np.ndarray) and v.ndim == 1)
+ return (isinstance(v, (list, tuple)) or
+ (isinstance(v, np.ndarray) and v.ndim == 1) or
+ isinstance(v, pd.Series))
def type_str(v):
|
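A small check of what the widened `is_array` now lets through: a 1-D pandas Series coerced to a contiguous, read-only numpy array, in the spirit of the surrounding helper (values and dtype are sample data):
```python
import numpy as np
import pandas as pd

series = pd.Series([1.0, 2.5, 3.0])

arr = np.ascontiguousarray(series, dtype="float64")
arr.flags.writeable = False
print(arr, arr.flags["C_CONTIGUOUS"], arr.flags["WRITEABLE"])
# [1.  2.5 3. ] True False
```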
Fix: Now Chromevox can read the close button for the x icon after uploading a picture
Update edit-profile-picture-modal.component.html | <div *ngIf="uploadedImage">
<span>{{ 'I18N_PREFERENCES_PROFILE_PICTURE_DRAG' | translate }}</span>
<div class="oppia-profile-picture-crop-area e2e-test-photo-crop">
- <button class="btn btn-secondary oppia-profile-picture-reset-button" (click)="reset()">
+ <button class="btn btn-secondary oppia-profile-picture-reset-button" aria-label="close" (click)="reset()">
<i class="fas fa-times oppia-vcenter"></i>
</button>
<img id="croppable-image" [src]="uploadedImage" #croppableImage>
|
settings_display: Refactor settings_display.
This commit refactors settings_display in order to make code more precise
and clear. | @@ -45,10 +45,9 @@ exports.set_up = function () {
e.stopPropagation();
overlays.close_modal('default_language_modal');
- var data = {};
var $link = $(e.target).closest("a[data-code]");
var setting_value = $link.attr('data-code');
- data.default_language = JSON.stringify(setting_value);
+ var data = {default_language: JSON.stringify(setting_value)};
var new_language = $link.attr('data-name');
$('#default_language_name').text(new_language);
@@ -65,23 +64,17 @@ exports.set_up = function () {
});
$("#high_contrast_mode").change(function () {
- var high_contrast_mode = this.checked;
- var data = {};
- data.high_contrast_mode = JSON.stringify(high_contrast_mode);
+ var data = {high_contrast_mode: JSON.stringify(this.checked)};
change_display_setting(data, '#display-settings-status');
});
$("#dense_mode").change(function () {
- var dense_mode = this.checked;
- var data = {};
- data.dense_mode = JSON.stringify(dense_mode);
+ var data = {dense_mode: JSON.stringify(this.checked)};
change_display_setting(data, '#display-settings-status');
});
$('#starred_message_counts').change(function () {
- var starred_message_counts = this.checked;
- var data = {};
- data.starred_message_counts = JSON.stringify(starred_message_counts);
+ var data = {starred_message_counts: JSON.stringify(this.checked)};
change_display_setting(data, '#display-settings-status');
});
@@ -94,30 +87,22 @@ exports.set_up = function () {
});
$("#left_side_userlist").change(function () {
- var left_side_userlist = this.checked;
- var data = {};
- data.left_side_userlist = JSON.stringify(left_side_userlist);
+ var data = {left_side_userlist: JSON.stringify(this.checked)};
change_display_setting(data, '#display-settings-status',
i18n.t("Saved. Please <a class='reload_link'>reload</a> for the change to take effect."), true);
});
$("#twenty_four_hour_time").change(function () {
- var data = {};
- var setting_value = $("#twenty_four_hour_time").is(":checked");
- data.twenty_four_hour_time = JSON.stringify(setting_value);
+ var data = {twenty_four_hour_time: JSON.stringify(this.checked)};
change_display_setting(data, '#time-settings-status');
});
$("#user_timezone").change(function () {
- var data = {};
- var timezone = this.value;
- data.timezone = JSON.stringify(timezone);
+ var data = {timezone: JSON.stringify(this.value)};
change_display_setting(data, '#time-settings-status');
});
$(".emojiset_choice").click(function () {
- var emojiset = $(this).val();
- var data = {};
- data.emojiset = JSON.stringify(emojiset);
+ var data = {emojiset: JSON.stringify($(this).val())};
var spinner = $("#emoji-settings-status").expectOne();
loading.make_indicator(spinner, {text: settings_ui.strings.saving });
@@ -133,9 +118,7 @@ exports.set_up = function () {
});
$("#translate_emoticons").change(function () {
- var data = {};
- var setting_value = $("#translate_emoticons").is(":checked");
- data.translate_emoticons = JSON.stringify(setting_value);
+ var data = {translate_emoticons: JSON.stringify(this.checked)};
change_display_setting(data, '#emoji-settings-status');
});
};
|
Update mkvtomp4.py
fix error if swl is none | @@ -795,6 +795,7 @@ class MkvtoMp4:
else:
self.log.debug("Ignoring %s external subtitle stream due to language %s." % (fname, lang))
self.log.info("Scanned for external subtitles and found %d results in your approved languages." % (len(valid_external_subs)))
+ if swl:
valid_external_subs.sort(key=lambda x: swl.index(x.subtitle[0].metadata['language']) if x.subtitle[0].metadata['language'] in swl else 999)
return valid_external_subs
|
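The guard in isolation: only sort external subtitles by preferred-language order when a language list (`swl`) is configured; with `swl=None` the old unconditional sort raised a TypeError (the subtitle records below are simplified stand-ins for the real stream objects):
```python
def sort_external_subs(valid_external_subs, swl):
    if swl:   # skip sorting entirely when no language list is configured
        valid_external_subs.sort(
            key=lambda sub: swl.index(sub["language"]) if sub["language"] in swl else 999)
    return valid_external_subs

subs = [{"language": "fre"}, {"language": "eng"}]
print(sort_external_subs(list(subs), ["eng", "fre"]))   # eng first
print(sort_external_subs(list(subs), None))             # order left unchanged
```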
better handling of data
It should take task from instance anatomyData, then from context and handle non dict items. | @@ -112,7 +112,13 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
if review_path:
fill_pairs.append(("review_filepath", review_path))
- task_data = fill_data.get("task")
+ task_data = (
+ copy.deepcopy(instance.data.get("anatomyData", [])).get("task")
+ or fill_data.get("task")
+ )
+ if not isinstance(task_data, dict):
+ # fallback for legacy - if task_data is only task name
+ task_data["name"] = task_data
if task_data:
if (
"{task}" in message_templ
@@ -121,13 +127,10 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
):
fill_pairs.append(("task", task_data["name"]))
- elif isinstance(task_data, dict):
+ else:
for key, value in task_data.items():
fill_key = "task[{}]".format(key)
fill_pairs.append((fill_key, value))
- else:
- # fallback for legacy - if task_data is only task name
- fill_pairs.append(("task", task_data))
self.log.debug("fill_pairs ::{}".format(fill_pairs))
multiple_case_variants = prepare_template_data(fill_pairs)
|
Update docs with clean
Summary:
Add tip about cleaning if installing ninja after a build.
Pull Request resolved: | @@ -182,6 +182,8 @@ information for the code in `torch/csrc`. More information at:
Python `setuptools` is pretty dumb, and always rebuilds every C file in a
project. If you install the ninja build system with `pip install ninja`,
then PyTorch will use it to track dependencies correctly.
+If pytorch was already built, you will need to run `python setup.py clean` once
+after installing ninja for builds to succeed.
#### Use CCache
|
aclnet.md -- add fullstop
Add a fullstop after the license statement. | @@ -65,4 +65,4 @@ Sound classifier according to DES-53 classes, name - `203`, shape - `1,53`, outp
## Legal Information
-The original model is distributed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
+The original model is distributed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
|