message | diff
---|---|
Fix KeyError in rank_genes_groups_violin with gene_symbols argument
When using the gene_symbols argument, a KeyError is raised because adata.var is accessed when adata.raw.var should be. The variable g is also reassigned so that df column names match the new gene symbols when calling pd.melt on df. | @@ -505,11 +505,14 @@ def rank_genes_groups_violin(
for g in gene_names:
if adata.raw is not None and use_raw:
X_col = adata.raw[:, g].X
+ if gene_symbols:
+ g = adata.raw.var[gene_symbols][g]
else:
X_col = adata[:, g].X
+ if gene_symbols:
+ g = adata.var[gene_symbols][g]
if issparse(X_col): X_col = X_col.toarray().flatten()
- new_gene_names.append(
- g if gene_symbols is None else adata.var[gene_symbols][g])
+ new_gene_names.append(g)
df[g] = X_col
df['hue'] = adata.obs[groups_key].astype(str).values
if reference == 'rest':
|
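A minimal pandas sketch (not scanpy code; the gene IDs and symbols below are made up) of why the lookup in the entry above must use the same `.var` table that provided the expression column:

```python
import pandas as pd

# raw.var keeps every gene; the filtered .var may have dropped some of them
raw_var = pd.DataFrame({"gene_symbols": {"ENSG01": "TP53", "ENSG02": "BRCA1"}})
var = raw_var.loc[["ENSG01"]]                  # filtered copy, ENSG02 removed

g = "ENSG02"
print(raw_var["gene_symbols"][g])              # 'BRCA1' -> lookup against raw.var works
try:
    var["gene_symbols"][g]                     # lookup against the filtered var...
except KeyError:
    print("KeyError, as in the reported bug")  # ...raises, which is what the patch avoids
```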
Update commit with review changes
Change to use "the internal name" | @@ -91,7 +91,7 @@ If all names for a parameter contain dashes, the internal name is generated
automatically by taking the longest argument and converting all dashes to
underscores.
-All parameter names are converted to lowercase.
+The internal name is converted to lowercase.
Examples:
|
Add ECU fw for 2016 Golf GTI
Added ECU for 2016 Golf GTI | @@ -377,6 +377,7 @@ FW_VERSIONS = {
b'\xf1\x870EA906016Q \xf1\x895993',
b'\xf1\x870EA906016S \xf1\x897207',
b'\xf1\x875G0906259 \xf1\x890007',
+ b'\xf1\x875G0906259D \xf1\x890002',
b'\xf1\x875G0906259J \xf1\x890002',
b'\xf1\x875G0906259L \xf1\x890002',
b'\xf1\x875G0906259N \xf1\x890003',
@@ -412,6 +413,7 @@ FW_VERSIONS = {
b'\xf1\x870D9300012 \xf1\x895045',
b'\xf1\x870D9300014M \xf1\x895004',
b'\xf1\x870D9300014Q \xf1\x895006',
+ b'\xf1\x870D9300020J \xf1\x894902',
b'\xf1\x870D9300020Q \xf1\x895201',
b'\xf1\x870D9300020S \xf1\x895201',
b'\xf1\x870D9300040A \xf1\x893613',
@@ -459,6 +461,7 @@ FW_VERSIONS = {
b'\xf1\x873Q0909144F \xf1\x895043\xf1\x82\x0561A01612A0',
b'\xf1\x873Q0909144H \xf1\x895061\xf1\x82\x0566A0J612A1',
b'\xf1\x873Q0909144J \xf1\x895063\xf1\x82\x0566A00514A1',
+ b'\xf1\x873Q0909144J \xf1\x895063\xf1\x82\x0566A01613A1',
b'\xf1\x873Q0909144J \xf1\x895063\xf1\x82\x0566A0J712A1',
b'\xf1\x873Q0909144K \xf1\x895072\xf1\x82\x0571A0J714A1',
b'\xf1\x873Q0909144L \xf1\x895081\xf1\x82\x0571A0JA15A1',
|
Reorder and document $.Lexer.Token_Data_Type fields
TN: | @@ -22,6 +22,11 @@ package ${_self.ada_api_settings.lib_name}.Lexer is
type Token_Data_Type is record
Kind : Token_Kind;
+ -- Kind for this token
+
+ Offset : Unsigned_32;
+ -- Offset of Text in the source buffer associated with the token data
+ -- handler that owns this token. This offset is 1-based.
Text : Text_Access;
-- Text as found in original source file or null depending on the token
@@ -29,8 +34,8 @@ package ${_self.ada_api_settings.lib_name}.Lexer is
-- keywords but actual text for identifiers.
Sloc_Range : Source_Location_Range;
-
- Offset : Unsigned_32;
+ -- Source location range for this token. Note that the end bound is
+ -- exclusive.
end record;
package Token_Data_Handlers is new Langkit_Support.Token_Data_Handlers
|
Extend SafeGraph limitations
This shouldn't be a subsection of Weekly Patterns (it applies to both), and we should mention research on its biases. | @@ -121,9 +121,12 @@ meaning estimates for a specific day are only available 3-9 days later. It may
take up to an additional day for SafeGraph's data to be ingested into the
COVIDcast API.
-### Limitations
+## Limitations
-This data source is based on mobile devices that are members of SafeGraph panels, which is not necessarily the same thing as measuring the general public. This means that counts are subject to bias if some regions have a greater density of SafeGraph panel members as a percentage of the population. These counts do not represent absolute counts, and only count visits by members of the panel in that region.
+SafeGraph's Social Distancing Metrics and Weekly Patterns are based on mobile devices that are members of SafeGraph panels, which is not necessarily the same thing as measuring the general public. These counts do not represent absolute counts, and only count visits by members of the panel in that region. This can result in several biases:
+
+* **Geographic bias.** If some regions have a greater density of SafeGraph panel members as a percentage of the population than other regions, comparisons of metrics between regions may be biased. Regions with more SafeGraph panel members will appear to have more visits counted, even if the rate of visits in the general population is the same.
+* **Demographic bias.** SafeGraph panels may not be representative of the local population as a whole. For example, [some research suggests](https://arxiv.org/abs/2011.07194) that "older and non-white voters are less likely to be captured by mobility data", so this data will not accurately reflect behavior in those populations. Since population demographics vary across the United States, this can also contribute to geographic biases.
The number of POIs coded as bars is much smaller than the number of POIs coded as restaurants.
SafeGraph's Weekly Patterns data consistently lacks data on bar visits for Alaska, Delaware, Maine, North Dakota, New Hampshire, South Dakota, Vermont, West Virginia, and Wyoming.
|
Added support for delimited-string values to BaseMultipleOptionFilter
This re-applies | @@ -6,6 +6,7 @@ import pytz
from memoized import memoized
from corehq.apps.hqwebapp.crispy import CSS_FIELD_CLASS, CSS_LABEL_CLASS
+from corehq.apps.userreports.reports.filters.values import CHOICE_DELIMITER
class BaseReportFilter(object):
@@ -169,7 +170,10 @@ class BaseMultipleOptionFilter(BaseSingleOptionFilter):
@classmethod
def get_value(cls, request, domain):
- return request.GET.getlist(cls.slug)
+ selected_ids = []
+ for ids in request.GET.getlist(cls.slug):
+ selected_ids.extend(ids.split(CHOICE_DELIMITER))
+ return selected_ids
@property
@memoized
|
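A small sketch of the `get_value` behaviour added above; `CHOICE_DELIMITER` and the sample query values here are placeholders, not the real corehq constants:

```python
CHOICE_DELIMITER = "\x1f"   # stand-in for corehq's delimiter constant

def get_value(raw_values):
    # Each GET value may itself be a delimited string of several choice ids.
    selected_ids = []
    for ids in raw_values:
        selected_ids.extend(ids.split(CHOICE_DELIMITER))
    return selected_ids

# Plain repeated params and a single delimited param both yield flat id lists:
print(get_value(["a", "b"]))        # ['a', 'b']
print(get_value(["a\x1fb\x1fc"]))   # ['a', 'b', 'c']
```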
Make changes to java sdk pom file to remove warnings
Before this change the sourceEncoding wasn't set and there
wasn't a slf4j logging implementation in the tests. | <artifactId>sdk</artifactId>
<version>1.0-SNAPSHOT</version>
+ <properties>
+ <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+ </properties>
+
<build>
<extensions>
<extension>
<version>4.12</version>
<scope>test</scope>
</dependency>
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-simple</artifactId>
+ <version>1.7.25</version>
+ <scope>test</scope>
+ </dependency>
</dependencies>
</project>
|
Caps the WTForms version which conflicts with Flask-appbuilder.
See | @@ -114,6 +114,9 @@ def make_extra_packages_airflow():
return [
# TODO(b/188940096): update supported version.
'apache-airflow[mysql]>=1.10.14,<3',
+ # TODO(b/205459685): Delete pinned WTForms after flask-appbuilder fix the
+ # issue. (https://github.com/dpgaspar/Flask-AppBuilder/issues/1732)
+ 'WTForms<3',
# TODO(b/182848576): Delete pinned sqlalchemy after apache-airflow 2.0.2
# or later.(github.com/apache/airflow/issues/14811)
'sqlalchemy>=1.3,<1.4',
|
Updated the key to retrieve the correct rank of a process
Merging this PR to the master branch | @@ -26,17 +26,18 @@ def get_proc_info():
MPI_WORLDSIZE appropriately as described above to retrieve total no. of
processes, when needed.
+ MPI_LOCALRANKID is only valid on a single-node cluster. In a multi-node cluster
+ the correct key to use is PMI_RANK. In a multi-node cluster, MPI_LOCALRANKID
+ returns local rank of the process in the context of a particular node in which
+ it is unique.
+
Returns:
--------
integer :
Rank of the current process.
"""
env_variables = dict(os.environ)
- if "OMPI_COMM_WORLD_RANK" in env_variables:
- local_rank = int(os.environ.get("OMPI_COMM_WORLD_RANK") or 0)
- elif "MPI_LOCALRANKID" in env_variables:
- local_rank = int(os.environ.get("MPI_LOCALRANKID") or 0)
- return local_rank
+ return int(os.environ.get("PMI_RANK") or 0)
def gen_edge_files(schema_map, output):
|
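A usage sketch of the new lookup in the entry above; the environment values are made up to show the difference between the two keys:

```python
import os

# On a multi-node cluster, the 3rd global rank might see:
os.environ["PMI_RANK"] = "2"          # unique across all nodes
os.environ["MPI_LOCALRANKID"] = "0"   # only unique within its own node

rank = int(os.environ.get("PMI_RANK") or 0)
print(rank)   # 2 -> safe to use as the process's global rank
```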
fix(compat): pin requirements for backwards compatibility
gunicorn==19.9.0
ipython==5.8.0
openpyxl==2.6.4
to update these packages, will have to drop support for py2 | @@ -20,16 +20,16 @@ google-auth-httplib2==0.0.3
google-auth-oauthlib==0.4.1
google-auth==1.7.1
googlemaps==3.1.1
-gunicorn==20.0.0
+gunicorn==19.9.0
html2text==2016.9.19
-ipython==7.9.0
+ipython==5.8.0
Jinja2==2.10.3
markdown2==2.3.5
maxminddb-geolite2==2018.703
ndg-httpsclient==0.5.1
num2words==0.5.5
oauthlib==3.1.0
-openpyxl==3.0.1
+openpyxl==2.6.4
passlib==1.7.1
pdfkit==0.6.1
Pillow==6.2.1
|
fix alias in hosts_file role
aliases were not generated due to a \n at the end of the macro | {% macro write_host(host,host_dict,current_iceberg) %}
{%- if host_dict['network_interfaces'] is defined and host_dict['network_interfaces'] is iterable %}
- {% set host_iceberg = node_current_iceberg(host) %}
+ {% set host_iceberg = node_current_iceberg(host) | trim %}
{% set host_resolution_network = host_dict['network_interfaces'][0].network %}{# equivalent to j2_node_main_resolution_network, but no need for a macro here #}
{% set alias_list = [] %}
{% if host_dict['global_alias'] is defined and host_dict['global_alias'] is not none %}
|
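A hedged reproduction (using the jinja2 Python API rather than the actual role templates) of the `\n` problem the fix above addresses: a macro's return value keeps its trailing newline unless it is passed through `trim`, so equality comparisons against the plain value never match:

```python
from jinja2 import Environment

tmpl = Environment().from_string(
    "{% macro iceberg() %}iceberg1\n{% endmacro %}"
    "{{ iceberg() == 'iceberg1' }} {{ iceberg() | trim == 'iceberg1' }}"
)
print(tmpl.render())   # "False True" -> without trim the comparison fails
```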
billing: Disable fixture generation in setUp function.
This means you'll need access to our Stripe API key to add new fixtures.
Will be undone eventually, but having this in place will make it easier to
finish the mock.patch to mock_stripe migration. | @@ -121,10 +121,13 @@ def read_stripe_fixture(decorated_function_name: str,
return _read_stripe_fixture
def mock_stripe(mocked_function_name: str,
- generate_this_fixture: bool=False) -> Callable[[CallableT], CallableT]:
+ generate_this_fixture: Optional[bool]=None) -> Callable[[CallableT], CallableT]:
def _mock_stripe(decorated_function: CallableT) -> CallableT:
mocked_function = operator.attrgetter(mocked_function_name)(sys.modules[__name__])
- if GENERATE_STRIPE_FIXTURES or generate_this_fixture:
+ generate_fixture = generate_this_fixture
+ if generate_fixture is None:
+ generate_fixture = GENERATE_STRIPE_FIXTURES
+ if generate_fixture:
side_effect = generate_and_save_stripe_fixture(
decorated_function.__name__, mocked_function_name, mocked_function) # nocoverage
else:
@@ -144,9 +147,9 @@ class Kandra(object):
return True
class StripeTest(ZulipTestCase):
- @mock_stripe("stripe.Coupon.create")
- @mock_stripe("stripe.Plan.create")
- @mock_stripe("stripe.Product.create")
+ @mock_stripe("stripe.Coupon.create", False)
+ @mock_stripe("stripe.Plan.create", False)
+ @mock_stripe("stripe.Product.create", False)
def setUp(self, mock3: Mock, mock2: Mock, mock1: Mock) -> None:
call_command("setup_stripe")
|
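A minimal sketch of the three-state default pattern introduced above: `None` means "defer to the global flag", while an explicit `True`/`False` always wins (as the `False` arguments in `setUp` now do). Names below are simplified stand-ins for the Zulip code:

```python
from typing import Optional

GENERATE_STRIPE_FIXTURES = True   # pretend the global flag is on

def should_generate_fixture(generate_this_fixture: Optional[bool] = None) -> bool:
    generate_fixture = generate_this_fixture
    if generate_fixture is None:
        generate_fixture = GENERATE_STRIPE_FIXTURES
    return generate_fixture

print(should_generate_fixture())        # True  -> follows the global flag
print(should_generate_fixture(False))   # False -> setUp's explicit opt-out
print(should_generate_fixture(True))    # True  -> explicit opt-in
```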
Composition: add enable_logging method
- ensures that nested Compositions also get logged when running Composition
with log=True | @@ -6349,11 +6349,7 @@ class Composition(Composition_Base, metaclass=ComponentsMeta):
# set auto logging if it's not already set, and if log argument is True
if log:
- for item in self.nodes + self.projections:
- if not isinstance(item, CompositionInterfaceMechanism):
- for param in item.parameters:
- if param.loggable and param.log_condition is LogCondition.OFF:
- param.log_condition = LogCondition.EXECUTION
+ self.enable_logging()
# Set animation attributes
if animate is True:
@@ -7703,6 +7699,15 @@ class Composition(Composition_Base, metaclass=ComponentsMeta):
return llvm_func
+ def enable_logging(self):
+ for item in self.nodes + self.projections:
+ if isinstance(item, Composition):
+ item.enable_logging()
+ elif not isinstance(item, CompositionInterfaceMechanism):
+ for param in item.parameters:
+ if param.loggable and param.log_condition is LogCondition.OFF:
+ param.log_condition = LogCondition.EXECUTION
+
@property
def _dict_summary(self):
scheduler_dict = {
|
[ci] Deflakey gcs_heartbeat_test in windows.
We need to check the time after acquiring the lock to make sure the correctness. Otherwise, it might wait for the lock and the heartbeat has been updated. | @@ -66,8 +66,11 @@ TEST_F(GcsHeartbeatManagerTest, TestBasicTimeout) {
auto start = absl::Now();
AddNode(node_1);
- while (absl::Now() - start < absl::Seconds(1)) {
+ while (true) {
absl::MutexLock lock(&mutex_);
+ if (absl::Now() - start >= absl::Seconds(1)) {
+ break;
+ }
ASSERT_TRUE(dead_nodes.empty());
}
@@ -84,8 +87,11 @@ TEST_F(GcsHeartbeatManagerTest, TestBasicReport) {
auto start = absl::Now();
AddNode(node_1);
- while (absl::Now() - start < absl::Seconds(3)) {
+ while (true) {
absl::MutexLock lock(&mutex_);
+ if (absl::Now() - start >= absl::Seconds(3)) {
+ break;
+ }
ASSERT_TRUE(dead_nodes.empty());
io_service.post(
[&]() {
@@ -116,8 +122,11 @@ TEST_F(GcsHeartbeatManagerTest, TestBasicRestart) {
heartbeat_manager->Initialize(init_data);
- while (absl::Now() - start < absl::Seconds(3)) {
+ while (true) {
absl::MutexLock lock(&mutex_);
+ if (absl::Now() - start >= absl::Seconds(3)) {
+ break;
+ }
ASSERT_TRUE(dead_nodes.empty());
}
@@ -158,8 +167,11 @@ TEST_F(GcsHeartbeatManagerTest, TestBasicRestart2) {
std::this_thread::sleep_for(0.1s);
}
- while (absl::Now() - start < absl::Seconds(1)) {
+ while (true) {
absl::MutexLock lock(&mutex_);
+ if (absl::Now() - start >= absl::Seconds(1)) {
+ break;
+ }
ASSERT_TRUE(dead_nodes.empty());
}
|
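The same deflaking idea, sketched in Python rather than the C++ above: the elapsed-time check moves inside the critical section, so a slow lock acquisition can no longer let the loop exit before the assertion has run under the lock.

```python
import threading
import time

mutex = threading.Lock()
dead_nodes: list = []
start = time.monotonic()

while True:
    with mutex:
        if time.monotonic() - start >= 0.2:   # check only after the lock is held
            break
        assert not dead_nodes                 # assertion always runs under the lock
```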
Docstring fix for CreateVolumeAttachment class
The command "volume attachment create" has a typo in the docstring.
The docstring says to use "server add volume", but the command is
actually "server volume add". This
change fixes the typo in the docstring.
Task: 46781
Story: | @@ -82,7 +82,7 @@ class CreateVolumeAttachment(command.ShowOne):
the volume to the server at the hypervisor level. As a result, it should
typically only be used for troubleshooting issues with an existing server
in combination with other tooling. For all other use cases, the 'server
- volume add' command should be preferred.
+ add volume' command should be preferred.
"""
def get_parser(self, prog_name):
|
Remove duplicated code in dcos diagnostics test
Extract common code responsible for searching expected files
in diagnostic bundle to a function `verify_archived_items`.
This changes the test. Now expected files MUST be equal to all files
in the bundle. | @@ -598,9 +598,7 @@ def _download_bundle_from_master(dcos_api_session, master_index):
gzipped_unit_output = z.open(master_folder + 'dcos-mesos-master.service.gz')
verify_unit_response(gzipped_unit_output, 100)
- for expected_master_file in expected_master_files:
- expected_file = master_folder + expected_master_file
- assert expected_file in archived_items, 'expecting {} in {}'.format(expected_file, archived_items)
+ verify_archived_items(master_folder, archived_items, expected_master_files)
# make sure all required log files for agent node are in place.
for slave_ip in dcos_api_session.slaves:
@@ -615,9 +613,7 @@ def _download_bundle_from_master(dcos_api_session, master_index):
gzipped_unit_output = z.open(agent_folder + 'dcos-mesos-slave.service.gz')
verify_unit_response(gzipped_unit_output, 100)
- for expected_agent_file in expected_agent_files:
- expected_file = agent_folder + expected_agent_file
- assert expected_file in archived_items, 'expecting {} in {}'.format(expected_file, archived_items)
+ verify_archived_items(agent_folder, archived_items, expected_agent_files)
# make sure all required log files for public agent node are in place.
for public_slave_ip in dcos_api_session.public_slaves:
@@ -632,8 +628,12 @@ def _download_bundle_from_master(dcos_api_session, master_index):
gzipped_unit_output = z.open(agent_public_folder + 'dcos-mesos-slave-public.service.gz')
verify_unit_response(gzipped_unit_output, 100)
- for expected_public_agent_file in expected_public_agent_files:
- expected_file = agent_public_folder + expected_public_agent_file
+ verify_archived_items(agent_public_folder, archived_items, expected_public_agent_files)
+
+
+def verify_archived_items(folder, archived_items, expected_files):
+ for expected_file in expected_files:
+ expected_file = folder + expected_file
assert expected_file in archived_items, ('expecting {} in {}'.format(expected_file, archived_items))
|
fix: Keep charts as it is
Frappe Charts do not support RTL and rtlcss does not work on SVG...
Setting direction as ltr to keep charts as it is | @@ -302,15 +302,6 @@ select.input-xs {
}
}
-/*!rtl:raw:
-.dropdown-menu {
- right: auto;
-}
-.popover {
- right: auto;
-}
-*/
-
.custom-control.custom-switch {
font-size: var(--text-md);
line-height: 1.6;
@@ -583,3 +574,15 @@ details > summary:focus {
// font-family: 'Octicons';
// content: "\f00b";
// }
+
+/*rtl:raw:
+.dropdown-menu {
+ right: auto;
+}
+.popover {
+ right: auto;
+}
+.chart-container {
+ direction: ltr;
+}
+*/
\ No newline at end of file
|
Removed explicit use of `long` type.
PEP 237 removes distinctions between `long` and `int`. `int` should be
used exclusively. | @@ -13,9 +13,6 @@ from . import matrixmod2 as _mtx
import numpy as _np
import copy as _copy
-import sys
-if sys.version_info >= (3,):
- long = int
def symplectic_form(n, convention='standard'):
@@ -1636,7 +1633,7 @@ def numberofcliffords(n):
The cardinality of the n-qubit Clifford group.
"""
- return (long(4)**long(n)) * numberofsymplectic(n)
+ return (4**int(n)) * numberofsymplectic(n)
def numberofsymplectic(n):
@@ -1647,7 +1644,7 @@ def numberofsymplectic(n):
John A. Smolin.
"""
- x = long(1)
+ x = 1
for j in range(1, n + 1):
x = x * numberofcosets(j)
@@ -1661,7 +1658,7 @@ def numberofcosets(n):
select an arbitrary Clifford group element" by Robert Koenig and
John A. Smolin.
"""
- x = long(2)**long(2 * n - 1) * ((long(2)**long(2 * n)) - long(1))
+ x = 2**int(2 * n - 1) * ((2**int(2 * n)) - 1)
return x
@@ -1895,7 +1892,7 @@ def get_symplectic_label(gn, n=None):
bb[j - 1] = tw[j]
zv = bitstring_to_int(v, nn) - 1
zw = bitstring_to_int(bb, nn - 1)
- cvw = zw * ((long(2)**long(2 * n)) - 1) + zv
+ cvw = zw * ((2**int(2 * n)) - 1) + zv
#step 5
if n == 1:
@@ -1948,7 +1945,7 @@ def random_symplectic_index(n):
index = cardinality
while index >= cardinality:
- temp = long(0)
+ temp = 0
for i in range(0, n):
add = zeros_string(m)
sample = _np.random.randint(10**digits2)
@@ -1957,9 +1954,9 @@ def random_symplectic_index(n):
add += str(sample)
for j in range(i + 1, n):
add += zeros_string(digits2)
- temp += long(add)
+ temp += int(add)
add = str(_np.random.randint(10**m)) + zeros_string(n * digits2)
- index = long(add) + temp
+ index = int(add) + temp
return index
|
global: fixing hosts without imageio ocio settings
in cases where an old project with the imageio settings group has an override | @@ -344,9 +344,9 @@ def get_imageio_config(
imageio_global, imageio_host = _get_imageio_settings(
project_settings, host_name)
- config_host = imageio_host["ocio_config"]
+ config_host = imageio_host.get("ocio_config", {})
- if config_host["enabled"]:
+ if config_host.get("enabled"):
config_data = _get_config_data(
config_host["filepath"], anatomy_data
)
|
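A small sketch of why `.get()` matters in the fix above: legacy project settings may not contain the `ocio_config` group at all, so direct indexing would raise a KeyError (the dicts below are illustrative, not real OpenPype settings):

```python
legacy_host = {}   # old override, no ocio_config group
current_host = {"ocio_config": {"enabled": True, "filepath": "/configs/config.ocio"}}

for imageio_host in (legacy_host, current_host):
    config_host = imageio_host.get("ocio_config", {})
    if config_host.get("enabled"):
        print("loading", config_host["filepath"])
    else:
        print("no ocio config enabled for this host")
```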
DOC: added `ma.round` and `ma.round_` examples
This PR is partially addressing
Added examples for ma.round and ma.round_ | @@ -7503,6 +7503,28 @@ def round_(a, decimals=0, out=None):
If out is given and does not have a mask attribute, the mask of a
is lost!
+ Examples
+ --------
+ >>> import numpy.ma as ma
+ >>> x = [11.2, -3.973, 0.801, -1.41]
+ >>> mask = [0, 0, 0, 1]
+ >>> masked_x = ma.masked_array(x, mask)
+ >>> masked_x
+ masked_array(data=[11.2, -3.973, 0.801, --],
+ mask=[False, False, False, True],
+ fill_value=1e+20)
+ >>> ma.round_(masked_x)
+ masked_array(data=[11.0, -4.0, 1.0, --],
+ mask=[False, False, False, True],
+ fill_value=1e+20)
+ >>> ma.round(masked_x, decimals=1)
+ masked_array(data=[11.2, -4.0, 0.8, --],
+ mask=[False, False, False, True],
+ fill_value=1e+20)
+ >>> ma.round_(masked_x, decimals=-1)
+ masked_array(data=[10.0, -0.0, 0.0, --],
+ mask=[False, False, False, True],
+ fill_value=1e+20)
"""
if out is None:
return np.round_(a, decimals, out)
|
[ci/release] Fix fetching logs from staging clusters
Replaces a formerly hard-coded URI to anyscale prod with the respective env variable. | @@ -17,7 +17,12 @@ from ray_release.exception import (
)
from ray_release.file_manager.file_manager import FileManager
from ray_release.logger import logger
-from ray_release.util import exponential_backoff_retry, format_link, get_anyscale_sdk
+from ray_release.util import (
+ exponential_backoff_retry,
+ format_link,
+ get_anyscale_sdk,
+ ANYSCALE_HOST,
+)
if TYPE_CHECKING:
from anyscale.sdk.anyscale_client.sdk import AnyscaleSDK
@@ -153,7 +158,7 @@ class SDKRunner(CommandRunner):
query_params={"start_line": -LAST_LOGS_LENGTH, "end_line": 0},
header_params={},
response_type=object,
- _host="https://console.anyscale.com",
+ _host=str(ANYSCALE_HOST),
_preload_content=True,
_return_http_data_only=False,
)
|
Fix transition xml name in lifecycleconfig
Fixes | @@ -88,7 +88,7 @@ class Transition(DateDays):
def toxml(self, element):
"""Convert to XML."""
- element = SubElement(element, "NoncurrentVersionTransition")
+ element = SubElement(element, "Transition")
super().toxml(element)
if self._storage_class:
SubElement(element, "StorageClass", self._storage_class)
|
Properties: use the equivalencce function to compare lexical envs
TN: | @@ -122,10 +122,14 @@ class Eq(AbstractExpression):
@classmethod
def make_expr(cls, lhs, rhs, abstract_expr=None):
- return (cls.make_expr_for_entities(lhs, rhs, abstract_expr)
- if lhs.type.is_entity_type else
- BasicExpr('Is_Equal', '{} = {}', T.BoolType, [lhs, rhs],
- abstract_expr=abstract_expr))
+ if lhs.type.is_entity_type:
+ return cls.make_expr_for_entities(lhs, rhs, abstract_expr)
+ elif lhs.type.is_lexical_env_type:
+ return CallExpr('Is_Equal', 'Equivalent', T.BoolType, [lhs, rhs],
+ abstract_expr=abstract_expr)
+ else:
+ return BasicExpr('Is_Equal', '{} = {}', T.BoolType, [lhs, rhs],
+ abstract_expr=abstract_expr)
@staticmethod
def make_expr_for_entities(lhs, rhs, abstract_expr=None):
|
Updated config_template.py
Added CAA and SRV record to standard template | @@ -75,7 +75,7 @@ PDNS_API_KEY = 'you never know'
PDNS_VERSION = '3.4.7'
# RECORDS ALLOWED TO EDIT
-RECORDS_ALLOW_EDIT = ['A', 'AAAA', 'CNAME', 'SPF', 'PTR', 'MX', 'TXT']
+RECORDS_ALLOW_EDIT = ['A', 'AAAA', 'CAA', 'CNAME', 'MX', 'PTR', 'SPF', 'SRV', 'TXT']
# EXPERIMENTAL FEATURES
PRETTY_IPV6_PTR = False
|
Add documentation on the state_events option
Adds additional documentation on the state_events option to the
Salt Master Events doc | @@ -90,8 +90,13 @@ Job events
.. salt:event:: salt/job/<JID>/prog/<MID>/<RUN NUM>
- Fired each time a each function in a state run completes execution. Must be
- enabled using the :conf_master:`state_events` option.
+ Fired each time a each function in a state run completes execution.
+
+ Can be enabled for all state runs in the Salt master config with the
+ :conf_master:`state_events` option. To enable for an individual state
+ run, pass ``state_events=True`` to the :py:mod:`state <salt.modules.state>`
+ function being used.
+
:var data: The data returned from the state module function.
:var id: The minion ID.
|
Containment items can be wrong in property based test
Since ownership can change while the (old) containment
relation is still in place. | @@ -307,7 +307,8 @@ def _(relation: diagramitems.InterfaceRealizationItem, head, tail):
@check_relation.register
def _(relation: diagramitems.ContainmentItem, head, tail):
assert not relation.subject
- assert tail.subject.owner is head.subject
+ # subject ownership can not be checked, since
+ # it can be changed by the group functionality.
@check_relation.register
|
Bump package release
Update package release to pick up the new defaults. | Name: postgres_exporter
Version: 0.8.0
-Release: 1%{?dist}
+Release: 2%{?dist}
Summary: Prometheus exporter for PostgreSQL server metrics
License: ASL 2.0
URL: https://github.com/wrouesnel/%{name}
|
Update state_of_specimen.json
Added ischemic_temperature field. | },
"type": "array"
},
+ "ischemic_temperature": {
+ "description": "Whether warm or cold ischemia.",
+ "type": "string",
+ "enum": [
+ "warm",
+ "cold"
+ ]
+ },
"ischemic_time": {
"description": "Duration of time, in seconds, that the body part had insufficient blood supply.",
"maximum": 100000,
|
update scripts/subtree-merge for more recent versions of git
merge now fails on unrelated histories unless you use --allow-unrelated-histories | @@ -12,7 +12,7 @@ function subtree-merge() {
new_path=$4
git remote add -f $module_name $module_uri
- git merge -s ours --no-commit $module_name/master
+ git merge -s ours --no-commit $module_name/master --allow-unrelated-histories
git read-tree --prefix=$new_path -u $module_name/master:$old_path
git commit -m "subtree merge $module_name:$old_path into $new_path"
}
|
Allow running RTM over a thread
When trying to start an RTM connection over a thread, a RuntimeError appears:
RuntimeError: set_wakeup_fd only works in main thread
This change allows it. | @@ -10,6 +10,7 @@ import inspect
import signal
from typing import Optional, Callable, DefaultDict
from ssl import SSLContext
+from threading import current_thread, main_thread
# ThirdParty Imports
import asyncio
@@ -185,7 +186,7 @@ class RTMClient(object):
SlackApiError: Unable to retreive RTM URL from Slack.
"""
# TODO: Add Windows support for graceful shutdowns.
- if os.name != "nt":
+ if os.name != "nt" and current_thread() == main_thread():
signals = (signal.SIGHUP, signal.SIGTERM, signal.SIGINT)
for s in signals:
self._event_loop.add_signal_handler(s, self.stop)
|
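A minimal illustration of the guard added above, using only the standard library: signal handlers (and `set_wakeup_fd`) may only be installed from the main thread, so the check skips signal setup anywhere else:

```python
from threading import Thread, current_thread, main_thread

def setup():
    if current_thread() == main_thread():
        print("main thread: safe to register signal handlers")
    else:
        print("worker thread: skip signal handlers to avoid set_wakeup_fd error")

setup()                                          # main thread branch
t = Thread(target=setup); t.start(); t.join()    # worker thread branch
```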
Upgrade pysolr 3.9.0 -> no upgrade, and its deps: requests,certifi,charset-normalizer,idna,urllib3
pysolr 3.9.0 -> no upgrade
requests 2.28.1 -> no upgrade
certifi 2022.6.15 -> 2022.9.24
charset-normalizer 2.1.0 -> 2.1.1 (OUTDATED! latest is: 3.0.0)
idna 3.3 -> 3.4
urllib3 1.26.9 -> 1.26.12 | @@ -16,11 +16,11 @@ bleach[css]==5.0.1
# via pypeline
cchardet==2.1.7
# via -r requirements.in
-certifi==2021.10.8
+certifi==2022.9.24
# via requests
cffi==1.15.1
# via cryptography
-charset-normalizer==2.0.12
+charset-normalizer==2.1.1
# via requests
colander==1.8.3
# via -r requirements.in
@@ -61,7 +61,7 @@ html5lib==1.1
# -r requirements.in
# pypeline
# textile
-idna==3.3
+idna==3.4
# via requests
importlib-metadata==4.13.0
# via
@@ -150,7 +150,7 @@ regex-as-re-globally==0.0.2
# via -r requirements.in
repoze-lru==0.7
# via turbogears2
-requests==2.27.1
+requests==2.28.1
# via
# -r requirements.in
# pysolr
@@ -194,7 +194,7 @@ typing-extensions==4.4.0
# via
# gitpython
# importlib-metadata
-urllib3==1.26.9
+urllib3==1.26.12
# via requests
waitress==2.1.2
# via webtest
|
Inference benchmark: NUMA-awareness + multi-model support
Summary:
Pure experimental addition to guide us on delivering this
into real production systems and their threadpools. Biggest limitation
now is that we need to turn off BlackBoxPredictor activation
deallocation logic to get to sane performance | @@ -119,7 +119,7 @@ NetDef optimize_inference_net(
ao->CopyFrom(op);
}
- LOG(INFO) << "optimized net using " << renaming.size() << " shared blobs";
+ VLOG(1) << "optimized net using " << renaming.size() << " shared blobs";
return optim_net;
}
|
Fix Huawei.VRP.get_vlans script
HG--
branch : feature/microservices | @@ -25,13 +25,13 @@ class Script(BaseScript):
result = []
oids = {}
# Get OID -> VLAN ID mapping
- for oid, v in self.snmp.getnext("1.3.6.1.2.1.17.7.1.4.2.1.3",
- bulk=True): # dot1qVlanFdbId
+ # dot1qVlanFdbId
+ for oid, v in self.snmp.getnext("1.3.6.1.2.1.17.7.1.4.2.1.3"):
oids[oid.split(".")[-1]] = v
if oids:
# Get VLAN names
- for oid, v in self.snmp.getnext("1.3.6.1.2.1.17.7.1.4.3.1.1",
- bulk=True): # dot1qVlanStaticName
+ # dot1qVlanStaticName
+ for oid, v in self.snmp.getnext("1.3.6.1.2.1.17.7.1.4.3.1.1"):
o = oid.split(".")[-1]
result += [{
"vlan_id":int(oids[o]),
@@ -39,8 +39,8 @@ class Script(BaseScript):
}]
else:
tmp_vlan = []
- for oid, v in self.snmp.getnext("1.3.6.1.2.1.17.7.1.4.3.1",
- bulk=True): # dot1qVlanStaticName
+ # dot1qVlanStaticName
+ for oid, v in self.snmp.getnext("1.3.6.1.2.1.17.7.1.4.3.1"):
vlan_id = int(oid.split(".")[-1])
if vlan_id in tmp_vlan:
break
|
Fix unused 'timeout' in OvsdbMonitor start
The 'timeout' parameter needs to be passed to wait_until_true
to make it meaningful, and its default is changed to 60 seconds.
In fact, 60 seconds was used until now. | @@ -62,10 +62,10 @@ class OvsdbMonitor(async_process.AsyncProcess):
self.new_events = {'added': [], 'removed': [], 'modified': []}
return events
- def start(self, block=False, timeout=5):
+ def start(self, block=False, timeout=60):
super(OvsdbMonitor, self).start()
if block:
- utils.wait_until_true(self.is_active)
+ utils.wait_until_true(self.is_active, timeout=timeout)
class SimpleInterfaceMonitor(OvsdbMonitor):
|
Update server maintenance to also clean up management sources.
This is probably extra rare. | @@ -10,7 +10,8 @@ from django.conf import settings
import django.utils.timezone
import server.utils
-from server.models import PluginScriptSubmission, HistoricalFact, Machine, ManagedItemHistory
+from server.models import (PluginScriptSubmission, HistoricalFact, Machine, ManagedItemHistory,
+ ManagementSource)
class Command(BaseCommand):
@@ -32,7 +33,15 @@ class Command(BaseCommand):
PluginScriptSubmission.objects.filter(recorded__lt=datelimit).delete()
# Clear out-of-date ManagedItemHistories
- ManagedItemHistory.objects.filter(recorded__lt=retention_date).delete())
+ ManagedItemHistory.objects.filter(recorded__lt=datelimit).delete()
+
+ for source in ManagementSource.objects.exclude(name__in=('Machine', 'Sal')):
+ if (not source.manageditem_set.count() and
+ not source.manageditemhistory_set.count() and
+ not source.facts.count() and
+ not source.historical_facts.count() and
+ not source.messages.count()):
+ source.delete()
HistoricalFact.objects.filter(fact_recorded__lt=datelimit).delete()
|
Support for the latest version of openvino2tensorflow v1.28.8
Support for the latest version of openvino2tensorflow v1.28.8 | {
"format_version": 2,
"layers": [
+ {
+ "layer_id": "214",
+ "type": "Squeeze",
+ "replace_mode": "insert_after",
+ "values": 1
+ },
+ {
+ "layer_id": "292",
+ "type": "Squeeze",
+ "replace_mode": "insert_after",
+ "values": 1
+ },
+ {
+ "layer_id": "313",
+ "type": "Squeeze",
+ "replace_mode": "insert_after",
+ "values": 1
+ },
+ {
+ "layer_id": "333",
+ "type": "Squeeze",
+ "replace_mode": "insert_after",
+ "values": 1
+ },
+ {
+ "layer_id": "343",
+ "type": "Squeeze",
+ "replace_mode": "insert_after",
+ "values": 1
+ },
+ {
+ "layer_id": "365",
+ "type": "Squeeze",
+ "replace_mode": "insert_after",
+ "values": 1
+ },
{
"layer_id": "370",
"type": "Const",
|
Update CHANGELOG.md
add links to docs | -## dbt 0.12.2 - Grace Kelly (Currently Unreleased)
+## dbt 0.12.2 - Grace Kelly (January 8, 2019)
### Overview
@@ -10,9 +10,9 @@ This release reduces the runtime of dbt projects by improving dbt's approach to
### Features
- More intelligently order and execute nodes in the graph. This _significantly_ speeds up the runtime of most dbt projects ([#813](https://github.com/fishtown-analytics/dbt/issues/813))
- Add `-m` flag as an alias for `--models` ([#1160](https://github.com/fishtown-analytics/dbt/issues/1160))
-- Add `post_hook` and `pre_hook` as aliases for `post-hook` and `pre-hook`, respectively ([#1124](https://github.com/fishtown-analytics/dbt/issues/1124))
+- Add `post_hook` and `pre_hook` as aliases for `post-hook` and `pre-hook`, respectively ([#1124](https://github.com/fishtown-analytics/dbt/issues/1124)) ([docs](https://docs.getdbt.com/v0.12/docs/using-hooks))
- Better handling of git errors in `dbt deps` + full support for Windows ([#994](https://github.com/fishtown-analytics/dbt/issues/994), [#778](https://github.com/fishtown-analytics/dbt/issues/778), [#895](https://github.com/fishtown-analytics/dbt/issues/895))
-- Add support for specifying a `location` in BigQuery datasets ([#969](https://github.com/fishtown-analytics/dbt/issues/969))
+- Add support for specifying a `location` in BigQuery datasets ([#969](https://github.com/fishtown-analytics/dbt/issues/969)) ([docs](https://docs.getdbt.com/v0.12/docs/supported-databases#section-dataset-locations))
- Add support for Jinja expressions using the `{% do ... %}` block ([#1113](https://github.com/fishtown-analytics/dbt/issues/1113))
- The `dbt debug` command is actually useful now ([#1061](https://github.com/fishtown-analytics/dbt/issues/1061))
- The `config` function can now be called multiple times in a model ([#558](https://github.com/fishtown-analytics/dbt/issues/558))
|
add back in reference to jit_unsupported section
Summary:
It was added in and removed in a bad merge in
Pull Request resolved: | @@ -180,17 +180,11 @@ PyTorch Functions and Modules
TorchScript supports a subset of the tensor and neural network
functions that PyTorch provides. Most methods on Tensor as well as functions in
-the ``torch`` namespace, all functions in ``torch.nn.functional`` and all
-modules from ``torch.nn`` are supported in TorchScript, excluding those in the
-table below. For unsupported modules, we suggest using :meth:`torch.jit.trace`.
-See :ref:`supported-pytorch-functions` and :ref:`supported-tensor-methods` for a full
-listing of available methods.
+the ``torch`` namespace, all functions in ``torch.nn.functional`` and
+most modules from ``torch.nn`` are supported in TorchScript.
-Unsupported ``torch.nn`` Modules::
+See :ref:`jit_unsupported` for a list of unsupported PyTorch functions and modules.
- torch.nn.modules.adaptive.AdaptiveLogSoftmaxWithLoss
- torch.nn.modules.normalization.CrossMapLRN2d
- torch.nn.modules.rnn.RNN
Python Functions and Modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
vellum:a038de5e28e4275a47f5df17b081d911477b9ae6
[ci skip] | -cd357ce60716beb65e2781db1b8a80f570acedf3
+a038de5e28e4275a47f5df17b081d911477b9ae6
|
Make pkgutil.get_data return Optional[bytes]
As per the docs (and implementation):
If the package cannot be located or loaded ... then None is returned. | @@ -28,4 +28,4 @@ def iter_modules(path: Optional[List[str]] = ...,
prefix: str = ...) -> _YMFNI: ... # TODO precise type
def walk_packages(path: Optional[str] = ..., prefix: str = ...,
onerror: Optional[Callable[[str], None]] = ...) -> _YMFNI: ...
-def get_data(package: str, resource: str) -> bytes: ...
+def get_data(package: str, resource: str) -> Optional[bytes]: ...
|
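A usage sketch that reflects the corrected annotation: callers should handle the `None` case (the package and resource names below are placeholders):

```python
import pkgutil
from typing import Optional

data: Optional[bytes] = pkgutil.get_data("some_missing_package", "data.txt")
if data is None:
    print("package could not be located or loaded")
else:
    print(f"read {len(data)} bytes")
```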
Stop using InputPeerSelf() on events and special case edit()
Used to fail on the chat with yourself (where messages are
"incoming" instead of outgoing). Now the ID of the chat and
sender are compared to achieve the same effect. Fixes | @@ -21,8 +21,7 @@ def _into_id_set(client, chats):
for chat in chats:
chat = client.get_input_entity(chat)
if isinstance(chat, types.InputPeerSelf):
- chat = getattr(_into_id_set, 'me', None) or client.get_me()
- _into_id_set.me = chat
+ chat = client.get_me(input_peer=True)
result.add(utils.get_peer_id(chat))
return result
@@ -43,6 +42,7 @@ class _EventBuilder(abc.ABC):
def __init__(self, chats=None, blacklist_chats=False):
self.chats = chats
self.blacklist_chats = blacklist_chats
+ self._self_id = None
@abc.abstractmethod
def build(self, update):
@@ -51,6 +51,7 @@ class _EventBuilder(abc.ABC):
def resolve(self, client):
"""Helper method to allow event builders to be resolved before usage"""
self.chats = _into_id_set(client, self.chats)
+ self._self_id = client.get_me(input_peer=True).user_id
def _filter_event(self, event):
"""
@@ -179,11 +180,6 @@ class NewMessage(_EventBuilder):
You can specify a regex-like string which will be matched
against the message, a callable function that returns ``True``
if a message is acceptable, or a compiled regex pattern.
-
- Notes:
- The ``message.from_id`` might not only be an integer or ``None``,
- but also ``InputPeerSelf()`` for short private messages (the API
- would not return such thing, this is a custom modification).
"""
def __init__(self, incoming=None, outgoing=None,
chats=None, blacklist_chats=False, pattern=None):
@@ -216,7 +212,7 @@ class NewMessage(_EventBuilder):
silent=update.silent,
id=update.id,
to_id=types.PeerUser(update.user_id),
- from_id=types.InputPeerSelf() if update.out else update.user_id,
+ from_id=self._self_id if update.out else update.user_id,
message=update.message,
date=update.date,
fwd_from=update.fwd_from,
@@ -317,6 +313,10 @@ class NewMessage(_EventBuilder):
or the edited message otherwise.
"""
if not self.message.out:
+ if not isinstance(self.message.to_id, types.PeerUser):
+ return None
+ me = self._client.get_me(input_peer=True)
+ if self.message.to_id.user_id != me.user_id:
return None
return self._client.edit_message(self.input_chat,
|
Add proper error messages in `__check_input__()`
* Add proper error messages in `__check_input__()`
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see
* Format update to pass linting error | @@ -180,9 +180,12 @@ class MessagePassing(torch.nn.Module):
the_size: List[Optional[int]] = [None, None]
if isinstance(edge_index, Tensor):
- assert edge_index.dtype == torch.long
- assert edge_index.dim() == 2
- assert edge_index.size(0) == 2
+ assert edge_index.dtype == torch.long, \
+ "edge_index.dtype is not of torch.long"
+ assert edge_index.dim() == 2, \
+ "edge_index.dim() is not equal to 2"
+ assert edge_index.size(0) == 2, \
+ "edge_index.size(0) is not equal to 2"
if size is not None:
the_size[0] = size[0]
the_size[1] = size[1]
|
White space change
* White space change
Also, some files remain with spaces after the values (like in comfig.cfg); there are spaces right after the first commands.
* remove blank lines | @@ -48,11 +48,11 @@ for P in "${decals_depth[@]}"; do
sed -i "/\"ConVar.mat_slopescaledepthbias_decal\"/ s/\"[-0.5]*\"/\"0.000001\"/" mastercomfig-"${P}"-preset/dxsupport_override.cfg
done
-
# Remove comments to save space
if [ "$release" = true ] ; then
find . -name "*.cfg" | xargs sed -i '/^[[:blank:]]*\/\//d;s/\/\/.*//'
find . -name "*.cfg" | xargs sed -i '/^[[:space:]]*$/d'
+ find . -name "*.cfg" | xargs sed -i '/^$/d'
# Package into VPK
for D in *; do
if [ -d "${D}" ]; then
|
help: Hide y-scrollbar on collapsed sidebar view.
Closes and fixes | @@ -1798,6 +1798,10 @@ input.new-organization-button {
.app.help .sidebar.show ~ .markdown {
-webkit-filter: brightness(0.7);
}
+
+ .app.help .sidebar:not(.show) .ps__scrollbar-y-rail {
+ display: none; /* Hide y-scrollbar on collapsed sidebar view */
+ }
}
@media (max-width: 950px) {
|
Update TaskTestingNotebook.ipynb
fixed colab link | "id": "eGNIn0Ay9sr0"
},
"source": [
- "[](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)\n"
+ "[](https://colab.sandbox.google.com/github/google/BIG-bench/blob/main/notebooks/TaskTestingNotebook.ipynb)\n"
]
},
{
|
docs: Add a resource to code review doc.
Add James J. Porter's article on code review. | @@ -123,6 +123,9 @@ We also strongly recommend reviewers to go through the following resources.
article by Sarah Sharp
* [Zulip & Good Code Review](https://www.harihareswara.net/sumana/2016/05/17/0)
article by Sumana Harihareswara
+* [Code Review - A consolidation of advice and stuff from the
+ sinternet](https://gist.github.com/porterjamesj/002fb27dd70df003646df46f15e898de)
+ article by James J. Porter
* [Zulip Code of Conduct](https://zulip.readthedocs.io/en/latest/code-of-conduct.html)
[code-style]: code-style.html
|
Filtering: don't delete messages in DMs
Bots are incapable of deleting direct messages authored by others. | @@ -4,7 +4,7 @@ from typing import Optional, Union
import discord.errors
from dateutil.relativedelta import relativedelta
-from discord import Colour, DMChannel, Member, Message, TextChannel
+from discord import Colour, Member, Message, TextChannel
from discord.ext.commands import Cog
from discord.utils import escape_markdown
@@ -161,8 +161,10 @@ class Filtering(Cog):
match = await _filter["function"](msg)
if match:
- # If this is a filter (not a watchlist), we should delete the message.
- if _filter["type"] == "filter":
+ is_private = msg.channel.type is discord.ChannelType.private
+
+ # If this is a filter (not a watchlist) and not in a DM, delete the message.
+ if _filter["type"] == "filter" and not is_private:
try:
# Embeds (can?) trigger both the `on_message` and `on_message_edit`
# event handlers, triggering filtering twice for the same message.
@@ -181,7 +183,7 @@ class Filtering(Cog):
if _filter["user_notification"]:
await self.notify_member(msg.author, _filter["notification_msg"], msg.channel)
- if isinstance(msg.channel, DMChannel):
+ if is_private:
channel_str = "via DM"
else:
channel_str = f"in {msg.channel.mention}"
|
add testing for optimizing videos
need sleep(1) to allow for conversion to go through. | # -*- coding: utf-8 -*-
import pytest
from datetime import datetime
+from time import sleep
from plexapi.exceptions import BadRequest, NotFound
from . import conftest as utils
@@ -686,3 +687,19 @@ def test_video_exists_accessible(movie, episode):
assert episode.media[0].parts[0].exists is True
assert episode.media[0].parts[0].accessible is True
+
+def test_video_optimize(movie, plex):
+ plex.optimizedItems(removeAll=True)
+ movie.optimize(targetTagID=1)
+ sleep(1)
+ assert len(plex.optimizedItems()) == 1
+ assert len(plex.conversions()) == 1
+ conversion = plex.conversions()[0]
+ conversion.remove()
+ assert len(plex.conversions()) == 0
+ assert len(plex.optimizedItems()) == 1
+ optimized = plex.optimizedItems()[0]
+ video = plex.optimizedItem(optimizedID=optimized.id)
+ assert movie.key == video.key
+ plex.optimizedItems(removeAll=True)
+ assert len(plex.optimizedItems()) == 0
\ No newline at end of file
|
Revive doctest_tb_sysexit
Output changed with `pure-eval>=0.2.0` which learned to parse binary operators and to show its arguments. Test changed to not use the binary operator '%' to pass on any version. | @@ -8,7 +8,7 @@ def div0():
x/y
def sysexit(stat, mode):
- raise SystemExit(stat, 'Mode = %s' % mode)
+ raise SystemExit(stat, f'Mode = {mode}')
def bar(mode):
"bar"
|
support complex-valued arrays in numeric.inv
Currently, `numeric.inv` assumes its input is real-valued. In future commits
support for complex numbers will be added to the `evaluable` module and
`numeric.inv` will be called with complex-valued matrices. This patch makes the
necessary changes. | @@ -185,7 +185,7 @@ def inv(A):
Ainv = numpy.linalg.inv(A)
except numpy.linalg.LinAlgError:
warnings.warn('singular matrix', RuntimeWarning)
- Ainv = numpy.empty(A.shape, dtype=float)
+ Ainv = numpy.empty(A.shape, dtype=complex if A.dtype.kind == 'c' else float)
for index in numpy.ndindex(A.shape[:-2]):
try:
Ainv[index] = numpy.linalg.inv(A[index])
|
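A quick numpy check of the dtype choice made above: the fallback buffer has to be complex when the input is complex, otherwise assigning the per-index inverses would drop the imaginary parts:

```python
import numpy as np

for A in (np.eye(2), (1 + 1j) * np.eye(2)):
    Ainv = np.empty(A.shape, dtype=complex if A.dtype.kind == 'c' else float)
    Ainv[...] = np.linalg.inv(A)
    print(A.dtype, "->", Ainv.dtype)
# float64 -> float64
# complex128 -> complex128
```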
settings org: Deduplicate `upload_realm_logo_or_icon`.
Now that we have arranged our HTML and CSS classes in a similar
fashion for each of the cases, we can remove the duplicated lists of
objects. | @@ -944,10 +944,7 @@ exports.build_page = function () {
function upload_realm_logo_or_icon(file_input, night, icon) {
const form_data = new FormData();
- let spinner;
- let error_field;
- let upload_text;
- let delete_button;
+ let widget;
let url;
form_data.append('csrfmiddlewaretoken', csrf_token);
@@ -956,29 +953,20 @@ exports.build_page = function () {
}
if (icon) {
url = '/json/realm/icon';
- spinner = $('#realm-icon-upload-widget .upload-spinner-background');
- upload_text = $('#realm-icon-upload-widget .settings-page-upload-text');
- delete_button = $('#realm-icon-upload-widget .settings-page-delete-button');
- error_field = $("#realm-icon-upload-widget .image_file_input_error");
+ widget = '#realm-icon-upload-widget';
} else {
if (night) {
- error_field = $("#realm-night-logo-upload-widget .image_file_input_error");
- spinner = $("#realm-night-logo-upload-widget .upload-spinner-background");
- upload_text = $('#realm-night-logo-upload-widget .settings-page-upload-text');
- delete_button = $('#realm-night-logo-upload-widget .settings-page-delete-button');
+ widget = '#realm-night-logo-upload-widget';
} else {
- error_field = $("#realm-day-logo-upload-widget .image_file_input_error");
- spinner = $("#realm-day-logo-upload-widget .upload-spinner-background");
- upload_text = $('#realm-day-logo-upload-widget .settings-page-upload-text');
- delete_button = $('#realm-day-logo-upload-widget .settings-page-delete-button');
+ widget = '#realm-day-logo-upload-widget';
}
url = '/json/realm/logo';
form_data.append('night', JSON.stringify(night));
}
- spinner.expectOne();
- upload_text.expectOne();
- delete_button.expectOne();
- error_field.expectOne();
+ const spinner = $(`${widget} .upload-spinner-background`).expectOne();
+ const upload_text = $(`${widget} .settings-page-upload-text`).expectOne();
+ const delete_button = $(`${widget} .settings-page-delete-button`).expectOne();
+ const error_field = $(`${widget} .image_file_input_error`).expectOne();
realm_icon_logo_upload_start(spinner, upload_text, delete_button);
error_field.hide();
channel.post({
|
fix tutorial
add missing dep curl
create channels directory | @@ -32,7 +32,7 @@ Then create an environment:
```
mamba create -n quetz -c conda-forge python fastapi authlib httpx=0.12.0 sqlalchemy sqlite \
-python-multipart uvicorn zstandard conda-build appdirs toml
+python-multipart uvicorn zstandard conda-build appdirs toml curl
conda activate quetz
```
@@ -49,6 +49,12 @@ Initialize test database:
python init_db.py
```
+Create a directory to store channels
+
+```
+mkdir channels
+```
+
Run the fastapi server:
```
|
Use `version.to_docker_tag` to normalize the package version.
e.g.
```
>>> TAG = version.to_docker_tag('1.2.3')
1.2.3
```
or
```
>>> TAG = version.to_docker_tag('1.2.3+build456abc')
1.2.3.build456abc
``` | @@ -7,9 +7,10 @@ import docker.errors
import requests
import armory
+from armory.utils import version
from armory.logs import log, is_progress
-TAG = armory.__version__
+TAG = version.to_docker_tag(armory.__version__)
log.trace(f"armory.__version__: {armory.__version__}")
DOCKER_REPOSITORY = "twosixarmory"
|
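A hedged sketch of what such a normalizer can look like, based only on the examples in the message above (Docker tags may not contain `+`); this is not the actual `armory.utils.version` implementation:

```python
def to_docker_tag(pkg_version: str) -> str:
    # Replace the PEP 440 local-version separator with a character Docker allows.
    return pkg_version.replace("+", ".")

print(to_docker_tag("1.2.3"))               # 1.2.3
print(to_docker_tag("1.2.3+build456abc"))   # 1.2.3.build456abc
```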
Track model browser row popup
Clean it when we need to create a new one.
This way the signal handling still works (we're not detaching the popup,
which would break action propagation), and we're not creating a lot of inactive
popups. | @@ -380,8 +380,10 @@ def list_item_factory_setup(_factory, list_item, event_manager, modeling_languag
-1,
)
row = builder.get_object("row")
+ menu = None
def on_show_popup(ctrl, n_press, x, y):
+ nonlocal menu
list_item.get_child().activate_action(
"list.select-item",
GLib.Variant.new_tuple(
@@ -391,6 +393,9 @@ def list_item_factory_setup(_factory, list_item, event_manager, modeling_languag
),
)
element = list_item.get_item().get_item().element
+ if menu:
+ menu.unparent()
+ menu = None
if element:
menu = Gtk.PopoverMenu.new_from_model(
popup_model(element, modeling_language)
|
Changed info about available lib list in the user guide
Changed info about available lib list in the user guide | @@ -197,7 +197,7 @@ After clicking you see the window with 4 fields:

-You need to wait for a while after resource choosing till list of all available libraries is received. If available libraries list is not gained due to some reasons you are able to proceed to work without autocomplete feature.
+You need to wait for a while after resource and group choosing till list of all available libraries is received for a particular group. If available libraries list is not gained due to some reasons you are able to proceed to work without autocomplete feature.

|
CompileCtx: add a short_name_or_long shortcut
TN: | @@ -403,6 +403,7 @@ class CompileCtx(object):
names.Name(lib_name)
)
self.short_name = names.Name(short_name) if short_name else None
+ self.short_name_or_long = self.short_name or self.lib_name
self.ada_api_settings = AdaAPISettings(self)
self.c_api_settings = CAPISettings(
@@ -1773,8 +1774,7 @@ class CompileCtx(object):
'gdb_py',
langkit_path=os.path.dirname(os.path.dirname(__file__)),
lib_name=lib_name,
- prefix=(self.short_name.lower
- if self.short_name else lib_name),
+ prefix=self.short_name_or_long.lower,
),
self.post_process_python
)
|
Fix commented fairseq call
also remove testing for variable existence | @@ -26,9 +26,9 @@ echo "fairseq-generate
$FAIRSEQ_GENERATE_ARGS
--path $checkpoint
--results-path $results_folder/valid"
-#fairseq-generate $FAIRSEQ_GENERATE_ARGS \
-# --path $checkpoint \
-# --results-path $results_folder/valid
+fairseq-generate $FAIRSEQ_GENERATE_ARGS \
+ --path $checkpoint \
+ --results-path $results_folder/valid
# to profile decoder
# 1. pip install line_profiler
# 2. decorate target function with @profile
@@ -48,7 +48,7 @@ if [ "$TASK_TAG" == "AMR" ];then
--in-actions $results_folder/valid.actions \
--out-amr $results_folder/valid.amr \
- if [ ! -v WIKI_DEV ];then
+ if [ "$WIKI_DEV" == "" ];then
# Smatch evaluation without wiki
python smatch/smatch.py \
|
Fix return types
Apparently Sphinx can do list(type) or list[type]. If we ever use it this'll make things more clear. | @@ -289,7 +289,7 @@ class Mail(object):
def contents(self):
"""The Contents of this Mail. Must include at least one MIME type.
- :rtype: list
+ :rtype: list(Content)
"""
return self._contents
@@ -308,7 +308,7 @@ class Mail(object):
"""The attachments included with this Mail.
:returns: List of Attachment objects.
- :rtype: list
+ :rtype: list(Attachment)
"""
return self._attachments
@@ -326,7 +326,7 @@ class Mail(object):
"""The sections included with this Mail.
:returns: List of Section objects.
- :rtype: list
+ :rtype: list(Section)
"""
return self._sections
@@ -344,7 +344,7 @@ class Mail(object):
"""The Headers included with this Mail.
:returns: List of Header objects.
- :rtype: list
+ :rtype: list(Header)
"""
return self._headers
@@ -367,7 +367,7 @@ class Mail(object):
def categories(self):
"""The Categories applied to this Mail. Must not exceed 10 items
- :rtype: list
+ :rtype: list(Category)
"""
return self._categories
@@ -385,7 +385,7 @@ class Mail(object):
"""The CustomArgs attached to this Mail.
Must not exceed 10,000 characters.
- :rtype: list
+ :rtype: list(CustomArg)
"""
return self._custom_args
|
make lax.full require concrete shapes
improves error message for | @@ -418,6 +418,13 @@ def tie_in(x, y):
return tie_in_p.bind(x, y)
def full(shape, fill_value, dtype):
+ try:
+ shape = tuple(map(int, shape))
+ except TypeError:
+ msg = ("`full` requires shapes to be concrete. If using `jit`, try using "
+ "`static_argnums` or applying `jit` to smaller subfunctions instead.")
+ raise TypeError(msg)
+
if onp.shape(fill_value):
msg = "full must be called with scalar fill_value, got fill_value.shape {}."
raise TypeError(msg.format(onp.shape(fill_value)))
@@ -2532,7 +2539,9 @@ def _check_shapelike(fun_name, arg_name, obj):
def _dynamic_slice_indices(operand, start_indices):
if isinstance(start_indices, (tuple, list)):
start_indices = concatenate([reshape(i, [1]) for i in start_indices], 0)
- return rem(start_indices, onp.array(operand.shape, start_indices.dtype))
+ # map int over operand.shape to raise any dynamic-shape errors
+ shape = onp.asarray(map(int, operand.shape), start_indices.dtype)
+ return rem(start_indices, shape)
def _const(example, val):
|
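A small JAX sketch of the failure mode and the workaround suggested by the new error message (the exact exception text depends on the JAX version):

```python
import jax
import jax.numpy as jnp
from jax import lax

def zeros(n):
    return lax.full((n,), 0.0, jnp.float32)

print(jax.jit(zeros, static_argnums=0)(3))   # ok: n is concrete at trace time
try:
    jax.jit(zeros)(3)                        # n becomes a tracer -> shape not concrete
except TypeError as e:
    print("TypeError:", e)
```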
MAINT: adopted a better pattern for progressive_align
in align, maintenances | @@ -195,7 +195,7 @@ class progressive_align(ComposableSeq):
self._make_tree = guide_tree
guide_tree = None # callback takes precedence
else:
- self._make_tree = quick_tree().quick_tree
+ self._make_tree = align_to_ref(moltype=self._moltype) + dist.fast_slow_dist() + quick_tree()
if guide_tree is not None:
if type(guide_tree) == str:
@@ -217,9 +217,6 @@ class progressive_align(ComposableSeq):
def _build_guide(self, seqs):
crude_aligner = align_to_ref(moltype=self._moltype)
aln = crude_aligner(seqs)
- if self._make_tree.__name__ == "quick_tree":
- fast_slow_dist = dist.fast_slow_dist()
- aln = fast_slow_dist(aln)
tree = self._make_tree(aln)
if self._scalar != 1:
scaler = scale_branches(scalar=self._scalar)
|
ENH: annotated tree now includes motif probs per node
[NEW] get_lengths_as_ens() and get_paralinear_metric() now take motif_probs
as an optional argument. get_annotated_tree() uses this to reduce the
number of times they're calculated. Also adds them to edge.params['mprobs']. | @@ -571,8 +571,9 @@ class LikelihoodFunction(ParameterController):
assert length_as in ("ENS", "paralinear", None)
d = self.get_param_value_dict(["edge"])
lengths = d.pop("length")
- ens = self.get_lengths_as_ens()
- plin = self.get_paralinear_metric()
+ mprobs = self.get_motif_probs_by_node()
+ ens = self.get_lengths_as_ens(motif_probs=mprobs)
+ plin = self.get_paralinear_metric(motif_probs=mprobs)
if length_as == "ENS":
lengths = ens
elif length_as == "paralinear":
@@ -581,10 +582,12 @@ class LikelihoodFunction(ParameterController):
tree = self._tree.deepcopy()
for edge in tree.get_edge_vector():
if edge.name == "root":
+ edge.params["mprobs"] = mprobs[edge.name].todict()
continue
edge.params["ENS"] = ens[edge.name]
edge.params["paralinear"] = plin[edge.name]
edge.params["length"] = lengths[edge.name]
+ edge.params["mprobs"] = mprobs[edge.name].todict()
for par in d:
val = d[par][edge.name]
if par == length_as:
@@ -667,8 +670,14 @@ class LikelihoodFunction(ParameterController):
)
return scaled_lengths
- def get_paralinear_metric(self):
- """returns {edge.name: paralinear, ...}"""
+ def get_paralinear_metric(self, motif_probs=None):
+ """returns {edge.name: paralinear, ...}
+ Parameters
+ ----------
+ motif_probs : dict or DictArray
+ an item for each edge of the tree. Computed if not provided.
+ """
+ if motif_probs is None:
motif_probs = self.get_motif_probs_by_node()
plin = {}
for edge in self.tree.get_edge_vector(include_root=False):
@@ -681,10 +690,16 @@ class LikelihoodFunction(ParameterController):
return plin
- def get_lengths_as_ens(self):
+ def get_lengths_as_ens(self, motif_probs=None):
"""returns {edge.name: ens, ...} where ens is the expected number of substitutions
- for a stationary Markov process, this is just branch length"""
+ for a stationary Markov process, this is just branch length
+ Parameters
+ ----------
+ motif_probs : dict or DictArray
+ an item for each edge of the tree. Computed if not provided.
+ """
+ if motif_probs is None:
motif_probs = self.get_motif_probs_by_node()
node_names = self.tree.get_node_names()
node_names.remove("root")
|
[DOC] Wrong filename and minor grammar fix
* Upstart has .conf files, not .service files
See
* Fix small grammar problem | @@ -49,7 +49,7 @@ unzip /opt/oracle/instantclient-sdk-linux.x64-12.1.0.2.0.zip
export LD_LIBRARY_PATH=/opt/oracle/instantclient/lib:$LD_LIBRARY_PATH
```
-**Note:** Agent 6 uses upstart or systemd to orchestrate the datadog-agent service. Environment variables may need to be added to the service configuration files at the default locations of `/etc/init/datadog-agent.service` (Upstart) or `/lib/systemd/system/datadog-agent.service` (systemd). See documentation on [Upstart][4] or [systemd][5] for more information on how to configured these settings.
+**Note:** Agent 6 uses upstart or systemd to orchestrate the datadog-agent service. Environment variables may need to be added to the service configuration files at the default locations of `/etc/init/datadog-agent.conf` (Upstart) or `/lib/systemd/system/datadog-agent.service` (systemd). See documentation on [Upstart][4] or [systemd][5] for more information on how to configure these settings.
#### After installing either the JDBC Driver or the Instant Client
|
[modules/nic] reduce code complexity (hopefully)
fixes | @@ -47,6 +47,16 @@ class Module(bumblebee.engine.Module):
def _istunnel(self, intf):
return intf.startswith("tun")
+ def get_addresses(self, intf):
+ retval = []
+ try:
+ for ip in netifaces.ifaddresses(intf).get(netifaces.AF_INET, []):
+ if ip.get("addr", "") != "":
+ retval.append(ip.get("addr"))
+ except Exception:
+ return []
+ return retval
+
def _update_widgets(self, widgets):
interfaces = [ i for i in netifaces.interfaces() if not i.startswith(self._exclude) ]
@@ -56,14 +66,9 @@ class Module(bumblebee.engine.Module):
for intf in interfaces:
addr = []
state = "down"
- try:
- if netifaces.AF_INET in netifaces.ifaddresses(intf):
- for ip in netifaces.ifaddresses(intf)[netifaces.AF_INET]:
- if "addr" in ip and ip["addr"] != "":
- addr.append(ip["addr"])
+ for ip in self.get_addresses(intf):
+ addr.append(ip)
state = "up"
- except Exception as e:
- addr = []
widget = self.widget(intf)
if not widget:
widget = bumblebee.output.Widget(name=intf)
|
Update Friday the 13th (USA).cht
The code "Set Jason's Health To Zero" actually makes it impossible to kill Jason even after shutting it off mid-game. | @@ -44,8 +44,8 @@ cheat10_desc = "Set Powered Character"
cheat10_code = "051B:03"
cheat10_enable = false
-cheat11_desc = "Set Jason's Health To Zero"
-cheat11_code = "051C:00"
+cheat11_desc = "Jason only has one health bar left"
+cheat11_code = "051C:01"
cheat11_enable = false
cheat12_desc = "Infinite Children"
|
Fix typo in release note
This is a follow up to
Story:
Task: 28506 | ---
fixes:
- |
- Fixes an issue where the master TFTP image cache could not be disbled.
+ Fixes an issue where the master TFTP image cache could not be disabled.
The configuration option ``[pxe]/tftp_master_path`` may now be set to
the empty string to disable the cache. For more information, see
story `2004608 <https://storyboard.openstack.org/#!/story/2004608>`_.
|
2.5.14
Automatically generated by python-semantic-release | @@ -9,7 +9,7 @@ https://community.home-assistant.io/t/echo-devices-alexa-as-media-player-testers
"""
from datetime import timedelta
-__version__ = "2.5.13"
+__version__ = "2.5.14"
PROJECT_URL = "https://github.com/custom-components/alexa_media_player/"
ISSUE_URL = "{}issues".format(PROJECT_URL)
|
Update README.md
Tech and Comp section arrangement | @@ -24,12 +24,12 @@ Stern, or Data East pinball machines. MPF interfaces with machines via modern pi
MPF is written in Python 3. It is compatible with Windows, Mac, and Linux using the same code and configurations.
-MPF is a work in progress we are actively developing. We are reviewing commits weekly. MPF is MIT-licensed, developed by fun people, and supported by a vibrant pinball-loving community.
+MPF is MIT-licensed, developed by fun people, and supported by a vibrant pinball-loving community. It is a work in progress we are actively developing. We review commits weekly.
See also the [MPF Media Controller](https://github.com/missionpinball/mpf-mc/) (based on [Kivy](http://kivy.org))
which is used to control graphics and sounds, including high-res LCD displays, classic DMDs, and modern RGB LED DMDs.
-The MPF project homepage is here : http://missionpinball.org
+Visit the MPF project homepage is here : http://missionpinball.org
[](https://coveralls.io/github/missionpinball/mpf?branch=dev)
[](https://travis-ci.org/missionpinball/mpf)
|
Fixed pybullet envs doc
added missing modules in mock imports | @@ -208,7 +208,7 @@ epub_exclude_files = ['search.html']
# -- Options for autodoc ---------------------------------------------------
autodoc_member_order = 'bysource'
-autodoc_mock_imports = ['torch', 'pybullet', 'dm_control', 'mujoco', 'glfw',
+autodoc_mock_imports = ['torch', 'pybullet', 'pybullet_data', 'pybullet_utils', 'dm_control', 'mujoco', 'glfw',
'habitat', 'habitat_baselines', 'habitat_sim', 'igibson',
'gym_minigrid']
add_module_names = False
|
Add test with encoder dependencies for global defaults
Test with bert since it requires a path to the pretrained model to be injected into the config | @@ -91,3 +91,24 @@ def test_global_default_parameters_merge_with_defaults(csv_filename):
output_feature = updated_config[OUTPUT_FEATURES][0]
assert output_feature[DECODER] == updated_config[DEFAULTS][output_feature[TYPE]][DECODER][TYPE]
+
+
+def test_global_defaults_with_encoder_dependencies(csv_filename):
+ input_features = [text_feature(name="title", reduce_output="sum")]
+ output_features = [category_feature(name="article", embedding_size=3)]
+
+ config = {
+ INPUT_FEATURES: input_features,
+ OUTPUT_FEATURES: output_features,
+ DEFAULTS: {
+ TEXT: {
+ ENCODER: {TYPE: "bert"},
+ }
+ },
+ }
+
+ # Config should populate with the additional required fields for bert
+ updated_config = merge_with_defaults(config)
+
+ assert updated_config[INPUT_FEATURES][0][ENCODER] == "bert"
+ assert updated_config[INPUT_FEATURES][0]["pretrained_model_name_or_path"] == "bert-base-uncased"
|
Fix torch::jit::load docs
Summary:
`torch::jit::load` is currently incorrectly documented/rendered
soumith ezyang
Pull Request resolved: | @@ -15,18 +15,19 @@ TORCH_API void import_ir_module(
ModuleLookup module_lookup,
const std::string& filename);
-TORCH_API void import_ir_module(
- ModuleLookup module_lookup,
- std::istream& in);
+TORCH_API void import_ir_module(ModuleLookup module_lookup, std::istream& in);
+
+/// Loads a serialized `script::Module` from the given `istream`.
+///
+/// The istream must contain a serialized `script::Module`, exported via
+/// `torch::jit::ExportModule` in C++.
+TORCH_API std::shared_ptr<script::Module> load(std::istream& in);
/// Loads a serialized `script::Module` from the given `filename`.
///
/// The file stored at the location given in `filename` must contain a
/// serialized `script::Module`, exported either via `ScriptModule.save()` in
/// Python or `torch::jit::ExportModule` in C++.
-
-TORCH_API std::shared_ptr<script::Module> load(std::istream& in);
-
TORCH_API std::shared_ptr<script::Module> load(const std::string& filename);
} // namespace jit
|
Change map_location to be 'cpu'
* Change map_location to be 'cpu'
If you are on a CPU-only machine, it will give an error otherwise. Model averaging should not require a GPU; moreover, it may be faster to use CPU rather than move all models to the GPU to average them. | @@ -10,7 +10,7 @@ def average_models(model_files):
avg_generator = None
for i, model_file in enumerate(model_files):
- m = torch.load(model_file)
+ m = torch.load(model_file, map_location='cpu')
model_weights = m['model']
generator_weights = m['generator']
|
Everyone Ping: PR Review
Changed cryptic variable name.
Changed ping response to use `bot.constants.NEGATIVE_REPLIES`.
Changed ping response to only ping user once. | +import random
import textwrap
from typing import Dict, Iterable, List, Optional, Tuple
from discord import Embed, Member, Message
-from bot.constants import Colours
+from bot.constants import Colours, NEGATIVE_REPLIES
async def apply(
@@ -14,23 +15,27 @@ async def apply(
"""Detects if a user has sent an '@everyone' ping."""
relevant_messages = tuple(msg for msg in recent_messages if msg.author == last_message.author)
- ev_msgs_ct = 0
+ everyone_messages_count = 0
for msg in relevant_messages:
if "@everyone" in msg.content:
- ev_msgs_ct += 1
+ everyone_messages_count += 1
- if ev_msgs_ct > config["max"]:
+ if everyone_messages_count > config["max"]:
# Send the user an embed giving them more info:
embed_text = textwrap.dedent(
f"""
+ **{random.choice(NEGATIVE_REPLIES)}**
Please don't try to ping {last_message.guild.member_count:,} people.
- **It will not have good results.**
"""
)
+
+ # Make embed:
embed = Embed(description=embed_text, colour=Colours.soft_red)
- await last_message.channel.send(f"Hey {last_message.author.mention}!", embed=embed)
+
+ # Send embed:
+ await last_message.channel.send(embed=embed)
return (
- f"pinged the everyone role {ev_msgs_ct} times in {config['interval']}s",
+ f"pinged the everyone role {everyone_messages_count} times in {config['interval']}s",
(last_message.author,),
relevant_messages,
)
|
Fixed code block in "Displaying Figures Using Dash"
No code was changed. Just edited the markdown to show the code block correctly. | @@ -235,7 +235,7 @@ It is important to note that Dash does not use the renderers framework discussed
Instead, pass your figure as the `figure` parameter to the [`dcc.Graph`](https://dash.plot.ly/dash-core-components/graph) component, which is part of the [Dash Core Components](https://dash.plot.ly/dash-core-components) library. The code below demonstrates how to do this.
-<!-- #raw -->
+```python
import dash_core_components as dcc
import plotly.graph_objs as go
@@ -245,7 +245,7 @@ dcc.Graph(
id='example-graph-2',
figure=fig
)
-<!-- #endraw -->
+```
## Displaying Figures Using `ipywidgets`
Plotly figures can be displayed in [ipywidgets](https://ipywidgets.readthedocs.io/en/stable/) contexts using `plotly.graph_objects.FigureWidget` objects. `FigureWidget` is a figure graph object (just like `plotly.graph_objects.Figure`), so you can add traces to it and update it just like a regular `Figure`. But `FigureWidget` is also an `ipywidgets` object, which means that you can display it alongside other `ipywidgets` to build user interfaces right in the notebook.
|
TST: fix non-intercepted warnings from lxml in Python 3.6
pytest was (accidentally) filtering deprecation warnings raised
by lxml calling the getargspec() method of inspect in 3.5.
Added a filter to also catch the warning in its slightly different
3.6 form. | @@ -146,11 +146,16 @@ _warnings_to_ignore_by_pyver = {
r"The value of convert_charrefs will become True in 3\.5\. "
r"You are encouraged to set the value explicitly\."]),
(3, 5): set([
- # py.test raises this warning on Python 3.5.
- # This can be removed when fixed in py.test.
+ # py.test raised this warning in inspect on Python 3.5.
# See https://github.com/pytest-dev/pytest/pull/1009
+ # Keeping it since e.g. lxml as of 3.8.0 is still calling getargspec()
r"inspect\.getargspec\(\) is deprecated, use "
- r"inspect\.signature\(\) instead"])}
+ r"inspect\.signature\(\) instead"]),
+ (3, 6): set([
+ # inspect raises this slightly different warning on Python 3.6.
+ # Keeping it since e.g. lxml as of 3.8.0 is still calling getargspec()
+ r"inspect\.getargspec\(\) is deprecated, use "
+ r"inspect\.signature\(\) or inspect\.getfullargspec\(\)"])}
def enable_deprecations_as_exceptions(include_astropy_deprecations=True,
|
[doc] Upgrade README.md file in test folder.
fixes | -# Contribute CTS test case
+# Test Documentation
+## Introduction
+This **test** folder contains test cases and relative assistant tools for testing **WebNN API**.
+
+According to [API Documentation](https://github.com/intel/webml-polyfill/blob/master/docs/api.md) and [API Info](https://github.com/intel/webml-polyfill/blob/master/src/nn/Enums.js), these test cases including:
+* **Unit Tests** in `base` folder and `unit-test` folder, checking Interface, Attribute and Method to ensure meet its design and behaves as intended,
+* **CTS(Compatibility Test Suite) Tests** in `cts` folder, focus on testing behaves of [supported ops](https://github.com/intel/webml-polyfill/blob/master/docs/supported_ops.md), checking compatibility on cross platforms,
+* **End-to-End Tests** in `end2end-test` folder, testing using [ONNX models and sample data](https://github.com/intel/webml-polyfill/blob/master/test/end2end-test/README.md), checking model flow being performed as intended,
+* **Real Model Tests** in `realmodel` folder, testing behaves of some supported ops in a [real-model](https://github.com/intel/webml-polyfill/tree/master/test/realmodel#3-supported-onnxjs-models) scenario.
+```
+Note: Above 4 type tests were wrote with [Mocha test framework](https://mochajs.org/).
+```
+* **IDL(Interface Definition Language) Tests** in `wpt` folder which were generated by [idlharness.js API](https://web-platform-tests.org/writing-tests/idlharness.html).
+
+These relative assistant tools:
+* **Accuracy Measurement** in `tool/MeasureAccuracy` folder, for checking accuracy of semantic segmentation inference result with inputing lots of images,
+* **CI(Continuous Integration)** its baseline data in `tool/CI` folder, for checking webml-polyfill repo's PRs, leveraging [Travis CI](https://travis-ci.com/) tool testing on macOS platform, [Circle CI](https://circleci.com/) tool testing on Linux platform and [AppVeyor CI](https://www.appveyor.com/) on Windows platform,
+* **CTS Converter** in `cts/tool/CTSConverter` folder for converting [NN API CTS Tests](https://android.googlesource.com/platform/frameworks/ml/+/refs/tags/android-cts-10.0_r2) to these CTS test cases for WebNN API,
+* **per Layer Analyzer** in `tool/perLayerAnalyzer` folder, for running **Real Model Tests**, collecting and displaying each layer's performance to help analysing,
+* **Regression Checker** in `tool/RegressionChecker` folder, for checking native backend implement PRs with high quality(avoiding new problems such as crash, freeze, etc.).
+
+
+## Contribute CTS test case
The CTS tests in `./cts/test` folder were converted from original Android CTS test, for new created CTS test file, please add them into `./cts/test_supplement` folder
-## Steps
+### Steps
Here are steps to update `./cts/test_supplement/cts_supplement-all.js`:
* Copy your test file into `./cts/test_supplement`.
* Change directory into `CTSConverter`:
|
changed the boolean to testSaveHugeFiles (had the same name as a method)
lower the number of created matrices in some tests: it took too much time | @@ -4,7 +4,7 @@ from raytracing import *
inf = float("+inf")
-testSaveHugeFile = True
+testSaveHugeFiles = True
class TestMatrixGroup(envtest.RaytracingTestCase):
@@ -591,11 +591,11 @@ class TestSaveAndLoadMatrixGroup(envtest.RaytracingTestCase):
mg = MatrixGroup([Space(20), ThickLens(1.22, 10, 10, 10)])
self.assertSaveNotFailed(mg, self.fileName)
- @envtest.skipIf(not testSaveHugeFile, "Don't test saving a lot of matrices")
+ @envtest.skipIf(not testSaveHugeFiles, "Don't test saving a lot of matrices")
def testSaveHugeFile(self):
fname = self.tempFilePath("hugeFile.pkl")
- spaces = [Space(10) for _ in range(500)]
- lenses = [Lens(10) for _ in range(500)]
+ spaces = [Space(10) for _ in range(200)]
+ lenses = [Lens(10) for _ in range(200)]
elements = spaces + lenses
mg = MatrixGroup(elements)
self.assertSaveNotFailed(mg, fname)
@@ -655,11 +655,11 @@ class TestSaveAndLoadMatrixGroup(envtest.RaytracingTestCase):
self.assertLoadNotFailed(mg2, fname)
self.assertLoadEqualsMatrixGroup(mg2, mg1)
- @envtest.skipIf(not testSaveHugeFile, "Don't test saving a lot of matrices")
+ @envtest.skipIf(not testSaveHugeFiles, "Don't test saving a lot of matrices")
def testSaveThenLoadHugeFile(self):
fname = self.tempFilePath("hugeFile.pkl")
- spaces = [Space(10) for _ in range(500)]
- lenses = [Lens(10) for _ in range(500)]
+ spaces = [Space(10) for _ in range(125)]
+ lenses = [Lens(10) for _ in range(125)]
elements = spaces + lenses
mg1 = MatrixGroup(elements)
mg2 = MatrixGroup()
|
[internal] scala: fix scalac plugin help
Fix some typos and wording for scalac plugin help.
[ci skip-rust]
[ci skip-build-wheels] | @@ -35,7 +35,7 @@ class Scalac(Subsystem):
.advanced()
.deprecated(
removal_version="2.12.0dev0",
- hint="Use `--plugins-for-resolve` instead to use user resolves",
+ hint="Use `--scalac-plugins-for-resolve` instead to use user resolves",
)
)
@@ -44,10 +44,9 @@ class Scalac(Subsystem):
"--plugins-for-resolve",
help=(
"A dictionary, whose keys are the names of each JVM resolve that requires default "
- "Scala plugins, and the value is a comma-separated string consisting of scala plugin "
- "names. Each speficied plugin must have a corresponding `jvm_artifact` that specifies "
- "the name in its `experimental_provides_scala_plugin` field, and is compatible with "
- "the current resolve."
+ "`scalac` plugins, and the value is a comma-separated string consisting of scalac plugin "
+ "names. Each specified plugin must have a corresponding `scalac_plugin` target that specifies "
+ "that name in either its `plugin_name` field or is the same as its target name."
),
)
@@ -64,7 +63,7 @@ class Scalac(Subsystem):
.advanced()
.deprecated(
removal_version="2.12.0dev0",
- hint="Use `--plugins-for-resolve`, which will add plugin dependencies to JVM user resolves instead.",
+ hint="Use `--scalac-plugins-for-resolve` instead, which will add plugin dependencies to JVM user resolves.",
)
)
|
readme: remove outdated ref to test requirements
This commit also includes a whitespace change picked up by pre-commit. | @@ -92,9 +92,8 @@ You can check out the [CDK Definition of Infrastructure](https://gitlab.com/femi
1. Python, redis, and libmagic are required, but node and postgres are not.
2. Install dependencies with `pip install -r requirements.txt`
-3. Install the test dependencies with `pip install -r requirements-test.txt`
-4. Run the tests with `python -m pytest`
-5. The tests are not affected by your configuration in `config.yaml`.
+3. Run the tests with `python -m pytest`
+4. The tests are not affected by your configuration in `config.yaml`.
If you wish to run the tests against production database or
authentication servers (instead of the defaults, which are sqlite and
local authentication), you may put configuration settings in
|
changed month abbreviations in month_plot
to use three letter abbreviations (i.e. 'Jan', 'Feb') instead of a single letter abbreviation (i.e. 'j', 'f') | @@ -444,7 +444,7 @@ def month_plot(x, dates=None, ylabel=None, ax=None):
else:
x = pd.Series(x, index=pd.PeriodIndex(dates, freq="M"))
- xticklabels = ["j", "f", "m", "a", "m", "j", "j", "a", "s", "o", "n", "d"]
+ xticklabels = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sept", "Oct", "Nov", "Dec"]
return seasonal_plot(
x.groupby(lambda y: y.month), xticklabels, ylabel=ylabel, ax=ax
)
|
Fix bug in JSON help rendering
Wasn't displaying command help description. | @@ -82,7 +82,7 @@ class JSONHelpFormatter(object):
self._help_text = None
self._cur_dl = None
self._buf = []
- self.width = 0
+ self.width = 999999999
def write_usage(self, prog, args='', **_kw):
self._val["usage"] = {
|
Debugging: Add aborting to displayed stuff.
* However, this should not be in XML, but currently is, that one
should not include display stuff by default. | @@ -70,6 +70,9 @@ class StatementTry(StatementChildrenHavingBase):
source_ref=source_ref,
)
+ def getDetailsForDisplay(self):
+ return {"aborting": self.isStatementAborting()}
+
def computeStatement(self, trace_collection):
# This node has many children to handle, pylint: disable=I0021,too-many-branches,too-many-locals,too-many-statements
tried = self.subnode_tried
|
Update hrv_time.py
The feature name in the comment is hcvnn, but the code is mcvnn | @@ -65,7 +65,7 @@ def hrv_time(peaks, sampling_rate=1000, show=False, **kwargs):
* **MedianNN**: The median of the absolute values of the successive differences between RR
intervals.
* **MadNN**: The median absolute deviation of the RR intervals.
- * **HCVNN**: The median absolute deviation of the RR intervals (**MadNN**) divided by the
+ * **MCVNN**: The median absolute deviation of the RR intervals (**MadNN**) divided by the
median of the absolute differences of their successive differences (**MedianNN**).
* **IQRNN**: The interquartile range (**IQR**) of the RR intervals.
* **Prc20NN**: The 20th percentile of the RR intervals (Han, 2017; Hovsepian, 2015).
|
Add an alias for `ensure_future` as `async`
`async` is still present in 3.6 and `ensure_future` doesn't exist before 3.4.4 | @@ -48,6 +48,7 @@ from asyncio.tasks import (
ALL_COMPLETED as ALL_COMPLETED,
as_completed as as_completed,
ensure_future as ensure_future,
+ ensure_future as async,
gather as gather,
run_coroutine_threadsafe as run_coroutine_threadsafe,
shield as shield,
|
Eliminate self.tasks[id] from launch_if_ready
see | @@ -349,7 +349,7 @@ class DataFlowKernel(object):
# it might be that in the course of the update, we've gone back to being
# pending - in which case, we should consider ourself for relaunch
if task_record['status'] == States.pending:
- self.launch_if_ready(task_id)
+ self.launch_if_ready(task_record)
def handle_join_update(self, outer_task_id, inner_app_future):
# Use the result of the inner_app_future as the final result of
@@ -451,7 +451,7 @@ class DataFlowKernel(object):
def check_staging_inhibited(kwargs):
return kwargs.get('_parsl_staging_inhibit', False)
- def launch_if_ready(self, task_id):
+ def launch_if_ready(self, task_record):
"""
launch_if_ready will launch the specified task, if it is ready
to run (for example, without dependencies, and in pending state).
@@ -466,14 +466,7 @@ class DataFlowKernel(object):
launch_if_ready is thread safe, so may be called from any thread
or callback.
"""
- # after launching the task, self.tasks[task_id] is no longer
- # guaranteed to exist (because it can complete fast as part of the
- # submission - eg memoization)
- task_record = self.tasks.get(task_id)
- if task_record is None:
- # assume this task has already been processed to completion
- logger.debug("Task {} has no task record. Assuming it has already been processed to completion.".format(task_id))
- return
+ task_id = task_record['id']
if self._count_deps(task_record['depends']) == 0:
# We can now launch *task*
@@ -898,14 +891,14 @@ class DataFlowKernel(object):
for d in depends:
def callback_adapter(dep_fut):
- self.launch_if_ready(task_id)
+ self.launch_if_ready(task_def)
try:
d.add_done_callback(callback_adapter)
except Exception as e:
logger.error("add_done_callback got an exception {} which will be ignored".format(e))
- self.launch_if_ready(task_id)
+ self.launch_if_ready(task_def)
return app_fu
|
typo
typo discovered in run | @@ -245,7 +245,7 @@ SAI
('mode rate of water transmitted' +
' through a unit width of saturated soil - ' +
'either provided or calculated with Ksat ' +
- 'and soil depth),
+ 'and soil depth'),
'soil__saturated_hydraulic_conductivity':
('mode rate of water transmitted' +
' through soil - provided if transmissivity ' +
|
GDB helpers: enhance TokenReference pretty-print to rely on Token
TN: | @@ -724,5 +724,6 @@ class TokenReferencePrinter(BasePrinter):
if not self.value['tdh']:
return 'No_Token'
+ tdh = TDH(self.value['tdh'])
index = self.value['index']
- return '<Token {}/{}>'.format(index['token'], index['trivia'])
+ return str(tdh.get(index['token'], index['trivia']))
|
doc: add comments re: change in normalize() function
* doc: add comments re: change in normalize() function
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see | @@ -268,6 +268,15 @@ class NumberFeatureMixin(BaseFeatureMixin):
backend,
skip_save_processed_input,
):
+ # Had to replace normalize() function due to issue #1911
+ # this comment is to provide context for the change.
+ # original code
+ # def normalize(series: pd.Series) -> pd.Series:
+ # series = series.copy()
+ # numeric_transformer = get_transformer(metadata[feature_config[NAME]], preprocessing_parameters)
+ # series.update(numeric_transformer.transform(series.values))
+ # return series
+
def normalize(series: pd.Series) -> pd.Series:
# retrieve request numeric transformer
numeric_transformer = get_transformer(metadata[feature_config[NAME]], preprocessing_parameters)
|
Update android_roamingmantis.txt
Merging ```android_fakecop``` into ```roamingmantis. | # Copyright (c) 2014-2020 Maltrail developers (https://github.com/stamparm/maltrail/)
# See the file 'LICENSE' for copying permission
-# Aliases: roamingmantis, xloader, fakespy, moqhao, xighost
+# Aliases: roamingmantis, xloader, fakecop, fakespy, moqhao, xighost
# Reference: https://securelist.com/roaming-mantis-uses-dns-hijacking-to-infect-android-smartphones/85178/
@@ -22961,6 +22961,7 @@ sing-?post|smbc|smyoga|i?softbank|starbank|suyan|upsp|wygm|xinheli|yamato|yang|y
/shinhan.apk
/SingPost.apk
/smartcat.apk
+/Swiss%20Post.apk
/uuocrteytw.apk
/chrome.apk
/chrome_bate.apk
@@ -23143,6 +23144,25 @@ http://102.129.249.124
wmsquare.store
tribiko.club
+# Reference: https://twitter.com/ninoseki/status/1255794508173701121
+
+test-b50fd.firebaseio.com
+
+# Reference: https://twitter.com/malwrhunterteam/status/1271058195344220162
+
+http://111.251.77.22
+
+# Reference: https://twitter.com/malwrhunterteam/status/1277681383268286469
+# Reference: https://www.virustotal.com/gui/file/ecdc73645331d992820450f2b1350b78020d2de342a1556f66213e0d264cadb3/detection
+
+61.218.32.17:8888
+grd77.com
+
+# Reference: https://www.virustotal.com/gui/file/e6818b120babd15f2446c8a5595f98b5a7303033ad8ab0efa752e2bb8a6f0796/detection
+
+172.67.221.238:2095
+apiserver.zzfyp.com
+
# Reference: https://twitter.com/malwaretracekr/status/1272785015353438209
http://102.129.249.140
|
Bump cache versions relying on RenamedDistricts
When we added FIN<-->IN in
we should have also bumped these, because it's possible we had poisoned
data stored from before the mapping where we'd be returning `None`
without taking the mapping into consideration.
And indeed, this will fix | @@ -16,7 +16,7 @@ from backend.common.tasklets import typed_tasklet
class DistrictQuery(CachedDatabaseQuery[Optional[District], Optional[DistrictDict]]):
- CACHE_VERSION = 1
+ CACHE_VERSION = 2
CACHE_KEY_FORMAT = "district_{district_key}"
DICT_CONVERTER = DistrictConverter
@@ -53,7 +53,7 @@ class DistrictsInYearQuery(CachedDatabaseQuery[List[District], List[DistrictDict
class DistrictHistoryQuery(CachedDatabaseQuery[List[District], List[DistrictDict]]):
- CACHE_VERSION = 1
+ CACHE_VERSION = 2
CACHE_KEY_FORMAT = "district_history_{abbreviation}"
DICT_CONVERTER = DistrictConverter
|
values bugfix
Fixed statistic output to return values instead of xarray objects. | @@ -78,27 +78,27 @@ def compare_model_and_inst(pairs=None, inst_name=[], mod_name=[],
# Calculate the desired statistics
if 'mean_err' in methods:
for iname in inst_name:
- stat_dict[iname]['mean_err'] = diff_data[iname].mean()
+ stat_dict[iname]['mean_err'] = diff_data[iname].mean().values
if 'mean_abs_err' in methods:
for iname in inst_name:
- stat_dict[iname]['mean_abs_err'] = abs(diff_data[iname]).mean()
+ stat_dict[iname]['mean_abs_err'] = abs(diff_data[iname]).mean().values
if 'median_err' in methods:
for iname in inst_name:
- stat_dict[iname]['median_err'] = diff_data[iname].median()
+ stat_dict[iname]['median_err'] = diff_data[iname].median().values
if 'median_abs_err' in methods:
for iname in inst_name:
- stat_dict[iname]['median_abs_err'] = abs(diff_data[iname]).median()
+ stat_dict[iname]['median_abs_err'] = abs(diff_data[iname]).median().values
if 'moments_err' in methods:
for iname in inst_name:
- mmean = diff_data[iname].mean()
- mstd = diff_data[iname].std()
+ mmean = diff_data[iname].mean().values
+ mstd = diff_data[iname].std().values
mskew = stats.skew(diff_data[iname], nan_policy='omit')
mkurt = stats.kurtosis(diff_data[iname], nan_policy='omit')
stat_dict[iname]['moments_err'] = [mmean, mstd, mskew, mkurt]
if 'moments_abs_err' in methods:
for iname in inst_name:
- mmean = abs(diff_data[iname]).mean()
- mstd = abs(diff_data[iname]).std()
+ mmean = abs(diff_data[iname]).mean().values
+ mstd = abs(diff_data[iname]).std().values
mskew = stats.skew(abs(diff_data[iname]), nan_policy='omit')
mkurt = stats.kurtosis(abs(diff_data[iname]), nan_policy='omit')
stat_dict[iname]['moments_abs_err'] = [mmean, mstd, mskew, mkurt]
@@ -130,12 +130,11 @@ def compare_model_and_inst(pairs=None, inst_name=[], mod_name=[],
stat_dict[iname]['deciles_abs_err'] = [q1, q3]
if 'percent_bias' in methods:
for iname in inst_name:
- stat_dict[iname]['percent_bias'] = (diff_data[iname].sum() /
- pairs.data_vars[iname].sum()) \
- * 100.0
+ stat_dict[iname]['percent_bias'] = (diff_data[iname].sum() / \
+ pairs.data_vars[iname].sum()).values * 100.0
if 'mean_sq_err' in methods:
for iname in inst_name:
- stat_dict[iname]['mean_sq_err'] = (diff_data[iname]**2).mean()
+ stat_dict[iname]['mean_sq_err'] = (diff_data[iname]**2).mean().values
return stat_dict, data_units
|
zerver: Remove dead code for accessing subscribers.
These haven't been used in years, and clutter the codebase. | @@ -2259,10 +2259,6 @@ def get_subscribers_query(stream: Stream, requesting_user: Optional[UserProfile]
)
return subscriptions
-def get_subscribers(stream: Stream,
- requesting_user: Optional[UserProfile]=None) -> List[UserProfile]:
- subscriptions = get_subscribers_query(stream, requesting_user).select_related()
- return [subscription.user_profile for subscription in subscriptions]
def get_subscriber_emails(stream: Stream,
requesting_user: Optional[UserProfile]=None) -> List[str]:
@@ -2270,15 +2266,6 @@ def get_subscriber_emails(stream: Stream,
subscriptions = subscriptions_query.values('user_profile__email')
return [subscription['user_profile__email'] for subscription in subscriptions]
-def maybe_get_subscriber_emails(stream: Stream, user_profile: UserProfile) -> List[str]:
- """ Alternate version of get_subscriber_emails that takes a Stream object only
- (not a name), and simply returns an empty list if unable to get a real
- subscriber list (because we're on the MIT realm). """
- try:
- subscribers = get_subscriber_emails(stream, requesting_user=user_profile)
- except JsonableError:
- subscribers = []
- return subscribers
def notify_subscriptions_added(user_profile: UserProfile,
sub_pairs: Iterable[Tuple[Subscription, Stream]],
|
Update windows-setup for Kubernetes 1.19
Corrects an issue whereby trying to sign a CSR throws an error within webhook-create-signed-cert.sh:
No resources found
error: no kind "CertificateSigningRequest" is registered for version "certificates.k8s.io/v1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
service/vpc-admission-webhook-svc created | #!/bin/bash
echo ${AWS_REGION}
yum install -y jq && yum install -y openssl
-curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.15.10/2020-02-22/bin/linux/amd64/kubectl
+curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/bin/kubectl
kubectl apply -f https://amazon-eks.s3.us-west-2.amazonaws.com/manifests/${AWS_REGION}/vpc-resource-controller/latest/vpc-resource-controller.yaml
|
Actually pass through date params
This was a confusing bug | @@ -279,8 +279,8 @@ def external_id(external_id):
return filters.term('external_id', external_id)
-def indexed_on(gt=None, gte=None, lt=None, lte=None, eq=None):
- return filters.date_range('@indexed_on', gt=None, gte=None, lt=None, lte=None, eq=None)
+def indexed_on(gt=None, gte=None, lt=None, lte=None):
+ return filters.date_range('@indexed_on', gt, gte, lt, lte)
def flatten_result(hit, include_score=False):
|
remove pip install for olefile dependency
remove pip install for olefile dependency, not needed | @@ -7,24 +7,6 @@ script: |-
from email.header import decode_header
from base64 import b64decode
from re import match
- import pip
- pip.main(['-q', 'install', 'olefile'])
- # --- LICENSE -----------------------------------------------------------------
- #
- # Copyright 2013 Matthew Walker
- #
- # This program is free software: you can redistribute it and/or modify
- # it under the terms of the GNU General Public License as published by
- # the Free Software Foundation, either version 3 of the License, or
- # (at your option) any later version.
- #
- # This program is distributed in the hope that it will be useful,
- # but WITHOUT ANY WARRANTY; without even the implied warranty of
- # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- # GNU General Public License for more details.
- #
- # You should have received a copy of the GNU General Public License
- # along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import glob
@@ -623,3 +605,4 @@ outputs:
scripttarget: 0
dependson: {}
timeout: 0s
+releaseNotes: "remove pip install for olefile dependency, not needed"
|
move tooltip DOM id to hover-content span
where props.className is applied | @@ -12,12 +12,13 @@ const Tooltip = props => {
return (
<>
- <div id={id} className="dcc-tooltip-bounding-box">
+ <div className="dcc-tooltip-bounding-box">
<span
data-dash-is-loading={is_loading || undefined}
className={`hover hover-${props.direction}`}
>
<span
+ id={id}
className={`hover-content ${props.className}`}
style={props.style}
>
|
Use urljoin to specify Gotify URL
User only has to give base URL, flexget will find endpoint just like the Android app does | @@ -10,18 +10,13 @@ from flexget.plugin import PluginWarning
from flexget.utils.requests import Session as RequestSession, TimedLimiter
from requests.exceptions import RequestException
from http import HTTPStatus
+from urllib.parse import urljoin
plugin_name = 'gotify'
log = logging.getLogger(plugin_name)
requests = RequestSession(max_retries=3)
-gotify_url_pattern = {
- 'type': 'string',
- 'pattern': r'^http(s?)\:\/\/.*\/message$',
- 'error_pattern': 'Gotify URL must begin with http(s) and end with `/message`',
-}
-
class GotifyNotifier(object):
"""
Example::
@@ -38,7 +33,7 @@ class GotifyNotifier(object):
schema = {
'type': 'object',
'properties': {
- 'url': gotify_url_pattern,
+ 'url': {'format': 'url'},
'token': {'type': 'string'},
'priority': {'type': 'integer', 'default': 4},
},
@@ -53,7 +48,9 @@ class GotifyNotifier(object):
"""
Send a Gotify notification
"""
- url = config['url']
+ base_url = config['url']
+ api_endpoint = '/message'
+ url = urljoin(base_url, api_endpoint)
params = {'token': config['token']}
priority = config['priority']
|
Review of Notes indentation
This patch fixes
+ It also fixes (forgotten comma) | @@ -1147,7 +1147,7 @@ def domain_to_idna(line):
Notes
-----
- This function encode only the domain to `idna` format because in
- most cases the encoding issue is due to a domain which looks like
+ most cases, the encoding issue is due to a domain which looks like
`b'\xc9\xa2oogle.com'.decode('idna')`.
- About the splitting:
We split because we only want to encode the domain and not the full
|
Update migration guide
Add a note about the `-c` option's behavior, which is reverted to its original
2.x behavior. | @@ -159,3 +159,11 @@ Now you have to simply replace the import statement with the following:
Similarly for schedulers, the ``reframe.core.schedulers.registry`` module must be replaced with ``reframe.core.backends``.
+
+
+Other Changes
+-------------
+
+ReFrame 3.0-dev0 introduced a `change <https://github.com/eth-cscs/reframe/pull/1125>`__ in the way that a search path for checks was constructed in the command-line using the ``-c`` option.
+ReFrame 3.0 reverts the behavior of the ``-c`` to its original one (i.e., ReFrame 2.x behavior), in which multiple paths can be specified by passing multiple times the ``-c`` option.
+Overriding completely the check search path can be achieved in ReFrame 3.0 through the :envvar:`RFM_CHECK_SEARCH_PATH` environment variable or the corresponding configuration option.
|
NullExpr: disable ref-counting, add comments
TN: | @@ -2741,9 +2741,19 @@ class NullExpr(BasicExpr):
Resolved expression for the null expression corresponding to some type.
"""
+ # Note that this is a BasicExpr subclass instead of a LiteralExpr one
+ # because we want the null value to be an identifier, so that the Ada
+ # overloading resolution works when such expressions are passed as
+ # subprogram arguments.
+
def __init__(self, type, abstract_expr=None):
super(NullExpr, self).__init__(
'Null_Value', type.nullexpr(), type, [],
+
+ # Ref-counting is correct, but not needed for null values, so avoid
+ # generating it to reduce code bloat.
+ requires_incref=False,
+
abstract_expr=abstract_expr,
)
|
Python extensions need to, and are allowed to, bring in Python.h as
a dependency. | @@ -71,6 +71,7 @@ def nucleus_py_extension(name, srcs = [], deps = [], **kwargs):
linkstatic = 0,
linkshared = 1,
srcs = srcs,
+ deps = ["//external:python_headers"],
**kwargs
)
|