message | diff
---|---
fix http redirect and json logging
HG--
branch : feature/microservices | @@ -73,7 +73,7 @@ upstream grafanads {
{% endif %}
-{% if nginx_json_logging %}
+{% if nginx_json_logging == "1" %}
log_format noc_format '{ "@timestamp": "$time_iso8601", '
'"@fields": { '
'"remote_addr": "$remote_addr", '
@@ -97,7 +97,7 @@ log_format noc_format '$remote_addr - $remote_user [$time_local] '
{% endif %}
-{% if nginx_http_redirect_enabled %}
+{% if nginx_http_redirect_enabled == "1" %}
server {
listen 80;
server_name {{ noc_web_host }};
|
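A minimal sketch (assuming the `jinja2` package; the variable name `flag` is illustrative) of why the bare truthiness test misbehaves when the toggle is rendered as a string: any non-empty string, including `"0"`, is truthy in Jinja, so only the explicit comparison against `"1"` behaves like a real on/off switch.

```python
# Minimal sketch: string flags are truthy even when they mean "off".
from jinja2 import Template

bare = Template("{% if flag %}enabled{% else %}disabled{% endif %}")
strict = Template('{% if flag == "1" %}enabled{% else %}disabled{% endif %}')

for value in ("1", "0", ""):
    # "0" still enables the block under the bare test; the comparison does not.
    print(repr(value), bare.render(flag=value), strict.render(flag=value))
```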
travis didn't like that last commit
Just going to put this here to see if it fixes that | #include <cinttypes>
#include <stdexcept>
-#define DEF(METHOD) def_static(#METHOD, &JaggedArraySrc::METHOD<std::int64_t>)\
+#define DEF(METHOD) .def_static(#METHOD, &JaggedArraySrc::METHOD<std::int64_t>)\
.def_static(#METHOD, &JaggedArraySrc::METHOD<std::uint64_t>)\
.def_static(#METHOD, &JaggedArraySrc::METHOD<std::int32_t>)\
.def_static(#METHOD, &JaggedArraySrc::METHOD<std::uint32_t>)\
@@ -234,10 +234,10 @@ public:
PYBIND11_MODULE(_jagged, m) {
py::class_<JaggedArraySrc>(m, "JaggedArraySrc")
.def(py::init<>())
- .DEF(testEndian)
- .DEF(offsets2parents)
- .DEF(counts2offsets)
- .DEF(startsstops2parents)
- .DEF(parents2startsstops)
- .DEF(uniques2offsetsparents);
+ DEF(testEndian)
+ DEF(offsets2parents)
+ DEF(counts2offsets)
+ DEF(startsstops2parents)
+ DEF(parents2startsstops)
+ DEF(uniques2offsetsparents);
}
|
Update dagster airflow api docs
Summary: Update dagster airflow api docs
Test Plan: none
Reviewers: nate | @@ -8,3 +8,11 @@ Airflow (dagster_airflow)
.. autofunction:: make_airflow_dag_for_operator
.. autofunction:: make_airflow_dag_containerized
+
+.. autofunction:: make_dagster_pipeline_from_airflow_dag
+
+.. autofunction:: make_dagster_repo_from_airflow_dags_path
+
+.. autofunction:: make_dagster_repo_from_airflow_dag_bag
+
+.. autofunction:: make_dagster_repo_from_airflow_example_dags
|
Typo in diffgeom.py
// edited by skirpichev | @@ -50,7 +50,7 @@ class Patch(Basic):
On a manifold one can have many patches that do not always include the
whole manifold. On these patches coordinate charts can be defined that
- permit the parametrization of any point on the patch in terms of a tuple
+ permit the parameterization of any point on the patch in terms of a tuple
of real numbers (the coordinates).
This object serves as a container/parent for all coordinate system charts
|
Update README.md
Updated information about using the cache. | @@ -151,6 +151,14 @@ Better documentation coming later, but for now, here's a summary of the most use
* mesh - Access mesh operations
* get - Download an object. Can merge multiple segmentids
* save - Download an object and save it in `.obj` format. You can combine equivialences into a single object too.
+* cache - Access cache operations
+ * enabled - Boolean switch to enable/disable cache. If true, on reading, check local disk cache before downloading, and save downloaded chunks to cache. When writing, write to the cloud then save the chunks you wrote to cache. If false, bypass cache completely. The cache is located at `$HOME/.cloudvolume/cache`.
+ * path - Property that shows the current filesystem path to the cache
+ * list - List files in cache
+ * num_files - Number of files in cache at this mip level , use all_mips=True to get them all
+ * num_bytes - Return the number of bytes in cache at this mip level, all_mips=True to get them all
+ * flush - Delete the cache at this mip level, preserve=Bbox/slice to save a spatial region
+ * flush_region - Delete a spatial region at this mip level
* exists - Generate a report on which chunks within a bounding box exist.
* delete - Delete the chunks within this bounding box.
@@ -163,7 +171,6 @@ Accessed as `vol.$PROPERTY` like `vol.mip`. Parens next to each property mean (d
* bounded (bool:True, rw) - If a region outside of volume bounds is accessed throw an error if True or Fill the region with black (useful for e.g. marching cubes's 1px boundary) if False.
* autocrop (bool:False, rw) - If bounded is False and this option is True, automatically crop requested uploads and downloads to the volume boundary.
* fill_missing (bool:False, rw) - If a file inside volume bounds is unable to be fetched use a block of zeros if True, else throw an error.
-* cache (bool:False, rw) - If true, on reading, check local disk cache before downloading, and save downloaded chunks to cache. When writing, write to the cloud then save the chunks you wrote to cache. If false, bypass cache completely. The cache is located at `$HOME/.cloudvolume/cache`.
* info (dict, rw) - Python dict representation of Neuroglancer info JSON file. You must call `vol.commit_info()` to save your changes to storage.
* provenance (dict-like, rw) - Data layer provenance file representation. You must call `vol.commit_provenance()` to save your changes to storage.
* available_mips (list of ints, r) - Query which mip levels are defined for reading and writing.
|
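A hedged usage sketch assembled only from the cache operations listed above; the layer path is a placeholder and the exact call signatures (for example, whether `num_bytes` is a method) are assumptions, not the library's confirmed API.

```python
# Hedged sketch of the cache interface described above; the path and the
# call signatures are assumptions for illustration only.
from cloudvolume import CloudVolume

vol = CloudVolume('gs://example-bucket/dataset/layer')  # placeholder path
vol.cache.enabled = True           # consult the local disk cache before downloading
print(vol.cache.path)              # e.g. $HOME/.cloudvolume/cache/...
print(vol.cache.list())            # files currently cached
print(vol.cache.num_bytes(all_mips=True))
vol.cache.flush()                  # delete the cache at the current mip level
```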
update grpc to 1.48.0
Updating grpc to 1.48.0
1.47.0 added support for mac m1 | @@ -236,8 +236,8 @@ def ray_deps_setup():
auto_http_archive(
name = "com_github_grpc_grpc",
# NOTE: If you update this, also update @boringssl's hash.
- url = "https://github.com/grpc/grpc/archive/refs/tags/v1.45.2.tar.gz",
- sha256 = "e18b16f7976aab9a36c14c38180f042bb0fd196b75c9fd6a20a2b5f934876ad6",
+ url = "https://github.com/grpc/grpc/archive/refs/tags/v1.48.0.tar.gz",
+ sha256 = "9b1f348b15a7637f5191e4e673194549384f2eccf01fcef7cc1515864d71b424",
patches = [
"@com_github_ray_project_ray//thirdparty/patches:grpc-cython-copts.patch",
"@com_github_ray_project_ray//thirdparty/patches:grpc-python.patch",
@@ -250,11 +250,11 @@ def ray_deps_setup():
# https://github.com/grpc/grpc/blob/1ff1feaa83e071d87c07827b0a317ffac673794f/bazel/grpc_deps.bzl#L189
# Ensure this rule matches the rule used by grpc's bazel/grpc_deps.bzl
name = "boringssl",
- sha256 = "e168777eb0fc14ea5a65749a2f53c095935a6ea65f38899a289808fb0c221dc4",
- strip_prefix = "boringssl-4fb158925f7753d80fb858cb0239dff893ef9f15",
+ sha256 = "534fa658bd845fd974b50b10f444d392dfd0d93768c4a51b61263fd37d851c40",
+ strip_prefix = "boringssl-b9232f9e27e5668bc0414879dcdedb2a59ea75f2",
urls = [
- "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/boringssl/archive/4fb158925f7753d80fb858cb0239dff893ef9f15.tar.gz",
- "https://github.com/google/boringssl/archive/4fb158925f7753d80fb858cb0239dff893ef9f15.tar.gz",
+ "https://storage.googleapis.com/grpc-bazel-mirror/github.com/google/boringssl/archive/b9232f9e27e5668bc0414879dcdedb2a59ea75f2.tar.gz",
+ "https://github.com/google/boringssl/archive/b9232f9e27e5668bc0414879dcdedb2a59ea75f2.tar.gz",
],
)
|
Made hvg option take a value that defaults to 2000
Added this for atac-seq extension that will use 10k+ features for integration. | @@ -11,7 +11,7 @@ def runIntegration(inPath, outPath, method, hvg, batch):
adata = sc.read(inPath)
if hvg:
- adata = scIB.preprocessing.hvg_intersect(adata, batch, adataOut=True)
+ adata = scIB.preprocessing.hvg_intersect(adata, batch, adataOut=True, max_genes=hvg)
integrated_tmp = scIB.metrics.measureTM(method, adata, batch)
@@ -32,7 +32,7 @@ if __name__=='__main__':
parser.add_argument('-i', '--input_file', required=True)
parser.add_argument('-o', '--output_file', required=True)
parser.add_argument('-b', '--batch', required=True, help='Batch variable')
- parser.add_argument('-v', '--hvgs', help='Preselect for HVGs', action='store_true')
+ parser.add_argument('-v', '--hvgs', help='Number of highly variable genes', default=2000)
args = parser.parse_args()
file = args.input_file
|
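The change above turns `--hvgs` from a boolean flag into an option that carries a value and defaults to 2000. A minimal argparse sketch of the difference (adding `type=int`, which the diff leaves implicit):

```python
# Minimal sketch: an option with a value and a default instead of store_true.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-v', '--hvgs', type=int, default=2000,
                    help='Number of highly variable genes')

print(parser.parse_args([]).hvgs)              # -> 2000 (default)
print(parser.parse_args(['-v', '5000']).hvgs)  # -> 5000 (explicit value)
```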
compose: Fix spacing of presence circles in typeahead.
This commit increases spacing between presence circles and user avatar
in typeahead suggestions.
These changes were discussed in #design>Presence. | @@ -630,7 +630,7 @@ strong {
margin: 0 5px 0 -6px;
position: relative;
top: 6px;
- right: 7px;
+ right: 8px;
display: inline-block;
}
}
|
minor DOC: fix unfinished sentence, some punctuation
in the QUICKSTART.md | @@ -7,18 +7,18 @@ and verify that update as a client.
Unlike the underlying TUF modules that the CLI uses, the CLI itself is a bit
bare-bones. Using the CLI is the easiest way to familiarize yourself with
-how TUF works, however. It will serve as a very basic update system and use
+how TUF works, however. It will serve as a very basic update system.
----
-**Step (0)** - Make sure TUF is installed
+**Step (0)** - Make sure TUF is installed.
-See the [installation instructions for TUF](docs/INSTALLATION.rst).
+See the [installation instructions for TUF](INSTALLATION.rst).
The TUF CLI makes use of some crypto dependencies, so please include the
optional `pip install securesystemslib[crypto,pynacl]` step.
-**Step (1)** - Create a basic repository and client
+**Step (1)** - Create a basic repository and client.
The following command will set up a basic update repository and basic client
that knows about the repository. `tufrepo`, `tufkeystore`, and
@@ -69,7 +69,7 @@ signed by the Targets role's key, so that clients can verify that metadata
about `testfile` and then verify `testfile` itself.
-**Step (3)** - Serve the repo
+**Step (3)** - Serve the repo.
We'll host a toy http server containing the `testfile` update and the
repository's metadata.
|
Adds a HACK to catch NaN errors in customlm.py & proceed as if
linear solver failed to converge (some python 2.7 tests are running
into NaNs I think b/c of old scipy versions). | @@ -227,6 +227,8 @@ def custom_leastsq(obj_fn, jac_fn, x0, f_norm2_tol=1e-6, jac_norm_tol=1e-6,
#except _np.linalg.LinAlgError:
except _scipy.linalg.LinAlgError:
success = False
+ except ValueError: # HACK: Python2.7 test compatibility, in case NaNs occur
+ success = False
if profiler: profiler.mem_check("custom_leastsq: after linsolve")
if success: # linear solve succeeded
|
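For context on why catching `ValueError` covers the NaN case: SciPy's dense solver validates its inputs by default (`check_finite=True`) and raises `ValueError` when they contain NaNs, which is a different exception class from `LinAlgError`. A small sketch, not pyGSTi's actual solver path:

```python
# A NaN-poisoned system raises ValueError from scipy.linalg.solve (input
# validation), not LinAlgError, so both exceptions need to be caught.
import numpy as np
import scipy.linalg

A = np.array([[1.0, np.nan], [0.0, 1.0]])
b = np.array([1.0, 1.0])

try:
    x = scipy.linalg.solve(A, b)
    success = True
except (scipy.linalg.LinAlgError, ValueError):
    success = False  # proceed as if the linear solve failed to converge

print(success)  # -> False
```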
Aaand, it was not a database problem. Making search
case-insensitive again | @@ -268,7 +268,7 @@ def all_domain_new(domain, page):
def search(page, term):
""" The index page, with basic title search """
term = re.sub('[^A-Za-z0-9.,\-_\'" ]+', '', term)
- posts = misc.getPostList(misc.postListQueryBase().where(SubPost.title % ('%' + term + '%')),
+ posts = misc.getPostList(misc.postListQueryBase().where(SubPost.title ** ('%' + term + '%')),
'new', 1).dicts()
return render_template('index.html', page=page, sort_type='search',
|
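For readers unfamiliar with peewee's operator overloads: per the peewee documentation, `field % pattern` builds a `LIKE` expression (case-sensitive on backends such as PostgreSQL) while `field ** pattern` builds a case-insensitive `ILIKE`, which is what restores case-insensitive search here. The sketch below only illustrates the two matching semantics in plain Python; it is not the ORM call itself.

```python
# Plain-Python illustration of case-sensitive vs case-insensitive matching
# for a '%term%' pattern; the ORM maps % -> LIKE and ** -> ILIKE.
def like(title: str, term: str) -> bool:    # case-sensitive contains
    return term in title

def ilike(title: str, term: str) -> bool:   # case-insensitive contains
    return term.lower() in title.lower()

titles = ["Throbbing Gristle", "gristle recipes"]
print([t for t in titles if like(t, "gristle")])   # -> ['gristle recipes']
print([t for t in titles if ilike(t, "gristle")])  # -> both titles
```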
dev_settings: Remove email forwarding related settings.
These are now set in dev-secrets. | @@ -88,12 +88,6 @@ POST_MIGRATION_CACHE_FLUSHING = True
PASSWORD_MIN_LENGTH = 0
PASSWORD_MIN_GUESSES = 0
-# SMTP settings for forwarding emails sent in development
-# environment to an email account.
-EMAIL_HOST = ""
-EMAIL_PORT = 25
-EMAIL_HOST_USER = ""
-
# Two factor authentication: Use the fake backend for development.
TWO_FACTOR_CALL_GATEWAY = 'two_factor.gateways.fake.Fake'
TWO_FACTOR_SMS_GATEWAY = 'two_factor.gateways.fake.Fake'
|
JSON stringify object column values
JSON stringify any column values that are objects leaving non-object values untouched.
e.g. "[object Object]" will show as ""{""WALTHER P38"":1,""GEWEHR 43"":1,""FELDSPATEN"":1}"" | @@ -224,6 +224,20 @@ const RawScores = pure(({ classes, scores }) => {
selectableRows: "none",
rowsPerPageOptions: [10, 25, 50, 100, 250, 500, 1000],
onChangeRowsPerPage: (v) => setRowsPerPage(v),
+ onDownload: (buildHead, buildBody, columns, data) => {
+ // Convert any column values that are objects to JSON so they display in the csv as data instead of [object Object]
+ const expandedData = data.map((row) => {
+ return {
+ index: row.index,
+ data: row.data.map((colValue) =>
+ typeof colValue === "object"
+ ? JSON.stringify(colValue)
+ : colValue
+ ),
+ };
+ });
+ return buildHead(columns) + buildBody(expandedData);
+ },
}}
data={scores ? scores.toJS() : []}
columns={[
|
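The fix above lives in JavaScript (the data table's `onDownload` hook). As an analogous Python sketch of the same idea (JSON-encode any structured cell before writing CSV so it exports as data rather than an opaque placeholder), using made-up sample rows:

```python
# Analogous sketch: serialize dict/list cells to JSON before CSV export.
import csv, io, json

rows = [
    {"player": "alice", "weapons": {"WALTHER P38": 1, "GEWEHR 43": 1}},
    {"player": "bob", "weapons": None},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["player", "weapons"])
writer.writeheader()
for row in rows:
    writer.writerow({k: json.dumps(v) if isinstance(v, (dict, list)) else v
                     for k, v in row.items()})
print(buf.getvalue())
```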
kivy: InvoiceDialog: make LN invoice QR code scannable
Don't show the text and the QR code together, only the QR code:
the text takes up too much space, which make the QR hard to scan. | @@ -40,7 +40,9 @@ Builder.load_string('''
text: _('Invoice data')+ ':'
RefLabel:
data: root.data
+ text: root.data[:40] + "..."
name: _('Data')
+ show_text_with_qr: False
TopLabel:
text: _('Description') + ':'
RefLabel:
|
added two new printing commands
Also fixed | @@ -1901,7 +1901,8 @@ def VY_oct(item):
str: lambda: (lambda: item, lambda: oct(int(item)))[item.isnumeric()]()[2:]
}.get(VY_type(item), lambda:vectorise(VY_oct, item))()
def VY_print(item, end="\n", raw=False):
- global output
+ global output, printed
+ printed = True
t_item = type(item)
if t_item is Generator:
item._print(end)
|
Bug fixes
[formerly 2b000019c9dcae81df7891eb63bb84128b9aa4bd] [formerly d7d0723fb1a25f8081be2a8a64fa25cd08f8c9f3] [formerly 1a5e84f7033dc45bb0a21710cd5a77e08e17e26e] | @@ -344,11 +344,9 @@ def main(greppable=False, Cipher=False, text=None, debug=False, withArgs=False)
results = locals()
result = None
if withArgs:
- result = parse_args()
- else:
- result = locals()
+ result.update(arg_parsing())
- output = None
+ output = call_encryption(**result)
return output
@@ -357,6 +355,7 @@ def call_encryption(greppable=False, Cipher=False, text=None, debug=False):
if text is not None:
cipher_obj = Ciphey(text, greppable, Cipher, debug)
output = cipher_obj.decrypt()
+ return output
if __name__ == "__main__":
|
Improve set of AMI description
Avoid appending to AMI description if version is empty | @@ -63,10 +63,11 @@ phases:
DESCRIPTION="AWS ParallelCluster AMI for {{ test.OperatingSystemName.outputs.stdout }}"
append_description () {
- VALUE="$1"
+ KEY="$1"
+ VALUE="$2"
if [[ -n "${VALUE}" ]] && [[ ! "${VALUE}" =~ NOT_INSTALLED ]]; then
- echo "Appending ${VALUE} to decription"
- DESCRIPTION="${DESCRIPTION}, ${VALUE}"
+ echo "Appending ${KEY}-${VALUE} to decription"
+ DESCRIPTION="${DESCRIPTION}, ${KEY}-${VALUE}"
fi
}
@@ -128,7 +129,7 @@ phases:
# Kernel
KERNEL_VERSION="$(uname -r)"
add_tag "parallelcluster:kernel_version" "${KERNEL_VERSION}"
- append_description "kernel-${KERNEL_VERSION}"
+ append_description "kernel" "${KERNEL_VERSION}"
# sudo
add_tag "parallelcluster:sudo_version" "$(get_package_version "sudo")"
@@ -136,35 +137,35 @@ phases:
# Lustre
LUSTRE_VERSION="$(get_package_version "lustre-client")"
add_tag "parallelcluster:lustre_version" "${LUSTRE_VERSION}"
- append_description "lustre-${LUSTRE_VERSION}"
+ append_description "lustre" "${LUSTRE_VERSION}"
LUSTRE_VERSION="$(get_package_version "lustre-client-modules-aws")"
add_tag "parallelcluster:lustre_version" "${LUSTRE_VERSION}"
- append_description "lustre-${LUSTRE_VERSION}"
+ append_description "lustre" "${LUSTRE_VERSION}"
# EFA
EFA_VERSION="$(get_package_version "efa")"
add_tag "parallelcluster:efa_version" "${EFA_VERSION}"
- append_description "efa-${EFA_VERSION}"
+ append_description "efa" "${EFA_VERSION}"
# DCV
DCV_VERSION="$(get_package_version "nice-dcv-server")"
add_tag "parallelcluster:dcv_version" "${DCV_VERSION}"
- append_description "dcv-${DCV_VERSION}"
+ append_description "dcv" "${DCV_VERSION}"
# Slurm, Munge and PMIx
SLURM_VERSION="$(get_source_version "slurm")"
add_tag "parallelcluster:slurm_version" "${SLURM_VERSION}"
- append_description "slurm-${SLURM_VERSION}"
+ append_description "slurm" "${SLURM_VERSION}"
add_tag "parallelcluster:munge_version" "$(get_source_version "munge")"
add_tag "parallelcluster:pmix_version" "$(get_source_version "pmix")"
# Nvidia, Cuda and Nvidia FabricManager
NVIDIA_VERSION="$(get_modinfo "nvidia")"
add_tag "parallelcluster:nvidia_version" "${NVIDIA_VERSION}"
- append_description "nvidia-${NVIDIA_VERSION}"
+ append_description "nvidia" "${NVIDIA_VERSION}"
CUDA_VERSION="$(cat /usr/local/cuda/version.txt | cut -d' ' -f3)"
add_tag "parallelcluster:cuda_version" "${CUDA_VERSION}"
- append_description "cuda-${CUDA_VERSION}"
+ append_description "cuda" "${CUDA_VERSION}"
add_tag "parallelcluster:nvidia_fabricmanager_version" "$(get_package_version "nvidia-fabricmanager")"
# Add description
|
Integration tests for model-based spark transform
Summary: Add integration tests for model-based sequence model cfeval spark transform | @@ -716,7 +716,7 @@ class Seq2SlateRewardWithPreprocessor(ModelBase):
max_tgt_seq_len = self.model.max_tgt_seq_len
max_src_seq_len = self.model.max_src_seq_len
- # we use a fake slate_idx_with_presence to retrive the first
+ # we use a fake slate_idx_with_presence to retrieve the first
# max_tgt_seq_len candidates from
# len(slate_idx_with presence) == batch_size
# component: 1d tensor with length max_tgt_seq_len
|
django_language was removed in 1.8
We also don't need this block anyways as you can't edit your own user's
language on this page. | @@ -332,14 +332,7 @@ class BaseEditUserView(BaseUserSettingsView):
def update_user(self):
if self.form_user_update.is_valid():
- old_lang = self.request.couch_user.language
- if self.form_user_update.update_user():
- # if editing our own account we should also update the language in the session
- if self.editable_user._id == self.request.couch_user._id:
- new_lang = self.request.couch_user.language
- if new_lang != old_lang:
- self.request.session['django_language'] = new_lang
- return True
+ return self.form_user_update.update_user()
def post(self, request, *args, **kwargs):
saved = False
|
switch from raising an exception to logging error
Otherwise, blocked from disabling plugin | @@ -291,7 +291,7 @@ class KolibriHookMeta(SingletonMeta):
and cls._registered_hooks
and hook not in cls.registered_hooks
):
- raise RuntimeError(
+ logger.error(
"Attempted to register more than one instance of {}".format(
hook.__class__
)
|
Refresh add-flag-to-disable-local-history-expiration.patch for Chromium
90 | #include "base/callback_helpers.h"
#include "base/compiler_specific.h"
#include "base/containers/flat_set.h"
-@@ -871,7 +872,8 @@ void HistoryBackend::InitImpl(
+@@ -879,7 +880,8 @@ void HistoryBackend::InitImpl(
db_->GetStartDate(&first_recorded_time_);
// Start expiring old stuff.
|
Fix documentations style in DataFrame.pop
```
Warning, treated as error:
/.../koalas/databricks/koalas/frame.py:docstring of databricks.koalas.DataFrame.pop:6:Unexpected indentation.
```
Similar issues exist elsewhere. For some reason, it didn't fail before, but after more changes in the current master it started to fail as above. We should fix the style mistakes anyway. | @@ -2845,13 +2845,16 @@ defaultdict(<class 'list'>, {'col..., 'col...})]
def pop(self, item):
"""
Return item and drop from frame. Raise KeyError if not found.
+
Parameters
----------
item : str
Label of column to be popped.
+
Returns
-------
Series
+
Examples
--------
>>> df = ks.DataFrame([('falcon', 'bird', 389.0),
@@ -2859,18 +2862,21 @@ defaultdict(<class 'list'>, {'col..., 'col...})]
... ('lion', 'mammal', 80.5),
... ('monkey','mammal', np.nan)],
... columns=('name', 'class', 'max_speed'))
+
>>> df
name class max_speed
0 falcon bird 389.0
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
+
>>> df.pop('class')
0 bird
1 bird
2 mammal
3 mammal
Name: class, dtype: object
+
>>> df
name max_speed
0 falcon 389.0
@@ -2894,12 +2900,14 @@ defaultdict(<class 'list'>, {'col..., 'col...})]
1 parrot bird 24.0
2 lion mammal 80.5
3 monkey mammal NaN
+
>>> df.pop('a')
name class
0 falcon bird
1 parrot bird
2 lion mammal
3 monkey mammal
+
>>> df
b
max_speed
|
extension_modules should default to /proxy/extmods
Additionally, append it to the append_minionid_config_dirs
list, so each proxy caches its extension modules separately. | @@ -1633,7 +1633,8 @@ DEFAULT_PROXY_MINION_OPTS = {
'log_file': os.path.join(salt.syspaths.LOGS_DIR, 'proxy'),
'add_proxymodule_to_opts': False,
'proxy_merge_grains_in_module': True,
- 'append_minionid_config_dirs': ['cachedir', 'pidfile', 'default_include'],
+ 'extension_modules': os.path.join(salt.syspaths.CACHE_DIR, 'proxy', 'extmods'),
+ 'append_minionid_config_dirs': ['cachedir', 'pidfile', 'default_include', 'extension_modules'],
'default_include': 'proxy.d/*.conf',
# By default, proxies will preserve the connection.
|
window: don't log error on valid focus_behavior
The focus request handler has/had a not very readable collection of cases.
One case fell through and triggered a warning that shouldn't have been
emitted. This pr fixes this fall-through and makes the corresponding
section more readable.
Fixes:
Closes: | @@ -1164,11 +1164,18 @@ class Window(_Window):
logger.info("Focusing window")
self.qtile.current_screen.set_group(self.group)
self.group.focus(self)
- elif focus_behavior == "smart" and self.group.screen and self.group.screen == self.qtile.current_screen:
+ elif focus_behavior == "smart":
+ if not self.group.screen:
+ logger.info("Ignoring focus request")
+ return
+ if self.group.screen == self.qtile.current_screen:
logger.info("Focusing window")
self.qtile.current_screen.set_group(self.group)
self.group.focus(self)
- elif focus_behavior == "urgent" or (focus_behavior == "smart" and not self.group.screen):
+ else: # self.group.screen != self.qtile.current_screen:
+ logger.info("Setting urgent flag for window")
+ self.urgent = True
+ elif focus_behavior == "urgent":
logger.info("Setting urgent flag for window")
self.urgent = True
elif focus_behavior == "never":
|
Update CHANGELOG.md
Resolved problems | @@ -8,6 +8,8 @@ This release extends the planned support of the modules to OneView REST API vers
2. Modules upgraded in this release requires hpOneView version 5.0.0. Also, OneView Python library is now migrated to new repository which is available at https://github.com/HewlettPackard/oneview-python.
#### Modules supported in this release
+- image_streamer_deployment_plan
+- image_streamer_deployment_plan_facts
- oneview_enclosure
- oneview_enclosure_facts
- oneview_enclosure_group
@@ -70,8 +72,6 @@ This release extends the planned support of the modules to OneView REST API vers
3. Modules upgraded in this release requires hpOneView version 5.0.0b0 or above.
#### Modules supported in this release
-- image_streamer_deployment_plan
-- image_streamer_deployment_plan_facts
- oneview_connection_template
- oneview_connection_template_facts
- oneview_enclosure
|
Fix layer 66 not being the real layer 66
Telegram decided to update the scheme.tl without increasing
the layer number, so it had been unnoticed until now. | @@ -505,7 +505,7 @@ sendMessageGeoLocationAction#176f8ba1 = SendMessageAction;
sendMessageChooseContactAction#628cbc6f = SendMessageAction;
sendMessageGamePlayAction#dd6a8f48 = SendMessageAction;
sendMessageRecordRoundAction#88f27fbc = SendMessageAction;
-sendMessageUploadRoundAction#bb718624 = SendMessageAction;
+sendMessageUploadRoundAction#243e1c66 progress:int = SendMessageAction;
contacts.found#1aa1f784 results:Vector<Peer> chats:Vector<Chat> users:Vector<User> = contacts.Found;
@@ -794,6 +794,7 @@ pageBlockEmbed#cde200d1 flags:# full_width:flags.0?true allow_scrolling:flags.3?
pageBlockEmbedPost#292c7be9 url:string webpage_id:long author_photo_id:long author:string date:int blocks:Vector<PageBlock> caption:RichText = PageBlock;
pageBlockCollage#8b31c4f items:Vector<PageBlock> caption:RichText = PageBlock;
pageBlockSlideshow#130c8963 items:Vector<PageBlock> caption:RichText = PageBlock;
+pageBlockChannel#ef1751b5 channel:Chat = PageBlock;
pagePart#8dee6c44 blocks:Vector<PageBlock> photos:Vector<Photo> videos:Vector<Document> = Page;
pageFull#d7a19d69 blocks:Vector<PageBlock> photos:Vector<Photo> videos:Vector<Document> = Page;
|
TrivialFix: fix a typo in comment
octavia/amphorae/backends/utils/haproxy_query.py Line:78
show status -> show stat | @@ -75,7 +75,7 @@ class HAProxyQuery(object):
return dict_results
def show_stat(self, proxy_iid=-1, object_type=-1, server_id=-1):
- """Get and parse output from 'show status' command.
+ """Get and parse output from 'show stat' command.
:param proxy_iid: Proxy ID (column 27 in CSV output). -1 for all.
:param object_type: Select the type of dumpable object. Values can
|
Correcting source pybind11 library to install into Python
Summary: Pull Request resolved: | @@ -595,7 +595,7 @@ class build_ext(build_ext_parent):
def build_extensions(self):
# The caffe2 extensions are created in
- # build/caffe2/python/
+ # <pytorch_root>/torch/lib/pythonM.m/site-packages/caffe2/python/
# and need to be copied to build/lib.linux.... , which will be a
# platform dependent build folder created by the "build" command of
# setuptools. Only the contents of this folder are installed in the
@@ -616,7 +616,7 @@ class build_ext(build_ext_parent):
filename = self.get_ext_filename(fullname)
report("\nCopying extension {}".format(ext.name))
- src = os.path.join('build', filename)
+ src = os.path.join(cwd, 'torch', rel_site_packages, filename)
if not os.path.exists(src):
report("{} does not exist".format(src))
del self.extensions[i]
|
Update auto-comment.yml
fix typos | # Add comment related to interrupts action on new issues.
issueOpened: >
- Thank your for opening an issue. Our team's interrupts engineer will reivew your issue shortly.
+ Thank you for opening an issue. Our team's interrupts engineer will review your issue shortly.
**Issue Resolution:**
- [ ] _[Interrupts Engineer]_ Triage / apply categorization labels
@@ -9,4 +9,4 @@ issueOpened: >
- [ ] _Forseti Engineer]_ Add tasks and next steps to resolve this issue.
#pullRequestOpened: >
-# Thank your for raising your pull request.
+# Thank you for raising your pull request.
|
Fix test_utils error due to unmocked subprocess call
While I could have just added check_output=True in the call, it seemed
good to check the other call path. Seems related to this change: | @@ -91,8 +91,8 @@ key2: value2
mock_output.assert_called_once_with(["command", "to", "run"])
self.assertEqual(output, "command output")
- @mock.patch.object(subprocess, "check_output")
- def test_run_command_failure(self, mock_output):
- mock_output.side_effect = subprocess.CalledProcessError(1, "command")
+ @mock.patch.object(subprocess, "check_call")
+ def test_run_command_failure(self, mock_call):
+ mock_call.side_effect = subprocess.CalledProcessError(1, "command")
self.assertRaises(subprocess.CalledProcessError, utils.run_command,
["command", "to", "run"])
|
setup: Fix regex failure to match version in case of CRLF line feeds
This could happen e.g. in case of using pip3 to install Telethon directly from the git repo. | @@ -110,7 +110,7 @@ def main():
long_description = f.read()
with open('telethon/version.py', encoding='utf-8') as f:
- version = re.search(r"^__version__\s+=\s+'(.*)'$",
+ version = re.search(r"^__version__\s*=\s*'(.*)'.*$",
f.read(), flags=re.MULTILINE).group(1)
setup(
name='Telethon',
|
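A quick illustration of the failure mode: under `re.MULTILINE`, `$` matches just before `\n`, so a checkout with CRLF line endings leaves a `\r` between the closing quote and the anchor. Relaxing the tail, as in the diff, tolerates it.

```python
# Why the stricter pattern misses CRLF files: '$' matches before '\n', and the
# stray '\r' sits between the closing quote and the anchor.
import re

crlf_source = "__version__ = '1.2.3'\r\nother = 1\r\n"

strict = re.search(r"^__version__\s+=\s+'(.*)'$", crlf_source, flags=re.MULTILINE)
relaxed = re.search(r"^__version__\s*=\s*'(.*)'.*$", crlf_source, flags=re.MULTILINE)

print(strict)            # -> None
print(relaxed.group(1))  # -> 1.2.3
```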
Add recommended configuration
There are too many options available when configuring AWS load
balancers. Many users would want to see a recommended configuration and
not have to read through all the options available to them to decide what
they want to do. | @@ -4,6 +4,52 @@ The Ambassador Edge Stack is a platform agnostic Kubernetes API gateway. It will
This document serves as a reference for how different configuration options available when running Kubernetes in AWS. See [Installing Ambassador Edge Stack](../../user-guide/install/) for the various installation methods available.
+## tl;dr Recommended Configuration:
+There are lot of configuration options available to you when running Ambassador in AWS. While you should read this entire document to understand what is best for you, the following is the recommended configuration when running Ambassador in AWS:
+
+It is recommended to terminate TLS at Ambassador so you can take advantage of all the tls configuration options available in Ambassador including setting the allowed tls versions, setting `alpn_protocol` options, and enforcing http -> https redirection.
+
+When terminating TLS at Ambassador, you should deploy a L4 NLB with the proxy protocol enabled for the best performance out of your load balancer while still preserving the client ip address.
+
+The following `Service` should be configured to deploy an NLB with cross zone load balancing enabled (see [Network Load Balancer (NLB(#network-load-balancer-nlb) for caveat on the cross-zone-load-balancing annotation). You will need to configure the proxy protocol in the NLB manually in the AWS Console.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: ambassador
+ namespace: ambassador
+ annotations:
+ service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
+ service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
+spec:
+ type: LoadBalancer
+ ports:
+ - name: HTTP
+ port: 80
+ targetPort: 8080
+ - name: HTTPS
+ port: 443
+ targetPort: 8443
+ selector:
+ service: ambassador
+```
+
+ After deploying the `Service` above and manually enabling the proxy protocol you will need to deploy the following [Ambassador `Module`](../ambassador) to tell Ambassador to use the proxy protocol and then restart Ambassador for the configuration to take effect.
+
+ ```yaml
+ apiVersion: getambassador.io/v2
+ kind: Module
+ metadata:
+ name: ambassador
+ namespace: ambassador
+ spec:
+ config:
+ use_proxy_proto: true
+ ```
+
+ Ambassador will now expect to traffic from the load balancer to be wrapped with the proxy protocol so it can read the client ip address.
+
## AWS load balancer notes
AWS provides three types of load balancers:
|
Derive Debug trait for consensus sdk primitives
The SDK should derive these so users can debug their engines | @@ -21,6 +21,7 @@ use std::sync::{Mutex, atomic::{AtomicBool, Ordering}, mpsc::Receiver};
use consensus::service::Service;
/// An update from the validator
+#[derive(Debug)]
pub enum Update {
PeerConnected(PeerInfo),
PeerDisconnected(PeerId),
@@ -31,7 +32,7 @@ pub enum Update {
BlockCommit(BlockId),
}
-#[derive(Default)]
+#[derive(Default, Debug)]
pub struct BlockId(Vec<u8>);
impl Deref for BlockId {
type Target = Vec<u8>;
@@ -52,7 +53,7 @@ impl From<Vec<u8>> for BlockId {
}
/// All information about a block that is relevant to consensus
-#[derive(Default)]
+#[derive(Default, Debug)]
pub struct Block {
pub block_id: BlockId,
pub previous_id: BlockId,
@@ -61,7 +62,7 @@ pub struct Block {
pub payload: Vec<u8>,
}
-#[derive(Default)]
+#[derive(Default, Debug)]
pub struct PeerId(Vec<u8>);
impl Deref for PeerId {
type Target = Vec<u8>;
@@ -82,13 +83,13 @@ impl From<Vec<u8>> for PeerId {
}
/// Information about a peer that is relevant to consensus
-#[derive(Default)]
+#[derive(Default, Debug)]
pub struct PeerInfo {
pub peer_id: PeerId,
}
/// A consensus-related message sent between peers
-#[derive(Default)]
+#[derive(Default, Debug)]
pub struct PeerMessage {
pub message_type: String,
pub content: Vec<u8>,
|
Disallow `!` patterns in `build_ignore`.
As discussed previously, including a negation in `build_ignore` (unlike `pants_ignore`, which is interpreted directly by the `ignore` crate) amounts to adding an include pattern, which will not work as expected.
Closes
[ci skip-rust]
[ci skip-build-wheels] | @@ -1507,12 +1507,10 @@ class GlobalOptions(BootstrapOptions, Subsystem):
"--build-ignore",
help=softwrap(
"""
- Paths to ignore when identifying BUILD files.
+ Path globs or literals to ignore when identifying BUILD files.
This does not affect any other filesystem operations; use `--pants-ignore` for
that instead.
-
- Patterns use the gitignore pattern syntax (https://git-scm.com/docs/gitignore).
"""
),
advanced=True,
@@ -1684,6 +1682,13 @@ class GlobalOptions(BootstrapOptions, Subsystem):
validate_remote_headers("remote_execution_headers")
validate_remote_headers("remote_store_headers")
+ illegal_build_ignores = [i for i in opts.build_ignore if i.startswith("!")]
+ if illegal_build_ignores:
+ raise OptionsError(
+ "The `--build-ignore` option does not support negated globs, but was "
+ f"given: {illegal_build_ignores}."
+ )
+
@staticmethod
def create_py_executor(bootstrap_options: OptionValueContainer) -> PyExecutor:
rule_threads_max = (
|
Move the minio artifact download under try block
If the minio artifact download fails, the workteam is not deleted. | @@ -57,11 +57,12 @@ def test_workteamjob(
)
outputs = {"sagemaker-private-workforce": ["workteam_arn"]}
+
+ try:
output_files = minio_utils.artifact_download_iterator(
workflow_json, outputs, download_dir
)
- try:
response = sagemaker_utils.describe_workteam(sagemaker_client, workteam_name)
# Verify WorkTeam was created in SageMaker
|
Use correct command to activate virtualenv on Win10
Attempting to activate a virtualenv using `source` will result in
`CommandNotFoundException`. | @@ -78,7 +78,7 @@ Windows
pip2 install virtualenv
mkdir C:\Python27\venvs\cci\
virtualenv --python=C:\Python27\python.exe C:\Python27\venvs\cci\
- source C:\Python27\venvs\cci\Scripts\activate
+ C:\Python27\venvs\cci\Scripts\activate.ps1
Install CumulusCI
-----------------
|
Apply suggestions from code review
tiny wording changes | "id": "QZI9i2FJ0k3H"
},
"source": [
- "The first step is an `import` step that downloads the MNIST dataset and returns 4 numpy arrays as its output. "
+ "The first step is an `import` step that downloads the MNIST dataset and returns four numpy arrays as its output. "
]
},
{
"id": "ma53mucU0yF3"
},
"source": [
- "We then add a `Trainer` step, that takes the normalized data and trains a Keras classifier on the data. Note that the model is not explicitely saved within the step. Under the hood ZenML uses Materializers to automatically persist the Artifacts that result from each step into the Artifact Store."
+ "We then add a `Trainer` step, that takes the normalized data and trains a Keras classifier on the data. Note that the model is not explicitly saved within the step. Under the hood ZenML uses Materializers to automatically persist the Artifacts that result from each step into the Artifact Store."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "We did mention above that the Materializer takes care of persisiting your artifacts for you. But how do you acess your runs and their associated artifacts from code? Let's do that step by step."
+ "We did mention above that the Materializer takes care of persisting your artifacts for you. But how do you access your runs and their associated artifacts from code? Let's do that step by step."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "First off, we load your repository, this is where all your pipelines live. "
+ "First off, we load your repository: this is where all your pipelines live. "
]
},
{
"metadata": {},
"outputs": [],
"source": [
- "# Lets first extract out first run on the standard mnist dataset\n",
+ "# Let's first extract out the first run on the standard mnist dataset\n",
"mnist_run = mnist_pipeline.get_run(\"standard_mnist_training_run\")\n",
"\n",
"# Now we can extract our second run trained on fashion mnist\n",
|
Landmark bug fix
Pass the right coordinates to get_distance, remove meters attribute. | @@ -114,7 +114,7 @@ class Landmark:
else:
point = Point(*coordinates)
nearest = self.nearest_point(point)
- return get_distance(coordinates[0], coordinates[1]).meters
+ return get_distance(coordinates, nearest.coords[0])
def nearest_point(self, point):
'''Find nearest point in geometry, measured from given point.'''
|
avoid forcing new window/tab for github links
fix | <ul class="vertical-tabs__list">
<li>
<a class="vertical-tabs__tab vertical-tabs__tab--with-icon vertical-tabs__tab--condensed github-repo-info__item"
- data-key="html_url" data-attr="href" data-supplement="/stargazers" rel="noopener"
- target="_blank">
+ data-key="html_url" data-attr="href" data-supplement="/stargazers" rel="noopener">
<i class="fa fa-star" aria-hidden="true"></i>
<strong>{% trans %}Stars:{% endtrans %}</strong>
<span class="github-repo-info__item" data-key="stargazers_count"></span>
</li>
<li>
<a class="vertical-tabs__tab vertical-tabs__tab--with-icon vertical-tabs__tab--condensed github-repo-info__item"
- data-key="html_url" data-attr="href" data-supplement="/network" rel="noopener"
- target="_blank">
+ data-key="html_url" data-attr="href" data-supplement="/network" rel="noopener">
<i class="fa fa-code-branch" aria-hidden="true"></i>
<strong>{% trans %}Forks:{% endtrans %}</strong>
<span class="github-repo-info__item" data-key="forks_count"></span>
</li>
<li>
<a class="vertical-tabs__tab vertical-tabs__tab--with-icon vertical-tabs__tab--condensed github-repo-info__item"
- data-key="html_url" data-attr="href" data-supplement="/issues" rel="noopener"
- target="_blank">
+ data-key="html_url" data-attr="href" data-supplement="/issues" rel="noopener">
<i class="fa fa-exclamation-circle" aria-hidden="true"></i>
<strong>{% trans %}Open issues/PRs:{% endtrans %}</strong>
<span class="github-repo-info__item" data-key="open_issues_count"></span>
|
clear keys from set expires after deleting them from redis
if the expire "cache" is not reset, the class assumes an expire is set,
even though no expire was set for the key | from __future__ import annotations
import datetime
+import fnmatch
from typing import Optional, TYPE_CHECKING
from async_rediscache.types.base import RedisObject, namespace_lock
@@ -49,12 +50,15 @@ class DocRedisCache(RedisObject):
@namespace_lock
async def delete(self, package: str) -> bool:
"""Remove all values for `package`; return True if at least one key was deleted, False otherwise."""
+ pattern = f"{self.namespace}:{package}:*"
+
with await self._get_pool_connection() as connection:
package_keys = [
- package_key async for package_key in connection.iscan(match=f"{self.namespace}:{package}:*")
+ package_key async for package_key in connection.iscan(match=pattern)
]
if package_keys:
await connection.delete(*package_keys)
+ self._set_expires = {key for key in self._set_expires if not fnmatch.fnmatchcase(key, pattern)}
return True
return False
|
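A standalone sketch of the bookkeeping fix above: prune every key matching the deleted package's glob from the "expire already set" set, shown with made-up key names.

```python
# Prune cached-expire bookkeeping for keys matching a glob, mirroring the
# fnmatch.fnmatchcase filter added above.
import fnmatch

set_expires = {"doc:python:print", "doc:python:len", "doc:aiohttp:ClientSession"}
pattern = "doc:python:*"

set_expires = {key for key in set_expires if not fnmatch.fnmatchcase(key, pattern)}
print(set_expires)  # -> {'doc:aiohttp:ClientSession'}
```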
Slightly improve irfft doc
Summary: Pull Request resolved: | @@ -6289,6 +6289,7 @@ this normalizes the result by multiplying it with
:math:`\sqrt{\prod_{i=1}^K N_i}` so that the operator is unitary, where
:math:`N_i` is the size of signal dimension :math:`i`.
+.. note::
Due to the conjugate symmetry, :attr:`input` do not need to contain the full
complex frequency values. Roughly half of the values will be sufficient, as
is the case when :attr:`input` is given by :func:`~torch.rfft` with
@@ -6308,7 +6309,7 @@ See :func:`~torch.rfft` for details on conjugate symmetry.
The inverse of this function is :func:`~torch.rfft`.
.. warning::
- Generally speaking, the input of this function should contain values
+ Generally speaking, input to this function should contain values
following conjugate symmetry. Note that even if :attr:`onesided` is
``True``, often symmetry on some part is still needed. When this
requirement is not satisfied, the behavior of :func:`~torch.irfft` is
|
issue remove 'ssh' from checked processes
Can't be used due to regular Ansible behaviour | @@ -215,7 +215,9 @@ def make_containers(name_prefix='', port_offset=0):
return lst
-INTERESTING_COMMS = ('python', 'ssh', 'sudo', 'su', 'doas')
+# ssh removed from here because 'linear' strategy relies on processes that hang
+# around after the Ansible run completes
+INTERESTING_COMMS = ('python', 'sudo', 'su', 'doas')
def proc_is_docker(pid):
|
Incidents: catch 404s
Fixes BOT-ZN | @@ -143,7 +143,14 @@ async def add_signals(incident: discord.Message) -> None:
log.trace(f"Skipping emoji as it's already been placed: {signal_emoji}")
else:
log.trace(f"Adding reaction: {signal_emoji}")
+ try:
await incident.add_reaction(signal_emoji.value)
+ except discord.NotFound as e:
+ if e.code != 10008:
+ raise
+
+ log.trace(f"Couldn't react with signal because message {incident.id} was deleted; skipping incident")
+ return
class Incidents(Cog):
@@ -288,14 +295,20 @@ class Incidents(Cog):
members_roles: t.Set[int] = {role.id for role in member.roles}
if not members_roles & ALLOWED_ROLES: # Intersection is truthy on at least 1 common element
log.debug(f"Removing invalid reaction: user {member} is not permitted to send signals")
+ try:
await incident.remove_reaction(reaction, member)
+ except discord.NotFound:
+ log.trace("Couldn't remove reaction because the reaction or its message was deleted")
return
try:
signal = Signal(reaction)
except ValueError:
log.debug(f"Removing invalid reaction: emoji {reaction} is not a valid signal")
+ try:
await incident.remove_reaction(reaction, member)
+ except discord.NotFound:
+ log.trace("Couldn't remove reaction because the reaction or its message was deleted")
return
log.trace(f"Received signal: {signal}")
@@ -313,7 +326,10 @@ class Incidents(Cog):
confirmation_task = self.make_confirmation_task(incident, timeout)
log.trace("Deleting original message")
+ try:
await incident.delete()
+ except discord.NotFound:
+ log.trace("Couldn't delete message because it was already deleted")
log.trace(f"Awaiting deletion confirmation: {timeout=} seconds")
try:
|
Fix pointer events in #settings overlay form sidebar.
The pointer events for the sidebar were incorrect in the way they
were set such that when the sidebar was off to the right and
hidden it would still attract pointer events. | @@ -766,10 +766,12 @@ input[type=checkbox].inline-block {
background-color: #fff;
border-left: 1px solid #ddd;
+ pointer-events: none;
transition: all 0.3s ease;
}
#settings_page .form-sidebar.show {
+ pointer-events: auto;
transform: translateX(0px);
}
|
common: Use rhsm_repository module for RHCS
Instead of using subscription-manager with command module we can use
the rhsm_repository ansible module.
This module already uses repos list feature to determine if a
repository is enabled or not. That way this module is idempotent so
we don't need changed_when: false anymore. | ---
-- name: check if the red hat storage monitor repo is already present
- yum:
- list: repos
- update_cache: no
- register: rhcs_mon_repo
- when:
- - (mon_group_name in group_names or mgr_group_name in group_names)
- until: rhcs_mon_repo is succeeded
-
- name: enable red hat storage monitor repository
- command: subscription-manager repos --enable rhel-7-server-rhceph-{{ ceph_rhcs_version }}-mon-rpms
- changed_when: false
+ rhsm_repository:
+ name: "rhel-7-server-rhceph-{{ ceph_rhcs_version }}-mon-rpms"
when:
- (mon_group_name in group_names or mgr_group_name in group_names)
- - "'rhel-7-server-rhceph-'+ ceph_rhcs_version | string +'-mon-rpms' not in rhcs_mon_repo.results"
-
-- name: check if the red hat storage osd repo is already present
- yum:
- list: repos
- update_cache: no
- register: rhcs_osd_repo
- check_mode: no
- when:
- - osd_group_name in group_names
- until: rhcs_osd_repo is succeeded
- name: enable red hat storage osd repository
- command: subscription-manager repos --enable rhel-7-server-rhceph-{{ ceph_rhcs_version }}-osd-rpms
- changed_when: false
+ rhsm_repository:
+ name: "rhel-7-server-rhceph-{{ ceph_rhcs_version }}-osd-rpms"
when:
- osd_group_name in group_names
- - "'rhel-7-server-rhceph-'+ ceph_rhcs_version | string +'-osd-rpms' not in rhcs_osd_repo.results"
-
-- name: check if the red hat storage tools repo is already present
- yum:
- list: repos
- update_cache: no
- register: rhcs_tools_repo
- check_mode: no
- when:
- - (rgw_group_name in group_names or mds_group_name in group_names or nfs_group_name in group_names or iscsi_gw_group_name in group_names or client_group_name in group_names)
- until: rhcs_tools_repo is succeeded
- name: enable red hat storage tools repository
- command: subscription-manager repos --enable rhel-7-server-rhceph-{{ ceph_rhcs_version }}-tools-rpms
- changed_when: false
+ rhsm_repository:
+ name: "rhel-7-server-rhceph-{{ ceph_rhcs_version }}-tools-rpms"
when:
- (rgw_group_name in group_names or mds_group_name in group_names or nfs_group_name in group_names or iscsi_gw_group_name in group_names or client_group_name in group_names)
\ No newline at end of file
- - "'rhel-7-server-rhceph-'+ ceph_rhcs_version | string +'-tools-rpms' not in rhcs_tools_repo.results"
\ No newline at end of file
|
fixed a bug in `check_download_conflict`
fixed a bug when raising a `MaestralApiError` inside `check_download_conflict` if the Dropbox item has been deleted | @@ -1396,8 +1396,11 @@ class UpDownSync(object):
# get metadata of remote file
md = self.client.get_metadata(dbx_path)
if not md:
- raise MaestralApiError("Could not download. Item does not exist on Dropbox",
- dbx_path)
+ raise MaestralApiError(
+ "Could not download.",
+ "Item does not exist on Dropbox",
+ dbx_path=dbx_path
+ )
# no conflict if local file does not exist yet
if not osp.exists(local_path):
|
Bulk stocktake API
Pass list of pk/quantity dict objects | @@ -51,6 +51,43 @@ class StockFilter(FilterSet):
fields = ['quantity', 'part', 'location']
+class StockStocktake(APIView):
+
+ permission_classes = [
+ permissions.IsAuthenticatedOrReadOnly,
+ ]
+
+ def post(self, request, *args, **kwargs):
+
+ data = request.data
+
+ items = []
+
+ # Ensure each entry is valid
+ for entry in data:
+ if not 'pk' in entry:
+ raise ValidationError({'pk': 'Each entry must contain pk field'})
+ if not 'quantity' in entry:
+ raise ValidationError({'quantity': 'Each entry must contain quantity field'})
+
+ item = {}
+ try:
+ item['item'] = StockItem.objects.get(pk=entry['pk'])
+ except StockItem.DoesNotExist:
+ raise ValidationError({'pk': 'No matching StockItem found for pk={pk}'.format(pk=entry['pk'])})
+ try:
+ item['quantity'] = int(entry['quantity'])
+ except ValueError:
+ raise ValidationError({'quantity': 'Quantity must be an integer'})
+
+ items.append(item)
+
+ for item in items:
+ item['item'].stocktake(item['quantity'], request.user)
+
+ return Response({'success': 'success'})
+
+
class StockMove(APIView):
permission_classes = [
@@ -221,6 +258,8 @@ stock_api_urls = [
url(r'location/(?P<pk>\d+)/', include(location_endpoints)),
+ url(r'stocktake/?', StockStocktake.as_view(), name='api-stock-stocktake'),
+
url(r'move/?', StockMove.as_view(), name='api-stock-move'),
url(r'^tree/?', StockCategoryTree.as_view(), name='api-stock-tree'),
|
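Based on the validation in the view above, the request body is a JSON list of objects, each carrying `pk` and `quantity`. A hedged sketch of a client call; the base URL prefix and credentials are placeholders, and only the payload shape comes from the code:

```python
# Hedged client sketch for the bulk stocktake endpoint; URL and auth are
# placeholders, the payload shape follows the view's validation.
import requests

payload = [
    {"pk": 1, "quantity": 50},
    {"pk": 2, "quantity": 12},
]

response = requests.post(
    "http://localhost:8000/api/stock/stocktake/",  # placeholder base URL
    json=payload,
    auth=("user", "password"),                     # placeholder credentials
)
print(response.status_code, response.json())       # expects {'success': 'success'}
```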
ceph-defaults: remove backwards compat for containerized_deployment
The validation module does not get config options with the template
syntax rendered, so we're gonna remove that and just default it to
False. The backwards compat was schedule to be removed in 3.1 anyway. | @@ -503,8 +503,7 @@ mon_containerized_deployment: False # backward compatibility with stable-2.2, wi
osd_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
mds_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
rgw_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-containerized_deployment: "{{ True if mon_containerized_deployment or osd_containerized_deployment or mds_containerized_deployment or rgw_containerized_deployment else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-
+containerized_deployment: False
############
# KV store #
|
added new dict for encryption_at_rest and encryption_in_flight
properties | -encryption_property = {
+encryption_at_rest_property = {
'aws_db_instance': 'storage_encrypted',
'ebs_block_device': 'encrypted',
'aws_ebs_volume': 'encrypted',
@@ -8,6 +8,10 @@ encryption_property = {
'aws_elasticache_replication_group': 'at_rest_encryption_enabled',
'aws_emr_security_configuration': 'EnableAtRestEncryption'
}
+encryption_in_flight_property = {
+ 'aws_elasticache_replication_group': 'transit_encryption_enabled',
+ 'aws_emr_security_configuration': 'EnableInTransitEncryption'
+}
resource_name = {
'AWS Auto-Scaling Group': 'aws_autoscaling_group',
@@ -25,5 +29,6 @@ resource_name = {
'AWS VPC': 'aws_vpc',
'Azure SQL Database': 'azurerm_sql_database',
'Azure Storage Account': 'azurerm_storage_account',
- 'AWS Subnet': 'aws_subnet'
+ 'AWS Subnet': 'aws_subnet',
+ 'AWS ElastiCache Replication Group': 'aws_elasticache_replication_group'
}
|
Rename some of the build scripts arguments
"no_static_library" to "debug"
"no_shared_library" to "release" | @@ -53,11 +53,10 @@ def main():
# Build everything if a components list is non provided.
components_to_build = args.component if args.component is not None else COMPONENTS
-
- if 'static_library' in components_to_build and args.no_static_library:
- components_to_build.remove('static_library')
- if 'shared_library' in components_to_build and args.no_shared_library:
- components_to_build.remove('shared_library')
+ if args.debug:
+ components_to_build = ['shared_library', 'ffmpeg']
+ if args.release:
+ components_to_build = ['static_library', 'ffmpeg']
for component in components_to_build:
out_dir = get_output_dir(SOURCE_ROOT, target_arch, component)
@@ -75,10 +74,10 @@ def parse_args():
parser.add_argument('-t', '--target_arch', default='x64', help='x64 or ia32')
parser.add_argument('-c', '--component', nargs='+', default=None,
help='static_library, shared_library, ffmpeg')
- parser.add_argument('-D', '--no_static_library', action='store_true',
- help='Do not build static library version')
- parser.add_argument('-R', '--no_shared_library', action='store_true',
- help='Do not build shared library version')
+ parser.add_argument('-D', '--debug', action='store_true',
+ help='Build debug configuration')
+ parser.add_argument('-R', '--release', action='store_true',
+ help='Build release configuration')
return parser.parse_args()
|
Adding LsBoot to sos_archive spec source
The sos_archive context does not currently collect the contents of
/boot so contexts wrapping sosreports can not provide the LsBoot
spec set or parser. This commit adds a missing call to
simple_file() to pull in the specs for LsBoot. | @@ -67,6 +67,7 @@ class SosSpecs(Specs):
journal_since_boot = first_of([simple_file("sos_commands/logs/journalctl_--no-pager_--boot"), simple_file("sos_commands/logs/journalctl_--no-pager_--catalog_--boot")])
locale = simple_file("sos_commands/i18n/locale")
lsblk = first_file(["sos_commands/block/lsblk", "sos_commands/filesys/lsblk"])
+ ls_boot = simple_file("sos_commands/boot/ls_-lanR_.boot")
lscpu = simple_file("sos_commands/processor/lscpu")
lsinitrd = simple_file("sos_commands/boot/lsinitrd")
lsof = simple_file("sos_commands/process/lsof_-b_M_-n_-l")
|
override equality of Matrix
Matrix has a custom __eq__ method that checks whether parameters are the same between two objects | @@ -1362,6 +1362,11 @@ class Matrix(object):
description += "\nf = +inf (afocal)\n"
return description
+ def __eq__(self, other):
+ if isinstance(other, Matrix):
+ return self.__dict__ == other.__dict__
+ return False
+
class Lens(Matrix):
r"""A thin lens of focal f, null thickness and infinite or finite diameter
|
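A standalone reduction (not the library's actual `Matrix` class) of the `__dict__`-based equality pattern added above: instances compare equal exactly when all attributes match, and comparison against another type returns False.

```python
# Standalone sketch of __dict__-based equality.
class Matrix:
    def __init__(self, A=1.0, B=0.0, C=0.0, D=1.0):
        self.A, self.B, self.C, self.D = A, B, C, D

    def __eq__(self, other):
        if isinstance(other, Matrix):
            return self.__dict__ == other.__dict__
        return False

print(Matrix() == Matrix())        # True: identical parameters
print(Matrix(B=2.0) == Matrix())   # False: one attribute differs
print(Matrix() == "not a matrix")  # False: different type
```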
Debian: Allow the tool to also work with installed package.
* To test the package, we remove the "nuitka" package from the
source, and then run the tests. But as of recently, now the
test runner depends on "nuitka.tools".
* This now allows to use the installed package "nuitka.tools"
which normally isn't public. | # limitations under the License.
#
-""" Launcher for pylint checker tool.
+""" Launcher for output comparison tool.
"""
@@ -35,6 +35,10 @@ sys.path.insert(
)
)
)
+sys.path.insert(
+ 1,
+ "/usr/share/nuitka"
+)
from nuitka.tools.compare_with_cpython.__main__ import main # isort:skip
main()
|
Flag if an org has no default branding
This will help us identify when we might need to go and set the branding
as the default for the organisation. | @@ -799,6 +799,7 @@ def test_zendesk_new_email_branding_report(notify_db_session, mocker):
create_email_branding(id=uuid.UUID("1b7deb1f-ff1f-4d00-a7a7-05b0b57a185e"), name="brand-3")
org_1.email_branding_pool = [email_brand_1, email_brand_2]
org_2.email_branding_pool = [email_brand_2]
+ org_2.email_branding = email_brand_1
notify_db_session.commit()
mock_send_ticket = mocker.patch("app.celery.scheduled_tasks.zendesk_client.send_ticket_to_zendesk")
@@ -833,10 +834,10 @@ def test_zendesk_new_email_branding_report(notify_db_session, mocker):
}
for expected_html_fragment in (
- "<p>These are the new email brands uploaded since Monday 31 October 2022.</p>",
+ "<h2>New email branding to review</h2>\n<p>Uploaded since Monday 31 October 2022:</p>",
(
"<p>"
- '<a href="http://localhost:6012/organisations/113d51e7-f204-44d0-99c6-020f3542a527">org-1</a>:'
+ '<a href="http://localhost:6012/organisations/113d51e7-f204-44d0-99c6-020f3542a527">org-1</a> (no default):'
"</p>"
"<ul>"
"<li>"
@@ -857,8 +858,7 @@ def test_zendesk_new_email_branding_report(notify_db_session, mocker):
"</ul>"
),
(
- "<p>These email brands have also been uploaded. "
- "They are not associated with any organisation and do not need reviewing:</p>"
+ "<p>These new brands are not associated with any organisation and do not need reviewing:</p>"
"\n <ul>"
"<li>"
'<a href="http://localhost:6012/email-branding/1b7deb1f-ff1f-4d00-a7a7-05b0b57a185e/edit">brand-3</a>'
|
fix typo: container-format is bare
When adding an image to glance for use with anaconda deploy
interface, the tarball being added should be in the 'bare'
container-format, not 'compressed'. | @@ -127,8 +127,8 @@ glance:
--disk-format ari --shared anaconda-ramdisk-<version>
openstack image create --file ./squashfs.img --container-format ari \
--disk-format ari --shared anaconda-stage-<verison>
- openstack image create --file ./os-image.tar.gz --container-format \
- compressed --disk-format raw --shared \
+ openstack image create --file ./os-image.tar.gz \
+ --container-format bare --disk-format raw --shared \
--property kernel_id=<glance_uuid_vmlinuz> \
--property ramdisk_id=<glance_uuid_ramdisk> \
--property stage2_id=<glance_uuid_stage2> disto-name-version \
|
fixed AdaBound
fixed AdaBound
revert default code formatting | -import collections
import math
+import numbers
+import collections
from .. import utils
@@ -45,6 +46,11 @@ class AdaBound(base.Optimizer):
"""
def __init__(self, lr=1e-3, beta_1=0.9, beta_2=0.999, eps=1e-8, gamma=1e-3, final_lr=0.1):
+
+ if not isinstance(lr, numbers.Number):
+ raise ValueError(
+ f'Learning rate in AdaBound should be numeric but got {type(lr)}')
+
super().__init__(lr)
self.base_lr = lr
self.final_lr = final_lr
|
Map shortcuts to use a platform specific primary modifier
This should fix shortcuts on macOS for GTK4. | +from __future__ import annotations
+
+import sys
from typing import NamedTuple
from gi.repository import Gio, GLib, Gtk
from gaphor.abc import ActionProvider
+from gaphor.action import action
def apply_application_actions(component_registry, gtk_app):
@@ -12,11 +16,14 @@ def apply_application_actions(component_registry, gtk_app):
a = create_gio_action(act, provider, attrname)
gtk_app.add_action(a)
if act.shortcut:
- gtk_app.set_accels_for_action(f"{scope}.{act.name}", [act.shortcut])
+ gtk_app.set_accels_for_action(
+ f"{scope}.{act.name}",
+ [_platform_specific(s) for s in act.shortcuts],
+ )
return gtk_app
-if Gtk.get_major_version() == 3:
+if Gtk.get_major_version() == 3: # noqa C901
class ActionGroup(NamedTuple):
actions: Gio.SimpleActionGroup
@@ -59,6 +66,9 @@ if Gtk.get_major_version() == 3:
)
return ActionGroup(actions=action_group, shortcuts=accel_group)
+ def _platform_specific(shortcut):
+ return shortcut
+
else:
class ActionGroup(NamedTuple): # type: ignore[no-redef]
@@ -74,7 +84,9 @@ else:
a = create_gio_action(act, provider, attrname)
action_group.add_action(a)
for shortcut in act.shortcuts:
- store.append(_new_shortcut(shortcut, act.detailed_name))
+ store.append(
+ _new_shortcut(_platform_specific(shortcut), act.detailed_name)
+ )
return ActionGroup(actions=action_group, shortcuts=store)
def create_action_group(provider, scope) -> ActionGroup: # type: ignore[misc]
@@ -84,9 +96,17 @@ else:
a = create_gio_action(act, provider, attrname)
action_group.add_action(a)
for shortcut in act.shortcuts:
- store.append(_new_shortcut(shortcut, act.detailed_name))
+ store.append(
+ _new_shortcut(_platform_specific(shortcut), act.detailed_name)
+ )
return ActionGroup(actions=action_group, shortcuts=store)
+ def _platform_specific(shortcut):
+ return shortcut.replace(
+ "<Primary>", "<Meta>" if sys.platform == "darwin" else "<Control>"
+ )
+
+
def _new_shortcut(shortcut, detailed_name):
return Gtk.Shortcut.new(
trigger=Gtk.ShortcutTrigger.parse_string(shortcut),
@@ -111,7 +131,7 @@ def iter_actions(provider, scope):
provider_class = type(provider)
for attrname in dir(provider_class):
method = getattr(provider_class, attrname)
- act = getattr(method, "__action__", None)
+ act: action | None = getattr(method, "__action__", None)
if act and act.scope == scope:
yield (attrname, act)
|
Enable wayland backend tests in tox.ini
This makes pytest run with both the x11 and wayland backends when run
via tox. | @@ -37,7 +37,7 @@ commands =
pip install pywlroots>=0.13.4
python3 setup.py install
{toxinidir}/scripts/ffibuild
- python3 -m pytest -W error --cov libqtile --cov-report term-missing {posargs}
+ python3 -m pytest -W error --cov libqtile --cov-report term-missing --backend=x11 --backend=wayland {posargs}
[testenv:packaging]
deps =
|
stylechecks: restore __future__ checks
TN: | @@ -585,6 +585,12 @@ class PythonLang(LanguageChecker):
for node in ast.walk(root):
if isinstance(node, ast.ImportFrom):
report.set_context(filename, node_lineno(node) - 1)
+ if node.module == '__future__':
+ for alias in node.names:
+ if alias.name != 'annotations':
+ report.add('Forbidden annotation: {}'
+ .format(alias.name))
+ else:
self._check_imported_entities(report, node)
elif (
|
Add Responsive YT Video Embedding styling
The IBRACORP YouTube video embed was causing overflow scrolling horizontally on smaller screens/mobile devices. Just tweaked some CSS to make the embed iframe responsive, which solves the horizontal overflow problem. | @@ -188,3 +188,23 @@ a .logo-hover {
a:hover .logo-hover {
display: block;
}
+/* Responsive YT Video Embedding */
+.responsiveYT {
+ position: relative;
+ height: 0;
+ padding-top: 56%;
+ overflow: hidden;
+ max-width: 100%;
+}
+.responsiveYT iframe,
+.responsiveYT object,
+.responsiveYT embed {
+ position: absolute;
+ top: 0;
+ left: 0;
+ width: 100%;
+ height: 100%;
+}
+.responsiveYT .fluid-vids {
+ position: initial !important
+}
|
More pythonic endpoint name completion
Realized that I can use generator for endpoint names rather than list-builder. | @@ -48,10 +48,8 @@ def version_callback(value):
def complete_endpoint_name():
config_files = glob.glob('{}/*/config.py'.format(State.FUNCX_DIR))
- return [
- os.path.basename(os.path.dirname(config_file))
- for config_file in config_files
- ]
+ for config_file in config_files:
+ yield os.path.basename(os.path.dirname(config_file))
def check_pidfile(filepath, match_name, endpoint_name):
|
Closes Added Colors to SVG for Front and Rear Ports
* Added Colors to SVG for Front and Reaer Ports
Fix for feature request 10904 thanks to
* Simplify termination color resolution | @@ -166,7 +166,7 @@ class CableTraceSVG:
"""
if hasattr(instance, 'parent_object'):
# Termination
- return 'f0f0f0'
+ return getattr(instance, 'color', 'f0f0f0') or 'f0f0f0'
if hasattr(instance, 'device_role'):
# Device
return instance.device_role.color
|
Update Arkansas.md
Removed erroneous Nashville link
Added AR geos
id: ar-bentonville-1
-geolocation:
+geolocation: 36.3726466,-94.2106367
**Links**
@@ -36,11 +36,10 @@ tags: less-lethal, protester, tear-gas
id: ar-littlerock-2
-geolocation:
+geolocation: 34.7443623,-92.2879971
**Links**
-* https://www.youtube.com/watch?v=Pfn65qaXosU
* https://www.youtube.com/watch?v=p7z-u_a8qo0
@@ -54,7 +53,7 @@ tags: explosive, less-lethal, lrad, projectile, protester, tear-gas
id: ar-littlerock-1
-geolocation:
+geolocation: 34.746483,-92.2880644
**Links**
|
Adds addIdleNoiseToAllGates arg to build_nqnoise_gateset(...)
In order to facilitate testing idle tomography, this argument allows
the user to toggle whether the noise given by Gi, the global idle gate
is included in the non-Gi gates constructed by `build_nqnoise_gateset`. | @@ -141,6 +141,7 @@ def build_nqnoise_gateset(nQubits, geometry="line", cnot_edges=None,
extraWeight1Hops=0, extraGateWeight=0, sparse=False,
gateNoise=None, prepNoise=None, povmNoise=None,
sim_type="matrix", parameterization="H+S",
+ addIdleNoiseToAllGates=True,
return_clouds=False, verbosity=0): #, debug=False):
"""
TODO: docstring (cnot_edges)
@@ -212,6 +213,10 @@ def build_nqnoise_gateset(nQubits, geometry="line", cnot_edges=None,
using a path-integral approach designed for larger numbers of qubits,
and are considered expert options.
+ addIdleNoiseToAllGates: bool, optional
+ Whether the global idle should be added as a factor following the
+ ideal action of each of the non-idle gates.
+
return_clouds : bool, optional
Whether to return a dictionary of "cloud" objects, used for constructing
the gate sequences necessary for probing the returned GateSet's
@@ -282,6 +287,9 @@ def build_nqnoise_gateset(nQubits, geometry="line", cnot_edges=None,
idleOP = False
+ if not addIdleNoiseToAllGates:
+ idleOP = False # then never add global idle noise.
+
# a dictionary of "cloud" objects
# keys = (target_qubit_indices, cloud_qubit_indices) tuples
# values = list of gate-labels giving the gates associated with that cloud (necessary?)
|
Stop defaulting simple_polygons to empty array
This is now done by the Admin app [1].
[1]: | @@ -2336,9 +2336,6 @@ class BroadcastMessage(db.Model):
self._personalisation = encryption.encrypt(personalisation or {})
def serialize(self):
- areas = dict(self.areas)
- areas["simple_polygons"] = areas.get("simple_polygons", [])
-
return {
'id': str(self.id),
'reference': self.reference,
@@ -2351,7 +2348,7 @@ class BroadcastMessage(db.Model):
'personalisation': self.personalisation if self.template else None,
'content': self.content,
- 'areas': areas,
+ 'areas': self.areas,
'status': self.status,
'starts_at': get_dt_string_or_none(self.starts_at),
|
Lexical envs: minor refactoring for Decorate
TN: | @@ -946,7 +946,6 @@ package body Langkit_Support.Lexical_Env is
function Create_Entity (Elt : Internal_Map_Element) return Entity
is
- Resolved : Entity;
Result : constant Entity :=
(El => Elt.Element,
Info => (MD => Combine (Elt.MD, MD),
@@ -956,8 +955,7 @@ package body Langkit_Support.Lexical_Env is
Inc_Ref (Result.Info.Rebindings);
return Result;
else
- Resolved := Elt.Resolver.all (Result);
- return Resolved;
+ return Elt.Resolver.all (Result);
end if;
end Create_Entity;
|
Update lower limit for inertials
Set to 1e-3 instead of 0.0 | @@ -572,12 +572,12 @@ def fuse_inertia_data(inertials):
fused_inertia += numpy.dot(numpy.dot(current_Rotation.T, current_Inertia), current_Rotation)
# Check the inertia
- if any(element <= 0.0 for element in fused_inertia.diagonal()):
+ if any(element <= 1e-3 for element in fused_inertia.diagonal()):
log(" Correting fused inertia : negative semidefinite diagonal entries.", 'WARNING')
for i in range(3):
fused_inertia[i, i] = 1e-3 if fused_inertia[i, i] <= 1e-3 else fused_inertia[i, i]
- if any(element <= 0.0 for element in numpy.linalg.eigvals(fused_inertia)):
+ if any(element <= 1e-3 for element in numpy.linalg.eigvals(fused_inertia)):
log(" Correcting fused inertia : negative semidefinite eigenvalues", 'WARNING')
U, S, V = numpy.linalg.svd(fused_inertia)
S[S <= 0.0] = 1e-3
|
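The tightened check treats any diagonal entry or eigenvalue at or below 1e-3 as degenerate, not just strictly negative values. A rough numpy sketch of the clamp-via-SVD step on a made-up, nearly degenerate inertia matrix (the threshold handling is simplified):

```
import numpy

fused_inertia = numpy.diag([2e-4, 5e-2, 1e-1])  # made-up, nearly degenerate inertia

if any(val <= 1e-3 for val in numpy.linalg.eigvals(fused_inertia)):
    U, S, V = numpy.linalg.svd(fused_inertia)
    S[S <= 1e-3] = 1e-3  # clamp tiny singular values up to the lower limit
    fused_inertia = numpy.dot(U, numpy.dot(numpy.diag(S), V))

print(fused_inertia.diagonal())
```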
Improve the stubs in charset.pyi under python3.
The python3 charset stubs didn't include certain necessary module level
constants (like `QP`) and wrongly defined the arguments to some of
the functions in the module. This is no longer the case.
Fixes | from typing import List, Optional, Iterator, Any
+QP: int
+BASE64: int
+SHORTEST: int
+
class Charset:
input_charset = ... # type: str
header_encoding = ... # type: int
@@ -20,7 +24,7 @@ class Charset:
def __eq__(self, other: Any) -> bool: ...
def __ne__(self, other: Any) -> bool: ...
-def add_charset(charset: Charset, header_enc: Optional[int] = ...,
+def add_charset(charset: str, header_enc: Optional[int] = ...,
body_enc: Optional[int] = ...,
output_charset: Optional[str] = ...) -> None: ...
def add_alias(alias: str, canonical: str) -> None: ...
|
boot sync optimisation
extends reduced block validation for reveal hashes to boot up cycle.. | @@ -1532,7 +1532,7 @@ def verify_chain():
return False
for x in range(2,len(m_blockchain)):
if x > len(m_blockchain)-250:
- if validate_block(m_blockchain[x],last_block=m_blockchain[x].blockheader.blocknumber-1, verbose=1) is False:
+ if validate_block(m_blockchain[x],last_block=m_blockchain[x].blockheader.blocknumber-1, new=0, verbose=1) is False:
printL(( 'block failed:', m_blockchain[x].blockheader.blocknumber))
return False
if state_add_block(m_blockchain[x]) == False:
@@ -1831,6 +1831,7 @@ def validate_block(block, last_block='default', verbose=0, new=0): #check valid
t = sha256(t)
if t == s[1]:
i+=1
+
if i != len(b.reveal_list):
printL(( 'Not all the reveal_hashes are valid..'))
return False
|
ENH: added MetaHeader import
Added the MetaHeader import to init. | @@ -103,6 +103,7 @@ with Lock(version_filename, 'r', params['file_timeout']) as version_file:
from pysat._files import Files
from pysat._instrument import Instrument
from pysat._meta import Meta
+from pysat._meta import MetaHeader
from pysat._meta import MetaLabels
from pysat._orbits import Orbits
from pysat import instruments
|
fix: init integration tests - event bridge relies on runtime numbering
Why is this change necessary?
* adding ruby2.7 changes the list of runtimes.
How does it address the issue?
* appropriate runtime is selected.
What side effects does this change have?
* None | @@ -28,7 +28,7 @@ class TestBasicInitWithEventBridgeCommand(SchemaTestDataSetup):
user_input = """
1
-11
+12
1
eb-app-maven
3
@@ -62,7 +62,7 @@ Y
user_input = """
1
-11
+12
1
eb-app-maven
3
@@ -109,7 +109,7 @@ Y
user_input = """
1
-11
+12
1
eb-app-maven
3
@@ -145,7 +145,7 @@ P
user_input = """
1
-11
+12
1
eb-app-maven
3
|
Fix window focus
Save the attached window before set_layout_hook actually executes the hooks,
and select the saved window after the hooks.
"""
cmd = ['set-hook', '-t', session.id, hook_name]
hook_cmd = []
+ attached_window = session.attached_window
for window in session.windows:
# unfortunately, select-layout won't work unless
# we've literally selected the window at least once
@@ -177,6 +178,7 @@ def set_layout_hook(session, hook_name):
target_session=session.id, hook_name=hook_name
)
)
+ hook_cmd.append('selectw -t {}'.format(attached_window.id))
# join the hook's commands with semicolons
hook_cmd = '{}'.format('; '.join(hook_cmd))
|
resview.py: allow automatic percentage-based scaling factor in pv_plot()
update parse_options() for '%' character | @@ -84,6 +84,8 @@ def parse_options(opts, separator=':'):
val = True
elif v[1:].isalpha():
val = v[1:]
+ elif v[-1] == '%':
+ val = ('%', float(v[1:-1]))
else:
val = literal_eval(v[1:])
@@ -273,6 +275,12 @@ def pv_plot(filenames, options, plotter=None, step=None,
warp = opts.get('w', options.warp) # warp mesh
factor = opts.get('f', options.factor)
+ if isinstance(factor, tuple):
+ bnds = pipe[-1].bounds
+ ws = nm.diff(nm.reshape(pipe[-1].bounds, (-1, 2)), axis=1)
+ size = ws[ws > 0.0].min()
+ factor = 0.01 * float(factor[1]) * size
+
if warp:
field_data = pipe[-1][warp]
if field_data.ndim == 1:
|
Change image for displaying metadata type information
Summary: This appeared to be the wrong image
Test Plan: view it
Reviewers: max | @@ -68,7 +68,7 @@ solid are indeed instances of SimpleDataFrame. As usual, run:
$ dagit -f custom_types.py -n custom_type_pipeline
-.. thumbnail:: config_figure_two.png
+.. thumbnail:: custom_types_figure_one.png
You can see that the output of ``read_csv`` (which by default has the name ``result``) is marked
to be of type ``SimpleDataFrame``.
|
Fix slow compiling in parallel mode
* Fix parallezation in transpile.compile
* Revert "Fix parallezation in transpile.compile"
This reverts commit
* remove oops files
* Fix parallel compile slow down | @@ -88,6 +88,7 @@ def compile(circuits, backend,
and not _matches_coupling_map(dag, coupling_map)):
_initial_layout = _pick_best_layout(dag, backend)
initial_layouts.append(_initial_layout)
+
dags = _transpile_dags(dags, basis_gates=basis_gates, coupling_map=coupling_map,
initial_layouts=initial_layouts, seed=seed,
pass_manager=pass_manager)
@@ -137,9 +138,8 @@ def _transpile_dags(dags, basis_gates='u1,u2,u3,cx,id', coupling_map=None,
TranspilerError: if the format is not valid.
"""
- index = list(range(len(dags)))
- final_dags = parallel_map(_transpile_dags_parallel, index,
- task_args=(dags, initial_layouts),
+ dags_layouts = list(zip(dags, initial_layouts))
+ final_dags = parallel_map(_transpile_dags_parallel, dags_layouts,
task_kwargs={'basis_gates': basis_gates,
'coupling_map': coupling_map,
'seed': seed,
@@ -147,14 +147,12 @@ def _transpile_dags(dags, basis_gates='u1,u2,u3,cx,id', coupling_map=None,
return final_dags
-def _transpile_dags_parallel(idx, dags, initial_layouts, basis_gates='u1,u2,u3,cx,id',
+def _transpile_dags_parallel(dag_layout_tuple, basis_gates='u1,u2,u3,cx,id',
coupling_map=None, seed=None, pass_manager=None):
"""Helper function for transpiling in parallel (if available).
Args:
- idx (int): Index for dag of interest
- dags (list): List of dags
- initial_layouts (list): List of initial layouts
+ dag_layout_tuple (tuple): Tuples of dags and their initial_layouts
basis_gates (str): a comma seperated string for the target basis gates
coupling_map (list): A graph of coupling
seed (int): random seed for the swap mapper
@@ -165,13 +163,11 @@ def _transpile_dags_parallel(idx, dags, initial_layouts, basis_gates='u1,u2,u3,c
Returns:
DAGCircuit: DAG circuit after going through transpilation.
"""
- dag = dags[idx]
- initial_layout = initial_layouts[idx]
final_dag, final_layout = transpile(
- dag,
+ dag_layout_tuple[0],
basis_gates=basis_gates,
coupling_map=coupling_map,
- initial_layout=initial_layout,
+ initial_layout=dag_layout_tuple[1],
get_layout=True,
seed=seed,
pass_manager=pass_manager)
|
Save exception title in problem message
HG--
branch : feature/microservices | @@ -385,9 +385,13 @@ class DiscoveryCheck(object):
self.logger.error(
"RPC Remote error (%s): %s",
e.remote_code, e)
+ if e.remote_code:
+ message = "Remote error code %s" % e.remote_code
+ else:
+ message = "Remote error code %s, message: %s" % (e.remote_code, e)
self.set_problem(
alarm_class=self.error_map.get(e.remote_code),
- message="Remote error code %s" % e.remote_code,
+ message=message,
fatal=e.remote_code in self.fatal_errors
)
except RPCError as e:
|
[FIX] Missing r-rmarkdown dependency
Fixes
```
ERROR ~ Error executing process > 'output_documentation (1)'
Caused by:
Process `output_documentation (1)` terminated with an error exit status (127)
Command executed:
markdown_to_html.r output.md results_description.html
Command exit status:
127
Command output:
(empty)
Command error:
/usr/bin/env: 'Rscript': No such file or directory
``` | @@ -9,3 +9,4 @@ dependencies:
# TODO nf-core: Add required software dependencies here
- bioconda::fastqc=0.11.8
- bioconda::multiqc=1.7
+ - conda-forge::r-markdown=0.9
|
pkg: changed the minimum Python version
The patches in 3.6 possibly won't make the package unworkable.
Needs testing though. | @@ -30,7 +30,7 @@ setup(
"Intended Audience :: Financial and Insurance Industry",
],
include_package_data=True, # for MANIFEST.in
- python_requires='>=3.6.6',
+ python_requires='>=3.6.0',
install_requires = [
'pandas',
|
Docs: fix bad link
Sadly CI merged before the action finished and we missed this. | @@ -6,5 +6,5 @@ Specify library name and version: **lib/1.0**
---
- [ ] I've read the [contributing guidelines](https://github.com/conan-io/conan-center-index/blob/master/CONTRIBUTING.md).
-- [ ] I've used a [recent](https://github.com/conan-io/conan/releases/latest) Conan client version close to the [currently deployed](https://github.com/conan-io/conan-center-index/blob/docs/recent-client/.c3i/config_v1.yml#L6).
+- [ ] I've used a [recent](https://github.com/conan-io/conan/releases/latest) Conan client version close to the [currently deployed](https://github.com/conan-io/conan-center-index/blob/master/.c3i/config_v1.yml#L6).
- [ ] I've tried at least one configuration locally with the [conan-center hook](https://github.com/conan-io/hooks.git) activated.
|
Ten more Portland incidents georeferenced
got coordinates on ten more pdx incidents | @@ -434,7 +434,7 @@ tags: arrest, grab, journalist, threaten
id: or-portland-7
-geolocation:
+geolocation: 45.515557, -122.676821 // Chapman Square caught on video
**Links**
@@ -452,7 +452,7 @@ tags: less-lethal, protester, stun-grenade, tear-gas
id: or-portland-10
-geolocation:
+geolocation: 45.515641, -122.677136 // Elk caught on video
**Links**
@@ -516,7 +516,7 @@ tags: journalist, threaten
id: or-portland-14
-geolocation:
+geolocation: 45.517616, -122.675936 // AT&T caught on Video
**Links**
@@ -531,7 +531,7 @@ tags: less-lethal, protester, stun-grenade
id: or-portland-391
-geolocation: 45.515646, -122.677034
+geolocation: 45.514879, -122.677353 // SW 3rd & JC Cited
**Links**
@@ -563,7 +563,7 @@ tags: less-lethal, protester, rubber-bullet, shoot
id: or-portland-29
-geolocation:
+geolocation: 45.515243, -122.677090 // JC assumed in photo
**Links**
@@ -600,7 +600,7 @@ tags: bystander, grab, less-lethal, protester, tear-gas, vehicle
id: or-portland-27
-geolocation:
+geolocation: 45.563947, -122.661452 // US Bank caught on video
**Links**
@@ -619,7 +619,7 @@ tags: foam-bullet, journalist, legal-observer, less-lethal, projectile, proteste
id: or-portland-28
-geolocation:
+geolocation: 45.565273, -122.661381 // Top to Bottom caught on video
**Links**
@@ -653,7 +653,7 @@ tags: explosive, less-lethal, projectile, protester
id: or-portland-40
-geolocation:
+geolocation: 45.515507, -122.676783 // Chapman Square caught on video
**Links**
@@ -668,7 +668,7 @@ tags: baton, beat, protester, shove, strike, throw
id: or-portland-30
-geolocation:
+geolocation: 45.515311, -122.675940 // 2nd & Main referenced on video
**Links**
|
TerminusX Update
Removed TerminusX in beta message | @@ -26,9 +26,7 @@ graph all through a simple document API.
[terminusdb-docs]: https://terminusdb.com/docs/
[terminusdb-repo]: https://github.com/terminusdb/terminusdb
-**TerminusX** is a self-service data platform that allows you to build, deploy,
-execute, monitor, and share versioned data products. It is built on TerminusDB.
-TerminusX is in public beta and you can [sign up now][dashboard].
+**TerminusX** is TerminusDB as a service. SOC 2 certified hosting. Build your beta and get to market fast. Scale up and deploy your own instance. [Sign up now][dashboard].
[dashboard]: https://dashboard.terminusdb.com/
|
common: ensure rsync is installed for local install
rsync is required by the ansible synchronize package. Ensure
it is installed when local installation is selected. | - ceph_origin == 'local'
- use_installer
+- name: ensure rsync is installed
+ package:
+ name: rsync
+ state: present
+ when:
+ - ceph_origin == 'local'
+
- name: synchronize ceph install
synchronize:
src: "{{ceph_installation_dir}}/"
|
Storage: document thread-safety of client.
Also note general best practice for multiprocessing use.
Closes | @@ -23,6 +23,15 @@ Install the ``google-cloud-storage`` library using ``pip``:
Usage
-----
+.. note::
+
+ Becuase the :class:`~google.cloud.storage.client.Client` uses the
+ third-party :mod:`requests` library by default, it is safe to
+ share instances across threads. In multiprocessing scenarious, best
+ practice is to create client instances *after*
+ :class:`multiprocessing.Pool` or :class:`multiprocessing.Process` invokes
+ :func:`os.fork`.
+
.. automodule:: google.cloud.storage.client
:members:
:show-inheritance:
|
fix
fix None sources which gave error | @@ -546,6 +546,7 @@ class StocksController(StockBaseController):
"--sources",
dest="sources",
type=str,
+ default="",
help="Show news only from the sources specified (e.g bloomberg,reuters)",
)
if other_args and "-" not in other_args[0][0]:
|
users: prevent creating repo on push for restricted users
Prevent restricted users from automatically creating repos or orgs
on v2 auth requests | @@ -261,12 +261,21 @@ def _authorize_or_downscope_request(scope_param, has_valid_auth_context):
)
else:
logger.debug("No permission to modify repository %s/%s", namespace, reponame)
+
+ elif (
+ features.RESTRICTED_USERS
+ and user is not None
+ and usermanager.is_restricted_user(user.username)
+ and user.username == namespace
+ ):
+ logger.debug("Restricted users cannot create repository %s/%s", namespace, reponame)
+
else:
if (
app.config.get("CREATE_NAMESPACE_ON_PUSH", False)
and model.user.get_namespace_user(namespace) is None
):
- logger.debug("Creating organization: %s/%s", namespace, reponame)
+ logger.debug("Creating organization for: %s/%s", namespace, reponame)
try:
model.organization.create_organization(
namespace,
|
Added self to perror() for consistency and ease of overriding
poutput(), perror(), and pwarning() can now be called with no args | @@ -428,7 +428,7 @@ class Cmd(cmd.Cmd):
"""
return ansi.strip_ansi(self.prompt)
- def poutput(self, msg: Any, *, end: str = '\n') -> None:
+ def poutput(self, msg: Any = '', *, end: str = '\n') -> None:
"""Print message to self.stdout and appends a newline by default
Also handles BrokenPipeError exceptions for when a commands's output has
@@ -449,8 +449,8 @@ class Cmd(cmd.Cmd):
if self.broken_pipe_warning:
sys.stderr.write(self.broken_pipe_warning)
- @staticmethod
- def perror(msg: Any, *, end: str = '\n', apply_style: bool = True) -> None:
+ # noinspection PyMethodMayBeStatic
+ def perror(self, msg: Any = '', *, end: str = '\n', apply_style: bool = True) -> None:
"""Print message to sys.stderr
:param msg: message to print (anything convertible to a str with '{}'.format() is OK)
@@ -464,8 +464,8 @@ class Cmd(cmd.Cmd):
final_msg = "{}".format(msg)
ansi.ansi_aware_write(sys.stderr, final_msg + end)
- def pwarning(self, msg: Any, *, end: str = '\n', apply_style: bool = True) -> None:
- """Like perror, but applies ansi.style_warning by default
+ def pwarning(self, msg: Any = '', *, end: str = '\n', apply_style: bool = True) -> None:
+ """Wraps perror, but applies ansi.style_warning by default
:param msg: message to print (anything convertible to a str with '{}'.format() is OK)
:param end: string appended after the end of the message, default a newline
@@ -1397,7 +1397,7 @@ class Cmd(cmd.Cmd):
except Exception as e:
# Insert a newline so the exception doesn't print in the middle of the command line being tab completed
- self.perror('\n', end='')
+ self.perror()
self.pexcept(e)
return None
@@ -2770,7 +2770,7 @@ class Cmd(cmd.Cmd):
response = self.read_input(prompt)
except EOFError:
response = ''
- self.poutput('\n', end='')
+ self.poutput()
except KeyboardInterrupt as ex:
self.poutput('^C')
raise ex
|
Update anima-takeover.yaml
Update url to "Can I take Over XYZ" github issue. | @@ -5,7 +5,7 @@ info:
author: pdteam
severity: high
reference:
- - https://github.com/EdOverflow/can-i-take-over-xyz
+ - https://github.com/EdOverflow/can-i-take-over-xyz/issues/126
tags: takeover
requests:
|
Add macs_for_objects method to DiscoveryID
HG--
branch : feature/microservices | @@ -12,7 +12,6 @@ from threading import Lock
# Third-party modules
from mongoengine.queryset import DoesNotExist
import cachetools
-# Third-party modules
from mongoengine.document import Document, EmbeddedDocument
from mongoengine.fields import (StringField, ListField, LongField,
EmbeddedDocumentField)
@@ -111,6 +110,8 @@ class DiscoveryID(Document):
def find_object(cls, mac=None, ipv4_address=None):
"""
Find managed object
+ :param mac:
+ :param ipv4_address:
:param cls:
:return: Managed object instance or None
"""
@@ -195,3 +196,33 @@ class DiscoveryID(Document):
# Not in range
i_macs.add(i.mac)
return c_macs + [(m, m) for m in i_macs]
+
+ @classmethod
+ def macs_for_objects(cls, objects_ids):
+ """
+ Get MAC addresses for object
+ :param cls:
+ :param objects_ids: Lis IDs of Managed Object Instance
+ :type: list
+ :return: Dictionary mac: objects
+ :rtype: dict
+ """
+ if not objects_ids:
+ return None
+ if isinstance(objects_ids, list):
+ objects = objects_ids
+ else:
+ objects = list(objects_ids)
+
+ os = cls.objects.filter(object__in=objects)
+ if not os:
+ return None
+ # Discovered chassis id range
+ c_macs = {int(did[0][0]): did[1] for did in os.scalar("macs", "object") if did[0]}
+ # c_macs = [r.macs for r in os]
+ # Other interface macs
+ i_macs = {int(MAC(i[0])): i[1] for i in Interface.objects.filter(
+ managed_object__in=objects, mac__exists=True).scalar("mac", "managed_object")}
+ c_macs.update(i_macs)
+
+ return c_macs
|
SSH: fix host key conflicts
The original logic saves host key to ssh configs for validation. But for
cloud testing, the IP address may be reused by different VMs, so it
shouldn't be saved. With this change, it avoids saving and validating the host
key.
# spur always run a posix command and will fail on Windows.
# So try with paramiko firstly.
paramiko_client = paramiko.SSHClient()
- paramiko_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+ # Use base policy, do nothing on host key. The host key shouldn't be saved
+ # locally, or make any warning message. The IP addresses in cloud may be
+ # reused by different servers. If they are saved, there will be conflict
+ # error in paramiko.
+ paramiko_client.set_missing_host_key_policy(paramiko.MissingHostKeyPolicy())
paramiko_client.connect(
hostname=connection_info.address,
@@ -231,6 +235,10 @@ class SshShell(InitializableMixin):
"password": self._connection_info.password,
"private_key_file": self._connection_info.private_key_file,
"missing_host_key": spur.ssh.MissingHostKey.accept,
+ # There are too many servers in cloud, and they may reuse the same
+ # IP in different time. If so, there is host key conflict. So do not
+ # load host keys to avoid this kind of error.
+ "load_system_host_keys": False,
}
spur_ssh_shell = spur.SshShell(shell_type=shell_type, **spur_kwargs)
|
[dnf] simplify threading
use framework threading to simplify the dnf module
see | Requires the following executable:
* dnf
-Parameters:
- * dnf.interval: Time in minutes between two consecutive update checks (defaults to 30 minutes)
-
"""
-import threading
-
import core.event
import core.module
import core.widget
@@ -20,7 +15,20 @@ import core.decorators
import util.cli
-def get_dnf_info(widget):
+class Module(core.module.Module):
+ @core.decorators.every(minutes=30)
+ def __init__(self, config, theme):
+ super().__init__(config, theme, core.widget.Widget(self.updates))
+
+ self.background = True
+
+ def updates(self, widget):
+ result = []
+ for t in ["security", "bugfixes", "enhancements", "other"]:
+ result.append(str(widget.get(t, 0)))
+ return "/".join(result)
+
+ def update(self):
res = util.cli.execute("dnf updateinfo", ignore_errors=True)
security = 0
@@ -52,23 +60,6 @@ def get_dnf_info(widget):
widget.set("enhancements", enhancements)
widget.set("other", other)
- core.event.trigger("update", [widget.module.id], redraw_only=True)
-
-
-class Module(core.module.Module):
- @core.decorators.every(minutes=30)
- def __init__(self, config, theme):
- super().__init__(config, theme, core.widget.Widget(self.updates))
-
- def updates(self, widget):
- result = []
- for t in ["security", "bugfixes", "enhancements", "other"]:
- result.append(str(widget.get(t, 0)))
- return "/".join(result)
-
- def update(self):
- thread = threading.Thread(target=get_dnf_info, args=(self.widget(),))
- thread.start()
def state(self, widget):
cnt = 0
|
Update __init__.py
add mgb2 to __init__.py | +from .mgb2 import prepare_mgb2
from .adept import download_adept, prepare_adept
from .aishell import download_aishell, prepare_aishell
from .aishell4 import download_aishell4, prepare_aishell4
|
An error occurs when the audio length equals the predicted length
When the audio length is equal to the preset length, i.e. `Duration_sample = snt_len_sample`, the function that takes the random value becomes `random.randint(0,-1)`. | @@ -155,7 +155,7 @@ def dataio_prep(hparams):
def audio_pipeline(wav, start, stop, duration):
if hparams["random_chunk"]:
duration_sample = int(duration * hparams["sample_rate"])
- start = random.randint(0, duration_sample - snt_len_sample - 1)
+ start = random.randint(0, duration_sample - snt_len_sample)
stop = start + snt_len_sample
else:
start = int(start)
|
Only give broadcasts worker IAM creds for CBC proxy
There is no need to give it to any of the other workers and so the fewer
instances that have these creds the better.
You can verify this works by running
```
CF_APP=notify-api CF_SPACE=preview make generate-manifest
```
vs
```
CF_APP=notify-delivery-worker-broadcasts CF_SPACE=preview make generate-manifest
``` | 'notify-delivery-worker-broadcasts': {
'additional_env_vars': {
'CELERYD_PREFETCH_MULTIPLIER': 1,
+ 'CBC_PROXY_AWS_ACCESS_KEY_ID': CBC_PROXY_AWS_ACCESS_KEY_ID,
+ 'CBC_PROXY_AWS_SECRET_ACCESS_KEY': CBC_PROXY_AWS_SECRET_ACCESS_KEY,
}
},
'notify-delivery-worker-receipts': {},
@@ -127,11 +129,6 @@ applications:
AWS_ACCESS_KEY_ID: '{{ AWS_ACCESS_KEY_ID }}'
AWS_SECRET_ACCESS_KEY: '{{ AWS_SECRET_ACCESS_KEY }}'
- {% if CBC_PROXY_AWS_ACCESS_KEY_ID is defined %}
- CBC_PROXY_AWS_ACCESS_KEY_ID: '{{ CBC_PROXY_AWS_ACCESS_KEY_ID }}'
- CBC_PROXY_AWS_SECRET_ACCESS_KEY: '{{ CBC_PROXY_AWS_SECRET_ACCESS_KEY }}'
- {% endif %}
-
STATSD_HOST: "notify-statsd-exporter-{{ environment }}.apps.internal"
ZENDESK_API_KEY: '{{ ZENDESK_API_KEY }}'
|
Update description() call to to_string()
As of updating the Sawtooth SDK dependency to 0.4, the `description`
method for errors has been deprecated and `to_string` replaces this
method. | @@ -51,14 +51,14 @@ const VERSION: &str = env!("CARGO_PKG_VERSION");
fn main() {
match SimpleLogger::init(LevelFilter::Warn, Config::default()) {
Ok(_) => (),
- Err(err) => println!("Failed to load logger: {}", err.description()),
+ Err(err) => println!("Failed to load logger: {}", err.to_string()),
}
let arg_matches = get_arg_matches();
match run_load_command(&arg_matches) {
Ok(_) => (),
- Err(err) => println!("{}", err.description()),
+ Err(err) => println!("{}", err.to_string()),
}
}
@@ -379,7 +379,7 @@ impl fmt::Display for IntKeyCliError {
impl From<ParseIntError> for IntKeyCliError {
fn from(error: ParseIntError) -> Self {
IntKeyCliError {
- msg: error.description().to_string(),
+ msg: error.to_string(),
}
}
}
@@ -387,7 +387,7 @@ impl From<ParseIntError> for IntKeyCliError {
impl From<ParseFloatError> for IntKeyCliError {
fn from(error: ParseFloatError) -> Self {
IntKeyCliError {
- msg: error.description().to_string(),
+ msg: error.to_string(),
}
}
}
|
Update deployment script
Relax requirement for Python version | @@ -26,12 +26,11 @@ if [ -z $version ]; then
fi
py_minor_version=$(python3 -c 'import sys; print(sys.version_info.minor)')
-if [ $py_minor_version -ne 5 ]; then
- echo "$0: deployment script requires Python 3.5" >&2
+if [ $py_minor_version -lt 5 ]; then
+ echo "$0: deployment script requires Python>=3.5" >&2
exit 1
fi
-
tmpdir=$(mktemp -d)
echo "Deploying ReFrame version $version ..."
echo "Working directory: $tmpdir ..."
|
fix `sudo provision.sh` => `sudo ./provision.sh`
with `sudo provision.sh` (not executed) => `sudo: provision.sh: command not found` | @@ -55,7 +55,7 @@ Ubuntu 16.04 (manual way)
$ cd splash/dockerfiles/splash
- $ sudo provision.sh \
+ $ sudo ./provision.sh \
prepare_install \
install_msfonts \
install_extra_fonts \
|
fixing tests so they work with travis
have a look at the two line change i made. | @@ -1125,7 +1125,7 @@ class TestQuantumProgram(unittest.TestCase):
bell.barrier()
bell.measure(q[0], c[0])
bell.measure(q[1], c[1])
- qp.set_api(Qconfig.APItoken, Qconfig.config["url"])
+ qp.set_api(API_TOKEN, URL)
bellobj = qp.compile(["bell"], backend='local_qasm_simulator',
shots=2048, seed=10)
ghzobj = qp.compile(["ghz"], backend='local_qasm_simulator',
@@ -1171,7 +1171,7 @@ class TestQuantumProgram(unittest.TestCase):
}]
}
qp = QuantumProgram(specs=QPS_SPECS)
- qp.set_api(Qconfig.APItoken, Qconfig.config["url"])
+ qp.set_api(API_TOKEN, URL)
if backend not in qp.online_simulators():
return
qc = qp.get_circuit("swapping")
|
Fixing `to_heterogeneous` to work with GPU
* fixing to_heterogeneous to work with gpu data
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see | @@ -657,7 +657,8 @@ class Data(BaseData, FeatureStore, GraphStore):
node_ids, index_map = {}, torch.empty_like(node_type)
for i, key in enumerate(node_type_names):
node_ids[i] = (node_type == i).nonzero(as_tuple=False).view(-1)
- index_map[node_ids[i]] = torch.arange(len(node_ids[i]))
+ index_map[node_ids[i]] = torch.arange(len(node_ids[i]),
+ device=index_map.device)
# We iterate over edge types to find the local edge indices:
edge_ids = {}
|
help_docs: Update `message-a-stream-by-email` help doc.
Uses new `select-stream-view-general.md` for instructions. | @@ -28,6 +28,8 @@ API](/api/send-message).
1. Select a stream.
+{!select-stream-view-general.md!}
+
1. Copy the stream email address under **Email address**.
1. Send an email to that address.
|
Add tooltips for "A"/"S" in editor
Fixes | @@ -142,6 +142,22 @@ class ClassEnumerationLiterals(EditableTreeModel):
self._item.subject.ownedLiteral.order(new_order.index)
+def tree_view_column_tooltips(tree_view, tooltips):
+ assert tree_view.get_n_columns() == len(tooltips)
+
+ def on_query_tooltip(widget, x, y, keyboard_mode, tooltip):
+ path_and_more = widget.get_path_at_pos(x, y)
+ if path_and_more:
+ path, column, cx, cy = path_and_more
+ n = widget.get_columns().index(column)
+ if tooltips[n]:
+ tooltip.set_text(tooltips[n])
+ return True
+ return False
+
+ tree_view.connect("query-tooltip", on_query_tooltip)
+
+
@PropertyPages.register(NamedElement)
class NamedElementPropertyPage(PropertyPageBase):
"""An adapter which works for any named item view.
@@ -282,6 +298,7 @@ class AttributesPage(PropertyPageBase):
tree_view: Gtk.TreeView = builder.get_object("attributes-list")
tree_view.set_model(self.model)
+ tree_view_column_tooltips(tree_view, ["", gettext("Static")])
def handler(event):
attribute = event.element
@@ -346,6 +363,9 @@ class OperationsPage(PropertyPageBase):
tree_view: Gtk.TreeView = builder.get_object("operations-list")
tree_view.set_model(self.model)
+ tree_view_column_tooltips(
+ tree_view, ["", gettext("Abstract"), gettext("Static")]
+ )
def handler(event):
operation = event.element
|