Update sso-saml.rst linked Paul's script for creating a user file.
@@ -19,6 +19,8 @@ Mattermost officially supports Okta, OneLogin and Microsoft ADFS as the identity If you'd like, you may also try configuring SAML for a custom IDP. For instance, customers have successfully set up Duo, PingFederate and SimpleSAMLphp as a custom IDPs. We are open to providing assistance when configuring your custom IDP by answering Mattermost technical configuration questions and working with your IDP provider in support of resolving issues as they relate to Mattermost SAML configuration settings. However, we cannot guarantee your connection will work with Mattermost. +To assist with the process of getting a user file for your custom IDP, please see this `documentation <https://github.com/icelander/mattermost_generate_user_file>`_. + Please see more information on getting support `here <https://mattermost.com/support/>`_ and submit requests for official support of a particular provider on our `feature idea forum <https://mattermost.uservoice.com>`_. Please note that we may not be able to guarantee that your connection will work with Mattermost, however we will consider improvements to our feature as we are able. Please submit requests for official support of a particular provider on our `feature idea forum <https://mattermost.uservoice.com>`_.
Refactor cascade interface Replace nested ``if`` statements with a dictionary lookup. Make PEP 8 compliant.
-from . import bandpass_filters -from . import decomposition +from pysteps.cascade import decomposition, bandpass_filters + +_cascade_methods = dict() +_cascade_methods['fft'] = decomposition.decomposition_fft +_cascade_methods['gaussian'] = bandpass_filters.filter_gaussian +_cascade_methods['uniform'] = bandpass_filters.filter_uniform + def get_method(name): - """Return a callable function for the bandpass filter or decomposition method - corresponding to the given name.\n\ + """ + Return a callable function for the bandpass filter or decomposition method + corresponding to the given name.\n Filter methods: - +-------------------+--------------------------------------------------------+ + +-------------------+------------------------------------------------------+ | Name | Description | - +===================+========================================================+ + +===================+======================================================+ | gaussian | implementation of a bandpass filter using Gaussian | | | weights | - +-------------------+--------------------------------------------------------+ + +-------------------+------------------------------------------------------+ | uniform | implementation of a filter where all weights are set | | | to one | - +-------------------+--------------------------------------------------------+ + +-------------------+------------------------------------------------------+ Decomposition methods: - +-------------------+--------------------------------------------------------+ + +-------------------+------------------------------------------------------+ | Name | Description | - +===================+========================================================+ + +===================+======================================================+ | fft | decomposition based on Fast Fourier Transform (FFT) | | | and a bandpass filter | - +-------------------+--------------------------------------------------------+ + +-------------------+------------------------------------------------------+ """ - if name.lower() == "fft": - return decomposition.decomposition_fft - elif name.lower() == "gaussian": - return bandpass_filters.filter_gaussian - elif name.lower() == "uniform": - return bandpass_filters.filter_uniform - else: - raise ValueError("unknown method %s, the currently implemented methods are 'fft', 'gaussian' and 'uniform'" % name) + + if isinstance(name, str): + name = name.lower() + + try: + return _cascade_methods[name] + except KeyError: + raise ValueError("Unknown method {}\n".format(name) + + "The available methods are:" + + str(list(_cascade_methods.keys()))) from None
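The refactor above is a standard dispatch pattern worth sketching in isolation. The snippet below is a minimal, self-contained illustration of replacing an if/elif chain with a module-level dictionary lookup; the function names and messages are invented for the example and are not part of pysteps.

```python
# Minimal sketch of dictionary-based dispatch (hypothetical names, not pysteps code).
def _decomposition_fft(field):
    return "fft decomposition of %r" % (field,)

def _filter_gaussian(field):
    return "gaussian bandpass filter of %r" % (field,)

_methods = {"fft": _decomposition_fft, "gaussian": _filter_gaussian}

def get_method(name):
    if isinstance(name, str):
        name = name.lower()
    try:
        return _methods[name]
    except KeyError:
        raise ValueError(
            "Unknown method {}. Available methods: {}".format(name, list(_methods))
        ) from None

print(get_method("FFT")("rain field"))  # fft decomposition of 'rain field'
```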
Update README.md Leaderboard reopen
[![Documentation Status](https://readthedocs.org/projects/microsoft-recommenders/badge/?version=latest)](https://microsoft-recommenders.readthedocs.io/en/latest/?badge=latest) -## What's New (October 5, 2020) +## What's New (October 19, 2020) -[Microsoft News Recommendation Competition Winners Announced, Leaderboard to Reopen!](https://msnews.github.io/competition.html) +[Microsoft News Recommendation Competition Winners Announced, Leaderboard Reopen!](https://msnews.github.io/competition.html) Congratulations to all participants and [winners](https://msnews.github.io/competition.html#winner) of the Microsoft News Recommendation Competition! In the last two months, over 200 participants from more than 90 institutions in 19 countries and regions joined the competition and collectively advanced the state of the art of news recommendation. The competition is based on the recently released [MIND dataset](https://msnews.github.io/), an open, large-scale English news dataset with impression logs. Details of the dataset are available at this [ACL paper](https://msnews.github.io/assets/doc/ACL2020_MIND.pdf). -With the competition successfully closed, the [leaderboard](https://msnews.github.io/competition.html#leaderboard) will reopen soon. Want to see if you can grab the top spot? Get familiar with the [news recommendation scenario](https://github.com/microsoft/recommenders/tree/master/scenarios/news). Then dive into some baselines such as [DKN](examples/00_quick_start/dkn_MIND.ipynb), [LSTUR](examples/00_quick_start/lstur_MIND.ipynb), [NAML](examples/00_quick_start/naml_MIND.ipynb), [NPA](examples/00_quick_start/npa_MIND.ipynb) and [NRMS](examples/00_quick_start/nrms_MIND.ipynb) and get ready! +With the competition successfully closed, the [leaderboard](https://msnews.github.io/competition.html#leaderboard) is now reopn. Want to see if you can grab the top spot? Get familiar with the [news recommendation scenario](https://github.com/microsoft/recommenders/tree/master/scenarios/news). Then dive into some baselines such as [DKN](examples/00_quick_start/dkn_MIND.ipynb), [LSTUR](examples/00_quick_start/lstur_MIND.ipynb), [NAML](examples/00_quick_start/naml_MIND.ipynb), [NPA](examples/00_quick_start/npa_MIND.ipynb) and [NRMS](examples/00_quick_start/nrms_MIND.ipynb) and start hacking! See past announcements in [NEWS.md](NEWS.md).
setup.sh: Remove redundant chmod chmod 755 makes chmod +x redundant, meaning that chmod +x should be removed
@@ -34,7 +34,6 @@ if [[ "$OS" == "Fedora" ]]; then unzip chromedriver_linux64_2.3.zip sudo cp chromedriver /usr/bin/chromedriver sudo chown root /usr/bin/chromedriver - sudo chmod +x /usr/bin/chromedriver sudo chmod 755 /usr/bin/chromedriver elif [[ "$OS" == "Ubuntu" ]] || [[ "$OS" == "LinuxMint" ]]; then sudo apt-get install ffmpeg python-imdbpy python-notify2
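As a quick, standalone check of the reasoning in the commit message (independent of the setup script), the octal mode 755 already contains every bit that `chmod +x` would add:

```python
# 0o755 = rwxr-xr-x; the three execute bits (0o111) are already included.
import stat

executable_bits = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH  # what `chmod +x` grants
print(oct(0o755 & executable_bits))  # 0o111 -> nothing left for chmod +x to add
```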
update parameters for texture_mapping Do not use the config file; pass a list of images instead.
@@ -473,10 +473,8 @@ def main(config_fpath): except Exception as e: print("Error: ", e, " (file ", pan, ")") - # the config file contains the list of the images used - config_file = "xxxx.conf" - # if the config file is not used a list of images can be passed - # with the argument "--images img1 img2 ..." + # List of images + images_to_use = ["image1.tif", "image2.tif", "image3.tif"] # Prepare the mesh mesh_file = "xxxx.obj" @@ -490,33 +488,38 @@ def main(config_fpath): # It should contains all the buildings of the area occlusion_mesh = "xxxx.obj" + # Images fusion method + fusion_method = "test" + # Run texture mapping (process buildings separately) logging.info("---- Running texture_mapping ----") building_id = 0 - initial_working_dir = "" + + # here we except that meshes is a list as follows: [[mesh1_filename, mesh1_name], [mesh2_filename, mesh2_name], ...] + # it can be changed but mesh and mesh_name are required when we call the texture_mapping binary for mesh, mesh_name in meshes: + # create a sub-working-directory per building current_working_dir = os.path.join(working_dir, mesh_name) - # create a subdirectory per building mesh = os.path.join(mesh_dir, mesh) if not os.path.isdir(current_working_dir): os.mkdir(current_working_dir) call_args = [mesh, current_working_dir, str(utm), "--output-name", "building_" + mesh_name, - "--config", config_file, "--offset_x", str(x_offset), "--offset_y", str(y_offset), "--offset_z", str(z_offset), - "--fusion-method", fusion_method] + "--fusion-method", fusion_method, + "--shadows", "no", + "--images"] + images_to_use if building_id == 0: # for the first building we compute the depthmaps and output them call_args += ["--output-depthmap", os.path.join(current_working_dir, "depthmaps.txt"), - "--occlusions", occlusion_mesh, "--shadows", "no"] + "--occlusions", occlusion_mesh] initial_working_dir = current_working_dir else: # for the next buildings, we re-use the generated depthmaps - call_args += ["--occlusions", os.path.join(initial_working_dir, "depthmaps.txt"), - "--shadows", "no"] + call_args += ["--occlusions", os.path.join(initial_working_dir, "depthmaps.txt")] subprocess.call(["run_texture_mapping"] + call_args) print("\n\n") building_id += 1
Update conf.py Changed copyright date for page footers in doc set.
@@ -61,7 +61,7 @@ master_doc = 'index' # General information about the project. project = u'Mattermost' -copyright = u'2015-2020 Mattermost' +copyright = u'2015-2021 Mattermost' author = u'Mattermost' # The version info for the project you're documenting, acts as replacement for
Fix issue: model should be relative to source directory Not relative to the document.
@@ -66,9 +66,8 @@ class DiagramDirective(sphinx.util.docutils.SphinxDirective): ) ) - rel_filename, filename = self.env.relfn2path(model_file) - self.env.note_dependency(rel_filename) - model = load_model(filename) + self.env.note_dependency(model_file) + model = load_model(Path(self.env.srcdir) / model_file) outdir = ( Path(self.env.app.doctreedir).relative_to(self.env.srcdir) / ".." / "gaphor"
Remove 008 protocol from mainnet baking Problem: the 009 protocol was activated on mainnet, so there is no longer any sense in running 008 daemons on mainnet. Solution: Remove 008 daemons from the contents of the mainnet baking service.
@@ -7,7 +7,7 @@ from .model import Service, ServiceFile, SystemdUnit, Unit, Install, OpamBasedPa networks = ["mainnet", "edo2net", "florencenet"] networks_protos = { - "mainnet": ["008-PtEdo2Zk", "009-PsFLoren"], + "mainnet": ["009-PsFLoren"], "edo2net": ["008-PtEdo2Zk"], "florencenet": ["009-PsFLoren"] }
[Doc] Generative models, edit for readability Edit pass for grammar and style
@@ -4,18 +4,18 @@ Generative models ================== * **DGMG** `[paper] <https://arxiv.org/abs/1803.03324>`__ `[tutorial] - <3_generative_model/5_dgmg.html>`__ `[code] + <3_generative_model/5_dgmg.html>`__ `[PyTorch code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/dgmg>`__: - this model belongs to the important family that deals with structural - generation. DGMG is interesting because its state-machine approach is the - most general. It is also very challenging because, unlike Tree-LSTM, every + This model belongs to the family that deals with structural + generation. Deep generative models of graphs (DGMG) uses a state-machine approach. + It is also very challenging because, unlike Tree-LSTM, every sample has a dynamic, probability-driven structure that is not available - before training. We are able to progressively leverage intra- and + before training. You can progressively leverage intra- and inter-graph parallelism to steadily improve the performance. -* **JTNN** `[paper] <https://arxiv.org/abs/1802.04364>`__ `[code] +* **JTNN** `[paper] <https://arxiv.org/abs/1802.04364>`__ `[PyTorch code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/jtnn>`__: - unlike DGMG, this paper generates molecular graphs using the framework of - variational auto-encoder. Perhaps more interesting is its approach to build - structure hierarchically, in the case of molecular, with junction tree as + This network generates molecular graphs using the framework of + a variational auto-encoder. The junction tree neural network (JTNN) builds + structure hierarchically. In the case of molecular graphs, it uses a junction tree as the middle scaffolding.
Call `Ephem.from_horizons()` with `epochs` arg `Ephem.from_horizons()` requires the `epochs` positional argument.
@@ -516,7 +516,8 @@ The data is fetched using the wrappers to these services provided by [astroquery](https://astroquery.readthedocs.io/). ```python -Ephem.from_horizons("Ceres") +epoch = time.Time("2020-04-29 10:43") +Ephem.from_horizons("Ceres", epoch) Orbit.from_sbdb("Apophis") ```
contrib/lkt_semantic/char: fix wrong expected output TN:
@@ -9,9 +9,8 @@ Expr <StringLit test.lkt:1:18-1:23> Id <RefId "Char" test.lkt:2:9-2:13> references <StructDecl "Char" __prelude:20:11-20:25> -test.lkt:2:16: error: Mismatched types: expected `Char`, got a string literal -1 | val b : Char = 'l' - | ^^^ +Expr <CharLit test.lkt:2:16-2:19> + has type <StructDecl "Char" __prelude:20:11-20:25> Id <RefId "Char" test.lkt:3:9-3:13> references <StructDecl "Char" __prelude:20:11-20:25>
Bugfix ensure defaults aren't copied between blueprint routes Simple mistake of not copying a mutable variable.
@@ -826,7 +826,7 @@ class BlueprintSetupState: endpoint = f"{self.blueprint.name}.{endpoint}" url_defaults = self.url_defaults if defaults is not None: - url_defaults.update(defaults) + url_defaults = {**url_defaults, **defaults} self.app.add_url_rule( path, endpoint,
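A standalone sketch of the underlying pitfall may help; the names below are hypothetical and only illustrate why `dict.update()` leaks state into the shared dict while merging into a new dict does not.

```python
# Hypothetical illustration of the shared-mutable-dict bug; not framework code.
blueprint_defaults = {"lang": "en"}

def add_rule_buggy(defaults):
    url_defaults = blueprint_defaults
    if defaults is not None:
        url_defaults.update(defaults)                # mutates blueprint_defaults in place
    return url_defaults

def add_rule_fixed(defaults):
    url_defaults = blueprint_defaults
    if defaults is not None:
        url_defaults = {**url_defaults, **defaults}  # fresh dict, shared state untouched
    return url_defaults

add_rule_buggy({"page": 1})
print(blueprint_defaults)  # {'lang': 'en', 'page': 1}  <- leaked into later routes
blueprint_defaults = {"lang": "en"}
add_rule_fixed({"page": 1})
print(blueprint_defaults)  # {'lang': 'en'}             <- unchanged
```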
Readd the __proxy_keepalive scheduled job Although added by in - see for very obscure reasons, these changes somehow disappeared. Readding them, hopefully they'll resist longer this time.
@@ -3205,6 +3205,28 @@ class ProxyMinion(Minion): self.schedule.delete_job(master_event(type='alive', master=self.opts['master']), persist=True) self.schedule.delete_job(master_event(type='failback'), persist=True) + # proxy keepalive + proxy_alive_fn = fq_proxyname+'.alive' + if proxy_alive_fn in self.proxy and 'status.proxy_reconnect' in self.functions and \ + ('proxy_keep_alive' not in self.opts or ('proxy_keep_alive' in self.opts and self.opts['proxy_keep_alive'])): + # if `proxy_keep_alive` is either not specified, either set to False does not retry reconnecting + self.schedule.add_job({ + '__proxy_keepalive': + { + 'function': 'status.proxy_reconnect', + 'minutes': self.opts.get('proxy_keep_alive_interval', 1), # by default, check once per minute + 'jid_include': True, + 'maxrunning': 1, + 'return_job': False, + 'kwargs': { + 'proxy_name': fq_proxyname + } + } + }, persist=True) + self.schedule.enable_schedule() + else: + self.schedule.delete_job('__proxy_keepalive', persist=True) + # Sync the grains here so the proxy can communicate them to the master self.functions['saltutil.sync_grains'](saltenv='base') self.grains_cache = self.opts['grains']
config_service: README update Updated README.md file inside ui folder. Review-Url:
-# \<Config UI\> +# LUCI Config UI -This is a UI for the configuration service +This is a UI for the configuration service. -## Install the Polymer-CLI -First, make sure you have the [Polymer CLI](https://www.npmjs.com/package/polymer-cli) installed. Then run `polymer serve` to serve your application locally. +## Setting up -## Viewing Your Application +* First, make sure you have the [Polymer CLI](https://www.polymer-project.org/2.0/docs/tools/polymer-cli) installed. -``` -$ polymer serve -``` +* Install [Google App Engine SDK](https://cloud.google.com/appengine/downloads). -## Building Your Application +* Run `bower install` in the ui directory to make sure you have all the dependecies installed. -``` -$ polymer build -``` -This will create builds of your application in the `build/` directory, optimized to be served in production. You can then serve the built versions by giving `polymer serve` a folder to serve from: +## Running locally -``` -$ polymer serve build/default -``` +* First, change all the URLs in the iron-ajax elements. Simply add "https://luci-config.appspot.com" before each URL. + * One in the src/config-ui/front-page.html + * Two in the src/config-ui/config-set.html -## Running Tests +* In the config-service folder run `dev_appserver.py app.yaml` + +* Visit [http://localhost:8080](http://localhost:8080) -``` -$ polymer test -``` -Your application is already set up to be tested via [web-component-tester](https://github.com/Polymer/web-component-tester). Run `polymer test` to run your application's test suite locally. +## Running Tests + +* Your application is already set up to be tested via [web-component-tester](https://github.com/Polymer/web-component-tester). + Run 'wct, 'wct -p' or 'polymer test' inside ui folder to run your application's test suites locally. + These commands will run tests for all browsers installed on your computer. ## Third Party Files -In order to use proper authentication, the google-signin-aware element was needed. However, this element has not been updated to Polymer 2.0, so edits were made to the current version to ensure compatibility. +In order to use proper authentication, the google-signin-aware element was needed. However, this element has not been updated to +Polymer 2.0, so edits were made to the current version to ensure compatibility. The modified google-signin-aware element can be found in the ui/common/third_party/google-signin folder. \ No newline at end of file
ENH: RunEngine bail methods all return runstart uid list This is to match the clean behavior of `__call__` and `resume`
@@ -899,6 +899,7 @@ class RunEngine: task.cancel() if self.state == 'paused': self._resume_event_loop() + return self._run_start_uids def stop(self): """ @@ -918,6 +919,7 @@ class RunEngine: self._task.cancel() if self.state == 'paused': self._resume_event_loop() + return self._run_start_uids def halt(self): ''' @@ -937,6 +939,7 @@ class RunEngine: self._task.cancel() if self.state == 'paused': self._resume_event_loop() + return self._run_start_uids def _stop_movable_objects(self, *, success=True): "Call obj.stop() for all objects we have moved. Log any exceptions."
Hotfix for psum transpose The previous patch has been causing some failures in the `is_undefined_primal` assertion in `broadcast_position`, but it looks like in all of those cases there are no positional axes, so this should fix them. More debugging underway, but I wanted to make sure they're unblocked.
@@ -654,6 +654,7 @@ def _psum_transpose_rule(cts, *args, axes, axis_index_groups): for axis in axes: axes_partition[isinstance(axis, int)].append(axis) + if pos_axes: def broadcast_positional(ct, arg): assert ad.is_undefined_primal(arg) if type(ct) is ad.Zero: return ad.Zero(arg.aval)
Update badges in README to reflect new workflows I've also restructured it a bit by using reference links instead of inline links.
# Python Discord: Site [![Discord](https://img.shields.io/static/v1?label=Python%20Discord&logo=discord&message=%3E100k%20members&color=%237289DA&logoColor=white)](https://discord.gg/2B963hn) -![Lint, Test & Deploy](https://github.com/python-discord/site/workflows/Lint,%20Test%20&%20Deploy/badge.svg?branch=master) -[![Coverage Status](https://coveralls.io/repos/github/python-discord/site/badge.svg?branch=master)](https://coveralls.io/github/python-discord/site?branch=master) +[![Lint & Test][1]][2] +[![Build & Deploy][3]][4] +[![Coverage Status][5]][6] [![License](https://img.shields.io/github/license/python-discord/site)](LICENSE) -[![Status](https://img.shields.io/website?url=https%3A%2F%2Fpythondiscord.com)][1] +[![Status](https://img.shields.io/website?url=https%3A%2F%2Fpythondiscord.com)][7] -This is all of the code that is responsible for maintaining [our website][1] and all of its subdomains. +This is all of the code that is responsible for maintaining [our website][7] and all of its subdomains. The website is built on Django and should be simple to set up and get started with. If you happen to run into issues with setup, please don't hesitate to open an issue! -If you're looking to contribute or play around with the code, take a look at [the wiki][2] or the [`docs` directory](docs). If you're looking for things to do, check out [our issues][3]. +If you're looking to contribute or play around with the code, take a look at [the wiki][8] or the [`docs` directory](docs). If you're looking for things to do, check out [our issues][9]. -[1]: https://pythondiscord.com -[2]: https://pythondiscord.com/pages/contributing/site/ -[3]: https://github.com/python-discord/site/issues +[1]: https://github.com/python-discord/site/workflows/Lint%20&%20Test/badge.svg?branch=master +[2]: https://github.com/python-discord/site/actions?query=workflow%3A%22Lint+%26+Test%22+branch%3Amaster +[3]: https://github.com/python-discord/site/workflows/Build%20&%20Deploy/badge.svg?branch=master +[4]: https://github.com/python-discord/site/actions?query=workflow%3A%22Build+%26+Deploy%22+branch%3Amaster +[5]: https://coveralls.io/repos/github/python-discord/site/badge.svg?branch=master +[6]: https://coveralls.io/github/python-discord/site?branch=master +[7]: https://pythondiscord.com +[8]: https://pythondiscord.com/pages/contributing/site/ +[9]: https://github.com/python-discord/site/issues
Fix Huawei.VRP get_capabilties for Stack.Members HG-- branch : feature/microservices
@@ -70,10 +70,14 @@ class Script(BaseScript): Check stack members :return: """ - r = self.cli("display stack peer") - return len([l for l in r.splitlines() if "STACK" in l]) + r = self.profile.parse_table(self.cli("display stack peer")) + return [l[0] for l in r["table"]] + # return len([l for l in r.splitlines() if "STACK" in l]) def execute_platform(self, caps): if self.has_ndp(): caps["Huawei | NDP"] = True - caps["Stack | Members"] = self.has_stack() if self.has_stack() else 0 + s = self.has_stack() + if s: + caps["Stack | Members"] = len(s) if len(s) != 1 else 0 + caps["Stack | Member Ids"] = " | ".join(s)
fixed wrong download url Update ytmusic.py
@@ -145,8 +145,8 @@ def get_results(self, search_term: str, **kwargs) -> List[Dict[str, Any]]: "name": result["title"], "type": result["resultType"], "link": ( - f'https://{"music" if result["resultType"] == "song" else "www"}.', - f".youtube.com/watch?v={result['videoId']}", + f'https://{"music" if result["resultType"] == "song" else "www"}' + f".youtube.com/watch?v={result['videoId']}" ), "album": result.get("album", {}).get("name") if result.get("album")
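The root cause is easy to reproduce outside spotDL: the trailing commas inside the parentheses turned the two f-strings into a tuple, whereas adjacent string literals are concatenated into a single URL. A minimal, self-contained illustration (video id made up):

```python
video_id = "abc123"

as_tuple = (
    f"https://music.",                       # the commas made this a 2-tuple ...
    f".youtube.com/watch?v={video_id}",
)
as_string = (
    f"https://music"                         # ... without them the literals concatenate
    f".youtube.com/watch?v={video_id}"
)
print(type(as_tuple).__name__)  # tuple
print(as_string)                # https://music.youtube.com/watch?v=abc123
```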
Fix Issue unitary matrix size is 2^n \times 2^n Previously said n \times n
@@ -17,7 +17,7 @@ of single-qubit gates and a two-qubit entangling gate (CNOT) target but a mechanism to define other gates. For many gates of practical interest, there is a circuit representation with a polynomial number of one- and two-qubit gates, giving a more compact representation -than requiring the programmer to express the full :math:`n \times n` +than requiring the programmer to express the full :math:`2^n \times 2^n` matrix. However, a general :math:`n`-qubit gate can be defined using an exponential number of these gates.
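For a concrete check of the dimensions (an independent sketch, not taken from the pyQuil docs): the state of n qubits has 2^n amplitudes, so a gate acting on all of them is a 2^n x 2^n matrix.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard: 2 x 2
HH = np.kron(H, H)                            # the same gate applied to n = 2 qubits
print(HH.shape)                               # (4, 4), i.e. (2**2, 2**2), not (2, 2)
```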
Fixed problems when trying to use the Digi-Key API when it is not available. Related to INTI-CMNB/KiBot#209
@@ -91,6 +91,9 @@ class api_digikey(distributor_class): DK_API.api_ops = {} cache_ttl = 7 cache_path = None + if not available: + debug_obsessive('Digi-Key API not available') + return for k, v in ops.items(): if k == 'client_id': DK_API.id = v
Clarify LASPH warning Forgot to include hybrids
@@ -568,7 +568,7 @@ class DictSet(VaspInputSet): elif any(el.Z > 20 for el in structure.composition): incar["LMAXMIX"] = 4 - # Warn user about LASPH for meta-GGAs, hybrids, and vdW-DF + # Warn user about LASPH for +U, meta-GGAs, hybrids, and vdW-DF if not settings.get("LASPH", False) and ( settings.get("METAGGA", False) or settings.get("LHFCALC", False) @@ -576,7 +576,7 @@ class DictSet(VaspInputSet): or settings.get("LUSE_VDW", False) ): warnings.warn( - "LASPH = True should be set for +U, meta-GGAs, and vdW-DFT", + "LASPH = True should be set for +U, meta-GGAs, hybrids, and vdW-DFT", BadInputSetWarning, )
Default to a transparent background Fixes
@@ -788,7 +788,7 @@ class GtkView(Gtk.DrawingArea, Gtk.Scrollable, View): cr = cairo.Context(self._back_buffer) cr.save() - cr.set_source_rgb(1, 1, 1) + cr.set_operator(cairo.OPERATOR_CLEAR) cr.paint() cr.restore() @@ -855,13 +855,10 @@ class GtkView(Gtk.DrawingArea, Gtk.Scrollable, View): def do_configure_event(self, event): if self.get_window(): self._back_buffer = self.get_window().create_similar_surface( - cairo.Content.COLOR, + cairo.Content.COLOR_ALPHA, self.get_allocated_width(), self.get_allocated_height(), ) - cr = cairo.Context(self._back_buffer) - cr.set_source_rgb(1, 1, 1) - cr.paint() self.update() else: self._back_buffer = None
Vagrantfile: Check for OS before patching the lxc-config. Followup of Without this, vagrant up will fail on Windows. Vagrant up error
@@ -31,6 +31,7 @@ end # have the box (e.g. on first setup), Vagrant would download it but too # late for us to patch it like this; so we prompt them to explicitly add it # first and then rerun. +if Vagrant::Util::Platform.linux? if ['up', 'provision'].include? ARGV[0] LXC_VERSION = `lxc-ls --version`.strip unless defined? LXC_VERSION if LXC_VERSION == "2.1.0" @@ -53,6 +54,7 @@ if ['up', 'provision'].include? ARGV[0] end end end +end # Workaround: Vagrant removed the atlas.hashicorp.com to # vagrantcloud.com redirect in February 2018. The value of
Incidents: reduce log level of 403 exception In addition to 404, this shouldn't send Sentry notifs.
@@ -51,12 +51,13 @@ async def download_file(attachment: discord.Attachment) -> t.Optional[discord.Fi Download & return `attachment` file. If the download fails, the reason is logged and None will be returned. + 404 and 403 errors are only logged at debug level. """ log.debug(f"Attempting to download attachment: {attachment.filename}") try: return await attachment.to_file() - except discord.NotFound as not_found: - log.debug(f"Failed to download attachment: {not_found}") + except (discord.NotFound, discord.Forbidden) as exc: + log.debug(f"Failed to download attachment: {exc}") except Exception: log.exception("Failed to download attachment")
Update TAXII example It was set up for an old version of the default test data in medallion.
@@ -8,7 +8,7 @@ import stix2 def main(): collection = Collection( - "http://127.0.0.1:5000/trustgroup1/collections/52892447-4d7e-4f70-b94d-d7f22742ff63/", + "http://127.0.0.1:5000/trustgroup1/collections/91a7b528-80eb-42ed-a74d-c6fbd5a26116/", user="admin", password="Password0", ) @@ -16,12 +16,12 @@ def main(): taxii = stix2.TAXIICollectionSource(collection) # get (url watch indicator) - indicator_fw = taxii.get("indicator--00000000-0000-4000-8000-000000000001") + indicator_fw = taxii.get("indicator--6770298f-0fd8-471a-ab8c-1c658a46574e") print("\n\n-------Queried for Indicator - got:") print(indicator_fw.serialize(indent=4)) # all versions (url watch indicator - currently two) - indicator_fw_versions = taxii.all_versions("indicator--00000000-0000-4000-8000-000000000001") + indicator_fw_versions = taxii.all_versions("indicator--6770298f-0fd8-471a-ab8c-1c658a46574e") print("\n\n------Queried for indicator (all_versions()) - got:") for indicator in indicator_fw_versions: print(indicator.serialize(indent=4))
AnimationEditor : Add tooltips for frame, value and interpolation. ref
@@ -436,16 +436,21 @@ class _KeyWidget( GafferUI.GridContainer ) : GafferUI.GridContainer.__init__( self, spacing=4, borderWidth=4 ) + # tool tips + frameToolTip = "# Frame\n\nThe frame of the currently selected keys." + valueToolTip = "# Value\n\nThe value of the currently selected keys." + interpolationToolTip = "# Interpolation\n\nThe interpolation of the currently selected keys." + # create key labels - frameLabel = GafferUI.Label( text="Frame" ) - valueLabel = GafferUI.Label( text="Value" ) - interpolationLabel = GafferUI.Label( text="Interpolation" ) + frameLabel = GafferUI.Label( text="Frame", toolTip=frameToolTip ) + valueLabel = GafferUI.Label( text="Value", toolTip=valueToolTip ) + interpolationLabel = GafferUI.Label( text="Interpolation", toolTip=interpolationToolTip ) # create editors # NOTE: initial value type (e.g. int or float) determines validated value type of widget - self.__frameEditor = GafferUI.NumericWidget( value=int(0) ) - self.__valueEditor = GafferUI.NumericWidget( value=float(0) ) - self.__interpolationEditor = GafferUI.MenuButton() + self.__frameEditor = GafferUI.NumericWidget( value=int(0), toolTip=frameToolTip ) + self.__valueEditor = GafferUI.NumericWidget( value=float(0), toolTip=valueToolTip ) + self.__interpolationEditor = GafferUI.MenuButton( toolTip=interpolationToolTip ) # build key interpolation menu im = IECore.MenuDefinition()
Fix the text.Span.__repr__ method Use `repr` instead of `str` on nested objects, to improve clarity when debugging. Fixes Issue
@@ -57,7 +57,7 @@ class Span(NamedTuple): return ( f"Span({self.start}, {self.end}, {self.style!r})" if (isinstance(self.style, Style) and self.style._meta) - else f"Span({self.start}, {self.end}, {str(self.style)!r})" + else f"Span({self.start}, {self.end}, {repr(self.style)})" ) def __bool__(self) -> bool:
Only process valid connections The `_from_server_socket` routine may return a None which should not be sent to workers to process. for
@@ -216,6 +216,7 @@ class ConnectionManager: if conn is self.server: # New connection new_conn = self._from_server_socket(self.server.socket) + if new_conn is not None: self.server.process_conn(new_conn) else: # unregister connection from the selector until the server
Fix symlink bug in file_path.set_read_only() If read_only is True, then we modify the mode, which makes stat.S_ISLNK() return False, which causes an error in fs.chmod for symlinks. Store the original mode and check that instead.
@@ -888,7 +888,8 @@ def set_read_only(path, read_only): Zaps out access to 'group' and 'others'. """ - mode = fs.lstat(path).st_mode + orig_mode = fs.lstat(path).st_mode + mode = orig_mode # TODO(maruel): Stop removing GO bits. if read_only: mode &= stat.S_IRUSR|stat.S_IXUSR # 0500 @@ -899,7 +900,7 @@ def set_read_only(path, read_only): if hasattr(os, 'lchmod'): fs.lchmod(path, mode) # pylint: disable=E1101 else: - if stat.S_ISLNK(mode): + if stat.S_ISLNK(orig_mode): # Skip symlink without lchmod() support. return
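The failure mode can be reproduced in a few lines on a POSIX system (a standalone sketch, not the original module): masking the mode strips the file-type bits, so `stat.S_ISLNK()` can no longer recognise a symlink.

```python
import os, stat, tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "target")
link = os.path.join(d, "link")
open(target, "w").close()
os.symlink(target, link)

orig_mode = os.lstat(link).st_mode
masked = orig_mode & (stat.S_IRUSR | stat.S_IXUSR)  # what the buggy code tested

print(stat.S_ISLNK(orig_mode))  # True  - file-type bits still present
print(stat.S_ISLNK(masked))     # False - masking removed S_IFLNK, so the symlink is missed
```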
DOC: updated introduction Updated introduction by re-writing the text, adding labels, and fixing the section header style.
+.. _introduction: -============ Introduction ============ -Every scientific instrument has unique properties though the general process for -science data analysis is independent of platform. Find and download the data, -write code to load the data, clean the data, apply custom analysis functions, -and plot the results. The Python Satellite Data Analysis Toolkit (pysat) -provides a framework for this general process that builds upon these -commonalities to simplify adding new instruments, reduce data management -overhead, and enable instrument independent analysis routines. Though pysat was -initially designed for in-situ satellite based measurements it aims to support -all instruments in space science. +Every scientific instrument has unique properties, though the general process +for science data analysis is platform independent. This process can by described +as: finding and downloading data, writing code to load the data, cleaning the +data to an appropriate level, and applying the specific analysis for a project, +and plotting the results. The Python Satellite Data Analysis Toolkit (pysat) +provides a framework to support this general process that builds upon these +commonalities. In doing so, pysat simplifies the process of using new +instruments, reduces data management overhead, and enables the creation of +instrument independent analysis routines. Although pysat was initially designed +for `in situ` satellite measurements, has grown to support both observational +and modelled space science measurements. + +The newest incarnation of pysat has been pared down to focus on the core +elements of our mission: providing a framework for data management and analysis. +The instruments and analysis tools currently supported by the greater pysat +ecosystem can be found in the :ref:`ecosystem` section. -This document covers installation, a tutorial on pysat including demonstration -code, coverage of supported instruments, an overview of adding new instruments -to pysat, and an API reference. -**Logos** +.. _logos: +Logos ----- Does your project use pysat? If so, grab a "powered by pysat" logo!
fix bug in toy sensor Test Plan: ran dagster-daemon Reviewers: dish
@@ -49,7 +49,7 @@ def _wrapped_fn(context): continue fstats = os.stat(filepath) if fstats.st_mtime > since: - fileinfo_since.append(filename, fstats.st_mtime) + fileinfo_since.append((filename, fstats.st_mtime)) result = fn(context, fileinfo_since)
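The error is a plain misuse of `list.append()`, which takes exactly one argument; the standalone sketch below (file name and mtime invented) shows both the failure and the fixed form.

```python
files = []
try:
    files.append("a.txt", 1234567890.0)   # TypeError: append() takes exactly one argument (2 given)
except TypeError as exc:
    print(exc)

files.append(("a.txt", 1234567890.0))     # correct: append a single (name, mtime) tuple
print(files)                              # [('a.txt', 1234567890.0)]
```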
Add distinction between requested times and data times in the EdbMnemomic class Expand docstrings.
@@ -31,6 +31,10 @@ Notes A valid MAST authentication token has to be present in the local ``jwql`` configuration file (config.json). + When querying mnemonic values, the underlying MAST service returns + data that include the datapoint preceding the requested start time + and the datapoint that follows the requested end time. + """ from collections import OrderedDict @@ -72,17 +76,19 @@ class EdbMnemonic: """ self.mnemonic_identifier = mnemonic_identifier - self.start_time = start_time - self.end_time = end_time + self.requested_start_time = start_time + self.requested_end_time = end_time self.data = data + self.data_start_time = Time(np.min(np.array(self.data['MJD'])), format='mjd') + self.data_end_time = Time(np.max(np.array(self.data['MJD'])), format='mjd') self.meta = meta self.info = info def __str__(self): """Return string describing the instance.""" return 'EdbMnemonic {} with {} records between {} and {}'.format( - self.mnemonic_identifier, len(self.data), self.start_time.isot, - self.end_time.isot) + self.mnemonic_identifier, len(self.data), self.data_start_time.isot, + self.data_end_time.isot) def interpolate(self, times, **kwargs): """Interpolate value at specified times.""" @@ -112,7 +118,27 @@ class EdbMnemonic: def get_mnemonic(mnemonic_identifier, start_time, end_time): - """Execute query and return a EdbMnemonic instance.""" + """Execute query and return a EdbMnemonic instance. + + The underlying MAST service returns data that include the + datapoint preceding the requested start time and the datapoint + that follows the requested end time. + + Parameters + ---------- + mnemonic_identifier : str + Telemetry mnemonic identifiers, e.g. 'SA_ZFGOUTFOV' + start_time : astropy.time.Time instance + Start time + end_time : astropy.time.Time instance + End time + + Returns + ------- + mnemonic : instance of EdbMnemonic + EdbMnemonic object containing query results + + """ data, meta, info = query_single_mnemonic(mnemonic_identifier, start_time, end_time, token=MAST_TOKEN)
Adding definition of backup_flags During the upgrade from M to N I encountered an error in a step requiring the upgrade of the mysql version. The variable backup_flags is undefined at that point. Closes-Bug:
@@ -50,6 +50,7 @@ mysql_need_update if [[ -n $(is_bootstrap_node) ]]; then if [ $DO_MYSQL_UPGRADE -eq 1 ]; then + backup_flags="--defaults-extra-file=/root/.my.cnf -u root --flush-privileges --all-databases --single-transaction" mysqldump $backup_flags > "$MYSQL_BACKUP_DIR/openstack_database.sql" cp -rdp /etc/my.cnf* "$MYSQL_BACKUP_DIR" fi
notifications: Switch to use `make_links_absolute()` from lxml library. Instead of using custom regexes for converting relative URLs to absolute URLs switch to using `make_links_absolute()` function from lxml library.
@@ -26,6 +26,7 @@ from zerver.models import ( import datetime from email.utils import formataddr +import lxml.html import re import subprocess import ujson @@ -69,18 +70,10 @@ def topic_narrow_url(realm, stream, topic): def relative_to_full_url(base_url, content): # type: (Text, Text) -> Text - # URLs for uploaded content and avatars are of the form: - # "/user_uploads/abc.png". - # "/avatar/[email protected]?s=30". - # Make them full paths. Explanation for all the regexes below: - # (\=['\"]) matches anything that starts with `=` followed by `"` or `'`. - # ([^\r\n\t\f <]) matches any character which is not a whitespace or `<`. - # ([^<]+>) matches any sequence of characters which does not contain `<` - # and ends in `>`. - # The last positive lookahead ensures that we replace URLs only within a tag. - content = re.sub( - r"(?<=\=['\"])/(user_uploads|avatar)/([^\r\n\t\f <]*)(?=[^<]+>)", - base_url + r"/\1/\2", content) + # Convert relative URLs to absolute URLs. + elem = lxml.html.fromstring(content) + elem.make_links_absolute(base_url) + content = lxml.html.tostring(elem).decode("utf-8") # Inline images can't be displayed in the emails as the request # from the mail server can't be authenticated because it has no @@ -89,24 +82,6 @@ def relative_to_full_url(base_url, content): content = re.sub( r"<img src=(\S+)/user_uploads/(\S+)>", "", content) - # Convert the zulip emoji's relative url to absolute one. - content = re.sub( - r"(?<=\=['\"])/static/generated/emoji/images/emoji/unicode/zulip.png(?=[^<]+>)", - base_url + r"/static/generated/emoji/images/emoji/unicode/zulip.png", - content) - - # Realm emoji should use absolute URLs when referenced in missed-message emails. - content = re.sub( - r"(?<=\=['\"])/user_avatars/(\d+)/emoji/(?=[^<]+>)", - base_url + r"/user_avatars/\1/emoji/", content) - - # Stream links need to be converted from relative to absolute. They - # have href values in the form of "/#narrow/stream/...". - content = re.sub( - r"(?<=\=['\"])/#narrow/stream/(?=[^<]+>)", - base_url + r"/#narrow/stream/", - content) - return content def fix_emojis(content, base_url):
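A minimal usage sketch of `make_links_absolute()` (standalone, with made-up markup and base URL rather than Zulip's) shows how it covers the cases the removed regexes handled, since it rewrites every href/src against the base URL:

```python
import lxml.html

content = '<p><a href="/#narrow/stream/42">stream</a> <img src="/user_uploads/abc.png"></p>'
elem = lxml.html.fromstring(content)
elem.make_links_absolute("https://chat.example.com")
print(lxml.html.tostring(elem).decode("utf-8"))
# Roughly: <p><a href="https://chat.example.com/#narrow/stream/42">stream</a>
#          <img src="https://chat.example.com/user_uploads/abc.png"></p>
```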
Update detect_dga_domains_using_pretrained_model_in_dsdl.yml updating story names
@@ -26,8 +26,7 @@ references: - https://en.wikipedia.org/wiki/Domain_generation_algorithm tags: analytic_story: - - Data Protection - - Prohibited Traffic Allowed or Protocol Mismatch + - Data Exfiltration - DNS Hijacking - Suspicious DNS Traffic - Dynamic DNS
Use a child instead of a background for checked boxes It fixes the style of checkboxes in PDF forms.
@@ -333,12 +333,15 @@ input[type="radio"] { height: 1.2em; width: 1.2em; } -input[type="checkbox"][checked], -input[type="radio"][checked] { - background: black content-box; +input[type="checkbox"][checked]:before, +input[type="radio"][checked]:before { + background: black; + content: ""; + display: block; + height: 100%; } -input[type="color"] { - background: lightgrey; +input[type="radio"][checked]:before { + border-radius: 50%; } input[type="hidden"] { display: none;
get_lldp_neighbors.py edited online with Bitbucket fix broken merge HG-- branch : e_zombie/get_lldp_neighborspy-edited-online-with--1493193154832
@@ -215,61 +215,3 @@ class Script(BaseScript): ------------------------------------------------------------------------------- """ - -<<<<<<< local -======= - device_id = self.scripts.get_fqdn() - # Get neighbors - neighbors = [] - - # try ladvdc - for match in self.rx_ladvdc.finditer(self.cli("ladvdc -L")): - # ladvdc show remote CISCO(!!!) interface -> "Gi1/0/4" - # but cisco.iso profile need remote interface -> "Gi 1/0/4" !!! - # check and convert remote_interface if remote host CISCO - if re.match(check_ifcfg, match.group("remote_interface")): - remote_if = match.group("remote_interface") - else: - remote_if = self.profile.convert_interface_name_cisco(match.group("remote_interface")) - - neighbors += [{ - # "device_id": match.group("device_id"), - "local_interface": match.group("local_interface"), - # "remote_interface": remote_if, - }] - - # try lldpd - r = [] - v = self.cli("lldpcli show neighbors summary") - if "Permission denied" in v: - logging.error("Add <NOCuser> to _lldpd group. Like that ' # usermod -G _lldpd -a <NOCuser> ' ") - return r - - else: - - for match in self.rx_lldpd.finditer(self.cli("lldpcli show neighbors summary")): - if re.match(check_ifcfg, match.group("remote_port")): - remote_if = match.group("remote_port") - else: - remote_if = self.profile.convert_interface_name_cisco(match.group("remote_port")) - - i = {"local_interface": match.group("local_interface"), - "neighbors": [] - } - - # print (match.group("remote_port")) - - n = { - 'remote_capabilities': 4, - "remote_chassis_id": match.group("remote_id"), - 'remote_chassis_id_subtype': 4, - "remote_port": match.group("remote_chassis_id"), - "remote_port_subtype": 3, - "remote_system_name": match.group("remote_system_name"), - } - - i["neighbors"] += [n] - r += [i] - - return r ->>>>>>> other
Add missing add to set I missed this as well when refactoring, and it is probably the cause of the leaking incomplete batches.
@@ -807,6 +807,7 @@ class IncomingBatchQueue: def put(self, batch): if batch.header_signature not in self._ids: + self._ids.add(batch.header_signature) self._queue.put(batch) def get(self, timeout=None):
Update android_generic.txt Port :2222 is related to ```Android.Spy``` malware samples on ```bbb123.ddns.net``` dyn-domain.
@@ -670,10 +670,40 @@ commealamaison1.zapto.org adnab.ir rozup.ir/download/3039645/ +# Reference: https://www.virustotal.com/gui/domain/bbb123.ddns.net/relations # Reference: https://www.virustotal.com/gui/file/153e52d552fdd1b4533d3eb9aa8f59bda645e8a4409b28a336c0cab1d26bd876/detection - -94.49.131.95:2222 - # Reference: https://www.virustotal.com/gui/file/1f2eb62e57e29d27d83d88bfbac654bdbd6772ee7bab981b6930806c550e4b7c/detection - +# Reference: https://www.virustotal.com/gui/file/e321d63c061503d341ba9076a6fa5b85383f7e6ac9f0bf5b4ccbfe68a6f808b3/detection + +159.0.64.216:2222 +159.0.90.166:2222 +178.87.136.11:2222 +178.87.138.222:2222 +178.87.157.88:2222 +178.87.212.96:2222 +2.88.187.83:2222 +2.88.190.5:2222 +51.223.107.14:2222 +51.223.117.108:2222 +51.223.124.255:2222 +51.223.127.88:2222 +51.223.152.150:2222 +51.223.159.160:2222 +51.223.78.70:2222 +51.223.92.246:2222 +51.223.98.156:2222 +79.173.195.249:2222 +92.253.65.44:2222 +93.182.171.21:2222 +94.49.131.95:2222 +94.49.138.66:2222 94.49.143.58:2222 +94.49.156.68:2222 +94.49.175.31:2222 +94.49.191.93:2222 +94.99.92.43:2222 +95.219.144.182:2222 +95.219.152.127:2222 +95.219.187.144:2222 +95.219.230.215:2222 +95.219.255.163:2222
Bug fix: fix an error in `Monitor._get_local_changes` if `last_sync` is None
@@ -1636,13 +1636,14 @@ Any changes to local files during this process may be lost. """) # get modified or added items for path in snapshot.paths: stats = snapshot.stat_info(path) + last_sync = CONF.get("internal", "lastsync") or 0 # check if item was created or modified since last sync dbx_path = self.sync.to_dbx_path(path).lower() is_new = (self.sync.get_local_rev(dbx_path) is None and not self.sync.is_excluded(dbx_path)) is_modified = (self.sync.get_local_rev(dbx_path) and - max(stats.st_ctime, stats.st_mtime) > self.sync.last_sync) + max(stats.st_ctime, stats.st_mtime) > last_sync) if is_new: if osp.isdir(path):
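The guard matters because comparing a float mtime against `None` raises a `TypeError` on Python 3; falling back to `0` via `or` treats a never-synced state as the epoch. A minimal reproduction, independent of the Maestral code:

```python
import time

last_sync = None                      # no previous sync recorded
mtime = time.time()

print(mtime > (last_sync or 0))       # True - everything counts as modified since the epoch
try:
    mtime > last_sync                 # what the comparison effectively did before the fix
except TypeError as exc:
    print(exc)                        # '>' not supported between instances of 'float' and 'NoneType'
```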
message_edit: Remove unnecessary comment. This comment is not required now since topic and stream editing is not allowed from this UI now and is instead done from a modal.
@@ -440,7 +440,6 @@ function edit_message($row, raw_content) { } const is_editable = editability === editability_types.FULL; - // current message's stream has been already been added and selected in Handlebars const $form = $( render_message_edit_form({
parent: reuse _=codecs.decode alias in exec'd first stage SSH command size: 453 (-8 bytes) Preamble size: 8946 (no change)
@@ -323,7 +323,7 @@ class Stream(mitogen.core.Stream): # replaced with the context name. Optimized for size. @staticmethod def _first_stage(): - import os,sys,zlib + import os,sys R,W=os.pipe() r,w=os.pipe() if os.fork(): @@ -337,7 +337,7 @@ class Stream(mitogen.core.Stream): os.environ['ARGV0']=e=sys.executable os.execv(e,['mitogen:CONTEXT_NAME']) os.write(1,'EC0\n') - C=zlib.decompress(sys.stdin.read(input())) + C=_(sys.stdin.read(input()), 'zlib') os.fdopen(W,'w',0).write(C) os.fdopen(w,'w',0).write('%s\n'%len(C)+C) os.write(1,'EC1\n')
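The byte-pinching trick relies on `codecs.decode()` accepting the `'zlib'` bytes-to-bytes codec, so the exec'd bootstrap no longer needs to import `zlib` at all. A standalone check of that equivalence (not the mitogen bootstrap itself):

```python
import codecs, zlib

payload = zlib.compress(b"print('hello from the child process')")
assert codecs.decode(payload, 'zlib') == zlib.decompress(payload)
print("codecs.decode(..., 'zlib') matches zlib.decompress")
```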
FluxTerm now uses data provided by solver, VolTerm modified for use in update with plus sign. The solver passes vec to the evaluator, which ends up - after being put into a variable by the equation - as a parameter for function(); flipped sign for convenience in EUSolver and RK solver.
@@ -32,13 +32,14 @@ class AdvVolDGTerm(Term): if doeval: vols = self.region.domain.cmesh.get_volumes(1) # TODO which dimension do we really want? + # integral over element with constant test # function is just volume of the element out[:] = 0 # out[:, 0, 0, 0] = vols # out[:, 0, 1, 1] = vols / 3.0 - out[:nm.shape(vols)[0], 0, 0, 0] = -vols # TODO how to arrange DOFs into variable vector? - out[nm.shape(vols)[0]:, 0, 0, 0] = -vols / 3.0 + out[:nm.shape(vols)[0], 0, 0, 0] = vols # TODO sign does correspond to standard way solvers expect + out[nm.shape(vols)[0]:, 0, 0, 0] = vols / 3.0 # TODO move to for cycle to add values for higher order approx else: out[:] = 0.0 @@ -69,12 +70,20 @@ class AdvFluxDGTerm(Term): mode=None, term_mode=None, diff_var=None, **kwargs): # varc = self.get_variables(as_list=False)['u'] - u = self.get(state, 'dg', step=-1) - + # ur = self.get(state, 'dg', step=-1) if diff_var is not None: doeval = False + return None, None, doeval else: doeval = True + ur = state.data[0] + # ur = self.get(state, 'dg', step=-1) + + # TODO how to pass order or number of cells to term? + n_cell = self.region.get_n_cells(False) + u = nm.zeros((n_cell, 2)) # 2 is approx order + for i in range(2): + u[:, i] = ur[n_cell * i : n_cell*(i+1)] fargs = u, a[:, :, 0, 0], doeval return fargs @@ -138,7 +147,13 @@ class AdvFluxDGTerm(Term): out[:] = 0.0 # out[:, 0, 0, 0] = (fl - fp)[:, 0, 0] # out[:, 0, 1, 0] = (- fl - fp + intg)[:, 0, 0] - out[:nm.shape(fp)[0], 0, 0, 0] = (fl - fp) # this is how DGField should work - out[nm.shape(fp)[0]:, 0, 0, 0] = (- fl - fp + intg) + flux0 = (fl - fp) # this is how DGField should work + flux1 = (- fl - fp + intg) + + out[:nm.shape(fp)[0], 0, 0, 0] = flux0 + out[nm.shape(fp)[0]:, 0, 0, 0] = flux1 + + # out[:nm.shape(fp)[0], 0, 0, 0] = vols * u[:, 0] - flux0 + # out[nm.shape(fp)[0]:, 0, 0, 0] = vols/3 * u[:, 1] - flux1 status = None return status \ No newline at end of file
Not redrawing all the posts every time a new one is added via websocket. Fixes T323
@@ -34,7 +34,14 @@ socket.on('uscore', function(d){ socket.on('thread', function(data){ socket.emit('subscribe', {target: data.pid}) - document.getElementsByClassName('alldaposts')[0].innerHTML = data.html + document.getElementsByClassName('alldaposts')[0].innerHTML; + var ndata = document.createElement( "div" ); + ndata.innerHTML = data.html; + var x =document.getElementsByClassName('alldaposts')[0]; + + while (ndata.firstChild) { + x.insertBefore(ndata.firstChild ,x.children[0]); + } + icon.rendericons(); })
Corrected import in tutorial example. 'path' was imported instead of 're_path' from django.urls
@@ -233,7 +233,7 @@ Put the following code in ``chat/routing.py``: .. code-block:: python # chat/routing.py - from django.urls import path + from django.urls import re_path from . import consumers
Update xknx.md Typo in the connection_state_changed_cb example. It currently says "connection_state_change_cb" instead of "connection_state_changed_cb". While this is a simple typo, it makes the code fail.
@@ -146,7 +146,7 @@ async def main(): asyncio.run(main()) ``` -An awaitable `connection_state_change_cb` will be called every time the connection state to the gateway changes. Example: +An awaitable `connection_state_changed_cb` will be called every time the connection state to the gateway changes. Example: ```python import asyncio @@ -154,12 +154,12 @@ from xknx import XKNX from xknx.core import XknxConnectionState -async def connection_state_change_cb(state: XknxConnectionState): +async def connection_state_changed_cb(state: XknxConnectionState): print("Callback received with state {0}".format(state.name)) async def main(): - xknx = XKNX(connection_state_change_cb=connection_state_change_cb, daemon_mode=True) + xknx = XKNX(connection_state_changed_cb=connection_state_changed_cb, daemon_mode=True) await xknx.start() await xknx.stop()
SceneInspector : Add tooltip to labels This just shows the same as the label, but can be useful in the case of extremely long names which don't fit fully on the label itself (same approach we use in the NodeEditor).
@@ -772,7 +772,8 @@ class DiffRow( Row ) : label = GafferUI.Label( inspector.name(), horizontalAlignment = GafferUI.Label.HorizontalAlignment.Right, - verticalAlignment = GafferUI.Label.VerticalAlignment.Top + verticalAlignment = GafferUI.Label.VerticalAlignment.Top, + toolTip = inspector.name() ) label._qtWidget().setFixedWidth( 150 )
added the missing padding_strategy in the function squad_convert_examples_to_features
@@ -389,6 +389,7 @@ def squad_convert_examples_to_features( doc_stride, max_query_length, is_training, + padding_strategy="max_length", return_dataset=False, threads=1, tqdm_enabled=True, @@ -439,6 +440,7 @@ def squad_convert_examples_to_features( max_seq_length=max_seq_length, doc_stride=doc_stride, max_query_length=max_query_length, + padding_strategy=padding_strategy, is_training=is_training, ) features = list(
Added information for empty parameter when using Azure CLI in PowerShell * Added information for empty parameter when using Azure CLI in PowerShell Added information to use '""' or --% operator in PowerShell for parameters: public-ip-address nsg * Escaped ' characters Added missing escape sequence * Update _params.py
@@ -289,7 +289,7 @@ def load_arguments(self, _): c.argument('os_disk_size_gb', type=int, help='the size of the os disk in GB', arg_group='Storage') c.argument('availability_set', help='Name or ID of an existing availability set to add the VM to. None by default.') c.argument('vmss', help='Name or ID of an existing virtual machine scale set that the virtual machine should be assigned to. None by default.') - c.argument('nsg', help='The name to use when creating a new Network Security Group (default) or referencing an existing one. Can also reference an existing NSG by ID or specify "" for none.', arg_group='Network') + c.argument('nsg', help='The name to use when creating a new Network Security Group (default) or referencing an existing one. Can also reference an existing NSG by ID or specify "" for none (\'""\' in Azure CLI using PowerShell or --% operator).', arg_group='Network') c.argument('nsg_rule', help='NSG rule to create when creating a new NSG. Defaults to open ports for allowing RDP on Windows and allowing SSH on Linux.', arg_group='Network', arg_type=get_enum_type(['RDP', 'SSH'])) c.argument('application_security_groups', resource_type=ResourceType.MGMT_NETWORK, min_api='2017-09-01', nargs='+', options_list=['--asgs'], help='Space-separated list of existing application security groups to associate with the VM.', arg_group='Network', validator=validate_asg_names_or_ids) c.argument('boot_diagnostics_storage', @@ -677,7 +677,7 @@ def load_arguments(self, _): c.argument('subnet_address_prefix', help='The subnet IP address prefix to use when creating a new VNet in CIDR format.') c.argument('nics', nargs='+', help='Names or IDs of existing NICs to attach to the VM. The first NIC will be designated as primary. If omitted, a new NIC will be created. If an existing NIC is specified, do not specify subnet, VNet, public IP or NSG.') c.argument('private_ip_address', help='Static private IP address (e.g. 10.0.0.5).') - c.argument('public_ip_address', help='Name of the public IP address when creating one (default) or referencing an existing one. Can also reference an existing public IP by ID or specify "" for None.') + c.argument('public_ip_address', help='Name of the public IP address when creating one (default) or referencing an existing one. Can also reference an existing public IP by ID or specify "" for None (\'""\' in Azure CLI using PowerShell or --% operator).') c.argument('public_ip_address_allocation', help=None, default=None, arg_type=get_enum_type(['dynamic', 'static'])) c.argument('public_ip_address_dns_name', help='Globally unique DNS name for a newly created public IP.') if self.supported_api_version(min_api='2017-08-01', resource_type=ResourceType.MGMT_NETWORK):
Fix spurious intermittent failure in test_machines.py::test_status Not sure why the agent_status seems to revert sometimes to 'pending' but the additional checks were basically redundant anyway.
@@ -26,18 +26,17 @@ async def test_status(event_loop): assert machine.agent_status == 'pending' assert not machine.agent_version + # there is some inconsistency in the capitalization of status_message + # between different providers await asyncio.wait_for( - model.block_until(lambda: (machine.status == 'running' and + model.block_until( + lambda: (machine.status == 'running' and + machine.status_message.lower() == 'running' and machine.agent_status == 'started' and - machine.agent_version is not None)), + machine.agent_version is not None and + machine.agent_version.major >= 2)), timeout=480) - assert machine.status == 'running' - # there is some inconsistency in the message case between providers - assert machine.status_message.lower() == 'running' - assert machine.agent_status == 'started' - assert machine.agent_version.major >= 2 - @base.bootstrapped @pytest.mark.asyncio
Fixed input_formatter scoreDiff issue Also removed unnecessary calculations (for now)
@@ -3,11 +3,11 @@ from modelHelpers import feature_creator def get_state_dim_with_features(): - return 198 + return 196 class InputFormatter: - last_score_diff = 0 + last_total_score = 0 """ This is a class that takes in a game_tick_packet and will return an array of that value @@ -54,14 +54,14 @@ class InputFormatter: total_score = enemyTeamScore - ownTeamScore # we subtract so that when they score it becomes negative for this frame # and when we score it is positive - diff_in_score = self.last_score_diff - total_score + diff_in_score = self.last_total_score - total_score score_info.append(diff_in_score) - - extra_features = feature_creator.get_extra_features(game_tick_packet, self.index) + self.last_total_score = total_score + # extra_features = feature_creator.get_extra_features(game_tick_packet, self.index) return np.array(game_info + score_info + player_car + ball_data + self.flattenArrays(team_members) + self.flattenArrays(enemies) + boost_info, dtype=np.float32), \ - np.array(extra_features, dtype=np.float32) + [] def get_player_goals(self, game_tick_packet, index): return game_tick_packet.gamecars[index].Score.Goals
Update dumpstyle.py Added from __future__ import absolute_import, and adjusted import statement for compatibility. Corrected directory location in __main__ routine to be relative to rst2pdf package (os.listdir('styles') wouldn't work if we weren't running from the right directory).
to .style in the styles directory. ''' +from __future__ import absolute_import import sys import os -from rson import loads as rloads +from .rson import loads as rloads from json import loads as jloads def dumps(obj, forcestyledict=True): @@ -155,7 +156,8 @@ def convert(srcname): dstf.write(dstr) dstf.close() - if __name__ == '__main__': - for fname in [os.path.join('styles', x) for x in os.listdir('styles') if x.endswith('.json')]: + _dir = os.path.dirname(sys.argv[0]) + _stylesdir = os.path.join(_dir, 'styles') + for fname in [os.path.join('styles', x) for x in os.listdir(_stylesdir) if x.endswith('.json')]: convert(fname)
Fix string decode issue for python3 A binary representation `"\n" * 5 + "hh".encode('utf-8')` reproduces the error from travis-ci.
@@ -52,7 +52,7 @@ class PythonScript(threading.Thread): else: # something unexpected happend here, this script was supposed to survive at leat the timeout if len(self.err) is not 0: - stderr = "\n" * 5 + self.err + stderr = "\n\n\n\n\n %s" % self.err raise AssertionError(stderr)
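The quoted reproduction is worth spelling out: on Python 3 the captured stderr is `bytes`, so prepending a `str` of newlines raises a `TypeError`, while `%`-formatting it into a string works (at the cost of a `b'...'` rendering). A self-contained version of that check:

```python
err = "hh".encode("utf-8")            # bytes, as stderr comes back from a subprocess

try:
    "\n" * 5 + err                    # the old code path
except TypeError as exc:
    print(exc)                        # can only concatenate str (not "bytes") to str

print("\n\n\n\n\n %s" % err)          # the fixed path; renders the bytes as b'hh'
```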
Fix syntax issues in vsts
@@ -3,13 +3,13 @@ steps: # Fix Git SSL errors pip install certifi python -m certifi > cacert.txt - Write-Host "##vso[task.setvariable variable=GIT_SSL_CAINFO]"$(Get-Content cacert.txt)" + Write-Host "##vso[task.setvariable variable=GIT_SSL_CAINFO]$(Get-Content cacert.txt)" # Shorten paths to get under MAX_PATH or else integration tests will fail # https://bugs.python.org/issue18199 subst T: "$env:TEMP" - Write-Host "##vso[task.setvariable variable=TEMP]"T:\" - Write-Host "##vso[task.setvariable variable=TMP]"T:\" - Get-ChildItem Env + Write-Host "##vso[task.setvariable variable=TEMP]T:\" + Write-Host "##vso[task.setvariable variable=TMP]T:\" + Get-ChildItem Env: D:\.venv\Scripts\pipenv run pytest -n 4 -ra --ignore=pipenv\patched --ignore=pipenv\vendor --junitxml=test-results.xml tests displayName: Run integration tests
ENH: added MetaLabels and reference tags Added a section for MetaLabels and added reference tags to some of the sections.
@@ -22,6 +22,7 @@ General .. automodule:: pysat.instruments.methods.general :members: +.. _api-instrument-template: Instrument Template ------------------- @@ -41,12 +42,24 @@ Files .. autoclass:: pysat.Files :members: +.. _api-meta: + Meta ---- .. autoclass:: pysat.Meta :members: +.. _api-metalabels: + +MetaLabels +---------- + +.. autoclass:: pysat.MetaLabels + :members: + +.. _api-orbits: + Orbits ------
verify that files exist before trying to remove them; win_file.remove raises an exception if the file does not exist
@@ -2832,6 +2832,7 @@ def _findOptionValueInSeceditFile(option): _reader = codecs.open(_tfile, 'r', encoding='utf-16') _secdata = _reader.readlines() _reader.close() + if __salt__['file.file_exists'](_tfile): _ret = __salt__['file.remove'](_tfile) for _line in _secdata: if _line.startswith(option): @@ -2853,7 +2854,9 @@ def _importSeceditConfig(infdata): _tInfFile = '{0}\\{1}'.format(__salt__['config.get']('cachedir'), 'salt-secedit-config-{0}.inf'.format(_d)) # make sure our temp files don't already exist + if __salt__['file.file_exists'](_tSdbfile): _ret = __salt__['file.remove'](_tSdbfile) + if __salt__['file.file_exists'](_tInfFile): _ret = __salt__['file.remove'](_tInfFile) # add the inf data to the file, win_file sure could use the write() function _ret = __salt__['file.touch'](_tInfFile) @@ -2861,7 +2864,9 @@ def _importSeceditConfig(infdata): # run secedit to make the change _ret = __salt__['cmd.run']('secedit /configure /db {0} /cfg {1}'.format(_tSdbfile, _tInfFile)) # cleanup our temp files + if __salt__['file.file_exists'](_tSdbfile): _ret = __salt__['file.remove'](_tSdbfile) + if __salt__['file.file_exists'](_tInfFile): _ret = __salt__['file.remove'](_tInfFile) return True except Exception as e:
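The same guard, condensed into a plain-Python helper for illustration (hypothetical, using `os` directly rather than the Salt `file.*` execution modules used in the diff):

```python
import os

def remove_if_exists(path):
    # win_file.remove-style functions raise when the target is missing,
    # so check for existence first and treat a missing file as a no-op.
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```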
[CI] Add alexnet and googlenet caffe model to request hook This PR intends to move the alexnet and googlenet caffe models from the old link to s3, therefore getting rid of the flakiness in `caffe/test_forward.py` introduced by external url timeouts. Fixes
@@ -37,6 +37,9 @@ URL_MAP = { "http://images.cocodataset.org/zips/val2017.zip": f"{BASE}/cocodataset-val2017.zip", "https://bj.bcebos.com/x2paddle/models/paddle_resnet50.tar": f"{BASE}/bcebos-paddle_resnet50.tar", "https://data.deepai.org/stanfordcars.zip": f"{BASE}/deepai-stanfordcars.zip", + "http://dl.caffe.berkeleyvision.org/bvlc_alexnet.caffemodel": f"{BASE}/bvlc_alexnet.caffemodel", + "http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel": f"{BASE}/bvlc_googlenet.caffemodel", + "https://github.com/dmlc/web-data/blob/main/darknet/data/dog.jpg": f"{BASE}/dog.jpg", }
fix session typo thanks to Vintas Avinash closes
@@ -3016,7 +3016,7 @@ def postStartSetup( self ): ## NOTE: MAKE SURE THIS IS LAST THING CALLED # Start the CLI if enabled if self.appPrefs['startCLI'] == '1': - info( "\n\n NOTE: PLEASE REMEMBER TO EXIT THE CLI BEFORE YOU PRESS THE STOP BUTTON. Not exiting will prevent MiniEdit from quitting and will prevent you from starting the network again during this sessoin.\n\n") + info( "\n\n NOTE: PLEASE REMEMBER TO EXIT THE CLI BEFORE YOU PRESS THE STOP BUTTON. Not exiting will prevent MiniEdit from quitting and will prevent you from starting the network again during this session.\n\n") CLI(self.net) def start( self ):
Add missing closing paren in hint text The hint at the end of the pong tutorial associated with making the game end after a certain score was missing its closing parenthesis.
@@ -455,7 +455,7 @@ you could do: :class:`~kivy.uix.button.Button` and :class:`~kivy.uix.label.Label` classes, and figure out how to use their `add_widget` and `remove_widget` - functions to add or remove widgets dynamically. + functions to add or remove widgets dynamically.) * Make it a 4 player Pong Game. Most tablets have Multi-Touch support, so wouldn't it be cool to have a player on each side and have four
Script to compare the scales calculated from DIALS vs aimless. Also start of code to create a simulated dataset.
@@ -238,7 +238,7 @@ def initialise_absorption_scales(self, reflection_table, lmax): def calc_absorption_constraint(self): n_g_scale = self.n_g_scale_params n_g_decay = self.n_g_decay_params - return (1e5 * (self.active_parameters[n_g_scale + n_g_decay:])**2) + return (1e7 * (self.active_parameters[n_g_scale + n_g_decay:])**2) def expand_scales_to_all_reflections(self): "recalculate scales for reflections in sorted_reflection table" @@ -267,10 +267,11 @@ def clean_reflection_table(self): self.initial_keys.append('inverse_scale_factor') self.initial_keys.append('Ih_values') self.initial_keys.append('asu_miller_index') + self.initial_keys.append('phi') for key in self.reflection_table.keys(): if not key in self.initial_keys: del self.reflection_table[key] - added_columns = ['h_index', 'phi', 's2', 's2d', + added_columns = ['h_index', 's2', 's2d', 'decay_factor', 'angular_scale_factor', 'normalised_rotation_angle', 'normalised_time_values', 'wilson_outlier_flag', 'centric_flag', 'absorption_factor']
Make test assertions match the test name more closely This updates test_editable_vcs_install_in_pipfile_with_dependency_resolution_doesnt_traceback to check (a) that dependency resolution was triggered, and (b) that there was no traceback (rather than just the specific traceback we are currently seeing).
@@ -456,8 +456,8 @@ requests = {git = "https://github.com/requests/requests.git", editable = true} f.write(contents) c = p.pipenv('install') assert c.return_code == 1 - assert 'FileNotFoundError' not in c.out - assert 'FileNotFoundError' not in c.err + assert "Your dependencies could not be resolved" in c.err + assert 'Traceback' not in c.err @pytest.mark.run
docs: Document how to use Zulip behind an haproxy reverse proxy. With significant rearrangement by tabbott to have more common text between different proxy implementations.
@@ -74,6 +74,43 @@ those providers, Zulip's full-text search will be unavailable. ## Putting the Zulip application behind a reverse proxy Zulip is designed to support being run behind a reverse proxy server. +This section contains notes on the configuration required with +variable reverse proxy implementations. + +### Installer options + +If your Zulip server will not be on the public Internet, we recommend, +installing with the `--self-signed-cert` option (rather than the +`--certbot` option), since CertBot requires the server to be on the +public Internet. + +#### Configuring Zulip to allow HTTP + +Depending on your environment, you may want the reverse proxy to talk +to the Zulip server over HTTP; this can be secure when the Zulip +server is not directly exposed to the public Internet. + +After installing the Zulip server as +[described above](#installer-options), you can configure Zulip to talk +HTTP as follows: + +1. Add the following block to `/etc/zulip/zulip.conf`: + + ``` + [application_server] + http_only = true + ``` + +1. As root, run +`/home/zulip/deployments/current/scripts/zulip-puppet-apply`. This +will convert Zulip's main `nginx` configuration file to allow HTTP +instead of HTTPS. + +1. Finally, restart the Zulip server, using +`/home/zulip/deployments/current/scripts/restart-server`. + +### nginx configuration + There are few things you need to be careful about when configuring a reverse proxy: @@ -116,3 +153,20 @@ available via the `zulip::nginx` puppet module). [zulipchat-puppet]: https://github.com/zulip/zulip/tree/master/puppet/zulip_ops/manifests [nginx-loadbalancer]: https://github.com/zulip/zulip/blob/master/puppet/zulip_ops/files/nginx/sites-available/loadbalancer +### HAProxy configuration + +If you want to use HAProxy with Zulip, this `backend` config is a good +place to start. + +``` +backend zulip + mode http + balance leastconn + http-request set-header X-Client-IP %[src] + reqadd X-Forwarded-Proto:\ https + server zulip 10.10.10.10:80 check +``` + +Since this configuration uses the `http` mode, you will also need to +[configure Zulip to allow HTTP](#configuring-zulip-to-allow-http) as +described above.
tools/downloader/downloader.py: use the correct exit code on failure I accidentally removed this in
@@ -269,6 +269,7 @@ def main(): reporter.print('FAILED:') for failed_model_name in failed_models: reporter.print(failed_model_name) + sys.exit(1) if __name__ == '__main__': main()
Using raw ynode instead Since we don't have jenkins.yaml
@@ -13,7 +13,7 @@ commit = '' ircMsgResult(CHANNELS) { ystage('Test') { - ynode.forConfiguredHostType(ownerName: 'Yelp', repoName: 'paasta') { + ynode { ensureCleanWorkspace { commit = clone( PACKAGE_NAME,
tests/mechanism/RecurrentTransferMechanism: Enable LLVM tests The existing tests are simple enough for parent TransferMechanism execution. Recurrent projection is not used and the mechanism is reset each time.
@@ -94,28 +94,30 @@ class TestRecurrentTransferMechanismInputs: @pytest.mark.mechanism @pytest.mark.recurrent_transfer_mechanism @pytest.mark.benchmark(group="RecurrentTransferMechanism") - def test_recurrent_mech_inputs_list_of_ints(self, benchmark): + @pytest.mark.parametrize('mode', ['Python', 'LLVM']) + def test_recurrent_mech_inputs_list_of_ints(self, benchmark, mode): R = RecurrentTransferMechanism( name='R', default_variable=[0, 0, 0, 0] ) - val = R.execute([10, 12, 0, -1]) + val = R.execute([10, 12, 0, -1], bin_execute=(mode=='LLVM')) np.testing.assert_allclose(val, [[10.0, 12.0, 0, -1]]) - val = R.execute([1, 2, 3, 0]) + val = R.execute([1, 2, 3, 0], bin_execute=(mode=='LLVM')) np.testing.assert_allclose(val, [[1, 2, 3, 0]]) # because recurrent projection is not used when executing: mech is reset each time - benchmark(R.execute, [1, 2, 3, 0]) + benchmark(R.execute, [1, 2, 3, 0], bin_execute=(mode=='LLVM')) @pytest.mark.mechanism @pytest.mark.recurrent_transfer_mechanism @pytest.mark.benchmark(group="RecurrentTransferMechanism") - def test_recurrent_mech_inputs_list_of_floats(self, benchmark): + @pytest.mark.parametrize('mode', ['Python', 'LLVM']) + def test_recurrent_mech_inputs_list_of_floats(self, benchmark, mode): R = RecurrentTransferMechanism( name='R', size=4 ) - val = R.execute([10.0, 10.0, 10.0, 10.0]) + val = R.execute([10.0, 10.0, 10.0, 10.0], bin_execute=(mode=='LLVM')) np.testing.assert_allclose(val, [[10.0, 10.0, 10.0, 10.0]]) - benchmark(R.execute, [1, 2, 3, 0]) + benchmark(R.execute, [1, 2, 3, 0], bin_execute=(mode=='LLVM')) # def test_recurrent_mech_inputs_list_of_fns(self): # R = RecurrentTransferMechanism( @@ -133,14 +135,15 @@ class TestRecurrentTransferMechanismInputs: @pytest.mark.mechanism @pytest.mark.recurrent_transfer_mechanism @pytest.mark.benchmark(group="RecurrentTransferMechanism") - def test_recurrent_mech_no_inputs(self, benchmark): + @pytest.mark.parametrize('mode', ['Python', 'LLVM']) + def test_recurrent_mech_no_inputs(self, benchmark, mode): R = RecurrentTransferMechanism( name='R' ) np.testing.assert_allclose(R.instance_defaults.variable, [[0]]) - val = R.execute([10]) + val = R.execute([10], bin_execute=(mode=='LLVM')) np.testing.assert_allclose(val, [[10.]]) - benchmark(R.execute, [1]) + benchmark(R.execute, [1], bin_execute=(mode=='LLVM')) def test_recurrent_mech_inputs_list_of_strings(self): with pytest.raises(UtilitiesError) as error_text:
[libjpeg] Update checksums for tarballs The Independent JPEG Group uploaded new tarballs that removed some control characters from the beginning of some build files.
sources: "9c": url: "http://ijg.org/files/jpegsrc.v9c.tar.gz" - sha256: "650250979303a649e21f87b5ccd02672af1ea6954b911342ea491f351ceb7122" + sha256: "1e9793e1c6ba66e7e0b6e5fe7fd0f9e935cc697854d5737adec54d93e5b3f730" "9d": url: "http://ijg.org/files/jpegsrc.v9d.tar.gz" - sha256: "99cb50e48a4556bc571dadd27931955ff458aae32f68c4d9c39d624693f69c32" + sha256: "6c434a3be59f8f62425b2e3c077e785c9ce30ee5874ea1c270e843f273ba71ee" patches: "9c": - patch_file: "patches/0001-libjpeg-add-msvc-dll-support.patch"
Update install.py hotfix
@@ -81,7 +81,7 @@ def linux_installation(): # Make the binary executable for path, _, _ in os.walk(os.path.join(worlds_path, "LinuxDefaultWorlds")): os.chmod(path, 0o777) - binary_path = os.path.join(worlds_path, "LinuxDefaultWorlds/LinuxNoEditor/Holodeck/Binaries/Linux/holodeck") + binary_path = os.path.join(worlds_path, "LinuxDefaultWorlds/LinuxNoEditor/Holodeck/Binaries/Linux/Holodeck") os.chmod(binary_path, 0o755) print("To continue installation, follow instructions on the github page")
Replace factory reset with select-all and delete. For developers with custom system scripts folders, as described in the DEBUGGING.md document, the factory reset was killing off the glTF addon itself.
@@ -25,7 +25,9 @@ try: filepath = argv[0] - bpy.ops.wm.read_factory_settings(use_empty=True) + bpy.ops.object.select_all(action='SELECT') + bpy.ops.object.delete(use_global=False) + bpy.ops.import_scene.gltf(filepath=argv[0]) extension = '.gltf'
[query] Don't broadcast the contigRecoding if there are no contigs We end up with tens of thousands of miniscule broadcasts across thousands of executors, the management of which seems to take signifigant time.
@@ -1465,14 +1465,15 @@ class PartitionedVCFRDD( @(transient@param) _partitions: Array[Partition] ) extends RDD[String](SparkBackend.sparkContext("PartitionedVCFRDD"), Seq()) { - val contigRemappingBc = sparkContext.broadcast(reverseContigMapping) + val contigRemappingBc = if (reverseContigMapping.size != 0) sparkContext.broadcast(reverseContigMapping) else null protected def getPartitions: Array[Partition] = _partitions def compute(split: Partition, context: TaskContext): Iterator[String] = { val p = split.asInstanceOf[PartitionedVCFPartition] - val chromToQuery = contigRemappingBc.value.getOrElse(p.chrom, p.chrom) + val chromToQuery = if (contigRemappingBc != null) contigRemappingBc.value.getOrElse(p.chrom, p.chrom) else p.chrom + val reg = { val r = new TabixReader(file, fsBc.value) val tid = r.chr2tid(chromToQuery)
Fix lightweight themes data in search API Fix mozilla/addons-frontend#1964
@@ -421,7 +421,9 @@ class ESBaseAddonSerializer(BaseESSerializer): display_username=persona_data['author'], header=persona_data['header'], footer=persona_data['footer'], - persona_id=1 if persona_data['is_new'] else None, + # "New" Persona do not have a persona_id, it's an old relic + # from old ones. + persona_id=0 if persona_data['is_new'] else 42, textcolor=persona_data['textcolor'] ) else:
exe_test: Use assertNotIn Use assertNotIn instead of assertFalse(x in y) because it will give better error messages.
@@ -467,7 +467,7 @@ class TestExecutorTest(unittest.TestCase): self.assertTrue(record.outcome, Outcome.FAIL) # Verify phase_one was not run ran_phase = [phase.name for phase in record.phases] - self.assertFalse('phase_one' in ran_phase) + self.assertNotIn('phase_one', ran_phase) # Teardown function should be executed. self.assertTrue(ev.wait(1)) executor.close() @@ -494,7 +494,7 @@ class TestExecutorTest(unittest.TestCase): self.assertTrue(record.outcome, Outcome.FAIL) # Verify phase_one was not run ran_phase = [phase.name for phase in record.phases] - self.assertFalse('phase_one' in ran_phase) + self.assertNotIn('phase_one', ran_phase) # Teardown function should be executed. self.assertTrue(ev.wait(1)) executor.close()
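A small illustrative test case (not from the OpenHTF suite) showing why the failure output differs between the two assertion styles:

```python
import unittest

class ErrorMessageDemo(unittest.TestCase):
    def test_assert_false(self):
        ran_phase = ['phase_one', 'phase_two']
        # On failure this only reports "True is not false".
        self.assertFalse('phase_one' in ran_phase)

    def test_assert_not_in(self):
        ran_phase = ['phase_one', 'phase_two']
        # On failure this reports the offending value and the container:
        # "'phase_one' unexpectedly found in ['phase_one', 'phase_two']".
        self.assertNotIn('phase_one', ran_phase)

if __name__ == '__main__':
    unittest.main()
```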
[varLib.featureVars] Fix overlap remainder logic Fixes
@@ -187,7 +187,8 @@ def overlayBox(top, bot): # Remainder is empty if bot's each axis range lies within that of intersection. # # Remainder is shrank if bot's each, except for exactly one, axis range lies - # within that of intersection. + # within that of intersection, and that one axis, it spills out of the + # intersection only on one side. # # Bot is returned in full as remainder otherwise, as true remainder is not # representable as a single box. @@ -198,11 +199,11 @@ def overlayBox(top, bot): for axisTag in bot: if axisTag not in intersection: fullyInside = False - continue # Lies fully within + continue # Axis range lies fully within min1, max1 = intersection[axisTag] min2, max2 = bot[axisTag] if min1 <= min2 and max2 <= max1: - continue # Lies fully within + continue # Axis range lies fully within # Bot's range doesn't fully lie within that of top's for this axis. # We know they intersect, so it cannot lie fully without either; so they @@ -216,11 +217,11 @@ def overlayBox(top, bot): fullyInside = False # Otherwise, cut remainder on this axis and continue. - if max1 < max2: + if min1 <= min2: # Right side survives. minimum = max(max1, min2) maximum = max2 - elif min2 < min1: + elif max2 <= max1: # Left side survives. minimum = min2 maximum = min(min1, max2)
Update outdated CustomLoader code example Python 3 syntax for the CustomLoader example
@@ -55,8 +55,8 @@ class BaseLoader: if not exists(path): raise TemplateNotFound(template) mtime = getmtime(path) - with file(path) as f: - source = f.read().decode('utf-8') + with open(path) as f: + source = f.read() return source, path, lambda: mtime == getmtime(path) """
cythonize --no-docstrings * cythonize -D, --no-docstrings Add `-D, --no-docstrings` option for the cythonize script. * remove the short `-D` option, remaining only `--no-docstrings`
@@ -94,6 +94,9 @@ def cython_compile(path_pattern, options): # assume it's a file(-like thing) paths = [path] + if options.no_docstrings: + Options.docstrings = False + ext_modules = cythonize( paths, nthreads=options.parallel, @@ -194,6 +197,8 @@ def parse_args(args): help='increase Python compatibility by ignoring some compile time errors') parser.add_option('-k', '--keep-going', dest='keep_going', action='store_true', help='compile as much as possible, ignore compilation failures') + parser.add_option('--no-docstrings', dest='no_docstrings', action='store_true', + help='strip docstrings') options, args = parser.parse_args(args) if not args:
circleci: Use the joy of `os.makedirs(..., exist_ok=True)`. Since Python 3.2, we no longer need to write this little wrapper all over our own code! There was much rejoicing.
#!/usr/bin/env python3 -import errno import os import yaml -def generate_dockerfile_directories(dockerfile_path: str) -> None: - if not os.path.exists(os.path.dirname(dockerfile_path)): - try: - os.makedirs(os.path.dirname(dockerfile_path)) - except OSError as e: - if e.errno != errno.EEXIST: - raise - if __name__ == "__main__": os.chdir(os.path.abspath(os.path.dirname(__file__))) @@ -30,6 +21,6 @@ if __name__ == "__main__": dockerfile_content = docker_template.format_map(dockerfile_settings[distro]) dockerfile_path = "images/{}/Dockerfile".format(distro) - generate_dockerfile_directories(dockerfile_path) + os.makedirs(os.path.dirname(dockerfile_path), exist_ok=True) with open(dockerfile_path, "w") as f: f.write(dockerfile_content)
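Side by side, the old wrapper and its one-line replacement (the path below is a hypothetical example):

```python
import errno
import os

dockerfile_path = 'images/bionic/Dockerfile'  # hypothetical path

# Pre-Python-3.2 idiom: create the directory and swallow EEXIST by hand.
try:
    os.makedirs(os.path.dirname(dockerfile_path))
except OSError as e:
    if e.errno != errno.EEXIST:
        raise

# Python 3.2+: one call, a no-op when the directory already exists.
os.makedirs(os.path.dirname(dockerfile_path), exist_ok=True)
```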
tools/downloader/README.md: move the text about alternatives to --all It makes more sense to explain this right after --all is introduced.
@@ -67,17 +67,18 @@ The basic usage is to run the script like this: ./downloader.py --all ``` -This will download all models into a directory tree rooted in the current -directory. To download into a different directory, use the `-o`/`--output_dir` -option: +This will download all models. The `--all` option can be replaced with +other filter options to download only a subset of models. See the "Shared options" +section. + +By default, the script will download models into a directory tree rooted +in the current directory. To download into a different directory, use +the `-o`/`--output_dir` option: ```sh ./downloader.py --all --output_dir my/download/directory ``` -The `--all` option can be replaced with other filter options to download only -a subset of models. See the "Shared options" section. - You may use `--precisions` flag to specify comma separated precisions of weights to be downloaded. @@ -221,6 +222,9 @@ This will convert all models into the Inference Engine IR format. Models that were originally in that format are ignored. Models in PyTorch and Caffe2 formats will be converted in ONNX format first. +The `--all` option can be replaced with other filter options to convert only +a subset of models. See the "Shared options" section. + The current directory must be the root of a download tree created by the model downloader. To specify a different download tree path, use the `-d`/`--download_dir` option: @@ -237,9 +241,6 @@ into a different directory tree, use the `-o`/`--output_dir` option: ``` >Note: models in intermediate format are placed to this directory too. -The `--all` option can be replaced with other filter options to convert only -a subset of models. See the "Shared options" section. - By default, the script will produce models in every precision that is supported for conversion. To only produce models in a specific precision, use the `--precisions` option:
PrimitiveVariablesTest: check geometric interpretation The interpretation needs to make it through to the primitive variable, and the interpretation needs to be updated properly (tests the hashing)
@@ -68,5 +68,42 @@ class PrimitiveVariablesTest( GafferSceneTest.SceneTestCase ) : del o2["a"] self.assertEqual( o1, o2 ) + def testGeometricInterpretation( self ) : + + s = GafferScene.Sphere() + p = GafferScene.PrimitiveVariables() + p["in"].setInput( s["out"] ) + + p["primitiveVariables"].addMember( "myFirstData", IECore.V3fData( IECore.V3f( 0 ), IECore.GeometricData.Interpretation.Vector ) ) + p["primitiveVariables"].addMember( "mySecondData", IECore.V3fData( IECore.V3f( 0 ), IECore.GeometricData.Interpretation.Normal ) ) + p["primitiveVariables"].addMember( "myThirdData", IECore.V3fData( IECore.V3f( 0 ), IECore.GeometricData.Interpretation.Point ) ) + + o = p["out"].object( "/sphere" ) + + # test if the geometric interpretation makes it into the primitive variable + self.assertEqual( o["myFirstData"].data.getInterpretation(), IECore.GeometricData.Interpretation.Vector ) + self.assertEqual( o["mySecondData"].data.getInterpretation(), IECore.GeometricData.Interpretation.Normal ) + self.assertEqual( o["myThirdData"].data.getInterpretation(), IECore.GeometricData.Interpretation.Point ) + + del o["myFirstData"] + del o["mySecondData"] + del o["myThirdData"] + + self.assertFalse( 'myFirstData' in o ) + self.assertFalse( 'mySecondData' in o ) + self.assertFalse( 'myThirdData' in o ) + + p["primitiveVariables"].addMember( "myFirstData", IECore.V3fData( IECore.V3f( 0 ), IECore.GeometricData.Interpretation.Point ) ) + p["primitiveVariables"].addMember( "mySecondData", IECore.V3fData( IECore.V3f( 0 ), IECore.GeometricData.Interpretation.Vector ) ) + p["primitiveVariables"].addMember( "myThirdData", IECore.V3fData( IECore.V3f( 0 ), IECore.GeometricData.Interpretation.Normal ) ) + + o = p["out"].object( "/sphere" ) + + # test if the new geometric interpretation makes it into the primitive variable + # this tests the hashing on the respective plugs + self.assertEqual( o["myFirstData"].data.getInterpretation(), IECore.GeometricData.Interpretation.Point ) + self.assertEqual( o["mySecondData"].data.getInterpretation(), IECore.GeometricData.Interpretation.Vector ) + self.assertEqual( o["myThirdData"].data.getInterpretation(), IECore.GeometricData.Interpretation.Normal ) + if __name__ == "__main__": unittest.main()
don't use HiDPI icons on Linux This fixes a too-small tray icon in GNOME with HiDPI scaling, at the cost of the other icons being pixelated
@@ -338,6 +338,10 @@ class MaestralApp(QtWidgets.QSystemTrayIcon): def run(): QtCore.QCoreApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling) app = QtWidgets.QApplication(["Maestral"]) + if platform.system() == "Darwin": + # Fixes a Qt bug where the tray icon is too small on GNOME with HiDPI + # scaling enabled. As a trade off, we get low resolution pixmaps in the rest of + # the UI - but this is better than a bad tray icon. app.setAttribute(QtCore.Qt.AA_UseHighDpiPixmaps) app.setQuitOnLastWindowClosed(False)
BUG: fix raised exception in Granger causality test Issue Fix for the appropriate behavior when passing a list containing a zero lag to `maxlag` (e.g. `[0, 1, 2]`); it now raises `ValueError`
@@ -1336,18 +1336,17 @@ def grangercausalitytests(x, maxlag, addconst=True, verbose=True): addconst = bool_like(addconst, "addconst") verbose = bool_like(verbose, "verbose") try: + maxlag = int_like(maxlag, "maxlag") + if maxlag <= 0: + raise ValueError("maxlag must a a positive integer") + lags = np.arange(1, maxlag + 1) + except TypeError: lags = np.array([int(lag) for lag in maxlag]) maxlag = lags.max() if lags.min() <= 0 or lags.size == 0: raise ValueError( "maxlag must be a non-empty list containing only " - "positive integers" - ) - except Exception: - maxlag = int_like(maxlag, "maxlag") - if maxlag <= 0: - raise ValueError("maxlag must a a positive integer") - lags = np.arange(1, maxlag + 1) + "positive integers") if x.shape[0] <= 3 * maxlag + int(addconst): raise ValueError(
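A standalone sketch of the validation order the fix establishes (simplified, without statsmodels' `int_like` helper): try the scalar path first so that its `ValueError` propagates, and fall back to the list path only on a `TypeError`:

```python
import numpy as np

def normalize_maxlag(maxlag):
    try:
        maxlag = int(maxlag)                 # raises TypeError for a list input
        if maxlag <= 0:
            raise ValueError("maxlag must be a positive integer")
        return np.arange(1, maxlag + 1)
    except TypeError:
        lags = np.array([int(lag) for lag in maxlag])
        if lags.size == 0 or lags.min() <= 0:
            raise ValueError("maxlag must be a non-empty list containing "
                             "only positive integers") from None
        return lags

print(normalize_maxlag(3))        # [1 2 3]
print(normalize_maxlag([1, 2]))   # [1 2]
normalize_maxlag([0, 1, 2])       # ValueError, as the fix intends
```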
small fixes removed a TODO changed separator in one test
@@ -121,7 +121,6 @@ Define a boolean variable BEFORE the for loop. Then change its value INSIDE the """ # TODO: MessageStep: catch the "obvious solution" where the user adds the separator after the last word? - # TODO: is overriding generate_inputs needed? def solution(self, words: List[str], separator: str): total = '' @@ -137,7 +136,7 @@ Define a boolean variable BEFORE the for loop. Then change its value INSIDE the tests = [ ((['This', 'is', 'a', 'list'], ' - '), 'This - is - a - list'), - ((['The', 'quick', 'brown', 'fox', 'jumps'], ' - '), 'The - quick - brown - fox - jumps'), + ((['The', 'quick', 'brown', 'fox', 'jumps'], '**'), 'The**quick**brown**fox**jumps'), ] final_text = """
WL: add Core.simulate_keypress This lets `Qtile.cmd_simulate_keypress` work when using the Wayland backend. The key is only processed internally for keybinds and, if required, passed to a focussed `Internal` window.
@@ -687,3 +687,15 @@ class Core(base.Core, wlrq.HasListeners): def keysym_from_name(self, name: str) -> int: """Get the keysym for a key from its name""" return xkb.keysym_from_name(name, case_insensitive=True) + + def simulate_keypress(self, modifiers: List[str], key: str) -> None: + #"""Simulates a keypress on the focused window.""" + keysym = xkb.keysym_from_name(key, case_insensitive=True) + mods = wlrq.translate_masks(modifiers) + + if (keysym, mods) in self.grabbed_keys: + self.qtile.process_key_event(keysym, mods) + return + + if self.focused_internal: + self.focused_internal.process_key_press(keysym)
Display NVCC version in CI for convenience Summary: Pull Request resolved:
@@ -53,6 +53,11 @@ gcc --version echo "CMake version:" cmake --version +if [[ "$BUILD_ENVIRONMENT" == *cuda* ]]; then + echo "NVCC version:" + nvcc --version +fi + # TODO: Don't run this... pip_install -r requirements.txt || true
Fix a super annoying validation issue It was throwing an opaque "too many values to unpack" error; it simply needed the name of the field.
@@ -49,7 +49,9 @@ class InvenTreeMoneySerializer(MoneyField): if amount is not None: amount = Decimal(amount) except: - raise ValidationError(_("Must be a valid number")) + raise ValidationError({ + self.field_name: _("Must be a valid number") + }) currency = data.get(get_currency_field_name(self.field_name), self.default_currency)
Remove gw_port expire call This was initially added as part of the patch here: However, it doesn't serve a purpose anymore because nothing references the gateway port DB object before it is deleted. Closes-Bug:
@@ -436,11 +436,9 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase, def _delete_router_gw_port_db(self, context, router): with context.session.begin(subtransactions=True): - gw_port = router.gw_port router.gw_port = None if router not in context.session: context.session.add(router) - context.session.expire(gw_port) try: kwargs = {'context': context, 'router_id': router.id} registry.notify(
use the [...]= operator instead of [()] or [:] The () form was apparently more of a nonstandard legacy thing, ... is more recent, and : is avoided due to the 0-d array assignment problem
@@ -634,7 +634,7 @@ class CPUCodeGenerator(PyGen): def generate_op(self, op, out, *args): recv_id = len(self.recv_nodes) self.recv_nodes.append(op) - self.append("{}[()] = self.recv_from_queue_send({})", out, recv_id) + self.append("{}[...] = self.recv_from_queue_send({})", out, recv_id) @generate_op.on_type(CPUQueueGatherSendOp) def generate_op(self, op, out, *args): @@ -646,7 +646,7 @@ class CPUCodeGenerator(PyGen): def generate_op(self, op, out, *args): gather_recv_id = len(self.gather_recv_nodes) self.gather_recv_nodes.append(op) - self.append("{}[:] = self.gather_recv_from_queue_gather_send({})", out, gather_recv_id) + self.append("{}[...] = self.gather_recv_from_queue_gather_send({})", out, gather_recv_id) @generate_op.on_type(CPUQueueScatterSendOp) def generate_op(self, op, out, *args): @@ -658,7 +658,8 @@ class CPUCodeGenerator(PyGen): def generate_op(self, op, out, *args): scatter_recv_id = len(self.scatter_recv_nodes) self.scatter_recv_nodes.append(op) - self.append("{}[:] = self.scatter_recv_from_queue_scatter_send({})", out, scatter_recv_id) + self.append("{}[...] = self.scatter_recv_from_queue_scatter_send({})", + out, scatter_recv_id) class CPUTransformer(Transformer):
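A quick illustration (with hypothetical arrays) of why `[...]` is the safe choice: it works for 0-d and n-d outputs alike, whereas `[:]` fails on 0-d arrays and `[()]` is the older, less standard spelling:

```python
import numpy as np

vector_out = np.zeros(4)
scalar_out = np.array(0.0)          # 0-d array

vector_out[...] = [1, 2, 3, 0]      # fine, same effect as vector_out[:] = ...
scalar_out[...] = 7.0               # fine for 0-d arrays too
scalar_out[()] = 8.0                # legacy spelling, also works

try:
    scalar_out[:] = 9.0             # slicing a 0-d array is not allowed
except IndexError as e:
    print(e)
```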
Remove unnecessary "distinct" clause in Node.objects.can_view. Forming queryset where node id is in list of node ids. List of node ids can have duplicates.
@@ -137,12 +137,12 @@ class AbstractNodeQuerySet(GuidMixinQuerySet): qs |= self.filter(private_links__is_deleted=False, private_links__key=private_link) if user is not None and not isinstance(user, AnonymousUser): - read_user_query = get_objects_for_user(user, 'read_node', self) + read_user_query = get_objects_for_user(user, 'read_node', self, with_superuser=False) qs |= read_user_query qs |= self.extra(where=[""" "osf_abstractnode".id in ( WITH RECURSIVE implicit_read AS ( - SELECT DISTINCT N.id as node_id + SELECT N.id as node_id FROM osf_abstractnode as N, auth_permission as P, osf_nodegroupobjectpermission as G, osf_osfuser_groups as UG WHERE P.codename = 'admin_node' AND G.permission_id = P.id
Avoid reconnect prompt after error if connection is still valid Instead of whitelisting all errors that do not require reconnecting, we simply only reconnect if we detect a disconnect has occurred. psql notably behaves in a similar way: Fixes
@@ -354,21 +354,16 @@ class PGExecute(object): def _must_raise(self, e): """Return true if e is an error that should not be caught in ``run``. - ``OperationalError``s are raised for errors that are not under the - control of the programmer. Usually that means unexpected disconnects, - which we shouldn't catch; we handle uncaught errors by prompting the - user to reconnect. We *do* want to catch OperationalErrors caused by a - lock being unavailable, as reconnecting won't solve that problem. + An uncaught error will prompt the user to reconnect; as long as we + detect that the connection is stil open, we catch the error, as + reconnecting won't solve that problem. :param e: DatabaseError. An exception raised while executing a query. :return: Bool. True if ``run`` must raise this exception. """ - return (isinstance(e, psycopg2.OperationalError) and - (not e.pgcode or - psycopg2.errorcodes.lookup(e.pgcode) not in - ('LOCK_NOT_AVAILABLE', 'CANT_CHANGE_RUNTIME_PARAM'))) + return self.conn.closed != 0 def execute_normal_sql(self, split_sql): """Returns tuple (title, rows, headers, status)"""
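A minimal sketch of the new check, assuming a plain psycopg2 connection (the DSN is hypothetical): psycopg2 flips `connection.closed` to a non-zero value once it notices the server connection is gone, so that flag alone decides whether an error should bubble up to the reconnect prompt.

```python
import psycopg2

conn = psycopg2.connect('dbname=test')     # hypothetical DSN

def must_raise(conn):
    # 0 means the connection is still usable; anything else means a disconnect.
    return conn.closed != 0

try:
    with conn.cursor() as cur:
        cur.execute('SELECT 1')
except psycopg2.DatabaseError:
    if must_raise(conn):
        raise                # real disconnect: let the caller offer to reconnect
    # otherwise (e.g. a lock not available) report the error and keep the session
```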
fix image export bug fixes
@@ -23,8 +23,8 @@ class ImageExporter(Exporter): bg.setAlpha(0) self.params = Parameter(name='params', type='group', children=[ - {'name': 'width', 'type': 'int', 'value': tr.width(), 'limits': (0, None)}, - {'name': 'height', 'type': 'int', 'value': tr.height(), 'limits': (0, None)}, + {'name': 'width', 'type': 'int', 'value': int(tr.width()), 'limits': (0, None)}, + {'name': 'height', 'type': 'int', 'value': int(tr.height()), 'limits': (0, None)}, {'name': 'antialias', 'type': 'bool', 'value': True}, {'name': 'background', 'type': 'color', 'value': bg}, ]) @@ -34,12 +34,12 @@ class ImageExporter(Exporter): def widthChanged(self): sr = self.getSourceRect() ar = float(sr.height()) / sr.width() - self.params.param('height').setValue(self.params['width'] * ar, blockSignal=self.heightChanged) + self.params.param('height').setValue(int(self.params['width'] * ar), blockSignal=self.heightChanged) def heightChanged(self): sr = self.getSourceRect() ar = float(sr.width()) / sr.height() - self.params.param('width').setValue(self.params['height'] * ar, blockSignal=self.widthChanged) + self.params.param('width').setValue(int(self.params['height'] * ar), blockSignal=self.widthChanged) def parameters(self): return self.params
fix JNI wrapper for IValue interface change Summary: Pull Request resolved: Seems CI was broken by PR - fix based on interface change. Test Plan: - build locally
@@ -324,7 +324,7 @@ class JIValue : public facebook::jni::JavaClass<JIValue> { return jMethodListArr(JIValue::javaClassStatic(), jArray); } else if (ivalue.isGenericDict()) { auto dict = ivalue.toGenericDict(); - const auto keyType = dict._keyType(); + const auto keyType = dict.keyType(); if (!keyType) { facebook::jni::throwNewJavaException( @@ -332,7 +332,7 @@ class JIValue : public facebook::jni::JavaClass<JIValue> { "Unknown IValue-Dict key type"); } - const auto keyTypeKind = keyType.value()->kind(); + const auto keyTypeKind = keyType->kind(); if (c10::TypeKind::StringType == keyTypeKind) { static auto jMethodDictStringKey = JIValue::javaClassStatic() @@ -421,17 +421,12 @@ class JIValue : public facebook::jni::JavaClass<JIValue> { std::vector<at::IValue> elements; elements.reserve(n); - std::vector<c10::TypePtr> types; - types.reserve(n); for (auto i = 0; i < n; ++i) { auto jivalue_element = jarray->getElement(i); auto element = JIValue::JIValueToAtIValue(jivalue_element); - c10::TypePtr typePtr = c10::attemptToRecoverType(element); elements.push_back(std::move(element)); - types.push_back(std::move(typePtr)); } - return c10::ivalue::Tuple::create( - std::move(elements), c10::TupleType::create(std::move(types))); + return c10::ivalue::Tuple::create(std::move(elements)); } else if (JIValue::kTypeCodeBoolList == typeCode) { static const auto jMethodGetBoolList = JIValue::javaClassStatic()->getMethod<jbooleanArray()>("getBoolList");
Fix gnocchi repository URL in local.conf.controller This patch set updates the gnocchi repository URL in local.conf.controller because it moved from under openstack to its own repository.
@@ -46,7 +46,7 @@ disable_service ceilometer-acompute enable_service ceilometer-api # Enable the Gnocchi plugin -enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi +enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi LOGFILE=$DEST/logs/stack.sh.log LOGDAYS=2
Justify no cover for AccessDenied case The permissions with which stratis-cli makes requests on the D-Bus are controlled by the "stratisd.conf" file. The CLI tests do not control the contents or installation of "stratisd.conf" and therefore, we cannot test this case reliably.
@@ -168,9 +168,10 @@ def _interpret_errors(errors): # Inspect lowest error error = errors[-1] - # pylint: disable=fixme - # FIXME: remove no coverage pragma when adequate testing for CLI output - # exists. + # The permissions with which stratis-cli makes requests on the D-Bus + # are controlled by the "stratisd.conf" file. The CLI tests do not + # control the contents or installation of "stratisd.conf" + # and therefore, we cannot test this case reliably. if ( # pylint: disable=bad-continuation isinstance(error, dbus.exceptions.DBusException)
Update commit Update commit as discussed here and advised there:
127.0.0.1 consumerproductsusa.com 127.0.0.1 ceromobi.club 127.0.0.1 com-notice.info -127.0.0.1 cws.conviva.com -#*.cws.conviva.com 127.0.0.1 www.com-notice.info 127.0.0.1 apple.com-notice.info 127.0.0.1 www.apple.com-notice.info #*.angiemktg.com 127.0.0.1 weconfirmyou.com #*.weconfirmyou.com +127.0.0.1 cws.conviva.com +#*.cws.conviva.com # These stink-up yahoo finance 0.0.0.0 beap-bc.yahoo.com
Fix missing job_id parameter in the log message This is to fix the missing job_id parameter in the log message.
@@ -66,7 +66,8 @@ def get_job(node, job_id): except drac_exceptions.BaseClientException as exc: LOG.error('DRAC driver failed to get the job %(job_id)s ' 'for node %(node_uuid)s. Reason: %(error)s.', - {'node_uuid': node.uuid, + {'job_id': job_id, + 'node_uuid': node.uuid, 'error': exc}) raise exception.DracOperationError(error=exc)
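Why the missing key matters: with `%`-style interpolation against a mapping, every placeholder must have a matching key, otherwise formatting blows up instead of producing the log line. A hypothetical standalone illustration (outside of oslo.log, which interpolates lazily at emit time):

```python
msg = ('DRAC driver failed to get the job %(job_id)s '
       'for node %(node_uuid)s. Reason: %(error)s.')

params = {'node_uuid': 'abc-123', 'error': 'boom'}   # 'job_id' missing

try:
    print(msg % params)
except KeyError as e:
    print('formatting failed, missing key:', e)

params['job_id'] = 'JID_42'
print(msg % params)                                   # now renders as intended
```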
added hash method Internally, protobuf converts the list to a set, which requires a hashable type
@@ -220,6 +220,9 @@ class SignedMessage(SyftMessage): def get_protobuf_schema() -> GeneratedProtocolMessageType: return SignedMessage_PB + def __hash__(self) -> int: + return hash((self.signature, self.verify_key)) + class SignedImmediateSyftMessageWithReply(SignedMessage): """ """
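The reason the method is needed: in Python 3 a class that defines `__eq__` without `__hash__` becomes unhashable, so converting a list of such messages to a set raises `TypeError`. A hypothetical stand-in class showing the pattern:

```python
class SignedThing:
    def __init__(self, signature, verify_key):
        self.signature = signature
        self.verify_key = verify_key

    def __eq__(self, other):
        return (self.signature, self.verify_key) == (other.signature, other.verify_key)

    def __hash__(self):
        # Hash the same fields used for equality so set/dict membership stays consistent.
        return hash((self.signature, self.verify_key))


msgs = [SignedThing(b'sig', b'key'), SignedThing(b'sig', b'key')]
print(len(set(msgs)))   # 1 -- without __hash__, set(msgs) would raise TypeError here
```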
Removed bit about unit testing in Karma Because there are no unit tests written in JS, it is probably more clear to remove the text explaining how to run the Karma tests.
@@ -73,44 +73,6 @@ You could also look in `tox.ini` and see which tests it runs, and run those comm python -m unittest test.test_integration ``` -#### To run lint and unit tests: - -```sh -$ npm test -``` - -#### To run unit tests and watch for changes: - -```sh -$ npm run test-watch -``` - -#### To debug unit tests in a browser (Chrome): - -```sh -$ npm run test-debug -``` - -1. Wait until Chrome launches. -2. Click the "DEBUG" button in the top right corner. -3. Open up Chrome Devtools (`Cmd+opt+i`). -4. Click the "Sources" tab. -5. Find source files - - Navigate to `webpack:// -> . -> spec/components` to find your test source files. - - Navigate to `webpack:// -> [your/repo/path]] -> dash-core-components -> src` to find your component source files. -6. Now you can set breakpoints and reload the page to hit them. -7. The test output is available in the "Console" tab, or in any tab by pressing "Esc". - -#### To run a specific test - -In your test, append `.only` to a `describe` or `it` statement: - -```javascript -describe.only('Foo component', () => { - // ... -}); -``` - ### Testing your components in Dash 1. Build development bundle to `lib/` and watch for changes
Bump the minimum Python dependency to 3.7 See
@@ -26,7 +26,7 @@ include = ["rich/py.typed"] [tool.poetry.dependencies] -python = "^3.6.3" +python = "^3.7.13" typing-extensions = { version = ">=4.0.0, <5.0", python = "<3.9" } dataclasses = { version = ">=0.7,<0.9", python = "<3.7" } pygments = "^2.6.0"
fix truth value of numpy array When the --input_crop arg is given, it throws the error "truth value of an array with more than one element is ambiguous". Since there is already a check for crop_size>0, I guess .any() should be sufficient.
@@ -228,7 +228,7 @@ def main():
         if frame_num == 0:
             raise ValueError("Can't read an image from the input")
             break
-        if input_crop:
+        if input_crop is not None:
             frame = center_crop(frame, input_crop)
         if frame_num == 0:
             output_transform = OutputTransform(frame.shape[:2], args.output_resolution)
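The error in question comes from evaluating a multi-element array in boolean context. A small illustration (the crop value is hypothetical) of the ambiguity and of the two explicit alternatives mentioned in the message:

```python
import numpy as np

input_crop = np.array([224, 224])

try:
    if input_crop:                      # ambiguous for arrays with more than one element
        pass
except ValueError as e:
    print(e)

if input_crop is not None:              # checks only that a crop was supplied at all
    print('crop requested')

if input_crop is not None and input_crop.any():   # element-wise intent made explicit
    print('non-zero crop values')
```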
Fix url in python/setup.py setuptools metadata. Authors: - Carl Simon Adorf (https://github.com/csadorf) Approvers: - GALI PREM SAGAR (https://github.com/galipremsagar) - Corey J. Nolet (https://github.com/cjnolet) URL:
@@ -107,7 +107,7 @@ setup(name='cuml', "Programming Language :: Python :: 3.9" ], author="NVIDIA Corporation", - url="https://github.com/rapidsai/cudf", + url="https://github.com/rapidsai/cuml", setup_requires=['Cython>=0.29,<0.30'], packages=find_packages(include=['cuml', 'cuml.*']), package_data={
handle webview print requests in cocoa The WebKit webview in macOS does not automatically handle the JavaScript window.print method. This fix lets pywebview handle the print event via the WebUIDelegate's webView:printFrameView: method.
@@ -57,6 +57,27 @@ class BrowserView: def webView_contextMenuItemsForElement_defaultMenuItems_(self, webview, element, defaultMenuItems): return nil + def webView_printFrameView_(self, webview, frameview): + """ + This delegate method is invoked when a script or a user wants to print a webpage (e.g. using the Javascript window.print() method) + :param webview: the webview that sent the message + :param frameview: the web frame view whose contents to print + """ + def printView(frameview): + # check if the view can handle the content without intervention by the delegate + can_print = frameview.documentViewShouldHandlePrint() + + if can_print: + # tell the view to print the content + frameview.printDocumentView() + else: + # get an NSPrintOperaion object to print the view + info = AppKit.NSPrintInfo.sharedPrintInfo() + print_op = frameview.printOperationWithPrintInfo_(info) + print_op.runOperation() + + PyObjCTools.AppHelper.callAfter(printView, frameview) + class WebKitHost(WebKit.WebView): def performKeyEquivalent_(self, theEvent): """