# pybind11 — Seamless operability between C++11 and Python
**pybind11** is a lightweight header-only library that exposes C++ types in Python
and vice versa, mainly to create Python bindings of existing C++ code. Its
goals and syntax are similar to the excellent
[Boost.Python](http://www.boost.org/doc/libs/1_58_0/libs/python/doc/) library
by David Abrahams: to minimize boilerplate code in traditional extension
modules by inferring type information using compile-time introspection.
The main issue with Boost.Python—and the reason for creating such a similar
project—is Boost. Boost is an enormously large and complex suite of utility
libraries that works with almost every C++ compiler in existence. This
compatibility has its cost: arcane template tricks and workarounds are
necessary to support the oldest and buggiest of compiler specimens. Now that
C++11-compatible compilers are widely available, this heavy machinery has
become an excessively large and unnecessary dependency.
Think of this library as a tiny self-contained version of Boost.Python with
everything stripped away that isn't relevant for binding generation. Without
comments, the core header files only require ~4K lines of code and depend on
Python (2.7 or 3.x, or PyPy2.7 >= 5.7) and the C++ standard library. This
compact implementation was possible thanks to some of the new C++11 language
features (specifically: tuples, lambda functions and variadic templates). Since
its creation, this library has grown beyond Boost.Python in many ways, leading
to dramatically simpler binding code in many common situations.
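For a first impression, here is a minimal sketch of a binding module in the spirit of the tutorial (the module and function names are illustrative):

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

int add(int i, int j) { return i + j; }

PYBIND11_MODULE(example, m) {
    m.doc() = "pybind11 example plugin";  // optional module docstring
    m.def("add", &add, "A function that adds two numbers");
}
```

After compiling this into an extension module named `example`, it can be used from Python as `example.add(1, 2)`.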
Tutorial and reference documentation is provided at
[http://pybind11.readthedocs.org/en/master](http://pybind11.readthedocs.org/en/master).
A PDF version of the manual is available
[here](https://media.readthedocs.org/pdf/pybind11/master/pybind11.pdf).
## Core features
pybind11 can map the following core C++ features to Python:
- Functions accepting and returning custom data structures per value, reference, or pointer
- Instance methods and static methods
- Overloaded functions
- Instance attributes and static attributes
- Arbitrary exception types
- Enumerations
- Callbacks
- Iterators and ranges
- Custom operators
- Single and multiple inheritance
- STL data structures
- Smart pointers with reference counting like ``std::shared_ptr``
- Internal references with correct reference counting
- C++ classes with virtual (and pure virtual) methods can be extended in Python
## Goodies
In addition to the core functionality, pybind11 provides some extra goodies:
- Python 2.7, 3.x, and PyPy (PyPy2.7 >= 5.7) are supported with an
implementation-agnostic interface.
- It is possible to bind C++11 lambda functions with captured variables. The
lambda capture data is stored inside the resulting Python function object.
- pybind11 uses C++11 move constructors and move assignment operators whenever
possible to efficiently transfer custom data types.
- It's easy to expose the internal storage of custom data types through
Python's buffer protocols. This is handy e.g. for fast conversion between
C++ matrix types (such as Eigen's) and NumPy without expensive copy operations.
- pybind11 can automatically vectorize functions so that they are transparently
applied to all entries of one or more NumPy array arguments (a brief sketch
follows this list).
- Python's slice-based access and assignment operations can be supported with
just a few lines of code.
- Everything is contained in just a few header files; there is no need to link
against any additional libraries.
- Binaries are generally smaller by a factor of at least 2 compared to
equivalent bindings generated by Boost.Python. A recent pybind11 conversion
of PyRosetta, an enormous Boost.Python binding project,
[reported](http://graylab.jhu.edu/RosettaCon2016/PyRosetta-4.pdf) a binary
size reduction of **5.4x** and compile time reduction by **5.8x**.
- Function signatures are precomputed at compile time (using ``constexpr``),
leading to smaller binaries.
- With little extra effort, C++ types can be pickled and unpickled similar to
regular Python objects.
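A hedged sketch of the vectorization feature mentioned above (module and function names are illustrative):

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

double square(double x) { return x * x; }

PYBIND11_MODULE(example, m) {
    // py::vectorize turns the scalar function into one that broadcasts
    // element-wise over NumPy array arguments.
    m.def("square", py::vectorize(square));
}
```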
## Supported compilers
1. Clang/LLVM 3.3 or newer (for Apple Xcode's clang, this is 5.0.0 or newer)
2. GCC 4.8 or newer
3. Microsoft Visual Studio 2015 Update 3 or newer
4. Intel C++ compiler 17 or newer (16 with pybind11 v2.0 and 15 with pybind11 v2.0 and a [workaround](https://github.com/pybind/pybind11/issues/276))
5. Cygwin/GCC (tested on 2.5.1)
## About
This project was created by [Wenzel Jakob](http://rgl.epfl.ch/people/wjakob).
Significant features and/or improvements to the code were contributed by
Jonas Adler,
Lori A. Burns,
Sylvain Corlay,
Trent Houliston,
Axel Huebl,
@hulucc,
Sergey Lyskov,
Johan Mabille,
Tomasz Miąsko,
Dean Moldovan,
Ben Pritchard,
Jason Rhinelander,
Boris Schäling,
Pim Schellart,
Henry Schreiner,
Ivan Smirnov, and
Patrick Stewart.
### License
pybind11 is provided under a BSD-style license that can be found in the
``LICENSE`` file. By using, distributing, or contributing to this project,
you agree to the terms and conditions of this license.
[end of pybind11/README.md]
Thank you for your interest in this project! Please refer to the following
sections on how to contribute code and bug reports.
### Reporting bugs
At the moment, this project is run in the spare time of a single person
([Wenzel Jakob](http://rgl.epfl.ch/people/wjakob)) with very limited resources
for issue tracker tickets. Thus, before submitting a question or bug report,
please take a moment of your time and ensure that your issue isn't already
discussed in the project documentation provided at
[http://pybind11.readthedocs.org/en/latest](http://pybind11.readthedocs.org/en/latest).
Assuming that you have identified a previously unknown problem or an important
question, it's essential that you submit a self-contained and minimal piece of
code that reproduces the problem. In other words: no external dependencies,
isolate the function(s) that cause breakage, submit matched and complete C++
and Python snippets that can be easily compiled and run on my end.
### Pull requests
Contributions are submitted, reviewed, and accepted using GitHub pull requests.
Please refer to [this
article](https://help.github.com/articles/using-pull-requests) for details and
adhere to the following rules to make the process as smooth as possible:
* Make a new branch for every feature you're working on.
* Make small and clean pull requests that are easy to review but make sure they
do add value by themselves.
* Add tests for any new functionality and run the test suite (``make pytest``)
to ensure that no existing features break.
* Please run ``flake8`` and ``tools/check-style.sh`` to check your code matches
the project style. (Note that ``check-style.sh`` requires ``gawk``.)
* This project has a strong focus on providing general solutions using a
minimal amount of code, thus small pull requests are greatly preferred.
### Licensing of contributions
pybind11 is provided under a BSD-style license that can be found in the
``LICENSE`` file. By using, distributing, or contributing to this project, you
agree to the terms and conditions of this license.
You are under no obligation whatsoever to provide any bug fixes, patches, or
upgrades to the features, functionality or performance of the source code
("Enhancements") to anyone; however, if you choose to make your Enhancements
available either publicly, or directly to the author of this software, without
imposing a separate written license agreement for such Enhancements, then you
hereby grant the following license: a non-exclusive, royalty-free perpetual
license to install, use, modify, prepare derivative works, incorporate into
other computer software, distribute, and sublicense such enhancements or
derivative works thereof, in binary and source code form.
[end of pybind11/CONTRIBUTING.md]
Frequently asked questions
##########################
"ImportError: dynamic module does not define init function"
===========================================================
1. Make sure that the name specified in PYBIND11_MODULE is identical to the
filename of the extension library (without prefixes such as .so)
2. If the above did not fix the issue, you are likely using an incompatible
version of Python (for instance, the extension library was compiled against
Python 2, while the interpreter is running on top of some version of Python
3, or vice versa).
"Symbol not found: ``__Py_ZeroStruct`` / ``_PyInstanceMethod_Type``"
========================================================================
See the first answer.
"SystemError: dynamic module not initialized properly"
======================================================
See the first answer.
The Python interpreter immediately crashes when importing my module
===================================================================
See the first answer.
CMake doesn't detect the right Python version
=============================================
The CMake-based build system will try to automatically detect the installed
version of Python and link against that. When this fails, or when there are
multiple versions of Python and it finds the wrong one, delete
``CMakeCache.txt`` and then invoke CMake as follows:
.. code-block:: bash
cmake -DPYTHON_EXECUTABLE:FILEPATH=<path-to-python-executable> .
.. _faq_reference_arguments:
Limitations involving reference arguments
=========================================
In C++, it's fairly common to pass arguments using mutable references or
mutable pointers, which allows both read and write access to the value
supplied by the caller. This is sometimes done for efficiency reasons, or to
realize functions that have multiple return values. Here are two very basic
examples:
.. code-block:: cpp
void increment(int &i) { i++; }
void increment_ptr(int *i) { (*i)++; }
In Python, all arguments are passed by reference, so there is no general
issue in binding such code from Python.
However, certain basic Python types (like ``str``, ``int``, ``bool``,
``float``, etc.) are **immutable**. This means that the following attempt
to port the function to Python doesn't have the same effect on the value
provided by the caller -- in fact, it does nothing at all.
.. code-block:: python
def increment(i):
i += 1 # nope..
pybind11 is also affected by such language-level conventions, which means that
binding ``increment`` or ``increment_ptr`` will also create Python functions
that don't modify their arguments.
Although inconvenient, one workaround is to encapsulate the immutable types in
a custom type that does allow modifications.
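A hedged sketch of such a wrapper (the ``IntRef`` type and its binding are purely illustrative, not part of pybind11):
.. code-block:: cpp
struct IntRef {
    int value = 0;
    explicit IntRef(int v = 0) : value(v) {}
};

py::class_<IntRef>(m, "IntRef")
    .def(py::init<int>())
    .def_readwrite("value", &IntRef::value);

// The original increment(int &) can now be exposed through the wrapper:
m.def("increment", [](IntRef &r) { increment(r.value); });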
Another alternative involves binding a small wrapper lambda function that
returns a tuple with all output arguments (see the remainder of the
documentation for examples on binding lambda functions). An example:
.. code-block:: cpp
int foo(int &i) { i++; return 123; }
and the binding code
.. code-block:: cpp
m.def("foo", [](int i) { int rv = foo(i); return std::make_tuple(rv, i); });
How can I reduce the build time?
================================
It's good practice to split binding code over multiple files, as in the
following example:
:file:`example.cpp`:
.. code-block:: cpp
void init_ex1(py::module &);
void init_ex2(py::module &);
/* ... */
PYBIND11_MODULE(example, m) {
init_ex1(m);
init_ex2(m);
/* ... */
}
:file:`ex1.cpp`:
.. code-block:: cpp
void init_ex1(py::module &m) {
m.def("add", [](int a, int b) { return a + b; });
}
:file:`ex2.cpp`:
.. code-block:: cpp
void init_ex2(py::module &m) {
m.def("sub", [](int a, int b) { return a - b; });
}
:command:`python`:
.. code-block:: pycon
>>> import example
>>> example.add(1, 2)
3
>>> example.sub(1, 1)
0
As shown above, the various ``init_ex`` functions should be contained in
separate files that can be compiled independently from one another, and then
linked together into the same final shared object. Following this approach
will:
1. reduce memory requirements per compilation unit.
2. enable parallel builds (if desired).
3. allow for faster incremental builds. For instance, when a single class
definition is changed, only a subset of the binding code will generally need
to be recompiled.
"recursive template instantiation exceeded maximum depth of 256"
================================================================
If you receive an error about excessive recursive template evaluation, try
specifying a larger value, e.g. ``-ftemplate-depth=1024`` on GCC/Clang. The
culprit is generally the generation of function signatures at compile time
using C++14 template metaprogramming.
.. _`faq:hidden_visibility`:
"‘SomeClass’ declared with greater visibility than the type of its field ‘SomeClass::member’ [-Wattributes]"
============================================================================================================
This error typically indicates that you are compiling without the required
``-fvisibility`` flag. pybind11 code internally forces hidden visibility on
all internal code, but if non-hidden (and thus *exported*) code attempts to
include a pybind type (for example, ``py::object`` or ``py::list``) you can run
into this warning.
To avoid it, make sure you are specifying ``-fvisibility=hidden`` when
compiling pybind code.
As to why ``-fvisibility=hidden`` is necessary, because pybind modules could
have been compiled under different versions of pybind itself, it is also
important that the symbols defined in one module do not clash with the
potentially-incompatible symbols defined in another. While Python extension
modules are usually loaded with localized symbols (under POSIX systems
typically using ``dlopen`` with the ``RTLD_LOCAL`` flag), this Python default
can be changed, but even if it isn't it is not always enough to guarantee
complete independence of the symbols involved when not using
``-fvisibility=hidden``.
Additionally, ``-fvisibility=hidden`` can deliver considerable binary size
savings (see the following section for more details).
.. _`faq:symhidden`:
How can I create smaller binaries?
==================================
To do its job, pybind11 extensively relies on a programming technique known as
*template metaprogramming*, which is a way of performing computation at compile
time using type information. Template metaprogramming usually instantiates code
involving significant numbers of deeply nested types that are either completely
removed or reduced to just a few instructions during the compiler's optimization
phase. However, due to the nested nature of these types, the resulting symbol
names in the compiled extension library can be extremely long. For instance,
the included test suite contains the following symbol:
.. only:: html
.. code-block:: none
__ZN8pybind1112cpp_functionC1Iv8Example2JRNSt3__16vectorINS3_12basic_stringIwNS3_11char_traitsIwEENS3_9allocatorIwEEEENS8_ISA_EEEEEJNS_4nameENS_7siblingENS_9is_methodEA28_cEEEMT0_FT_DpT1_EDpRKT2_
.. only:: not html
.. code-block:: cpp
__ZN8pybind1112cpp_functionC1Iv8Example2JRNSt3__16vectorINS3_12basic_stringIwNS3_11char_traitsIwEENS3_9allocatorIwEEEENS8_ISA_EEEEEJNS_4nameENS_7siblingENS_9is_methodEA28_cEEEMT0_FT_DpT1_EDpRKT2_
which is the mangled form of the following function type:
.. code-block:: cpp
pybind11::cpp_function::cpp_function<void, Example2, std::__1::vector<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> >, std::__1::allocator<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > > >&, pybind11::name, pybind11::sibling, pybind11::is_method, char [28]>(void (Example2::*)(std::__1::vector<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> >, std::__1::allocator<std::__1::basic_string<wchar_t, std::__1::char_traits<wchar_t>, std::__1::allocator<wchar_t> > > >&), pybind11::name const&, pybind11::sibling const&, pybind11::is_method const&, char const (&) [28])
The memory needed to store just the mangled name of this function (196 bytes)
is larger than the actual piece of code (111 bytes) it represents! On the other
hand, it's silly to even give this function a name -- after all, it's just a
tiny cog in a bigger piece of machinery that is not exposed to the outside
world. So we'll generally only want to export symbols for those functions which
are actually called from the outside.
This can be achieved by specifying the parameter ``-fvisibility=hidden`` to GCC
and Clang, which sets the default symbol visibility to *hidden*, which has a
tremendous impact on the final binary size of the resulting extension library.
(On Visual Studio, symbols are already hidden by default, so nothing needs to
be done there.)
In addition to decreasing binary size, ``-fvisibility=hidden`` also avoids
potential serious issues when loading multiple modules and is required for
proper pybind operation. See the previous FAQ entry for more details.
Working with ancient Visual Studio 2008 builds on Windows
=========================================================
The official Windows distributions of Python are compiled using truly
ancient versions of Visual Studio that lack good C++11 support. Some users
implicitly assume that it would be impossible to load a plugin built with
Visual Studio 2015 into a Python distribution that was compiled using Visual
Studio 2008. However, no such issue exists: it's perfectly legitimate to
interface DLLs that are built with different compilers and/or C libraries.
Common gotchas to watch out for involve not ``free()``-ing memory regions
that were ``malloc()``-ed in another shared library, using data
structures with incompatible ABIs, and so on. pybind11 is very careful not
to make these types of mistakes.
How can I properly handle Ctrl-C in long-running functions?
===========================================================
Ctrl-C is received by the Python interpreter, which holds on to it until the GIL
is released, so a long-running function won't be interrupted.
To interrupt from inside your function, you can use the ``PyErr_CheckSignals()``
function, which will tell you whether a signal has been raised on the Python side. This
function merely checks a flag, so its impact is negligible. When a signal has
been received, you must either explicitly interrupt execution by throwing
``py::error_already_set`` (which will propagate the existing
``KeyboardInterrupt``), or clear the error (which you usually will not want):
.. code-block:: cpp
PYBIND11_MODULE(example, m)
{
m.def("long running_func", []()
{
for (;;) {
if (PyErr_CheckSignals() != 0)
throw py::error_already_set();
// Long running iteration
}
});
}
Inconsistent detection of Python version in CMake and pybind11
==============================================================
The functions ``find_package(PythonInterp)`` and ``find_package(PythonLibs)`` provided by CMake
for Python version detection are not used by pybind11 due to unreliability and limitations that make
them unsuitable for pybind11's needs. Instead, pybind11 provides its own, more reliable Python detection
CMake code. Conflicts can arise, however, when using pybind11 in a project that *also* uses the CMake
Python detection in a system with several Python versions installed.
This difference may cause inconsistencies and errors if *both* mechanisms are used in the same project. Consider the following
CMake code executed on a system with Python 2.7 and 3.x installed:
.. code-block:: cmake
find_package(PythonInterp)
find_package(PythonLibs)
find_package(pybind11)
It will detect Python 2.7 and pybind11 will pick it as well.
In contrast, this code:
.. code-block:: cmake
find_package(pybind11)
find_package(PythonInterp)
find_package(PythonLibs)
will detect Python 3.x for pybind11 and may crash on ``find_package(PythonLibs)`` afterwards.
It is advised to avoid using ``find_package(PythonInterp)`` and ``find_package(PythonLibs)`` from CMake and to rely
on pybind11 to detect the Python version. If this is not possible, the CMake machinery should be called *before* including pybind11.
How to cite this project?
=========================
We suggest the following BibTeX template to cite pybind11 in scientific
discourse:
.. code-block:: bibtex
@misc{pybind11,
author = {Wenzel Jakob and Jason Rhinelander and Dean Moldovan},
year = {2017},
note = {https://github.com/pybind/pybind11},
title = {pybind11 -- Seamless operability between C++11 and Python}
}
[end of pybind11/docs/faq.rst]
Upgrade guide
#############
This is a companion guide to the :doc:`changelog`. While the changelog briefly
lists all of the new features, improvements and bug fixes, this upgrade guide
focuses only on the subset which directly impacts your experience when upgrading
to a new version, and it goes into more detail. This includes things like
deprecated APIs and their replacements, build system changes, general code
modernization and other useful information.
v2.2
====
Deprecation of the ``PYBIND11_PLUGIN`` macro
--------------------------------------------
``PYBIND11_MODULE`` is now the preferred way to create module entry points.
The old macro emits a compile-time deprecation warning.
.. code-block:: cpp
// old
PYBIND11_PLUGIN(example) {
py::module m("example", "documentation string");
m.def("add", [](int a, int b) { return a + b; });
return m.ptr();
}
// new
PYBIND11_MODULE(example, m) {
m.doc() = "documentation string"; // optional
m.def("add", [](int a, int b) { return a + b; });
}
New API for defining custom constructors and pickling functions
---------------------------------------------------------------
The old placement-new custom constructors have been deprecated. The new approach
uses ``py::init()`` and factory functions to greatly improve type safety.
Placement-new can be called accidentally with an incompatible type (without any
compiler errors or warnings), or it can initialize the same object multiple times
if not careful with the Python-side ``__init__`` calls. The new-style custom
constructors prevent such mistakes. See :ref:`custom_constructors` for details.
.. code-block:: cpp
// old -- deprecated (runtime warning shown only in debug mode)
py::class_<Foo>(m, "Foo")
.def("__init__", [](Foo &self, ...) {
new (&self) Foo(...); // uses placement-new
});
// new
py::class_<Foo>(m, "Foo")
.def(py::init([](...) { // Note: no `self` argument
return new Foo(...); // return by raw pointer
// or: return std::make_unique<Foo>(...); // return by holder
// or: return Foo(...); // return by value (move constructor)
}));
Mirroring the custom constructor changes, ``py::pickle()`` is now the preferred
way to get and set object state. See :ref:`pickling` for details.
.. code-block:: cpp
// old -- deprecated (runtime warning shown only in debug mode)
py::class_<Foo>(m, "Foo")
...
.def("__getstate__", [](const Foo &self) {
return py::make_tuple(self.value1(), self.value2(), ...);
})
.def("__setstate__", [](Foo &self, py::tuple t) {
new (&self) Foo(t[0].cast<std::string>(), ...);
});
// new
py::class_<Foo>(m, "Foo")
...
.def(py::pickle(
[](const Foo &self) { // __getstate__
return py::make_tuple(self.value1(), self.value2(), ...); // unchanged
},
[](py::tuple t) { // __setstate__, note: no `self` argument
return new Foo(t[0].cast<std::string>(), ...);
// or: return std::make_unique<Foo>(...); // return by holder
// or: return Foo(...); // return by value (move constructor)
}
));
For both the constructors and pickling, warnings are shown at module
initialization time (on import, not when the functions are called).
They're only visible when compiled in debug mode. Sample warning:
.. code-block:: none
pybind11-bound class 'mymodule.Foo' is using an old-style placement-new '__init__'
which has been deprecated. See the upgrade guide in pybind11's docs.
Stricter enforcement of hidden symbol visibility for pybind11 modules
---------------------------------------------------------------------
pybind11 now tries to actively enforce hidden symbol visibility for modules.
If you're using either one of pybind11's :doc:`CMake or Python build systems
<compiling>` (the two example repositories) and you haven't been exporting any
symbols, there's nothing to be concerned about. All the changes have been done
transparently in the background. If you were building manually or relied on
specific default visibility, read on.
Setting default symbol visibility to *hidden* has always been recommended for
pybind11 (see :ref:`faq:symhidden`). On Linux and macOS, hidden symbol
visibility (in conjunction with the ``strip`` utility) yields much smaller
module binaries. `CPython's extension docs`_ also recommend hiding symbols
by default, with the goal of avoiding symbol name clashes between modules.
Starting with v2.2, pybind11 enforces this more strictly: (1) by declaring
all symbols inside the ``pybind11`` namespace as hidden and (2) by including
the ``-fvisibility=hidden`` flag on Linux and macOS (only for extension
modules, not for embedding the interpreter).
.. _CPython's extension docs: https://docs.python.org/3/extending/extending.html#providing-a-c-api-for-an-extension-module
The namespace-scope hidden visibility is done automatically in pybind11's
headers and it's generally transparent to users. It ensures that:
* Modules compiled with different pybind11 versions don't clash with each other.
* Some new features, like ``py::module_local`` bindings, can work as intended.
The ``-fvisibility=hidden`` flag applies the same visibility to user bindings
outside of the ``pybind11`` namespace. It's now set automatically by pybind11's
CMake and Python build systems, but this needs to be done manually by users
of other build systems. Adding this flag:
* Minimizes the chances of symbol conflicts between modules. E.g. if two
unrelated modules were statically linked to different (ABI-incompatible)
versions of the same third-party library, a symbol clash would be likely
(and would end with unpredictable results).
* Produces smaller binaries on Linux and macOS, as pointed out previously.
Within pybind11's CMake build system, ``pybind11_add_module`` has always been
setting the ``-fvisibility=hidden`` flag in release mode. From now on, it's
being applied unconditionally, even in debug mode, and it can no longer be opted
out of with the ``NO_EXTRAS`` option. The ``pybind11::module`` target now also
adds this flag to its interface. The ``pybind11::embed`` target is unchanged.
The most significant change here is for the ``pybind11::module`` target. If you
were previously relying on default visibility, i.e. if your Python module was
doubling as a shared library with dependents, you'll need to either export
symbols manually (recommended for cross-platform libraries) or factor out the
shared library (and have the Python module link to it like the other
dependents). As a temporary workaround, you can also restore default visibility
using the CMake code below, but this is not recommended in the long run:
.. code-block:: cmake
target_link_libraries(mymodule PRIVATE pybind11::module)
add_library(restore_default_visibility INTERFACE)
target_compile_options(restore_default_visibility INTERFACE -fvisibility=default)
target_link_libraries(mymodule PRIVATE restore_default_visibility)
Local STL container bindings
----------------------------
Previous pybind11 versions could only bind types globally -- all pybind11
modules, even unrelated ones, would have access to the same exported types.
However, this would also result in a conflict if two modules exported the
same C++ type, which is especially problematic for very common types, e.g.
``std::vector<int>``. :ref:`module_local` were added to resolve this (see
that section for a complete usage guide).
``py::class_`` still defaults to global bindings (because these types are
usually unique across modules), however in order to avoid clashes of opaque
types, ``py::bind_vector`` and ``py::bind_map`` will now bind STL containers
as ``py::module_local`` if their elements are: builtins (``int``, ``float``,
etc.), not bound using ``py::class_``, or bound as ``py::module_local``. For
example, this change allows multiple modules to bind ``std::vector<int>``
without causing conflicts. See :ref:`stl_bind` for more details.
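A hedged sketch of what this looks like in practice (module and binding names are illustrative):
.. code-block:: cpp
#include <pybind11/stl_bind.h>
#include <vector>

namespace py = pybind11;

PYBIND11_MAKE_OPAQUE(std::vector<int>);

PYBIND11_MODULE(example, m) {
    // Module-local by default in v2.2+ because the element type is a builtin;
    // pass py::module_local(false) to restore the old global behaviour.
    py::bind_vector<std::vector<int>>(m, "IntVector");
}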
When upgrading to this version, if you have multiple modules which depend on
a single global binding of an STL container, note that all modules can still
accept foreign ``py::module_local`` types in the direction of Python-to-C++.
The locality only affects the C++-to-Python direction. If this is needed in
multiple modules, you'll need to either:
* Add a copy of the same STL binding to all of the modules which need it.
* Restore the global status of that single binding by marking it
``py::module_local(false)``.
The latter is an easy workaround, but in the long run it would be best to
localize all common type bindings in order to avoid conflicts with
third-party modules.
Negative strides for Python buffer objects and numpy arrays
-----------------------------------------------------------
Support for negative strides required changing the integer type from unsigned
to signed in the interfaces of ``py::buffer_info`` and ``py::array``. If you
have compiler warnings enabled, you may notice some new conversion warnings
after upgrading. These can be resolved using ``static_cast``.
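A hedged sketch of such a fix (``arr`` is assumed to be a ``py::array``):
.. code-block:: cpp
py::buffer_info info = arr.request();
// shape and strides are now signed (ssize_t); cast explicitly where an
// unsigned quantity is required to silence sign-conversion warnings:
std::size_t rows       = static_cast<std::size_t>(info.shape[0]);
std::size_t row_stride = static_cast<std::size_t>(info.strides[0]);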
Deprecation of some ``py::object`` APIs
---------------------------------------
To compare ``py::object`` instances by pointer, you should now use
``obj1.is(obj2)`` which is equivalent to ``obj1 is obj2`` in Python.
Previously, pybind11 used ``operator==`` for this (``obj1 == obj2``), but
that could be confusing and is now deprecated (so that it can eventually
be replaced with proper rich object comparison in a future release).
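For example (a minimal sketch):
.. code-block:: cpp
void same_object(py::object a, py::object b) {
    if (a.is(b)) {        // equivalent to `a is b` in Python
        // ...
    }
    if (a.is_none()) {    // identity check against None
        // ...
    }
}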
For classes which inherit from ``py::object``, ``borrowed`` and ``stolen``
were previously available as protected constructor tags. Now the types
should be used directly instead: ``borrowed_t{}`` and ``stolen_t{}``
(`#771 <https://github.com/pybind/pybind11/pull/771>`_).
Stricter compile-time error checking
------------------------------------
Some error checks have been moved from run time to compile time. Notably,
automatic conversion of ``std::shared_ptr<T>`` is not possible when ``T`` is
not directly registered with ``py::class_<T>`` (e.g. ``std::shared_ptr<int>``
or ``std::shared_ptr<std::vector<T>>`` are not automatically convertible).
Attempting to bind a function with such arguments now results in a compile-time
error instead of waiting to fail at run time.
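A hedged illustration of the new behaviour (the types are illustrative):
.. code-block:: cpp
py::class_<Foo, std::shared_ptr<Foo>>(m, "Foo");

m.def("use_foo", [](std::shared_ptr<Foo> f) { /* fine: Foo is registered */ });
// m.def("use_int", [](std::shared_ptr<int> i) { });  // now a compile-time error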
``py::init<...>()`` constructor definitions are also stricter and now prevent
bindings which could cause unexpected behavior:
.. code-block:: cpp
struct Example {
Example(int &);
};
py::class_<Example>(m, "Example")
.def(py::init<int &>()); // OK, exact match
// .def(py::init<int>()); // compile-time error, mismatch
A non-``const`` lvalue reference is not allowed to bind to an rvalue. However,
note that a constructor taking ``const T &`` can still be registered using
``py::init<T>()`` because a ``const`` lvalue reference can bind to an rvalue.
v2.1
====
Minimum compiler versions are enforced at compile time
------------------------------------------------------
The minimums also apply to v2.0 but the check is now explicit and a compile-time
error is raised if the compiler does not meet the requirements:
* GCC >= 4.8
* clang >= 3.3 (appleclang >= 5.0)
* MSVC >= 2015u3
* Intel C++ >= 15.0
The ``py::metaclass`` attribute is not required for static properties
---------------------------------------------------------------------
Binding classes with static properties is now possible by default. The
zero-parameter version of ``py::metaclass()`` is deprecated. However, a new
one-parameter ``py::metaclass(python_type)`` version was added for rare
cases when a custom metaclass is needed to override pybind11's default.
.. code-block:: cpp
// old -- emits a deprecation warning
py::class_<Foo>(m, "Foo", py::metaclass())
.def_property_readonly_static("foo", ...);
// new -- static properties work without the attribute
py::class_<Foo>(m, "Foo")
.def_property_readonly_static("foo", ...);
// new -- advanced feature, override pybind11's default metaclass
py::class_<Bar>(m, "Bar", py::metaclass(custom_python_type))
...
v2.0
====
Breaking changes in ``py::class_``
----------------------------------
These changes were necessary to make type definitions in pybind11
future-proof, to support PyPy via its ``cpyext`` mechanism (`#527
<https://github.com/pybind/pybind11/pull/527>`_), and to improve efficiency
(`rev. 86d825 <https://github.com/pybind/pybind11/commit/86d825>`_).
1. Declarations of types that provide access via the buffer protocol must
now include the ``py::buffer_protocol()`` annotation as an argument to
the ``py::class_`` constructor.
.. code-block:: cpp
py::class_<Matrix>("Matrix", py::buffer_protocol())
.def(py::init<...>())
.def_buffer(...);
2. Classes which include static properties (e.g. ``def_readwrite_static()``)
must now include the ``py::metaclass()`` attribute. Note: this requirement
has since been removed in v2.1. If you're upgrading from 1.x, it's
recommended to skip directly to v2.1 or newer.
3. This version of pybind11 uses a redesigned mechanism for instantiating
trampoline classes that are used to override virtual methods from within
Python. This led to the following user-visible syntax change:
.. code-block:: cpp
// old v1.x syntax
py::class_<TrampolineClass>("MyClass")
.alias<MyClass>()
...
// new v2.x syntax
py::class_<MyClass, TrampolineClass>("MyClass")
...
Importantly, both the original and the trampoline class are now specified
as arguments to the ``py::class_`` template, and the ``alias<..>()`` call
is gone. The new scheme has zero overhead in cases when Python doesn't
override any functions of the underlying C++ class.
`rev. 86d825 <https://github.com/pybind/pybind11/commit/86d825>`_.
The class type must be the first template argument given to ``py::class_``
while the trampoline can be mixed in arbitrary order with other arguments
(see the following section).
Deprecation of the ``py::base<T>()`` attribute
----------------------------------------------
``py::base<T>()`` was deprecated in favor of specifying ``T`` as a template
argument to ``py::class_``. This new syntax also supports multiple inheritance.
Note that, while the type being exported must be the first argument in the
``py::class_<Class, ...>`` template, the order of the following types (bases,
holder and/or trampoline) is not important.
.. code-block:: cpp
// old v1.x
py::class_<Derived>("Derived", py::base<Base>());
// new v2.x
py::class_<Derived, Base>("Derived");
// new -- multiple inheritance
py::class_<Derived, Base1, Base2>("Derived");
// new -- apart from `Derived` the argument order can be arbitrary
py::class_<Derived, Base1, Holder, Base2, Trampoline>("Derived");
Out-of-the-box support for ``std::shared_ptr``
----------------------------------------------
The relevant type caster is now built in, so it's no longer necessary to
include a declaration of the form:
.. code-block:: cpp
PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>)
Continuing to do so won’t cause an error or even a deprecation warning,
but it's completely redundant.
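With the built-in caster, a class managed by ``std::shared_ptr`` only needs the holder named as a template argument, e.g. (a sketch; the ``Widget`` type is illustrative):
.. code-block:: cpp
py::class_<Widget, std::shared_ptr<Widget>>(m, "Widget")
    .def(py::init<>());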
Deprecation of a few ``py::object`` APIs
----------------------------------------
All of the old-style calls emit deprecation warnings.
+---------------------------------------+---------------------------------------------+
| Old syntax | New syntax |
+=======================================+=============================================+
| ``obj.call(args...)`` | ``obj(args...)`` |
+---------------------------------------+---------------------------------------------+
| ``obj.str()`` | ``py::str(obj)`` |
+---------------------------------------+---------------------------------------------+
| ``auto l = py::list(obj); l.check()`` | ``py::isinstance<py::list>(obj)`` |
+---------------------------------------+---------------------------------------------+
| ``py::object(ptr, true)`` | ``py::reinterpret_borrow<py::object>(ptr)`` |
+---------------------------------------+---------------------------------------------+
| ``py::object(ptr, false)`` | ``py::reinterpret_steal<py::object>(ptr)`` |
+---------------------------------------+---------------------------------------------+
| ``if (obj.attr("foo"))`` | ``if (py::hasattr(obj, "foo"))`` |
+---------------------------------------+---------------------------------------------+
| ``if (obj["bar"])`` | ``if (obj.contains("bar"))`` |
+---------------------------------------+---------------------------------------------+
[end of pybind11/docs/upgrade.rst]
Limitations
###########
pybind11 strives to be a general solution to binding generation, but it also has
certain limitations:
- pybind11 casts away ``const``-ness in function arguments and return values.
This is in line with the Python language, which has no concept of ``const``
values. This means that some additional care is needed to avoid bugs that
would be caught by the type checker in a traditional C++ program.
- The NumPy interface ``pybind11::array`` greatly simplifies accessing
numerical data from C++ (and vice versa), but it's not a full-blown array
class like ``Eigen::Array`` or ``boost.multi_array``.
These features could be implemented but would lead to a significant increase in
complexity. I've decided to draw the line here to keep this project simple and
compact. Users who absolutely require these features are encouraged to fork
pybind11.
[end of pybind11/docs/limitations.rst]
.. only:: not latex
.. image:: pybind11-logo.png
pybind11 --- Seamless operability between C++11 and Python
==========================================================
.. only:: not latex
Contents:
.. toctree::
:maxdepth: 1
intro
changelog
upgrade
.. toctree::
:caption: The Basics
:maxdepth: 2
basics
classes
compiling
.. toctree::
:caption: Advanced Topics
:maxdepth: 2
advanced/functions
advanced/classes
advanced/exceptions
advanced/smart_ptrs
advanced/cast/index
advanced/pycpp/index
advanced/embedding
advanced/misc
.. toctree::
:caption: Extra Information
:maxdepth: 1
faq
benchmark
limitations
reference
[end of pybind11/docs/index.rst]
To release a new version of pybind11:
- Update the version number and push to PyPI
- Update ``pybind11/_version.py`` (set release version, remove 'dev').
- Update ``PYBIND11_VERSION_MAJOR`` etc. in ``include/pybind11/detail/common.h``.
- Ensure that all the information in ``setup.py`` is up-to-date.
- Update version in ``docs/conf.py``.
- Tag release date in ``docs/changelog.rst``.
- ``git add`` and ``git commit``.
- if new minor version: ``git checkout -b vX.Y``, ``git push -u origin vX.Y``
- ``git tag -a vX.Y.Z -m 'vX.Y.Z release'``.
- ``git push``
- ``git push --tags``.
- ``python setup.py sdist upload``.
- ``python setup.py bdist_wheel upload``.
- Get back to work
- Update ``_version.py`` (add 'dev' and increment minor).
- Update version in ``docs/conf.py``
- Update version macros in ``include/pybind11/detail/common.h``
- ``git add`` and ``git commit``.
- ``git push``
[end of pybind11/docs/release.rst]
.. _changelog:
Changelog
#########
Starting with version 1.8.0, pybind11 releases use a `semantic versioning
<http://semver.org>`_ policy.
v2.5.0 (Mar 31, 2020)
-----------------------------------------------------
* Use C++17 fold expressions in type casters, if available. This can
improve performance during overload resolution when functions have
multiple arguments.
`#2043 <https://github.com/pybind/pybind11/pull/2043>`_.
* Changed include directory resolution in ``pybind11/__init__.py``
and installation in ``setup.py``. This fixes a number of open issues
where pybind11 headers could not be found in certain environments.
`#1995 <https://github.com/pybind/pybind11/pull/1995>`_.
* C++20 ``char8_t`` and ``u8string`` support. `#2026
<https://github.com/pybind/pybind11/pull/2026>`_.
* CMake: search for Python 3.9. `bb9c91
<https://github.com/pybind/pybind11/commit/bb9c91>`_.
* Fixes for MSYS-based build environments.
`#2087 <https://github.com/pybind/pybind11/pull/2087>`_,
`#2053 <https://github.com/pybind/pybind11/pull/2053>`_.
* STL bindings for ``std::vector<...>::clear``. `#2074
<https://github.com/pybind/pybind11/pull/2074>`_.
* Read-only flag for ``py::buffer``. `#1466
<https://github.com/pybind/pybind11/pull/1466>`_.
* Exception handling during module initialization.
`bf2b031 <https://github.com/pybind/pybind11/commit/bf2b031>`_.
* Support linking against a CPython debug build.
`#2025 <https://github.com/pybind/pybind11/pull/2025>`_.
* Fixed issues involving the availability and use of aligned ``new`` and
``delete``. `#1988 <https://github.com/pybind/pybind11/pull/1988>`_,
`759221 <https://github.com/pybind/pybind11/commit/759221>`_.
* Fixed a resource leak upon interpreter shutdown.
`#2020 <https://github.com/pybind/pybind11/pull/2020>`_.
* Fixed error handling in the boolean caster.
`#1976 <https://github.com/pybind/pybind11/pull/1976>`_.
v2.4.3 (Oct 15, 2019)
-----------------------------------------------------
* Adapt pybind11 to a C API convention change in Python 3.8. `#1950
<https://github.com/pybind/pybind11/pull/1950>`_.
v2.4.2 (Sep 21, 2019)
-----------------------------------------------------
* Replaced usage of a C++14 only construct. `#1929
<https://github.com/pybind/pybind11/pull/1929>`_.
* Made an ifdef future-proof for Python >= 4. `f3109d
<https://github.com/pybind/pybind11/commit/f3109d>`_.
v2.4.1 (Sep 20, 2019)
-----------------------------------------------------
* Fixed a problem involving implicit conversion from enumerations to integers
on Python 3.8. `#1780 <https://github.com/pybind/pybind11/pull/1780>`_.
v2.4.0 (Sep 19, 2019)
-----------------------------------------------------
* Try harder to keep pybind11-internal data structures separate when there
are potential ABI incompatibilities. Fixes crashes that occurred when loading
multiple pybind11 extensions that were e.g. compiled by GCC (libstdc++)
and Clang (libc++).
`#1588 <https://github.com/pybind/pybind11/pull/1588>`_ and
`c9f5a <https://github.com/pybind/pybind11/commit/c9f5a>`_.
* Added support for ``__await__``, ``__aiter__``, and ``__anext__`` protocols.
`#1842 <https://github.com/pybind/pybind11/pull/1842>`_.
* ``pybind11_add_module()``: don't strip symbols when compiling in
``RelWithDebInfo`` mode. `#1980
<https://github.com/pybind/pybind11/pull/1980>`_.
* ``enum_``: Reproduce Python behavior when comparing against invalid values
(e.g. ``None``, strings, etc.). Add back support for ``__invert__()``.
`#1912 <https://github.com/pybind/pybind11/pull/1912>`_,
`#1907 <https://github.com/pybind/pybind11/pull/1907>`_.
* List insertion operation for ``py::list``.
Added ``.empty()`` to all collection types.
Added ``py::set::contains()`` and ``py::dict::contains()``.
`#1887 <https://github.com/pybind/pybind11/pull/1887>`_,
`#1884 <https://github.com/pybind/pybind11/pull/1884>`_,
`#1888 <https://github.com/pybind/pybind11/pull/1888>`_.
* ``py::details::overload_cast_impl`` is available in C++11 mode, can be used
like ``overload_cast`` with an additional set of parentheses.
`#1581 <https://github.com/pybind/pybind11/pull/1581>`_.
* Fixed ``get_include()`` on Conda.
`#1877 <https://github.com/pybind/pybind11/pull/1877>`_.
* ``stl_bind.h``: negative indexing support.
`#1882 <https://github.com/pybind/pybind11/pull/1882>`_.
* Minor CMake fix to add MinGW compatibility.
`#1851 <https://github.com/pybind/pybind11/pull/1851>`_.
* GIL-related fixes.
`#1836 <https://github.com/pybind/pybind11/pull/1836>`_,
`8b90b <https://github.com/pybind/pybind11/commit/8b90b>`_.
* Other very minor/subtle fixes and improvements.
`#1329 <https://github.com/pybind/pybind11/pull/1329>`_,
`#1910 <https://github.com/pybind/pybind11/pull/1910>`_,
`#1863 <https://github.com/pybind/pybind11/pull/1863>`_,
`#1847 <https://github.com/pybind/pybind11/pull/1847>`_,
`#1890 <https://github.com/pybind/pybind11/pull/1890>`_,
`#1860 <https://github.com/pybind/pybind11/pull/1860>`_,
`#1848 <https://github.com/pybind/pybind11/pull/1848>`_,
`#1821 <https://github.com/pybind/pybind11/pull/1821>`_,
`#1837 <https://github.com/pybind/pybind11/pull/1837>`_,
`#1833 <https://github.com/pybind/pybind11/pull/1833>`_,
`#1748 <https://github.com/pybind/pybind11/pull/1748>`_,
`#1852 <https://github.com/pybind/pybind11/pull/1852>`_.
v2.3.0 (June 11, 2019)
-----------------------------------------------------
* Significantly reduced module binary size (10-20%) when compiled in C++11 mode
with GCC/Clang, or in any mode with MSVC. Function signatures are now always
precomputed at compile time (this was previously only available in C++14 mode
for non-MSVC compilers).
`#934 <https://github.com/pybind/pybind11/pull/934>`_.
* Add basic support for tag-based static polymorphism, where classes
provide a method that returns the desired type of an instance.
`#1326 <https://github.com/pybind/pybind11/pull/1326>`_.
* Python type wrappers (``py::handle``, ``py::object``, etc.)
now map Python's number protocol onto C++ arithmetic
operators such as ``operator+``, ``operator/=``, etc.
`#1511 <https://github.com/pybind/pybind11/pull/1511>`_.
* A number of improvements related to enumerations:
1. The ``enum_`` implementation was rewritten from scratch to reduce
code bloat. Rather than instantiating a full implementation for each
enumeration, most code is now contained in a generic base class.
`#1511 <https://github.com/pybind/pybind11/pull/1511>`_.
2. The ``value()`` method of ``py::enum_`` now accepts an optional
docstring that will be shown in the documentation of the associated
enumeration. `#1160 <https://github.com/pybind/pybind11/pull/1160>`_.
3. Check for an already existing enum value and throw an error if present.
`#1453 <https://github.com/pybind/pybind11/pull/1453>`_.
* Support for over-aligned type allocation via C++17's aligned ``new``
statement. `#1582 <https://github.com/pybind/pybind11/pull/1582>`_.
* Added ``py::ellipsis()`` method for slicing of multidimensional NumPy arrays.
`#1502 <https://github.com/pybind/pybind11/pull/1502>`_.
* Numerous improvements to the ``mkdoc.py`` script for extracting documentation
from C++ header files.
`#1788 <https://github.com/pybind/pybind11/pull/1788>`_.
* ``pybind11_add_module()``: allow including Python as a ``SYSTEM`` include path.
`#1416 <https://github.com/pybind/pybind11/pull/1416>`_.
* ``pybind11/stl.h`` does not convert strings to ``vector<string>`` anymore.
`#1258 <https://github.com/pybind/pybind11/issues/1258>`_.
* Mark static methods as such to fix auto-generated Sphinx documentation.
`#1732 <https://github.com/pybind/pybind11/pull/1732>`_.
* Re-throw forced unwind exceptions (e.g. during pthread termination).
`#1208 <https://github.com/pybind/pybind11/pull/1208>`_.
* Added ``__contains__`` method to the bindings of maps (``std::map``,
``std::unordered_map``).
`#1767 <https://github.com/pybind/pybind11/pull/1767>`_.
* Improvements to ``gil_scoped_acquire``.
`#1211 <https://github.com/pybind/pybind11/pull/1211>`_.
* Type caster support for ``std::deque<T>``.
`#1609 <https://github.com/pybind/pybind11/pull/1609>`_.
* Support for ``std::unique_ptr`` holders, whose deleters differ between a base and derived
class. `#1353 <https://github.com/pybind/pybind11/pull/1353>`_.
* Construction of STL array/vector-like data structures from
iterators. Added an ``extend()`` operation.
`#1709 <https://github.com/pybind/pybind11/pull/1709>`_.
* CMake build system improvements for projects that include non-C++
files (e.g. plain C, CUDA) in ``pybind11_add_module`` et al.
`#1678 <https://github.com/pybind/pybind11/pull/1678>`_.
* Fixed asynchronous invocation and deallocation of Python functions
wrapped in ``std::function``.
`#1595 <https://github.com/pybind/pybind11/pull/1595>`_.
* Fixes regarding return value policy propagation in STL type casters.
`#1603 <https://github.com/pybind/pybind11/pull/1603>`_.
* Fixed scoped enum comparisons.
`#1571 <https://github.com/pybind/pybind11/pull/1571>`_.
* Fixed iostream redirection for code that releases the GIL.
`#1368 <https://github.com/pybind/pybind11/pull/1368>`_.
* A number of CI-related fixes.
`#1757 <https://github.com/pybind/pybind11/pull/1757>`_,
`#1744 <https://github.com/pybind/pybind11/pull/1744>`_,
`#1670 <https://github.com/pybind/pybind11/pull/1670>`_.
v2.2.4 (September 11, 2018)
-----------------------------------------------------
* Use new Python 3.7 Thread Specific Storage (TSS) implementation if available.
`#1454 <https://github.com/pybind/pybind11/pull/1454>`_,
`#1517 <https://github.com/pybind/pybind11/pull/1517>`_.
* Fixes for newer MSVC versions and C++17 mode.
`#1347 <https://github.com/pybind/pybind11/pull/1347>`_,
`#1462 <https://github.com/pybind/pybind11/pull/1462>`_.
* Propagate return value policies to type-specific casters
when casting STL containers.
`#1455 <https://github.com/pybind/pybind11/pull/1455>`_.
* Allow ostream-redirection of more than 1024 characters.
`#1479 <https://github.com/pybind/pybind11/pull/1479>`_.
* Set ``Py_DEBUG`` define when compiling against a debug Python build.
`#1438 <https://github.com/pybind/pybind11/pull/1438>`_.
* Untangle integer logic in number type caster to work for custom
types that may only be castable to a restricted set of builtin types.
`#1442 <https://github.com/pybind/pybind11/pull/1442>`_.
* CMake build system: Remember Python version in cache file.
`#1434 <https://github.com/pybind/pybind11/pull/1434>`_.
* Fix for custom smart pointers: use ``std::addressof`` to obtain holder
address instead of ``operator&``.
`#1435 <https://github.com/pybind/pybind11/pull/1435>`_.
* Properly report exceptions thrown during module initialization.
`#1362 <https://github.com/pybind/pybind11/pull/1362>`_.
* Fixed a segmentation fault when creating empty-shaped NumPy array.
`#1371 <https://github.com/pybind/pybind11/pull/1371>`_.
* The version of Intel C++ compiler must be >= 2017, and this is now checked by
the header files. `#1363 <https://github.com/pybind/pybind11/pull/1363>`_.
* A few minor typo fixes and improvements to the test suite, and
patches that silence compiler warnings.
* Vectors now support construction from generators, as well as ``extend()`` from a
list or generator.
`#1496 <https://github.com/pybind/pybind11/pull/1496>`_.
v2.2.3 (April 29, 2018)
-----------------------------------------------------
* The pybind11 header location detection was replaced by a new implementation
that no longer depends on ``pip`` internals (the recently released ``pip``
10 has restricted access to this API).
`#1190 <https://github.com/pybind/pybind11/pull/1190>`_.
* Small adjustment to an implementation detail to work around a compiler segmentation fault in Clang 3.3/3.4.
`#1350 <https://github.com/pybind/pybind11/pull/1350>`_.
* The minimal supported version of the Intel compiler was >= 17.0 since
pybind11 v2.1. This check is now explicit, and a compile-time error is raised
if the compiler does not meet the requirement.
`#1363 <https://github.com/pybind/pybind11/pull/1363>`_.
* Fixed an endianness-related fault in the test suite.
`#1287 <https://github.com/pybind/pybind11/pull/1287>`_.
v2.2.2 (February 7, 2018)
-----------------------------------------------------
* Fixed a segfault when combining embedded interpreter
shutdown/reinitialization with external loaded pybind11 modules.
`#1092 <https://github.com/pybind/pybind11/pull/1092>`_.
* Eigen support: fixed a bug where Nx1/1xN numpy inputs couldn't be passed as
arguments to Eigen vectors (which for Eigen are simply compile-time fixed
Nx1/1xN matrices).
`#1106 <https://github.com/pybind/pybind11/pull/1106>`_.
* Clarified the license by moving the licensing of contributions from
``LICENSE`` into ``CONTRIBUTING.md``: the licensing of contributions is not
actually part of the software license as distributed. This isn't meant to be
a substantial change in the licensing of the project, but addresses concerns
that the clause made the license non-standard.
`#1109 <https://github.com/pybind/pybind11/issues/1109>`_.
* Fixed a regression introduced in 2.1 that broke binding functions with lvalue
character literal arguments.
`#1128 <https://github.com/pybind/pybind11/pull/1128>`_.
* MSVC: fix for compilation failures under /permissive-, and added the flag to
the appveyor test suite.
`#1155 <https://github.com/pybind/pybind11/pull/1155>`_.
* Fixed ``__qualname__`` generation, and in turn, fixes how class names
(especially nested class names) are shown in generated docstrings.
`#1171 <https://github.com/pybind/pybind11/pull/1171>`_.
* Updated the FAQ with a suggested project citation reference.
`#1189 <https://github.com/pybind/pybind11/pull/1189>`_.
* Added fixes for deprecation warnings when compiled under C++17 with
``-Wdeprecated`` turned on, and add ``-Wdeprecated`` to the test suite
compilation flags.
`#1191 <https://github.com/pybind/pybind11/pull/1191>`_.
* Fixed outdated PyPI URLs in ``setup.py``.
`#1213 <https://github.com/pybind/pybind11/pull/1213>`_.
* Fixed a refcount leak for arguments that end up in a ``py::args`` argument
for functions with both fixed positional and ``py::args`` arguments.
`#1216 <https://github.com/pybind/pybind11/pull/1216>`_.
* Fixed a potential segfault resulting from possible premature destruction of
``py::args``/``py::kwargs`` arguments with overloaded functions.
`#1223 <https://github.com/pybind/pybind11/pull/1223>`_.
* Fixed ``del map[item]`` for a ``stl_bind.h`` bound stl map.
`#1229 <https://github.com/pybind/pybind11/pull/1229>`_.
* Fixed a regression from v2.1.x where the aggregate initialization could
unintentionally end up at a constructor taking a templated
``std::initializer_list<T>`` argument.
`#1249 <https://github.com/pybind/pybind11/pull/1249>`_.
* Fixed an issue where calling a function with a keep_alive policy on the same
nurse/patient pair would cause the internal patient storage to needlessly
grow (unboundedly, if the nurse is long-lived).
`#1251 <https://github.com/pybind/pybind11/issues/1251>`_.
* Various other minor fixes.
v2.2.1 (September 14, 2017)
-----------------------------------------------------
* Added ``py::module::reload()`` member function for reloading a module.
`#1040 <https://github.com/pybind/pybind11/pull/1040>`_.
* Fixed a reference leak in the number converter.
`#1078 <https://github.com/pybind/pybind11/pull/1078>`_.
* Fixed compilation with Clang on host GCC < 5 (old libstdc++ which isn't fully
C++11 compliant). `#1062 <https://github.com/pybind/pybind11/pull/1062>`_.
* Fixed a regression where the automatic ``std::vector<bool>`` caster would
fail to compile. The same fix also applies to any container which returns
element proxies instead of references.
`#1053 <https://github.com/pybind/pybind11/pull/1053>`_.
* Fixed a regression where the ``py::keep_alive`` policy could not be applied
to constructors. `#1065 <https://github.com/pybind/pybind11/pull/1065>`_.
* Fixed a nullptr dereference when loading a ``py::module_local`` type
that's only registered in an external module.
`#1058 <https://github.com/pybind/pybind11/pull/1058>`_.
* Fixed implicit conversion of accessors to types derived from ``py::object``.
`#1076 <https://github.com/pybind/pybind11/pull/1076>`_.
* The ``name`` in ``PYBIND11_MODULE(name, variable)`` can now be a macro.
`#1082 <https://github.com/pybind/pybind11/pull/1082>`_.
* Relaxed overly strict ``py::pickle()`` check for matching get and set types.
`#1064 <https://github.com/pybind/pybind11/pull/1064>`_.
* Conversion errors now try to be more informative when it's likely that
a missing header is the cause (e.g. forgetting ``<pybind11/stl.h>``).
`#1077 <https://github.com/pybind/pybind11/pull/1077>`_.
v2.2.0 (August 31, 2017)
-----------------------------------------------------
* Support for embedding the Python interpreter. See the
:doc:`documentation page </advanced/embedding>` for a
full overview of the new features.
`#774 <https://github.com/pybind/pybind11/pull/774>`_,
`#889 <https://github.com/pybind/pybind11/pull/889>`_,
`#892 <https://github.com/pybind/pybind11/pull/892>`_,
`#920 <https://github.com/pybind/pybind11/pull/920>`_.
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
int main() {
py::scoped_interpreter guard{}; // start the interpreter and keep it alive
py::print("Hello, World!"); // use the Python API
}
* Support for inheriting from multiple C++ bases in Python.
`#693 <https://github.com/pybind/pybind11/pull/693>`_.
.. code-block:: python
from cpp_module import CppBase1, CppBase2
class PyDerived(CppBase1, CppBase2):
def __init__(self):
CppBase1.__init__(self) # C++ bases must be initialized explicitly
CppBase2.__init__(self)
* ``PYBIND11_MODULE`` is now the preferred way to create module entry points.
``PYBIND11_PLUGIN`` is deprecated. See :ref:`macros` for details.
`#879 <https://github.com/pybind/pybind11/pull/879>`_.
.. code-block:: cpp
// new
PYBIND11_MODULE(example, m) {
m.def("add", [](int a, int b) { return a + b; });
}
// old
PYBIND11_PLUGIN(example) {
py::module m("example");
m.def("add", [](int a, int b) { return a + b; });
return m.ptr();
}
* pybind11's headers and build system now more strictly enforce hidden symbol
visibility for extension modules. This should be seamless for most users,
but see the :doc:`upgrade` if you use a custom build system.
`#995 <https://github.com/pybind/pybind11/pull/995>`_.
* Support for ``py::module_local`` types which allow multiple modules to
export the same C++ types without conflicts. This is useful for opaque
types like ``std::vector<int>``. ``py::bind_vector`` and ``py::bind_map``
now default to ``py::module_local`` if their elements are builtins or
local types. See :ref:`module_local` for details.
`#949 <https://github.com/pybind/pybind11/pull/949>`_,
`#981 <https://github.com/pybind/pybind11/pull/981>`_,
`#995 <https://github.com/pybind/pybind11/pull/995>`_,
`#997 <https://github.com/pybind/pybind11/pull/997>`_.
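For illustration (the ``Pet`` type and module handle ``m`` are placeholders, not
taken from this changelog):

.. code-block:: cpp

    // Each extension gets its own local copy of these bindings, so two modules
    // can register the same C++ type without conflicting.
    py::class_<Pet>(m, "Pet", py::module_local())
        .def(py::init<const std::string &>());

    // stl_bind.h containers of builtin element types now default to module-local:
    py::bind_vector<std::vector<int>>(m, "VectorInt");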
* Custom constructors can now be added very easily using lambdas or factory
functions which return a class instance by value, pointer or holder. This
supersedes the old placement-new ``__init__`` technique.
See :ref:`custom_constructors` for details.
`#805 <https://github.com/pybind/pybind11/pull/805>`_,
`#1014 <https://github.com/pybind/pybind11/pull/1014>`_.
.. code-block:: cpp
struct Example {
Example(std::string);
};
py::class_<Example>(m, "Example")
.def(py::init<std::string>()) // existing constructor
.def(py::init([](int n) { // custom constructor
return std::make_unique<Example>(std::to_string(n));
}));
* Similarly to custom constructors, pickling support functions are now bound
using the ``py::pickle()`` adaptor which improves type safety. See the
:doc:`upgrade` and :ref:`pickling` for details.
`#1038 <https://github.com/pybind/pybind11/pull/1038>`_.
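A hedged sketch of the adaptor (the ``Widget`` type and its ``value()`` accessor
are illustrative only):

.. code-block:: cpp

    py::class_<Widget>(m, "Widget")
        .def(py::init<int>())
        .def(py::pickle(
            [](const Widget &w) {              // __getstate__
                return py::make_tuple(w.value());
            },
            [](py::tuple t) {                  // __setstate__
                return Widget(t[0].cast<int>());
            }));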
* Builtin support for converting C++17 standard library types and general
conversion improvements:
1. C++17 ``std::variant`` is supported right out of the box. C++11/14
equivalents (e.g. ``boost::variant``) can also be added with a simple
user-defined specialization. See :ref:`cpp17_container_casters` for details.
`#811 <https://github.com/pybind/pybind11/pull/811>`_,
`#845 <https://github.com/pybind/pybind11/pull/845>`_,
`#989 <https://github.com/pybind/pybind11/pull/989>`_.
2. Out-of-the-box support for C++17 ``std::string_view``.
`#906 <https://github.com/pybind/pybind11/pull/906>`_.
3. Improved compatibility of the builtin ``optional`` converter.
`#874 <https://github.com/pybind/pybind11/pull/874>`_.
4. The ``bool`` converter now accepts ``numpy.bool_`` and types which
define ``__bool__`` (Python 3.x) or ``__nonzero__`` (Python 2.7).
`#925 <https://github.com/pybind/pybind11/pull/925>`_.
5. C++-to-Python casters are now more efficient and move elements out
of rvalue containers whenever possible.
`#851 <https://github.com/pybind/pybind11/pull/851>`_,
`#936 <https://github.com/pybind/pybind11/pull/936>`_,
`#938 <https://github.com/pybind/pybind11/pull/938>`_.
6. Fixed ``bytes`` to ``std::string/char*`` conversion on Python 3.
`#817 <https://github.com/pybind/pybind11/pull/817>`_.
7. Fixed lifetime of temporary C++ objects created in Python-to-C++ conversions.
`#924 <https://github.com/pybind/pybind11/pull/924>`_.
* Scope guard call policy for RAII types, e.g. ``py::call_guard<py::gil_scoped_release>()``,
``py::call_guard<py::scoped_ostream_redirect>()``. See :ref:`call_policies` for details.
`#740 <https://github.com/pybind/pybind11/pull/740>`_.
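For example, to release the GIL around every call to a bound function
(``long_running`` is a hypothetical C++ function):

.. code-block:: cpp

    m.def("long_running", &long_running,
          py::call_guard<py::gil_scoped_release>());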
* Utility for redirecting C++ streams to Python (e.g. ``std::cout`` ->
``sys.stdout``). Scope guard ``py::scoped_ostream_redirect`` in C++ and
a context manager in Python. See :ref:`ostream_redirect`.
`#1009 <https://github.com/pybind/pybind11/pull/1009>`_.
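A short sketch, assuming ``<pybind11/iostream.h>`` and ``<iostream>`` are included:

.. code-block:: cpp

    {
        py::scoped_ostream_redirect guard;  // std::cout -> sys.stdout while alive
        std::cout << "forwarded to Python" << std::endl;
    }

    // Alternatively, expose a Python context manager on the module:
    py::add_ostream_redirect(m, "ostream_redirect");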
* Improved handling of types and exceptions across module boundaries.
`#915 <https://github.com/pybind/pybind11/pull/915>`_,
`#951 <https://github.com/pybind/pybind11/pull/951>`_,
`#995 <https://github.com/pybind/pybind11/pull/995>`_.
* Fixed destruction order of ``py::keep_alive`` nurse/patient objects
in reference cycles.
`#856 <https://github.com/pybind/pybind11/pull/856>`_.
* Numpy and buffer protocol related improvements:
1. Support for negative strides in Python buffer objects/numpy arrays. This
required changing integers from unsigned to signed for the related C++ APIs.
Note: If you have compiler warnings enabled, you may notice some new conversion
warnings after upgrading. These can be resolved with ``static_cast``.
`#782 <https://github.com/pybind/pybind11/pull/782>`_.
2. Support ``std::complex`` and arrays inside ``PYBIND11_NUMPY_DTYPE``.
`#831 <https://github.com/pybind/pybind11/pull/831>`_,
`#832 <https://github.com/pybind/pybind11/pull/832>`_.
3. Support for constructing ``py::buffer_info`` and ``py::array`` objects using
arbitrary containers or iterators instead of requiring a ``std::vector``.
`#788 <https://github.com/pybind/pybind11/pull/788>`_,
`#822 <https://github.com/pybind/pybind11/pull/822>`_,
`#860 <https://github.com/pybind/pybind11/pull/860>`_.
4. Explicitly check numpy version and require >= 1.7.0.
`#819 <https://github.com/pybind/pybind11/pull/819>`_.
* Support for allowing/prohibiting ``None`` for specific arguments and improved
``None`` overload resolution order. See :ref:`none_arguments` for details.
`#843 <https://github.com/pybind/pybind11/pull/843>`_.
`#859 <https://github.com/pybind/pybind11/pull/859>`_.
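Roughly (the function names and the pointer-typed parameter are illustrative):

.. code-block:: cpp

    m.def("process", &process, py::arg("w").none(false));  // reject None for `w`
    m.def("maybe",   &maybe,   py::arg("w").none(true));   // explicitly allow None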
* Added ``py::exec()`` as a shortcut for ``py::eval<py::eval_statements>()``
and support for C++11 raw string literals as input. See :ref:`eval`.
`#766 <https://github.com/pybind/pybind11/pull/766>`_,
`#827 <https://github.com/pybind/pybind11/pull/827>`_.
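For instance (assuming ``<pybind11/eval.h>`` or the embedding header is included):

.. code-block:: cpp

    py::exec(R"(
        x = 21
        y = x * 2
    )");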
* ``py::vectorize()`` ignores non-vectorizable arguments and supports
member functions.
`#762 <https://github.com/pybind/pybind11/pull/762>`_.
* Support for bound methods as callbacks (``pybind11/functional.h``).
`#815 <https://github.com/pybind/pybind11/pull/815>`_.
* Allow aliasing pybind11 methods: ``cls.attr("foo") = cls.attr("bar")``.
`#802 <https://github.com/pybind/pybind11/pull/802>`_.
* Don't allow mixed static/non-static overloads.
`#804 <https://github.com/pybind/pybind11/pull/804>`_.
* Fixed overriding static properties in derived classes.
`#784 <https://github.com/pybind/pybind11/pull/784>`_.
* Added support for write-only properties.
`#1144 <https://github.com/pybind/pybind11/pull/1144>`_.
* Improved deduction of member functions of a derived class when its bases
aren't registered with pybind11.
`#855 <https://github.com/pybind/pybind11/pull/855>`_.
.. code-block:: cpp
struct Base {
int foo() { return 42; }
};
struct Derived : Base {};
// Now works, but previously required also binding `Base`
py::class_<Derived>(m, "Derived")
.def("foo", &Derived::foo); // function is actually from `Base`
* The implementation of ``py::init<>`` now uses C++11 brace initialization
syntax to construct instances, which permits binding implicit constructors of
aggregate types. `#1015 <https://github.com/pybind/pybind11/pull/1015>`_.
.. code-block:: cpp
struct Aggregate {
int a;
std::string b;
};
py::class_<Aggregate>(m, "Aggregate")
.def(py::init<int, const std::string &>());
* Fixed issues with multiple inheritance with offset base/derived pointers.
`#812 <https://github.com/pybind/pybind11/pull/812>`_,
`#866 <https://github.com/pybind/pybind11/pull/866>`_,
`#960 <https://github.com/pybind/pybind11/pull/960>`_.
* Fixed reference leak of type objects.
`#1030 <https://github.com/pybind/pybind11/pull/1030>`_.
* Improved support for the ``/std:c++14`` and ``/std:c++latest`` modes
on MSVC 2017.
`#841 <https://github.com/pybind/pybind11/pull/841>`_,
`#999 <https://github.com/pybind/pybind11/pull/999>`_.
* Fixed detection of private operator new on MSVC.
`#893 <https://github.com/pybind/pybind11/pull/893>`_,
`#918 <https://github.com/pybind/pybind11/pull/918>`_.
* Intel C++ compiler compatibility fixes.
`#937 <https://github.com/pybind/pybind11/pull/937>`_.
* Fixed implicit conversion of ``py::enum_`` to integer types on Python 2.7.
`#821 <https://github.com/pybind/pybind11/pull/821>`_.
* Added ``py::hash`` to fetch the hash value of Python objects, and
``.def(hash(py::self))`` to provide the C++ ``std::hash`` as the Python
``__hash__`` method.
`#1034 <https://github.com/pybind/pybind11/pull/1034>`_.
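A sketch, assuming ``<pybind11/operators.h>`` is included and ``std::hash<Counter>``
is specialized (``Counter`` is a placeholder type):

.. code-block:: cpp

    py::class_<Counter>(m, "Counter")
        .def(hash(py::self));

    // The free function returns the Python hash of an arbitrary object:
    m.def("hash_of", [](py::object obj) { return py::hash(obj); });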
* Fixed ``__truediv__`` on Python 2 and ``__itruediv__`` on Python 3.
`#867 <https://github.com/pybind/pybind11/pull/867>`_.
* ``py::capsule`` objects now support the ``name`` attribute. This is useful
for interfacing with ``scipy.LowLevelCallable``.
`#902 <https://github.com/pybind/pybind11/pull/902>`_.
* Fixed ``py::make_iterator``'s ``__next__()`` for past-the-end calls.
`#897 <https://github.com/pybind/pybind11/pull/897>`_.
* Added ``error_already_set::matches()`` for checking Python exceptions.
`#772 <https://github.com/pybind/pybind11/pull/772>`_.
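A brief sketch of the new check:

.. code-block:: cpp

    try {
        py::module::import("os").attr("does_not_exist");
    } catch (py::error_already_set &e) {
        if (e.matches(PyExc_AttributeError)) {
            // expected failure: the attribute is missing
        } else {
            throw;
        }
    }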
* Deprecated ``py::error_already_set::clear()``. It's no longer needed
following a simplification of the ``py::error_already_set`` class.
`#954 <https://github.com/pybind/pybind11/pull/954>`_.
* Deprecated ``py::handle::operator==()`` in favor of ``py::handle::is()``.
`#825 <https://github.com/pybind/pybind11/pull/825>`_.
* Deprecated ``py::object::borrowed``/``py::object::stolen``.
Use ``py::object::borrowed_t{}``/``py::object::stolen_t{}`` instead.
`#771 <https://github.com/pybind/pybind11/pull/771>`_.
* Changed internal data structure versioning to avoid conflicts between
modules compiled with different revisions of pybind11.
`#1012 <https://github.com/pybind/pybind11/pull/1012>`_.
* Additional compile-time and run-time error checking and more informative messages.
`#786 <https://github.com/pybind/pybind11/pull/786>`_,
`#794 <https://github.com/pybind/pybind11/pull/794>`_,
`#803 <https://github.com/pybind/pybind11/pull/803>`_.
* Various minor improvements and fixes.
`#764 <https://github.com/pybind/pybind11/pull/764>`_,
`#791 <https://github.com/pybind/pybind11/pull/791>`_,
`#795 <https://github.com/pybind/pybind11/pull/795>`_,
`#840 <https://github.com/pybind/pybind11/pull/840>`_,
`#844 <https://github.com/pybind/pybind11/pull/844>`_,
`#846 <https://github.com/pybind/pybind11/pull/846>`_,
`#849 <https://github.com/pybind/pybind11/pull/849>`_,
`#858 <https://github.com/pybind/pybind11/pull/858>`_,
`#862 <https://github.com/pybind/pybind11/pull/862>`_,
`#871 <https://github.com/pybind/pybind11/pull/871>`_,
`#872 <https://github.com/pybind/pybind11/pull/872>`_,
`#881 <https://github.com/pybind/pybind11/pull/881>`_,
`#888 <https://github.com/pybind/pybind11/pull/888>`_,
`#899 <https://github.com/pybind/pybind11/pull/899>`_,
`#928 <https://github.com/pybind/pybind11/pull/928>`_,
`#931 <https://github.com/pybind/pybind11/pull/931>`_,
`#944 <https://github.com/pybind/pybind11/pull/944>`_,
`#950 <https://github.com/pybind/pybind11/pull/950>`_,
`#952 <https://github.com/pybind/pybind11/pull/952>`_,
`#962 <https://github.com/pybind/pybind11/pull/962>`_,
`#965 <https://github.com/pybind/pybind11/pull/965>`_,
`#970 <https://github.com/pybind/pybind11/pull/970>`_,
`#978 <https://github.com/pybind/pybind11/pull/978>`_,
`#979 <https://github.com/pybind/pybind11/pull/979>`_,
`#986 <https://github.com/pybind/pybind11/pull/986>`_,
`#1020 <https://github.com/pybind/pybind11/pull/1020>`_,
`#1027 <https://github.com/pybind/pybind11/pull/1027>`_,
`#1037 <https://github.com/pybind/pybind11/pull/1037>`_.
* Testing improvements.
`#798 <https://github.com/pybind/pybind11/pull/798>`_,
`#882 <https://github.com/pybind/pybind11/pull/882>`_,
`#898 <https://github.com/pybind/pybind11/pull/898>`_,
`#900 <https://github.com/pybind/pybind11/pull/900>`_,
`#921 <https://github.com/pybind/pybind11/pull/921>`_,
`#923 <https://github.com/pybind/pybind11/pull/923>`_,
`#963 <https://github.com/pybind/pybind11/pull/963>`_.
v2.1.1 (April 7, 2017)
-----------------------------------------------------
* Fixed minimum version requirement for MSVC 2015u3
`#773 <https://github.com/pybind/pybind11/pull/773>`_.
v2.1.0 (March 22, 2017)
-----------------------------------------------------
* pybind11 now performs function overload resolution in two phases. The first
phase only considers exact type matches, while the second allows for implicit
conversions to take place. A special ``noconvert()`` syntax can be used to
completely disable implicit conversions for specific arguments.
`#643 <https://github.com/pybind/pybind11/pull/643>`_,
`#634 <https://github.com/pybind/pybind11/pull/634>`_,
`#650 <https://github.com/pybind/pybind11/pull/650>`_.
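For example (the function and argument names are illustrative):

.. code-block:: cpp

    // Passing a Python float for `x` now raises TypeError instead of being
    // implicitly converted to int:
    m.def("exact_int", [](int x) { return x; }, py::arg("x").noconvert());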
* Fixed a regression where static properties no longer worked with classes
using multiple inheritance. The ``py::metaclass`` attribute is no longer
necessary (and deprecated as of this release) when binding classes with
static properties.
`#679 <https://github.com/pybind/pybind11/pull/679>`_.
* Classes bound using ``pybind11`` can now use custom metaclasses.
`#679 <https://github.com/pybind/pybind11/pull/679>`_.
* ``py::args`` and ``py::kwargs`` can now be mixed with other positional
arguments when binding functions using pybind11.
`#611 <https://github.com/pybind/pybind11/pull/611>`_.
* Improved support for C++11 unicode string and character types; added
extensive documentation regarding pybind11's string conversion behavior.
`#624 <https://github.com/pybind/pybind11/pull/624>`_,
`#636 <https://github.com/pybind/pybind11/pull/636>`_,
`#715 <https://github.com/pybind/pybind11/pull/715>`_.
* pybind11 can now avoid expensive copies when converting Eigen arrays to NumPy
arrays (and vice versa). `#610 <https://github.com/pybind/pybind11/pull/610>`_.
* The "fast path" in ``py::vectorize`` now works for any full-size group of C or
F-contiguous arrays. The non-fast path is also faster since it no longer performs
copies of the input arguments (except when type conversions are necessary).
`#610 <https://github.com/pybind/pybind11/pull/610>`_.
* Added fast, unchecked access to NumPy arrays via a proxy object.
`#746 <https://github.com/pybind/pybind11/pull/746>`_.
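A hedged sketch of the proxy API:

.. code-block:: cpp

    m.def("sum_1d", [](py::array_t<double> a) {
        auto r = a.unchecked<1>();  // 1-D proxy without bounds/dimension checks
        double total = 0;
        for (py::ssize_t i = 0; i < r.shape(0); i++)
            total += r(i);
        return total;
    });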
* Transparent support for class-specific ``operator new`` and
``operator delete`` implementations.
`#755 <https://github.com/pybind/pybind11/pull/755>`_.
* Slimmer and more efficient STL-compatible iterator interface for sequence types.
`#662 <https://github.com/pybind/pybind11/pull/662>`_.
* Improved custom holder type support.
`#607 <https://github.com/pybind/pybind11/pull/607>`_.
* ``nullptr`` to ``None`` conversion fixed in various builtin type casters.
`#732 <https://github.com/pybind/pybind11/pull/732>`_.
* ``enum_`` now exposes its members via a special ``__members__`` attribute.
`#666 <https://github.com/pybind/pybind11/pull/666>`_.
* ``std::vector`` bindings created using ``stl_bind.h`` can now optionally
implement the buffer protocol. `#488 <https://github.com/pybind/pybind11/pull/488>`_.
* Automated C++ reference documentation using doxygen and breathe.
`#598 <https://github.com/pybind/pybind11/pull/598>`_.
* Added minimum compiler version assertions.
`#727 <https://github.com/pybind/pybind11/pull/727>`_.
* Improved compatibility with C++1z.
`#677 <https://github.com/pybind/pybind11/pull/677>`_.
* Improved ``py::capsule`` API. Can be used to implement cleanup
callbacks that are invoked at module destruction time.
`#752 <https://github.com/pybind/pybind11/pull/752>`_.
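As a sketch of the resulting pattern (the attribute name ``_cleanup`` is arbitrary):

.. code-block:: cpp

    auto cleanup = []() {
        // release global/static resources here; runs when the module is destroyed
    };
    m.add_object("_cleanup", py::capsule(cleanup));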
* Various minor improvements and fixes.
`#595 <https://github.com/pybind/pybind11/pull/595>`_,
`#588 <https://github.com/pybind/pybind11/pull/588>`_,
`#589 <https://github.com/pybind/pybind11/pull/589>`_,
`#603 <https://github.com/pybind/pybind11/pull/603>`_,
`#619 <https://github.com/pybind/pybind11/pull/619>`_,
`#648 <https://github.com/pybind/pybind11/pull/648>`_,
`#695 <https://github.com/pybind/pybind11/pull/695>`_,
`#720 <https://github.com/pybind/pybind11/pull/720>`_,
`#723 <https://github.com/pybind/pybind11/pull/723>`_,
`#729 <https://github.com/pybind/pybind11/pull/729>`_,
`#724 <https://github.com/pybind/pybind11/pull/724>`_,
`#742 <https://github.com/pybind/pybind11/pull/742>`_,
`#753 <https://github.com/pybind/pybind11/pull/753>`_.
v2.0.1 (Jan 4, 2017)
-----------------------------------------------------
* Fixed a pointer-to-reference error in ``type_caster`` on MSVC
`#583 <https://github.com/pybind/pybind11/pull/583>`_.
* Fixed a segmentation fault in the test suite caused by a typo
`cd7eac <https://github.com/pybind/pybind11/commit/cd7eac>`_.
v2.0.0 (Jan 1, 2017)
-----------------------------------------------------
* Fixed a reference counting regression affecting types with custom metaclasses
(introduced in v2.0.0-rc1).
`#571 <https://github.com/pybind/pybind11/pull/571>`_.
* Quenched a CMake policy warning.
`#570 <https://github.com/pybind/pybind11/pull/570>`_.
v2.0.0-rc1 (Dec 23, 2016)
-----------------------------------------------------
The pybind11 developers are excited to issue a release candidate of pybind11
with a subsequent v2.0.0 release planned in early January next year.
An incredible amount of effort went into pybind11 over the last ~5 months,
leading to a release that is jam-packed with exciting new features and numerous
usability improvements. The following list links PRs or individual commits
whenever applicable.
Happy Christmas!
* Support for binding C++ class hierarchies that make use of multiple
inheritance. `#410 <https://github.com/pybind/pybind11/pull/410>`_.
* PyPy support: pybind11 now supports nightly builds of PyPy and will
interoperate with the future 5.7 release. No code changes are necessary,
everything "just" works as usual. Note that we only target the Python 2.7
branch for now; support for 3.x will be added once its ``cpyext`` extension
support catches up. A few minor features remain unsupported for the time
being (notably dynamic attributes in custom types).
`#527 <https://github.com/pybind/pybind11/pull/527>`_.
* Significant work on the documentation -- in particular, the monolithic
``advanced.rst`` file was restructured into an easier-to-read hierarchical
organization. `#448 <https://github.com/pybind/pybind11/pull/448>`_.
* Many NumPy-related improvements:
1. Object-oriented API to access and modify NumPy ``ndarray`` instances,
replicating much of the corresponding NumPy C API functionality.
`#402 <https://github.com/pybind/pybind11/pull/402>`_.
2. NumPy array ``dtype`` array descriptors are now first-class citizens and
are exposed via a new class ``py::dtype``.
3. Structured dtypes can be registered using the ``PYBIND11_NUMPY_DTYPE()``
macro. Special ``array`` constructors accepting dtype objects were also
added.
One potential caveat involving this change: format descriptor strings
should now be accessed via ``format_descriptor::format()`` (however, for
compatibility purposes, the old syntax ``format_descriptor::value`` will
still work for non-structured data types). `#308
<https://github.com/pybind/pybind11/pull/308>`_.
4. Further improvements to support structured dtypes throughout the system.
`#472 <https://github.com/pybind/pybind11/pull/472>`_,
`#474 <https://github.com/pybind/pybind11/pull/474>`_,
`#459 <https://github.com/pybind/pybind11/pull/459>`_,
`#453 <https://github.com/pybind/pybind11/pull/453>`_,
`#452 <https://github.com/pybind/pybind11/pull/452>`_, and
`#505 <https://github.com/pybind/pybind11/pull/505>`_.
5. Fast access operators. `#497 <https://github.com/pybind/pybind11/pull/497>`_.
6. Constructors for arrays whose storage is owned by another object.
`#440 <https://github.com/pybind/pybind11/pull/440>`_.
7. Added constructors for ``array`` and ``array_t`` explicitly accepting shape
and strides; if strides are not provided, they are deduced assuming
C-contiguity. Also added simplified constructors for 1-dimensional case.
8. Added buffer/NumPy support for ``char[N]`` and ``std::array<char, N>`` types.
9. Added ``memoryview`` wrapper type which is constructible from ``buffer_info``.
* Eigen: many additional conversions and support for non-contiguous
arrays/slices.
`#427 <https://github.com/pybind/pybind11/pull/427>`_,
`#315 <https://github.com/pybind/pybind11/pull/315>`_,
`#316 <https://github.com/pybind/pybind11/pull/316>`_,
`#312 <https://github.com/pybind/pybind11/pull/312>`_, and
`#267 <https://github.com/pybind/pybind11/pull/267>`_.
* Incompatible changes in ``class_<...>::class_()``:
1. Declarations of types that provide access via the buffer protocol must
now include the ``py::buffer_protocol()`` annotation as an argument to
the ``class_`` constructor.
2. Declarations of types that require a custom metaclass (i.e. all classes
which include static properties via commands such as
``def_readwrite_static()``) must now include the ``py::metaclass()``
annotation as an argument to the ``class_`` constructor.
These two changes were necessary to make type definitions in pybind11
future-proof, and to support PyPy via its cpyext mechanism. `#527
<https://github.com/pybind/pybind11/pull/527>`_.
3. This version of pybind11 uses a redesigned mechanism for instantiating
trampoline classes that are used to override virtual methods from within
Python. This led to the following user-visible syntax change: instead of
.. code-block:: cpp
py::class_<TrampolineClass>("MyClass")
.alias<MyClass>()
....
write
.. code-block:: cpp
py::class_<MyClass, TrampolineClass>("MyClass")
....
Importantly, both the original and the trampoline class are now
specified as arguments (in arbitrary order) to the ``py::class_``
template, and the ``alias<..>()`` call is gone. The new scheme has zero
overhead in cases when Python doesn't override any functions of the
underlying C++ class. `rev. 86d825
<https://github.com/pybind/pybind11/commit/86d825>`_.
* Added ``eval`` and ``eval_file`` functions for evaluating expressions and
statements from a string or file. `rev. 0d3fc3
<https://github.com/pybind/pybind11/commit/0d3fc3>`_.
* pybind11 can now create types with a modifiable dictionary.
`#437 <https://github.com/pybind/pybind11/pull/437>`_ and
`#444 <https://github.com/pybind/pybind11/pull/444>`_.
* Support for translation of arbitrary C++ exceptions to Python counterparts.
`#296 <https://github.com/pybind/pybind11/pull/296>`_ and
`#273 <https://github.com/pybind/pybind11/pull/273>`_.
* Report full backtraces through mixed C++/Python code, better reporting for
import errors, fixed GIL management in exception processing.
`#537 <https://github.com/pybind/pybind11/pull/537>`_,
`#494 <https://github.com/pybind/pybind11/pull/494>`_,
`rev. e72d95 <https://github.com/pybind/pybind11/commit/e72d95>`_, and
`rev. 099d6e <https://github.com/pybind/pybind11/commit/099d6e>`_.
* Support for bit-level operations, comparisons, and serialization of C++
enumerations. `#503 <https://github.com/pybind/pybind11/pull/503>`_,
`#508 <https://github.com/pybind/pybind11/pull/508>`_,
`#380 <https://github.com/pybind/pybind11/pull/380>`_,
`#309 <https://github.com/pybind/pybind11/pull/309>`_,
`#311 <https://github.com/pybind/pybind11/pull/311>`_.
* The ``class_`` constructor now accepts its template arguments in any order.
`#385 <https://github.com/pybind/pybind11/pull/385>`_.
* Attribute and item accessors now have a more complete interface which makes
it possible to chain attributes as in
``obj.attr("a")[key].attr("b").attr("method")(1, 2, 3)``. `#425
<https://github.com/pybind/pybind11/pull/425>`_.
* Major redesign of the default and conversion constructors in ``pytypes.h``.
`#464 <https://github.com/pybind/pybind11/pull/464>`_.
* Added built-in support for ``std::shared_ptr`` holder type. It is no longer
necessary to include a declaration of the form
``PYBIND11_DECLARE_HOLDER_TYPE(T, std::shared_ptr<T>)`` (though continuing to
do so won't cause an error).
`#454 <https://github.com/pybind/pybind11/pull/454>`_.
* New ``py::overload_cast`` casting operator to select among multiple possible
overloads of a function. An example:
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def("set", py::overload_cast<int>(&Pet::set), "Set the pet's age")
.def("set", py::overload_cast<const std::string &>(&Pet::set), "Set the pet's name");
This feature only works on C++14-capable compilers.
`#541 <https://github.com/pybind/pybind11/pull/541>`_.
* C++ types are automatically cast to Python types, e.g. when assigning
them as an attribute. For instance, the following is now legal:
.. code-block:: cpp
py::module m = /* ... */
m.attr("constant") = 123;
(Previously, a ``py::cast`` call was necessary to avoid a compilation error.)
`#551 <https://github.com/pybind/pybind11/pull/551>`_.
* Redesigned ``pytest``-based test suite. `#321 <https://github.com/pybind/pybind11/pull/321>`_.
* Instance tracking to detect reference leaks in test suite. `#324 <https://github.com/pybind/pybind11/pull/324>`_
* pybind11 can now distinguish between multiple different instances that are
located at the same memory address, but which have different types.
`#329 <https://github.com/pybind/pybind11/pull/329>`_.
* Improved logic in ``move`` return value policy.
`#510 <https://github.com/pybind/pybind11/pull/510>`_,
`#297 <https://github.com/pybind/pybind11/pull/297>`_.
* Generalized unpacking API to permit calling Python functions from C++ using
notation such as ``foo(a1, a2, *args, "ka"_a=1, "kb"_a=2, **kwargs)``. `#372 <https://github.com/pybind/pybind11/pull/372>`_.
* ``py::print()`` function whose behavior matches that of the native Python
``print()`` function. `#372 <https://github.com/pybind/pybind11/pull/372>`_.
* Added ``py::dict`` keyword constructor: ``auto d = dict("number"_a=42,
"name"_a="World");``. `#372 <https://github.com/pybind/pybind11/pull/372>`_.
* Added ``py::str::format()`` method and ``_s`` literal: ``py::str s = "1 + 2
= {}"_s.format(3);``. `#372 <https://github.com/pybind/pybind11/pull/372>`_.
* Added ``py::repr()`` function which is equivalent to Python's builtin
``repr()``. `#333 <https://github.com/pybind/pybind11/pull/333>`_.
* Improved construction and destruction logic for holder types. It is now
possible to reference instances with smart pointer holder types without
constructing the holder if desired. The ``PYBIND11_DECLARE_HOLDER_TYPE``
macro now accepts an optional second parameter to indicate whether the holder
type uses intrusive reference counting.
`#533 <https://github.com/pybind/pybind11/pull/533>`_ and
`#561 <https://github.com/pybind/pybind11/pull/561>`_.
* Mapping a stateless C++ function to Python and back is now "for free" (i.e.
no extra indirections or argument conversion overheads). `rev. 954b79
<https://github.com/pybind/pybind11/commit/954b79>`_.
* Bindings for ``std::valarray<T>``.
`#545 <https://github.com/pybind/pybind11/pull/545>`_.
* Improved support for C++17 capable compilers.
`#562 <https://github.com/pybind/pybind11/pull/562>`_.
* Bindings for ``std::optional<T>``.
`#475 <https://github.com/pybind/pybind11/pull/475>`_,
`#476 <https://github.com/pybind/pybind11/pull/476>`_,
`#479 <https://github.com/pybind/pybind11/pull/479>`_,
`#499 <https://github.com/pybind/pybind11/pull/499>`_, and
`#501 <https://github.com/pybind/pybind11/pull/501>`_.
* ``stl_bind.h``: general improvements and support for ``std::map`` and
``std::unordered_map``.
`#490 <https://github.com/pybind/pybind11/pull/490>`_,
`#282 <https://github.com/pybind/pybind11/pull/282>`_,
`#235 <https://github.com/pybind/pybind11/pull/235>`_.
* The ``std::tuple``, ``std::pair``, ``std::list``, and ``std::vector`` type
casters now accept any Python sequence type as input. `rev. 107285
<https://github.com/pybind/pybind11/commit/107285>`_.
* Improved CMake Python detection on multi-architecture Linux.
`#532 <https://github.com/pybind/pybind11/pull/532>`_.
* Infrastructure to selectively disable or enable parts of the automatically
generated docstrings. `#486 <https://github.com/pybind/pybind11/pull/486>`_.
* ``reference`` and ``reference_internal`` are now the default return value
policies for static and non-static properties, respectively (the previous
defaults were ``automatic``). `#473 <https://github.com/pybind/pybind11/pull/473>`_.
* Support for ``std::unique_ptr`` with non-default deleters or no deleter at
all (``py::nodelete``). `#384 <https://github.com/pybind/pybind11/pull/384>`_.
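For instance, for objects whose lifetime is managed entirely on the C++ side
(``MyClass`` is a placeholder):

.. code-block:: cpp

    py::class_<MyClass, std::unique_ptr<MyClass, py::nodelete>>(m, "MyClass")
        .def(py::init<>());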
* Deprecated ``handle::call()`` method. The new syntax to call Python
functions is simply ``handle()``. It can also be invoked explicitly via
``handle::operator<X>()``, where ``X`` is an optional return value policy.
* Print more informative error messages when ``make_tuple()`` or ``cast()``
fail. `#262 <https://github.com/pybind/pybind11/pull/262>`_.
* Creation of holder types for classes deriving from
``std::enable_shared_from_this<>`` now also works for ``const`` values.
`#260 <https://github.com/pybind/pybind11/pull/260>`_.
* ``make_iterator()`` improvements for better compatibility with various
types (now uses prefix increment operator); it now also accepts iterators
with different begin/end types as long as they are equality comparable.
`#247 <https://github.com/pybind/pybind11/pull/247>`_.
* ``arg()`` now accepts a wider range of argument types for default values.
`#244 <https://github.com/pybind/pybind11/pull/244>`_.
* Support ``keep_alive`` where the nurse object may be ``None``. `#341
<https://github.com/pybind/pybind11/pull/341>`_.
* Added constructors for ``str`` and ``bytes`` from zero-terminated char
pointers, and from char pointers and length. Added constructors for ``str``
from ``bytes`` and for ``bytes`` from ``str``, which will perform UTF-8
decoding/encoding as required.
* Many other improvements of library internals without user-visible changes.
1.8.1 (July 12, 2016)
----------------------
* Fixed a rare but potentially very severe issue when the garbage collector ran
during pybind11 type creation.
1.8.0 (June 14, 2016)
----------------------
* Redesigned CMake build system which exports a convenient
``pybind11_add_module`` function to parent projects.
* ``std::vector<>`` type bindings analogous to Boost.Python's ``indexing_suite``
* Transparent conversion of sparse and dense Eigen matrices and vectors (``eigen.h``)
* Added an ``ExtraFlags`` template argument to the NumPy ``array_t<>`` wrapper
to disable an enforced cast that may lose precision, e.g. to create overloads
for different precisions and complex vs real-valued matrices.
* Prevent implicit conversion of floating point values to integral types in
function arguments
* Fixed incorrect default return value policy for functions returning a shared
pointer
* Don't allow registering a type via ``class_`` twice
* Don't allow casting a ``None`` value into a C++ lvalue reference
* Fixed a crash in ``enum_::operator==`` that was triggered by the ``help()`` command
* Improved detection of whether or not custom C++ types can be copy/move-constructed
* Extended ``str`` type to also work with ``bytes`` instances
* Added a ``"name"_a`` user defined string literal that is equivalent to ``py::arg("name")``.
* When specifying function arguments via ``py::arg``, the test that verifies
the number of arguments now runs at compile time.
* Added ``[[noreturn]]`` attribute to ``pybind11_fail()`` to quench some
compiler warnings
* List function arguments in exception text when the dispatch code cannot find
a matching overload
* Added ``PYBIND11_OVERLOAD_NAME`` and ``PYBIND11_OVERLOAD_PURE_NAME`` macros which
can be used to override virtual methods whose name differs in C++ and Python
(e.g. ``__call__`` and ``operator()``)
* Various minor ``iterator`` and ``make_iterator()`` improvements
* Transparently support ``__bool__`` on Python 2.x and Python 3.x
* Fixed issue with destructor of unpickled object not being called
* Minor CMake build system improvements on Windows
* New ``pybind11::args`` and ``pybind11::kwargs`` types to create functions which
take an arbitrary number of arguments and keyword arguments
* New syntax to call a Python function from C++ using ``*args`` and ``*kwargs``
* The functions ``def_property_*`` now correctly process docstring arguments (these
formerly caused a segmentation fault)
* Many ``mkdoc.py`` improvements (enumerations, template arguments, ``DOC()``
macro accepts more arguments)
* Cygwin support
* Documentation improvements (pickling support, ``keep_alive``, macro usage)
1.7 (April 30, 2016)
----------------------
* Added a new ``move`` return value policy that triggers C++11 move semantics.
The automatic return value policy falls back to this case whenever a rvalue
reference is encountered
* Significantly more general GIL state routines that are used instead of
Python's troublesome ``PyGILState_Ensure`` and ``PyGILState_Release`` API
* Redesign of opaque types that drastically simplifies their usage
* Extended ability to pass values of type ``[const] void *``
* ``keep_alive`` fix: don't fail when there is no patient
* ``functional.h``: acquire the GIL before calling a Python function
* Added Python RAII type wrappers ``none`` and ``iterable``
* Added ``*args`` and ``*kwargs`` pass-through parameters to
``pybind11.get_include()`` function
* Iterator improvements and fixes
* Documentation on return value policies and opaque types improved
1.6 (April 30, 2016)
----------------------
* Skipped because an upload to PyPI went wrong and could not be recovered
(https://github.com/pypa/packaging-problems/issues/74)
1.5 (April 21, 2016)
----------------------
* For polymorphic types, use RTTI to try to return the closest type registered with pybind11
* Pickling support for serializing and unserializing C++ instances to a byte stream in Python
* Added a convenience routine ``make_iterator()`` which turns a range indicated
by a pair of C++ iterators into an iterable Python object
* Added ``len()`` and a variadic ``make_tuple()`` function
* Addressed a rare issue that could confuse the current virtual function
dispatcher and another that could lead to crashes in multi-threaded
applications
* Added a ``get_include()`` function to the Python module that returns the path
of the directory containing the installed pybind11 header files
* Documentation improvements: import issues, symbol visibility, pickling, limitations
* Added casting support for ``std::reference_wrapper<>``
1.4 (April 7, 2016)
--------------------------
* Transparent type conversion for ``std::wstring`` and ``wchar_t``
* Allow passing ``nullptr``-valued strings
* Transparent passing of ``void *`` pointers using capsules
* Transparent support for returning values wrapped in ``std::unique_ptr<>``
* Improved docstring generation for compatibility with Sphinx
* Nicer debug error message when default parameter construction fails
* Support for "opaque" types that bypass the transparent conversion layer for STL containers
* Redesigned type casting interface to avoid ambiguities that could occasionally cause compiler errors
* Redesigned property implementation; fixes crashes due to an unfortunate default return value policy
* Anaconda package generation support
1.3 (March 8, 2016)
--------------------------
* Added support for the Intel C++ compiler (v15+)
* Added support for the STL unordered set/map data structures
* Added support for the STL linked list data structure
* NumPy-style broadcasting support in ``pybind11::vectorize``
* pybind11 now displays more verbose error messages when ``arg::operator=()`` fails
* pybind11 internal data structures now live in a version-dependent namespace to avoid ABI issues
* Many, many bugfixes involving corner cases and advanced usage
1.2 (February 7, 2016)
--------------------------
* Optional: efficient generation of function signatures at compile time using C++14
* Switched to a simpler and more general way of dealing with function default
arguments. Unused keyword arguments in function calls are now detected and
cause errors as expected
* New ``keep_alive`` call policy analogous to Boost.Python's ``with_custodian_and_ward``
* New ``pybind11::base<>`` attribute to indicate a subclass relationship
* Improved interface for RAII type wrappers in ``pytypes.h``
* Use RAII type wrappers consistently within pybind11 itself. This
fixes various potential refcount leaks when exceptions occur
* Added new ``bytes`` RAII type wrapper (maps to ``string`` in Python 2.7)
* Made handle and related RAII classes const correct, using them more
consistently everywhere now
* Got rid of the ugly ``__pybind11__`` attributes on the Python side---they are
now stored in a C++ hash table that is not visible in Python
* Fixed refcount leaks involving NumPy arrays and bound functions
* Vastly improved handling of shared/smart pointers
* Removed an unnecessary copy operation in ``pybind11::vectorize``
* Fixed naming clashes when both pybind11 and NumPy headers are included
* Added conversions for additional exception types
* Documentation improvements (using multiple extension modules, smart pointers,
other minor clarifications)
* Unified infrastructure for parsing variadic arguments in ``class_`` and ``cpp_function``
* Fixed license text (was: ZLIB, should have been: 3-clause BSD)
* Python 3.2 compatibility
* Fixed remaining issues when accessing types in another plugin module
* Added enum comparison and casting methods
* Improved SFINAE-based detection of whether types are copy-constructible
* Eliminated many warnings about unused variables and the use of ``offsetof()``
* Support for ``std::array<>`` conversions
1.1 (December 7, 2015)
--------------------------
* Documentation improvements (GIL, wrapping functions, casting, fixed many typos)
* Generalized conversion of integer types
* Improved support for casting function objects
* Improved support for ``std::shared_ptr<>`` conversions
* Initial support for ``std::set<>`` conversions
* Fixed type resolution issue for types defined in a separate plugin module
* Cmake build system improvements
* Factored out generic functionality to non-templated code (smaller code size)
* Added a code size / compile time benchmark vs Boost.Python
* Added an appveyor CI script
1.0 (October 15, 2015)
------------------------
* Initial release
.. _compiling:
Build systems
#############
Building with setuptools
========================
For projects on PyPI, building with setuptools is the way to go. Sylvain Corlay
has kindly provided an example project which shows how to set up everything,
including automatic generation of documentation using Sphinx. Please refer to
the [python_example]_ repository.
.. [python_example] https://github.com/pybind/python_example
Building with cppimport
========================
[cppimport]_ is a small Python import hook that determines whether there is a C++
source file whose name matches the requested module. If there is, the file is
compiled as a Python extension using pybind11 and placed in the same folder as
the C++ source file. Python is then able to find the module and load it.
.. [cppimport] https://github.com/tbenthompson/cppimport
.. _cmake:
Building with CMake
===================
For C++ codebases that have an existing CMake-based build system, a Python
extension module can be created with just a few lines of code:
.. code-block:: cmake
cmake_minimum_required(VERSION 2.8.12)
project(example)
add_subdirectory(pybind11)
pybind11_add_module(example example.cpp)
This assumes that the pybind11 repository is located in a subdirectory named
:file:`pybind11` and that the code is located in a file named :file:`example.cpp`.
The CMake command ``add_subdirectory`` will import the pybind11 project which
provides the ``pybind11_add_module`` function. It will take care of all the
details needed to build a Python extension module on any platform.
A working sample project, including a way to invoke CMake from :file:`setup.py` for
PyPI integration, can be found in the [cmake_example]_ repository.
.. [cmake_example] https://github.com/pybind/cmake_example
pybind11_add_module
-------------------
To ease the creation of Python extension modules, pybind11 provides a CMake
function with the following signature:
.. code-block:: cmake
pybind11_add_module(<name> [MODULE | SHARED] [EXCLUDE_FROM_ALL]
[NO_EXTRAS] [SYSTEM] [THIN_LTO] source1 [source2 ...])
This function behaves very much like CMake's builtin ``add_library`` (in fact,
it's a wrapper function around that command). It will add a library target
called ``<name>`` to be built from the listed source files. In addition, it
will take care of all the Python-specific compiler and linker flags as well
as the OS- and Python-version-specific file extension. The produced target
``<name>`` can be further manipulated with regular CMake commands.
``MODULE`` or ``SHARED`` may be given to specify the type of library. If no
type is given, ``MODULE`` is used by default which ensures the creation of a
Python-exclusive module. Specifying ``SHARED`` will create a more traditional
dynamic library which can also be linked from elsewhere. ``EXCLUDE_FROM_ALL``
removes this target from the default build (see CMake docs for details).
Since pybind11 is a template library, ``pybind11_add_module`` adds compiler
flags to ensure high quality code generation without bloat arising from long
symbol names and duplication of code in different translation units. It
sets default visibility to *hidden*, which is required for some pybind11
features and functionality when attempting to load multiple pybind11 modules
compiled under different pybind11 versions. It also adds additional flags
enabling LTO (Link Time Optimization) and stripping unneeded symbols. See the
:ref:`FAQ entry <faq:symhidden>` for a more detailed explanation. These
latter optimizations are never applied in ``Debug`` mode. If ``NO_EXTRAS`` is
given, they will always be disabled, even in ``Release`` mode. However, this
will result in code bloat and is generally not recommended.
By default, pybind11 and Python headers will be included with ``-I``. In order
to include pybind11 as a system library, e.g. to avoid warnings in downstream
code with warn-levels outside of pybind11's scope, set the option ``SYSTEM``.
As stated above, LTO is enabled by default. Some newer compilers also support
different flavors of LTO such as `ThinLTO`_. Setting ``THIN_LTO`` will cause
the function to prefer this flavor if available. The function falls back to
regular LTO if ``-flto=thin`` is not available.
.. _ThinLTO: http://clang.llvm.org/docs/ThinLTO.html
Configuration variables
-----------------------
By default, pybind11 will compile modules with the C++14 standard, if available
on the target compiler, falling back to C++11 if C++14 support is not
available. Note, however, that this default is subject to change: future
pybind11 releases are expected to migrate to newer C++ standards as they become
available. To override this, the standard flag can be given explicitly in
``PYBIND11_CPP_STANDARD``:
.. code-block:: cmake
# Use just one of these:
# GCC/clang:
set(PYBIND11_CPP_STANDARD -std=c++11)
set(PYBIND11_CPP_STANDARD -std=c++14)
set(PYBIND11_CPP_STANDARD -std=c++1z) # Experimental C++17 support
# MSVC:
set(PYBIND11_CPP_STANDARD /std:c++14)
set(PYBIND11_CPP_STANDARD /std:c++latest) # Enables some MSVC C++17 features
add_subdirectory(pybind11) # or find_package(pybind11)
Note that this and all other configuration variables must be set **before** the
call to ``add_subdirectory`` or ``find_package``. The variables can also be set
when calling CMake from the command line using the ``-D<variable>=<value>`` flag.
The target Python version can be selected by setting ``PYBIND11_PYTHON_VERSION``
or an exact Python installation can be specified with ``PYTHON_EXECUTABLE``.
For example:
.. code-block:: bash
cmake -DPYBIND11_PYTHON_VERSION=3.6 ..
# or
cmake -DPYTHON_EXECUTABLE=path/to/python ..
find_package vs. add_subdirectory
---------------------------------
For CMake-based projects that don't include the pybind11 repository internally,
an external installation can be detected through ``find_package(pybind11)``.
See the `Config file`_ docstring for details of relevant CMake variables.
.. code-block:: cmake
cmake_minimum_required(VERSION 2.8.12)
project(example)
find_package(pybind11 REQUIRED)
pybind11_add_module(example example.cpp)
Note that ``find_package(pybind11)`` will only work correctly if pybind11
has been correctly installed on the system, e.g. after downloading or cloning
the pybind11 repository:
.. code-block:: bash
cd pybind11
mkdir build
cd build
cmake ..
make install
Once detected, the aforementioned ``pybind11_add_module`` can be employed as
before. The function usage and configuration variables are identical no matter
if pybind11 is added as a subdirectory or found as an installed package. You
can refer to the same [cmake_example]_ repository for a full sample project
-- just swap out ``add_subdirectory`` for ``find_package``.
.. _Config file: https://github.com/pybind/pybind11/blob/master/tools/pybind11Config.cmake.in
Advanced: interface library target
----------------------------------
When using a version of CMake greater than 3.0, pybind11 can additionally
be used as a special *interface library*. The target ``pybind11::module``
is available with pybind11 headers, Python headers and libraries as needed,
and C++ compile definitions attached. This target is suitable for linking
to an independently constructed (through ``add_library``, not
``pybind11_add_module``) target in the consuming project.
.. code-block:: cmake
cmake_minimum_required(VERSION 3.0)
project(example)
find_package(pybind11 REQUIRED) # or add_subdirectory(pybind11)
add_library(example MODULE main.cpp)
target_link_libraries(example PRIVATE pybind11::module)
set_target_properties(example PROPERTIES PREFIX "${PYTHON_MODULE_PREFIX}"
SUFFIX "${PYTHON_MODULE_EXTENSION}")
.. warning::
Since pybind11 is a metatemplate library, it is crucial that certain
compiler flags are provided to ensure high quality code generation. In
contrast to the ``pybind11_add_module()`` command, the CMake interface
library only provides the *minimal* set of parameters to ensure that the
code using pybind11 compiles, but it does **not** pass these extra compiler
flags (i.e. this is up to you).
These include Link Time Optimization (``-flto`` on GCC/Clang/ICPC, ``/GL``
and ``/LTCG`` on Visual Studio) and .OBJ files with many sections on Visual
Studio (``/bigobj``). The :ref:`FAQ <faq:symhidden>` contains an
explanation on why these are needed.
Embedding the Python interpreter
--------------------------------
In addition to extension modules, pybind11 also supports embedding Python into
a C++ executable or library. In CMake, simply link with the ``pybind11::embed``
target. It provides everything needed to get the interpreter running. The Python
headers and libraries are attached to the target. Unlike ``pybind11::module``,
there is no need to manually set any additional properties here. For more
information about usage in C++, see :doc:`/advanced/embedding`.
.. code-block:: cmake
cmake_minimum_required(VERSION 3.0)
project(example)
find_package(pybind11 REQUIRED) # or add_subdirectory(pybind11)
add_executable(example main.cpp)
target_link_libraries(example PRIVATE pybind11::embed)
.. _building_manually:
Building manually
=================
pybind11 is a header-only library, hence it is not necessary to link against
any special libraries and there are no intermediate (magic) translation steps.
On Linux, you can compile an example such as the one given in
:ref:`simple_example` using the following command:
.. code-block:: bash
$ c++ -O3 -Wall -shared -std=c++11 -fPIC `python3 -m pybind11 --includes` example.cpp -o example`python3-config --extension-suffix`
The flags given here assume that you're using Python 3. For Python 2, just
change the executable appropriately (to ``python`` or ``python2``).
The ``python3 -m pybind11 --includes`` command fetches the include paths for
both pybind11 and Python headers. This assumes that pybind11 has been installed
using ``pip`` or ``conda``. If it hasn't, you can also manually specify
``-I <path-to-pybind11>/include`` together with the Python includes path
``python3-config --includes``.
Note that Python 2.7 modules don't use a special suffix, so you should simply
use ``example.so`` instead of ``example`python3-config --extension-suffix```.
Also note that the ``--extension-suffix`` option may or may not be available,
depending on the distribution; if it is not, the module extension can simply be
set to ``.so``.
On Mac OS: the build command is almost the same but it also requires passing
the ``-undefined dynamic_lookup`` flag so as to ignore missing symbols when
building the module:
.. code-block:: bash
$ c++ -O3 -Wall -shared -std=c++11 -undefined dynamic_lookup `python3 -m pybind11 --includes` example.cpp -o example`python3-config --extension-suffix`
In general, it is advisable to include several additional build parameters
that can considerably reduce the size of the created binary. Refer to section
:ref:`cmake` for a detailed example of a suitable cross-platform CMake-based
build system that works on all platforms including Windows.
.. note::
On Linux and macOS, it's better to (intentionally) not link against
``libpython``. The symbols will be resolved when the extension library
is loaded into a Python binary. This is preferable because you might
have several different installations of a given Python version (e.g. the
system-provided Python, and one that ships with a piece of commercial
software). In this way, the plugin will work with both versions, instead
of possibly importing a second Python library into a process that already
contains one (which will lead to a segfault).
Generating binding code automatically
=====================================
The ``Binder`` project is a tool for automatic generation of pybind11 binding
code by introspecting existing C++ codebases using LLVM/Clang. See the
[binder]_ documentation for details.
.. [binder] http://cppbinder.readthedocs.io/en/latest/about.html
Benchmark
=========
The following is the result of a synthetic benchmark comparing both compilation
time and module size of pybind11 against Boost.Python. A detailed report about a
Boost.Python to pybind11 conversion of a real project is available here: [#f1]_.
.. [#f1] http://graylab.jhu.edu/RosettaCon2016/PyRosetta-4.pdf
Setup
-----
A python script (see the ``docs/benchmark.py`` file) was used to generate a set
of files with dummy classes whose count increases for each successive benchmark
(between 1 and 2048 classes in powers of two). Each class has four methods with
randomly generated signatures, each with a return value and four arguments. (There
was no particular reason for this setup other than the desire to generate many
unique function signatures whose count could be controlled in a simple way.)
Here is an example of the binding code for one class:
.. code-block:: cpp
...
class cl034 {
public:
cl279 *fn_000(cl084 *, cl057 *, cl065 *, cl042 *);
cl025 *fn_001(cl098 *, cl262 *, cl414 *, cl121 *);
cl085 *fn_002(cl445 *, cl297 *, cl145 *, cl421 *);
cl470 *fn_003(cl200 *, cl323 *, cl332 *, cl492 *);
};
...
PYBIND11_MODULE(example, m) {
...
py::class_<cl034>(m, "cl034")
.def("fn_000", &cl034::fn_000)
.def("fn_001", &cl034::fn_001)
.def("fn_002", &cl034::fn_002)
.def("fn_003", &cl034::fn_003)
...
}
The Boost.Python version looks almost identical except that a return value
policy had to be specified as an argument to ``def()``. For both libraries,
compilation was done with
.. code-block:: bash
Apple LLVM version 7.0.2 (clang-700.1.81)
and the following compilation flags
.. code-block:: bash
g++ -Os -shared -rdynamic -undefined dynamic_lookup -fvisibility=hidden -std=c++14
Compilation time
----------------
The following log-log plot shows how the compilation time grows for an
increasing number of class and function declarations. pybind11 includes many
fewer headers, which initially leads to shorter compilation times, but the
performance is ultimately fairly similar (pybind11 is 19.8 seconds faster for
the largest file with 2048 classes and a total of 8192 methods -- a
modest **1.2x** speedup relative to Boost.Python, which required 116.35
seconds).
.. only:: not latex
.. image:: pybind11_vs_boost_python1.svg
.. only:: latex
.. image:: pybind11_vs_boost_python1.png
Module size
-----------
Differences between the two libraries become much more pronounced when
considering the file size of the generated Python plugin: for the largest file,
the binary generated by Boost.Python required 16.8 MiB, which was **2.17
times** / **9.1 megabytes** larger than the output generated by pybind11. For
very small inputs, Boost.Python has an edge in the plot below -- however, note
that it stores many definitions in an external library, whose size was not
included here, hence the comparison is slightly shifted in Boost.Python's
favor.
.. only:: not latex
.. image:: pybind11_vs_boost_python2.svg
.. only:: latex
.. image:: pybind11_vs_boost_python2.png
.. _reference:
.. warning::
Please be advised that the reference documentation discussing pybind11
internals is currently incomplete. Please refer to the previous sections
and the pybind11 header files for the nitty gritty details.
Reference
#########
.. _macros:
Macros
======
.. doxygendefine:: PYBIND11_MODULE
.. _core_types:
Convenience classes for arbitrary Python types
==============================================
Common member functions
-----------------------
.. doxygenclass:: object_api
:members:
Without reference counting
--------------------------
.. doxygenclass:: handle
:members:
With reference counting
-----------------------
.. doxygenclass:: object
:members:
.. doxygenfunction:: reinterpret_borrow
.. doxygenfunction:: reinterpret_steal
Convenience classes for specific Python types
=============================================
.. doxygenclass:: module
:members:
.. doxygengroup:: pytypes
:members:
.. _extras:
Passing extra arguments to ``def`` or ``class_``
================================================
.. doxygengroup:: annotations
:members:
Embedding the interpreter
=========================
.. doxygendefine:: PYBIND11_EMBEDDED_MODULE
.. doxygenfunction:: initialize_interpreter
.. doxygenfunction:: finalize_interpreter
.. doxygenclass:: scoped_interpreter
Redirecting C++ streams
=======================
.. doxygenclass:: scoped_ostream_redirect
.. doxygenclass:: scoped_estream_redirect
.. doxygenfunction:: add_ostream_redirect
Python built-in functions
=========================
.. doxygengroup:: python_builtins
:members:
Inheritance
===========
See :doc:`/classes` and :doc:`/advanced/classes` for more detail.
.. doxygendefine:: PYBIND11_OVERLOAD
.. doxygendefine:: PYBIND11_OVERLOAD_PURE
.. doxygendefine:: PYBIND11_OVERLOAD_NAME
.. doxygendefine:: PYBIND11_OVERLOAD_PURE_NAME
.. doxygenfunction:: get_overload
Exceptions
==========
.. doxygenclass:: error_already_set
:members:
.. doxygenclass:: builtin_exception
:members:
Literals
========
.. doxygennamespace:: literals
import random
import os
import time
import datetime as dt
nfns = 4 # Functions per class
nargs = 4 # Arguments per function
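# Emit a self-contained pybind11 module ("example") declaring `nclasses` dummy
# classes, each with `nfns` methods that take `nargs` pointer arguments and
# return a pointer to a randomly chosen class.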
def generate_dummy_code_pybind11(nclasses=10):
decl = ""
bindings = ""
for cl in range(nclasses):
decl += "class cl%03i;\n" % cl
decl += '\n'
for cl in range(nclasses):
decl += "class cl%03i {\n" % cl
decl += "public:\n"
bindings += ' py::class_<cl%03i>(m, "cl%03i")\n' % (cl, cl)
for fn in range(nfns):
ret = random.randint(0, nclasses - 1)
params = [random.randint(0, nclasses - 1) for i in range(nargs)]
decl += " cl%03i *fn_%03i(" % (ret, fn)
decl += ", ".join("cl%03i *" % p for p in params)
decl += ");\n"
bindings += ' .def("fn_%03i", &cl%03i::fn_%03i)\n' % \
(fn, cl, fn)
decl += "};\n\n"
bindings += ' ;\n'
result = "#include <pybind11/pybind11.h>\n\n"
result += "namespace py = pybind11;\n\n"
result += decl + '\n'
result += "PYBIND11_MODULE(example, m) {\n"
result += bindings
result += "}"
return result
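# Same idea as above, but emitting Boost.Python binding code (which additionally
# needs an explicit return value policy, as noted in the benchmark description).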
def generate_dummy_code_boost(nclasses=10):
decl = ""
bindings = ""
for cl in range(nclasses):
decl += "class cl%03i;\n" % cl
decl += '\n'
for cl in range(nclasses):
decl += "class cl%03i {\n" % cl
decl += "public:\n"
bindings += ' py::class_<cl%03i>("cl%03i")\n' % (cl, cl)
for fn in range(nfns):
ret = random.randint(0, nclasses - 1)
params = [random.randint(0, nclasses - 1) for i in range(nargs)]
decl += " cl%03i *fn_%03i(" % (ret, fn)
decl += ", ".join("cl%03i *" % p for p in params)
decl += ");\n"
bindings += ' .def("fn_%03i", &cl%03i::fn_%03i, py::return_value_policy<py::manage_new_object>())\n' % \
(fn, cl, fn)
decl += "};\n\n"
bindings += ' ;\n'
result = "#include <boost/python.hpp>\n\n"
result += "namespace py = boost::python;\n\n"
result += decl + '\n'
result += "BOOST_PYTHON_MODULE(example) {\n"
result += bindings
result += "}"
return result
for codegen in [generate_dummy_code_pybind11, generate_dummy_code_boost]:
print ("{")
for i in range(0, 10):
nclasses = 2 ** i
with open("test.cpp", "w") as f:
f.write(codegen(nclasses))
n1 = dt.datetime.now()
os.system("g++ -Os -shared -rdynamic -undefined dynamic_lookup "
"-fvisibility=hidden -std=c++14 test.cpp -I include "
"-I /System/Library/Frameworks/Python.framework/Headers -o test.so")
n2 = dt.datetime.now()
elapsed = (n2 - n1).total_seconds()
size = os.stat('test.so').st_size
print(" {%i, %f, %i}," % (nclasses * nfns, elapsed, size))
print ("}") | ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/docs/benchmark.py | benchmark.py |
.. _basics:
First steps
###########
This section demonstrates the basic features of pybind11. Before getting
started, make sure that your development environment is set up to compile the
included set of test cases.
Compiling the test cases
========================
Linux/MacOS
-----------
On Linux you'll need to install the **python-dev** or **python3-dev** packages as
well as **cmake**. On Mac OS, the included python version works out of the box,
but **cmake** must still be installed.
After installing the prerequisites, run
.. code-block:: bash
mkdir build
cd build
cmake ..
make check -j 4
The last line will both compile and run the tests.
Windows
-------
On Windows, only **Visual Studio 2015** and newer are supported since pybind11 relies
on various C++11 language features that break older versions of Visual Studio.
To compile and run the tests:
.. code-block:: batch
mkdir build
cd build
cmake ..
cmake --build . --config Release --target check
This will create a Visual Studio project, compile and run the target, all from the
command line.
.. Note::
If all tests fail, make sure that the Python binary and the testcases are compiled
for the same processor type and bitness (i.e. either **i386** or **x86_64**). You
can specify **x86_64** as the target architecture for the generated Visual Studio
project using ``cmake -A x64 ..``.
.. seealso::
Advanced users who are already familiar with Boost.Python may want to skip
the tutorial and look at the test cases in the :file:`tests` directory,
which exercise all features of pybind11.
Header and namespace conventions
================================
For brevity, all code examples assume that the following two lines are present:
.. code-block:: cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
Some features may require additional headers, but those will be specified as needed.
.. _simple_example:
Creating bindings for a simple function
=======================================
Let's start by creating Python bindings for an extremely simple function, which
adds two numbers and returns their result:
.. code-block:: cpp
int add(int i, int j) {
return i + j;
}
For simplicity [#f1]_, we'll put both this function and the binding code into
a file named :file:`example.cpp` with the following contents:
.. code-block:: cpp
#include <pybind11/pybind11.h>
int add(int i, int j) {
return i + j;
}
PYBIND11_MODULE(example, m) {
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function which adds two numbers");
}
.. [#f1] In practice, implementation and binding code will generally be located
in separate files.
The :func:`PYBIND11_MODULE` macro creates a function that will be called when an
``import`` statement is issued from within Python. The module name (``example``)
is given as the first macro argument (it should not be in quotes). The second
argument (``m``) defines a variable of type :class:`py::module <module>` which
is the main interface for creating bindings. The method :func:`module::def`
generates binding code that exposes the ``add()`` function to Python.
.. note::
Notice how little code was needed to expose our function to Python: all
details regarding the function's parameters and return value were
automatically inferred using template metaprogramming. This overall
approach and the used syntax are borrowed from Boost.Python, though the
underlying implementation is very different.
pybind11 is a header-only library, hence it is not necessary to link against
any special libraries and there are no intermediate (magic) translation steps.
On Linux, the above example can be compiled using the following command:
.. code-block:: bash
$ c++ -O3 -Wall -shared -std=c++11 -fPIC `python3 -m pybind11 --includes` example.cpp -o example`python3-config --extension-suffix`
For more details on the required compiler flags on Linux and MacOS, see
:ref:`building_manually`. For complete cross-platform compilation instructions,
refer to the :ref:`compiling` page.
The `python_example`_ and `cmake_example`_ repositories are also a good place
to start. They are both complete project examples with cross-platform build
systems. The only difference between the two is that `python_example`_ uses
Python's ``setuptools`` to build the module, while `cmake_example`_ uses CMake
(which may be preferable for existing C++ projects).
.. _python_example: https://github.com/pybind/python_example
.. _cmake_example: https://github.com/pybind/cmake_example
Building the above C++ code will produce a binary module file that can be
imported to Python. Assuming that the compiled module is located in the
current directory, the following interactive Python session shows how to
load and execute the example:
.. code-block:: pycon
$ python
Python 2.7.10 (default, Aug 22 2015, 20:33:39)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import example
>>> example.add(1, 2)
3L
>>>
.. _keyword_args:
Keyword arguments
=================
With a simple code modification, it is possible to inform Python about the
names of the arguments ("i" and "j" in this case).
.. code-block:: cpp
m.def("add", &add, "A function which adds two numbers",
py::arg("i"), py::arg("j"));
:class:`arg` is one of several special tag classes which can be used to pass
metadata into :func:`module::def`. With this modified binding code, we can now
call the function using keyword arguments, which is a more readable alternative
particularly for functions taking many parameters:
.. code-block:: pycon
>>> import example
>>> example.add(i=1, j=2)
3L
The keyword names also appear in the function signatures within the documentation.
.. code-block:: pycon
>>> help(example)
....
FUNCTIONS
add(...)
Signature : (i: int, j: int) -> int
A function which adds two numbers
A shorter notation for named arguments is also available:
.. code-block:: cpp
// regular notation
m.def("add1", &add, py::arg("i"), py::arg("j"));
// shorthand
using namespace pybind11::literals;
m.def("add2", &add, "i"_a, "j"_a);
The :var:`_a` suffix forms a C++11 literal which is equivalent to :class:`arg`.
Note that the literal operator must first be made visible with the directive
``using namespace pybind11::literals``. This does not bring in anything else
from the ``pybind11`` namespace except for literals.
.. _default_args:
Default arguments
=================
Suppose now that the function to be bound has default arguments, e.g.:
.. code-block:: cpp
int add(int i = 1, int j = 2) {
return i + j;
}
Unfortunately, pybind11 cannot automatically extract these parameters, since they
are not part of the function's type information. However, they are simple to specify
using an extension of :class:`arg`:
.. code-block:: cpp
m.def("add", &add, "A function which adds two numbers",
py::arg("i") = 1, py::arg("j") = 2);
The default values also appear within the documentation.
.. code-block:: pycon
>>> help(example)
....
FUNCTIONS
add(...)
Signature : (i: int = 1, j: int = 2) -> int
A function which adds two numbers
The shorthand notation is also available for default arguments:
.. code-block:: cpp
// regular notation
m.def("add1", &add, py::arg("i") = 1, py::arg("j") = 2);
// shorthand
m.def("add2", &add, "i"_a=1, "j"_a=2);
Exporting variables
===================
To expose a value from C++, use the ``attr`` function to register it in a
module as shown below. Built-in types and general objects (more on that later)
are automatically converted when assigned as attributes, and can be explicitly
converted using the function ``py::cast``.
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
m.attr("the_answer") = 42;
py::object world = py::cast("World");
m.attr("what") = world;
}
These are then accessible from Python:
.. code-block:: pycon
>>> import example
>>> example.the_answer
42
>>> example.what
'World'
.. _supported_types:
Supported data types
====================
A large number of data types are supported out of the box and can be used
seamlessly as function arguments, return values or with ``py::cast`` in general.
For a full overview, see the :doc:`advanced/cast/index` section.
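As a quick, illustrative sketch (not part of the example module above), including
the optional ``pybind11/stl.h`` header lets standard containers convert
automatically between C++ and Python:

.. code-block:: cpp

    #include <pybind11/stl.h>  // enables automatic std::vector <-> Python list conversion

    // Accepts a Python list of numbers and returns their sum
    m.def("sum_values", [](const std::vector<double> &values) {
        double total = 0;
        for (double v : values)
            total += v;
        return total;
    });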
.. _classes:
Object-oriented code
####################
Creating bindings for a custom type
===================================
Let's now look at a more complex example where we'll create bindings for a
custom C++ data structure named ``Pet``. Its definition is given below:
.. code-block:: cpp
struct Pet {
Pet(const std::string &name) : name(name) { }
void setName(const std::string &name_) { name = name_; }
const std::string &getName() const { return name; }
std::string name;
};
The binding code for ``Pet`` looks as follows:
.. code-block:: cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
PYBIND11_MODULE(example, m) {
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &>())
.def("setName", &Pet::setName)
.def("getName", &Pet::getName);
}
:class:`class_` creates bindings for a C++ *class* or *struct*-style data
structure. :func:`init` is a convenience function that takes the types of a
constructor's parameters as template arguments and wraps the corresponding
constructor (see the :ref:`custom_constructors` section for details). An
interactive Python session demonstrating this example is shown below:
.. code-block:: pycon
% python
>>> import example
>>> p = example.Pet('Molly')
>>> print(p)
<example.Pet object at 0x10cd98060>
>>> p.getName()
u'Molly'
>>> p.setName('Charly')
>>> p.getName()
u'Charly'
.. seealso::
Static member functions can be bound in the same way using
:func:`class_::def_static`.
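For illustration, suppose ``Pet`` had a static helper function (hypothetical; it
is not part of the class defined above). It could then be exposed as follows:

.. code-block:: cpp

    struct Pet {
        // ... as before ...
        static std::string species() { return "unspecified"; }  // hypothetical static member
    };

    py::class_<Pet>(m, "Pet")
        .def(py::init<const std::string &>())
        .def_static("species", &Pet::species);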
Keyword and default arguments
=============================
It is possible to specify keyword and default arguments using the syntax
discussed in the previous chapter. Refer to the sections :ref:`keyword_args`
and :ref:`default_args` for details.
Binding lambda functions
========================
Note how ``print(p)`` produced a rather useless summary of our data structure in the example above:
.. code-block:: pycon
>>> print(p)
<example.Pet object at 0x10cd98060>
To address this, we could bind a utility function that returns a human-readable
summary to the special method slot named ``__repr__``. Unfortunately, there is no
suitable functionality in the ``Pet`` data structure, and it would be nice if
we did not have to change it. This can easily be accomplished by binding a
lambda function instead:
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &>())
.def("setName", &Pet::setName)
.def("getName", &Pet::getName)
.def("__repr__",
[](const Pet &a) {
return "<example.Pet named '" + a.name + "'>";
}
);
Both stateless [#f1]_ and stateful lambda closures are supported by pybind11.
With the above change, the same Python code now produces the following output:
.. code-block:: pycon
>>> print(p)
<example.Pet named 'Molly'>
.. [#f1] Stateless closures are those with an empty pair of brackets ``[]`` as the capture object.
.. _properties:
Instance and static fields
==========================
We can also directly expose the ``name`` field using the
:func:`class_::def_readwrite` method. A similar :func:`class_::def_readonly`
method also exists for ``const`` fields.
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &>())
.def_readwrite("name", &Pet::name)
// ... remainder ...
This makes it possible to write
.. code-block:: pycon
>>> p = example.Pet('Molly')
>>> p.name
u'Molly'
>>> p.name = 'Charly'
>>> p.name
u'Charly'
Now suppose that ``Pet::name`` was a private internal variable
that can only be accessed via setters and getters.
.. code-block:: cpp
class Pet {
public:
Pet(const std::string &name) : name(name) { }
void setName(const std::string &name_) { name = name_; }
const std::string &getName() const { return name; }
private:
std::string name;
};
In this case, the method :func:`class_::def_property`
(:func:`class_::def_property_readonly` for read-only data) can be used to
provide a field-like interface within Python that will transparently call
the setter and getter functions:
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &>())
.def_property("name", &Pet::getName, &Pet::setName)
// ... remainder ...
Write-only properties can be defined by passing ``nullptr`` as the
input for the read function.
.. seealso::
Similar functions :func:`class_::def_readwrite_static`,
:func:`class_::def_readonly_static`, :func:`class_::def_property_static`,
and :func:`class_::def_property_readonly_static` are provided for binding
static variables and properties. Please also see the section on
:ref:`static_properties` in the advanced part of the documentation.
Dynamic attributes
==================
Native Python classes can pick up new attributes dynamically:
.. code-block:: pycon
>>> class Pet:
... name = 'Molly'
...
>>> p = Pet()
>>> p.name = 'Charly' # overwrite existing
>>> p.age = 2 # dynamically add a new attribute
By default, classes exported from C++ do not support this and the only writable
attributes are the ones explicitly defined using :func:`class_::def_readwrite`
or :func:`class_::def_property`.
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def(py::init<>())
.def_readwrite("name", &Pet::name);
Trying to set any other attribute results in an error:
.. code-block:: pycon
>>> p = example.Pet()
>>> p.name = 'Charly' # OK, attribute defined in C++
>>> p.age = 2 # fail
AttributeError: 'Pet' object has no attribute 'age'
To enable dynamic attributes for C++ classes, the :class:`py::dynamic_attr` tag
must be added to the :class:`py::class_` constructor:
.. code-block:: cpp
py::class_<Pet>(m, "Pet", py::dynamic_attr())
.def(py::init<>())
.def_readwrite("name", &Pet::name);
Now everything works as expected:
.. code-block:: pycon
>>> p = example.Pet()
>>> p.name = 'Charly' # OK, overwrite value in C++
>>> p.age = 2 # OK, dynamically add a new attribute
>>> p.__dict__ # just like a native Python class
{'age': 2}
Note that there is a small runtime cost for a class with dynamic attributes.
Not only because of the addition of a ``__dict__``, but also because of more
expensive garbage collection tracking which must be activated to resolve
possible circular references. Native Python classes incur this same cost by
default, so this is not anything to worry about. By default, pybind11 classes
are more efficient than native Python classes. Enabling dynamic attributes
just brings them on par.
.. _inheritance:
Inheritance and automatic downcasting
=====================================
Suppose now that the example consists of two data structures with an
inheritance relationship:
.. code-block:: cpp
struct Pet {
Pet(const std::string &name) : name(name) { }
std::string name;
};
struct Dog : Pet {
Dog(const std::string &name) : Pet(name) { }
std::string bark() const { return "woof!"; }
};
There are two different ways of indicating a hierarchical relationship to
pybind11: the first specifies the C++ base class as an extra template
parameter of the :class:`class_`:
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &>())
.def_readwrite("name", &Pet::name);
// Method 1: template parameter:
py::class_<Dog, Pet /* <- specify C++ parent type */>(m, "Dog")
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Alternatively, we can also assign a name to the previously bound ``Pet``
:class:`class_` object and reference it when binding the ``Dog`` class:
.. code-block:: cpp
py::class_<Pet> pet(m, "Pet");
pet.def(py::init<const std::string &>())
.def_readwrite("name", &Pet::name);
// Method 2: pass parent class_ object:
py::class_<Dog>(m, "Dog", pet /* <- specify Python parent type */)
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Functionality-wise, both approaches are equivalent. Afterwards, instances will
expose fields and methods of both types:
.. code-block:: pycon
>>> p = example.Dog('Molly')
>>> p.name
u'Molly'
>>> p.bark()
u'woof!'
The C++ classes defined above are regular non-polymorphic types with an
inheritance relationship. This is reflected in Python:
.. code-block:: cpp
// Return a base pointer to a derived instance
m.def("pet_store", []() { return std::unique_ptr<Pet>(new Dog("Molly")); });
.. code-block:: pycon
>>> p = example.pet_store()
>>> type(p) # `Dog` instance behind `Pet` pointer
Pet # no pointer downcasting for regular non-polymorphic types
>>> p.bark()
AttributeError: 'Pet' object has no attribute 'bark'
The function returned a ``Dog`` instance, but because it's a non-polymorphic
type behind a base pointer, Python only sees a ``Pet``. In C++, a type is only
considered polymorphic if it has at least one virtual function and pybind11
will automatically recognize this:
.. code-block:: cpp
struct PolymorphicPet {
virtual ~PolymorphicPet() = default;
};
struct PolymorphicDog : PolymorphicPet {
std::string bark() const { return "woof!"; }
};
// Same binding code
py::class_<PolymorphicPet>(m, "PolymorphicPet");
py::class_<PolymorphicDog, PolymorphicPet>(m, "PolymorphicDog")
.def(py::init<>())
.def("bark", &PolymorphicDog::bark);
// Again, return a base pointer to a derived instance
m.def("pet_store2", []() { return std::unique_ptr<PolymorphicPet>(new PolymorphicDog); });
.. code-block:: pycon
>>> p = example.pet_store2()
>>> type(p)
PolymorphicDog # automatically downcast
>>> p.bark()
u'woof!'
Given a pointer to a polymorphic base, pybind11 performs automatic downcasting
to the actual derived type. Note that this goes beyond the usual situation in
C++: we don't just get access to the virtual functions of the base, we get the
concrete derived type including functions and attributes that the base type may
not even be aware of.
.. seealso::
For more information about polymorphic behavior see :ref:`overriding_virtuals`.
Overloaded methods
==================
Sometimes there are several overloaded C++ methods with the same name taking
different kinds of input arguments:
.. code-block:: cpp
struct Pet {
Pet(const std::string &name, int age) : name(name), age(age) { }
void set(int age_) { age = age_; }
void set(const std::string &name_) { name = name_; }
std::string name;
int age;
};
Attempting to bind ``Pet::set`` will cause an error since the compiler does not
know which method the user intended to select. We can disambiguate by casting
them to function pointers. Binding multiple functions to the same Python name
automatically creates a chain of function overloads that will be tried in
sequence.
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def(py::init<const std::string &, int>())
.def("set", (void (Pet::*)(int)) &Pet::set, "Set the pet's age")
.def("set", (void (Pet::*)(const std::string &)) &Pet::set, "Set the pet's name");
The overload signatures are also visible in the method's docstring:
.. code-block:: pycon
>>> help(example.Pet)
class Pet(__builtin__.object)
| Methods defined here:
|
| __init__(...)
| Signature : (Pet, str, int) -> NoneType
|
| set(...)
| 1. Signature : (Pet, int) -> NoneType
|
| Set the pet's age
|
| 2. Signature : (Pet, str) -> NoneType
|
| Set the pet's name
If you have a C++14 compatible compiler [#cpp14]_, you can use an alternative
syntax to cast the overloaded function:
.. code-block:: cpp
py::class_<Pet>(m, "Pet")
.def("set", py::overload_cast<int>(&Pet::set), "Set the pet's age")
.def("set", py::overload_cast<const std::string &>(&Pet::set), "Set the pet's name");
Here, ``py::overload_cast`` only requires the parameter types to be specified.
The return type and class are deduced. This avoids the additional noise of
``void (Pet::*)()`` as seen in the raw cast. If a function is overloaded based
on constness, the ``py::const_`` tag should be used:
.. code-block:: cpp
struct Widget {
int foo(int x, float y);
int foo(int x, float y) const;
};
py::class_<Widget>(m, "Widget")
.def("foo_mutable", py::overload_cast<int, float>(&Widget::foo))
.def("foo_const", py::overload_cast<int, float>(&Widget::foo, py::const_));
If you prefer the ``py::overload_cast`` syntax but have a C++11 compatible compiler only,
you can use ``py::detail::overload_cast_impl`` with an additional set of parentheses:
.. code-block:: cpp
template <typename... Args>
using overload_cast_ = pybind11::detail::overload_cast_impl<Args...>;
py::class_<Pet>(m, "Pet")
.def("set", overload_cast_<int>()(&Pet::set), "Set the pet's age")
.def("set", overload_cast_<const std::string &>()(&Pet::set), "Set the pet's name");
.. [#cpp14] A compiler which supports the ``-std=c++14`` flag
or Visual Studio 2015 Update 2 and newer.
.. note::
To define multiple overloaded constructors, simply declare one after the
other using the ``.def(py::init<...>())`` syntax. The existing machinery
for specifying keyword and default arguments also works.
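A brief sketch, reusing the ``Pet`` class from the overload example above (the
single-argument constructor is hypothetical):

.. code-block:: cpp

    py::class_<Pet>(m, "Pet")
        .def(py::init<const std::string &, int>())   // Pet(name, age)
        .def(py::init<const std::string &>());       // hypothetical Pet(name) overload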
Enumerations and internal types
===============================
Let's now suppose that the example class contains an internal enumeration type,
e.g.:
.. code-block:: cpp
struct Pet {
enum Kind {
Dog = 0,
Cat
};
Pet(const std::string &name, Kind type) : name(name), type(type) { }
std::string name;
Kind type;
};
The binding code for this example looks as follows:
.. code-block:: cpp
py::class_<Pet> pet(m, "Pet");
pet.def(py::init<const std::string &, Pet::Kind>())
.def_readwrite("name", &Pet::name)
.def_readwrite("type", &Pet::type);
py::enum_<Pet::Kind>(pet, "Kind")
.value("Dog", Pet::Kind::Dog)
.value("Cat", Pet::Kind::Cat)
.export_values();
To ensure that the ``Kind`` type is created within the scope of ``Pet``, the
``pet`` :class:`class_` instance must be supplied to the :class:`enum_`
constructor. The :func:`enum_::export_values` function exports the enum entries
into the parent scope, which should be skipped for newer C++11-style strongly
typed enums.
.. code-block:: pycon
>>> p = Pet('Lucy', Pet.Cat)
>>> p.type
Kind.Cat
>>> int(p.type)
1L
The entries defined by the enumeration type are exposed in the ``__members__`` property:
.. code-block:: pycon
>>> Pet.Kind.__members__
{'Dog': Kind.Dog, 'Cat': Kind.Cat}
The ``name`` property returns the name of the enum value as a unicode string.
.. note::
It is also possible to use ``str(enum)``, however these accomplish different
goals. The following shows how these two approaches differ.
.. code-block:: pycon
>>> p = Pet( "Lucy", Pet.Cat )
>>> pet_type = p.type
>>> pet_type
Pet.Cat
>>> str(pet_type)
'Pet.Cat'
>>> pet_type.name
'Cat'
.. note::
When the special tag ``py::arithmetic()`` is specified to the ``enum_``
constructor, pybind11 creates an enumeration that also supports rudimentary
arithmetic and bit-level operations like comparisons, and, or, xor, negation,
etc.
.. code-block:: cpp
py::enum_<Pet::Kind>(pet, "Kind", py::arithmetic())
...
By default, these are omitted to conserve space.
.. image:: pybind11-logo.png
About this project
==================
**pybind11** is a lightweight header-only library that exposes C++ types in Python
and vice versa, mainly to create Python bindings of existing C++ code. Its
goals and syntax are similar to the excellent `Boost.Python`_ library by David
Abrahams: to minimize boilerplate code in traditional extension modules by
inferring type information using compile-time introspection.
.. _Boost.Python: http://www.boost.org/doc/libs/release/libs/python/doc/index.html
The main issue with Boost.Python—and the reason for creating such a similar
project—is Boost. Boost is an enormously large and complex suite of utility
libraries that works with almost every C++ compiler in existence. This
compatibility has its cost: arcane template tricks and workarounds are
necessary to support the oldest and buggiest of compiler specimens. Now that
C++11-compatible compilers are widely available, this heavy machinery has
become an excessively large and unnecessary dependency.
Think of this library as a tiny self-contained version of Boost.Python with
everything stripped away that isn't relevant for binding generation. Without
comments, the core header files only require ~4K lines of code and depend on
Python (2.7 or 3.x, or PyPy2.7 >= 5.7) and the C++ standard library. This
compact implementation was possible thanks to some of the new C++11 language
features (specifically: tuples, lambda functions and variadic templates). Since
its creation, this library has grown beyond Boost.Python in many ways, leading
to dramatically simpler binding code in many common situations.
Core features
*************
The following core C++ features can be mapped to Python
- Functions accepting and returning custom data structures per value, reference, or pointer
- Instance methods and static methods
- Overloaded functions
- Instance attributes and static attributes
- Arbitrary exception types
- Enumerations
- Callbacks
- Iterators and ranges
- Custom operators
- Single and multiple inheritance
- STL data structures
- Smart pointers with reference counting like ``std::shared_ptr``
- Internal references with correct reference counting
- C++ classes with virtual (and pure virtual) methods can be extended in Python
Goodies
*******
In addition to the core functionality, pybind11 provides some extra goodies:
- Python 2.7, 3.x, and PyPy (PyPy2.7 >= 5.7) are supported with an
implementation-agnostic interface.
- It is possible to bind C++11 lambda functions with captured variables. The
lambda capture data is stored inside the resulting Python function object.
- pybind11 uses C++11 move constructors and move assignment operators whenever
possible to efficiently transfer custom data types.
- It's easy to expose the internal storage of custom data types through
Python's buffer protocols. This is handy e.g. for fast conversion between
C++ matrix classes like Eigen and NumPy without expensive copy operations.
- pybind11 can automatically vectorize functions so that they are transparently
applied to all entries of one or more NumPy array arguments.
- Python's slice-based access and assignment operations can be supported with
just a few lines of code.
- Everything is contained in just a few header files; there is no need to link
against any additional libraries.
- Binaries are generally smaller by a factor of at least 2 compared to
equivalent bindings generated by Boost.Python. A recent pybind11 conversion
of `PyRosetta`_, an enormous Boost.Python binding project, reported a binary
size reduction of **5.4x** and compile time reduction by **5.8x**.
- Function signatures are precomputed at compile time (using ``constexpr``),
leading to smaller binaries.
- With little extra effort, C++ types can be pickled and unpickled similar to
regular Python objects.
.. _PyRosetta: http://graylab.jhu.edu/RosettaCon2016/PyRosetta-4.pdf
Supported compilers
*******************
1. Clang/LLVM (any non-ancient version with C++11 support)
2. GCC 4.8 or newer
3. Microsoft Visual Studio 2015 or newer
4. Intel C++ compiler v17 or newer (v16 with pybind11 v2.0 and v15 with pybind11 v2.0 and a `workaround <https://github.com/pybind/pybind11/issues/276>`_ )
Functions
#########
Before proceeding with this section, make sure that you are already familiar
with the basics of binding functions and classes, as explained in :doc:`/basics`
and :doc:`/classes`. The following guide is applicable to both free and member
functions, i.e. *methods* in Python.
.. _return_value_policies:
Return value policies
=====================
Python and C++ use fundamentally different ways of managing the memory and
lifetime of objects managed by them. This can lead to issues when creating
bindings for functions that return a non-trivial type. Just by looking at the
type information, it is not clear whether Python should take charge of the
returned value and eventually free its resources, or if this is handled on the
C++ side. For this reason, pybind11 provides several *return value policy*
annotations that can be passed to the :func:`module::def` and
:func:`class_::def` functions. The default policy is
:enum:`return_value_policy::automatic`.
Return value policies are tricky, and it's very important to get them right.
Just to illustrate what can go wrong, consider the following simple example:
.. code-block:: cpp
/* Function declaration */
Data *get_data() { return _data; /* (pointer to a static data structure) */ }
...
/* Binding code */
m.def("get_data", &get_data); // <-- KABOOM, will cause crash when called from Python
What's going on here? When ``get_data()`` is called from Python, the return
value (a native C++ type) must be wrapped to turn it into a usable Python type.
In this case, the default return value policy (:enum:`return_value_policy::automatic`)
causes pybind11 to assume ownership of the static ``_data`` instance.
When Python's garbage collector eventually deletes the Python
wrapper, pybind11 will also attempt to delete the C++ instance (via ``operator
delete()``) due to the implied ownership. At this point, the entire application
will come crashing down, though errors could also be more subtle and involve
silent data corruption.
In the above example, the policy :enum:`return_value_policy::reference` should have
been specified so that the global data instance is only *referenced* without any
implied transfer of ownership, i.e.:
.. code-block:: cpp
m.def("get_data", &get_data, return_value_policy::reference);
On the other hand, this is not the right policy for many other situations,
where ignoring ownership could lead to resource leaks.
As a developer using pybind11, it's important to be familiar with the different
return value policies, including which situation calls for which one of them.
The following table provides an overview of available policies:
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
+--------------------------------------------------+----------------------------------------------------------------------------+
| Return value policy | Description |
+==================================================+============================================================================+
| :enum:`return_value_policy::take_ownership` | Reference an existing object (i.e. do not create a new copy) and take |
| | ownership. Python will call the destructor and delete operator when the |
| | object's reference count reaches zero. Undefined behavior ensues when the |
| | C++ side does the same, or when the data was not dynamically allocated. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::copy` | Create a new copy of the returned object, which will be owned by Python. |
| | This policy is comparably safe because the lifetimes of the two instances |
| | are decoupled. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::move` | Use ``std::move`` to move the return value contents into a new instance |
| | that will be owned by Python. This policy is comparably safe because the |
| | lifetimes of the two instances (move source and destination) are decoupled.|
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::reference` | Reference an existing object, but do not take ownership. The C++ side is |
| | responsible for managing the object's lifetime and deallocating it when |
| | it is no longer used. Warning: undefined behavior will ensue when the C++ |
| | side deletes an object that is still referenced and used by Python. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::reference_internal` | Indicates that the lifetime of the return value is tied to the lifetime |
| | of a parent object, namely the implicit ``this``, or ``self`` argument of |
| | the called method or property. Internally, this policy works just like |
| | :enum:`return_value_policy::reference` but additionally applies a |
| | ``keep_alive<0, 1>`` *call policy* (described in the next section) that |
| | prevents the parent object from being garbage collected as long as the |
| | return value is referenced by Python. This is the default policy for |
| | property getters created via ``def_property``, ``def_readwrite``, etc. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::automatic` | **Default policy.** This policy falls back to the policy |
| | :enum:`return_value_policy::take_ownership` when the return value is a |
| | pointer. Otherwise, it uses :enum:`return_value_policy::move` or |
| | :enum:`return_value_policy::copy` for rvalue and lvalue references, |
| | respectively. See above for a description of what all of these different |
| | policies do. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::automatic_reference` | As above, but use policy :enum:`return_value_policy::reference` when the |
| | return value is a pointer. This is the default conversion policy for |
| | function arguments when calling Python functions manually from C++ code |
| | (i.e. via handle::operator()). You probably won't need to use this. |
+--------------------------------------------------+----------------------------------------------------------------------------+
Return value policies can also be applied to properties:
.. code-block:: cpp
class_<MyClass>(m, "MyClass")
.def_property("data", &MyClass::getData, &MyClass::setData,
py::return_value_policy::copy);
Technically, the code above applies the policy to both the getter and the
setter function, however, the setter doesn't really care about *return*
value policies which makes this a convenient terse syntax. Alternatively,
targeted arguments can be passed through the :class:`cpp_function` constructor:
.. code-block:: cpp
class_<MyClass>(m, "MyClass")
.def_property("data"
py::cpp_function(&MyClass::getData, py::return_value_policy::copy),
py::cpp_function(&MyClass::setData)
);
.. warning::
Code with invalid return value policies might access uninitialized memory or
free data structures multiple times, which can lead to hard-to-debug
non-determinism and segmentation faults, hence it is worth spending the
time to understand all the different options in the table above.
.. note::
One important aspect of the above policies is that they only apply to
instances which pybind11 has *not* seen before, in which case the policy
clarifies essential questions about the return value's lifetime and
ownership. When pybind11 knows the instance already (as identified by its
type and address in memory), it will return the existing Python object
wrapper rather than creating a new copy.
.. note::
The next section on :ref:`call_policies` discusses *call policies* that can be
specified *in addition* to a return value policy from the list above. Call
policies indicate reference relationships that can involve both return values
and parameters of functions.
.. note::
As an alternative to elaborate call policies and lifetime management logic,
consider using smart pointers (see the section on :ref:`smart_pointers` for
details). Smart pointers can tell whether an object is still referenced from
C++ or Python, which generally eliminates the kinds of inconsistencies that
can lead to crashes or undefined behavior. For functions returning smart
pointers, it is not necessary to specify a return value policy.
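As a brief sketch, assume a hypothetical ``Widget`` class bound with a
``std::shared_ptr`` holder (see :ref:`smart_pointers`); the factory function
below then needs no return value policy annotation:

.. code-block:: cpp

    py::class_<Widget, std::shared_ptr<Widget>>(m, "Widget");

    // Ownership is tracked by the shared_ptr holder on both sides,
    // so no return value policy is required here.
    m.def("make_widget", []() { return std::make_shared<Widget>(); });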
.. _call_policies:
Additional call policies
========================
In addition to the above return value policies, further *call policies* can be
specified to indicate dependencies between parameters or ensure a certain state
for the function call.
Keep alive
----------
In general, this policy is required when the C++ object is any kind of container
and another object is being added to the container. ``keep_alive<Nurse, Patient>``
indicates that the argument with index ``Patient`` should be kept alive at least
until the argument with index ``Nurse`` is freed by the garbage collector. Argument
indices start at one, while zero refers to the return value. For methods, index
``1`` refers to the implicit ``this`` pointer, while regular arguments begin at
index ``2``. Arbitrarily many call policies can be specified. When a ``Nurse``
with value ``None`` is detected at runtime, the call policy does nothing.
When the nurse is not a pybind11-registered type, the implementation internally
relies on the ability to create a *weak reference* to the nurse object. When
the nurse object is not a pybind11-registered type and does not support weak
references, an exception will be thrown.
Consider the following example: here, the binding code for a list append
operation ties the lifetime of the newly added element to the underlying
container:
.. code-block:: cpp
py::class_<List>(m, "List")
.def("append", &List::append, py::keep_alive<1, 2>());
For consistency, the argument indexing is identical for constructors. Index
``1`` still refers to the implicit ``this`` pointer, i.e. the object which is
being constructed. Index ``0`` refers to the return type which is presumed to
be ``void`` when a constructor is viewed like a function. The following example
ties the lifetime of the constructor element to the constructed object:
.. code-block:: cpp
py::class_<Nurse>(m, "Nurse")
.def(py::init<Patient &>(), py::keep_alive<1, 2>());
.. note::
``keep_alive`` is analogous to the ``with_custodian_and_ward`` (if Nurse,
Patient != 0) and ``with_custodian_and_ward_postcall`` (if Nurse/Patient ==
0) policies from Boost.Python.
Call guard
----------
The ``call_guard<T>`` policy allows any scope guard type ``T`` to be placed
around the function call. For example, this definition:
.. code-block:: cpp
m.def("foo", foo, py::call_guard<T>());
is equivalent to the following pseudocode:
.. code-block:: cpp
m.def("foo", [](args...) {
T scope_guard;
return foo(args...); // forwarded arguments
});
The only requirement is that ``T`` is default-constructible, but otherwise any
scope guard will work. This is very useful in combination with `gil_scoped_release`.
See :ref:`gil`.
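For instance, assuming ``compute`` is a long-running C++ function that does not
touch any Python objects, the GIL could be released for the duration of the call
(a sketch; ``compute`` is illustrative):

.. code-block:: cpp

    // The GIL is released before compute() runs and reacquired when the guard is destroyed
    m.def("compute", &compute, py::call_guard<py::gil_scoped_release>());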
Multiple guards can also be specified as ``py::call_guard<T1, T2, T3...>``. The
constructor order is left to right and destruction happens in reverse.
.. seealso::
The file :file:`tests/test_call_policies.cpp` contains a complete example
that demonstrates using `keep_alive` and `call_guard` in more detail.
.. _python_objects_as_args:
Python objects as arguments
===========================
pybind11 exposes all major Python types using thin C++ wrapper classes. These
wrapper classes can also be used as parameters of functions in bindings, which
makes it possible to directly work with native Python types on the C++ side.
For instance, the following statement iterates over a Python ``dict``:
.. code-block:: cpp
void print_dict(py::dict dict) {
/* Easily interact with Python types */
for (auto item : dict)
std::cout << "key=" << std::string(py::str(item.first)) << ", "
<< "value=" << std::string(py::str(item.second)) << std::endl;
}
It can be exported:
.. code-block:: cpp
m.def("print_dict", &print_dict);
And used in Python as usual:
.. code-block:: pycon
>>> print_dict({'foo': 123, 'bar': 'hello'})
key=foo, value=123
key=bar, value=hello
For more information on using Python objects in C++, see :doc:`/advanced/pycpp/index`.
Accepting \*args and \*\*kwargs
===============================
Python provides a useful mechanism to define functions that accept arbitrary
numbers of arguments and keyword arguments:
.. code-block:: python
def generic(*args, **kwargs):
... # do something with args and kwargs
Such functions can also be created using pybind11:
.. code-block:: cpp
void generic(py::args args, py::kwargs kwargs) {
/// .. do something with args
if (kwargs)
/// .. do something with kwargs
}
/// Binding code
m.def("generic", &generic);
The class ``py::args`` derives from ``py::tuple`` and ``py::kwargs`` derives
from ``py::dict``.
You may also use just one or the other, and may combine these with other
arguments as long as the ``py::args`` and ``py::kwargs`` arguments are the last
arguments accepted by the function.
Please refer to the other examples for details on how to iterate over these,
and on how to cast their entries into C++ objects. A demonstration is also
available in ``tests/test_kwargs_and_defaults.cpp``.
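As a brief sketch, an expanded version of the ``generic`` function above could
iterate the entries and cast them to C++ types like so (the casts assume the
caller passes convertible values):

.. code-block:: cpp

    void generic(py::args args, py::kwargs kwargs) {
        for (auto item : args)
            std::cout << "positional: " << item.cast<int>() << std::endl;
        for (auto item : kwargs)
            std::cout << "keyword: " << std::string(py::str(item.first)) << " = "
                      << item.second.cast<std::string>() << std::endl;
    }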
.. note::
When combining \*args or \*\*kwargs with :ref:`keyword_args` you should
*not* include ``py::arg`` tags for the ``py::args`` and ``py::kwargs``
arguments.
Default arguments revisited
===========================
The section on :ref:`default_args` previously discussed basic usage of default
arguments using pybind11. One noteworthy aspect of their implementation is that
default arguments are converted to Python objects right at declaration time.
Consider the following example:
.. code-block:: cpp
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg("arg") = SomeType(123));
In this case, pybind11 must already be set up to deal with values of the type
``SomeType`` (via a prior instantiation of ``py::class_<SomeType>``), or an
exception will be thrown.
Another aspect worth highlighting is that the "preview" of the default argument
in the function signature is generated using the object's ``__repr__`` method.
If not available, the signature may not be very helpful, e.g.:
.. code-block:: pycon
FUNCTIONS
...
| myFunction(...)
| Signature : (MyClass, arg : SomeType = <SomeType object at 0x101b7b080>) -> NoneType
...
The first way of addressing this is by defining ``SomeType.__repr__``.
Alternatively, it is possible to specify the human-readable preview of the
default argument manually using the ``arg_v`` notation:
.. code-block:: cpp
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg_v("arg", SomeType(123), "SomeType(123)"));
Sometimes it may be necessary to pass a null pointer value as a default
argument. In this case, remember to cast it to the underlying type in question,
like so:
.. code-block:: cpp
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg("arg") = (SomeType *) nullptr);
.. _nonconverting_arguments:
Non-converting arguments
========================
Certain argument types may support conversion from one type to another. Some
examples of conversions are:
* :ref:`implicit_conversions` declared using ``py::implicitly_convertible<A,B>()``
* Calling a method accepting a double with an integer argument
* Passing a non-complex Python type (for example, a ``float``) to a function
  taking a ``std::complex<float>`` argument. (Requires the optional
  ``pybind11/complex.h`` header).
* Calling a function taking an Eigen matrix reference with a numpy array of the
wrong type or of an incompatible data layout. (Requires the optional
``pybind11/eigen.h`` header).
This behaviour is sometimes undesirable: the binding code may prefer to raise
an error rather than convert the argument. This behaviour can be obtained
through ``py::arg`` by calling the ``.noconvert()`` method of the ``py::arg``
object, such as:
.. code-block:: cpp
m.def("floats_only", [](double f) { return 0.5 * f; }, py::arg("f").noconvert());
m.def("floats_preferred", [](double f) { return 0.5 * f; }, py::arg("f"));
Attempting to call the second function (the one without ``.noconvert()``) with
an integer will succeed, but attempting to call the ``.noconvert()`` version
will fail with a ``TypeError``:
.. code-block:: pycon
>>> floats_preferred(4)
2.0
>>> floats_only(4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: floats_only(): incompatible function arguments. The following argument types are supported:
1. (f: float) -> float
Invoked with: 4
You may, of course, combine this with the :var:`_a` shorthand notation (see
:ref:`keyword_args`) and/or :ref:`default_args`. It is also permitted to omit
the argument name by using the ``py::arg()`` constructor without an argument
name, i.e. by specifying ``py::arg().noconvert()``.
.. note::
When specifying ``py::arg`` options it is necessary to provide the same
number of options as the bound function has arguments. Thus if you want to
enable no-convert behaviour for just one of several arguments, you will
need to specify a ``py::arg()`` annotation for each argument with the
no-convert argument modified to ``py::arg().noconvert()``.
.. _none_arguments:
Allowing/Prohibiting None arguments
===================================
When a C++ type registered with :class:`py::class_` is passed as an argument to
a function taking the instance as pointer or shared holder (e.g. ``shared_ptr``
or a custom, copyable holder as described in :ref:`smart_pointers`), pybind
allows ``None`` to be passed from Python which results in calling the C++
function with ``nullptr`` (or an empty holder) for the argument.
To explicitly enable or disable this behaviour, use the
``.none`` method of the :class:`py::arg` object:
.. code-block:: cpp
py::class_<Dog>(m, "Dog").def(py::init<>());
py::class_<Cat>(m, "Cat").def(py::init<>());
m.def("bark", [](Dog *dog) -> std::string {
if (dog) return "woof!"; /* Called with a Dog instance */
else return "(no dog)"; /* Called with None, dog == nullptr */
}, py::arg("dog").none(true));
m.def("meow", [](Cat *cat) -> std::string {
// Can't be called with None argument
return "meow";
}, py::arg("cat").none(false));
With the above, the Python call ``bark(None)`` will return the string ``"(no
dog)"``, while attempting to call ``meow(None)`` will raise a ``TypeError``:
.. code-block:: pycon
>>> from animals import Dog, Cat, bark, meow
>>> bark(Dog())
'woof!'
>>> meow(Cat())
'meow'
>>> bark(None)
'(no dog)'
>>> meow(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: meow(): incompatible function arguments. The following argument types are supported:
1. (cat: animals.Cat) -> str
Invoked with: None
The default behaviour when the tag is unspecified is to allow ``None``.
.. note::
Even when ``.none(true)`` is specified for an argument, ``None`` will be converted to a
``nullptr`` *only* for custom and :ref:`opaque <opaque>` types. Pointers to built-in types
(``double *``, ``int *``, ...) and STL types (``std::vector<T> *``, ...; if ``pybind11/stl.h``
is included) are copied when converted to C++ (see :doc:`/advanced/cast/overview`) and will
not allow ``None`` as argument. To pass an optional argument of these copied
types, consider using ``std::optional<T>``.
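A minimal sketch (requires the ``pybind11/stl.h`` header, which provides the
``std::optional`` caster, and a C++17 compiler; the function is illustrative):

.. code-block:: cpp

    #include <pybind11/stl.h>  // provides the std::optional<T> type caster

    // Accepts an int or None; None arrives as an empty optional
    m.def("increment", [](std::optional<int> x) {
        return x ? *x + 1 : 0;
    });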
Overload resolution order
=========================
When a function or method with multiple overloads is called from Python,
pybind11 determines which overload to call in two passes. The first pass
attempts to call each overload without allowing argument conversion (as if
every argument had been specified as ``py::arg().noconvert()`` as described
above).
If no overload succeeds in the no-conversion first pass, a second pass is
attempted in which argument conversion is allowed (except where prohibited via
an explicit ``py::arg().noconvert()`` attribute in the function definition).
If the second pass also fails a ``TypeError`` is raised.
Within each pass, overloads are tried in the order they were registered with
pybind11.
What this means in practice is that pybind11 will prefer any overload that does
not require conversion of arguments to an overload that does, but otherwise prefers
earlier-defined overloads to later-defined ones.
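A small sketch of the effect (the function name is illustrative):

.. code-block:: cpp

    m.def("describe", [](int)    { return "int"; });
    m.def("describe", [](double) { return "double"; });

.. code-block:: pycon

    >>> describe(1)    # matched by the int overload in the no-conversion pass
    'int'
    >>> describe(1.5)  # matched by the double overload in the no-conversion pass
    'double'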
.. note::
pybind11 does *not* further prioritize based on the number/pattern of
overloaded arguments. That is, pybind11 does not prioritize a function
requiring one conversion over one requiring three, but only prioritizes
overloads requiring no conversion at all to overloads that require
conversion of at least one argument.
Smart pointers
##############
std::unique_ptr
===============
Given a class ``Example`` with Python bindings, it's possible to return
instances wrapped in C++11 unique pointers, like so
.. code-block:: cpp
std::unique_ptr<Example> create_example() { return std::unique_ptr<Example>(new Example()); }
.. code-block:: cpp
m.def("create_example", &create_example);
In other words, there is nothing special that needs to be done. While returning
unique pointers in this way is allowed, it is *illegal* to use them as function
arguments. For instance, the following function signature cannot be processed
by pybind11.
.. code-block:: cpp
void do_something_with_example(std::unique_ptr<Example> ex) { ... }
The above signature would imply that Python needs to give up ownership of an
object that is passed to this function, which is generally not possible (for
instance, the object might be referenced elsewhere).
std::shared_ptr
===============
The binding generator for classes, :class:`class_`, can be passed a template
type that denotes a special *holder* type that is used to manage references to
the object. If no such holder type template argument is given, the default for
a type named ``Type`` is ``std::unique_ptr<Type>``, which means that the object
is deallocated when Python's reference count goes to zero.
It is possible to switch to other types of reference counting wrappers or smart
pointers, which is useful in codebases that rely on them. For instance, the
following snippet causes ``std::shared_ptr`` to be used instead.
.. code-block:: cpp
py::class_<Example, std::shared_ptr<Example> /* <- holder type */> obj(m, "Example");
Note that any particular class can only be associated with a single holder type.
One potential stumbling block when using holder types is that they need to be
applied consistently. Can you guess what's broken about the following binding
code?
.. code-block:: cpp
class Child { };
class Parent {
public:
Parent() : child(std::make_shared<Child>()) { }
Child *get_child() { return child.get(); } /* Hint: ** DON'T DO THIS ** */
private:
std::shared_ptr<Child> child;
};
PYBIND11_MODULE(example, m) {
py::class_<Child, std::shared_ptr<Child>>(m, "Child");
py::class_<Parent, std::shared_ptr<Parent>>(m, "Parent")
.def(py::init<>())
.def("get_child", &Parent::get_child);
}
The following Python code will cause undefined behavior (and likely a
segmentation fault).
.. code-block:: python
from example import Parent
print(Parent().get_child())
The problem is that ``Parent::get_child()`` returns a pointer to an instance of
``Child``, but the fact that this instance is already managed by
``std::shared_ptr<...>`` is lost when passing raw pointers. In this case,
pybind11 will create a second independent ``std::shared_ptr<...>`` that also
claims ownership of the pointer. In the end, the object will be freed **twice**
since these shared pointers have no way of knowing about each other.
There are two ways to resolve this issue:
1. For types that are managed by a smart pointer class, never use raw pointers
in function arguments or return values. In other words: always consistently
wrap pointers into their designated holder types (such as
``std::shared_ptr<...>``). In this case, the signature of ``get_child()``
should be modified as follows:
.. code-block:: cpp
std::shared_ptr<Child> get_child() { return child; }
2. Adjust the definition of ``Child`` by specifying
``std::enable_shared_from_this<T>`` (see cppreference_ for details) as a
base class. This adds a small bit of information to ``Child`` that allows
pybind11 to realize that there is already an existing
``std::shared_ptr<...>`` and communicate with it. In this case, the
declaration of ``Child`` should look as follows:
.. _cppreference: http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
.. code-block:: cpp
class Child : public std::enable_shared_from_this<Child> { };
.. _smart_pointers:
Custom smart pointers
=====================
pybind11 supports ``std::unique_ptr`` and ``std::shared_ptr`` right out of the
box. For any other custom smart pointer, transparent conversions can be enabled
using a macro invocation similar to the following. It must be declared at the
top namespace level before any binding code:
.. code-block:: cpp
PYBIND11_DECLARE_HOLDER_TYPE(T, SmartPtr<T>);
The first argument of :func:`PYBIND11_DECLARE_HOLDER_TYPE` should be a
placeholder name that is used as a template parameter of the second argument.
Thus, feel free to use any identifier, but use it consistently on both sides;
also, don't use the name of a type that already exists in your codebase.
The macro also accepts a third optional boolean parameter that is set to false
by default. Specify
.. code-block:: cpp
PYBIND11_DECLARE_HOLDER_TYPE(T, SmartPtr<T>, true);
if ``SmartPtr<T>`` can always be initialized from a ``T*`` pointer without the
risk of inconsistencies (such as multiple independent ``SmartPtr`` instances
believing that they are the sole owner of the ``T*`` pointer). A common
situation where ``true`` should be passed is when the ``T`` instances use
*intrusive* reference counting.
Please take a look at the :ref:`macro_notes` before using this feature.
By default, pybind11 assumes that your custom smart pointer has a standard
interface, i.e. provides a ``.get()`` member function to access the underlying
raw pointer. If this is not the case, pybind11's ``holder_helper`` must be
specialized:
.. code-block:: cpp
// Always needed for custom holder types
PYBIND11_DECLARE_HOLDER_TYPE(T, SmartPtr<T>);
// Only needed if the type's `.get()` goes by another name
namespace pybind11 { namespace detail {
template <typename T>
struct holder_helper<SmartPtr<T>> { // <-- specialization
static const T *get(const SmartPtr<T> &p) { return p.getPointer(); }
};
}}
The above specialization informs pybind11 that the custom ``SmartPtr`` class
provides ``.get()`` functionality via ``.getPointer()``.
.. seealso::
The file :file:`tests/test_smart_ptr.cpp` contains a complete example
that demonstrates how to work with custom reference-counting holder types
in more detail.
Exceptions
##########
Built-in exception translation
==============================
When C++ code invoked from Python throws an ``std::exception``, it is
automatically converted into a Python ``Exception``. pybind11 defines multiple
special exception classes that will map to different types of Python
exceptions:
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
+--------------------------------------+--------------------------------------+
| C++ exception type | Python exception type |
+======================================+======================================+
| :class:`std::exception` | ``RuntimeError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::bad_alloc` | ``MemoryError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::domain_error` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::invalid_argument` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::length_error` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::out_of_range` | ``IndexError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::range_error` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::overflow_error` | ``OverflowError`` |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::stop_iteration` | ``StopIteration`` (used to implement |
| | custom iterators) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::index_error` | ``IndexError`` (used to indicate out |
| | of bounds access in ``__getitem__``, |
| | ``__setitem__``, etc.) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::value_error` | ``ValueError`` (used to indicate |
| | wrong value passed in |
| | ``container.remove(...)``) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::key_error` | ``KeyError`` (used to indicate out |
| | of bounds access in ``__getitem__``, |
| | ``__setitem__`` in dict-like |
| | objects, etc.) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::error_already_set` | Indicates that the Python exception |
| | flag has already been set via Python |
| | API calls from C++ code; this C++ |
| | exception is used to propagate such |
| | a Python exception back to Python. |
+--------------------------------------+--------------------------------------+
When a Python function invoked from C++ throws an exception, it is converted
into a C++ exception of type :class:`error_already_set` whose string payload
contains a textual summary.
There is also a special exception :class:`cast_error` that is thrown by
:func:`handle::call` when the input arguments cannot be converted to Python
objects.
Registering custom translators
==============================
If the default exception conversion policy described above is insufficient,
pybind11 also provides support for registering custom exception translators.
To register a simple exception conversion that translates a C++ exception into
a new Python exception using the C++ exception's ``what()`` method, a helper
function is available:
.. code-block:: cpp
py::register_exception<CppExp>(module, "PyExp");
This call creates a Python exception class with the name ``PyExp`` in the given
module and automatically converts any encountered exceptions of type ``CppExp``
into Python exceptions of type ``PyExp``.
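As an illustration, here is a small sketch of how this plays out end to end
(``CppExp`` and the bound function ``risky`` are hypothetical; ``CppExp`` is
assumed to derive from ``std::exception`` with a suitable ``what()`` message):

.. code-block:: cpp

    module.def("risky", []() { throw CppExp("something went wrong"); });

.. code-block:: pycon

    >>> import example
    >>> try:
    ...     example.risky()
    ... except example.PyExp as e:
    ...     print(e)
    ...
    something went wrong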
When more advanced exception translation is needed, the function
``py::register_exception_translator(translator)`` can be used to register
functions that can translate arbitrary exception types (and which may include
additional logic to do so). The function takes a stateless callable (e.g. a
function pointer or a lambda function without captured variables) with the call
signature ``void(std::exception_ptr)``.
When a C++ exception is thrown, the registered exception translators are tried
in reverse order of registration (i.e. the last registered translator gets the
first shot at handling the exception).
Inside the translator, ``std::rethrow_exception`` should be used within
a try block to re-throw the exception. One or more catch clauses should then
handle the appropriate exceptions, with each clause using ``PyErr_SetString``
to set a standard Python exception or ``ex(string)`` to raise a custom
exception type (see below).
To declare a custom Python exception type, declare a ``py::exception`` variable
and use this in the associated exception translator (note: it is often useful
to make this a static declaration when using it inside a lambda expression
without requiring capturing).
The following example demonstrates this for two hypothetical exception classes,
``MyCustomException`` and ``OtherException``: the first is translated to a
custom Python exception ``MyCustomError``, while the second is translated to a
standard Python ``RuntimeError``:
.. code-block:: cpp
static py::exception<MyCustomException> exc(m, "MyCustomError");
py::register_exception_translator([](std::exception_ptr p) {
try {
if (p) std::rethrow_exception(p);
} catch (const MyCustomException &e) {
exc(e.what());
} catch (const OtherException &e) {
PyErr_SetString(PyExc_RuntimeError, e.what());
}
});
Multiple exceptions can be handled by a single translator, as shown in the
example above. If the exception is not caught by the current translator, the
previously registered one gets a chance.
If none of the registered exception translators is able to handle the
exception, it is handled by the default converter as described in the previous
section.
.. seealso::
The file :file:`tests/test_exceptions.cpp` contains examples
of various custom exception translators and custom exception types.
.. note::
You must call either ``PyErr_SetString`` or a custom exception's call
operator (``exc(string)``) for every exception caught in a custom exception
translator. Failure to do so will cause Python to crash with ``SystemError:
error return without exception set``.
Exceptions that you do not plan to handle should simply not be caught, or
may be explicitly re-thrown to delegate them to other, previously registered
exception translators.
Classes
#######
This section presents advanced binding code for classes and it is assumed
that you are already familiar with the basics from :doc:`/classes`.
.. _overriding_virtuals:
Overriding virtual functions in Python
======================================
Suppose that a C++ class or interface has a virtual function that we'd like
to override from within Python (we'll focus on the class ``Animal``; ``Dog`` is
given as a specific example of how one would do this with traditional C++
code).
.. code-block:: cpp
class Animal {
public:
virtual ~Animal() { }
virtual std::string go(int n_times) = 0;
};
class Dog : public Animal {
public:
std::string go(int n_times) override {
std::string result;
for (int i=0; i<n_times; ++i)
result += "woof! ";
return result;
}
};
Let's also suppose that we are given a plain function which calls the
function ``go()`` on an arbitrary ``Animal`` instance.
.. code-block:: cpp
std::string call_go(Animal *animal) {
return animal->go(3);
}
Normally, the binding code for these classes would look as follows:
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
py::class_<Animal>(m, "Animal")
.def("go", &Animal::go);
py::class_<Dog, Animal>(m, "Dog")
.def(py::init<>());
m.def("call_go", &call_go);
}
However, these bindings are impossible to extend: ``Animal`` is not
constructible, and we clearly require some kind of "trampoline" that
redirects virtual calls back to Python.
Defining a new type of ``Animal`` from within Python is possible but requires a
helper class that is defined as follows:
.. code-block:: cpp
class PyAnimal : public Animal {
public:
/* Inherit the constructors */
using Animal::Animal;
/* Trampoline (need one for each virtual function) */
std::string go(int n_times) override {
PYBIND11_OVERLOAD_PURE(
std::string, /* Return type */
Animal, /* Parent class */
go, /* Name of function in C++ (must match Python name) */
n_times /* Argument(s) */
);
}
};
The macro :c:macro:`PYBIND11_OVERLOAD_PURE` should be used for pure virtual
functions, and :c:macro:`PYBIND11_OVERLOAD` should be used for functions which have
a default implementation. There are also two alternate macros
:c:macro:`PYBIND11_OVERLOAD_PURE_NAME` and :c:macro:`PYBIND11_OVERLOAD_NAME` which
take a string-valued name argument between the *Parent class* and *Name of the
function* slots, which defines the name of function in Python. This is required
when the C++ and Python versions of the
function have different names, e.g. ``operator()`` vs ``__call__``.
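For instance, a sketch of a trampoline for a hypothetical functor base class
whose pure virtual ``operator()`` should be overridable as ``__call__`` from
Python:

.. code-block:: cpp

    class PyFunctor : public Functor {
    public:
        using Functor::Functor;
        double operator()(double x) override {
            PYBIND11_OVERLOAD_PURE_NAME(
                double,      /* Return type */
                Functor,     /* Parent class */
                "__call__",  /* Name of method in Python */
                operator(),  /* Name of function in C++ */
                x            /* Argument(s) */
            );
        }
    };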
The binding code also needs a few minor adaptations (highlighted):
.. code-block:: cpp
:emphasize-lines: 2,3
PYBIND11_MODULE(example, m) {
py::class_<Animal, PyAnimal /* <--- trampoline*/>(m, "Animal")
.def(py::init<>())
.def("go", &Animal::go);
py::class_<Dog, Animal>(m, "Dog")
.def(py::init<>());
m.def("call_go", &call_go);
}
Importantly, pybind11 is made aware of the trampoline helper class by
specifying it as an extra template argument to :class:`class_`. (This can also
be combined with other template arguments such as a custom holder type; the
order of template types does not matter). Following this, we are able to
define a constructor as usual.
Bindings should be made against the actual class, not the trampoline helper class.
.. code-block:: cpp
:emphasize-lines: 3
py::class_<Animal, PyAnimal /* <--- trampoline*/>(m, "Animal")
.def(py::init<>())
.def("go", &PyAnimal::go); /* <--- THIS IS WRONG, use &Animal::go */
Note, however, that the above is sufficient for allowing python classes to
extend ``Animal``, but not ``Dog``: see :ref:`virtual_and_inheritance` for the
necessary steps required to provide proper overload support for inherited
classes.
The Python session below shows how to override ``Animal::go`` and invoke it via
a virtual method call.
.. code-block:: pycon
>>> from example import *
>>> d = Dog()
>>> call_go(d)
u'woof! woof! woof! '
>>> class Cat(Animal):
... def go(self, n_times):
... return "meow! " * n_times
...
>>> c = Cat()
>>> call_go(c)
u'meow! meow! meow! '
If you are defining a custom constructor in a derived Python class, you *must*
ensure that you explicitly call the bound C++ constructor using ``__init__``,
*regardless* of whether it is a default constructor or not. Otherwise, the
memory for the C++ portion of the instance will be left uninitialized, which
will generally leave the C++ instance in an invalid state and cause undefined
behavior if the C++ instance is subsequently used.
Here is an example:
.. code-block:: python
class Dachshund(Dog):
def __init__(self, name):
Dog.__init__(self) # Without this, undefined behavior may occur if the C++ portions are referenced.
self.name = name
def bark(self):
return "yap!"
Note that a direct ``__init__`` constructor *should be called*, and ``super()``
should not be used. For simple cases of linear inheritance, ``super()``
may work, but once you begin mixing Python and C++ multiple inheritance,
things will fall apart due to differences between Python's MRO and C++'s
mechanisms.
Please take a look at the :ref:`macro_notes` before using this feature.
.. note::
When the overridden type returns a reference or pointer to a type that
pybind11 converts from Python (for example, numeric values, std::string,
and other built-in value-converting types), there are some limitations to
be aware of:
- because in these cases there is no C++ variable to reference (the value
is stored in the referenced Python variable), pybind11 provides one in
the PYBIND11_OVERLOAD macros (when needed) with static storage duration.
Note that this means that invoking the overloaded method on *any*
instance will change the referenced value stored in *all* instances of
that type.
- Attempts to modify a non-const reference will not have the desired
effect: it will change only the static cache variable, but this change
will not propagate to the underlying Python instance, and the change will be
replaced the next time the overload is invoked.
.. seealso::
The file :file:`tests/test_virtual_functions.cpp` contains a complete
example that demonstrates how to override virtual functions using pybind11
in more detail.
.. _virtual_and_inheritance:
Combining virtual functions and inheritance
===========================================
When combining virtual methods with inheritance, you need to be sure to provide
an override for each method for which you want to allow overrides from derived
python classes. For example, suppose we extend the above ``Animal``/``Dog``
example as follows:
.. code-block:: cpp
class Animal {
public:
virtual std::string go(int n_times) = 0;
virtual std::string name() { return "unknown"; }
};
class Dog : public Animal {
public:
std::string go(int n_times) override {
std::string result;
for (int i=0; i<n_times; ++i)
result += bark() + " ";
return result;
}
virtual std::string bark() { return "woof!"; }
};
then the trampoline class for ``Animal`` must, as described in the previous
section, override ``go()`` and ``name()``, but in order to allow python code to
inherit properly from ``Dog``, we also need a trampoline class for ``Dog`` that
overrides both the added ``bark()`` method *and* the ``go()`` and ``name()``
methods inherited from ``Animal`` (even though ``Dog`` doesn't directly
override the ``name()`` method):
.. code-block:: cpp
class PyAnimal : public Animal {
public:
using Animal::Animal; // Inherit constructors
std::string go(int n_times) override { PYBIND11_OVERLOAD_PURE(std::string, Animal, go, n_times); }
std::string name() override { PYBIND11_OVERLOAD(std::string, Animal, name, ); }
};
class PyDog : public Dog {
public:
using Dog::Dog; // Inherit constructors
std::string go(int n_times) override { PYBIND11_OVERLOAD(std::string, Dog, go, n_times); }
std::string name() override { PYBIND11_OVERLOAD(std::string, Dog, name, ); }
std::string bark() override { PYBIND11_OVERLOAD(std::string, Dog, bark, ); }
};
.. note::
Note the trailing commas in the ``PYBIND11_OVERLOAD`` calls to ``name()``
and ``bark()``. These are needed to portably implement a trampoline for a
function that does not take any arguments. For functions that take
a nonzero number of arguments, the trailing comma must be omitted.
A registered class derived from a pybind11-registered class with virtual
methods requires a similar trampoline class, *even if* it doesn't explicitly
declare or override any virtual methods itself:
.. code-block:: cpp
class Husky : public Dog {};
class PyHusky : public Husky {
public:
using Husky::Husky; // Inherit constructors
std::string go(int n_times) override { PYBIND11_OVERLOAD_PURE(std::string, Husky, go, n_times); }
std::string name() override { PYBIND11_OVERLOAD(std::string, Husky, name, ); }
std::string bark() override { PYBIND11_OVERLOAD(std::string, Husky, bark, ); }
};
There is, however, a technique that can be used to avoid this duplication
(which can be especially helpful for a base class with several virtual
methods). The technique involves using template trampoline classes, as
follows:
.. code-block:: cpp
template <class AnimalBase = Animal> class PyAnimal : public AnimalBase {
public:
using AnimalBase::AnimalBase; // Inherit constructors
std::string go(int n_times) override { PYBIND11_OVERLOAD_PURE(std::string, AnimalBase, go, n_times); }
std::string name() override { PYBIND11_OVERLOAD(std::string, AnimalBase, name, ); }
};
template <class DogBase = Dog> class PyDog : public PyAnimal<DogBase> {
public:
using PyAnimal<DogBase>::PyAnimal; // Inherit constructors
// Override PyAnimal's pure virtual go() with a non-pure one:
std::string go(int n_times) override { PYBIND11_OVERLOAD(std::string, DogBase, go, n_times); }
std::string bark() override { PYBIND11_OVERLOAD(std::string, DogBase, bark, ); }
};
This technique has the advantage of requiring just one trampoline method to be
declared per virtual method and pure virtual method override. It does,
however, require the compiler to generate at least as many methods (and
possibly more, if both pure virtual and overridden pure virtual methods are
exposed, as above).
The classes are then registered with pybind11 using:
.. code-block:: cpp
py::class_<Animal, PyAnimal<>> animal(m, "Animal");
py::class_<Dog, PyDog<>> dog(m, "Dog");
py::class_<Husky, PyDog<Husky>> husky(m, "Husky");
// ... add animal, dog, husky definitions
Note that ``Husky`` did not require a dedicated trampoline template class at
all, since it neither declares any new virtual methods nor provides any pure
virtual method implementations.
With either the repeated-virtuals or templated trampoline methods in place, you
can now create a python class that inherits from ``Dog``:
.. code-block:: python
class ShihTzu(Dog):
def bark(self):
return "yip!"
.. seealso::
See the file :file:`tests/test_virtual_functions.cpp` for complete examples
using both the duplication and templated trampoline approaches.
.. _extended_aliases:
Extended trampoline class functionality
=======================================
.. _extended_class_functionality_forced_trampoline:
Forced trampoline class initialisation
--------------------------------------
The trampoline classes described in the previous sections are, by default, only
initialized when needed. More specifically, they are initialized when a python
class actually inherits from a registered type (instead of merely creating an
instance of the registered type), or when a registered constructor is only
valid for the trampoline class but not the registered class. This is primarily
for performance reasons: when the trampoline class is not needed for anything
except virtual method dispatching, not initializing the trampoline class
improves performance by avoiding needing to do a run-time check to see if the
inheriting python instance has an overloaded method.
Sometimes, however, it is useful to always initialize a trampoline class as an
intermediate class that does more than just handle virtual method dispatching.
For example, such a class might perform extra class initialization, extra
destruction operations, and might define new members and methods to enable a
more python-like interface to a class.
In order to tell pybind11 that it should *always* initialize the trampoline
class when creating new instances of a type, the class constructors should be
declared using ``py::init_alias<Args, ...>()`` instead of the usual
``py::init<Args, ...>()``. This forces construction via the trampoline class,
ensuring member initialization and (eventual) destruction.
.. seealso::
See the file :file:`tests/test_virtual_functions.cpp` for complete examples
showing both normal and forced trampoline instantiation.
Different method signatures
---------------------------
The macros introduced in :ref:`overriding_virtuals` cover most of the standard
use cases when exposing C++ classes to Python. Sometimes it is hard or unwieldy
to create a direct one-on-one mapping between the arguments and method return
type.
An example would be when the C++ signature contains output arguments using
references (see also :ref:`faq_reference_arguments`). Another way of solving
this is to use the body of the trampoline method to convert between the C++
arguments/return value and those of the Python method.
The main building block for doing so is :func:`get_overload`; this function
allows retrieving a method implemented in Python from within the trampoline's
methods. Consider for example a C++ method which has the signature
``bool myMethod(int32_t& value)``, where the return indicates whether
something should be done with the ``value``. This can be made convenient on the
Python side by allowing the Python function to return ``None`` or an ``int``:
.. code-block:: cpp
bool MyClass::myMethod(int32_t& value)
{
pybind11::gil_scoped_acquire gil; // Acquire the GIL while in this scope.
// Try to look up the overloaded method on the Python side.
pybind11::function overload = pybind11::get_overload(this, "myMethod");
if (overload) { // method is found
auto obj = overload(value); // Call the Python function.
if (py::isinstance<py::int_>(obj)) { // check if it returned a Python integer type
value = obj.cast<int32_t>(); // Cast it and assign it to the value.
return true; // Return true; value should be used.
} else {
return false; // Python returned None; the value should not be used.
}
}
return false; // Alternatively return MyClass::myMethod(value);
}
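On the Python side, an override can then simply return ``None`` to leave the
value untouched, or an ``int`` to update it (a hypothetical session, assuming
``MyClass`` is bound with such a trampoline):

.. code-block:: python

    class MyDerived(MyClass):
        def myMethod(self, value):
            if value < 0:
                return None     # keep the current C++ value
            return value * 2    # becomes the new int32_t value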
.. _custom_constructors:
Custom constructors
===================
The syntax for binding constructors was previously introduced, but it only
works when a constructor of the appropriate arguments actually exists on the
C++ side. To extend this to more general cases, pybind11 makes it possible
to bind factory functions as constructors. For example, suppose you have a
class like this:
.. code-block:: cpp
class Example {
private:
Example(int); // private constructor
public:
// Factory function:
static Example create(int a) { return Example(a); }
};
py::class_<Example>(m, "Example")
.def(py::init(&Example::create));
While it is possible to create a straightforward binding of the static
``create`` method, it may sometimes be preferable to expose it as a constructor
on the Python side. This can be accomplished by calling ``.def(py::init(...))``
with the function reference returning the new instance passed as an argument.
It is also possible to use this approach to bind a function returning a new
instance by raw pointer or by the holder (e.g. ``std::unique_ptr``).
The following example shows the different approaches:
.. code-block:: cpp
class Example {
private:
Example(int); // private constructor
public:
// Factory function - returned by value:
static Example create(int a) { return Example(a); }
// These constructors are publicly callable:
Example(double);
Example(int, int);
Example(std::string);
};
py::class_<Example>(m, "Example")
// Bind the factory function as a constructor:
.def(py::init(&Example::create))
// Bind a lambda function returning a pointer wrapped in a holder:
.def(py::init([](std::string arg) {
return std::unique_ptr<Example>(new Example(arg));
}))
// Return a raw pointer:
.def(py::init([](int a, int b) { return new Example(a, b); }))
// You can mix the above with regular C++ constructor bindings as well:
.def(py::init<double>())
;
When the constructor is invoked from Python, pybind11 will call the factory
function and store the resulting C++ instance in the Python instance.
When combining factory-function constructors with :ref:`virtual function
trampolines <overriding_virtuals>` there are two approaches. The first is to
add a constructor to the alias class that takes a base value by
rvalue-reference. If such a constructor is available, it will be used to
construct an alias instance from the value returned by the factory function.
The second option is to provide two factory functions to ``py::init()``: the
first will be invoked when no alias class is required (i.e. when the class is
being used but not inherited from in Python), and the second will be invoked
when an alias is required.
You can also specify a single factory function that always returns an alias
instance: this will result in behaviour similar to ``py::init_alias<...>()``,
as described in the :ref:`extended trampoline class documentation
<extended_aliases>`.
The following example shows the different factory approaches for a class with
an alias:
.. code-block:: cpp
#include <pybind11/pybind11.h>
class Example {
public:
// ...
virtual ~Example() = default;
};
class PyExample : public Example {
public:
using Example::Example;
PyExample(Example &&base) : Example(std::move(base)) {}
};
py::class_<Example, PyExample>(m, "Example")
// Returns an Example pointer. If a PyExample is needed, the Example
// instance will be moved via the extra constructor in PyExample, above.
.def(py::init([]() { return new Example(); }))
// Two callbacks:
.def(py::init([]() { return new Example(); } /* no alias needed */,
[]() { return new PyExample(); } /* alias needed */))
// *Always* returns an alias instance (like py::init_alias<>())
.def(py::init([]() { return new PyExample(); }))
;
Brace initialization
--------------------
``pybind11::init<>`` internally uses C++11 brace initialization to call the
constructor of the target class. This means that it can be used to bind
*implicit* constructors as well:
.. code-block:: cpp
struct Aggregate {
int a;
std::string b;
};
py::class_<Aggregate>(m, "Aggregate")
.def(py::init<int, const std::string &>());
.. note::
Note that brace initialization preferentially invokes constructor overloads
taking a ``std::initializer_list``. In the rare event that this causes an
issue, you can work around it by using ``py::init(...)`` with a lambda
function that constructs the new object as desired.
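A sketch of such a workaround, assuming a hypothetical ``Container`` class whose
``std::initializer_list`` constructor would otherwise be preferred over
``Container(int)``:

.. code-block:: cpp

    py::class_<Container>(m, "Container")
        // Call the intended constructor explicitly instead of relying on
        // brace initialization:
        .def(py::init([](int size) { return Container(size); }));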
.. _classes_with_non_public_destructors:
Non-public destructors
======================
If a class has a private or protected destructor (as might e.g. be the case in
a singleton pattern), a compile error will occur when creating bindings via
pybind11. The underlying issue is that the ``std::unique_ptr`` holder type that
is responsible for managing the lifetime of instances will reference the
destructor even if no deallocations ever take place. In order to expose classes
with private or protected destructors, it is possible to override the holder
type via a holder type argument to ``class_``. Pybind11 provides a helper class
``py::nodelete`` that disables any destructor invocations. In this case, it is
crucial that instances are deallocated on the C++ side to avoid memory leaks.
.. code-block:: cpp
/* ... definition ... */
class MyClass {
private:
~MyClass() { }
};
/* ... binding code ... */
py::class_<MyClass, std::unique_ptr<MyClass, py::nodelete>>(m, "MyClass")
.def(py::init<>());
.. _implicit_conversions:
Implicit conversions
====================
Suppose that instances of two types ``A`` and ``B`` are used in a project, and
that an ``A`` can easily be converted into an instance of type ``B`` (examples of this
could be a fixed and an arbitrary precision number type).
.. code-block:: cpp
py::class_<A>(m, "A")
/// ... members ...
py::class_<B>(m, "B")
.def(py::init<A>())
/// ... members ...
m.def("func",
[](const B &) { /* .... */ }
);
To invoke the function ``func`` using a variable ``a`` containing an ``A``
instance, we'd have to write ``func(B(a))`` in Python. On the other hand, C++
will automatically apply an implicit type conversion, which makes it possible
to directly write ``func(a)``.
In this situation (i.e. where ``B`` has a constructor that converts from
``A``), the following statement enables similar implicit conversions on the
Python side:
.. code-block:: cpp
py::implicitly_convertible<A, B>();
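With this in place, the call below succeeds without an explicit wrap (a
hypothetical session, assuming ``A`` is default-constructible):

.. code-block:: pycon

    >>> a = A()
    >>> func(a)  # 'a' is implicitly converted to a temporary B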
.. note::
Implicit conversions from ``A`` to ``B`` only work when ``B`` is a custom
data type that is exposed to Python via pybind11.
To prevent runaway recursion, implicit conversions are non-reentrant: an
implicit conversion invoked as part of another implicit conversion of the
same type (i.e. from ``A`` to ``B``) will fail.
.. _static_properties:
Static properties
=================
The section on :ref:`properties` discussed the creation of instance properties
that are implemented in terms of C++ getters and setters.
Static properties can also be created in a similar way to expose getters and
setters of static class attributes. Note that the implicit ``self`` argument
also exists in this case and is used to pass the Python ``type`` subclass
instance. This parameter will often not be needed by the C++ side, and the
following example illustrates how to instantiate a lambda getter function
that ignores it:
.. code-block:: cpp
py::class_<Foo>(m, "Foo")
.def_property_readonly_static("foo", [](py::object /* self */) { return Foo(); });
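A writable static property additionally needs a setter; here is a minimal
sketch, assuming a hypothetical static data member ``Foo::answer``:

.. code-block:: cpp

    py::class_<Foo>(m, "Foo")
        .def_property_static("answer",
            [](py::object /* self */) { return Foo::answer; },
            [](py::object /* self */, int value) { Foo::answer = value; });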
Operator overloading
====================
Suppose that we're given the following ``Vector2`` class with a vector addition
and scalar multiplication operation, all implemented using overloaded operators
in C++.
.. code-block:: cpp
class Vector2 {
public:
Vector2(float x, float y) : x(x), y(y) { }
Vector2 operator+(const Vector2 &v) const { return Vector2(x + v.x, y + v.y); }
Vector2 operator*(float value) const { return Vector2(x * value, y * value); }
Vector2& operator+=(const Vector2 &v) { x += v.x; y += v.y; return *this; }
Vector2& operator*=(float v) { x *= v; y *= v; return *this; }
friend Vector2 operator*(float f, const Vector2 &v) {
return Vector2(f * v.x, f * v.y);
}
std::string toString() const {
return "[" + std::to_string(x) + ", " + std::to_string(y) + "]";
}
private:
float x, y;
};
The following snippet shows how the above operators can be conveniently exposed
to Python.
.. code-block:: cpp
#include <pybind11/operators.h>
PYBIND11_MODULE(example, m) {
py::class_<Vector2>(m, "Vector2")
.def(py::init<float, float>())
.def(py::self + py::self)
.def(py::self += py::self)
.def(py::self *= float())
.def(float() * py::self)
.def(py::self * float())
.def(-py::self)
.def("__repr__", &Vector2::toString);
}
Note that a line like
.. code-block:: cpp
.def(py::self * float())
is really just shorthand notation for
.. code-block:: cpp
.def("__mul__", [](const Vector2 &a, float b) {
return a * b;
}, py::is_operator())
This can be useful for exposing additional operators that don't exist on the
C++ side, or to perform other types of customization. The ``py::is_operator``
flag marker is needed to inform pybind11 that this is an operator, which
returns ``NotImplemented`` when invoked with incompatible arguments rather than
throwing a type error.
.. note::
To use the more convenient ``py::self`` notation, the additional
header file :file:`pybind11/operators.h` must be included.
.. seealso::
The file :file:`tests/test_operator_overloading.cpp` contains a
complete example that demonstrates how to work with overloaded operators in
more detail.
.. _pickling:
Pickling support
================
Python's ``pickle`` module provides a powerful facility to serialize and
de-serialize a Python object graph into a binary data stream. To pickle and
unpickle C++ classes using pybind11, a ``py::pickle()`` definition must be
provided. Suppose the class in question has the following signature:
.. code-block:: cpp
class Pickleable {
public:
Pickleable(const std::string &value) : m_value(value) { }
const std::string &value() const { return m_value; }
void setExtra(int extra) { m_extra = extra; }
int extra() const { return m_extra; }
private:
std::string m_value;
int m_extra = 0;
};
Pickling support in Python is enabled by defining the ``__setstate__`` and
``__getstate__`` methods [#f3]_. For pybind11 classes, use ``py::pickle()``
to bind these two functions:
.. code-block:: cpp
py::class_<Pickleable>(m, "Pickleable")
.def(py::init<std::string>())
.def("value", &Pickleable::value)
.def("extra", &Pickleable::extra)
.def("setExtra", &Pickleable::setExtra)
.def(py::pickle(
[](const Pickleable &p) { // __getstate__
/* Return a tuple that fully encodes the state of the object */
return py::make_tuple(p.value(), p.extra());
},
[](py::tuple t) { // __setstate__
if (t.size() != 2)
throw std::runtime_error("Invalid state!");
/* Create a new C++ instance */
Pickleable p(t[0].cast<std::string>());
/* Assign any additional state */
p.setExtra(t[1].cast<int>());
return p;
}
));
The ``__setstate__`` part of the ``py::pickle()`` definition follows the same
rules as the single-argument version of ``py::init()``. The return type can be
a value, pointer or holder type. See :ref:`custom_constructors` for details.
An instance can now be pickled as follows:
.. code-block:: python
try:
import cPickle as pickle # Use cPickle on Python 2.7
except ImportError:
import pickle
p = Pickleable("test_value")
p.setExtra(15)
data = pickle.dumps(p, 2)
Note that only the cPickle module is supported on Python 2.7. The second
argument to ``dumps`` is also crucial: it selects the pickle protocol version
2, since the older version 1 is not supported. Newer versions are also fine—for
instance, specify ``-1`` to always use the latest available version. Beware:
failure to follow these instructions will cause important pybind11 memory
allocation routines to be skipped during unpickling, which will likely lead to
memory corruption and/or segmentation faults.
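Unpickling then restores a fresh instance from the serialized state; continuing
the session above:

.. code-block:: python

    p2 = pickle.loads(data)
    assert p2.value() == "test_value"
    assert p2.extra() == 15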
.. seealso::
The file :file:`tests/test_pickling.cpp` contains a complete example
that demonstrates how to pickle and unpickle types using pybind11 in more
detail.
.. [#f3] http://docs.python.org/3/library/pickle.html#pickling-class-instances
Multiple Inheritance
====================
pybind11 can create bindings for types that derive from multiple base types
(aka. *multiple inheritance*). To do so, specify all bases in the template
arguments of the ``class_`` declaration:
.. code-block:: cpp
py::class_<MyType, BaseType1, BaseType2, BaseType3>(m, "MyType")
...
The base types can be specified in arbitrary order, and they can even be
interspersed with alias types and holder types (discussed earlier in this
document)---pybind11 will automatically find out which is which. The only
requirement is that the first template argument is the type to be declared.
It is also permitted to inherit multiply from exported C++ classes in Python,
as well as inheriting from multiple Python and/or pybind11-exported classes.
There is one caveat regarding the implementation of this feature:
When only one base type is specified for a C++ type that actually has multiple
bases, pybind11 will assume that it does not participate in multiple
inheritance, which can lead to undefined behavior. In such cases, add the tag
``multiple_inheritance`` to the class constructor:
.. code-block:: cpp
py::class_<MyType, BaseType2>(m, "MyType", py::multiple_inheritance());
The tag is redundant and does not need to be specified when multiple base types
are listed.
.. _module_local:
Module-local class bindings
===========================
When creating a binding for a class, pybind11 by default makes that binding
"global" across modules. What this means is that a type defined in one module
can be returned from any module resulting in the same Python type. For
example, this allows the following:
.. code-block:: cpp
// In the module1.cpp binding code for module1:
py::class_<Pet>(m, "Pet")
.def(py::init<std::string>())
.def_readonly("name", &Pet::name);
.. code-block:: cpp
// In the module2.cpp binding code for module2:
m.def("create_pet", [](std::string name) { return new Pet(name); });
.. code-block:: pycon
>>> from module1 import Pet
>>> from module2 import create_pet
>>> pet1 = Pet("Kitty")
>>> pet2 = create_pet("Doggy")
>>> pet2.name()
'Doggy'
When writing binding code for a library, this is usually desirable: this
allows, for example, splitting up a complex library into multiple Python
modules.
In some cases, however, this can cause conflicts. For example, suppose two
unrelated modules make use of an external C++ library and each provide custom
bindings for one of that library's classes. This will result in an error when
a Python program attempts to import both modules (directly or indirectly)
because of conflicting definitions on the external type:
.. code-block:: cpp
// dogs.cpp
// Binding for external library class:
py::class_<pets::Pet>(m, "Pet")
.def("name", &pets::Pet::name);
// Binding for local extension class:
py::class_<Dog, pets::Pet>(m, "Dog")
.def(py::init<std::string>());
.. code-block:: cpp
// cats.cpp, in a completely separate project from the above dogs.cpp.
// Binding for external library class:
py::class_<pets::Pet>(m, "Pet")
.def("get_name", &pets::Pet::name);
// Binding for local extending class:
py::class_<Cat, pets::Pet>(m, "Cat")
.def(py::init<std::string>());
.. code-block:: pycon
>>> import cats
>>> import dogs
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: generic_type: type "Pet" is already registered!
To get around this, you can tell pybind11 to keep the external class binding
localized to the module by passing the ``py::module_local()`` attribute into
the ``py::class_`` constructor:
.. code-block:: cpp
// Pet binding in dogs.cpp:
py::class_<pets::Pet>(m, "Pet", py::module_local())
.def("name", &pets::Pet::name);
.. code-block:: cpp
// Pet binding in cats.cpp:
py::class_<pets::Pet>(m, "Pet", py::module_local())
.def("get_name", &pets::Pet::name);
This makes the Python-side ``dogs.Pet`` and ``cats.Pet`` into distinct classes,
avoiding the conflict and allowing both modules to be loaded. C++ code in the
``dogs`` module that casts or returns a ``Pet`` instance will result in a
``dogs.Pet`` Python instance, while C++ code in the ``cats`` module will result
in a ``cats.Pet`` Python instance.
This does come with two caveats, however: First, external modules cannot return
or cast a ``Pet`` instance to Python (unless they also provide their own local
bindings). Second, from the Python point of view they are two distinct classes.
Note that the locality only applies in the C++ -> Python direction. When
passing such a ``py::module_local`` type into a C++ function, the module-local
classes are still considered. This means that if the following function is
added to any module (including but not limited to the ``cats`` and ``dogs``
modules above) it will be callable with either a ``dogs.Pet`` or ``cats.Pet``
argument:
.. code-block:: cpp
m.def("pet_name", [](const pets::Pet &pet) { return pet.name(); });
For example, suppose the above function is added to each of ``cats.cpp``,
``dogs.cpp`` and ``frogs.cpp`` (where ``frogs.cpp`` is some other module that
does *not* bind ``Pets`` at all).
.. code-block:: pycon
>>> import cats, dogs, frogs # No error because of the added py::module_local()
>>> mycat, mydog = cats.Cat("Fluffy"), dogs.Dog("Rover")
>>> (cats.pet_name(mycat), dogs.pet_name(mydog))
('Fluffy', 'Rover')
>>> (cats.pet_name(mydog), dogs.pet_name(mycat), frogs.pet_name(mycat))
('Rover', 'Fluffy', 'Fluffy')
It is possible to use ``py::module_local()`` registrations in one module even
if another module registers the same type globally: within the module with the
module-local definition, all C++ instances will be cast to the associated bound
Python type. In other modules any such values are converted to the global
Python type created elsewhere.
.. note::
STL bindings (as provided via the optional :file:`pybind11/stl_bind.h`
header) apply ``py::module_local`` by default when the bound type might
conflict with other modules; see :ref:`stl_bind` for details.
.. note::
The localization of the bound types is actually tied to the shared object
or binary generated by the compiler/linker. For typical modules created
with ``PYBIND11_MODULE()``, this distinction is not significant. It is
possible, however, when :ref:`embedding` to embed multiple modules in the
same binary (see :ref:`embedding_modules`). In such a case, the
localization will apply across all embedded modules within the same binary.
.. seealso::
The file :file:`tests/test_local_bindings.cpp` contains additional examples
that demonstrate how ``py::module_local()`` works.
Binding protected member functions
==================================
It's normally not possible to expose ``protected`` member functions to Python:
.. code-block:: cpp
class A {
protected:
int foo() const { return 42; }
};
py::class_<A>(m, "A")
.def("foo", &A::foo); // error: 'foo' is a protected member of 'A'
On one hand, this is good because non-``public`` members aren't meant to be
accessed from the outside. But we may want to make use of ``protected``
functions in derived Python classes.
The following pattern makes this possible:
.. code-block:: cpp
class A {
protected:
int foo() const { return 42; }
};
class Publicist : public A { // helper type for exposing protected functions
public:
using A::foo; // inherited with different access modifier
};
py::class_<A>(m, "A") // bind the primary class
.def("foo", &Publicist::foo); // expose protected methods via the publicist
This works because ``&Publicist::foo`` is exactly the same function as
``&A::foo`` (same signature and address), just with a different access
modifier. The only purpose of the ``Publicist`` helper class is to make
the function name ``public``.
If the intent is to expose ``protected`` ``virtual`` functions which can be
overridden in Python, the publicist pattern can be combined with the previously
described trampoline:
.. code-block:: cpp
class A {
public:
virtual ~A() = default;
protected:
virtual int foo() const { return 42; }
};
class Trampoline : public A {
public:
int foo() const override { PYBIND11_OVERLOAD(int, A, foo, ); }
};
class Publicist : public A {
public:
using A::foo;
};
py::class_<A, Trampoline>(m, "A") // <-- `Trampoline` here
.def("foo", &Publicist::foo); // <-- `Publicist` here, not `Trampoline`!
.. note::
MSVC 2015 has a compiler bug (fixed in version 2017) which
requires a more explicit function binding in the form of
``.def("foo", static_cast<int (A::*)() const>(&Publicist::foo));``
where ``int (A::*)() const`` is the type of ``A::foo``.
Custom automatic downcasters
============================
As explained in :ref:`inheritance`, pybind11 comes with built-in
understanding of the dynamic type of polymorphic objects in C++; that
is, returning a Pet to Python produces a Python object that knows it's
wrapping a Dog, if Pet has virtual methods and pybind11 knows about
Dog and this Pet is in fact a Dog. Sometimes, you might want to
provide this automatic downcasting behavior when creating bindings for
a class hierarchy that does not use standard C++ polymorphism, such as
LLVM [#f4]_. As long as there's some way to determine at runtime
whether a downcast is safe, you can proceed by specializing the
``pybind11::polymorphic_type_hook`` template:
.. code-block:: cpp
enum class PetKind { Cat, Dog, Zebra };
struct Pet { // Not polymorphic: has no virtual methods
const PetKind kind;
int age = 0;
protected:
Pet(PetKind _kind) : kind(_kind) {}
};
struct Dog : Pet {
Dog() : Pet(PetKind::Dog) {}
std::string sound = "woof!";
std::string bark() const { return sound; }
};
namespace pybind11 {
template<> struct polymorphic_type_hook<Pet> {
static const void *get(const Pet *src, const std::type_info*& type) {
// note that src may be nullptr
if (src && src->kind == PetKind::Dog) {
type = &typeid(Dog);
return static_cast<const Dog*>(src);
}
return src;
}
};
} // namespace pybind11
When pybind11 wants to convert a C++ pointer of type ``Base*`` to a
Python object, it calls ``polymorphic_type_hook<Base>::get()`` to
determine if a downcast is possible. The ``get()`` function should use
whatever runtime information is available to determine if its ``src``
parameter is in fact an instance of some class ``Derived`` that
inherits from ``Base``. If it finds such a ``Derived``, it sets ``type
= &typeid(Derived)`` and returns a pointer to the ``Derived`` object
that contains ``src``. Otherwise, it just returns ``src``, leaving
``type`` at its default value of nullptr. If you set ``type`` to a
type that pybind11 doesn't know about, no downcasting will occur, and
the original ``src`` pointer will be used with its static type
``Base*``.
It is critical that the returned pointer and ``type`` argument of
``get()`` agree with each other: if ``type`` is set to something
non-null, the returned pointer must point to the start of an object
whose type is ``type``. If the hierarchy being exposed uses only
single inheritance, a simple ``return src;`` will achieve this just
fine, but in the general case, you must cast ``src`` to the
appropriate derived-class pointer (e.g. using
``static_cast<const Derived *>(src)``) before allowing it to be returned as a
``void*``.
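For completeness, a sketch of bindings that put the hook above to work (the
factory function ``make_pet`` is hypothetical):

.. code-block:: cpp

    py::class_<Pet>(m, "Pet")
        .def_readonly("age", &Pet::age);
    py::class_<Dog, Pet>(m, "Dog")
        .def("bark", &Dog::bark);

    // The static return type is Pet*, but the hook lets pybind11 produce a
    // Python object of the registered type Dog when the instance is a Dog:
    m.def("make_pet", []() -> Pet * { return new Dog(); });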
.. [#f4] https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html
.. note::
pybind11's standard support for downcasting objects whose types
have virtual methods is implemented using
``polymorphic_type_hook`` too, using the standard C++ ability to
determine the most-derived type of a polymorphic object using
``typeid()`` and to cast a base pointer to that most-derived type
(even if you don't know what it is) using ``dynamic_cast<void*>``.
.. seealso::
The file :file:`tests/test_tagbased_polymorphic.cpp` contains a
more complete example, including a demonstration of how to provide
automatic downcasting for an entire class hierarchy without
writing one get() function for each class.
.. _embedding:
Embedding the interpreter
#########################
While pybind11 is mainly focused on extending Python using C++, it's also
possible to do the reverse: embed the Python interpreter into a C++ program.
All of the other documentation pages still apply here, so refer to them for
general pybind11 usage. This section will cover a few extra things required
for embedding.
Getting started
===============
A basic executable with an embedded interpreter can be created with just a few
lines of CMake and the ``pybind11::embed`` target, as shown below. For more
information, see :doc:`/compiling`.
.. code-block:: cmake
cmake_minimum_required(VERSION 3.0)
project(example)
find_package(pybind11 REQUIRED) # or `add_subdirectory(pybind11)`
add_executable(example main.cpp)
target_link_libraries(example PRIVATE pybind11::embed)
The essential structure of the ``main.cpp`` file looks like this:
.. code-block:: cpp
#include <pybind11/embed.h> // everything needed for embedding
namespace py = pybind11;
int main() {
py::scoped_interpreter guard{}; // start the interpreter and keep it alive
py::print("Hello, World!"); // use the Python API
}
The interpreter must be initialized before using any Python API, which includes
all the functions and classes in pybind11. The RAII guard class `scoped_interpreter`
takes care of the interpreter lifetime. After the guard is destroyed, the interpreter
shuts down and clears its memory. No Python functions can be called after this.
Executing Python code
=====================
There are a few different ways to run Python code. One option is to use `eval`,
`exec` or `eval_file`, as explained in :ref:`eval`. Here is a quick example in
the context of an executable with an embedded interpreter:
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
int main() {
py::scoped_interpreter guard{};
py::exec(R"(
kwargs = dict(name="World", number=42)
message = "Hello, {name}! The answer is {number}".format(**kwargs)
print(message)
)");
}
Alternatively, similar results can be achieved using pybind11's API (see
:doc:`/advanced/pycpp/index` for more details).
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
using namespace py::literals;
int main() {
py::scoped_interpreter guard{};
auto kwargs = py::dict("name"_a="World", "number"_a=42);
auto message = "Hello, {name}! The answer is {number}"_s.format(**kwargs);
py::print(message);
}
The two approaches can also be combined:
.. code-block:: cpp
#include <pybind11/embed.h>
#include <iostream>
namespace py = pybind11;
using namespace py::literals;
int main() {
py::scoped_interpreter guard{};
auto locals = py::dict("name"_a="World", "number"_a=42);
py::exec(R"(
message = "Hello, {name}! The answer is {number}".format(**locals())
)", py::globals(), locals);
auto message = locals["message"].cast<std::string>();
std::cout << message;
}
Importing modules
=================
Python modules can be imported using `module::import()`:
.. code-block:: cpp
py::module sys = py::module::import("sys");
py::print(sys.attr("path"));
For convenience, the current working directory is included in ``sys.path`` when
embedding the interpreter. This makes it easy to import local Python files:
.. code-block:: python
"""calc.py located in the working directory"""
def add(i, j):
return i + j
.. code-block:: cpp
py::module calc = py::module::import("calc");
py::object result = calc.attr("add")(1, 2);
int n = result.cast<int>();
assert(n == 3);
Modules can be reloaded using `module::reload()` if the source is modified e.g.
by an external process. This can be useful in scenarios where the application
imports a user defined data processing script which needs to be updated after
changes by the user. Note that this function does not reload modules recursively.
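A minimal sketch, continuing the ``calc.py`` example above:

.. code-block:: cpp

    py::module calc = py::module::import("calc");
    // ... calc.py is modified on disk by the user or another process ...
    calc.reload();  // re-executes the module source; not recursive
    py::object result = calc.attr("add")(1, 2);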
.. _embedding_modules:
Adding embedded modules
=======================
Embedded binary modules can be added using the `PYBIND11_EMBEDDED_MODULE` macro.
Note that the definition must be placed at global scope. They can be imported
like any other module.
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
PYBIND11_EMBEDDED_MODULE(fast_calc, m) {
// `m` is a `py::module` which is used to bind functions and classes
m.def("add", [](int i, int j) {
return i + j;
});
}
int main() {
py::scoped_interpreter guard{};
auto fast_calc = py::module::import("fast_calc");
auto result = fast_calc.attr("add")(1, 2).cast<int>();
assert(result == 3);
}
Unlike extension modules where only a single binary module can be created, on
the embedded side an unlimited number of modules can be added using multiple
`PYBIND11_EMBEDDED_MODULE` definitions (as long as they have unique names).
These modules are added to Python's list of builtins, so they can also be
imported in pure Python files loaded by the interpreter. Everything interacts
naturally:
.. code-block:: python
"""py_module.py located in the working directory"""
import cpp_module
a = cpp_module.a
b = a + 1
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
using namespace py::literals;
PYBIND11_EMBEDDED_MODULE(cpp_module, m) {
m.attr("a") = 1;
}
int main() {
py::scoped_interpreter guard{};
auto py_module = py::module::import("py_module");
auto locals = py::dict("fmt"_a="{} + {} = {}", **py_module.attr("__dict__"));
assert(locals["a"].cast<int>() == 1);
assert(locals["b"].cast<int>() == 2);
py::exec(R"(
c = a + b
message = fmt.format(a, b, c)
)", py::globals(), locals);
assert(locals["c"].cast<int>() == 3);
assert(locals["message"].cast<std::string>() == "1 + 2 = 3");
}
Interpreter lifetime
====================
The Python interpreter shuts down when `scoped_interpreter` is destroyed. After
this, creating a new instance will restart the interpreter. Alternatively, the
`initialize_interpreter` / `finalize_interpreter` pair of functions can be used
to directly set the state at any time.
Modules created with pybind11 can be safely re-initialized after the interpreter
has been restarted. However, this may not apply to third-party extension modules.
The issue is that Python itself cannot completely unload extension modules and
there are several caveats with regard to interpreter restarting. In short, not
all memory may be freed, either due to Python reference cycles or user-created
global data. All the details can be found in the CPython documentation.
.. warning::
Creating two concurrent `scoped_interpreter` guards is a fatal error. So is
calling `initialize_interpreter` for a second time after the interpreter
has already been initialized.
Do not use the raw CPython API functions ``Py_Initialize`` and
``Py_Finalize`` as these do not properly handle the lifetime of
pybind11's internal data.
Sub-interpreter support
=======================
Creating multiple copies of `scoped_interpreter` is not possible because it
represents the main Python interpreter. Sub-interpreters are something different
and they do permit the existence of multiple interpreters. This is an advanced
feature of the CPython API and should be handled with care. pybind11 does not
currently offer a C++ interface for sub-interpreters, so refer to the CPython
documentation for all the details regarding this feature.
We'll just mention a couple of caveats of the sub-interpreter support in pybind11:
1. Sub-interpreters will not receive independent copies of embedded modules.
Instead, these are shared and modifications in one interpreter may be
reflected in another.
2. Managing multiple threads, multiple interpreters and the GIL can be
challenging and there are several caveats here, even within the pure
CPython API (please refer to the Python docs for details). As for
pybind11, keep in mind that `gil_scoped_release` and `gil_scoped_acquire`
do not take sub-interpreters into account.
Miscellaneous
#############
.. _macro_notes:
General notes regarding convenience macros
==========================================
pybind11 provides a few convenience macros such as
:func:`PYBIND11_DECLARE_HOLDER_TYPE` and ``PYBIND11_OVERLOAD_*``. Since these
are "just" macros that are evaluated in the preprocessor (which has no concept
of types), they *will* get confused by commas in a template argument; for
example, consider:
.. code-block:: cpp
PYBIND11_OVERLOAD(MyReturnType<T1, T2>, Class<T3, T4>, func)
The C preprocessor interprets this as five arguments (with new
arguments beginning after each comma) rather than three. To get around this,
there are two alternatives: you can use a type alias, or you can wrap the type
using the ``PYBIND11_TYPE`` macro:
.. code-block:: cpp
// Version 1: using a type alias
using ReturnType = MyReturnType<T1, T2>;
using ClassType = Class<T3, T4>;
PYBIND11_OVERLOAD(ReturnType, ClassType, func);
// Version 2: using the PYBIND11_TYPE macro:
PYBIND11_OVERLOAD(PYBIND11_TYPE(MyReturnType<T1, T2>),
PYBIND11_TYPE(Class<T3, T4>), func)
The ``PYBIND11_MAKE_OPAQUE`` macro does *not* require the above workarounds.
.. _gil:
Global Interpreter Lock (GIL)
=============================
When calling a C++ function from Python, the GIL is always held.
The classes :class:`gil_scoped_release` and :class:`gil_scoped_acquire` can be
used to acquire and release the global interpreter lock in the body of a C++
function call. In this way, long-running C++ code can be parallelized using
multiple Python threads. Taking :ref:`overriding_virtuals` as an example, this
could be realized as follows (important changes highlighted):
.. code-block:: cpp
:emphasize-lines: 8,9,31,32
class PyAnimal : public Animal {
public:
/* Inherit the constructors */
using Animal::Animal;
/* Trampoline (need one for each virtual function) */
std::string go(int n_times) {
/* Acquire GIL before calling Python code */
py::gil_scoped_acquire acquire;
PYBIND11_OVERLOAD_PURE(
std::string, /* Return type */
Animal, /* Parent class */
go, /* Name of function */
n_times /* Argument(s) */
);
}
};
PYBIND11_MODULE(example, m) {
py::class_<Animal, PyAnimal> animal(m, "Animal");
animal
.def(py::init<>())
.def("go", &Animal::go);
py::class_<Dog>(m, "Dog", animal)
.def(py::init<>());
m.def("call_go", [](Animal *animal) -> std::string {
/* Release GIL before calling into (potentially long-running) C++ code */
py::gil_scoped_release release;
return call_go(animal);
});
}
The ``call_go`` wrapper can also be simplified using the `call_guard` policy
(see :ref:`call_policies`) which yields the same result:
.. code-block:: cpp
m.def("call_go", &call_go, py::call_guard<py::gil_scoped_release>());
Binding sequence data types, iterators, the slicing protocol, etc.
==================================================================
Please refer to the supplemental example for details.
.. seealso::
The file :file:`tests/test_sequences_and_iterators.cpp` contains a
complete example that shows how to bind a sequence data type, including
length queries (``__len__``), iterators (``__iter__``), the slicing
protocol and other kinds of useful operations.
Partitioning code over multiple extension modules
=================================================
It's straightforward to split binding code over multiple extension modules,
while referencing types that are declared elsewhere. Everything "just" works
without any special precautions. One exception to this rule occurs when
extending a type declared in another extension module. Recall the basic example
from Section :ref:`inheritance`.
.. code-block:: cpp
py::class_<Pet> pet(m, "Pet");
pet.def(py::init<const std::string &>())
.def_readwrite("name", &Pet::name);
py::class_<Dog>(m, "Dog", pet /* <- specify parent */)
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Suppose now that ``Pet`` bindings are defined in a module named ``basic``,
whereas the ``Dog`` bindings are defined somewhere else. The challenge is of
course that the variable ``pet`` is not available anymore though it is needed
to indicate the inheritance relationship to the constructor of ``class_<Dog>``.
However, it can be acquired as follows:
.. code-block:: cpp
py::object pet = (py::object) py::module::import("basic").attr("Pet");
py::class_<Dog>(m, "Dog", pet)
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Alternatively, you can specify the base class as a template parameter option to
``class_``, which performs an automated lookup of the corresponding Python
type. Like the above code, however, this also requires invoking the ``import``
function once to ensure that the pybind11 binding code of the module ``basic``
has been executed:
.. code-block:: cpp
py::module::import("basic");
py::class_<Dog, Pet>(m, "Dog")
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Naturally, both methods will fail when there are cyclic dependencies.
Note that pybind11 code compiled with hidden-by-default symbol visibility (e.g.
via the command line flag ``-fvisibility=hidden`` on GCC/Clang), which is
required for proper pybind11 functionality, can interfere with the ability to
access types defined in another extension module. Working around this requires
manually exporting types that are accessed by multiple extension modules;
pybind11 provides a macro to do just this:
.. code-block:: cpp
class PYBIND11_EXPORT Dog : public Animal {
...
};
Note also that it is possible (although it would rarely be required) to share
arbitrary C++ objects between extension modules at runtime. Internal library
data is shared between modules using capsule machinery [#f6]_, which can also
be utilized for
storing, modifying and accessing user-defined data. Note that an extension module
will "see" other extensions' data if and only if they were built with the same
pybind11 version. Consider the following example:
.. code-block:: cpp
auto data = (MyData *) py::get_shared_data("mydata");
if (!data)
data = (MyData *) py::set_shared_data("mydata", new MyData(42));
If the above snippet was used in several separately compiled extension modules,
the first one to be imported would create a ``MyData`` instance and associate
a ``"mydata"`` key with a pointer to it. Extensions that are imported later
would be then able to access the data behind the same pointer.
.. [#f6] https://docs.python.org/3/extending/extending.html#using-capsules
Module Destructors
==================
pybind11 does not provide an explicit mechanism to invoke cleanup code at
module destruction time. In rare cases where such functionality is required, it
is possible to emulate it using Python capsules or weak references with a
destruction callback.
.. code-block:: cpp
auto cleanup_callback = []() {
// perform cleanup here -- this function is called with the GIL held
};
m.add_object("_cleanup", py::capsule(cleanup_callback));
This approach has the potential downside that instances of classes exposed
within the module may still be alive when the cleanup callback is invoked
(whether this is acceptable will generally depend on the application).
Alternatively, the capsule may also be stashed within a type object, which
ensures that it is not called before all instances of that type have been
collected:
.. code-block:: cpp
auto cleanup_callback = []() { /* ... */ };
m.attr("BaseClass").attr("_cleanup") = py::capsule(cleanup_callback);
Both approaches also expose a potentially dangerous ``_cleanup`` attribute in
Python, which may be undesirable from an API standpoint (a premature explicit
call from Python might lead to undefined behavior). Yet another approach that
avoids this issue involves using a weak reference with a cleanup callback:
.. code-block:: cpp
// Register a callback function that is invoked when the BaseClass object is collected
py::cpp_function cleanup_callback(
[](py::handle weakref) {
// perform cleanup here -- this function is called with the GIL held
weakref.dec_ref(); // release weak reference
}
);
// Create a weak reference with a cleanup callback and initially leak it
(void) py::weakref(m.attr("BaseClass"), cleanup_callback).release();
.. note::
PyPy (at least version 5.9) does not garbage collect objects when the
interpreter exits. An alternative approach (which also works on CPython) is to use
the :py:mod:`atexit` module [#f7]_, for example:
.. code-block:: cpp
auto atexit = py::module::import("atexit");
atexit.attr("register")(py::cpp_function([]() {
// perform cleanup here -- this function is called with the GIL held
}));
.. [#f7] https://docs.python.org/3/library/atexit.html
Generating documentation using Sphinx
=====================================
Sphinx [#f4]_ has the ability to inspect the signatures and documentation
strings in pybind11-based extension modules to automatically generate beautiful
documentation in a variety of formats. The python_example repository [#f5]_
contains a simple example which uses this approach.
There are two potential gotchas when using this approach: first, make sure that
the resulting strings do not contain any :kbd:`TAB` characters, which break the
docstring parsing routines. You may want to use C++11 raw string literals,
which are convenient for multi-line comments. Conveniently, any excess
indentation will automatically be removed by Sphinx. However, for this to
work, it is important that all lines are indented consistently, i.e.:
.. code-block:: cpp
// ok
m.def("foo", &foo, R"mydelimiter(
The foo function
Parameters
----------
)mydelimiter");
// *not ok*
m.def("foo", &foo, R"mydelimiter(The foo function
Parameters
----------
)mydelimiter");
By default, pybind11 automatically generates and prepends a signature to the docstring of a function
registered with ``module::def()`` and ``class_::def()``. Sometimes this
behavior is not desirable, because you want to provide your own signature or remove
the docstring completely to exclude the function from the Sphinx documentation.
The class ``options`` allows you to selectively suppress auto-generated signatures:
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
py::options options;
options.disable_function_signatures();
m.def("add", [](int a, int b) { return a + b; }, "A function which adds two numbers");
}
Note that changes to the settings affect only function bindings created during the
lifetime of the ``options`` instance. When it goes out of scope at the end of the module's init function,
the default settings are restored to prevent unwanted side effects.
.. [#f4] http://www.sphinx-doc.org
.. [#f5] http://github.com/pybind/python_example
Utilities
#########
Using Python's print function in C++
====================================
The usual way to write output in C++ is using ``std::cout`` while in Python one
would use ``print``. Since these methods use different buffers, mixing them can
lead to output order issues. To resolve this, pybind11 modules can use the
:func:`py::print` function which writes to Python's ``sys.stdout`` for consistency.
Python's ``print`` function is replicated in the C++ API including optional
keyword arguments ``sep``, ``end``, ``file``, ``flush``. Everything works as
expected in Python:
.. code-block:: cpp
py::print(1, 2.0, "three"); // 1 2.0 three
py::print(1, 2.0, "three", "sep"_a="-"); // 1-2.0-three
auto args = py::make_tuple("unpacked", true);
py::print("->", *args, "end"_a="<-"); // -> unpacked True <-
.. _ostream_redirect:
Capturing standard output from ostream
======================================
Often, a library will use the streams ``std::cout`` and ``std::cerr`` to print,
but this does not play well with Python's standard ``sys.stdout`` and ``sys.stderr``
redirection. Replacing a library's printing with `py::print <print>` may not
be feasible. This can be fixed using a guard around the library function that
redirects output to the corresponding Python streams:
.. code-block:: cpp
#include <pybind11/iostream.h>
...
// Add a scoped redirect for your noisy code
m.def("noisy_func", []() {
py::scoped_ostream_redirect stream(
std::cout, // std::ostream&
py::module::import("sys").attr("stdout") // Python output
);
call_noisy_func();
});
This method respects flushes on the output streams and will flush if needed
when the scoped guard is destroyed. This allows the output to be redirected in
real time, such as to a Jupyter notebook. The two arguments, the C++ stream and
the Python output, are optional, and default to standard output if not given. An
extra type, `py::scoped_estream_redirect <scoped_estream_redirect>`, is identical
except for defaulting to ``std::cerr`` and ``sys.stderr``; this can be useful with
`py::call_guard`, which allows multiple items, but uses the default constructor:
.. code-block:: cpp
// Alternative: Call single function using call guard
m.def("noisy_func", &call_noisy_function,
py::call_guard<py::scoped_ostream_redirect,
py::scoped_estream_redirect>());
The redirection can also be done in Python with the addition of a context
manager, using the `py::add_ostream_redirect() <add_ostream_redirect>` function:
.. code-block:: cpp
py::add_ostream_redirect(m, "ostream_redirect");
The name in Python defaults to ``ostream_redirect`` if no name is passed. This
creates the following context manager in Python:
.. code-block:: python
with ostream_redirect(stdout=True, stderr=True):
noisy_function()
It defaults to redirecting both streams, though you can use the keyword
arguments to disable one of the streams if needed.
.. note::
The above methods will not redirect C-level output to file descriptors, such
as ``fprintf``. For those cases, you'll need to redirect the file
descriptors either directly in C or with Python's ``os.dup2`` function
in an operating-system dependent way.
.. _eval:
Evaluating Python expressions from strings and files
====================================================
pybind11 provides the `eval`, `exec` and `eval_file` functions to evaluate
Python expressions and statements. The following example illustrates how they
can be used.
.. code-block:: cpp
// At beginning of file
#include <pybind11/eval.h>
...
// Evaluate in scope of main module
py::object scope = py::module::import("__main__").attr("__dict__");
// Evaluate an isolated expression
int result = py::eval("my_variable + 10", scope).cast<int>();
// Evaluate a sequence of statements
py::exec(
"print('Hello')\n"
"print('world!');",
scope);
// Evaluate the statements in an separate Python file on disk
py::eval_file("script.py", scope);
C++11 raw string literals are also supported and quite handy for this purpose.
The only requirement is that the first statement must be on a new line following
the raw string delimiter ``R"(``, ensuring all lines have common leading indent:
.. code-block:: cpp
py::exec(R"(
x = get_answer()
if x == 42:
print('Hello World!')
else:
print('Bye!')
)", scope
);
.. note::
`eval` and `eval_file` accept a template parameter that describes how the
string/file should be interpreted. Possible choices include ``eval_expr``
(isolated expression), ``eval_single_statement`` (a single statement, return
value is always ``none``), and ``eval_statements`` (sequence of statements,
return value is always ``none``). `eval` defaults to ``eval_expr``,
`eval_file` defaults to ``eval_statements`` and `exec` is just a shortcut
for ``eval<eval_statements>``.
.. _numpy:
NumPy
#####
Buffer protocol
===============
Python supports an extremely general and convenient approach for exchanging
data between plugin libraries. Types can expose a buffer view [#f2]_, which
provides fast direct access to the raw internal data representation. Suppose we
want to bind the following simplistic Matrix class:
.. code-block:: cpp
class Matrix {
public:
Matrix(size_t rows, size_t cols) : m_rows(rows), m_cols(cols) {
m_data = new float[rows*cols];
}
float *data() { return m_data; }
size_t rows() const { return m_rows; }
size_t cols() const { return m_cols; }
private:
size_t m_rows, m_cols;
float *m_data;
};
The following binding code exposes the ``Matrix`` contents as a buffer object,
making it possible to cast Matrices into NumPy arrays. It is even possible to
completely avoid copy operations with Python expressions like
``np.array(matrix_instance, copy = False)``.
.. code-block:: cpp
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
.def_buffer([](Matrix &m) -> py::buffer_info {
return py::buffer_info(
m.data(), /* Pointer to buffer */
sizeof(float), /* Size of one scalar */
py::format_descriptor<float>::format(), /* Python struct-style format descriptor */
2, /* Number of dimensions */
{ m.rows(), m.cols() }, /* Buffer dimensions */
{ sizeof(float) * m.cols(), /* Strides (in bytes) for each index */
sizeof(float) }
);
});
Supporting the buffer protocol in a new type involves specifying the special
``py::buffer_protocol()`` tag in the ``py::class_`` constructor and calling the
``def_buffer()`` method with a lambda function that creates a
``py::buffer_info`` description record on demand describing a given matrix
instance. The contents of ``py::buffer_info`` mirror the Python buffer protocol
specification.
.. code-block:: cpp
struct buffer_info {
void *ptr;
ssize_t itemsize;
std::string format;
ssize_t ndim;
std::vector<ssize_t> shape;
std::vector<ssize_t> strides;
};
To create a C++ function that can take a Python buffer object as an argument,
simply use the type ``py::buffer`` as one of its arguments. Buffers can exist
in a great variety of configurations, hence some safety checks are usually
necessary in the function body. Below, you can see a basic example of how to
define a custom constructor for the Eigen double precision matrix
(``Eigen::MatrixXd``) type, which supports initialization from compatible
buffer objects (e.g. a NumPy matrix).
.. code-block:: cpp
/* Bind MatrixXd (or some other Eigen type) to Python */
typedef Eigen::MatrixXd Matrix;
typedef Matrix::Scalar Scalar;
constexpr bool rowMajor = Matrix::Flags & Eigen::RowMajorBit;
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
.def("__init__", [](Matrix &m, py::buffer b) {
typedef Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic> Strides;
/* Request a buffer descriptor from Python */
py::buffer_info info = b.request();
/* Some sanity checks ... */
if (info.format != py::format_descriptor<Scalar>::format())
throw std::runtime_error("Incompatible format: expected a double array!");
if (info.ndim != 2)
throw std::runtime_error("Incompatible buffer dimension!");
auto strides = Strides(
info.strides[rowMajor ? 0 : 1] / (py::ssize_t)sizeof(Scalar),
info.strides[rowMajor ? 1 : 0] / (py::ssize_t)sizeof(Scalar));
auto map = Eigen::Map<Matrix, 0, Strides>(
static_cast<Scalar *>(info.ptr), info.shape[0], info.shape[1], strides);
new (&m) Matrix(map);
});
For reference, the ``def_buffer()`` call for this Eigen data type should look
as follows:
.. code-block:: cpp
.def_buffer([](Matrix &m) -> py::buffer_info {
return py::buffer_info(
m.data(), /* Pointer to buffer */
sizeof(Scalar), /* Size of one scalar */
py::format_descriptor<Scalar>::format(), /* Python struct-style format descriptor */
2, /* Number of dimensions */
{ m.rows(), m.cols() }, /* Buffer dimensions */
{ sizeof(Scalar) * (rowMajor ? m.cols() : 1),
sizeof(Scalar) * (rowMajor ? 1 : m.rows()) }
/* Strides (in bytes) for each index */
);
})
For a much easier approach of binding Eigen types (although with some
limitations), refer to the section on :doc:`/advanced/cast/eigen`.
.. seealso::
The file :file:`tests/test_buffers.cpp` contains a complete example
that demonstrates using the buffer protocol with pybind11 in more detail.
.. [#f2] http://docs.python.org/3/c-api/buffer.html
Arrays
======
By exchanging ``py::buffer`` with ``py::array`` in the above snippet, we can
restrict the function so that it only accepts NumPy arrays (rather than any
type of Python object satisfying the buffer protocol).
In many situations, we want to define a function which only accepts a NumPy
array of a certain data type. This is possible via the ``py::array_t<T>``
template. For instance, the following function requires the argument to be a
NumPy array containing double precision values.
.. code-block:: cpp
void f(py::array_t<double> array);
When it is invoked with a different type (e.g. an integer or a list of
integers), the binding code will attempt to cast the input into a NumPy array
of the requested type. Note that this feature requires the
:file:`pybind11/numpy.h` header to be included.
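For instance, the following sketch (the function name ``sum_doubles`` is
arbitrary) can be called from Python with a NumPy array of ``float64`` values,
but also with a plain list of numbers, which is converted to such an array
first:

.. code-block:: cpp

    #include <pybind11/numpy.h>

    // Sketch: sums the elements of a 1-D double array; a Python list such
    // as [1, 2, 3] is converted to a float64 array before the call
    m.def("sum_doubles", [](py::array_t<double> array) {
        double sum = 0;
        for (py::ssize_t i = 0; i < array.size(); i++)
            sum += array.at(i);   // checked element access (1-D indexing)
        return sum;
    });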
Data in NumPy arrays is not guaranteed to be packed in a dense manner;
furthermore, entries can be separated by arbitrary column and row strides.
Sometimes, it can be useful to require a function to only accept dense arrays
using either the C (row-major) or Fortran (column-major) ordering. This can be
accomplished via a second template argument with values ``py::array::c_style``
or ``py::array::f_style``.
.. code-block:: cpp
void f(py::array_t<double, py::array::c_style | py::array::forcecast> array);
The ``py::array::forcecast`` argument is the default value of the second
template parameter, and it ensures that non-conforming arguments are converted
into an array satisfying the specified requirements instead of trying the next
function overload.
Structured types
================
In order for ``py::array_t`` to work with structured (record) types, we first
need to register the memory layout of the type. This can be done via
``PYBIND11_NUMPY_DTYPE`` macro, called in the plugin definition code, which
expects the type followed by field names:
.. code-block:: cpp
struct A {
int x;
double y;
};
struct B {
int z;
A a;
};
// ...
PYBIND11_MODULE(test, m) {
// ...
PYBIND11_NUMPY_DTYPE(A, x, y);
PYBIND11_NUMPY_DTYPE(B, z, a);
/* now both A and B can be used as template arguments to py::array_t */
}
The structure should consist of fundamental arithmetic types, ``std::complex``,
previously registered substructures, and arrays of any of the above. Both C++
arrays and ``std::array`` are supported. While there is a static assertion to
prevent many types of unsupported structures, it is still the user's
responsibility to use only "plain" structures that can be safely manipulated as
raw memory without violating invariants.
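Once registered, such a type can be used as the element type of
``py::array_t``. The following is a brief sketch (reusing the ``A`` struct
above; the function name is arbitrary) that accesses the raw records through
the buffer interface:

.. code-block:: cpp

    // Sketch: sum the 'x' field over a 1-D structured array of A records
    m.def("sum_a_x", [](py::array_t<A> arr) {
        py::buffer_info buf = arr.request();
        if (buf.ndim != 1)
            throw std::runtime_error("Expected a 1-D array of A records");
        auto *ptr = static_cast<A *>(buf.ptr);
        int total = 0;
        for (py::ssize_t i = 0; i < buf.shape[0]; i++)
            total += ptr[i].x;
        return total;
    });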
Vectorizing functions
=====================
Suppose we want to bind a function with the following signature to Python so
that it can process arbitrary NumPy array arguments (vectors, matrices, general
N-D arrays) in addition to its normal arguments:
.. code-block:: cpp
double my_func(int x, float y, double z);
After including the ``pybind11/numpy.h`` header, this is extremely simple:
.. code-block:: cpp
m.def("vectorized_func", py::vectorize(my_func));
Invoking the function like below causes 4 calls to be made to ``my_func`` with
each of the array elements. The significant advantage of this compared to
solutions like ``numpy.vectorize()`` is that the loop over the elements runs
entirely on the C++ side and can be crunched down into a tight, optimized loop
by the compiler. The result is returned as a NumPy array of type
``numpy.dtype.float64``.
.. code-block:: pycon
>>> x = np.array([[1, 3],[5, 7]])
>>> y = np.array([[2, 4],[6, 8]])
>>> z = 3
>>> result = vectorized_func(x, y, z)
The scalar argument ``z`` is transparently replicated 4 times. The input
arrays ``x`` and ``y`` are automatically converted into the right types (they
are of type ``numpy.dtype.int64`` but need to be ``numpy.dtype.int32`` and
``numpy.dtype.float32``, respectively).
.. note::
Only arithmetic, complex, and POD types passed by value or by ``const &``
reference are vectorized; all other arguments are passed through as-is.
Functions taking rvalue reference arguments cannot be vectorized.
In cases where the computation is too complicated to be reduced to
``vectorize``, it will be necessary to create and access the buffer contents
manually. The following snippet contains a complete example that shows how this
works (the code is somewhat contrived, since it could have been done more
simply using ``vectorize``).
.. code-block:: cpp
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
py::buffer_info buf1 = input1.request(), buf2 = input2.request();
if (buf1.ndim != 1 || buf2.ndim != 1)
throw std::runtime_error("Number of dimensions must be one");
if (buf1.size != buf2.size)
throw std::runtime_error("Input shapes must match");
/* No pointer is passed, so NumPy will allocate the buffer */
auto result = py::array_t<double>(buf1.size);
py::buffer_info buf3 = result.request();
double *ptr1 = (double *) buf1.ptr,
*ptr2 = (double *) buf2.ptr,
*ptr3 = (double *) buf3.ptr;
for (py::ssize_t idx = 0; idx < buf1.shape[0]; idx++)
ptr3[idx] = ptr1[idx] + ptr2[idx];
return result;
}
PYBIND11_MODULE(test, m) {
m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
}
.. seealso::
The file :file:`tests/test_numpy_vectorize.cpp` contains a complete
example that demonstrates using :func:`vectorize` in more detail.
Direct access
=============
For performance reasons, particularly when dealing with very large arrays, it
is often desirable to directly access array elements without internal checking
of dimensions and bounds on every access when indices are known to be already
valid. To avoid such checks, the ``array`` class and ``array_t<T>`` template
class offer an unchecked proxy object that can be used for this unchecked
access through the ``unchecked<N>`` and ``mutable_unchecked<N>`` methods,
where ``N`` gives the required dimensionality of the array:
.. code-block:: cpp
m.def("sum_3d", [](py::array_t<double> x) {
auto r = x.unchecked<3>(); // x must have ndim = 3; can be non-writeable
double sum = 0;
for (ssize_t i = 0; i < r.shape(0); i++)
for (ssize_t j = 0; j < r.shape(1); j++)
for (ssize_t k = 0; k < r.shape(2); k++)
sum += r(i, j, k);
return sum;
});
m.def("increment_3d", [](py::array_t<double> x) {
auto r = x.mutable_unchecked<3>(); // Will throw if ndim != 3 or flags.writeable is false
for (ssize_t i = 0; i < r.shape(0); i++)
for (ssize_t j = 0; j < r.shape(1); j++)
for (ssize_t k = 0; k < r.shape(2); k++)
r(i, j, k) += 1.0;
}, py::arg().noconvert());
To obtain the proxy from an ``array`` object, you must specify both the data
type and number of dimensions as template arguments, such as ``auto r =
myarray.mutable_unchecked<float, 2>()``.
If the number of dimensions is not known at compile time, you can omit the
dimensions template parameter (i.e. calling ``arr_t.unchecked()`` or
``arr.unchecked<T>()``). This will give you a proxy object that works in the
same way, but results in less optimizable code and thus a small efficiency
loss in tight loops.
Note that the returned proxy object directly references the array's data, and
only reads its shape, strides, and writeable flag when constructed. You must
take care to ensure that the referenced array is not destroyed or reshaped for
the duration of the returned object, typically by limiting the scope of the
returned instance.
The returned proxy object supports some of the same methods as ``py::array`` so
that it can be used as a drop-in replacement for some existing, index-checked
uses of ``py::array``:
- ``r.ndim()`` returns the number of dimensions
- ``r.data(1, 2, ...)`` and ``r.mutable_data(1, 2, ...)`` return a pointer to
the ``const T`` or ``T`` data, respectively, at the given indices. The
latter is only available to proxies obtained via ``a.mutable_unchecked()``.
- ``itemsize()`` returns the size of an item in bytes, i.e. ``sizeof(T)``.
- ``ndim()`` returns the number of dimensions.
- ``shape(n)`` returns the size of dimension ``n``
- ``size()`` returns the total number of elements (i.e. the product of the shapes).
- ``nbytes()`` returns the number of bytes used by the referenced elements
(i.e. ``itemsize()`` times ``size()``).
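For illustration, here is a small sketch that uses a few of the methods listed
above on a 2-D array (the function name is arbitrary):

.. code-block:: cpp

    m.def("describe", [](py::array_t<double> x) {
        auto r = x.unchecked<2>();   // requires ndim == 2
        py::print("ndim =", r.ndim(),
                  "shape =", py::make_tuple(r.shape(0), r.shape(1)),
                  "size =", r.size(),
                  "nbytes =", r.nbytes());
    });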
.. seealso::
The file :file:`tests/test_numpy_array.cpp` contains additional examples
demonstrating the use of this feature.
Ellipsis
========
Python 3 provides a convenient ``...`` ellipsis notation that is often used to
slice multidimensional arrays. For instance, the following snippet extracts the
middle dimensions of a tensor with the first and last index set to zero.
.. code-block:: python
a = ...  # a NumPy array
b = a[0, ..., 0]
The function ``py::ellipsis()`` can be used to perform the same
operation on the C++ side:
.. code-block:: cpp
py::array a = /* A NumPy array */;
py::array b = a[py::make_tuple(0, py::ellipsis(), 0)];
Python types
############
Available wrappers
==================
All major Python types are available as thin C++ wrapper classes. These
can also be used as function parameters -- see :ref:`python_objects_as_args`.
Available types include :class:`handle`, :class:`object`, :class:`bool_`,
:class:`int_`, :class:`float_`, :class:`str`, :class:`bytes`, :class:`tuple`,
:class:`list`, :class:`dict`, :class:`slice`, :class:`none`, :class:`capsule`,
:class:`iterable`, :class:`iterator`, :class:`function`, :class:`buffer`,
:class:`array`, and :class:`array_t`.
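For instance, these wrappers can be constructed and manipulated directly from
C++; a small sketch (all names are arbitrary):

.. code-block:: cpp

    // Build a few Python objects from C++ using the thin wrappers
    py::list items;
    items.append(py::int_(1));
    items.append(py::str("two"));

    py::dict settings;
    settings["threshold"] = py::float_(0.5);

    py::tuple packed = py::make_tuple(items, settings, py::none());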
Casting back and forth
======================
In this kind of mixed code, it is often necessary to convert arbitrary C++
types to Python, which can be done using :func:`py::cast`:
.. code-block:: cpp
MyClass *cls = ...;
py::object obj = py::cast(cls);
The reverse direction uses the following syntax:
.. code-block:: cpp
py::object obj = ...;
MyClass *cls = obj.cast<MyClass *>();
When conversion fails, both directions throw the exception :class:`cast_error`.
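If a conversion might fail, the exception can be caught like any other C++
exception; a brief sketch:

.. code-block:: cpp

    try {
        MyClass *cls = obj.cast<MyClass *>();
        // ... use cls ...
    } catch (const py::cast_error &) {
        // obj did not wrap a MyClass instance
    }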
.. _python_libs:
Accessing Python libraries from C++
===================================
It is also possible to import objects defined in the Python standard
library or available in the current Python environment (``sys.path``) and work
with these in C++.
This example obtains a reference to the Python ``Decimal`` class.
.. code-block:: cpp
// Equivalent to "from decimal import Decimal"
py::object Decimal = py::module::import("decimal").attr("Decimal");
.. code-block:: cpp
// Try to import scipy
py::object scipy = py::module::import("scipy");
return scipy.attr("__version__");
.. _calling_python_functions:
Calling Python functions
========================
It is also possible to call Python classes, functions and methods
via ``operator()``.
.. code-block:: cpp
// Construct a Python object of class Decimal
py::object pi = Decimal("3.14159");
.. code-block:: cpp
// Use Python to make our directories
py::object os = py::module::import("os");
py::object makedirs = os.attr("makedirs");
makedirs("/tmp/path/to/somewhere");
One can convert the result obtained from Python to a pure C++ version
if a ``py::class_`` or type conversion is defined.
.. code-block:: cpp
py::function f = <...>;
py::object result_py = f(1234, "hello", some_instance);
MyClass &result = result_py.cast<MyClass>();
.. _calling_python_methods:
Calling Python methods
========================
To call an object's method, one can again use ``.attr`` to obtain access to the
Python method.
.. code-block:: cpp
// Calculate e^π in decimal
py::object exp_pi = pi.attr("exp")();
py::print(py::str(exp_pi));
In the example above ``pi.attr("exp")`` is a *bound method*: it will always call
the method for that same instance of the class. Alternately one can create an
*unbound method* via the Python class (instead of instance) and pass the ``self``
object explicitly, followed by other arguments.
.. code-block:: cpp
py::object decimal_exp = Decimal.attr("exp");
// Compute the e^n for n=0..4
for (int n = 0; n < 5; n++) {
py::print(decimal_exp(Decimal(n)));
}
Keyword arguments
=================
Keyword arguments are also supported. In Python, there is the usual call syntax:
.. code-block:: python
def f(number, say, to):
... # function code
f(1234, say="hello", to=some_instance) # keyword call in Python
In C++, the same call can be made using:
.. code-block:: cpp
using namespace pybind11::literals; // to bring in the `_a` literal
f(1234, "say"_a="hello", "to"_a=some_instance); // keyword call in C++
Unpacking arguments
===================
Unpacking of ``*args`` and ``**kwargs`` is also possible and can be mixed with
other arguments:
.. code-block:: cpp
// * unpacking
py::tuple args = py::make_tuple(1234, "hello", some_instance);
f(*args);
// ** unpacking
py::dict kwargs = py::dict("number"_a=1234, "say"_a="hello", "to"_a=some_instance);
f(**kwargs);
// mixed keywords, * and ** unpacking
py::tuple args = py::make_tuple(1234);
py::dict kwargs = py::dict("to"_a=some_instance);
f(*args, "say"_a="hello", **kwargs);
Generalized unpacking according to PEP448_ is also supported:
.. code-block:: cpp
py::dict kwargs1 = py::dict("number"_a=1234);
py::dict kwargs2 = py::dict("to"_a=some_instance);
f(**kwargs1, "say"_a="hello", **kwargs2);
.. seealso::
The file :file:`tests/test_pytypes.cpp` contains a complete
example that demonstrates passing native Python types in more detail. The
file :file:`tests/test_callbacks.cpp` presents a few examples of calling
Python functions from C++, including keywords arguments and unpacking.
.. _PEP448: https://www.python.org/dev/peps/pep-0448/
Functional
##########
The following features must be enabled by including :file:`pybind11/functional.h`.
Callbacks and passing anonymous functions
=========================================
The C++11 standard brought lambda functions and the generic polymorphic
function wrapper ``std::function<>`` to the C++ programming language, which
enable powerful new ways of working with functions. Lambda functions come in
two flavors: stateless lambda functions resemble classic function pointers that
link to an anonymous piece of code, while stateful lambda functions
additionally depend on captured variables that are stored in an anonymous
*lambda closure object*.
Here is a simple example of a C++ function that takes an arbitrary function
(stateful or stateless) with signature ``int -> int`` as an argument and runs
it with the value 10.
.. code-block:: cpp
int func_arg(const std::function<int(int)> &f) {
return f(10);
}
The example below is more involved: it takes a function of signature ``int -> int``
and returns another function of the same kind. The return value is a stateful
lambda function, which stores the value ``f`` in the capture object and adds 1 to
its return value upon execution.
.. code-block:: cpp
std::function<int(int)> func_ret(const std::function<int(int)> &f) {
return [f](int i) {
return f(i) + 1;
};
}
This example demonstrates using Python named parameters in C++ callbacks, which
requires using ``py::cpp_function`` as a wrapper. Usage is similar to defining
methods of classes:
.. code-block:: cpp
py::cpp_function func_cpp() {
return py::cpp_function([](int i) { return i+1; },
py::arg("number"));
}
After including the extra header file :file:`pybind11/functional.h`, it is almost
trivial to generate binding code for all of these functions.
.. code-block:: cpp
#include <pybind11/functional.h>
PYBIND11_MODULE(example, m) {
m.def("func_arg", &func_arg);
m.def("func_ret", &func_ret);
m.def("func_cpp", &func_cpp);
}
The following interactive session shows how to call them from Python.
.. code-block:: pycon
$ python
>>> import example
>>> def square(i):
... return i * i
...
>>> example.func_arg(square)
100L
>>> square_plus_1 = example.func_ret(square)
>>> square_plus_1(4)
17L
>>> plus_1 = func_cpp()
>>> plus_1(number=43)
44L
.. warning::
Keep in mind that passing a function from C++ to Python (or vice versa)
will instantiate a piece of wrapper code that translates function
invocations between the two languages. Naturally, this translation
increases the computational cost of each function call somewhat. A
problematic situation can arise when a function is copied back and forth
between Python and C++ many times in a row, in which case the underlying
wrappers will accumulate correspondingly. The resulting long sequence of
C++ -> Python -> C++ -> ... roundtrips can significantly decrease
performance.
There is one exception: pybind11 detects cases where a stateless function
(i.e. a function pointer or a lambda function without captured variables)
is passed as an argument to another C++ function exposed in Python. In this
case, there is no overhead. Pybind11 will extract the underlying C++
function pointer from the wrapped function to sidestep a potential C++ ->
Python -> C++ roundtrip. This is demonstrated in :file:`tests/test_callbacks.cpp`.
.. note::
This functionality is very useful when generating bindings for callbacks in
C++ libraries (e.g. GUI libraries, asynchronous networking libraries, etc.).
The file :file:`tests/test_callbacks.cpp` contains a complete example
that demonstrates how to work with callbacks and anonymous functions in
more detail.
Strings, bytes and Unicode conversions
######################################
.. note::
This section discusses string handling in terms of Python 3 strings. For
Python 2.7, replace all occurrences of ``str`` with ``unicode`` and
``bytes`` with ``str``. Python 2.7 users may find it best to use ``from
__future__ import unicode_literals`` to avoid unintentionally using ``str``
instead of ``unicode``.
Passing Python strings to C++
=============================
When a Python ``str`` is passed from Python to a C++ function that accepts
``std::string`` or ``char *`` as arguments, pybind11 will encode the Python
string to UTF-8. All Python ``str`` can be encoded in UTF-8, so this operation
does not fail.
The C++ language is encoding agnostic. It is the responsibility of the
programmer to track encodings. It's often easiest to simply `use UTF-8
everywhere <http://utf8everywhere.org/>`_.
.. code-block:: c++
m.def("utf8_test",
[](const std::string &s) {
cout << "utf-8 is icing on the cake.\n";
cout << s;
}
);
m.def("utf8_charptr",
[](const char *s) {
cout << "My favorite food is\n";
cout << s;
}
);
.. code-block:: python
>>> utf8_test('🎂')
utf-8 is icing on the cake.
🎂
>>> utf8_charptr('🍕')
My favorite food is
🍕
.. note::
Some terminal emulators do not support UTF-8 or emoji fonts and may not
display the example above correctly.
The results are the same whether the C++ function accepts arguments by value or
reference, and whether or not ``const`` is used.
Passing bytes to C++
--------------------
A Python ``bytes`` object will be passed to C++ functions that accept
``std::string`` or ``char*`` *without* conversion. On Python 3, in order to
make a function *only* accept ``bytes`` (and not ``str``), declare it as taking
a ``py::bytes`` argument.
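A brief sketch of such a binding (the function name is arbitrary):

.. code-block:: c++

    m.def("bytes_length",
        [](py::bytes b) {
            // Only accepts bytes (not str) on Python 3
            std::string s = b;   // copy the raw data into a std::string
            return s.size();
        }
    );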
Returning C++ strings to Python
===============================
When a C++ function returns a ``std::string`` or ``char*`` to a Python caller,
**pybind11 will assume that the string is valid UTF-8** and will decode it to a
native Python ``str``, using the same API as Python uses to perform
``bytes.decode('utf-8')``. If this implicit conversion fails, pybind11 will
raise a ``UnicodeDecodeError``.
.. code-block:: c++
m.def("std_string_return",
[]() {
return std::string("This string needs to be UTF-8 encoded");
}
);
.. code-block:: python
>>> isinstance(example.std_string_return(), str)
True
Because UTF-8 is inclusive of pure ASCII, there is never any issue with
returning a pure ASCII string to Python. If there is any possibility that the
string is not pure ASCII, it is necessary to ensure the encoding is valid
UTF-8.
.. warning::
Implicit conversion assumes that a returned ``char *`` is null-terminated.
If there is no null terminator a buffer overrun will occur.
Explicit conversions
--------------------
If some C++ code constructs a ``std::string`` that is not a UTF-8 string, one
can perform an explicit conversion and return a ``py::str`` object. Explicit
conversion has the same overhead as implicit conversion.
.. code-block:: c++
// This uses the Python C API to convert Latin-1 to Unicode
m.def("str_output",
[]() {
std::string s = "Send your r\xe9sum\xe9 to Alice in HR"; // Latin-1
py::str py_s = PyUnicode_DecodeLatin1(s.data(), s.length());
return py_s;
}
);
.. code-block:: python
>>> str_output()
'Send your résumé to Alice in HR'
The `Python C API
<https://docs.python.org/3/c-api/unicode.html#built-in-codecs>`_ provides
several built-in codecs.
One could also use a third party encoding library such as libiconv to transcode
to UTF-8.
Return C++ strings without conversion
-------------------------------------
If the data in a C++ ``std::string`` does not represent text and should be
returned to Python as ``bytes``, then one can return the data as a
``py::bytes`` object.
.. code-block:: c++
m.def("return_bytes",
[]() {
std::string s("\xba\xd0\xba\xd0"); // Not valid UTF-8
return py::bytes(s); // Return the data without transcoding
}
);
.. code-block:: python
>>> example.return_bytes()
b'\xba\xd0\xba\xd0'
Note the asymmetry: pybind11 will convert ``bytes`` to ``std::string`` without
encoding, but cannot convert ``std::string`` back to ``bytes`` implicitly.
.. code-block:: c++
m.def("asymmetry",
[](std::string s) { // Accepts str or bytes from Python
return s; // Looks harmless, but implicitly converts to str
}
);
.. code-block:: python
>>> isinstance(example.asymmetry(b"have some bytes"), str)
True
>>> example.asymmetry(b"\xba\xd0\xba\xd0") # invalid utf-8 as bytes
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte
Wide character strings
======================
When a Python ``str`` is passed to a C++ function expecting ``std::wstring``,
``wchar_t*``, ``std::u16string`` or ``std::u32string``, the ``str`` will be
encoded to UTF-16 or UTF-32 depending on how the C++ compiler implements each
type, in the platform's native endianness. When strings of these types are
returned, they are assumed to contain valid UTF-16 or UTF-32, and will be
decoded to Python ``str``.
.. code-block:: c++
#define UNICODE
#include <windows.h>
m.def("set_window_text",
[](HWND hwnd, std::wstring s) {
// Call SetWindowText with null-terminated UTF-16 string
::SetWindowText(hwnd, s.c_str());
}
);
m.def("get_window_text",
[](HWND hwnd) {
const int buffer_size = ::GetWindowTextLength(hwnd) + 1;
auto buffer = std::make_unique< wchar_t[] >(buffer_size);
::GetWindowText(hwnd, buffer.data(), buffer_size);
std::wstring text(buffer.get());
// wstring will be converted to Python str
return text;
}
);
.. warning::
Wide character strings may not work as described on Python 2.7 or Python
3.3 compiled with ``--enable-unicode=ucs2``.
Strings in multibyte encodings such as Shift-JIS must be transcoded to
UTF-8/16/32 before being returned to Python.
Character literals
==================
C++ functions that accept character literals as input will receive the first
character of a Python ``str`` as their input. If the string is longer than one
Unicode character, trailing characters will be ignored.
When a character literal is returned from C++ (such as a ``char`` or a
``wchar_t``), it will be converted to a ``str`` that represents the single
character.
.. code-block:: c++
m.def("pass_char", [](char c) { return c; });
m.def("pass_wchar", [](wchar_t w) { return w; });
.. code-block:: python
>>> example.pass_char('A')
'A'
While C++ will cast integers to character types (``char c = 0x65;``), pybind11
does not convert Python integers to characters implicitly. The Python function
``chr()`` can be used to convert integers to characters.
.. code-block:: python
>>> example.pass_char(0x65)
TypeError
>>> example.pass_char(chr(0x65))
'A'
If the desire is to work with an 8-bit integer, use ``int8_t`` or ``uint8_t``
as the argument type.
Grapheme clusters
-----------------
A single grapheme may be represented by two or more Unicode characters. For
example 'é' is usually represented as U+00E9 but can also be expressed as the
combining character sequence U+0065 U+0301 (that is, the letter 'e' followed by
a combining acute accent). The combining character will be lost if the
two-character sequence is passed as an argument, even though it renders as a
single grapheme.
.. code-block:: python
>>> example.pass_wchar('é')
'é'
>>> combining_e_acute = 'e' + '\u0301'
>>> combining_e_acute
'é'
>>> combining_e_acute == 'é'
False
>>> example.pass_wchar(combining_e_acute)
'e'
Normalizing combining characters before passing the character literal to C++
may resolve *some* of these issues:
.. code-block:: python
>>> example.pass_wchar(unicodedata.normalize('NFC', combining_e_acute))
'é'
In some languages (Thai for example), there are `graphemes that cannot be
expressed as a single Unicode code point
<http://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries>`_, so there is
no way to capture them in a C++ character type.
C++17 string views
==================
C++17 string views are automatically supported when compiling in C++17 mode.
They follow the same rules for encoding and decoding as the corresponding STL
string type (for example, a ``std::u16string_view`` argument will be passed
UTF-16-encoded data, and a returned ``std::string_view`` will be decoded as
UTF-8).
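A minimal sketch (the module must be compiled in C++17 mode; the function name
is arbitrary):

.. code-block:: c++

    #include <string_view>

    m.def("count_bytes",
        [](std::string_view s) {
            // Receives the UTF-8 encoded contents of a Python str (or bytes)
            return s.size();
        }
    );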
References
==========
* `The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) <https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/>`_
* `C++ - Using STL Strings at Win32 API Boundaries <https://msdn.microsoft.com/en-ca/magazine/mt238407.aspx>`_
Eigen
#####
`Eigen <http://eigen.tuxfamily.org>`_ is a C++ header-based library for dense and
sparse linear algebra. Due to its popularity and widespread adoption, pybind11
provides transparent conversion and limited mapping support between Eigen and
Scientific Python linear algebra data types.
To enable the built-in Eigen support you must include the optional header file
:file:`pybind11/eigen.h`.
Pass-by-value
=============
When binding a function with ordinary Eigen dense object arguments (for
example, ``Eigen::MatrixXd``), pybind11 will accept any input value that is
already (or convertible to) a ``numpy.ndarray`` with dimensions compatible with
the Eigen type, copy its values into a temporary Eigen variable of the
appropriate type, then call the function with this temporary variable.
Sparse matrices are similarly copied to or from
``scipy.sparse.csr_matrix``/``scipy.sparse.csc_matrix`` objects.
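A brief sketch of such a pass-by-value binding; any 2-D ``numpy.ndarray`` with
a compatible ``dtype`` (or something convertible to one) can be passed from
Python, and its contents are copied into a temporary ``Eigen::MatrixXd`` before
the lambda is invoked:

.. code-block:: cpp

    #include <pybind11/eigen.h>

    // The argument is copied into a temporary Eigen matrix before the call
    m.def("trace", [](const Eigen::MatrixXd &m) { return m.trace(); });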
Pass-by-reference
=================
One major limitation of the above is that every data conversion implicitly
involves a copy, which can be expensive (for large matrices) and prevents
binding functions that change their (Matrix) arguments. Pybind11 allows you to
work around this by using Eigen's ``Eigen::Ref<MatrixType>`` class much as you
would when writing a function taking a generic type in Eigen itself (subject to
some limitations discussed below).
When calling a bound function accepting an ``Eigen::Ref<const MatrixType>``
type, pybind11 will attempt to avoid copying by using an ``Eigen::Map`` object
that maps into the source ``numpy.ndarray`` data: this requires both that the
data types are the same (e.g. ``dtype='float64'`` and ``MatrixType::Scalar`` is
``double``); and that the storage is layout compatible. The latter limitation
is discussed in detail in the section below, and requires careful
consideration: by default, numpy matrices and Eigen matrices are *not* storage
compatible.
If the numpy matrix cannot be used as is (either because its types differ, e.g.
passing an array of integers to an Eigen parameter requiring doubles, or
because the storage is incompatible), pybind11 makes a temporary copy and
passes the copy instead.
When a bound function parameter is instead ``Eigen::Ref<MatrixType>`` (note the
lack of ``const``), pybind11 will only allow the function to be called if it
can be mapped *and* if the numpy array is writeable (that is
``a.flags.writeable`` is true). Any access (including modification) made to
the passed variable will be transparently carried out directly on the
``numpy.ndarray``.
This means you can write code such as the following and have it work as
expected:
.. code-block:: cpp
void scale_by_2(Eigen::Ref<Eigen::VectorXd> v) {
v *= 2;
}
Note, however, that you will likely run into limitations due to numpy's and
Eigen's different default storage orders; see the below section on
:ref:`storage_orders` for details on how to bind code that won't run into such
limitations.
.. note::
Passing by reference is not supported for sparse types.
Returning values to Python
==========================
When returning an ordinary dense Eigen matrix type to numpy (e.g.
``Eigen::MatrixXd`` or ``Eigen::RowVectorXf``) pybind11 keeps the matrix and
returns a numpy array that directly references the Eigen matrix: no copy of the
data is performed. The numpy array will have ``array.flags.owndata`` set to
``False`` to indicate that it does not own the data, and the lifetime of the
stored Eigen matrix will be tied to the returned ``array``.
If you bind a function with a non-reference, ``const`` return type (e.g.
``const Eigen::MatrixXd``), the same thing happens except that pybind11 also
sets the numpy array's ``writeable`` flag to false.
If you return an lvalue reference or pointer, the usual pybind11 rules apply,
as dictated by the binding function's return value policy (see the
documentation on :ref:`return_value_policies` for full details). That means,
without an explicit return value policy, lvalue references will be copied and
pointers will be managed by pybind11. In order to avoid copying, you should
explicitly specify an appropriate return value policy, as in the following
example:
.. code-block:: cpp
class MyClass {
Eigen::MatrixXd big_mat = Eigen::MatrixXd::Zero(10000, 10000);
public:
Eigen::MatrixXd &getMatrix() { return big_mat; }
const Eigen::MatrixXd &viewMatrix() { return big_mat; }
};
// Later, in binding code:
py::class_<MyClass>(m, "MyClass")
.def(py::init<>())
.def("copy_matrix", &MyClass::getMatrix) // Makes a copy!
.def("get_matrix", &MyClass::getMatrix, py::return_value_policy::reference_internal)
.def("view_matrix", &MyClass::viewMatrix, py::return_value_policy::reference_internal)
;
.. code-block:: python
a = MyClass()
m = a.get_matrix() # flags.writeable = True, flags.owndata = False
v = a.view_matrix() # flags.writeable = False, flags.owndata = False
c = a.copy_matrix() # flags.writeable = True, flags.owndata = True
# m[5,6] and v[5,6] refer to the same element, c[5,6] does not.
Note in this example that ``py::return_value_policy::reference_internal`` is
used to tie the life of the MyClass object to the life of the returned arrays.
You may also return an ``Eigen::Ref``, ``Eigen::Map`` or other map-like Eigen
object (for example, the return value of ``matrix.block()`` and related
methods) that map into a dense Eigen type. When doing so, the default
behaviour of pybind11 is to simply reference the returned data: you must take
care to ensure that this data remains valid! You may ask pybind11 to
explicitly *copy* such a return value by using the
``py::return_value_policy::copy`` policy when binding the function. You may
also use ``py::return_value_policy::reference_internal`` or a
``py::keep_alive`` to ensure the data stays valid as long as the returned numpy
array does.
When returning such a reference or map, pybind11 additionally respects the
readonly-status of the returned value, marking the numpy array as non-writeable
if the reference or map was itself read-only.
.. note::
Sparse types are always copied when returned.
.. _storage_orders:
Storage orders
==============
Passing arguments via ``Eigen::Ref`` has some limitations that you must be
aware of in order to effectively pass matrices by reference. First and
foremost is that the default ``Eigen::Ref<MatrixType>`` class requires
contiguous storage along columns (for column-major types, the default in Eigen)
or rows if ``MatrixType`` is specifically an ``Eigen::RowMajor`` storage type.
The former, Eigen's default, is incompatible with ``numpy``'s default row-major
storage, and so you will not be able to pass numpy arrays to Eigen by reference
without making one of two changes.
(Note that this does not apply to vectors (or column or row matrices): for such
types the "row-major" and "column-major" distinction is meaningless).
The first approach is to change the use of ``Eigen::Ref<MatrixType>`` to the
more general ``Eigen::Ref<MatrixType, 0, Eigen::Stride<Eigen::Dynamic,
Eigen::Dynamic>>`` (or similar type with a fully dynamic stride type in the
third template argument). Since this is a rather cumbersome type, pybind11
provides a ``py::EigenDRef<MatrixType>`` type alias for your convenience (along
with EigenDMap for the equivalent Map, and EigenDStride for just the stride
type).
This type allows Eigen to map into any arbitrary storage order. This is not
the default in Eigen for performance reasons: contiguous storage allows
vectorization that cannot be done when storage is not known to be contiguous at
compile time. The default ``Eigen::Ref`` stride type allows non-contiguous
storage along the outer dimension (that is, the rows of a column-major matrix
or columns of a row-major matrix), but not along the inner dimension.
This type, however, has the added benefit of also being able to map numpy array
slices. For example, the following (contrived) example uses Eigen with a numpy
slice to multiply by 2 all coefficients that are both on even rows (0, 2, 4,
...) and in columns 2, 5, or 8:
.. code-block:: cpp
m.def("scale", [](py::EigenDRef<Eigen::MatrixXd> m, double c) { m *= c; });
.. code-block:: python
# a = np.array(...)
scale(myarray[0::2, 2:9:3], 2.0)
The second approach to avoid copying is more intrusive: rearranging the
underlying data types to not run into the non-contiguous storage problem in the
first place. In particular, that means using matrices with ``Eigen::RowMajor``
storage, where appropriate, such as:
.. code-block:: cpp
using RowMatrixXd = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
// Use RowMatrixXd instead of MatrixXd
Now bound functions accepting ``Eigen::Ref<RowMatrixXd>`` arguments will be
callable with numpy's (default) arrays without involving a copy.
You can, alternatively, change the storage order that numpy arrays use by
adding the ``order='F'`` option when creating an array:
.. code-block:: python
myarray = np.array(source, order='F')
Such an object will be passable to a bound function accepting an
``Eigen::Ref<MatrixXd>`` (or similar column-major Eigen type).
One major caveat with this approach, however, is that it is not entirely as
easy as simply flipping all Eigen or numpy usage from one to the other: some
operations may alter the storage order of a numpy array. For example, ``a2 =
array.transpose()`` results in ``a2`` being a view of ``array`` that references
the same data, but in the opposite storage order!
While this approach allows fully optimized vectorized calculations in Eigen, it
cannot be used with array slices, unlike the first approach.
When *returning* a matrix to Python (either a regular matrix, a reference via
``Eigen::Ref<>``, or a map/block into a matrix), no special storage
consideration is required: the created numpy array will have the required
stride that allows numpy to properly interpret the array, whatever its storage
order.
Failing rather than copying
===========================
The default behaviour when binding ``Eigen::Ref<const MatrixType>`` Eigen
references is to copy matrix values when passed a numpy array that does not
conform to the element type of ``MatrixType`` or does not have a compatible
stride layout. If you want to explicitly avoid copying in such a case, you
should bind arguments using the ``py::arg().noconvert()`` annotation (as
described in the :ref:`nonconverting_arguments` documentation).
The following example shows an example of arguments that don't allow data
copying to take place:
.. code-block:: cpp
// The method and function to be bound:
class MyClass {
// ...
double some_method(const Eigen::Ref<const MatrixXd> &matrix) { /* ... */ }
};
float some_function(const Eigen::Ref<const MatrixXf> &big,
const Eigen::Ref<const MatrixXf> &small) {
// ...
}
// The associated binding code:
using namespace pybind11::literals; // for "arg"_a
py::class_<MyClass>(m, "MyClass")
// ... other class definitions
.def("some_method", &MyClass::some_method, py::arg().noconvert());
m.def("some_function", &some_function,
"big"_a.noconvert(), // <- Don't allow copying for this arg
"small"_a // <- This one can be copied if needed
);
With the above binding code, attempting to call the ``some_method(m)``
method on a ``MyClass`` object, or attempting to call ``some_function(m, m2)``
will raise a ``RuntimeError`` rather than making a temporary copy of the array.
It will, however, allow the ``m2`` argument to be copied into a temporary if
necessary.
Note that explicitly specifying ``.noconvert()`` is not required for *mutable*
Eigen references (e.g. ``Eigen::Ref<MatrixXd>`` without ``const`` on the
``MatrixXd``): mutable references will never be called with a temporary copy.
Vectors versus column/row matrices
==================================
Eigen and numpy have fundamentally different notions of a vector. In Eigen, a
vector is simply a matrix with the number of columns or rows set to 1 at
compile time (for a column vector or row vector, respectively). Numpy, in
contrast, has comparable 2-dimensional 1xN and Nx1 arrays, but *also* has
1-dimensional arrays of size N.
When passing a 2-dimensional 1xN or Nx1 array to Eigen, the Eigen type must
have matching dimensions: That is, you cannot pass a 2-dimensional Nx1 numpy
array to an Eigen value expecting a row vector, or a 1xN numpy array as a
column vector argument.
On the other hand, pybind11 allows you to pass 1-dimensional arrays of length N
as Eigen parameters. If the Eigen type can hold a column vector of length N it
will be passed as such a column vector. If not, but the Eigen type constraints
will accept a row vector, it will be passed as a row vector. (The column
vector takes precedence when both are supported, for example, when passing a
1D numpy array to a MatrixXd argument). Note that the type need not be
explicitly a vector: it is permitted to pass a 1D numpy array of size 5 to an
Eigen ``Matrix<double, Dynamic, 5>``: you would end up with a 1x5 Eigen matrix.
Passing the same to an ``Eigen::MatrixXd`` would result in a 5x1 Eigen matrix.
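To make these rules concrete, here is a small sketch (the function names are
arbitrary); the comments describe what a 1-dimensional NumPy array of length N
is converted to in each case:

.. code-block:: cpp

    // A 1-D numpy array of length N is passed as a column vector
    m.def("vec_sum", [](const Eigen::VectorXd &v) { return v.sum(); });

    // A 1-D numpy array of length N is passed as a row vector
    m.def("rowvec_sum", [](const Eigen::RowVectorXd &v) { return v.sum(); });

    // A 1-D numpy array of length N becomes an Nx1 matrix
    m.def("mat_rows", [](const Eigen::MatrixXd &m) { return m.rows(); });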
When returning an Eigen vector to numpy, the conversion is ambiguous: a row
vector of length 4 could be returned as either a 1D array of length 4, or as a
2D array of size 1x4. When encountering such a situation, pybind11 compromises
by considering the returned Eigen type: if it is a compile-time vector--that
is, the type has either the number of rows or columns set to 1 at compile
time--pybind11 converts to a 1D numpy array when returning the value. For
instances that are a vector only at run-time (e.g. ``MatrixXd``,
``Matrix<float, Dynamic, 4>``), pybind11 returns the vector as a 2D array to
numpy. If this isn't what you want, you can use ``array.reshape(...)`` to get
a view of the same data in the desired dimensions.
.. seealso::
The file :file:`tests/test_eigen.cpp` contains a complete example that
shows how to pass Eigen sparse and dense data types in more detail.
Chrono
======
When including the additional header file :file:`pybind11/chrono.h`, conversions
from C++11 chrono datatypes to Python datetime objects are automatically enabled.
This header also enables conversions of Python floats (often from sources such
as ``time.monotonic()``, ``time.perf_counter()`` and ``time.process_time()``)
into durations.
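A brief sketch of bindings that rely on these conversions (the function names
are arbitrary):

.. code-block:: cpp

    #include <chrono>
    #include <pybind11/chrono.h>

    // Accepts a datetime.timedelta or a float number of seconds from Python
    m.def("to_milliseconds", [](std::chrono::duration<double> d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    });

    // Returned as a naive datetime.datetime in the local timezone
    m.def("now", []() { return std::chrono::system_clock::now(); });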
An overview of clocks in C++11
------------------------------
A point of confusion when using these conversions is the differences between
clocks provided in C++11. There are three clock types defined by the C++11
standard and users can define their own if needed. Each of these clocks has
different properties and, when converting to and from Python, will give different
results.
The first clock defined by the standard is ``std::chrono::system_clock``. This
clock measures the current date and time. However, this clock changes with
updates to the operating system time. For example, if your time is synchronised
with a time server this clock will change. This makes this clock a poor choice
for timing purposes but good for measuring the wall time.
The second clock defined in the standard is ``std::chrono::steady_clock``.
This clock ticks at a steady rate and is never adjusted. This makes it excellent
for timing purposes; however, the value in this clock does not correspond to the
current date and time. Often this clock will be the amount of time your system
has been on, although it does not have to be. This clock will never be the same
clock as the system clock, because the system clock can change but steady clocks
cannot.
The third clock defined in the standard is ``std::chrono::high_resolution_clock``.
This clock is the clock that has the highest resolution out of the clocks in the
system. It is normally a typedef to either the system clock or the steady clock
but can be its own independent clock. This is important when using these
conversions, as the types you get in Python for this clock might differ
depending on the system.
If it is a typedef of the system clock, python will get datetime objects, but if
it is a different clock they will be timedelta objects.
Provided conversions
--------------------
.. rubric:: C++ to Python
- ``std::chrono::system_clock::time_point`` → ``datetime.datetime``
System clock times are converted to python datetime instances. They are
in the local timezone, but do not have any timezone information attached
to them (they are naive datetime objects).
- ``std::chrono::duration`` → ``datetime.timedelta``
    Durations are converted to timedeltas; any precision in the duration
    finer than microseconds is lost by rounding towards zero.
- ``std::chrono::[other_clocks]::time_point`` → ``datetime.timedelta``
    Any clock time that is not the system clock is converted to a time delta.
    This timedelta measures the time from the clock's epoch to now (see the
    sketch at the end of this section).
.. rubric:: Python to C++
- ``datetime.datetime`` or ``datetime.date`` or ``datetime.time`` → ``std::chrono::system_clock::time_point``
Date/time objects are converted into system clock timepoints. Any
timezone information is ignored and the type is treated as a naive
object.
- ``datetime.timedelta`` → ``std::chrono::duration``
    Time deltas are converted into durations with microsecond precision.
- ``datetime.timedelta`` → ``std::chrono::[other_clocks]::time_point``
    Time deltas that are converted into clock timepoints are treated as
    the amount of time from the start of the clock's epoch.
- ``float`` → ``std::chrono::duration``
    Floats that are passed to C++ as durations are interpreted as a number of
    seconds. These are converted to the target duration type using
    ``duration_cast``.
- ``float`` → ``std::chrono::[other_clocks]::time_point``
    Floats that are passed to C++ as time points will be interpreted as the
    number of seconds from the start of the clock's epoch.
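
As a minimal sketch of the non-system-clock behaviour (assuming a module
definition ``m``; the function name is only for illustration):

.. code-block:: cpp

    #include <pybind11/chrono.h>

    // A steady_clock time point arrives in Python as a datetime.timedelta
    // measured from the clock's epoch, not as a datetime.datetime.
    m.def("uptime", []() { return std::chrono::steady_clock::now(); });
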
| ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/docs/advanced/cast/chrono.rst | chrono.rst |
STL containers
##############
Automatic conversion
====================
When including the additional header file :file:`pybind11/stl.h`, conversions
between ``std::vector<>``/``std::deque<>``/``std::list<>``/``std::array<>``,
``std::set<>``/``std::unordered_set<>``, and
``std::map<>``/``std::unordered_map<>`` and the Python ``list``, ``set`` and
``dict`` data structures are automatically enabled. The types ``std::pair<>``
and ``std::tuple<>`` are already supported out of the box with just the core
:file:`pybind11/pybind11.h` header.
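
For example, with the header included, a bound function can accept STL
containers directly; the following sketch (assuming a module definition ``m``;
``sum_all`` is only for illustration) receives a Python ``list`` and ``dict``
as copies:

.. code-block:: cpp

    #include <pybind11/stl.h>

    m.def("sum_all", [](const std::vector<int> &values,
                        const std::map<std::string, double> &weights) {
        double total = 0;
        for (int v : values) total += v;
        for (const auto &kv : weights) total += kv.second;
        return total;
    });
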
The major downside of these implicit conversions is that containers must be
converted (i.e. copied) on every Python->C++ and C++->Python transition, which
can have implications on the program semantics and performance. Please read the
next sections for more details and alternative approaches that avoid this.
.. note::
Arbitrary nesting of any of these types is possible.
.. seealso::
The file :file:`tests/test_stl.cpp` contains a complete
example that demonstrates how to pass STL data types in more detail.
.. _cpp17_container_casters:
C++17 library containers
========================
The :file:`pybind11/stl.h` header also includes support for ``std::optional<>``
and ``std::variant<>``. These require a C++17 compiler and standard library.
In C++14 mode, ``std::experimental::optional<>`` is supported if available.
Various versions of these containers also exist for C++11 (e.g. in Boost).
pybind11 provides an easy way to specialize the ``type_caster`` for such
types:
.. code-block:: cpp
// `boost::optional` as an example -- can be any `std::optional`-like container
namespace pybind11 { namespace detail {
template <typename T>
struct type_caster<boost::optional<T>> : optional_caster<boost::optional<T>> {};
}}
The above should be placed in a header file and included in all translation units
where automatic conversion is needed. Similarly, a specialization can be provided
for custom variant types:
.. code-block:: cpp
// `boost::variant` as an example -- can be any `std::variant`-like container
namespace pybind11 { namespace detail {
template <typename... Ts>
struct type_caster<boost::variant<Ts...>> : variant_caster<boost::variant<Ts...>> {};
// Specifies the function used to visit the variant -- `apply_visitor` instead of `visit`
template <>
struct visit_helper<boost::variant> {
template <typename... Args>
static auto call(Args &&...args) -> decltype(boost::apply_visitor(args...)) {
return boost::apply_visitor(args...);
}
};
}} // namespace pybind11::detail
The ``visit_helper`` specialization is not required if your ``name::variant`` provides
a ``name::visit()`` function. For any other function name, the specialization must be
included to tell pybind11 how to visit the variant.
.. note::
pybind11 only supports the modern implementation of ``boost::variant``
which makes use of variadic templates. This requires Boost 1.56 or newer.
Additionally, on Windows, MSVC 2017 is required because ``boost::variant``
falls back to the old non-variadic implementation on MSVC 2015.
.. _opaque:
Making opaque types
===================
pybind11 heavily relies on a template matching mechanism to convert parameters
and return values that are constructed from STL data types such as vectors,
linked lists, hash tables, etc. This even works in a recursive manner, for
instance to deal with lists of hash maps of pairs of elementary and custom
types, etc.
However, a fundamental limitation of this approach is that internal conversions
between Python and C++ types involve a copy operation that prevents
pass-by-reference semantics. What does this mean?
Suppose we bind the following function
.. code-block:: cpp
void append_1(std::vector<int> &v) {
v.push_back(1);
}
and call it from Python, the following happens:
.. code-block:: pycon
>>> v = [5, 6]
>>> append_1(v)
>>> print(v)
[5, 6]
As you can see, when passing STL data structures by reference, modifications
are not propagated back to the Python side. A similar situation arises when
exposing STL data structures using the ``def_readwrite`` or ``def_readonly``
functions:
.. code-block:: cpp
/* ... definition ... */
class MyClass {
std::vector<int> contents;
};
/* ... binding code ... */
py::class_<MyClass>(m, "MyClass")
.def(py::init<>())
.def_readwrite("contents", &MyClass::contents);
In this case, properties can be read and written in their entirety. However, an
``append`` operation involving such a list type has no effect:
.. code-block:: pycon
>>> m = MyClass()
>>> m.contents = [5, 6]
>>> print(m.contents)
[5, 6]
>>> m.contents.append(7)
>>> print(m.contents)
[5, 6]
Finally, the involved copy operations can be costly when dealing with very
large lists. To deal with all of the above situations, pybind11 provides a
macro named ``PYBIND11_MAKE_OPAQUE(T)`` that disables the template-based
conversion machinery of types, thus rendering them *opaque*. The contents of
opaque objects are never inspected or extracted, hence they *can* be passed by
reference. For instance, to turn ``std::vector<int>`` into an opaque type, add
the declaration
.. code-block:: cpp
PYBIND11_MAKE_OPAQUE(std::vector<int>);
before any binding code (e.g. invocations to ``class_::def()``, etc.). This
macro must be specified at the top level (and outside of any namespaces), since
it instantiates a partial template overload. If your binding code consists of
multiple compilation units, it must be present in every file (typically via a
common header) preceding any usage of ``std::vector<int>``. Opaque types must
also have a corresponding ``class_`` declaration to associate them with a name
in Python, and to define a set of available operations, e.g.:
.. code-block:: cpp
py::class_<std::vector<int>>(m, "IntVector")
.def(py::init<>())
.def("clear", &std::vector<int>::clear)
.def("pop_back", &std::vector<int>::pop_back)
.def("__len__", [](const std::vector<int> &v) { return v.size(); })
.def("__iter__", [](std::vector<int> &v) {
return py::make_iterator(v.begin(), v.end());
}, py::keep_alive<0, 1>()) /* Keep vector alive while iterator is used */
// ....
.. seealso::
The file :file:`tests/test_opaque_types.cpp` contains a complete
example that demonstrates how to create and expose opaque types using
pybind11 in more detail.
.. _stl_bind:
Binding STL containers
======================
The ability to expose STL containers as native Python objects is a fairly
common request, hence pybind11 also provides an optional header file named
:file:`pybind11/stl_bind.h` that does exactly this. The mapped containers try
to match the behavior of their native Python counterparts as much as possible.
The following example showcases usage of :file:`pybind11/stl_bind.h`:
.. code-block:: cpp
// Don't forget this
#include <pybind11/stl_bind.h>
PYBIND11_MAKE_OPAQUE(std::vector<int>);
PYBIND11_MAKE_OPAQUE(std::map<std::string, double>);
// ...
// later in binding code:
py::bind_vector<std::vector<int>>(m, "VectorInt");
py::bind_map<std::map<std::string, double>>(m, "MapStringDouble");
When binding STL containers, pybind11 considers the types of the container's
elements to decide whether the container should be confined to the local module
(via the :ref:`module_local` feature). If the container element types are
anything other than already-bound custom types bound without
``py::module_local()``, the container binding will have ``py::module_local()``
applied. This includes converting types such as numeric types, strings, and
Eigen types, as well as types that have not yet been bound at the time of the
STL container binding. This module-local binding is designed to avoid potential
conflicts between module bindings (for example, from two separate modules each
attempting to bind ``std::vector<int>`` as a Python type).
It is possible to override this behavior to force a definition to be either
module-local or global. To do so, you can pass the attributes
``py::module_local()`` (to make the binding module-local) or
``py::module_local(false)`` (to make the binding global) into the
``py::bind_vector`` or ``py::bind_map`` arguments:
.. code-block:: cpp
py::bind_vector<std::vector<int>>(m, "VectorInt", py::module_local(false));
Note, however, that such a global binding would make it impossible to load this
module at the same time as any other pybind module that also attempts to bind
the same container type (``std::vector<int>`` in the above example).
See :ref:`module_local` for more details on module-local bindings.
.. seealso::
The file :file:`tests/test_stl_binders.cpp` shows how to use the
convenience STL container wrappers.
| ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/docs/advanced/cast/stl.rst | stl.rst |
Custom type casters
===================
In very rare cases, applications may require custom type casters that cannot be
expressed using the abstractions provided by pybind11, thus requiring raw
Python C API calls. This is fairly advanced usage and should only be pursued by
experts who are familiar with the intricacies of Python reference counting.
The following snippets demonstrate how this works for a very simple ``inty``
type that should be convertible from Python types that provide a
``__int__(self)`` method.
.. code-block:: cpp
struct inty { long long_value; };
void print(inty s) {
std::cout << s.long_value << std::endl;
}
The following Python snippet demonstrates the intended usage from the Python side:
.. code-block:: python
class A:
def __int__(self):
return 123
from example import print
print(A())
To register the necessary conversion routines, add a partial overload to
the ``pybind11::detail::type_caster<T>`` template.
Although this is an implementation detail, adding partial overloads to this
type is explicitly allowed.
.. code-block:: cpp
namespace pybind11 { namespace detail {
template <> struct type_caster<inty> {
public:
/**
* This macro establishes the name 'inty' in
* function signatures and declares a local variable
* 'value' of type inty
*/
PYBIND11_TYPE_CASTER(inty, _("inty"));
/**
             * Conversion part 1 (Python->C++): convert a PyObject into an inty
* instance or return false upon failure. The second argument
* indicates whether implicit conversions should be applied.
*/
bool load(handle src, bool) {
/* Extract PyObject from handle */
PyObject *source = src.ptr();
/* Try converting into a Python integer value */
PyObject *tmp = PyNumber_Long(source);
if (!tmp)
return false;
/* Now try to convert into a C++ int */
value.long_value = PyLong_AsLong(tmp);
Py_DECREF(tmp);
/* Ensure return code was OK (to avoid out-of-range errors etc) */
                return !(value.long_value == -1 && PyErr_Occurred());
}
/**
* Conversion part 2 (C++ -> Python): convert an inty instance into
* a Python object. The second and third arguments are used to
* indicate the return value policy and parent object (for
* ``return_value_policy::reference_internal``) and are generally
* ignored by implicit casters.
*/
static handle cast(inty src, return_value_policy /* policy */, handle /* parent */) {
return PyLong_FromLong(src.long_value);
}
};
}} // namespace pybind11::detail
.. note::
A ``type_caster<T>`` defined with ``PYBIND11_TYPE_CASTER(T, ...)`` requires
that ``T`` is default-constructible (``value`` is first default constructed
and then ``load()`` assigns to it).
.. warning::
When using custom type casters, it's important to declare them consistently
in every compilation unit of the Python extension module. Otherwise,
undefined behavior can ensue.
| ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/docs/advanced/cast/custom.rst | custom.rst |
Overview
########
.. rubric:: 1. Native type in C++, wrapper in Python
Exposing a custom C++ type using :class:`py::class_` was covered in detail
in the :doc:`/classes` section. There, the underlying data structure is
always the original C++ class while the :class:`py::class_` wrapper provides
a Python interface. Internally, when an object like this is sent from C++ to
Python, pybind11 will just add the outer wrapper layer over the native C++
object. Getting it back from Python is just a matter of peeling off the
wrapper.
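
A minimal sketch of this pattern, using a hypothetical ``Pet`` type and assuming
a module definition ``m`` (see the :doc:`/classes` section for the full
treatment):

.. code-block:: cpp

    struct Pet {
        Pet(const std::string &name) : name(name) { }
        std::string name;
    };

    py::class_<Pet>(m, "Pet")
        .def(py::init<const std::string &>())
        .def_readwrite("name", &Pet::name);
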
.. rubric:: 2. Wrapper in C++, native type in Python
This is the exact opposite situation. Now, we have a type which is native to
Python, like a ``tuple`` or a ``list``. One way to get this data into C++ is
with the :class:`py::object` family of wrappers. These are explained in more
detail in the :doc:`/advanced/pycpp/object` section. We'll just give a quick
example here:
.. code-block:: cpp
void print_list(py::list my_list) {
for (auto item : my_list)
std::cout << item << " ";
}
.. code-block:: pycon
>>> print_list([1, 2, 3])
1 2 3
The Python ``list`` is not converted in any way -- it's just wrapped in a C++
:class:`py::list` class. At its core it's still a Python object. Copying a
:class:`py::list` will do the usual reference-counting like in Python.
Returning the object to Python will just remove the thin wrapper.
.. rubric:: 3. Converting between native C++ and Python types
In the previous two cases we had a native type in one language and a wrapper in
the other. Now, we have native types on both sides and we convert between them.
.. code-block:: cpp
void print_vector(const std::vector<int> &v) {
for (auto item : v)
std::cout << item << "\n";
}
.. code-block:: pycon
>>> print_vector([1, 2, 3])
1 2 3
In this case, pybind11 will construct a new ``std::vector<int>`` and copy each
element from the Python ``list``. The newly constructed object will be passed
to ``print_vector``. The same thing happens in the other direction: a new
``list`` is made to match the value returned from C++.
Lots of these conversions are supported out of the box, as shown in the table
below. They are very convenient, but keep in mind that these conversions are
fundamentally based on copying data. This is perfectly fine for small immutable
types but it may become quite expensive for large data structures. This can be
avoided by overriding the automatic conversion with a custom wrapper (i.e. the
above-mentioned approach 1). This requires some manual effort and more details
are available in the :ref:`opaque` section.
.. _conversion_table:
List of all builtin conversions
-------------------------------
The following basic data types are supported out of the box (some may require
an additional extension header to be included). To pass other data structures
as arguments and return values, refer to the section on binding :ref:`classes`.
+------------------------------------+---------------------------+-------------------------------+
| Data type | Description | Header file |
+====================================+===========================+===============================+
| ``int8_t``, ``uint8_t`` | 8-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``int16_t``, ``uint16_t`` | 16-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``int32_t``, ``uint32_t`` | 32-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``int64_t``, ``uint64_t`` | 64-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``ssize_t``, ``size_t`` | Platform-dependent size | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``float``, ``double`` | Floating point types | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``bool`` | Two-state Boolean type | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``char`` | Character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``char16_t`` | UTF-16 character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``char32_t`` | UTF-32 character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``wchar_t`` | Wide character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``const char *`` | UTF-8 string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``const char16_t *`` | UTF-16 string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``const char32_t *`` | UTF-32 string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``const wchar_t *`` | Wide string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::string`` | STL dynamic UTF-8 string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::u16string`` | STL dynamic UTF-16 string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::u32string`` | STL dynamic UTF-32 string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::wstring`` | STL dynamic wide string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::string_view``, | STL C++17 string views | :file:`pybind11/pybind11.h` |
| ``std::u16string_view``, etc. | | |
+------------------------------------+---------------------------+-------------------------------+
| ``std::pair<T1, T2>`` | Pair of two custom types | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::tuple<...>`` | Arbitrary tuple of types | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::reference_wrapper<...>`` | Reference type wrapper | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::complex<T>`` | Complex numbers | :file:`pybind11/complex.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::array<T, Size>`` | STL static array | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::vector<T>`` | STL dynamic array | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::deque<T>`` | STL double-ended queue | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::valarray<T>`` | STL value array | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::list<T>`` | STL linked list | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::map<T1, T2>`` | STL ordered map | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::unordered_map<T1, T2>`` | STL unordered map | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::set<T>`` | STL ordered set | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::unordered_set<T>`` | STL unordered set | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::optional<T>`` | STL optional type (C++17) | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::experimental::optional<T>`` | STL optional type (exp.) | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::variant<...>`` | Type-safe union (C++17) | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::function<...>`` | STL polymorphic function | :file:`pybind11/functional.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::chrono::duration<...>`` | STL time duration | :file:`pybind11/chrono.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``std::chrono::time_point<...>`` | STL date/time | :file:`pybind11/chrono.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``Eigen::Matrix<...>`` | Eigen: dense matrix | :file:`pybind11/eigen.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``Eigen::Map<...>`` | Eigen: mapped memory | :file:`pybind11/eigen.h` |
+------------------------------------+---------------------------+-------------------------------+
| ``Eigen::SparseMatrix<...>`` | Eigen: sparse matrix | :file:`pybind11/eigen.h` |
+------------------------------------+---------------------------+-------------------------------+
| ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/docs/advanced/cast/overview.rst | overview.rst |
#!/usr/bin/env python3
#
# Extract documentation comments from C++ header files via libclang and emit
# them as C string constants (used to attach docstrings to pybind11 bindings).
import os
import sys
import platform
import re
import textwrap
from clang import cindex
from clang.cindex import CursorKind
from collections import OrderedDict
from glob import glob
from threading import Thread, Semaphore
from multiprocessing import cpu_count
RECURSE_LIST = [
CursorKind.TRANSLATION_UNIT,
CursorKind.NAMESPACE,
CursorKind.CLASS_DECL,
CursorKind.STRUCT_DECL,
CursorKind.ENUM_DECL,
CursorKind.CLASS_TEMPLATE
]
PRINT_LIST = [
CursorKind.CLASS_DECL,
CursorKind.STRUCT_DECL,
CursorKind.ENUM_DECL,
CursorKind.ENUM_CONSTANT_DECL,
CursorKind.CLASS_TEMPLATE,
CursorKind.FUNCTION_DECL,
CursorKind.FUNCTION_TEMPLATE,
CursorKind.CONVERSION_FUNCTION,
CursorKind.CXX_METHOD,
CursorKind.CONSTRUCTOR,
CursorKind.FIELD_DECL
]
PREFIX_BLACKLIST = [
CursorKind.TRANSLATION_UNIT
]
CPP_OPERATORS = {
'<=': 'le', '>=': 'ge', '==': 'eq', '!=': 'ne', '[]': 'array',
'+=': 'iadd', '-=': 'isub', '*=': 'imul', '/=': 'idiv', '%=':
'imod', '&=': 'iand', '|=': 'ior', '^=': 'ixor', '<<=': 'ilshift',
'>>=': 'irshift', '++': 'inc', '--': 'dec', '<<': 'lshift', '>>':
'rshift', '&&': 'land', '||': 'lor', '!': 'lnot', '~': 'bnot',
'&': 'band', '|': 'bor', '+': 'add', '-': 'sub', '*': 'mul', '/':
'div', '%': 'mod', '<': 'lt', '>': 'gt', '=': 'assign', '()': 'call'
}
CPP_OPERATORS = OrderedDict(
sorted(CPP_OPERATORS.items(), key=lambda t: -len(t[0])))
job_count = cpu_count()
job_semaphore = Semaphore(job_count)
class NoFilenamesError(ValueError):
pass
def d(s):
return s if isinstance(s, str) else s.decode('utf8')
def sanitize_name(name):
name = re.sub(r'type-parameter-0-([0-9]+)', r'T\1', name)
for k, v in CPP_OPERATORS.items():
name = name.replace('operator%s' % k, 'operator_%s' % v)
name = re.sub('<.*>', '', name)
name = ''.join([ch if ch.isalnum() else '_' for ch in name])
name = re.sub('_$', '', re.sub('_+', '_', name))
return '__doc_' + name
def process_comment(comment):
    """Strip C++ comment markers and convert Doxygen/HTML markup into re-flowed docstring text."""
    result = ''
# Remove C++ comment syntax
leading_spaces = float('inf')
for s in comment.expandtabs(tabsize=4).splitlines():
s = s.strip()
if s.startswith('/*'):
s = s[2:].lstrip('*')
elif s.endswith('*/'):
s = s[:-2].rstrip('*')
elif s.startswith('///'):
s = s[3:]
if s.startswith('*'):
s = s[1:]
if len(s) > 0:
leading_spaces = min(leading_spaces, len(s) - len(s.lstrip()))
result += s + '\n'
if leading_spaces != float('inf'):
result2 = ""
for s in result.splitlines():
result2 += s[leading_spaces:] + '\n'
result = result2
# Doxygen tags
    cpp_group = r'([\w:]+)'
    param_group = r'([\[\w:\]]+)'
s = result
s = re.sub(r'\\c\s+%s' % cpp_group, r'``\1``', s)
s = re.sub(r'\\a\s+%s' % cpp_group, r'*\1*', s)
s = re.sub(r'\\e\s+%s' % cpp_group, r'*\1*', s)
s = re.sub(r'\\em\s+%s' % cpp_group, r'*\1*', s)
s = re.sub(r'\\b\s+%s' % cpp_group, r'**\1**', s)
s = re.sub(r'\\ingroup\s+%s' % cpp_group, r'', s)
s = re.sub(r'\\param%s?\s+%s' % (param_group, cpp_group),
r'\n\n$Parameter ``\2``:\n\n', s)
s = re.sub(r'\\tparam%s?\s+%s' % (param_group, cpp_group),
r'\n\n$Template parameter ``\2``:\n\n', s)
for in_, out_ in {
'return': 'Returns',
'author': 'Author',
'authors': 'Authors',
'copyright': 'Copyright',
'date': 'Date',
'remark': 'Remark',
'sa': 'See also',
'see': 'See also',
'extends': 'Extends',
'throw': 'Throws',
'throws': 'Throws'
}.items():
s = re.sub(r'\\%s\s*' % in_, r'\n\n$%s:\n\n' % out_, s)
s = re.sub(r'\\details\s*', r'\n\n', s)
s = re.sub(r'\\brief\s*', r'', s)
s = re.sub(r'\\short\s*', r'', s)
s = re.sub(r'\\ref\s*', r'', s)
s = re.sub(r'\\code\s?(.*?)\s?\\endcode',
r"```\n\1\n```\n", s, flags=re.DOTALL)
# HTML/TeX tags
s = re.sub(r'<tt>(.*?)</tt>', r'``\1``', s, flags=re.DOTALL)
s = re.sub(r'<pre>(.*?)</pre>', r"```\n\1\n```\n", s, flags=re.DOTALL)
s = re.sub(r'<em>(.*?)</em>', r'*\1*', s, flags=re.DOTALL)
s = re.sub(r'<b>(.*?)</b>', r'**\1**', s, flags=re.DOTALL)
s = re.sub(r'\\f\$(.*?)\\f\$', r'$\1$', s, flags=re.DOTALL)
s = re.sub(r'<li>', r'\n\n* ', s)
s = re.sub(r'</?ul>', r'', s)
s = re.sub(r'</li>', r'\n\n', s)
s = s.replace('``true``', '``True``')
s = s.replace('``false``', '``False``')
# Re-flow text
wrapper = textwrap.TextWrapper()
wrapper.expand_tabs = True
wrapper.replace_whitespace = True
wrapper.drop_whitespace = True
wrapper.width = 70
wrapper.initial_indent = wrapper.subsequent_indent = ''
result = ''
in_code_segment = False
for x in re.split(r'(```)', s):
if x == '```':
if not in_code_segment:
result += '```\n'
else:
result += '\n```\n\n'
in_code_segment = not in_code_segment
elif in_code_segment:
result += x.strip()
else:
for y in re.split(r'(?: *\n *){2,}', x):
wrapped = wrapper.fill(re.sub(r'\s+', ' ', y).strip())
if len(wrapped) > 0 and wrapped[0] == '$':
result += wrapped[1:] + '\n'
wrapper.initial_indent = \
wrapper.subsequent_indent = ' ' * 4
else:
if len(wrapped) > 0:
result += wrapped + '\n\n'
wrapper.initial_indent = wrapper.subsequent_indent = ''
return result.rstrip().lstrip('\n')
def extract(filename, node, prefix, output):
    """Recursively walk the AST rooted at node, appending (name, filename, comment) tuples to output."""
if not (node.location.file is None or
os.path.samefile(d(node.location.file.name), filename)):
return 0
if node.kind in RECURSE_LIST:
sub_prefix = prefix
if node.kind not in PREFIX_BLACKLIST:
if len(sub_prefix) > 0:
sub_prefix += '_'
sub_prefix += d(node.spelling)
for i in node.get_children():
extract(filename, i, sub_prefix, output)
if node.kind in PRINT_LIST:
comment = d(node.raw_comment) if node.raw_comment is not None else ''
comment = process_comment(comment)
sub_prefix = prefix
if len(sub_prefix) > 0:
sub_prefix += '_'
if len(node.spelling) > 0:
name = sanitize_name(sub_prefix + d(node.spelling))
output.append((name, filename, comment))
class ExtractionThread(Thread):
def __init__(self, filename, parameters, output):
Thread.__init__(self)
self.filename = filename
self.parameters = parameters
self.output = output
job_semaphore.acquire()
def run(self):
print('Processing "%s" ..' % self.filename, file=sys.stderr)
try:
index = cindex.Index(
cindex.conf.lib.clang_createIndex(False, True))
tu = index.parse(self.filename, self.parameters)
extract(self.filename, tu.cursor, '', self.output)
finally:
job_semaphore.release()
def read_args(args):
    """Split the command line arguments into compiler parameters and input filenames."""
parameters = []
filenames = []
if "-x" not in args:
parameters.extend(['-x', 'c++'])
if not any(it.startswith("-std=") for it in args):
parameters.append('-std=c++11')
if platform.system() == 'Darwin':
dev_path = '/Applications/Xcode.app/Contents/Developer/'
lib_dir = dev_path + 'Toolchains/XcodeDefault.xctoolchain/usr/lib/'
sdk_dir = dev_path + 'Platforms/MacOSX.platform/Developer/SDKs'
libclang = lib_dir + 'libclang.dylib'
if os.path.exists(libclang):
cindex.Config.set_library_path(os.path.dirname(libclang))
if os.path.exists(sdk_dir):
sysroot_dir = os.path.join(sdk_dir, next(os.walk(sdk_dir))[1][0])
parameters.append('-isysroot')
parameters.append(sysroot_dir)
elif platform.system() == 'Linux':
# clang doesn't find its own base includes by default on Linux,
# but different distros install them in different paths.
# Try to autodetect, preferring the highest numbered version.
def clang_folder_version(d):
return [int(ver) for ver in re.findall(r'(?<!lib)(?<!\d)\d+', d)]
clang_include_dir = max((
path
for libdir in ['lib64', 'lib', 'lib32']
for path in glob('/usr/%s/clang/*/include' % libdir)
if os.path.isdir(path)
), default=None, key=clang_folder_version)
if clang_include_dir:
parameters.extend(['-isystem', clang_include_dir])
for item in args:
if item.startswith('-'):
parameters.append(item)
else:
filenames.append(item)
if len(filenames) == 0:
raise NoFilenamesError("args parameter did not contain any filenames")
return parameters, filenames
def extract_all(args):
parameters, filenames = read_args(args)
output = []
for filename in filenames:
thr = ExtractionThread(filename, parameters, output)
thr.start()
print('Waiting for jobs to finish ..', file=sys.stderr)
for i in range(job_count):
job_semaphore.acquire()
return output
def write_header(comments, out_file=sys.stdout):
    """Write the generated header (DOC macros plus docstring string constants) to out_file."""
print('''/*
This file contains docstrings for the Python bindings.
Do not edit! These were automatically extracted by mkdoc.py
*/
#define __EXPAND(x) x
#define __COUNT(_1, _2, _3, _4, _5, _6, _7, COUNT, ...) COUNT
#define __VA_SIZE(...) __EXPAND(__COUNT(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1))
#define __CAT1(a, b) a ## b
#define __CAT2(a, b) __CAT1(a, b)
#define __DOC1(n1) __doc_##n1
#define __DOC2(n1, n2) __doc_##n1##_##n2
#define __DOC3(n1, n2, n3) __doc_##n1##_##n2##_##n3
#define __DOC4(n1, n2, n3, n4) __doc_##n1##_##n2##_##n3##_##n4
#define __DOC5(n1, n2, n3, n4, n5) __doc_##n1##_##n2##_##n3##_##n4##_##n5
#define __DOC6(n1, n2, n3, n4, n5, n6) __doc_##n1##_##n2##_##n3##_##n4##_##n5##_##n6
#define __DOC7(n1, n2, n3, n4, n5, n6, n7) __doc_##n1##_##n2##_##n3##_##n4##_##n5##_##n6##_##n7
#define DOC(...) __EXPAND(__EXPAND(__CAT2(__DOC, __VA_SIZE(__VA_ARGS__)))(__VA_ARGS__))
#if defined(__GNUG__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
#endif
''', file=out_file)
name_ctr = 1
name_prev = None
for name, _, comment in list(sorted(comments, key=lambda x: (x[0], x[1]))):
if name == name_prev:
name_ctr += 1
name = name + "_%i" % name_ctr
else:
name_prev = name
name_ctr = 1
print('\nstatic const char *%s =%sR"doc(%s)doc";' %
(name, '\n' if '\n' in comment else ' ', comment), file=out_file)
print('''
#if defined(__GNUG__)
#pragma GCC diagnostic pop
#endif
''', file=out_file)
def mkdoc(args):
    """Entry point: handle the -o flag, extract all comments and write the generated header."""
args = list(args)
out_path = None
for idx, arg in enumerate(args):
if arg.startswith("-o"):
args.remove(arg)
try:
out_path = arg[2:] or args.pop(idx)
except IndexError:
print("-o flag requires an argument")
exit(-1)
break
comments = extract_all(args)
if out_path:
try:
with open(out_path, 'w') as out_file:
write_header(comments, out_file)
except:
# In the event of an error, don't leave a partially-written
# output file.
try:
os.unlink(out_path)
except:
pass
raise
else:
write_header(comments)
if __name__ == '__main__':
try:
mkdoc(sys.argv[1:])
except NoFilenamesError:
print('Syntax: %s [.. a list of header files ..]' % sys.argv[0])
exit(-1) | ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/tools/mkdoc.py | mkdoc.py |
#!/bin/bash
#
# Check include/, tests/ and docs/ sources for common style problems
# (tabs, CRLF line endings, trailing whitespace and brace placement).
check_style_errors=0
IFS=$'\n'
found="$( GREP_COLORS='mt=41' GREP_COLOR='41' grep $'\t' include tests/*.{cpp,py,h} docs/*.rst -rn --color=always )"
if [ -n "$found" ]; then
# The mt=41 sets a red background for matched tabs:
echo -e '\033[31;01mError: found tab characters in the following files:\033[0m'
check_style_errors=1
echo "$found" | sed -e 's/^/ /'
fi
found="$( grep -IUlr $'\r' include tests/*.{cpp,py,h} docs/*.rst --color=always )"
if [ -n "$found" ]; then
echo -e '\033[31;01mError: found CRLF characters in the following files:\033[0m'
check_style_errors=1
echo "$found" | sed -e 's/^/ /'
fi
found="$(GREP_COLORS='mt=41' GREP_COLOR='41' grep '[[:blank:]]\+$' include tests/*.{cpp,py,h} docs/*.rst -rn --color=always )"
if [ -n "$found" ]; then
# The mt=41 sets a red background for matched trailing spaces
echo -e '\033[31;01mError: found trailing spaces in the following files:\033[0m'
check_style_errors=1
echo "$found" | sed -e 's/^/ /'
fi
found="$(grep '\<\(if\|for\|while\|catch\)(\|){' include tests/*.{cpp,h} -rn --color=always)"
if [ -n "$found" ]; then
echo -e '\033[31;01mError: found the following coding style problems:\033[0m'
check_style_errors=1
echo "$found" | sed -e 's/^/ /'
fi
found="$(awk '
function prefix(filename, lineno) {
return " \033[35m" filename "\033[36m:\033[32m" lineno "\033[36m:\033[0m"
}
function mark(pattern, string) { sub(pattern, "\033[01;31m&\033[0m", string); return string }
last && /^\s*{/ {
print prefix(FILENAME, FNR-1) mark("\\)\\s*$", last)
print prefix(FILENAME, FNR) mark("^\\s*{", $0)
last=""
}
{ last = /(if|for|while|catch|switch)\s*\(.*\)\s*$/ ? $0 : "" }
' $(find include -type f) tests/*.{cpp,h} docs/*.rst)"
if [ -n "$found" ]; then
check_style_errors=1
echo -e '\033[31;01mError: braces should occur on the same line as the if/while/.. statement. Found issues in the following files:\033[0m'
echo "$found"
fi
exit $check_style_errors | ACTIONet | /ACTIONet-0.1.1.tar.gz/ACTIONet-0.1.1/pybind11/tools/check-style.sh | check-style.sh |
import re
import subprocess
import time
from pathlib import Path
import pyperclip
schema = (
r"(aws_access_key_id = )[^\n]+\n"
r"(aws_secret_access_key = )[^\n]+\n"
r"(aws_session_token = )[^\n]+"
)
def main():
    """Poll the clipboard and update the AWS credentials file whenever new credentials are copied."""
    recent_value = pyperclip.paste()
while True:
tmp_value = pyperclip.paste()
if tmp_value != recent_value:
update_credentials(Path.home() / ".aws" / "credentials", tmp_value)
recent_value = tmp_value
time.sleep(0.5)
def update_credentials(credentials_file: Path, new_credentials: str):
    """Validate the copied credentials and replace the matching account block, appending it if absent."""
new_credentials_match = re.fullmatch(
re.compile(r"(?P<account>\[\d{12}_\w+\])\n%s" % schema), new_credentials
)
if new_credentials_match:
try:
with open(credentials_file.as_posix(), "r") as f:
file_content = f.read()
except FileNotFoundError:
append_to_file(credentials_file, new_credentials)
return
old_credentials_match = re.search(
re.compile(
r"(%s)\n%s" % (re.escape(new_credentials_match["account"]), schema)
),
file_content,
)
if old_credentials_match:
write_to_file(
credentials_file,
new_credentials,
old_credentials_match[0],
file_content,
)
else:
append_to_file(credentials_file, new_credentials)
def write_to_file(
credentials_file: Path,
new_credentials: str,
old_credentials: str,
file_content: str,
):
with open(credentials_file.as_posix(), "w") as f:
f.write(file_content.replace(old_credentials, new_credentials))
display_notification("Existing credentials updated.")
def append_to_file(credentials_file: Path, credentials: str):
with open(credentials_file.as_posix(), "a") as f:
f.write(f"\n{credentials}\n")
display_notification("New credentials added.")
def display_notification(message: str):
try:
subprocess.run(["notify-send", "ACU", message])
except FileNotFoundError:
# notify-send may not be installed
pass | ACU | /ACU-0.2.1-py3-none-any.whl/watcher/watcher.py | watcher.py |
A command line tool to run your code against sample test cases. Without leaving the terminal :)
Supported sites
^^^^^^^^^^^^^^^
- Codeforces
- Codechef
- Spoj
- Hackerrank
- Atcoder
Supported languages
^^^^^^^^^^^^^^^^^^^
- C
- C++
- Python
- Java
- Ruby
- Haskell
Installation
^^^^^^^^^^^^
Build from source
'''''''''''''''''
- ``git clone https://github.com/coderick14/ACedIt``
- ``cd ACedIt``
- ``python setup.py install``
As a Python package
'''''''''''''''''''
::
pip install --user ACedIt
Usage
^^^^^
::
usage: acedit [-h] [-s {codeforces,codechef,hackerrank,spoj}] [-c CONTEST]
[-p PROBLEM] [-f] [--run SOURCE_FILE]
[--set-default-site {codeforces,codechef,hackerrank,spoj}]
[--set-default-contest DEFAULT_CONTEST]
optional arguments:
-h, --help show this help message and exit
-s {codeforces,codechef,hackerrank,spoj}, --site {codeforces,codechef,hackerrank,spoj}
The competitive programming platform, e.g. codeforces,
codechef etc
-c CONTEST, --contest CONTEST
The name of the contest, e.g. JUNE17, LTIME49, COOK83
etc
-p PROBLEM, --problem PROBLEM
The problem code, e.g. OAK, PRMQ etc
-f, --force Force download the test cases, even if they are cached
--run SOURCE_FILE Name of source file to be run
--set-default-site {codeforces,codechef,hackerrank,spoj}
Name of default site to be used when -s flag is not
specified
--set-default-contest DEFAULT_CONTEST
Name of default contest to be used when -c flag is not
specified
--clear-cache Clear cached test cases for a given site. Takes
default site if -s flag is omitted
During installation, the default site is set to ``codeforces``. You
can change it anytime using the above mentioned flags.
Examples
^^^^^^^^
- Fetch test cases for a single problem
::
acedit -s codechef -c AUG17 -p CHEFFA
- Fetch test cases for all problems in a contest
::
acedit -s codechef -c AUG17
- Force download test cases, even when they are cached
::
acedit -s codeforces -c 86 -p D -f
- Test your code (when the default site and default contest are set and the filename is the same as the problem code)
::
acedit --run D.cpp
::
acedit --run CHEFFA.py
**Since your filename is the same as the problem code, there's no need for the -p flag.**
- Test your code (specifying contest and problem codes explicitly)
::
acedit --run solve.cpp -c 835 -p D
::
acedit --run test.py -s codechef -c AUG17 -p CHEFFA
Note:
''''''
- The working directory structure mentioned in the previous versions is no longer required or supported.
- There might be some issues with Spoj, as they have widely varying DOM trees for different problems. Feel free to contribute on this. Or anything else that you can come up with :) | ACedIt | /ACedIt-1.2.1.tar.gz/ACedIt-1.2.1/README.rst | README.rst |
import sys
import json
import re
import os
import functools
import platform
import threading
try:
from bs4 import BeautifulSoup as bs
import requests as rq
from argparse import ArgumentParser
except:
err = """
You haven't installed the required dependencies.
Run 'python setup.py install' to install the dependencies.
"""
print(err)
sys.exit(0)
class Utilities:
cache_dir = os.path.join(os.path.expanduser('~'), '.cache', 'ACedIt')
colors = {
'GREEN': '\033[92m',
'YELLOW': '\033[93m',
'RED': '\033[91m',
'ENDC': '\033[0m',
'BOLD': '\033[1m',
}
@staticmethod
def parse_flags(supported_sites):
"""
Utility function to parse command line flags
"""
parser = ArgumentParser()
parser.add_argument('-s', '--site',
dest='site',
choices=supported_sites,
help='The competitive programming platform, e.g. codeforces, codechef etc')
parser.add_argument('-c', '--contest',
dest='contest',
help='The name of the contest, e.g. JUNE17, LTIME49, COOK83 etc')
parser.add_argument('-p', '--problem',
dest='problem',
help='The problem code, e.g. OAK, PRMQ etc')
parser.add_argument('-f', '--force',
dest='force',
action='store_true',
help='Force download the test cases, even if they are cached')
parser.add_argument('--run',
dest='source_file',
help='Name of source file to be run')
parser.add_argument('--set-default-site',
dest='default_site',
choices=supported_sites,
help='Name of default site to be used when -s flag is not specified')
parser.add_argument('--set-default-contest',
dest='default_contest',
help='Name of default contest to be used when -c flag is not specified')
parser.add_argument('--clear-cache',
dest='clear_cache',
action='store_true',
help='Clear cached test cases for a given site. Takes default site if -s flag is omitted')
parser.set_defaults(force=False, clear_cache=False)
args = parser.parse_args()
flags = {}
if args.site is None or args.contest is None:
import json
site, contest = None, None
try:
with open(os.path.join(Utilities.cache_dir, 'constants.json'), 'r') as f:
data = f.read()
data = json.loads(data)
site = data.get(
'default_site', None) if args.site is None else args.site
contest = data.get(
'default_contest', None) if args.contest is None else args.contest
except:
pass
flags['site'] = site
flags['contest'] = contest if not site == 'spoj' else None
else:
flags['site'] = args.site
flags['contest'] = args.contest
flags['problem'] = args.problem
flags['force'] = args.force
flags['clear_cache'] = args.clear_cache
flags['source'] = args.source_file
flags['default_site'] = args.default_site
flags['default_contest'] = args.default_contest
return flags
@staticmethod
def set_constants(key, value):
"""
Utility method to set default site and contest
"""
with open(os.path.join(Utilities.cache_dir, 'constants.json'), 'r+') as f:
data = f.read()
data = json.loads(data)
data[key] = value
f.seek(0)
f.write(json.dumps(data, indent=2))
f.truncate()
print('Set %s to %s' % (key, value))
@staticmethod
def check_cache(site, contest, problem):
"""
Method to check if the test cases already exist in cache
If not, create the directory structure to store test cases
"""
if problem is None:
if not os.path.isdir(os.path.join(Utilities.cache_dir, site, contest)):
os.makedirs(os.path.join(Utilities.cache_dir, site, contest))
return False
# Handle case for SPOJ specially as it does not have contests
contest = '' if site == 'spoj' else contest
if os.path.isdir(os.path.join(Utilities.cache_dir, site, contest, problem)):
return True
else:
os.makedirs(os.path.join(Utilities.cache_dir, site,
contest, problem))
return False
@staticmethod
def clear_cache(site):
"""
Method to clear cached test cases
"""
confirm = input(
'Remove entire cache for site %s? (y/N) : ' % (site))
if confirm == 'y':
from shutil import rmtree
try:
rmtree(os.path.join(Utilities.cache_dir, site))
except:
                print('Some error occurred. Try again.')
return
os.makedirs(os.path.join(Utilities.cache_dir, site))
print('Done.')
@staticmethod
def store_files(site, contest, problem, inputs, outputs):
"""
Method to store the test cases in files
"""
# Handle case for SPOJ specially as it does not have contests
contest = '' if site == 'spoj' else contest
for i, inp in enumerate(inputs):
filename = os.path.join(
Utilities.cache_dir, site, contest, problem, 'Input' + str(i))
with open(filename, 'w') as handler:
handler.write(inp)
for i, out in enumerate(outputs):
filename = os.path.join(
Utilities.cache_dir, site, contest, problem, 'Output' + str(i))
with open(filename, 'w') as handler:
handler.write(out)
@staticmethod
def download_problem_testcases(args):
"""
Download test cases for a given problem
"""
if args['site'] == 'codeforces':
platform = Codeforces(args)
elif args['site'] == 'codechef':
platform = Codechef(args)
elif args['site'] == 'spoj':
platform = Spoj(args)
elif args['site'] == 'atcoder':
platform = AtCoder(args)
else:
platform = Hackerrank(args)
is_in_cache = Utilities.check_cache(
platform.site, platform.contest, platform.problem)
if not args['force'] and is_in_cache:
print('Test cases found in cache...')
sys.exit(0)
platform.scrape_problem()
@staticmethod
def download_contest_testcases(args):
"""
Download test cases for all problems in a given contest
"""
if args['site'] == 'codeforces':
platform = Codeforces(args)
elif args['site'] == 'codechef':
platform = Codechef(args)
elif args['site'] == 'hackerrank':
platform = Hackerrank(args)
elif args['site'] == 'atcoder':
platform = AtCoder(args)
Utilities.check_cache(
platform.site, platform.contest, platform.problem)
platform.scrape_contest()
@staticmethod
def input_file_to_string(path, num_cases):
"""
Method to return sample inputs as a list
"""
inputs = []
for i in range(num_cases):
with open(os.path.join(path, 'Input' + str(i)), 'r') as fh:
inputs += [fh.read()]
return inputs
@staticmethod
def cleanup(num_cases, basename, extension):
"""
Method to clean up temporarily created files
"""
for i in range(num_cases):
if os.path.isfile('temp_output' + str(i)):
os.remove('temp_output' + str(i))
if extension == 'java':
os.system('rm ' + basename + '*.class')
if extension == 'cpp':
os.system('rm ' + basename)
@staticmethod
def handle_kbd_interrupt(site, contest, problem):
"""
Method to handle keyboard interrupt
"""
from shutil import rmtree
print('Cleaning up...')
# Handle case for SPOJ specially as it does not have contests
contest = '' if site == 'spoj' else contest
if problem is not None:
path = os.path.join(Utilities.cache_dir, site, contest, problem)
if os.path.isdir(path):
rmtree(path)
else:
path = os.path.join(Utilities.cache_dir, site, contest)
if os.path.isdir(path):
rmtree(path)
print('Done. Exiting gracefully.')
@staticmethod
def run_solution(args):
"""
Method to run and test the user's solution against sample cases
"""
problem = args['source']
extension = problem.split('.')[-1]
problem = problem.split('.')[0]
basename = problem.split('/')[-1]
problem_path = os.path.join(os.getcwd(), problem)
if not os.path.isfile(problem_path + '.' + extension):
print('ERROR : No such file')
sys.exit(0)
problem_code = args['problem'] if args['problem'] else basename
contest_code = '' if args['site'] == 'spoj' else args['contest']
testcases_path = os.path.join(Utilities.cache_dir, args[
'site'], contest_code, problem_code)
if os.path.isdir(testcases_path):
num_cases = len(os.listdir(testcases_path)) // 2
results, expected_outputs, user_outputs = [], [], []
if extension in ['c', 'cpp', 'java', 'py', 'hs', 'rb']:
# Compiler flags taken from http://codeforces.com/blog/entry/79
compiler = {
'hs': 'ghc --make -O -dynamic -o ' + basename,
'py': None,
'rb': None,
'c': 'gcc -static -DONLINE_JUDGE -fno-asm -lm -s -O2 -o ' + basename,
'cpp': 'g++ -static -DONLINE_JUDGE -lm -s -x c++ -O2 -std=c++14 -o ' + basename,
'java': 'javac -d .'
}[extension]
execute_command = {
'py': 'python \'' + problem_path + '.' + extension + '\'',
'rb': 'ruby \'' + problem_path + '.' + extension + '\'',
'hs': './' + basename,
'c': './' + basename,
'cpp': './' + basename,
'java': 'java -DONLINE_JUDGE=true -Duser.language=en -Duser.region=US -Duser.variant=US ' + basename
}[extension]
if compiler is None:
compile_status = 0
else:
compile_status = os.system(
compiler + ' \'' + problem_path + '.' + extension + '\'')
if compile_status == 0:
# Compiled successfully
timeout_command = 'timeout' if platform.system() == 'Linux' else 'gtimeout'
for i in range(num_cases):
status = os.system(timeout_command + ' 2s ' + execute_command + ' < ' + os.path.join(
testcases_path, 'Input' + str(i)) + ' > temp_output' + str(i))
with open(os.path.join(testcases_path, 'Output' + str(i)), 'r') as out_handler:
expected_output = out_handler.read().strip().split('\n')
expected_output = '\n'.join(
[line.strip() for line in expected_output])
expected_outputs += [expected_output]
if status == 31744:
# Time Limit Exceeded
results += [Utilities.colors['BOLD'] + Utilities.colors[
'YELLOW'] + 'TLE' + Utilities.colors['ENDC']]
user_outputs += ['']
elif status == 0:
# Ran successfully
with open('temp_output' + str(i), 'r') as temp_handler:
user_output = temp_handler.read().strip().split('\n')
user_output = '\n'.join(
[line.strip() for line in user_output])
user_outputs += [user_output]
if expected_output == user_output:
# All Correct
results += [Utilities.colors['BOLD'] + Utilities.colors[
'GREEN'] + 'AC' + Utilities.colors['ENDC']]
else:
# Wrong Answer
results += [Utilities.colors['BOLD'] + Utilities.colors[
'RED'] + 'WA' + Utilities.colors['ENDC']]
else:
# Runtime Error
results += [Utilities.colors['BOLD'] +
Utilities.colors['RED'] + 'RTE' + Utilities.colors['ENDC']]
user_outputs += ['']
else:
# Compilation error occurred
message = Utilities.colors['BOLD'] + Utilities.colors[
'RED'] + 'Compilation error. Not run against test cases' + Utilities.colors['ENDC'] + '.'
print(message)
sys.exit(0)
else:
print('Supports only C, C++, Python, Java, Ruby and Haskell as of now.')
sys.exit(0)
from terminaltables import AsciiTable
table_data = [['Serial No', 'Input',
'Expected Output', 'Your Output', 'Result']]
inputs = Utilities.input_file_to_string(testcases_path, num_cases)
for i in range(num_cases):
row = [
i + 1,
inputs[i],
expected_outputs[i],
user_outputs[i] if any(sub in results[i]
for sub in ['AC', 'WA']) else 'N/A',
results[i]
]
table_data.append(row)
table = AsciiTable(table_data)
print(table.table)
# Clean up temporary files
Utilities.cleanup(num_cases, basename, extension)
else:
print('Test cases not found locally...')
args['problem'] = problem_code
args['force'] = True
args['source'] = problem + '.' + extension
Utilities.download_problem_testcases(args)
print('Running your solution against sample cases...')
Utilities.run_solution(args)
@staticmethod
def get_html(url):
"""
Utility function get the html content of an url
"""
sys.setrecursionlimit(10000)
MAX_TRIES = 3
try:
for try_count in range(MAX_TRIES):
r = rq.get(url)
if r.status_code == 200:
break
            if r.status_code != 200:
print('Could not fetch content. Please try again.')
sys.exit(0)
except Exception as e:
print('Please check your internet connection and try again.')
sys.exit(0)
return r
class Platform:
"""
Base class for platforms
"""
def __init__(self, args):
self.site = args['site']
self.contest = args['contest']
self.force_download = args['force']
self.responses = []
self.lock = threading.Lock()
def get_problem_name(self, response):
return response.url.split('/')[-1]
def build_problem_url(self):
raise NotImplementedError
def parse_html(self):
raise NotImplementedError
def scrape_problem(self):
"""
Method to scrape a single problem
"""
contest = '' if self.site == 'spoj' else self.contest
print('Fetching problem %s-%s from %s...' % (contest, self.problem, self.site))
req = Utilities.get_html(self.build_problem_url())
inputs, outputs = self.parse_html(req)
Utilities.store_files(self.site, self.contest,
self.problem, inputs, outputs)
print('Done.')
def fetch_html(self, link):
r = rq.get(link)
with self.lock:
self.responses += [r]
def handle_batch_requests(self, links):
"""
Method to send simultaneous requests to all problem pages
"""
threads = [threading.Thread(target=self.fetch_html, args=(link,)) for link in links]
for t in threads:
t.start()
for t in threads:
t.join()
failed_requests = []
for response in self.responses:
if response is not None and response.status_code == 200:
inputs, outputs = self.parse_html(response)
self.problem = self.get_problem_name(response)
Utilities.check_cache(self.site, self.contest, self.problem)
Utilities.store_files(
self.site, self.contest, self.problem, inputs, outputs)
else:
failed_requests += [response.url]
return failed_requests
def scrape_contest(self):
"""
Method to scrape all problems from a given contest
"""
print('Checking problems available for contest %s-%s...' % (self.site, self.contest))
req = Utilities.get_html(self.build_contest_url())
links = self.get_problem_links(req)
print('Found %d problems..' % (len(links)))
if not self.force_download:
cached_problems = os.listdir(os.path.join(
Utilities.cache_dir, self.site, self.contest))
links = [link for link in links if link.split(
'/')[-1] not in cached_problems]
failed_requests = self.handle_batch_requests(links)
if len(failed_requests) > 0:
self.handle_batch_requests(failed_requests)
class Codeforces(Platform):
"""
Class to handle downloading of test cases from Codeforces
"""
def __init__(self, args):
self.problem = args['problem']
super(Codeforces, self).__init__(args)
def parse_html(self, req):
"""
Method to parse the html and get test cases
from a codeforces problem
"""
soup = bs(req.text, 'html.parser')
inputs = soup.findAll('div', {'class': 'input'})
outputs = soup.findAll('div', {'class': 'output'})
if len(inputs) == 0 or len(outputs) == 0:
print('Problem not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
repls = ('<br>', '\n'), ('<br/>', '\n'), ('</br>', '')
formatted_inputs, formatted_outputs = [], []
for inp in inputs:
pre = inp.find('pre').decode_contents()
pre = functools.reduce(lambda a, kv: a.replace(*kv), repls, pre)
pre = re.sub('<[^<]+?>', '', pre)
formatted_inputs += [pre]
for out in outputs:
pre = out.find('pre').decode_contents()
pre = functools.reduce(lambda a, kv: a.replace(*kv), repls, pre)
pre = re.sub('<[^<]+?>', '', pre)
formatted_outputs += [pre]
# print 'Inputs', formatted_inputs
# print 'Outputs', formatted_outputs
return formatted_inputs, formatted_outputs
def get_problem_links(self, req):
"""
Method to get the links for the problems
in a given codeforces contest
"""
soup = bs(req.text, 'html.parser')
table = soup.find('table', {'class': 'problems'})
if table is None:
print('Contest not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
links = ['http://codeforces.com' +
td.find('a')['href'] for td in table.findAll('td', {'class': 'id'})]
return links
def build_problem_url(self):
contest_type = 'contest' if int(self.contest) <= 100000 else 'gym'
return 'http://codeforces.com/%s/%s/problem/%s' % (contest_type, self.contest, self.problem)
def build_contest_url(self):
contest_type = 'contest' if int(self.contest) <= 100000 else 'gym'
return 'http://codeforces.com/%s/%s' % (contest_type, self.contest)
class Codechef(Platform):
"""
Class to handle downloading of test cases from Codechef
"""
def __init__(self, args):
self.problem = args['problem']
super(Codechef, self).__init__(args)
def _extract(self, data, marker):
data_low = data.lower()
extracts = []
idx = data_low.find(marker, 0)
while not idx == -1:
start = data_low.find('```', idx)
end = data_low.find('```', start + 3)
extracts += [data[start + 3:end]]
idx = data_low.find(marker, end)
return [extract.strip() for extract in extracts]
def parse_html(self, req):
"""
Method to parse the html and get test cases
from a codechef problem
"""
try:
data = str(json.loads(req.text)['body'])
except (KeyError, ValueError):
print('Problem not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
inputs = self._extract(data, 'example input')
outputs = self._extract(data, 'example output')
return inputs, outputs
def get_problem_links(self, req):
"""
Method to get the links for the problems
in a given codechef contest
"""
soup = bs(req.text, 'html.parser')
table = soup.find('table', {'class': 'dataTable'})
if table is None:
print('Contest not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
links = [div.find('a')['href']
for div in table.findAll('div', {'class': 'problemname'})]
links = ['https://codechef.com/api/contests/' + self.contest +
'/problems/' + link.split('/')[-1] for link in links]
return links
def build_problem_url(self):
return 'https://codechef.com/api/contests/%s/problems/%s' % (self.contest, self.problem)
def build_contest_url(self):
return 'https://codechef.com/%s' % self.contest
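    # build_problem_url hits CodeChef's JSON API (parse_html reads its 'body'
    # field), e.g. hypothetical contest "COOK100" and problem "XYZ" map to
    # https://codechef.com/api/contests/COOK100/problems/XYZ.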
class Spoj(Platform):
"""
Class to handle downloading of test cases from Spoj
"""
def __init__(self, args):
self.problem = args['problem'].upper()
super(Spoj, self).__init__(args)
def parse_html(self, req):
"""
Method to parse the html and get test cases
from a spoj problem
"""
soup = bs(req.text, 'html.parser')
test_cases = soup.findAll('pre')
if test_cases is None or len(test_cases) == 0:
print('Problem not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
formatted_inputs, formatted_outputs = [], []
input_list = [
'<pre>(.|\n|\r)*<b>Input:?</b>:?',
'<b>Output:?</b>(.|\n|\r)*'
]
output_list = [
'<pre>(.|\n|\r)*<b>Output:?</b>:?',
'</pre>'
]
input_regex = re.compile('(%s)' % '|'.join(input_list))
output_regex = re.compile('(%s)' % '|'.join(output_list))
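        # Each SPOJ <pre> block holds both samples: input_regex strips the text
        # up to the "Input" label plus everything from the "Output" label on,
        # leaving the sample input; output_regex does the converse, leaving
        # only the sample output.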
for case in test_cases:
inp = input_regex.sub('', str(case))
out = output_regex.sub('', str(case))
inp = re.sub('<[^<]+?>', '', inp)
out = re.sub('<[^<]+?>', '', out)
formatted_inputs += [inp.strip()]
formatted_outputs += [out.strip()]
# print 'Inputs', formatted_inputs
# print 'Outputs', formatted_outputs
return formatted_inputs, formatted_outputs
def build_problem_url(self):
return 'http://spoj.com/problems/%s' % self.problem
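    # SPOJ problems appear to live in a single archive, so unlike the other
    # platforms no contest URL builder or problem-link scraper is defined here;
    # the problem code is simply upper-cased in __init__.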
class Hackerrank(Platform):
"""
Class to handle downloading of test cases from Hackerrank
"""
def __init__(self, args):
self.problem = '-'.join(args['problem'].split()
).lower() if args['problem'] is not None else None
super(Hackerrank, self).__init__(args)
def parse_html(self, req):
"""
Method to parse the html and get test cases
from a hackerrank problem
"""
try:
data = json.loads(req.text)
soup = bs(data['model']['body_html'], 'html.parser')
except (KeyError, ValueError):
print('Problem not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
input_divs = soup.findAll('div', {'class': 'challenge_sample_input'})
output_divs = soup.findAll('div', {'class': 'challenge_sample_output'})
inputs = [input_div.find('pre') for input_div in input_divs]
outputs = [output_div.find('pre') for output_div in output_divs]
regex_list = [
'<pre>(<code>)?',
'(</code>)?</pre>'
]
regex = re.compile('(%s)' % '|'.join(regex_list))
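        # Some challenge bodies wrap each sample line in <span> tags; in that
        # case the spans are joined with newlines, otherwise the surrounding
        # <pre>/<code> markup is stripped with the regex above.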
formatted_inputs, formatted_outputs = [], []
for inp in inputs:
spans = inp.findAll('span')
if len(spans) > 0:
formatted_input = '\n'.join(
[span.decode_contents() for span in spans])
else:
formatted_input = regex.sub('', str(inp))
formatted_inputs += [formatted_input.strip()]
for out in outputs:
spans = out.findAll('span')
if len(spans) > 0:
formatted_output = '\n'.join(
[span.decode_contents() for span in spans])
else:
formatted_output = regex.sub('', str(out))
formatted_outputs += [formatted_output.strip()]
# print 'Inputs', formatted_inputs
# print 'Outputs', formatted_outputs
return formatted_inputs, formatted_outputs
def get_problem_links(self, req):
"""
Method to get the links for the problems
in a given hackerrank contest
"""
try:
data = json.loads(req.text)
data = data['models']
except (KeyError, ValueError):
print('Contest not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
links = ['https://www.hackerrank.com/rest/contests/' + self.contest +
'/challenges/' + problem['slug'] for problem in data]
return links
def build_problem_url(self):
return 'https://www.hackerrank.com/rest/contests/%s/challenges/%s' % (self.contest, self.problem)
def build_contest_url(self):
        return 'https://www.hackerrank.com/rest/contests/%s/challenges' % self.contest
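    # Both endpoints are part of HackerRank's REST API and return JSON, which
    # parse_html and get_problem_links above consume via json.loads.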
class AtCoder(Platform):
"""
    Class to handle downloading of test cases from AtCoder
"""
def __init__(self, args):
self.problem = args['problem']
super(AtCoder, self).__init__(args)
def parse_html(self, req):
"""
Method to parse the html and get test cases
        from an atcoder problem
"""
soup = bs(req.text, 'html.parser')
        inouts = soup.findAll('div', {'class': 'part'})
repls = ('<br>', '\n'), ('<br/>', '\n'), ('</br>', '')
formatted_inputs, formatted_outputs = [], []
inouts = filter((lambda x: x.find('section') and x.find('section').find('h3')), inouts)
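        # Only <div class="part"> sections that contain an <h3> heading survive
        # the filter above; AtCoder labels them in Japanese, 入力例 = sample
        # input and 出力例 = sample output.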
for inp in inouts:
if inp.find('section').find('h3').text[:3] == "入力例":
pre = inp.find('pre').decode_contents()
pre = functools.reduce(lambda a, kv: a.replace(*kv), repls, pre)
pre = re.sub('<[^<]+?>', '', pre)
pre = pre.replace("&", "&")
pre = pre.replace("<", "<")
pre = pre.replace(">", ">")
formatted_inputs += [pre]
if inp.find('section').find('h3').text[:3] == "出力例":
pre = inp.find('pre').decode_contents()
pre = functools.reduce(lambda a, kv: a.replace(*kv), repls, pre)
pre = re.sub('<[^<]+?>', '', pre)
pre = pre.replace("&", "&")
pre = pre.replace("<", "<")
pre = pre.replace(">", ">")
formatted_outputs += [pre]
return formatted_inputs, formatted_outputs
def get_problem_links(self, req):
"""
Method to get the links for the problems
in a given atcoder contest
"""
soup = bs(req.text, 'html.parser')
table = soup.find('tbody')
if table is None:
print('Contest not found..')
Utilities.handle_kbd_interrupt(
self.site, self.contest, self.problem)
sys.exit(0)
links = ['http://beta.atcoder.jp' +
td.find('a')['href'] for td in soup.findAll('td', {'class': 'text-center no-break'})]
return links
def get_problem_name(self, response):
"""
        Method to get the name (task letter) of a problem
        in a given atcoder contest
"""
soup = bs(response.text, 'html.parser')
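        # The page title is expected to begin with the task letter
        # (e.g. "A - ..."), so its first character, lowercased, is returned
        # as the problem name.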
return soup.find('title').get_text()[0].lower()
def build_problem_url(self):
return 'https://beta.atcoder.jp/contests/%s/tasks/%s' % (self.contest, self.problem)
def build_contest_url(self):
return 'https://beta.atcoder.jp/contests/%s/tasks/' % self.contest | ACedIt | /ACedIt-1.2.1.tar.gz/ACedIt-1.2.1/acedit/util.py | util.py |
######
Readme
######
Description
===========
PyAChemKit is a collection of Artificial Chemistry software written in Python: a library plus a set of command-line tools.
Artificial Chemistry (AChem) is a spin-off topic of Artificial Life. AChem studies how life can emerge from a
non-living environment, such as a primordial soup.
Installation
============
To install on Ubuntu Linux, run ::
sudo easy_install -U AChemKit
This package should work on other Linux distributions and versions of Windows, but is untested.
This package requires the following:
* Python >= 2.6 http://www.python.org/
Some features use the following:
* NetworkX
* GraphViz http://www.graphviz.org/
Optionally, the following can be installed to improve performance:
* Psyco http://psyco.sourceforge.net
* PyPy http://codespeak.net/pypy
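For example, on Ubuntu the third-party requirements above can typically be installed with (package names are indicative; adjust for your distribution)::
    sudo apt-get install graphviz
    sudo easy_install networkx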
Source
======
The latest version of the source code is available from https://github.com/afaulconbridge/PyAChemKit
The source code additionally requires the following:
* Sphinx >= 1.0 http://sphinx.pocoo.org/
* Graphviz http://www.graphviz.org/
* Make http://www.gnu.org/software/make/
* LaTeX http://www.latex-project.org/
* PyLint >=0.13.0 http://www.logilab.org/project/pylint/
* Coverage http://nedbatchelder.com/code/coverage/
For a Debian-based Linux distribution, e.g. Debian or Ubuntu, these can be installed / updated with::
make setup
(Note, LaTeX is not installed via this method because it is very large. Run ``sudo apt-get install texlive-full`` if you want to be able to compile the PDF documentation.)
There is a makefile that will run some useful tasks for you (generate documentation, test, benchmark). This can be accessed by running the following command::
make help
Copyright
=========
This project is licensed under a modified-BSD license. See the file ``COPYRIGHT`` for details.
| AChemKit | /AChemKit-0.3.0.tar.gz/AChemKit-0.3.0/README.txt | README.txt |
e=0,j=Object.prototype.toString,i=false,o=true;[0,0].sort(function(){o=false;return 0});var k=function(g,h,l,m){l=l||[];var q=h=h||s;if(h.nodeType!==1&&h.nodeType!==9)return[];if(!g||typeof g!=="string")return l;for(var p=[],v,t,y,S,H=true,M=x(h),I=g;(f.exec(""),v=f.exec(I))!==null;){I=v[3];p.push(v[1]);if(v[2]){S=v[3];break}}if(p.length>1&&r.exec(g))if(p.length===2&&n.relative[p[0]])t=ga(p[0]+p[1],h);else for(t=n.relative[p[0]]?[h]:k(p.shift(),h);p.length;){g=p.shift();if(n.relative[g])g+=p.shift();
t=ga(g,t)}else{if(!m&&p.length>1&&h.nodeType===9&&!M&&n.match.ID.test(p[0])&&!n.match.ID.test(p[p.length-1])){v=k.find(p.shift(),h,M);h=v.expr?k.filter(v.expr,v.set)[0]:v.set[0]}if(h){v=m?{expr:p.pop(),set:z(m)}:k.find(p.pop(),p.length===1&&(p[0]==="~"||p[0]==="+")&&h.parentNode?h.parentNode:h,M);t=v.expr?k.filter(v.expr,v.set):v.set;if(p.length>0)y=z(t);else H=false;for(;p.length;){var D=p.pop();v=D;if(n.relative[D])v=p.pop();else D="";if(v==null)v=h;n.relative[D](y,v,M)}}else y=[]}y||(y=t);y||k.error(D||
g);if(j.call(y)==="[object Array]")if(H)if(h&&h.nodeType===1)for(g=0;y[g]!=null;g++){if(y[g]&&(y[g]===true||y[g].nodeType===1&&E(h,y[g])))l.push(t[g])}else for(g=0;y[g]!=null;g++)y[g]&&y[g].nodeType===1&&l.push(t[g]);else l.push.apply(l,y);else z(y,l);if(S){k(S,q,l,m);k.uniqueSort(l)}return l};k.uniqueSort=function(g){if(B){i=o;g.sort(B);if(i)for(var h=1;h<g.length;h++)g[h]===g[h-1]&&g.splice(h--,1)}return g};k.matches=function(g,h){return k(g,null,null,h)};k.find=function(g,h,l){var m,q;if(!g)return[];
for(var p=0,v=n.order.length;p<v;p++){var t=n.order[p];if(q=n.leftMatch[t].exec(g)){var y=q[1];q.splice(1,1);if(y.substr(y.length-1)!=="\\"){q[1]=(q[1]||"").replace(/\\/g,"");m=n.find[t](q,h,l);if(m!=null){g=g.replace(n.match[t],"");break}}}}m||(m=h.getElementsByTagName("*"));return{set:m,expr:g}};k.filter=function(g,h,l,m){for(var q=g,p=[],v=h,t,y,S=h&&h[0]&&x(h[0]);g&&h.length;){for(var H in n.filter)if((t=n.leftMatch[H].exec(g))!=null&&t[2]){var M=n.filter[H],I,D;D=t[1];y=false;t.splice(1,1);if(D.substr(D.length-
1)!=="\\"){if(v===p)p=[];if(n.preFilter[H])if(t=n.preFilter[H](t,v,l,p,m,S)){if(t===true)continue}else y=I=true;if(t)for(var U=0;(D=v[U])!=null;U++)if(D){I=M(D,t,U,v);var Ha=m^!!I;if(l&&I!=null)if(Ha)y=true;else v[U]=false;else if(Ha){p.push(D);y=true}}if(I!==w){l||(v=p);g=g.replace(n.match[H],"");if(!y)return[];break}}}if(g===q)if(y==null)k.error(g);else break;q=g}return v};k.error=function(g){throw"Syntax error, unrecognized expression: "+g;};var n=k.selectors={order:["ID","NAME","TAG"],match:{ID:/#((?:[\w\u00c0-\uFFFF-]|\\.)+)/,
CLASS:/\.((?:[\w\u00c0-\uFFFF-]|\\.)+)/,NAME:/\[name=['"]*((?:[\w\u00c0-\uFFFF-]|\\.)+)['"]*\]/,ATTR:/\[\s*((?:[\w\u00c0-\uFFFF-]|\\.)+)\s*(?:(\S?=)\s*(['"]*)(.*?)\3|)\s*\]/,TAG:/^((?:[\w\u00c0-\uFFFF\*-]|\\.)+)/,CHILD:/:(only|nth|last|first)-child(?:\((even|odd|[\dn+-]*)\))?/,POS:/:(nth|eq|gt|lt|first|last|even|odd)(?:\((\d*)\))?(?=[^-]|$)/,PSEUDO:/:((?:[\w\u00c0-\uFFFF-]|\\.)+)(?:\((['"]?)((?:\([^\)]+\)|[^\(\)]*)+)\2\))?/},leftMatch:{},attrMap:{"class":"className","for":"htmlFor"},attrHandle:{href:function(g){return g.getAttribute("href")}},
relative:{"+":function(g,h){var l=typeof h==="string",m=l&&!/\W/.test(h);l=l&&!m;if(m)h=h.toLowerCase();m=0;for(var q=g.length,p;m<q;m++)if(p=g[m]){for(;(p=p.previousSibling)&&p.nodeType!==1;);g[m]=l||p&&p.nodeName.toLowerCase()===h?p||false:p===h}l&&k.filter(h,g,true)},">":function(g,h){var l=typeof h==="string";if(l&&!/\W/.test(h)){h=h.toLowerCase();for(var m=0,q=g.length;m<q;m++){var p=g[m];if(p){l=p.parentNode;g[m]=l.nodeName.toLowerCase()===h?l:false}}}else{m=0;for(q=g.length;m<q;m++)if(p=g[m])g[m]=
l?p.parentNode:p.parentNode===h;l&&k.filter(h,g,true)}},"":function(g,h,l){var m=e++,q=d;if(typeof h==="string"&&!/\W/.test(h)){var p=h=h.toLowerCase();q=b}q("parentNode",h,m,g,p,l)},"~":function(g,h,l){var m=e++,q=d;if(typeof h==="string"&&!/\W/.test(h)){var p=h=h.toLowerCase();q=b}q("previousSibling",h,m,g,p,l)}},find:{ID:function(g,h,l){if(typeof h.getElementById!=="undefined"&&!l)return(g=h.getElementById(g[1]))?[g]:[]},NAME:function(g,h){if(typeof h.getElementsByName!=="undefined"){var l=[];
h=h.getElementsByName(g[1]);for(var m=0,q=h.length;m<q;m++)h[m].getAttribute("name")===g[1]&&l.push(h[m]);return l.length===0?null:l}},TAG:function(g,h){return h.getElementsByTagName(g[1])}},preFilter:{CLASS:function(g,h,l,m,q,p){g=" "+g[1].replace(/\\/g,"")+" ";if(p)return g;p=0;for(var v;(v=h[p])!=null;p++)if(v)if(q^(v.className&&(" "+v.className+" ").replace(/[\t\n]/g," ").indexOf(g)>=0))l||m.push(v);else if(l)h[p]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()},
CHILD:function(g){if(g[1]==="nth"){var h=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=h[1]+(h[2]||1)-0;g[3]=h[3]-0}g[0]=e++;return g},ATTR:function(g,h,l,m,q,p){h=g[1].replace(/\\/g,"");if(!p&&n.attrMap[h])g[1]=n.attrMap[h];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,h,l,m,q){if(g[1]==="not")if((f.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=k(g[3],null,null,h);else{g=k.filter(g[3],h,l,true^q);l||m.push.apply(m,
g);return false}else if(n.match.POS.test(g[0])||n.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled===true},checked:function(g){return g.checked===true},selected:function(g){return g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,h,l){return!!k(l[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)},
text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"===g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}},
setFilters:{first:function(g,h){return h===0},last:function(g,h,l,m){return h===m.length-1},even:function(g,h){return h%2===0},odd:function(g,h){return h%2===1},lt:function(g,h,l){return h<l[3]-0},gt:function(g,h,l){return h>l[3]-0},nth:function(g,h,l){return l[3]-0===h},eq:function(g,h,l){return l[3]-0===h}},filter:{PSEUDO:function(g,h,l,m){var q=h[1],p=n.filters[q];if(p)return p(g,l,h,m);else if(q==="contains")return(g.textContent||g.innerText||a([g])||"").indexOf(h[3])>=0;else if(q==="not"){h=
h[3];l=0;for(m=h.length;l<m;l++)if(h[l]===g)return false;return true}else k.error("Syntax error, unrecognized expression: "+q)},CHILD:function(g,h){var l=h[1],m=g;switch(l){case "only":case "first":for(;m=m.previousSibling;)if(m.nodeType===1)return false;if(l==="first")return true;m=g;case "last":for(;m=m.nextSibling;)if(m.nodeType===1)return false;return true;case "nth":l=h[2];var q=h[3];if(l===1&&q===0)return true;h=h[0];var p=g.parentNode;if(p&&(p.sizcache!==h||!g.nodeIndex)){var v=0;for(m=p.firstChild;m;m=
m.nextSibling)if(m.nodeType===1)m.nodeIndex=++v;p.sizcache=h}g=g.nodeIndex-q;return l===0?g===0:g%l===0&&g/l>=0}},ID:function(g,h){return g.nodeType===1&&g.getAttribute("id")===h},TAG:function(g,h){return h==="*"&&g.nodeType===1||g.nodeName.toLowerCase()===h},CLASS:function(g,h){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(h)>-1},ATTR:function(g,h){var l=h[1];g=n.attrHandle[l]?n.attrHandle[l](g):g[l]!=null?g[l]:g.getAttribute(l);l=g+"";var m=h[2];h=h[4];return g==null?m==="!=":m===
"="?l===h:m==="*="?l.indexOf(h)>=0:m==="~="?(" "+l+" ").indexOf(h)>=0:!h?l&&g!==false:m==="!="?l!==h:m==="^="?l.indexOf(h)===0:m==="$="?l.substr(l.length-h.length)===h:m==="|="?l===h||l.substr(0,h.length+1)===h+"-":false},POS:function(g,h,l,m){var q=n.setFilters[h[2]];if(q)return q(g,l,h,m)}}},r=n.match.POS;for(var u in n.match){n.match[u]=new RegExp(n.match[u].source+/(?![^\[]*\])(?![^\(]*\))/.source);n.leftMatch[u]=new RegExp(/(^(?:.|\r|\n)*?)/.source+n.match[u].source.replace(/\\(\d+)/g,function(g,
h){return"\\"+(h-0+1)}))}var z=function(g,h){g=Array.prototype.slice.call(g,0);if(h){h.push.apply(h,g);return h}return g};try{Array.prototype.slice.call(s.documentElement.childNodes,0)}catch(C){z=function(g,h){h=h||[];if(j.call(g)==="[object Array]")Array.prototype.push.apply(h,g);else if(typeof g.length==="number")for(var l=0,m=g.length;l<m;l++)h.push(g[l]);else for(l=0;g[l];l++)h.push(g[l]);return h}}var B;if(s.documentElement.compareDocumentPosition)B=function(g,h){if(!g.compareDocumentPosition||
!h.compareDocumentPosition){if(g==h)i=true;return g.compareDocumentPosition?-1:1}g=g.compareDocumentPosition(h)&4?-1:g===h?0:1;if(g===0)i=true;return g};else if("sourceIndex"in s.documentElement)B=function(g,h){if(!g.sourceIndex||!h.sourceIndex){if(g==h)i=true;return g.sourceIndex?-1:1}g=g.sourceIndex-h.sourceIndex;if(g===0)i=true;return g};else if(s.createRange)B=function(g,h){if(!g.ownerDocument||!h.ownerDocument){if(g==h)i=true;return g.ownerDocument?-1:1}var l=g.ownerDocument.createRange(),m=
h.ownerDocument.createRange();l.setStart(g,0);l.setEnd(g,0);m.setStart(h,0);m.setEnd(h,0);g=l.compareBoundaryPoints(Range.START_TO_END,m);if(g===0)i=true;return g};(function(){var g=s.createElement("div"),h="script"+(new Date).getTime();g.innerHTML="<a name='"+h+"'/>";var l=s.documentElement;l.insertBefore(g,l.firstChild);if(s.getElementById(h)){n.find.ID=function(m,q,p){if(typeof q.getElementById!=="undefined"&&!p)return(q=q.getElementById(m[1]))?q.id===m[1]||typeof q.getAttributeNode!=="undefined"&&
q.getAttributeNode("id").nodeValue===m[1]?[q]:w:[]};n.filter.ID=function(m,q){var p=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&p&&p.nodeValue===q}}l.removeChild(g);l=g=null})();(function(){var g=s.createElement("div");g.appendChild(s.createComment(""));if(g.getElementsByTagName("*").length>0)n.find.TAG=function(h,l){l=l.getElementsByTagName(h[1]);if(h[1]==="*"){h=[];for(var m=0;l[m];m++)l[m].nodeType===1&&h.push(l[m]);l=h}return l};g.innerHTML="<a href='#'></a>";
if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")n.attrHandle.href=function(h){return h.getAttribute("href",2)};g=null})();s.querySelectorAll&&function(){var g=k,h=s.createElement("div");h.innerHTML="<p class='TEST'></p>";if(!(h.querySelectorAll&&h.querySelectorAll(".TEST").length===0)){k=function(m,q,p,v){q=q||s;if(!v&&q.nodeType===9&&!x(q))try{return z(q.querySelectorAll(m),p)}catch(t){}return g(m,q,p,v)};for(var l in g)k[l]=g[l];h=null}}();
(function(){var g=s.createElement("div");g.innerHTML="<div class='test e'></div><div class='test'></div>";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){n.order.splice(1,0,"CLASS");n.find.CLASS=function(h,l,m){if(typeof l.getElementsByClassName!=="undefined"&&!m)return l.getElementsByClassName(h[1])};g=null}}})();var E=s.compareDocumentPosition?function(g,h){return!!(g.compareDocumentPosition(h)&16)}:
function(g,h){return g!==h&&(g.contains?g.contains(h):true)},x=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false},ga=function(g,h){var l=[],m="",q;for(h=h.nodeType?[h]:h;q=n.match.PSEUDO.exec(g);){m+=q[0];g=g.replace(n.match.PSEUDO,"")}g=n.relative[g]?g+"*":g;q=0;for(var p=h.length;q<p;q++)k(g,h[q],l);return k.filter(m,l)};c.find=k;c.expr=k.selectors;c.expr[":"]=c.expr.filters;c.unique=k.uniqueSort;c.text=a;c.isXMLDoc=x;c.contains=E})();var eb=/Until$/,fb=/^(?:parents|prevUntil|prevAll)/,
gb=/,/;R=Array.prototype.slice;var Ia=function(a,b,d){if(c.isFunction(b))return c.grep(a,function(e,j){return!!b.call(e,j,e)===d});else if(b.nodeType)return c.grep(a,function(e){return e===b===d});else if(typeof b==="string"){var f=c.grep(a,function(e){return e.nodeType===1});if(Ua.test(b))return c.filter(b,f,!d);else b=c.filter(b,f)}return c.grep(a,function(e){return c.inArray(e,b)>=0===d})};c.fn.extend({find:function(a){for(var b=this.pushStack("","find",a),d=0,f=0,e=this.length;f<e;f++){d=b.length;
c.find(a,this[f],b);if(f>0)for(var j=d;j<b.length;j++)for(var i=0;i<d;i++)if(b[i]===b[j]){b.splice(j--,1);break}}return b},has:function(a){var b=c(a);return this.filter(function(){for(var d=0,f=b.length;d<f;d++)if(c.contains(this,b[d]))return true})},not:function(a){return this.pushStack(Ia(this,a,false),"not",a)},filter:function(a){return this.pushStack(Ia(this,a,true),"filter",a)},is:function(a){return!!a&&c.filter(a,this).length>0},closest:function(a,b){if(c.isArray(a)){var d=[],f=this[0],e,j=
{},i;if(f&&a.length){e=0;for(var o=a.length;e<o;e++){i=a[e];j[i]||(j[i]=c.expr.match.POS.test(i)?c(i,b||this.context):i)}for(;f&&f.ownerDocument&&f!==b;){for(i in j){e=j[i];if(e.jquery?e.index(f)>-1:c(f).is(e)){d.push({selector:i,elem:f});delete j[i]}}f=f.parentNode}}return d}var k=c.expr.match.POS.test(a)?c(a,b||this.context):null;return this.map(function(n,r){for(;r&&r.ownerDocument&&r!==b;){if(k?k.index(r)>-1:c(r).is(a))return r;r=r.parentNode}return null})},index:function(a){if(!a||typeof a===
"string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){a=typeof a==="string"?c(a,b||this.context):c.makeArray(a);b=c.merge(this.get(),a);return this.pushStack(qa(a[0])||qa(b[0])?b:c.unique(b))},andSelf:function(){return this.add(this.prevObject)}});c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode",
d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a,2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")?
a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a,b){c.fn[a]=function(d,f){var e=c.map(this,b,d);eb.test(a)||(f=d);if(f&&typeof f==="string")e=c.filter(f,e);e=this.length>1?c.unique(e):e;if((this.length>1||gb.test(f))&&fb.test(a))e=e.reverse();return this.pushStack(e,a,R.call(arguments).join(","))}});c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return c.find.matches(a,b)},dir:function(a,b,d){var f=[];for(a=a[b];a&&a.nodeType!==9&&(d===w||a.nodeType!==1||!c(a).is(d));){a.nodeType===
1&&f.push(a);a=a[b]}return f},nth:function(a,b,d){b=b||1;for(var f=0;a;a=a[d])if(a.nodeType===1&&++f===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var Ja=/ jQuery\d+="(?:\d+|null)"/g,V=/^\s+/,Ka=/(<([\w:]+)[^>]*?)\/>/g,hb=/^(?:area|br|col|embed|hr|img|input|link|meta|param)$/i,La=/<([\w:]+)/,ib=/<tbody/i,jb=/<|&#?\w+;/,ta=/<script|<object|<embed|<option|<style/i,ua=/checked\s*(?:[^=]|=\s*.checked.)/i,Ma=function(a,b,d){return hb.test(d)?
a:b+"></"+d+">"},F={option:[1,"<select multiple='multiple'>","</select>"],legend:[1,"<fieldset>","</fieldset>"],thead:[1,"<table>","</table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],col:[2,"<table><tbody></tbody><colgroup>","</colgroup></table>"],area:[1,"<map>","</map>"],_default:[0,"",""]};F.optgroup=F.option;F.tbody=F.tfoot=F.colgroup=F.caption=F.thead;F.th=F.td;if(!c.support.htmlSerialize)F._default=[1,"div<div>","</div>"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d=
c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==w)return this.empty().append((this[0]&&this[0].ownerDocument||s).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this},
wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})},
prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,
this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,f;(f=this[d])!=null;d++)if(!a||c.filter(a,[f]).length){if(!b&&f.nodeType===1){c.cleanData(f.getElementsByTagName("*"));c.cleanData([f])}f.parentNode&&f.parentNode.removeChild(f)}return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild);
return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,f=this.ownerDocument;if(!d){d=f.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(Ja,"").replace(/=([^="'>\s]+\/)>/g,'="$1">').replace(V,"")],f)[0]}else return this.cloneNode(true)});if(a===true){ra(this,b);ra(this.find("*"),b.find("*"))}return b},html:function(a){if(a===w)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(Ja,
""):null;else if(typeof a==="string"&&!ta.test(a)&&(c.support.leadingWhitespace||!V.test(a))&&!F[(La.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(Ka,Ma);try{for(var b=0,d=this.length;b<d;b++)if(this[b].nodeType===1){c.cleanData(this[b].getElementsByTagName("*"));this[b].innerHTML=a}}catch(f){this.empty().append(a)}}else c.isFunction(a)?this.each(function(e){var j=c(this),i=j.html();j.empty().append(function(){return a.call(this,e,i)})}):this.empty().append(a);return this},replaceWith:function(a){if(this[0]&&
this[0].parentNode){if(c.isFunction(a))return this.each(function(b){var d=c(this),f=d.html();d.replaceWith(a.call(this,b,f))});if(typeof a!=="string")a=c(a).detach();return this.each(function(){var b=this.nextSibling,d=this.parentNode;c(this).remove();b?c(b).before(a):c(d).append(a)})}else return this.pushStack(c(c.isFunction(a)?a():a),"replaceWith",a)},detach:function(a){return this.remove(a,true)},domManip:function(a,b,d){function f(u){return c.nodeName(u,"table")?u.getElementsByTagName("tbody")[0]||
u.appendChild(u.ownerDocument.createElement("tbody")):u}var e,j,i=a[0],o=[],k;if(!c.support.checkClone&&arguments.length===3&&typeof i==="string"&&ua.test(i))return this.each(function(){c(this).domManip(a,b,d,true)});if(c.isFunction(i))return this.each(function(u){var z=c(this);a[0]=i.call(this,u,b?z.html():w);z.domManip(a,b,d)});if(this[0]){e=i&&i.parentNode;e=c.support.parentNode&&e&&e.nodeType===11&&e.childNodes.length===this.length?{fragment:e}:sa(a,this,o);k=e.fragment;if(j=k.childNodes.length===
1?(k=k.firstChild):k.firstChild){b=b&&c.nodeName(j,"tr");for(var n=0,r=this.length;n<r;n++)d.call(b?f(this[n],j):this[n],n>0||e.cacheable||this.length>1?k.cloneNode(true):k)}o.length&&c.each(o,Qa)}return this}});c.fragments={};c.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var f=[];d=c(d);var e=this.length===1&&this[0].parentNode;if(e&&e.nodeType===11&&e.childNodes.length===1&&d.length===1){d[b](this[0]);
return this}else{e=0;for(var j=d.length;e<j;e++){var i=(e>0?this.clone(true):this).get();c.fn[b].apply(c(d[e]),i);f=f.concat(i)}return this.pushStack(f,a,d.selector)}}});c.extend({clean:function(a,b,d,f){b=b||s;if(typeof b.createElement==="undefined")b=b.ownerDocument||b[0]&&b[0].ownerDocument||s;for(var e=[],j=0,i;(i=a[j])!=null;j++){if(typeof i==="number")i+="";if(i){if(typeof i==="string"&&!jb.test(i))i=b.createTextNode(i);else if(typeof i==="string"){i=i.replace(Ka,Ma);var o=(La.exec(i)||["",
""])[1].toLowerCase(),k=F[o]||F._default,n=k[0],r=b.createElement("div");for(r.innerHTML=k[1]+i+k[2];n--;)r=r.lastChild;if(!c.support.tbody){n=ib.test(i);o=o==="table"&&!n?r.firstChild&&r.firstChild.childNodes:k[1]==="<table>"&&!n?r.childNodes:[];for(k=o.length-1;k>=0;--k)c.nodeName(o[k],"tbody")&&!o[k].childNodes.length&&o[k].parentNode.removeChild(o[k])}!c.support.leadingWhitespace&&V.test(i)&&r.insertBefore(b.createTextNode(V.exec(i)[0]),r.firstChild);i=r.childNodes}if(i.nodeType)e.push(i);else e=
c.merge(e,i)}}if(d)for(j=0;e[j];j++)if(f&&c.nodeName(e[j],"script")&&(!e[j].type||e[j].type.toLowerCase()==="text/javascript"))f.push(e[j].parentNode?e[j].parentNode.removeChild(e[j]):e[j]);else{e[j].nodeType===1&&e.splice.apply(e,[j+1,0].concat(c.makeArray(e[j].getElementsByTagName("script"))));d.appendChild(e[j])}return e},cleanData:function(a){for(var b,d,f=c.cache,e=c.event.special,j=c.support.deleteExpando,i=0,o;(o=a[i])!=null;i++)if(d=o[c.expando]){b=f[d];if(b.events)for(var k in b.events)e[k]?
c.event.remove(o,k):Ca(o,k,b.handle);if(j)delete o[c.expando];else o.removeAttribute&&o.removeAttribute(c.expando);delete f[d]}}});var kb=/z-?index|font-?weight|opacity|zoom|line-?height/i,Na=/alpha\([^)]*\)/,Oa=/opacity=([^)]*)/,ha=/float/i,ia=/-([a-z])/ig,lb=/([A-Z])/g,mb=/^-?\d+(?:px)?$/i,nb=/^-?\d/,ob={position:"absolute",visibility:"hidden",display:"block"},pb=["Left","Right"],qb=["Top","Bottom"],rb=s.defaultView&&s.defaultView.getComputedStyle,Pa=c.support.cssFloat?"cssFloat":"styleFloat",ja=
function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){return X(this,a,b,true,function(d,f,e){if(e===w)return c.curCSS(d,f);if(typeof e==="number"&&!kb.test(f))e+="px";c.style(d,f,e)})};c.extend({style:function(a,b,d){if(!a||a.nodeType===3||a.nodeType===8)return w;if((b==="width"||b==="height")&&parseFloat(d)<0)d=w;var f=a.style||a,e=d!==w;if(!c.support.opacity&&b==="opacity"){if(e){f.zoom=1;b=parseInt(d,10)+""==="NaN"?"":"alpha(opacity="+d*100+")";a=f.filter||c.curCSS(a,"filter")||"";f.filter=
Na.test(a)?a.replace(Na,b):b}return f.filter&&f.filter.indexOf("opacity=")>=0?parseFloat(Oa.exec(f.filter)[1])/100+"":""}if(ha.test(b))b=Pa;b=b.replace(ia,ja);if(e)f[b]=d;return f[b]},css:function(a,b,d,f){if(b==="width"||b==="height"){var e,j=b==="width"?pb:qb;function i(){e=b==="width"?a.offsetWidth:a.offsetHeight;f!=="border"&&c.each(j,function(){f||(e-=parseFloat(c.curCSS(a,"padding"+this,true))||0);if(f==="margin")e+=parseFloat(c.curCSS(a,"margin"+this,true))||0;else e-=parseFloat(c.curCSS(a,
"border"+this+"Width",true))||0})}a.offsetWidth!==0?i():c.swap(a,ob,i);return Math.max(0,Math.round(e))}return c.curCSS(a,b,d)},curCSS:function(a,b,d){var f,e=a.style;if(!c.support.opacity&&b==="opacity"&&a.currentStyle){f=Oa.test(a.currentStyle.filter||"")?parseFloat(RegExp.$1)/100+"":"";return f===""?"1":f}if(ha.test(b))b=Pa;if(!d&&e&&e[b])f=e[b];else if(rb){if(ha.test(b))b="float";b=b.replace(lb,"-$1").toLowerCase();e=a.ownerDocument.defaultView;if(!e)return null;if(a=e.getComputedStyle(a,null))f=
a.getPropertyValue(b);if(b==="opacity"&&f==="")f="1"}else if(a.currentStyle){d=b.replace(ia,ja);f=a.currentStyle[b]||a.currentStyle[d];if(!mb.test(f)&&nb.test(f)){b=e.left;var j=a.runtimeStyle.left;a.runtimeStyle.left=a.currentStyle.left;e.left=d==="fontSize"?"1em":f||0;f=e.pixelLeft+"px";e.left=b;a.runtimeStyle.left=j}}return f},swap:function(a,b,d){var f={};for(var e in b){f[e]=a.style[e];a.style[e]=b[e]}d.call(a);for(e in b)a.style[e]=f[e]}});if(c.expr&&c.expr.filters){c.expr.filters.hidden=function(a){var b=
a.offsetWidth,d=a.offsetHeight,f=a.nodeName.toLowerCase()==="tr";return b===0&&d===0&&!f?true:b>0&&d>0&&!f?false:c.curCSS(a,"display")==="none"};c.expr.filters.visible=function(a){return!c.expr.filters.hidden(a)}}var sb=J(),tb=/<script(.|\s)*?\/script>/gi,ub=/select|textarea/i,vb=/color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week/i,N=/=\?(&|$)/,ka=/\?/,wb=/(\?|&)_=.*?(&|$)/,xb=/^(\w+:)?\/\/([^\/?#]+)/,yb=/%20/g,zb=c.fn.load;c.fn.extend({load:function(a,b,d){if(typeof a!==
"string")return zb.call(this,a);else if(!this.length)return this;var f=a.indexOf(" ");if(f>=0){var e=a.slice(f,a.length);a=a.slice(0,f)}f="GET";if(b)if(c.isFunction(b)){d=b;b=null}else if(typeof b==="object"){b=c.param(b,c.ajaxSettings.traditional);f="POST"}var j=this;c.ajax({url:a,type:f,dataType:"html",data:b,complete:function(i,o){if(o==="success"||o==="notmodified")j.html(e?c("<div />").append(i.responseText.replace(tb,"")).find(e):i.responseText);d&&j.each(d,[i.responseText,o,i])}});return this},
serialize:function(){return c.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?c.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||ub.test(this.nodeName)||vb.test(this.type))}).map(function(a,b){a=c(this).val();return a==null?null:c.isArray(a)?c.map(a,function(d){return{name:b.name,value:d}}):{name:b.name,value:a}}).get()}});c.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),
function(a,b){c.fn[b]=function(d){return this.bind(b,d)}});c.extend({get:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b=null}return c.ajax({type:"GET",url:a,data:b,success:d,dataType:f})},getScript:function(a,b){return c.get(a,null,b,"script")},getJSON:function(a,b,d){return c.get(a,b,d,"json")},post:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b={}}return c.ajax({type:"POST",url:a,data:b,success:d,dataType:f})},ajaxSetup:function(a){c.extend(c.ajaxSettings,a)},ajaxSettings:{url:location.href,
global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,xhr:A.XMLHttpRequest&&(A.location.protocol!=="file:"||!A.ActiveXObject)?function(){return new A.XMLHttpRequest}:function(){try{return new A.ActiveXObject("Microsoft.XMLHTTP")}catch(a){}},accepts:{xml:"application/xml, text/xml",html:"text/html",script:"text/javascript, application/javascript",json:"application/json, text/javascript",text:"text/plain",_default:"*/*"}},lastModified:{},etag:{},ajax:function(a){function b(){e.success&&
e.success.call(k,o,i,x);e.global&&f("ajaxSuccess",[x,e])}function d(){e.complete&&e.complete.call(k,x,i);e.global&&f("ajaxComplete",[x,e]);e.global&&!--c.active&&c.event.trigger("ajaxStop")}function f(q,p){(e.context?c(e.context):c.event).trigger(q,p)}var e=c.extend(true,{},c.ajaxSettings,a),j,i,o,k=a&&a.context||e,n=e.type.toUpperCase();if(e.data&&e.processData&&typeof e.data!=="string")e.data=c.param(e.data,e.traditional);if(e.dataType==="jsonp"){if(n==="GET")N.test(e.url)||(e.url+=(ka.test(e.url)?
"&":"?")+(e.jsonp||"callback")+"=?");else if(!e.data||!N.test(e.data))e.data=(e.data?e.data+"&":"")+(e.jsonp||"callback")+"=?";e.dataType="json"}if(e.dataType==="json"&&(e.data&&N.test(e.data)||N.test(e.url))){j=e.jsonpCallback||"jsonp"+sb++;if(e.data)e.data=(e.data+"").replace(N,"="+j+"$1");e.url=e.url.replace(N,"="+j+"$1");e.dataType="script";A[j]=A[j]||function(q){o=q;b();d();A[j]=w;try{delete A[j]}catch(p){}z&&z.removeChild(C)}}if(e.dataType==="script"&&e.cache===null)e.cache=false;if(e.cache===
false&&n==="GET"){var r=J(),u=e.url.replace(wb,"$1_="+r+"$2");e.url=u+(u===e.url?(ka.test(e.url)?"&":"?")+"_="+r:"")}if(e.data&&n==="GET")e.url+=(ka.test(e.url)?"&":"?")+e.data;e.global&&!c.active++&&c.event.trigger("ajaxStart");r=(r=xb.exec(e.url))&&(r[1]&&r[1]!==location.protocol||r[2]!==location.host);if(e.dataType==="script"&&n==="GET"&&r){var z=s.getElementsByTagName("head")[0]||s.documentElement,C=s.createElement("script");C.src=e.url;if(e.scriptCharset)C.charset=e.scriptCharset;if(!j){var B=
false;C.onload=C.onreadystatechange=function(){if(!B&&(!this.readyState||this.readyState==="loaded"||this.readyState==="complete")){B=true;b();d();C.onload=C.onreadystatechange=null;z&&C.parentNode&&z.removeChild(C)}}}z.insertBefore(C,z.firstChild);return w}var E=false,x=e.xhr();if(x){e.username?x.open(n,e.url,e.async,e.username,e.password):x.open(n,e.url,e.async);try{if(e.data||a&&a.contentType)x.setRequestHeader("Content-Type",e.contentType);if(e.ifModified){c.lastModified[e.url]&&x.setRequestHeader("If-Modified-Since",
c.lastModified[e.url]);c.etag[e.url]&&x.setRequestHeader("If-None-Match",c.etag[e.url])}r||x.setRequestHeader("X-Requested-With","XMLHttpRequest");x.setRequestHeader("Accept",e.dataType&&e.accepts[e.dataType]?e.accepts[e.dataType]+", */*":e.accepts._default)}catch(ga){}if(e.beforeSend&&e.beforeSend.call(k,x,e)===false){e.global&&!--c.active&&c.event.trigger("ajaxStop");x.abort();return false}e.global&&f("ajaxSend",[x,e]);var g=x.onreadystatechange=function(q){if(!x||x.readyState===0||q==="abort"){E||
d();E=true;if(x)x.onreadystatechange=c.noop}else if(!E&&x&&(x.readyState===4||q==="timeout")){E=true;x.onreadystatechange=c.noop;i=q==="timeout"?"timeout":!c.httpSuccess(x)?"error":e.ifModified&&c.httpNotModified(x,e.url)?"notmodified":"success";var p;if(i==="success")try{o=c.httpData(x,e.dataType,e)}catch(v){i="parsererror";p=v}if(i==="success"||i==="notmodified")j||b();else c.handleError(e,x,i,p);d();q==="timeout"&&x.abort();if(e.async)x=null}};try{var h=x.abort;x.abort=function(){x&&h.call(x);
g("abort")}}catch(l){}e.async&&e.timeout>0&&setTimeout(function(){x&&!E&&g("timeout")},e.timeout);try{x.send(n==="POST"||n==="PUT"||n==="DELETE"?e.data:null)}catch(m){c.handleError(e,x,null,m);d()}e.async||g();return x}},handleError:function(a,b,d,f){if(a.error)a.error.call(a.context||a,b,d,f);if(a.global)(a.context?c(a.context):c.event).trigger("ajaxError",[b,a,f])},active:0,httpSuccess:function(a){try{return!a.status&&location.protocol==="file:"||a.status>=200&&a.status<300||a.status===304||a.status===
1223||a.status===0}catch(b){}return false},httpNotModified:function(a,b){var d=a.getResponseHeader("Last-Modified"),f=a.getResponseHeader("Etag");if(d)c.lastModified[b]=d;if(f)c.etag[b]=f;return a.status===304||a.status===0},httpData:function(a,b,d){var f=a.getResponseHeader("content-type")||"",e=b==="xml"||!b&&f.indexOf("xml")>=0;a=e?a.responseXML:a.responseText;e&&a.documentElement.nodeName==="parsererror"&&c.error("parsererror");if(d&&d.dataFilter)a=d.dataFilter(a,b);if(typeof a==="string")if(b===
"json"||!b&&f.indexOf("json")>=0)a=c.parseJSON(a);else if(b==="script"||!b&&f.indexOf("javascript")>=0)c.globalEval(a);return a},param:function(a,b){function d(i,o){if(c.isArray(o))c.each(o,function(k,n){b||/\[\]$/.test(i)?f(i,n):d(i+"["+(typeof n==="object"||c.isArray(n)?k:"")+"]",n)});else!b&&o!=null&&typeof o==="object"?c.each(o,function(k,n){d(i+"["+k+"]",n)}):f(i,o)}function f(i,o){o=c.isFunction(o)?o():o;e[e.length]=encodeURIComponent(i)+"="+encodeURIComponent(o)}var e=[];if(b===w)b=c.ajaxSettings.traditional;
if(c.isArray(a)||a.jquery)c.each(a,function(){f(this.name,this.value)});else for(var j in a)d(j,a[j]);return e.join("&").replace(yb,"+")}});var la={},Ab=/toggle|show|hide/,Bb=/^([+-]=)?([\d+-.]+)(.*)$/,W,va=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];c.fn.extend({show:function(a,b){if(a||a===0)return this.animate(K("show",3),a,b);else{a=0;for(b=this.length;a<b;a++){var d=c.data(this[a],"olddisplay");
this[a].style.display=d||"";if(c.css(this[a],"display")==="none"){d=this[a].nodeName;var f;if(la[d])f=la[d];else{var e=c("<"+d+" />").appendTo("body");f=e.css("display");if(f==="none")f="block";e.remove();la[d]=f}c.data(this[a],"olddisplay",f)}}a=0;for(b=this.length;a<b;a++)this[a].style.display=c.data(this[a],"olddisplay")||"";return this}},hide:function(a,b){if(a||a===0)return this.animate(K("hide",3),a,b);else{a=0;for(b=this.length;a<b;a++){var d=c.data(this[a],"olddisplay");!d&&d!=="none"&&c.data(this[a],
"olddisplay",c.css(this[a],"display"))}a=0;for(b=this.length;a<b;a++)this[a].style.display="none";return this}},_toggle:c.fn.toggle,toggle:function(a,b){var d=typeof a==="boolean";if(c.isFunction(a)&&c.isFunction(b))this._toggle.apply(this,arguments);else a==null||d?this.each(function(){var f=d?a:c(this).is(":hidden");c(this)[f?"show":"hide"]()}):this.animate(K("toggle",3),a,b);return this},fadeTo:function(a,b,d){return this.filter(":hidden").css("opacity",0).show().end().animate({opacity:b},a,d)},
animate:function(a,b,d,f){var e=c.speed(b,d,f);if(c.isEmptyObject(a))return this.each(e.complete);return this[e.queue===false?"each":"queue"](function(){var j=c.extend({},e),i,o=this.nodeType===1&&c(this).is(":hidden"),k=this;for(i in a){var n=i.replace(ia,ja);if(i!==n){a[n]=a[i];delete a[i];i=n}if(a[i]==="hide"&&o||a[i]==="show"&&!o)return j.complete.call(this);if((i==="height"||i==="width")&&this.style){j.display=c.css(this,"display");j.overflow=this.style.overflow}if(c.isArray(a[i])){(j.specialEasing=
j.specialEasing||{})[i]=a[i][1];a[i]=a[i][0]}}if(j.overflow!=null)this.style.overflow="hidden";j.curAnim=c.extend({},a);c.each(a,function(r,u){var z=new c.fx(k,j,r);if(Ab.test(u))z[u==="toggle"?o?"show":"hide":u](a);else{var C=Bb.exec(u),B=z.cur(true)||0;if(C){u=parseFloat(C[2]);var E=C[3]||"px";if(E!=="px"){k.style[r]=(u||1)+E;B=(u||1)/z.cur(true)*B;k.style[r]=B+E}if(C[1])u=(C[1]==="-="?-1:1)*u+B;z.custom(B,u,E)}else z.custom(B,u,"")}});return true})},stop:function(a,b){var d=c.timers;a&&this.queue([]);
this.each(function(){for(var f=d.length-1;f>=0;f--)if(d[f].elem===this){b&&d[f](true);d.splice(f,1)}});b||this.dequeue();return this}});c.each({slideDown:K("show",1),slideUp:K("hide",1),slideToggle:K("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"}},function(a,b){c.fn[a]=function(d,f){return this.animate(b,d,f)}});c.extend({speed:function(a,b,d){var f=a&&typeof a==="object"?a:{complete:d||!d&&b||c.isFunction(a)&&a,duration:a,easing:d&&b||b&&!c.isFunction(b)&&b};f.duration=c.fx.off?0:typeof f.duration===
"number"?f.duration:c.fx.speeds[f.duration]||c.fx.speeds._default;f.old=f.complete;f.complete=function(){f.queue!==false&&c(this).dequeue();c.isFunction(f.old)&&f.old.call(this)};return f},easing:{linear:function(a,b,d,f){return d+f*a},swing:function(a,b,d,f){return(-Math.cos(a*Math.PI)/2+0.5)*f+d}},timers:[],fx:function(a,b,d){this.options=b;this.elem=a;this.prop=d;if(!b.orig)b.orig={}}});c.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this);(c.fx.step[this.prop]||
c.fx.step._default)(this);if((this.prop==="height"||this.prop==="width")&&this.elem.style)this.elem.style.display="block"},cur:function(a){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];return(a=parseFloat(c.css(this.elem,this.prop,a)))&&a>-10000?a:parseFloat(c.curCSS(this.elem,this.prop))||0},custom:function(a,b,d){function f(j){return e.step(j)}this.startTime=J();this.start=a;this.end=b;this.unit=d||this.unit||"px";this.now=this.start;
this.pos=this.state=0;var e=this;f.elem=this.elem;if(f()&&c.timers.push(f)&&!W)W=setInterval(c.fx.tick,13)},show:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.show=true;this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur());c(this.elem).show()},hide:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.hide=true;this.custom(this.cur(),0)},step:function(a){var b=J(),d=true;if(a||b>=this.options.duration+this.startTime){this.now=
this.end;this.pos=this.state=1;this.update();this.options.curAnim[this.prop]=true;for(var f in this.options.curAnim)if(this.options.curAnim[f]!==true)d=false;if(d){if(this.options.display!=null){this.elem.style.overflow=this.options.overflow;a=c.data(this.elem,"olddisplay");this.elem.style.display=a?a:this.options.display;if(c.css(this.elem,"display")==="none")this.elem.style.display="block"}this.options.hide&&c(this.elem).hide();if(this.options.hide||this.options.show)for(var e in this.options.curAnim)c.style(this.elem,
e,this.options.orig[e]);this.options.complete.call(this.elem)}return false}else{e=b-this.startTime;this.state=e/this.options.duration;a=this.options.easing||(c.easing.swing?"swing":"linear");this.pos=c.easing[this.options.specialEasing&&this.options.specialEasing[this.prop]||a](this.state,e,0,1,this.options.duration);this.now=this.start+(this.end-this.start)*this.pos;this.update()}return true}};c.extend(c.fx,{tick:function(){for(var a=c.timers,b=0;b<a.length;b++)a[b]()||a.splice(b--,1);a.length||
c.fx.stop()},stop:function(){clearInterval(W);W=null},speeds:{slow:600,fast:200,_default:400},step:{opacity:function(a){c.style(a.elem,"opacity",a.now)},_default:function(a){if(a.elem.style&&a.elem.style[a.prop]!=null)a.elem.style[a.prop]=(a.prop==="width"||a.prop==="height"?Math.max(0,a.now):a.now)+a.unit;else a.elem[a.prop]=a.now}}});if(c.expr&&c.expr.filters)c.expr.filters.animated=function(a){return c.grep(c.timers,function(b){return a===b.elem}).length};c.fn.offset="getBoundingClientRect"in s.documentElement?
function(a){var b=this[0];if(a)return this.each(function(e){c.offset.setOffset(this,a,e)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);var d=b.getBoundingClientRect(),f=b.ownerDocument;b=f.body;f=f.documentElement;return{top:d.top+(self.pageYOffset||c.support.boxModel&&f.scrollTop||b.scrollTop)-(f.clientTop||b.clientTop||0),left:d.left+(self.pageXOffset||c.support.boxModel&&f.scrollLeft||b.scrollLeft)-(f.clientLeft||b.clientLeft||0)}}:function(a){var b=
this[0];if(a)return this.each(function(r){c.offset.setOffset(this,a,r)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);c.offset.initialize();var d=b.offsetParent,f=b,e=b.ownerDocument,j,i=e.documentElement,o=e.body;f=(e=e.defaultView)?e.getComputedStyle(b,null):b.currentStyle;for(var k=b.offsetTop,n=b.offsetLeft;(b=b.parentNode)&&b!==o&&b!==i;){if(c.offset.supportsFixedPosition&&f.position==="fixed")break;j=e?e.getComputedStyle(b,null):b.currentStyle;
k-=b.scrollTop;n-=b.scrollLeft;if(b===d){k+=b.offsetTop;n+=b.offsetLeft;if(c.offset.doesNotAddBorder&&!(c.offset.doesAddBorderForTableAndCells&&/^t(able|d|h)$/i.test(b.nodeName))){k+=parseFloat(j.borderTopWidth)||0;n+=parseFloat(j.borderLeftWidth)||0}f=d;d=b.offsetParent}if(c.offset.subtractsBorderForOverflowNotVisible&&j.overflow!=="visible"){k+=parseFloat(j.borderTopWidth)||0;n+=parseFloat(j.borderLeftWidth)||0}f=j}if(f.position==="relative"||f.position==="static"){k+=o.offsetTop;n+=o.offsetLeft}if(c.offset.supportsFixedPosition&&
f.position==="fixed"){k+=Math.max(i.scrollTop,o.scrollTop);n+=Math.max(i.scrollLeft,o.scrollLeft)}return{top:k,left:n}};c.offset={initialize:function(){var a=s.body,b=s.createElement("div"),d,f,e,j=parseFloat(c.curCSS(a,"marginTop",true))||0;c.extend(b.style,{position:"absolute",top:0,left:0,margin:0,border:0,width:"1px",height:"1px",visibility:"hidden"});b.innerHTML="<div style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;'><div></div></div><table style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;' cellpadding='0' cellspacing='0'><tr><td></td></tr></table>";
a.insertBefore(b,a.firstChild);d=b.firstChild;f=d.firstChild;e=d.nextSibling.firstChild.firstChild;this.doesNotAddBorder=f.offsetTop!==5;this.doesAddBorderForTableAndCells=e.offsetTop===5;f.style.position="fixed";f.style.top="20px";this.supportsFixedPosition=f.offsetTop===20||f.offsetTop===15;f.style.position=f.style.top="";d.style.overflow="hidden";d.style.position="relative";this.subtractsBorderForOverflowNotVisible=f.offsetTop===-5;this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==j;a.removeChild(b);
c.offset.initialize=c.noop},bodyOffset:function(a){var b=a.offsetTop,d=a.offsetLeft;c.offset.initialize();if(c.offset.doesNotIncludeMarginInBodyOffset){b+=parseFloat(c.curCSS(a,"marginTop",true))||0;d+=parseFloat(c.curCSS(a,"marginLeft",true))||0}return{top:b,left:d}},setOffset:function(a,b,d){if(/static/.test(c.curCSS(a,"position")))a.style.position="relative";var f=c(a),e=f.offset(),j=parseInt(c.curCSS(a,"top",true),10)||0,i=parseInt(c.curCSS(a,"left",true),10)||0;if(c.isFunction(b))b=b.call(a,
d,e);d={top:b.top-e.top+j,left:b.left-e.left+i};"using"in b?b.using.call(a,d):f.css(d)}};c.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),d=this.offset(),f=/^body|html$/i.test(b[0].nodeName)?{top:0,left:0}:b.offset();d.top-=parseFloat(c.curCSS(a,"marginTop",true))||0;d.left-=parseFloat(c.curCSS(a,"marginLeft",true))||0;f.top+=parseFloat(c.curCSS(b[0],"borderTopWidth",true))||0;f.left+=parseFloat(c.curCSS(b[0],"borderLeftWidth",true))||0;return{top:d.top-
f.top,left:d.left-f.left}},offsetParent:function(){return this.map(function(){for(var a=this.offsetParent||s.body;a&&!/^body|html$/i.test(a.nodeName)&&c.css(a,"position")==="static";)a=a.offsetParent;return a})}});c.each(["Left","Top"],function(a,b){var d="scroll"+b;c.fn[d]=function(f){var e=this[0],j;if(!e)return null;if(f!==w)return this.each(function(){if(j=wa(this))j.scrollTo(!a?f:c(j).scrollLeft(),a?f:c(j).scrollTop());else this[d]=f});else return(j=wa(e))?"pageXOffset"in j?j[a?"pageYOffset":
"pageXOffset"]:c.support.boxModel&&j.document.documentElement[d]||j.document.body[d]:e[d]}});c.each(["Height","Width"],function(a,b){var d=b.toLowerCase();c.fn["inner"+b]=function(){return this[0]?c.css(this[0],d,false,"padding"):null};c.fn["outer"+b]=function(f){return this[0]?c.css(this[0],d,false,f?"margin":"border"):null};c.fn[d]=function(f){var e=this[0];if(!e)return f==null?null:this;if(c.isFunction(f))return this.each(function(j){var i=c(this);i[d](f.call(this,j,i[d]()))});return"scrollTo"in
e&&e.document?e.document.compatMode==="CSS1Compat"&&e.document.documentElement["client"+b]||e.document.body["client"+b]:e.nodeType===9?Math.max(e.documentElement["client"+b],e.body["scroll"+b],e.documentElement["scroll"+b],e.body["offset"+b],e.documentElement["offset"+b]):f===w?c.css(e,d):this.css(d,typeof f==="string"?f:f+"px")}});A.jQuery=A.$=c})(window); | AChemKit | /AChemKit-0.3.0.tar.gz/AChemKit-0.3.0/doc/html/_static/jquery.js | jquery.js |
/**
 * select a different prefix for underscore
*/
$u = _.noConflict();
/**
 * console shim for browsers without an installed Firebug-like debugger;
 * the block below is kept commented out and serves only as a reference:
if (!window.console || !console.firebug) {
var names = ["log", "debug", "info", "warn", "error", "assert", "dir",
"dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace",
"profile", "profileEnd"];
window.console = {};
for (var i = 0; i < names.length; ++i)
window.console[names[i]] = function() {};
}
*/
/**
* small helper function to urldecode strings
*/
jQuery.urldecode = function(x) {
return decodeURIComponent(x).replace(/\+/g, ' ');
};
/**
* small helper function to urlencode strings
*/
jQuery.urlencode = encodeURIComponent;
/**
* This function returns the parsed url parameters of the
* current request. Multiple values per key are supported,
* it will always return arrays of strings for the value parts.
*/
jQuery.getQueryParameters = function(s) {
if (typeof s == 'undefined')
s = document.location.search;
var parts = s.substr(s.indexOf('?') + 1).split('&');
var result = {};
for (var i = 0; i < parts.length; i++) {
var tmp = parts[i].split('=', 2);
var key = jQuery.urldecode(tmp[0]);
var value = jQuery.urldecode(tmp[1]);
if (key in result)
result[key].push(value);
else
result[key] = [value];
}
return result;
};
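/**
 * Usage sketch (hypothetical query string, not taken from a real page):
 * for a location.search of "?highlight=foo+bar&highlight=baz&page=2",
 * jQuery.getQueryParameters() returns
 *
 *   {highlight: ['foo bar', 'baz'], page: ['2']}
 *
 * i.e. every value is an array of decoded strings, with '+' turned into
 * spaces by jQuery.urldecode above.
 */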
/**
* small function to check if an array contains
* a given item.
*/
jQuery.contains = function(arr, item) {
for (var i = 0; i < arr.length; i++) {
if (arr[i] == item)
return true;
}
return false;
};
/**
 * highlight a given string on a jQuery object by wrapping it in
* span elements with the given class name.
*/
jQuery.fn.highlightText = function(text, className) {
function highlight(node) {
if (node.nodeType == 3) {
var val = node.nodeValue;
var pos = val.toLowerCase().indexOf(text);
if (pos >= 0 && !jQuery(node.parentNode).hasClass(className)) {
var span = document.createElement("span");
span.className = className;
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
node.parentNode.insertBefore(span, node.parentNode.insertBefore(
document.createTextNode(val.substr(pos + text.length)),
node.nextSibling));
node.nodeValue = val.substr(0, pos);
}
}
else if (!jQuery(node).is("button, select, textarea")) {
jQuery.each(node.childNodes, function() {
highlight(this);
});
}
}
return this.each(function() {
highlight(this);
});
};
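/**
 * Usage sketch (hypothetical selector and term): wrap every occurrence of
 * "iterator" inside the main text column in a highlight span, assuming a
 * div.body container and a CSS rule for the "highlighted" class exist:
 *
 *   $('div.body').highlightText('iterator', 'highlighted');
 *
 * The term should already be lower-cased by the caller, since matching is
 * done against node.nodeValue.toLowerCase() only.
 */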
/**
* Small JavaScript module for the documentation.
*/
var Documentation = {
init : function() {
this.fixFirefoxAnchorBug();
this.highlightSearchWords();
this.initIndexTable();
},
/**
* i18n support
*/
TRANSLATIONS : {},
PLURAL_EXPR : function(n) { return n == 1 ? 0 : 1; },
LOCALE : 'unknown',
  // gettext and ngettext don't use 'this', so the functions
  // can safely be bound to a different name (_ = Documentation.gettext)
gettext : function(string) {
var translated = Documentation.TRANSLATIONS[string];
if (typeof translated == 'undefined')
return string;
return (typeof translated == 'string') ? translated : translated[0];
},
ngettext : function(singular, plural, n) {
var translated = Documentation.TRANSLATIONS[singular];
if (typeof translated == 'undefined')
return (n == 1) ? singular : plural;
    return translated[Documentation.PLURAL_EXPR(n)];
},
addTranslations : function(catalog) {
for (var key in catalog.messages)
this.TRANSLATIONS[key] = catalog.messages[key];
this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')');
this.LOCALE = catalog.locale;
},
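  /**
   * Sketch of the catalog shape addTranslations() expects (hypothetical
   * strings; real catalogs are generated by Sphinx):
   *
   *   Documentation.addTranslations({
   *     messages: {'Hide Search Matches': 'Suchtreffer ausblenden'},
   *     plural_expr: '(n != 1)',
   *     locale: 'de'
   *   });
   *
   * Afterwards _('Hide Search Matches') returns the translated string, and
   * ngettext() indexes array-valued messages via the compiled plural_expr.
   */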
/**
* add context elements like header anchor links
*/
addContextElements : function() {
$('div[id] > :header:first').each(function() {
$('<a class="headerlink">\u00B6</a>').
attr('href', '#' + this.id).
attr('title', _('Permalink to this headline')).
appendTo(this);
});
$('dt[id]').each(function() {
$('<a class="headerlink">\u00B6</a>').
attr('href', '#' + this.id).
attr('title', _('Permalink to this definition')).
appendTo(this);
});
},
/**
   * work around a Firefox bug: re-trigger the anchor jump shortly after load
*/
fixFirefoxAnchorBug : function() {
if (document.location.hash && $.browser.mozilla)
window.setTimeout(function() {
document.location.href += '';
}, 10);
},
/**
* highlight the search words provided in the url in the text
*/
highlightSearchWords : function() {
var params = $.getQueryParameters();
var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : [];
if (terms.length) {
var body = $('div.body');
window.setTimeout(function() {
$.each(terms, function() {
body.highlightText(this.toLowerCase(), 'highlighted');
});
}, 10);
$('<li class="highlight-link"><a href="javascript:Documentation.' +
'hideSearchWords()">' + _('Hide Search Matches') + '</a></li>')
.appendTo($('.sidebar .this-page-menu'));
}
},
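  /**
   * Sketch (hypothetical URL): opening a page as
   * "page.html?highlight=virtual+functions" splits the decoded parameter
   * into ['virtual', 'functions'], wraps each match found inside div.body
   * in <span class="highlighted">, and appends a "Hide Search Matches"
   * link to the sidebar menu.
   */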
/**
* init the domain index toggle buttons
*/
initIndexTable : function() {
var togglers = $('img.toggler').click(function() {
var src = $(this).attr('src');
var idnum = $(this).attr('id').substr(7);
$('tr.cg-' + idnum).toggle();
if (src.substr(-9) == 'minus.png')
$(this).attr('src', src.substr(0, src.length-9) + 'plus.png');
else
$(this).attr('src', src.substr(0, src.length-8) + 'minus.png');
}).css('display', '');
if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) {
togglers.click();
}
},
/**
* helper function to hide the search marks again
*/
hideSearchWords : function() {
$('.sidebar .this-page-menu li.highlight-link').fadeOut(300);
$('span.highlighted').removeClass('highlighted');
},
/**
* make the url absolute
*/
makeURL : function(relativeURL) {
return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL;
},
/**
* get the current relative url
*/
getCurrentURL : function() {
var path = document.location.pathname;
var parts = path.split(/\//);
$.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() {
if (this == '..')
parts.pop();
});
var url = parts.join('/');
return path.substring(url.lastIndexOf('/') + 1, path.length - 1);
}
};
// quick alias for translations
_ = Documentation.gettext;
$(document).ready(function() {
Documentation.init();
}); | AChemKit | /AChemKit-0.3.0.tar.gz/AChemKit-0.3.0/doc/html/_static/doctools.js | doctools.js |
$(function() {
  // global elements used by the functions.
  // the global 'sidebarbutton' variable is assigned below, after the
  // button element has been created in add_sidebar_button()
var bodywrapper = $('.bodywrapper');
var sidebar = $('.sphinxsidebar');
var sidebarwrapper = $('.sphinxsidebarwrapper');
// original margin-left of the bodywrapper and width of the sidebar
// with the sidebar expanded
var bw_margin_expanded = bodywrapper.css('margin-left');
var ssb_width_expanded = sidebar.width();
// margin-left of the bodywrapper and width of the sidebar
// with the sidebar collapsed
var bw_margin_collapsed = '.8em';
var ssb_width_collapsed = '.8em';
// colors used by the current theme
var dark_color = $('.related').css('background-color');
var light_color = $('.document').css('background-color');
function sidebar_is_collapsed() {
return sidebarwrapper.is(':not(:visible)');
}
function toggle_sidebar() {
if (sidebar_is_collapsed())
expand_sidebar();
else
collapse_sidebar();
}
function collapse_sidebar() {
sidebarwrapper.hide();
sidebar.css('width', ssb_width_collapsed);
bodywrapper.css('margin-left', bw_margin_collapsed);
sidebarbutton.css({
'margin-left': '0',
'height': bodywrapper.height()
});
sidebarbutton.find('span').text('»');
sidebarbutton.attr('title', _('Expand sidebar'));
document.cookie = 'sidebar=collapsed';
}
function expand_sidebar() {
bodywrapper.css('margin-left', bw_margin_expanded);
sidebar.css('width', ssb_width_expanded);
sidebarwrapper.show();
sidebarbutton.css({
'margin-left': ssb_width_expanded-12,
'height': bodywrapper.height()
});
sidebarbutton.find('span').text('«');
sidebarbutton.attr('title', _('Collapse sidebar'));
document.cookie = 'sidebar=expanded';
}
function add_sidebar_button() {
sidebarwrapper.css({
'float': 'left',
'margin-right': '0',
'width': ssb_width_expanded - 28
});
// create the button
sidebar.append(
'<div id="sidebarbutton"><span>«</span></div>'
);
var sidebarbutton = $('#sidebarbutton');
light_color = sidebarbutton.css('background-color');
    // find the height of the viewport to center the '«' in the page
var viewport_height;
if (window.innerHeight)
viewport_height = window.innerHeight;
else
viewport_height = $(window).height();
sidebarbutton.find('span').css({
'display': 'block',
'margin-top': (viewport_height - sidebar.position().top - 20) / 2
});
sidebarbutton.click(toggle_sidebar);
sidebarbutton.attr('title', _('Collapse sidebar'));
sidebarbutton.css({
'color': '#FFFFFF',
'border-left': '1px solid ' + dark_color,
'font-size': '1.2em',
'cursor': 'pointer',
'height': bodywrapper.height(),
'padding-top': '1px',
'margin-left': ssb_width_expanded - 12
});
sidebarbutton.hover(
function () {
$(this).css('background-color', dark_color);
},
function () {
$(this).css('background-color', light_color);
}
);
}
function set_position_from_cookie() {
if (!document.cookie)
return;
var items = document.cookie.split(';');
for(var k=0; k<items.length; k++) {
var key_val = items[k].split('=');
var key = key_val[0];
if (key == 'sidebar') {
var value = key_val[1];
if ((value == 'collapsed') && (!sidebar_is_collapsed()))
collapse_sidebar();
else if ((value == 'expanded') && (sidebar_is_collapsed()))
expand_sidebar();
}
}
}
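  // Sketch of the persisted state (hypothetical cookie value): once a user
  // collapses the sidebar, document.cookie contains "sidebar=collapsed", so
  // the next page load calls collapse_sidebar() again from
  // set_position_from_cookie(); expanding writes "sidebar=expanded" instead.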
add_sidebar_button();
var sidebarbutton = $('#sidebarbutton');
set_position_from_cookie();
}); | AChemKit | /AChemKit-0.3.0.tar.gz/AChemKit-0.3.0/doc/html/_static/sidebar.js | sidebar.js |
(function(){var j=this,n=j._,i=function(a){this._wrapped=a},m=typeof StopIteration!=="undefined"?StopIteration:"__break__",b=j._=function(a){return new i(a)};if(typeof exports!=="undefined")exports._=b;var k=Array.prototype.slice,o=Array.prototype.unshift,p=Object.prototype.toString,q=Object.prototype.hasOwnProperty,r=Object.prototype.propertyIsEnumerable;b.VERSION="0.5.5";b.each=function(a,c,d){try{if(a.forEach)a.forEach(c,d);else if(b.isArray(a)||b.isArguments(a))for(var e=0,f=a.length;e<f;e++)c.call(d,
a[e],e,a);else{var g=b.keys(a);f=g.length;for(e=0;e<f;e++)c.call(d,a[g[e]],g[e],a)}}catch(h){if(h!=m)throw h;}return a};b.map=function(a,c,d){if(a&&b.isFunction(a.map))return a.map(c,d);var e=[];b.each(a,function(f,g,h){e.push(c.call(d,f,g,h))});return e};b.reduce=function(a,c,d,e){if(a&&b.isFunction(a.reduce))return a.reduce(b.bind(d,e),c);b.each(a,function(f,g,h){c=d.call(e,c,f,g,h)});return c};b.reduceRight=function(a,c,d,e){if(a&&b.isFunction(a.reduceRight))return a.reduceRight(b.bind(d,e),c);
var f=b.clone(b.toArray(a)).reverse();b.each(f,function(g,h){c=d.call(e,c,g,h,a)});return c};b.detect=function(a,c,d){var e;b.each(a,function(f,g,h){if(c.call(d,f,g,h)){e=f;b.breakLoop()}});return e};b.select=function(a,c,d){if(a&&b.isFunction(a.filter))return a.filter(c,d);var e=[];b.each(a,function(f,g,h){c.call(d,f,g,h)&&e.push(f)});return e};b.reject=function(a,c,d){var e=[];b.each(a,function(f,g,h){!c.call(d,f,g,h)&&e.push(f)});return e};b.all=function(a,c,d){c=c||b.identity;if(a&&b.isFunction(a.every))return a.every(c,
d);var e=true;b.each(a,function(f,g,h){(e=e&&c.call(d,f,g,h))||b.breakLoop()});return e};b.any=function(a,c,d){c=c||b.identity;if(a&&b.isFunction(a.some))return a.some(c,d);var e=false;b.each(a,function(f,g,h){if(e=c.call(d,f,g,h))b.breakLoop()});return e};b.include=function(a,c){if(b.isArray(a))return b.indexOf(a,c)!=-1;var d=false;b.each(a,function(e){if(d=e===c)b.breakLoop()});return d};b.invoke=function(a,c){var d=b.rest(arguments,2);return b.map(a,function(e){return(c?e[c]:e).apply(e,d)})};b.pluck=
function(a,c){return b.map(a,function(d){return d[c]})};b.max=function(a,c,d){if(!c&&b.isArray(a))return Math.max.apply(Math,a);var e={computed:-Infinity};b.each(a,function(f,g,h){g=c?c.call(d,f,g,h):f;g>=e.computed&&(e={value:f,computed:g})});return e.value};b.min=function(a,c,d){if(!c&&b.isArray(a))return Math.min.apply(Math,a);var e={computed:Infinity};b.each(a,function(f,g,h){g=c?c.call(d,f,g,h):f;g<e.computed&&(e={value:f,computed:g})});return e.value};b.sortBy=function(a,c,d){return b.pluck(b.map(a,
function(e,f,g){return{value:e,criteria:c.call(d,e,f,g)}}).sort(function(e,f){e=e.criteria;f=f.criteria;return e<f?-1:e>f?1:0}),"value")};b.sortedIndex=function(a,c,d){d=d||b.identity;for(var e=0,f=a.length;e<f;){var g=e+f>>1;d(a[g])<d(c)?(e=g+1):(f=g)}return e};b.toArray=function(a){if(!a)return[];if(a.toArray)return a.toArray();if(b.isArray(a))return a;if(b.isArguments(a))return k.call(a);return b.values(a)};b.size=function(a){return b.toArray(a).length};b.first=function(a,c,d){return c&&!d?k.call(a,
0,c):a[0]};b.rest=function(a,c,d){return k.call(a,b.isUndefined(c)||d?1:c)};b.last=function(a){return a[a.length-1]};b.compact=function(a){return b.select(a,function(c){return!!c})};b.flatten=function(a){return b.reduce(a,[],function(c,d){if(b.isArray(d))return c.concat(b.flatten(d));c.push(d);return c})};b.without=function(a){var c=b.rest(arguments);return b.select(a,function(d){return!b.include(c,d)})};b.uniq=function(a,c){return b.reduce(a,[],function(d,e,f){if(0==f||(c===true?b.last(d)!=e:!b.include(d,
e)))d.push(e);return d})};b.intersect=function(a){var c=b.rest(arguments);return b.select(b.uniq(a),function(d){return b.all(c,function(e){return b.indexOf(e,d)>=0})})};b.zip=function(){for(var a=b.toArray(arguments),c=b.max(b.pluck(a,"length")),d=new Array(c),e=0;e<c;e++)d[e]=b.pluck(a,String(e));return d};b.indexOf=function(a,c){if(a.indexOf)return a.indexOf(c);for(var d=0,e=a.length;d<e;d++)if(a[d]===c)return d;return-1};b.lastIndexOf=function(a,c){if(a.lastIndexOf)return a.lastIndexOf(c);for(var d=
a.length;d--;)if(a[d]===c)return d;return-1};b.range=function(a,c,d){var e=b.toArray(arguments),f=e.length<=1;a=f?0:e[0];c=f?e[0]:e[1];d=e[2]||1;e=Math.ceil((c-a)/d);if(e<=0)return[];e=new Array(e);f=a;for(var g=0;1;f+=d){if((d>0?f-c:c-f)>=0)return e;e[g++]=f}};b.bind=function(a,c){var d=b.rest(arguments,2);return function(){return a.apply(c||j,d.concat(b.toArray(arguments)))}};b.bindAll=function(a){var c=b.rest(arguments);if(c.length==0)c=b.functions(a);b.each(c,function(d){a[d]=b.bind(a[d],a)});
return a};b.delay=function(a,c){var d=b.rest(arguments,2);return setTimeout(function(){return a.apply(a,d)},c)};b.defer=function(a){return b.delay.apply(b,[a,1].concat(b.rest(arguments)))};b.wrap=function(a,c){return function(){var d=[a].concat(b.toArray(arguments));return c.apply(c,d)}};b.compose=function(){var a=b.toArray(arguments);return function(){for(var c=b.toArray(arguments),d=a.length-1;d>=0;d--)c=[a[d].apply(this,c)];return c[0]}};b.keys=function(a){if(b.isArray(a))return b.range(0,a.length);
var c=[];for(var d in a)q.call(a,d)&&c.push(d);return c};b.values=function(a){return b.map(a,b.identity)};b.functions=function(a){return b.select(b.keys(a),function(c){return b.isFunction(a[c])}).sort()};b.extend=function(a,c){for(var d in c)a[d]=c[d];return a};b.clone=function(a){if(b.isArray(a))return a.slice(0);return b.extend({},a)};b.tap=function(a,c){c(a);return a};b.isEqual=function(a,c){if(a===c)return true;var d=typeof a;if(d!=typeof c)return false;if(a==c)return true;if(!a&&c||a&&!c)return false;
if(a.isEqual)return a.isEqual(c);if(b.isDate(a)&&b.isDate(c))return a.getTime()===c.getTime();if(b.isNaN(a)&&b.isNaN(c))return true;if(b.isRegExp(a)&&b.isRegExp(c))return a.source===c.source&&a.global===c.global&&a.ignoreCase===c.ignoreCase&&a.multiline===c.multiline;if(d!=="object")return false;if(a.length&&a.length!==c.length)return false;d=b.keys(a);var e=b.keys(c);if(d.length!=e.length)return false;for(var f in a)if(!b.isEqual(a[f],c[f]))return false;return true};b.isEmpty=function(a){return b.keys(a).length==
0};b.isElement=function(a){return!!(a&&a.nodeType==1)};b.isArray=function(a){return!!(a&&a.concat&&a.unshift)};b.isArguments=function(a){return a&&b.isNumber(a.length)&&!b.isArray(a)&&!r.call(a,"length")};b.isFunction=function(a){return!!(a&&a.constructor&&a.call&&a.apply)};b.isString=function(a){return!!(a===""||a&&a.charCodeAt&&a.substr)};b.isNumber=function(a){return p.call(a)==="[object Number]"};b.isDate=function(a){return!!(a&&a.getTimezoneOffset&&a.setUTCFullYear)};b.isRegExp=function(a){return!!(a&&
a.test&&a.exec&&(a.ignoreCase||a.ignoreCase===false))};b.isNaN=function(a){return b.isNumber(a)&&isNaN(a)};b.isNull=function(a){return a===null};b.isUndefined=function(a){return typeof a=="undefined"};b.noConflict=function(){j._=n;return this};b.identity=function(a){return a};b.breakLoop=function(){throw m;};var s=0;b.uniqueId=function(a){var c=s++;return a?a+c:c};b.template=function(a,c){a=new Function("obj","var p=[],print=function(){p.push.apply(p,arguments);};with(obj){p.push('"+a.replace(/[\r\t\n]/g,
" ").replace(/'(?=[^%]*%>)/g,"\t").split("'").join("\\'").split("\t").join("'").replace(/<%=(.+?)%>/g,"',$1,'").split("<%").join("');").split("%>").join("p.push('")+"');}return p.join('');");return c?a(c):a};b.forEach=b.each;b.foldl=b.inject=b.reduce;b.foldr=b.reduceRight;b.filter=b.select;b.every=b.all;b.some=b.any;b.head=b.first;b.tail=b.rest;b.methods=b.functions;var l=function(a,c){return c?b(a).chain():a};b.each(b.functions(b),function(a){var c=b[a];i.prototype[a]=function(){var d=b.toArray(arguments);
o.call(d,this._wrapped);return l(c.apply(b,d),this._chain)}});b.each(["pop","push","reverse","shift","sort","splice","unshift"],function(a){var c=Array.prototype[a];i.prototype[a]=function(){c.apply(this._wrapped,arguments);return l(this._wrapped,this._chain)}});b.each(["concat","join","slice"],function(a){var c=Array.prototype[a];i.prototype[a]=function(){return l(c.apply(this._wrapped,arguments),this._chain)}});i.prototype.chain=function(){this._chain=true;return this};i.prototype.value=function(){return this._wrapped}})(); | AChemKit | /AChemKit-0.3.0.tar.gz/AChemKit-0.3.0/doc/html/_static/underscore.js | underscore.js |
/**
 * helper function to return a node containing the
* search summary for a given text. keywords is a list
* of stemmed words, hlwords is the list of normal, unstemmed
* words. the first one is used to find the occurance, the
* latter for highlighting it.
*/
jQuery.makeSearchSummary = function(text, keywords, hlwords) {
var textLower = text.toLowerCase();
var start = 0;
$.each(keywords, function() {
var i = textLower.indexOf(this.toLowerCase());
if (i > -1)
start = i;
});
start = Math.max(start - 120, 0);
var excerpt = ((start > 0) ? '...' : '') +
$.trim(text.substr(start, 240)) +
    ((start + 240 < text.length) ? '...' : '');  // append ellipsis only when the excerpt was cut short
var rv = $('<div class="context"></div>').text(excerpt);
$.each(hlwords, function() {
rv = rv.highlightText(this, 'highlighted');
});
return rv;
}
/**
* Porter Stemmer
*/
var PorterStemmer = function() {
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
};
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
}
}
/**
* Search Module
*/
var Search = {
_index : null,
_queued_query : null,
_pulse_status : -1,
init : function() {
var params = $.getQueryParameters();
if (params.q) {
var query = params.q[0];
$('input[name="q"]')[0].value = query;
this.performSearch(query);
}
},
loadIndex : function(url) {
$.ajax({type: "GET", url: url, data: null, success: null,
dataType: "script", cache: true});
},
setIndex : function(index) {
var q;
this._index = index;
if ((q = this._queued_query) !== null) {
this._queued_query = null;
Search.query(q);
}
},
hasIndex : function() {
return this._index !== null;
},
deferQuery : function(query) {
this._queued_query = query;
},
stopPulse : function() {
this._pulse_status = 0;
},
startPulse : function() {
if (this._pulse_status >= 0)
return;
function pulse() {
Search._pulse_status = (Search._pulse_status + 1) % 4;
var dotString = '';
for (var i = 0; i < Search._pulse_status; i++)
dotString += '.';
Search.dots.text(dotString);
if (Search._pulse_status > -1)
window.setTimeout(pulse, 500);
};
pulse();
},
/**
* perform a search for something
*/
performSearch : function(query) {
// create the required interface elements
this.out = $('#search-results');
this.title = $('<h2>' + _('Searching') + '</h2>').appendTo(this.out);
this.dots = $('<span></span>').appendTo(this.title);
this.status = $('<p style="display: none"></p>').appendTo(this.out);
this.output = $('<ul class="search"/>').appendTo(this.out);
$('#search-progress').text(_('Preparing search...'));
this.startPulse();
// index already loaded, the browser was quick!
if (this.hasIndex())
this.query(query);
else
this.deferQuery(query);
},
query : function(query) {
var stopwords = ['and', 'then', 'into', 'it', 'as', 'are', 'in',
'if', 'for', 'no', 'there', 'their', 'was', 'is',
'be', 'to', 'that', 'but', 'they', 'not', 'such',
'with', 'by', 'a', 'on', 'these', 'of', 'will',
'this', 'near', 'the', 'or', 'at'];
// stem the searchterms and add them to the correct list
var stemmer = new PorterStemmer();
var searchterms = [];
var excluded = [];
var hlterms = [];
var tmp = query.split(/\s+/);
var object = (tmp.length == 1) ? tmp[0].toLowerCase() : null;
for (var i = 0; i < tmp.length; i++) {
if ($u.indexOf(stopwords, tmp[i]) != -1 || tmp[i].match(/^\d+$/) ||
tmp[i] == "") {
// skip this "word"
continue;
}
// stem the word
var word = stemmer.stemWord(tmp[i]).toLowerCase();
// select the correct list
if (word[0] == '-') {
var toAppend = excluded;
word = word.substr(1);
}
else {
var toAppend = searchterms;
hlterms.push(tmp[i].toLowerCase());
}
// only add if not already in the list
if (!$.contains(toAppend, word))
toAppend.push(word);
};
var highlightstring = '?highlight=' + $.urlencode(hlterms.join(" "));
// console.debug('SEARCH: searching for:');
// console.info('required: ', searchterms);
// console.info('excluded: ', excluded);
// prepare search
var filenames = this._index.filenames;
var titles = this._index.titles;
var terms = this._index.terms;
var objects = this._index.objects;
var objtypes = this._index.objtypes;
var objnames = this._index.objnames;
var fileMap = {};
var files = null;
// different result priorities
var importantResults = [];
var objectResults = [];
var regularResults = [];
var unimportantResults = [];
$('#search-progress').empty();
// lookup as object
if (object != null) {
for (var prefix in objects) {
for (var name in objects[prefix]) {
var fullname = (prefix ? prefix + '.' : '') + name;
if (fullname.toLowerCase().indexOf(object) > -1) {
match = objects[prefix][name];
descr = objnames[match[1]] + _(', in ') + titles[match[0]];
// XXX the generated anchors are not generally correct
// XXX there may be custom prefixes
result = [filenames[match[0]], fullname, '#'+fullname, descr];
switch (match[2]) {
case 1: objectResults.push(result); break;
case 0: importantResults.push(result); break;
case 2: unimportantResults.push(result); break;
}
}
}
}
}
// sort results descending
objectResults.sort(function(a, b) {
return (a[1] > b[1]) ? -1 : ((a[1] < b[1]) ? 1 : 0);
});
importantResults.sort(function(a, b) {
return (a[1] > b[1]) ? -1 : ((a[1] < b[1]) ? 1 : 0);
});
unimportantResults.sort(function(a, b) {
return (a[1] > b[1]) ? -1 : ((a[1] < b[1]) ? 1 : 0);
});
// perform the search on the required terms
for (var i = 0; i < searchterms.length; i++) {
var word = searchterms[i];
// no match but word was a required one
if ((files = terms[word]) == null)
break;
if (files.length == undefined) {
files = [files];
}
// create the mapping
for (var j = 0; j < files.length; j++) {
var file = files[j];
if (file in fileMap)
fileMap[file].push(word);
else
fileMap[file] = [word];
}
}
// now check if the files don't contain excluded terms
for (var file in fileMap) {
var valid = true;
// check if all requirements are matched
if (fileMap[file].length != searchterms.length)
continue;
// ensure that none of the excluded terms is in the
// search result.
for (var i = 0; i < excluded.length; i++) {
if (terms[excluded[i]] == file ||
$.contains(terms[excluded[i]] || [], file)) {
valid = false;
break;
}
}
// if we have still a valid result we can add it
// to the result list
if (valid)
regularResults.push([filenames[file], titles[file], '', null]);
}
// delete unused variables in order to not waste
// memory until list is retrieved completely
delete filenames, titles, terms;
// now sort the regular results descending by title
regularResults.sort(function(a, b) {
var left = a[1].toLowerCase();
var right = b[1].toLowerCase();
return (left > right) ? -1 : ((left < right) ? 1 : 0);
});
// combine all results
var results = unimportantResults.concat(regularResults)
.concat(objectResults).concat(importantResults);
// print the results
var resultCount = results.length;
function displayNextItem() {
// results left, load the summary and display it
if (results.length) {
var item = results.pop();
var listItem = $('<li style="display:none"></li>');
if (DOCUMENTATION_OPTIONS.FILE_SUFFIX == '') {
// dirhtml builder
var dirname = item[0] + '/';
if (dirname.match(/\/index\/$/)) {
dirname = dirname.substring(0, dirname.length-6);
} else if (dirname == 'index/') {
dirname = '';
}
listItem.append($('<a/>').attr('href',
DOCUMENTATION_OPTIONS.URL_ROOT + dirname +
highlightstring + item[2]).html(item[1]));
} else {
// normal html builders
listItem.append($('<a/>').attr('href',
item[0] + DOCUMENTATION_OPTIONS.FILE_SUFFIX +
highlightstring + item[2]).html(item[1]));
}
if (item[3]) {
listItem.append($('<span> (' + item[3] + ')</span>'));
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
} else if (DOCUMENTATION_OPTIONS.HAS_SOURCE) {
$.get(DOCUMENTATION_OPTIONS.URL_ROOT + '_sources/' +
item[0] + '.txt', function(data) {
if (data != '') {
listItem.append($.makeSearchSummary(data, searchterms, hlterms));
Search.output.append(listItem);
}
listItem.slideDown(5, function() {
displayNextItem();
});
});
} else {
// no source available, just display title
Search.output.append(listItem);
listItem.slideDown(5, function() {
displayNextItem();
});
}
}
// search finished, update title and status message
else {
Search.stopPulse();
Search.title.text(_('Search Results'));
if (!resultCount)
Search.status.text(_('Your search did not match any documents. Please make sure that all words are spelled correctly and that you\'ve selected enough categories.'));
else
Search.status.text(_('Search finished, found %s page(s) matching the search query.').replace('%s', resultCount));
Search.status.fadeIn(500);
}
}
displayNextItem();
}
}
$(document).ready(function() {
Search.init();
}); | AChemKit | /AChemKit-0.3.0.tar.gz/AChemKit-0.3.0/doc/html/_static/searchtools.js | searchtools.js |
import asyncio
import typing
import warnings
from zlib import compress
import acord
import sys
import traceback
from inspect import iscoroutinefunction
from acord.core.decoders import ETF, JSON, decompressResponse
from acord.core.signals import gateway
from .core.http import HTTPClient
from .errors import *
from functools import wraps
from typing import (
Union, Callable
)
from acord.models import User
class Client(object):
"""
Client for interacting with the discord API
Parameters
----------
loop: :class:`~asyncio.AbstractEventLoop`
        An existing event loop to run the client on
token: :class:`str`
Your API Token which can be generated at the developer portal
tokenType: typing.Union[BEARER, BOT]
The token type, which controls the payload data and restrictions.
.. warning::
            If BEARER, do not use the `run` method. You're still able to access data normally.
commandHandler: :class:`~typing.Callable`
An optional command handler, defaults to the built-in handler at :class:`~acord.DefaultCommandHandler`.
        **Parameters passed through:**
* Message: :class:`~acord.Message`
* UpdatedCache: :class:`bool`
"""
def __init__(self, *,
loop: asyncio.AbstractEventLoop = asyncio.get_event_loop(),
token: str = None,
encoding: str = "JSON",
compress: bool = False,
commandHandler: Callable = None,
) -> None:
self.loop = loop
self.token = token
self._events = dict()
self.commandHandler = commandHandler
# Gateway connection stuff
self.encoding = encoding
self.compress = compress
# Others
self.session_id = None
self.gateway_version = None
self.user = None
def bindToken(self, token: str) -> None:
self._lruPermanent = token
def event(self, func):
if not iscoroutinefunction(func):
raise ValueError('Provided function was not a coroutine')
eventName = func.__qualname__
if eventName in self._events:
self._events[eventName].append(func)
else:
self._events.update({eventName: [func]})
return func
def on_error(self, event_method):
acord.logger.error('Failed to run event "{}".'.format(event_method))
print(f'Ignoring exception in {event_method}', file=sys.stderr)
traceback.print_exc()
async def dispatch(self, event_name: str, *args, **kwargs) -> None:
if not event_name.startswith('on_'):
event_name = 'on_' + event_name
acord.logger.info('Dispatching event: {}'.format(event_name))
events = self._events.get(event_name, [])
acord.logger.info('Total of {} events found for {}'.format(len(events), event_name))
for event in events:
try:
await event(*args, **kwargs)
except Exception:
self.on_error(event)
async def handle_websocket(self, ws):
async for message in ws:
await self.dispatch('socket_recieve')
data = message.data
if type(data) is bytes:
data = decompressResponse(data)
if not data:
continue
if not data.startswith('{'):
data = ETF(data)
else:
data = JSON(data)
if data['op'] == gateway.INVALIDSESSION:
acord.logger.error('Invalid Session - Reconnecting Shortly')
raise GatewayConnectionRefused('Invalid session data, currently not handled in this version')
            if data['t'] == 'READY':
                # populate session state before dispatching so that on_ready
                # handlers can already read client.user, session_id, etc.
                self.session_id = data['d']['session_id']
                self.gateway_version = data['d']['v']
                self.user = User(**data['d']['user'])
                await self.dispatch('ready')
                continue
if data['op'] == gateway.HEARTBEATACK:
await self.dispatch('heartbeat')
def resume(self):
""" Resumes a closed gateway connection """
def run(self, token: str = None, *, reconnect: bool = True):
if (token or self.token) and getattr(self, '_lruPermanent', False):
            warnings.warn("Cannot use current token as another token was bound to the client", CannotOverideTokenWarning)
token = getattr(self, '_lruPermanent', None) or (token or self.token)
if not token:
raise ValueError('No token provided')
self.http = HTTPClient(loop=self.loop)
self.token = token
# Login to create session
self.loop.run_until_complete(self.http.login(token=token))
coro = self.http._connect(
token,
encoding=self.encoding,
compress=self.compress
)
# Connect to discord, send identity packet + start heartbeat
ws = self.loop.run_until_complete(coro)
self.loop.run_until_complete(self.dispatch('connect'))
acord.logger.info('Connected to websocket')
self.loop.run_until_complete(self.handle_websocket(ws)) | ACord | /ACord-0.0.1a0-py3-none-any.whl/acord/client.py | client.py |
try:
import uvloop
uvloop.install()
except ImportError:
__import__('warnings').warn('Failed to import UVLoop, it is recommended to install this library\npip install uvloop', ImportWarning)
import asyncio
import typing
import aiohttp
import acord
import sys
from acord.errors import GatewayConnectionRefused, HTTPException
from . import helpers
from .heartbeat import KeepAlive
from .decoders import *
from .signals import gateway
class HTTPClient(object):
"""
Base client used to connection and interact with the websocket.
Parameters
----------
loop: :class:`~asyncio.AbstractEventLoop`
        A pre-existing event loop for aiohttp to run on, defaults to ``asyncio.get_event_loop()``
    reconnect: :class:`bool`
        Attempt to reconnect to the gateway if the connection fails. If set to an integer, it will re-attempt n times.
    wsTimeout: :class:`~aiohttp.ClientTimeout`
        Custom timeout configuration for the websocket connection.
**payloadData: :class:`dict`
A dictionary of payload data to be sent with any request
.. note::
This information can be overwritten with each response
"""
def __init__(self,
token: str = None,
connecter: typing.Optional[aiohttp.BaseConnector] = None,
wsTimeout: aiohttp.ClientTimeout = aiohttp.ClientTimeout(60, connect=None),
proxy: typing.Optional[str] = None,
proxy_auth: typing.Optional[aiohttp.BasicAuth] = None,
loop: typing.Optional[asyncio.AbstractEventLoop] = asyncio.get_event_loop(),
unsync_clock: bool = True,
) -> None:
self.token = token
self.loop = loop
self.wsTimeout = wsTimeout
self.connector = connecter
self._ws_connected = False
self.proxy = proxy
self.proxy_auth = proxy_auth
self.use_clock = not unsync_clock
user_agent = "ACord - https://github.com/Mecha-Karen/ACord {0} Python{1[0]}.{1[1]} aiohttp/{2}"
self.user_agent = user_agent.format(
acord.__version__, sys.version, aiohttp.__version__
)
    def getIdentityPacket(self, intents = 513):
        return {
            "op": gateway.IDENTIFY,
            "d": {
                "token": self.token,
                # honour the caller-supplied intents instead of a hard-coded value
                "intents": intents,
"properties": {
"$os": sys.platform,
"$browser": "acord",
"$device": "acord"
}
}
}
    def updatePayloadData(self, overwrite: bool = False, **newData) -> None:
        if overwrite:
            self.startingPayloadData = newData
        else:
            # startingPayloadData may not have been set yet, so fall back to an empty dict
            self.startingPayloadData = {**getattr(self, 'startingPayloadData', {}), **newData}
async def login(self, *, token: str) -> None:
""" Define a session for the http client to use. """
self._session = aiohttp.ClientSession(connector=self.connector)
ot = self.token
self.token = token
try:
data = await self.request(
helpers.Route("GET", path="/users/@me")
)
except HTTPException as exc:
self.token = ot
acord.logger.error('Failed to login to discord, improper token passed')
raise GatewayConnectionRefused('Invalid or Improper token passed') from exc
return data
async def _fetchGatewayURL(self, token):
uri = helpers.buildURL('gateway', 'bot')
async with self._session.get(uri, headers={'Authorization': f"Bot {token}"}) as resp:
data = await resp.json()
return data
async def _connect(self, token: str, *,
encoding: helpers.GATEWAY_ENCODING, compress: int = 0,
**identityPacketKwargs
) -> None:
if not getattr(self, '_session', False):
            acord.logger.warning('Session not defined, user not logged in. Calling login manually')
await self.login(token=(token or self.token))
self.encoding = encoding
self.compress = compress
respData = await self._fetchGatewayURL(token)
GATEWAY_WEBHOOK_URL = respData['url']
GATEWAY_WEBHOOK_URL += f'?v={helpers.API_VERSION}'
GATEWAY_WEBHOOK_URL += f'&encoding={encoding.lower()}'
if compress:
GATEWAY_WEBHOOK_URL += "&compress=zlib-stream"
acord.logger.info('Generated websocket url: %s' % GATEWAY_WEBHOOK_URL)
kwargs = {
'proxy_auth': self.proxy_auth,
'proxy': self.proxy,
'max_msg_size': 0,
'timeout': self.wsTimeout.total,
'autoclose': False,
'headers': {
'User-Agent': self.user_agent,
},
'compress': compress
}
ws = await self._session.ws_connect(GATEWAY_WEBHOOK_URL, **kwargs)
helloRecv = await ws.receive()
data = helloRecv.data
if compress:
data = decompressResponse(data)
if not data.startswith('{'):
data = ETF(data)
else:
data = JSON(data)
self._ws_connected = True
self.ws = ws
self.loop.create_task(KeepAlive(self.getIdentityPacket(**identityPacketKwargs), ws, data).run())
return ws
async def request(self, route: helpers.Route, data: dict = None, **payload) -> None:
url = route.url
headers = payload
headers['Authorization'] = "Bot " + self.token
headers['User-Agent'] = self.user_agent
kwargs = dict()
kwargs['data'] = data
kwargs['headers'] = headers
resp = await self._session.request(
method=route.method,
url=url,
**kwargs
)
return resp
@property
def connected(self):
return self._ws_connected | ACord | /ACord-0.0.1a0-py3-none-any.whl/acord/core/http.py | http.py |
# THINGS TO DO
# Complete implementation for the root finding methods
# Perhaps include a visualization?
# Perhaps include some type of automatation for the initial guess that utilizes our derivation package?
# Double check syntax - tensor vs. AD class
# Consider changing method definitions such that the main class initialization values are set to defaults (this allows the user to change the parameters for each individual numerical method)
# Root counter - count the estimated number of roots over a given domain. Include this?
# LIBRARIES OF USE
# NOTE THAT THE CHDIR COMMAND SHOULD BE DELETED PRIOR TO FINAL SUBMISSION. IT
# IS HERE SOLELY FOR TESTING PURPOSES
# import os
# os.chdir("C:/Users/DesktopID3412MNY/Desktop/cs107-FinalProject/")
from AD_Derivators.functions import tensor, autograd
import numpy as np
from AD_Derivators.helper_functions import ad_utils
import inspect
# MAIN ROOT FINDER FUNCTION
def root(x0, functions, x1 = None, tolerance = 1e-9, max_iterations = 100,
method = "newton", min_diff = None, verbose = 1):
"""
Args:
====================
x0 (list or np.ndarray): a 1-d or 2-d numpy array matrix for initial guess
a. 1d: shape == (num_inputs,)
b. 2d: shape == (num_inputs, vec_dim)
    functions (list): a list of callable functions, each taking num_inputs inputs
    tolerance : positive scalar
        How close the estimated root should be to 0. The default is 1e-9.
max_iterations : INT > 0, optional
Maximum number of iterations the algorithm will be run prior to
terminating, which will occur if a value within the tolerance is not
found. The default is 100.
method : STRING, optional
The name of the root finding algorithm to use. The default is "newton".
Raises
------
Exception
An exception is raised if the user enters an algorithm type that is not
defined in this code base.
Returns
-------
    (dict): {"roots": np.ndarray, "iters": int, "case": str}
"""
assert isinstance(x0, list) or isinstance(x0, np.ndarray), f"x0 should be a list or np.ndarray"
assert isinstance(functions, list) and all(callable(f) for f in functions), f"functions should be a list of callable function"
assert ad_utils.check_number(tolerance) and tolerance > 0, f"Expected tolerance to be a positive number, instead received {type(tolerance)}"
assert isinstance(max_iterations, int) and max_iterations > 0, f"Expected max_iterations to be a positive integer, instead received {type(max_iterations)}"
assert isinstance(method, str)
assert ad_utils.check_number(min_diff) or (min_diff is None), f"Expected tolerance to be a positive number, instead received {type(min_diff)}"
if min_diff is None:
min_diff = tolerance
elif min_diff > tolerance:
        raise ValueError("Expected min_diff to be no greater than tolerance")
method = method.strip().lower()
num_functions = len(functions)
num_inputs = num_functions #!!!!!
for f in functions:
if len(inspect.signature(f).parameters) != num_inputs:
            raise IOError("Each function should take exactly as many arguments as there are functions")
# convert x0 to np.array first
x0 = np.array(x0)
assert len(x0.shape) < 3, f"we only accept 1 or 2 dimensional input"
assert x0.shape[0] == num_inputs, f"the dimension of initial guess x0 should match (num_functions,)"
x0 = x0.reshape(num_functions,-1) # expand dimension for 1-dim input
vec_dim = x0.shape[1]
# expand dim and repeat
x0 = np.expand_dims(x0, axis=0)
x0 = np.repeat(x0, num_functions, axis = 0 )
if x1 is None:
# build ad class
ad = autograd.AD(num_functions, num_inputs, vec_dim)
ad.add_inputs(x0)
ad.build_function(functions)
if method == "newton":
res, iters, case = _newton(ad, tolerance, max_iterations, min_diff)
elif method == "broyden1":
res, iters, case = _broyden_good(ad, tolerance, max_iterations, min_diff)
elif method == "broyden2":
res, iters, case = _broyden_bad(ad, tolerance, max_iterations, min_diff)
# elif method == 'steffensen':
# res, iters, case = _steffensen(x0, functions, tolerance, max_iterations, min_diff)
else:
raise Exception(f"Method \"{method}\" is not a valid solver algorithm when x1 is None")
else:
x1 = np.array(x1).reshape(num_functions,-1)
x1 = np.expand_dims(x1, axis=0)
x1 = np.repeat(x1, num_functions, axis = 0 )
assert x1.shape == x0.shape, "the dimension of x0 should match x1"
if method == "secant":
res, iters, case = _secant(x0,x1, functions, tolerance, max_iterations, min_diff)
elif method == "bisection":
res, iters, case = _bisection(x0,x1, functions, tolerance, max_iterations, min_diff)
else:
raise Exception(f"Method \"{method}\" is not a valid solver algorithm when x1 is not None")
if verbose:
print(f'method: {method}')
print(f'results: {res}')
print(f'number of iterations take: {iters}')
print(case)
return {'roots': res, 'iters': iters, 'case':case}
def _newton(ad,tolerance, max_iterations, min_diff):
x0 = ad.get_inputs()[0,:,:] # x0 is a np.ndarray (num_inputs, vec_dim)
case = None
for i in range(max_iterations):
x = x0 - np.linalg.pinv(ad.jacobian)@_get_fx(x0, ad.function) # x is a np.ndarray (num_inputs, vec_dim)
if _check_root(x, ad.function, tolerance):
case = "[PASS] root found"
return x, i, case
# converged
if (np.linalg.norm(x - x0) < min_diff):
case = "[FAIL] converged"
return x, i, case
x0 = x
next_input = np.repeat([x0], ad.num_functions, axis = 0)
ad.add_inputs(next_input) # update jacobian for next round
ad.build_function(ad.function) # recalculate jacobian for next step
case = "[FAIL] maximum iteration reached"
return x, i, case
def _broyden_good(ad, tolerance, max_iterations, min_diff):
# give the initialization for Jobian inverse
try:
J = ad.jacobian
J_inv = np.linalg.inv(J)
except: # use identity initialization when jacobian is not invertible
J_inv = np.eye(ad.num_functions)
x0 = ad.get_inputs()[0,:,:] # x0 is a np.ndarray (num_inputs, vec_dim)
case = None
f0 = _get_fx(x0, ad.function)
for i in range(max_iterations):
x = x0 - J_inv@f0
if _check_root(x, ad.function, tolerance):
case = "[PASS] root found"
return x, i, case
# converged
if (np.linalg.norm(x - x0)< min_diff):
case = "[FAIL] converged"
return x, i, case
delta_x = x - x0
f = _get_fx(x, ad.function)
delta_f = f - f0
# update J_inv, f0, x0
J_inv = J_inv + np.dot((delta_x - J_inv@delta_f)/np.dot(delta_x.T@J_inv,delta_f), delta_x.T@J_inv)
f0 = f
x0 = x
case = "[FAIL] maximum iteration reached"
return x, i, case
def _broyden_bad(ad, tolerance, max_iterations, min_diff):
#J = ad.jacobian
try:
J = ad.jacobian
J_inv = np.linalg.inv(J)
except:
J_inv = np.eye(ad.num_functions)
x0 = ad.get_inputs()[0,:,:] # x0 is a np.ndarray (num_inputs, vec_dim)
case = None
f0 = _get_fx(x0, ad.function)
for i in range(max_iterations):
x = x0 - J_inv@f0
if _check_root(x, ad.function, tolerance):
#print(x,i,case)
case = "[PASS] root found"
return x, i, case
# converged
if (np.linalg.norm(x - x0) < min_diff):
case = "[FAIL] converged"
return x, i, case
delta_x = x - x0
f = _get_fx(x, ad.function)
delta_f = f - f0
J_inv = J_inv + np.dot((delta_x - J_inv@delta_f)/np.power((np.linalg.norm(delta_f)),2), delta_f.T)
f0 = f
x0 = x
case = "[FAIL] maximum iteration reached"
return x, i, case
def _check_zero(a):
"""
make sure no elements in a are 0
"""
if (a == 0).any():
        a = a.astype(float)  # convert to float first (np.float is deprecated)
for m in range(a.shape[0]):
for n in range(a.shape[1]):
if a[m,n] ==0:
a[m,n]+= 0.1
return a
def _secant(x0,x1, functions,tolerance, max_iterations, min_diff):
if len(functions) > 1:
        raise IOError("The secant method only applies to a single function of a single variable")
case = None
    x0 = x0.astype(float)
    x1 = x1.astype(float)
if x1 == x0:
x1 = x0 + 0.1
for i in range(max_iterations):
# make sure x0 does not equal to x1
f0 = _get_fx(x0,functions)
f1 = _get_fx(x1, functions)
if (f1 - f0 == 0).any():
case = "[FAIL] Zero division encountered"
return x1,i,case
g = (x1-x0)/(f1-f0)
x = x1 - f1*g
if _check_root(x, functions, tolerance):
case = "[PASS] root found"
return x1,i,case
# converged
if (np.linalg.norm(x - x1) < min_diff):
case = "[FAIL] converged"
return x1,i,case
x0 = x1
x1 = x
case = "[FAIL] maximum iteration reached"
return x, i, case
def _bisection(x0,x1, functions,tolerance, max_iterations, min_diff):
"""
Need to make sure x0 < x1 and f(x0)f(x1) <0
"""
case = None
if len(functions) > 1:
        raise IOError("The bisection method only applies to a single function of a single variable")
    x0 = x0.astype(float)
    x1 = x1.astype(float)
x0,x1 = _prepare_bisection(x0,x1,functions)
for i in range(max_iterations):
c= (x0+x1)/2
if _check_root(c, functions, tolerance):
case = "[PASS] root found"
return c, i, case
x0,x1 = _update_bisection(x0,x1,c, functions)
# converged
if (np.linalg.norm(x1 - x0) < min_diff):
case = "[FAIL] converged"
return c, i, case
case = "[FAIL] maximum iteration reached"
return c, i, case
def _prepare_bisection(x0,x1, functions):
"""
make sure all element in x0 < x1 if at the same place
"""
vec1 = x0[0,:,:]
vec2 = x1[0,:,:]
res0 = _get_fx(vec1,functions)
res1 = _get_fx(vec2, functions)
if (res0*res1 > 0).any():
        raise IOError("For bisection you need to give inputs such that f(x0)f(x1) < 0")
for m in range(len(vec1)):
for n in range(len(vec1[0])):
if vec1[m,n] > vec2[m,n]:
t = vec1[m,n]
vec1[m,n] = vec2[m,n]
vec2[m,n] = t
return vec1,vec2
def _update_bisection(a,b,c, functions):
"""
a,b,c: num_inputs x vec_dim
"""
fa = _get_fx(a, functions) # num_functions x vec_dim
fb = _get_fx(b, functions) #
fx = _get_fx(c, functions)
for m in range(a.shape[0]):
for n in range(a.shape[1]):
if fa[m,n]*fx[m,n] > 0:
a[m,n] = c[m,n]
elif fb[m,n]*fx[m,n] > 0:
b[m,n] = c[m,n]
return a,b
def _check_root(x, functions, tolerance):
"""
x (np.ndarray): a 2-d array, ()
functions: a list of functions
tolerance: a positive number
"""
flag = True
for f in functions:
inputs = [x[i] for i in range(len(x))]
res = f(*inputs) # res is a np.ndarray
if np.linalg.norm(res) >= tolerance:
flag = False
break
return flag
def _get_fx(x, functions):
"""
x (np.ndarray): a numpy array ( num_functions, num_inputs, vec_dim)
"""
output = [] #use list in case the output of root are vectors
for f in functions:
inputs = [x[i] for i in range(len(x))]
res = f(*inputs) # res is a (vec_dim,) np.ndarray
output.append(res)
return np.array(output) #(num_inputs, vec_dim) | AD-Derivators | /AD_Derivators-0.0.2-py3-none-any.whl/AD_Derivators/rootfinder.py | rootfinder.py |
import numpy as np
from AD_Derivators.helper_functions.ad_utils import check_number, check_array, check_list, check_anyzero, check_tan, check_anyneg, check_nontensor_input
"""
tensor.py
derivative rules for elementary operations.
"""
def sin(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
return np.sin(t)
elif isinstance(t,Tensor):
pass
else:
raise TypeError('The input of tensor.sin can only be a Tensor/list/np.array/number.')
ob = Tensor()
#new func val
ob._val = np.sin(t._val)
#chain rule
ob._der = np.cos(t._val)* t._der
return ob
def cos(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
return np.cos(t)
elif isinstance(t,Tensor):
pass
else:
raise TypeError('The input of tensor.cos can only be a Tensor/list/np.array/number.')
ob = Tensor()
#new func val
ob._val = np.cos(t._val)
#chain rule
ob._der = -np.sin(t._val)* t._der
return ob
def tan(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
if not check_tan(t):
return np.tan(t)
else:
raise ValueError('Tan undefined')
elif isinstance(t,Tensor):
if check_tan(t._val):
raise ValueError('Tan undefined')
else:
raise TypeError('The input of tensor.tan can only be a Tensor/list/np.array/number.')
ob = Tensor()
#new func val
ob._val = np.tan(t._val)
#chain rule
ob._der = t._der/(np.cos(t._val)**2)
return ob
def asin(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
t = np.array(t)
if (t > 1).any() or (t < -1).any():
raise ValueError('The value of asin is undefined outside of 1 or -1')
else:
return np.arcsin(t)
elif isinstance(t,Tensor):
if (t._val == 1).any() or (t._val == -1).any():
raise ValueError('The derivative of asin is undefined at 1 or -1')
elif (t._val > 1).any() or (t._val < -1).any():
raise ValueError('The value of asin is undefined outside of 1 or -1')
else:
ob = Tensor()
#new func val
ob._val = np.arcsin(t._val)
#chain rule
ob._der = 1/(np.sqrt(1 - t._val**2))* t._der
return ob
else:
raise TypeError('The input of tensor.asin can only be a Tensor/numpy/list/scaler object.')
def sinh(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
t = np.array(t)
return np.sinh(t)
elif isinstance(t,Tensor):
ob = Tensor()
#new func val
ob._val = np.sinh(t._val)
#chain rule
ob._der = np.cosh(t._val)* t._der
return ob
else:
raise TypeError('The input of tensor.sinh can only be a Tensor/numpy/list/scaler object.')
def acos(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
t = np.array(t)
if (t > 1).any() or (t < -1).any():
raise ValueError('The value of acos is undefined outside of 1 or -1')
else:
return np.arccos(t)
elif isinstance(t,Tensor):
if (t._val == 1).any() or (t._val == -1).any():
raise ValueError('The derivative of acos is undefined at 1 or -1')
elif (t._val > 1).any() or (t._val < -1).any():
raise ValueError('The value of acos is undefined outside of 1 or -1')
else:
ob = Tensor()
#new func val
ob._val = np.arccos(t._val)
#chain rule
ob._der = -1/(np.sqrt(1 - t._val**2))* t._der
return ob
else:
raise TypeError('The input of tensor.acos can only be a Tensor/numpy/list/scaler object.')
def cosh(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
t = np.array(t)
return np.cosh(t)
elif isinstance(t,Tensor):
ob = Tensor()
#new func val
ob._val = np.cosh(t._val)
#chain rule
ob._der = np.sinh(t._val)* t._der
return ob
else:
raise TypeError('The input of tensor.cosh can only be a Tensor/numpy/list/scaler object.')
def atan(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
t = np.array(t)
        return np.arctan(t)
elif isinstance(t,Tensor):
ob = Tensor()
#new func val
ob._val = np.arctan(t._val)
#chain rule
ob._der = t._der/(1 + t._val**2)
return ob
else:
        raise TypeError('The input of tensor.atan can only be a Tensor/numpy/list/scaler object.')
def tanh(t):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
"""
if check_nontensor_input(t):
t = np.array(t)
return np.tanh(t)
elif isinstance(t,Tensor):
ob = Tensor()
#new func val
ob._val = np.tanh(t._val)
#chain rule
ob._der = t._der* (1/np.cosh(t._val))**2
return ob
else:
raise TypeError('The input of tensor.tanh can only be a Tensor/numpy/list/scaler object.')
def exp(t, base = np.e):
"""
Input
=========
t (tensor.Tensor/numpy/list/scaler)
base (scaler)
"""
if not check_number(base):
        raise TypeError('The base must be a scalar.')
if check_nontensor_input(t): # no need to worry if base nonpositive
return np.power(base,t)
elif isinstance(t,Tensor):
if base <=0:
raise ValueError('The base must be positive, otherwise derivation undefined')
else:
raise TypeError('The input of tensor.exp can only be a Tensor/list/np.array/number.')
ob = Tensor()
#new func val
ob._val = np.power(base,t._val)
#chain rule
ob._der = np.power(base,t._val) * t._der * np.log(base)
return ob
def log(t, a = np.e):
"""
Input
=========
t (tensor.Tensor)
"""
if not check_number(a):
        raise TypeError('The base should be a scalar')
if a <= 0:
raise ValueError('The base must be positive')
if check_nontensor_input(t):
t = np.array(t)
if (t <= 0).any():
raise ValueError('log undefined')
else:
return np.log(t)
elif isinstance(t,Tensor):
if check_anyneg(t._val):
raise ValueError('Log undefined')
else:
#create object for output and derivative
ob = Tensor()
#new func val
ob._val = np.log(t._val)/np.log(a)
#chain rule
ob._der = (1/(t._val*np.log(a)))*t._der
return ob
else:
raise TypeError('The input of tensor.log can only be a Tensor/list/np.array/number.')
def sigmoid(t, t0 = 0, L = 1, k = 1):
"""
A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with equation
f(t) = L/(1+exp(-k(t-t0)))
Input
=========
t needs to be a tensor.Tensor object
t0 is the x value of the sigmoid's midpoint. The default value is 0.
L is the curve's maximum value.
k is the logistic growth rate or steepness of the curve.
"""
if not isinstance(t,Tensor):
raise TypeError('The input of tensor.sigmoid can only be a Tensor object.')
if not check_number(t0):
raise TypeError('t0 must be either an int or float')
if not check_number(L):
raise TypeError('L must be either an int or float')
if not check_number(k):
raise TypeError('k must be either an int or float')
#create object for output and derivative
ob = Tensor()
#new func val
ob._val = L / (1+np.exp(-k * (t._val - t0)))
#chain rule
ob._der = t._der * (L * k * np.exp(-k * (t._val - t0))) / (1 + np.exp(-k * (t._val - t0)))**2
return ob
def sqrt(t):
"""
The function used to calculate the square root of a non-negative variable.
Input
============
t needs to be a tensor.Tensor object. All the elements must be non-negative.
"""
if check_nontensor_input(t):
t = np.array(t)
if (t < 0).any():
raise ValueError('The constant input must be all nonnegative value, no complex number allowed')
else:
return np.sqrt(t)
elif isinstance(t,Tensor):
if check_anyneg(t):
raise ValueError('The input must be all positive value, no complex number allowed')
else:
ob = Tensor()
ob._val = t._val**(0.5)
ob._der = 0.5* t._val**(-0.5) * t._der
return ob
else:
raise TypeError('The input of tensor.sqrt can only be a Tensor/list/number/np.ndarray object.')
class Tensor:
def __init__ (self, val = np.array([1.0])):
"""
Initialize Tensor object.
val (scalar/list/np.ndarray)
Attributes:
=============
self.val (number): the value of the Tensor
self.der (number): the derivative of the Tensor
Example
=============
>>> a = Tensor(2.0)
>>> print(a.val)
2.0
>>> print(a.der)
1.0
>>> a.der = 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
"""
#check inputs
if check_number(val):
self._val = np.array([val])
elif check_list(val):
self._val = np.array(val)
elif check_array(val):
self._val = val
else:
raise TypeError('The input of val should be a number, a list or a numpy array.')
#self._flag = False
self._der = np.ones(len(self._val))
@property
def val(self):
return self._val
@property
def der(self):
return self._der
def __add__ (self, other):
"""
Overload the addition
EXAMPLES
==========
>>> f = Tensor(2.0) + 3.0
>>> (f.val, f.der)
(5.0, 1.0)
"""
x = Tensor()
if isinstance(other, Tensor):
x._val = self._val + other.val
x._der = self._der + other.der
return x
elif check_number(other) or check_array(other):
x._val = self._val + other
x._der = self._der
return x
else:
raise TypeError('Tensor object can only add a number or a Tensor object')
def __radd__ (self, other):
"""
Overload the addition and make it commutable
EXAMPLES
==========
>>> f = 3.0 + Tensor(2.0)
>>> (f.val, f.der)
(5.0, 1.0)
"""
return self.__add__(other)
def __sub__ (self, other):
"""
Overload the substraction
EXAMPLES
==========
>>> f = Tensor(2.0) - 3.0
>>> (f.val, f.der)
(-1.0, 1.0)
"""
x = Tensor()
try:
x._val = self._val - other.val
x._der = self._der- other.der
return x
except:
if check_number(other) or check_array(other):
x._val = self._val - other
x._der = self._der
return x
else:
raise TypeError('Tensor object can only multiply with Tensor or number')
def __rsub__ (self, other):
"""
Overload the substraction and make it commutable
EXAMPLES
==========
>>> f = 3.0 - Tensor(2.0)
>>> (f.val, f.der)
(1.0, -1.0)
"""
return - self.__sub__(other)
def __mul__ (self, other):
"""
Overload the multiplication
EXAMPLES
==========
>>> f = Tensor(2.0) * 3.0
>>> (f.val, f.der)
(6.0, 3.0)
"""
x =Tensor()
if isinstance(other, Tensor):
x._val = self._val * other.val
x._der = self._der * other.val + self._val * other.der
return x
elif check_number(other) or check_array(other):
x._val = self._val * other
x._der = self._der * other
return x
else:
raise TypeError('Tensor object can only multiply with Tensor or number')
def __rmul__ (self, other):
"""
Overload the multiplication and make it commutable
EXAMPLES
==========
>>> f = 3.0 * Tensor(2.0)
>>> (f.val, f.der)
(6.0, 3.0)
"""
return self.__mul__(other)
def __truediv__ (self, other):
"""
Overload the division, input denominator cannot include zero. Otherwise raise ValueError.
EXAMPLES
==========
>>> f = Tensor(2.0)/2.0
>>> (f.val, f.der)
(1.0, 0.5)
"""
x = Tensor()
if (check_number(other) and other == 0) or\
(isinstance(other, Tensor) and check_anyzero(other.val)) or \
(check_array(other) and check_anyzero(other)):
raise ZeroDivisionError('The Tensor is divided by 0')
if isinstance(other, Tensor):
x._val = self._val/ other.val
x._der = (self._der*other.val - self._val*other.der)/(other.val*other.val)
return x
elif check_number(other) or check_array(other):
x._val = self._val / other
x._der = self._der / other
return x
else:
raise TypeError('Tensor can only be divided by a number or a Tensor object')
def __rtruediv__ (self, other):
"""
Overload the division, and make it commutable. Input denominator cannot include zero, otherwise raise ValueError.
EXAMPLES
==========
>>> f = 2.0/Tensor(2.0)
>>> (f.val, f.der)
(1.0, -0.5)
"""
x = Tensor()
if check_anyzero(self._val):# a/tensor(0)
raise ZeroDivisionError('The Tensor object in denominator should not be zero.')
# if isinstance(other, Tensor):
# x._val = other.val/ self._val
# x._der = (self._val*other.der - self._der*other.val)/(self._val*self._val)
# return x
if check_number(other) or check_array(other):
x._val = other / self._val
x._der = -other * self._der / (self._val * self._val)
return x
else:
raise TypeError('Only an numpy array or number can be divided by Tensor.')
def __pow__ (self, other):
"""
Overload the power method
EXAMPLES
==========
>>> f = Tensor(2.0)**3
>>> (f.val, f.der)
(8.0, 12.0)
"""
x = Tensor()
if isinstance(other, Tensor): # x**a -> a*x**(a-1)
if (other.val > 0).all():
x._val = self._val ** other.val
x._der = (self._val ** other.val) * (other.der * np.log (self._val) + other.val * self._der/ self._val)
return x
# elif (self._val == 0 and other.val <1).any():
# raise ZeroDivisionError('the base cannot be 0 when power is negative')
else:
raise ValueError('log function undefined for exponent <= 0')
elif check_number(other) or (check_array(other) and len(other) == 1):
if other == 0:
x._val = 1
x._der = 0
return x
elif (self._val == 0).any() and other <1:
raise ZeroDivisionError('the base cannot be 0 when power is negative')
else:
other = float(other) #convert to float first
x._val = self._val** other
x._der = other * self._val ** (other - 1) * self._der
return x
else:
raise TypeError('Tensor base can only be operated with a Tensor object or a number/np.ndarray')
def __rpow__ (self, other):
"""
Overload the power method and make it commutable.
EXAMPLES
==========
>>> f = 3**Tensor(2.0)
>>> (f.val, f.der)
(9.0, 9.887510598012987)
"""
x = Tensor()
if check_number(other) or (check_array(other) and len(other) == 1):
if other <= 0:
raise ValueError('log function undefined for exponent <= 0')
else:
x._val = other ** self._val
x._der = (other ** self._val) * (self._der * np.log(other))
return x
else:
raise TypeError('Tensor base can only be operated with a Tensor object or a number/np.ndarray')
def __neg__ (self):
"""
Overload the negation method.
EXAMPLES
==========
>>> f = -Tensor(2.0)
>>> (f.val, f.der)
(-2.0, -1.0)
"""
x = Tensor()
x._val = -self._val
x._der = -self._der
return x
# Alice added functions
def __lt__(self, other):
try:
return self._val < other.val
except: # other is a scaler
return self._val < other
def __le__(self, other):
try:
return self._val <= other.val
except: # other is a scaler
return self._val <= other
def __gt__(self, other):
try:
return self._val > other.val
except: # other is a scaler
return self._val > other
def __ge__(self, other):
try:
return self._val >= other.val
except: # other is a scaler
return self._val >= other
def __eq__(self, other):
if not isinstance(other, Tensor):
raise TypeError('Tensor object can only be compared with Tensor object')
return (self._val == other.val).all()
def __ne__(self, other):
return not self.__eq__(other).all()
def __abs__(self):
# only used for calculation
return abs(self._val)
def __str__(self):
"""
Examples
================
>>> c = tensor.Tensor(3.0)
>>> print(c)
Tensor(3.0)
"""
return f"Tensor({self._val.tolist()})"
def __repr__(self):
"""
Examples
================
>>> c = tensor.Tensor(3.0)
>>> repr(c)
'Tensor: val(3.0), der(1.0)'
"""
return f"Tensor: val({self._val.tolist()}), der({self._der.tolist()})"
def __len__(self):
return len(self._val) | AD-Derivators | /AD_Derivators-0.0.2-py3-none-any.whl/AD_Derivators/functions/tensor.py | tensor.py |
import numpy as np
from AD_Derivators.functions.tensor import Tensor
from AD_Derivators.helper_functions.ad_utils import check_array, check_number, check_list, check_list_shape
class AD:
def __init__(self,num_functions, num_inputs, vec_dim = 1):
"""
Initializes the AD object with Tensor inputs and AD mode.
Args:
============
num_functions (int): number of functions
        num_inputs (int): number of inputs in each function
vec_dim: the length of each argument of functions
ATTRIBUTES:
============
self.inputs (list of Tensor): a list of Tensor objects.
self.function (function): the list of functions for automatic differentiation
self.jacobian (np.ndarray): the jacobian matrix of the inputs given self.function
self.jvp (np.ndarray): the value of automatic differentiation given inputs,
self.functions at a given direction p
self.num_functions (int): the number of functions
self.num_inputs (int): the number of inputs for each function. All should be the same.
self.shape (tuple): (self.num_functions, self.num_inputs, vec_dim). All vector Tensor should have the same length.
"""
self._num_func = num_functions
self._num_inputs = num_inputs
self._vec_dim = vec_dim
self._inputs = None
self._func = None
self._jacobian = None
self._jvp = None
@property
def num_functions(self):
return self._num_func
@property
def num_inputs(self):
return self._num_inputs
def _prepare_inputs(self, mat_inputs):
"""
This function helps user to prepare inputs of AD class by
giving a list
Args:
=======================
mat_inputs (list or np.ndarray): a. a 2-d (m x n) list for AD class with m functions, each having n inputs
b. a 3-d (m x n x v) list for class with m functions, each having n inputs and each input have a Tensor in length of v
Returns:
========================
res: a 2-d (m x n) list of Tensor
"""
if isinstance(mat_inputs, list):
mat_inputs = np.array(mat_inputs)
assert self._check_shape(mat_inputs)
res = []
for m in range(self.num_functions):
inp = []
for n in range(self.num_inputs):
inp.append(Tensor(mat_inputs[m,n]))
res.append(inp)
return res
def add_inputs(self, inputs):
"""
Add inputs for the class. The dimension of inputs should match self.shape.
Would update self.inputs
Args:
=================================
inputs(list or np.ndarray): a 2-d or 3-d array. The dimension should match self.shape.
"""
# always convert list to np.array first
if isinstance(inputs,list):
inputs = np.array(inputs)
# check the dimension
assert self._check_shape(inputs)
self._inputs = self._prepare_inputs(inputs)
self._jacobian = None # reset jacobian function
def build_function(self, input_functions):
""" Calculates the jacobian matrix given the input functions.
!!! No Tensor objects should be used in input_function
unless it's the input variable
would update self.functions and self.jacobian and erase self.jvp
Args
=========================
input_functions (list): a list of m functions. each function have n inputs. Each input could
be either a scaler or a vector. Each function should have a return vector or scalar with the
same dimension as each input of the functions.
"""
# check list and length
assert isinstance(input_functions, list) and len(input_functions) == self._num_func
# check functions
if all(callable(f) for f in input_functions):
self._func = input_functions
else:
raise TypeError('the input should be a list of callable function')
if self._inputs is None:
raise ValueError('No inputs added to AD class.')
self._jacobian = []
for f, inp in zip(self._func, self._inputs):
devs = []
if self._vec_dim == 1:
const_inp = [t.val[0] for t in inp]
else:
const_inp = [t.val.tolist() for t in inp]
for i,t in enumerate(inp):
input_values = const_inp.copy()
input_values[i] = t # only changes the ith element to be Tensor object.
# calculate partial derivatives
val = f(*input_values)
# check function returns
if not isinstance(val, Tensor):
raise TypeError('The input function should only return a Tensor object')
# if len(tensor) > 1
if self._vec_dim > 1:
devs.append(val.der.tolist())
# if tensor is a scalar
else:
devs.append(val.der[0])
self._jacobian.append(devs)
# jacobian is an np.ndarray (m x n or m x n x v)
self._jacobian = np.array(self._jacobian)
# reset self._jvp
self._jvp = None
@property
def inputs(self):
return self._inputs
def __str__(self):
"""
Examples
================
ad = autograd.AD(tensor.Tensor(2.0))
>>> print(ad)
AD(Tensor([2.0]))
"""
return f"AD(Tensor({[str(tens) for tens in self._inputs]}))"
def __repr__(self):
"""
Examples
================
ad = autograd.AD(tensor.Tensor(2.0))
>>> repr(ad)
'AD: inputs(Tensor([2.0])), function(None)'
"""
return f"AD: inputs({[str(tens) for tens in self._inputs]}), function({str(self._func)})"
@property
def function(self):
return self._func
@property
def jacobian(self):
"""Returns the Jacobian matrix given Tensor inputs and the input functions.
"""
return self._jacobian
@property
def jvp(self):
"""Returns the dot product between the Jacobian of the given
function at the point
"""
return self._jvp
def run(self, seed = [[1.0]]):
"""Returns the differentiation results given the mode.
Right now AD only allows forward mode.
would update self.jvp
INPUTS
=======
        seed (list or np.ndarray): shape == (vec_dim x num_inputs); the direction of differentiation. The array has to be 2-D.
RETURNS
========
results (np.ndarray): shape == (num_func x vec_dim)
"""
return self.__forward(seed)
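    # Hypothetical usage sketch (not part of the original source). Only the AD
    # methods defined in this file are used; the exact Tensor behaviour inside
    # the lambdas is an assumption based on _prepare_inputs/build_function above.
    #
    #   ad = AD(num_functions=1, num_inputs=2, vec_dim=1)
    #   ad.add_inputs([[3.0, 2.0]])              # shape matches ad.shape[:2] == (1, 2)
    #   ad.build_function([lambda x, y: x * y])  # each function must return a Tensor
    #   ad.jacobian                              # np.ndarray of partials, here [[2.0, 3.0]]
    #   ad.run(seed=[[1.0, 0.0]])                # jacobian @ seed.T, shape (num_func, vec_dim)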
def __forward(self, seed = [[1.0]]):
"""Returns the differentiation value of the current graph by forward mode.
INPUTS
=======
seed (list or np.ndarray): 2-d list or np.ndarray:
a. vec_dim == 1: 1 x num_inputs
b. vec_dim > 1: vec_dim x num_inputs
RETURNS
========
self._jvp (np.ndarray): shape == (num_func x vec_dim)
"""
# always convert list to np.array first
if isinstance(seed, list):
seed = np.array(seed)
if isinstance(seed, np.ndarray) and seed.shape == (self._vec_dim, self.num_inputs):
pass
else:
raise TypeError('seed should be a 2-d (vec_dim x num_inputs) list of numbers ')
self._jvp = [email protected]
assert self._jvp.shape == (self._num_func, self._vec_dim)
return self._jvp
@property
def shape(self):
return (self._num_func, self._num_inputs, self._vec_dim)
def get_inputs(self, option = "numpy"):
"""
option (str): "numpy" or "tensor"
        Returns:
===============
if option == "numpy": returns the np.ndarray format inputs shape: (num_function, num_inputs, vec_dim)
elif option == "tensor":returns the same 2d Tensor list as calling self.inputs.
"""
if option == "tensor":
return self._inputs
elif option == "numpy":
output = []
for m in range(self.num_functions):
vec = []
for n in range(self.num_inputs):
vec.append(self._inputs[m][n].val)
output.append(vec)
return np.array(output)
else:
raise IOError("The option should be either numpy or tensor")
def _check_shape(self, array):
"""
array(np.ndarray): a 2d or 3d shape np.array
"""
flag = False
if isinstance(array, np.ndarray) and len(array.shape) ==2 and array.shape == self.shape[:2]:
flag = True
elif isinstance(array, np.ndarray) and len(array.shape) == 3 and array.shape == self.shape:
flag = True
return flag | AD-Derivators | /AD_Derivators-0.0.2-py3-none-any.whl/AD_Derivators/functions/autograd.py | autograd.py |
cs107 Final Project - Group 28: Byte Me
=====
Members
-----
- Luo, Yanqi
- Pasco, Paolo
- Ayouch, Rayane
- Clifton, Logan
## TravisCI badge
[](https://app.travis-ci.com/cs107-byteme/cs107-FinalProject)
## Codecov badge
[](https://codecov.io/gh/cs107-byteme/cs107-FinalProject)
## Broader Impact and Inclusivity Statement
Our automatic differentiation library can lead to a number of benefits in a variety of fields. Since automatic differentiation can be used in fields like physics, statistics, and machine learning, our package has the potential to accelerate advances in those fields by facilitating the key process of automatic differentiation. One potential impact to be wary of is the fact that while the package can be used for many important scientific methods, we don't have much control over how the package is used on the user end. This means that the burden is on the user to use our package responsibly (i.e., if the user is using this package for machine learning, it's important to check that the input data is free of biases or error. Otherwise, this package will be used to perpetuate the biases present in the training data). Ultimately, this package is a tool, and like all tools, it's helpful for its presumed purpose but can be misused.
In addition, we've worked to make this package as accessible and easy to understand as possible, making its use approachable for anyone regardless of programming/mathematical experience. Our workflow for this project has also been inclusive—all pull requests are reviewed by other members of the team, giving everyone a hand in contributing to the code. The biggest barrier to access (both on the contributing side and the implementation side) is getting started. While we've attempted to document the code and make it intuitive enough that any user can get started with our package, the fact remains that the steps needed to install/start contributing to the package (creating a virtual environment, or getting started with git/test suites/version control), while not impossible to learn, can be intimidating to someone without a certain level of programming experience. Our hope is that these steps are easy enough to research that this intimidation can be overcome, but we do recognize that for underrepresented populations without as much access to CS education, an intimidating first step can be enough to dissuade someone from pursuing a project further. | AD-byteme | /AD-byteme-0.1.5.tar.gz/AD-byteme-0.1.5/README.md | README.md |
# CS207 Final Project Repository
[](https://travis-ci.com/cs207-f18-WIRS/cs207-FinalProject)
[](https://coveralls.io/github/cs207-f18-WIRS/cs207-FinalProject?branch=master)
This repository contains the Final Project Deliverable on Automatic Differentiation for the Harvard Course:
"CS 207: Systems Development for Computational Science"
- [```Github Repository```](https://github.com/cs207-f18-WIRS/cs207-FinalProject)
- [```PyPi Python Package Index distribution: 'AD-cs207'```](https://pypi.org/project/AD-cs207/)
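
A minimal forward-mode example, assuming the PyPI distribution listed above is installed (e.g. with `pip install AD-cs207`) and that the `FD` dual-number class is importable from the package's `AD.for_ad` module as in this repository's sources:

```python
from AD.for_ad import FD

x = FD("x", 2.0, 1.0)       # variable name, value, seed derivative
y = x * x + 3.0             # overloaded operators propagate dual numbers
print(y.value, y.grad())    # 7.0 4.0
```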
## Documentation can be found at [```docs```](https://github.com/cs207-f18-WIRS/cs207-FinalProject/tree/master/docs):
- [```docs/Final.ipynb:```](https://github.com/cs207-f18-WIRS/cs207-FinalProject/blob/master/docs/Final.ipynb) Automatic Differentiation package documentation.
- [```demo/Demo and Presentation Project CS207.ipynb:```](https://github.com/cs207-f18-WIRS/cs207-FinalProject/blob/master/demo/Demo%20and%20Presentation%20Project%20CS207.ipynb) How to install the Automatic Differentiation package, demo and final presentation.
- [```docs/How-to-package.md:```](https://github.com/cs207-f18-WIRS/cs207-FinalProject/blob/master/docs/How-to-package.md) Explanation of how the package was distributed.
- [```Course Project description```](https://iacs-cs-207.github.io/cs207-F18/project.html) : Overview of the instructions for the project.
## Course information:
- [```Main course website```](https://iacs-cs-207.github.io/cs207-F18/) : Check this site for all course-related policies including the syllabus, course schedule, and project policies.
- [```GitHub Repo```](https://github.com/IACS-CS-207/cs207-F18) : All course materials will be released on GitHub.
## Contributors (alphabetic):
- FELDHAUS Isabelle
- JIANG Shenghao
- STRUYVEN Robbert
- WANG William
| AD-cs207 | /AD-cs207-1.0.0.tar.gz/AD-cs207-1.0.0/README.md | README.md |
import math
class FD:
""" implementation of forward AD using dual numbers """
def __init__(self, string, value, d_seed):
self.value = value
self.dual = d_seed
self.string = string
def __str__(self):
""" returns the string value of the function """
return self.string
def grad(self):
""" returns the derivative of the function based on seed """
return self.dual
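    # Hypothetical seeding sketch (not part of the original source): with several
    # inputs, the d_seed argument selects the direction of differentiation, e.g.
    #
    #   x = FD("x", 2.0, 1.0)   # seed 1.0 -> differentiate with respect to x
    #   y = FD("y", 3.0, 0.0)
    #   f = x * y
    #   f.grad()                # dual part = 3.0, i.e. df/dx at (2, 3)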
""" Implementation of operators using operator overloading """
def __add__(self, other):
n = str(self) + "+" + str(other)
if not isinstance(other, FD):
z = FD(n, self.value + other, self.dual)
return z
z = FD(n, self.value + other.value, self.dual + other.dual)
return z
def __radd__(self, other):
return self.__add__(other)
def __sub__(self, other):
n = "(" + str(self) + ")" + "-(" + str(other) + ")"
if not isinstance(other, FD):
z = FD(n, self.value - other, self.dual)
return z
z = FD(n, self.value - other.value, self.dual - other.dual)
return z
def __rsub__(self, other):
n = str(other) + "-(" + str(self) + ")"
z = FD(n, other - self.value, -self.dual)
return z
def __mul__(self, other):
n = "(" + str(self) + ")" + "*(" + str(other) + ")"
if not isinstance(other, FD):
z = FD(n, self.value * other, self.dual*other)
return z
z = FD(n, self.value * other.value, self.value*other.dual + self.dual*other.value)
return z
def __rmul__(self, other):
return self.__mul__(other)
def __truediv__(self, other):
n = "(" + str(self) + ")" + "/(" + str(other) + ")"
if not isinstance(other, FD):
z = FD(n, self.value / other, self.dual/other)
return z
z = FD(n, self.value / other.value, (other.value*self.dual - self.value*other.dual)/(other.value**2))
return z
def __rtruediv__(self, other):
n = str(other) + "/" + str(self)
z = FD(n, other / self.value, -other*self.dual / self.value**2)
return z
def __pow__(self, other):
n = "POW(" + str(self) + "," + str(other) + ")"
if not isinstance(other, FD):
z = FD(n, self.value ** other, other*self.value**(other-1)*self.dual)
return z
nd = (self.value**other.value) * ((other.value/self.value*self.dual) + (other.dual * math.log(self.value)))
z = FD(n, self.value ** other.value, nd)
return z
def __rpow__(self, other):
n = "POW(" + str(other) + "," + str(self) + ")"
z = FD(n, other ** self.value, self.dual*math.log(other)*other**self.value)
return z
""" implement unary operations for forward div """
def sin(x):
if not isinstance(x, FD):
return math.sin(x)
n = "SIN(" + str(x) + ")"
z = FD(n, math.sin(x.value), x.dual*math.cos(x.value))
return z
def cos(x):
if not isinstance(x, FD):
return math.cos(x)
n = "COS(" + str(x) + ")"
z = FD(n, math.cos(x.value), -x.dual*math.sin(x.value))
return z
def tan(x):
if not isinstance(x, FD):
return math.tan(x)
n = "TAN(" + str(x) + ")"
z = FD(n, math.tan(x.value), x.dual/(math.cos(x.value)**2))
return z
def ln(x):
if not isinstance(x, FD):
return math.log(x)
n = "ln(" + str(x) + ")"
z = FD(n, math.log(x.value), x.dual/x.value)
return z
def log(x, base):
if not isinstance(x, FD):
return math.log(x,base)
n = "log(" + str(x) + ")/log(" + str(base) + ")"
z = FD(n, math.log(x.value)/math.log(base), x.dual/(x.value*math.log(base)) )
return z
def arcsin(x):
if not isinstance(x, FD):
return math.asin(x)
n = "arcsin(" + str(x) + ")"
z = FD(n, math.asin(x.value), x.dual/math.sqrt(1.0-x.value**2))
return z
def arccos(x):
if not isinstance(x, FD):
return math.acos(x)
n = "arccos(" + str(x) + ")"
z = FD(n, math.acos(x.value), -1.0*x.dual/math.sqrt(1.0-x.value**2))
return z
def arctan(x):
if not isinstance(x, FD):
return math.atan(x)
n = "arctan(" + str(x) + ")"
z = FD(n, math.atan(x.value), x.dual/(1.0+x.value**2))
return z
def sinh(x):
if not isinstance(x, FD):
return math.sinh(x)
n = "sinh(" + str(x) + ")"
z = FD(n, math.sinh(x.value), x.dual*math.cosh(x.value))
return z
def cosh(x):
if not isinstance(x, FD):
return math.cosh(x)
n = "cosh(" + str(x) + ")"
z = FD(n, math.cosh(x.value), x.dual*math.sinh(x.value))
return z
def tanh(x):
if not isinstance(x, FD):
return math.tanh(x)
n = "tanh(" + str(x) + ")"
z = FD(n, math.tanh(x.value), x.dual*(1.0-math.tanh(x.value)**2))
return z
def sqrt(x):
if not isinstance(x, FD):
return math.sqrt(x)
n = "sqrt(" + str(x) + ")"
z = FD(n, math.sqrt(x.value), 0.5*x.dual/math.sqrt(x.value) )
return z | AD-cs207 | /AD-cs207-1.0.0.tar.gz/AD-cs207-1.0.0/AD/for_ad.py | for_ad.py |
import math
print('Importing Reverse Mode Automatic Differentiation')
class Var:
"""
Var class used for reverse AD
If derivative doesn't exist at a point for one of the variables, forward AD should be used
"""
def __init__(self, name, value):
self.value = value
self.children = []
self.grad_value = None
self.name = name
def __str__(self):
""" returns the string of the formula """
return self.name
def grad(self):
""" returns the gradient of the formula with respect to the variable """
if self.grad_value is None:
self.grad_value = sum(val * var.grad() for val, var in self.children)
return self.grad_value
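    # Hypothetical usage sketch (not part of the original file). The output
    # node's grad_value must be seeded to 1.0 before querying the leaves:
    #
    #   x = Var('x', 2.0)
    #   y = Var('y', 3.0)
    #   z = x * y + x          # builds the expression graph via the overloads below
    #   z.grad_value = 1.0     # seed the output adjoint
    #   x.grad()               # dz/dx = y + 1 = 4.0
    #   y.grad()               # dz/dy = x = 2.0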
def __add__(self, other):
""" adds the vars, returns a new formula and appens children to the variables """
n = str(self) + "+" + str(other)
if not isinstance(other, Var):
z = Var(n, self.value + other)
self.children.append((1.0, z))
return z
z = Var(n, self.value + other.value)
other.children.append((1.0, z))
self.children.append((1.0, z))
return z
def __radd__(self, other):
return self.__add__(other)
def __sub__(self, other):
""" subtracts the vars, returns a new formula and appens children to the variables """
n = "(" + str(self) + ")" + "-(" + str(other) + ")"
if not isinstance(other, Var):
z = Var(n, self.value - other)
self.children.append((1.0, z))
return z
z = Var(n, self.value - other.value)
self.children.append((1.0, z))
other.children.append((-1.0, z))
return z
def __rsub__(self, other):
n = str(other) + "-(" + str(self) + ")"
z = Var(n, other - self.value)
self.children.append((-1.0, z))
return z
def __mul__(self, other):
""" multiply the vars, returns a new formula and appens children to the variables """
n = "(" + str(self) + ")" + "*(" + str(other) + ")"
if not isinstance(other, Var):
z = Var(n, self.value * other)
self.children.append((other, z))
return z
z = Var(n, self.value * other.value)
self.children.append((other.value, z))
other.children.append((self.value, z))
return z
def __rmul__(self, other):
return self.__mul__(other)
def __truediv__(self, other):
""" divides the vars, returns a new formula and appens children to the variables """
n = "(" + str(self) + ")" + "/(" + str(other) + ")"
if not isinstance(other, Var):
z = Var(n, self.value / other)
self.children.append((1/other, z))
return z
z = Var(n, self.value / other.value)
self.children.append((1/other.value, z))
other.children.append((-self.value/other.value**2, z))
return z
def __rtruediv__(self, other):
n = str(other) + "/" + str(self)
z = Var(n, other / self.value)
self.children.append((-other/self.value**2, z))
return z
def __pow__(self, other):
""" exponentiates the vars, returns a new formula and appens children to the variables """
n = "POW(" + str(self) + "," + str(other) + ")"
if not isinstance(other, Var):
z = Var(n, self.value ** other)
self.children.append((other*self.value**(other-1), z))
return z
z = Var(n, self.value ** other.value)
self.children.append((other.value*self.value**(other.value-1), z))
other.children.append((math.log(self.value)*self.value**other.value,z))
return z
def __rpow__(self, other):
n = "POW(" + str(other) + "," + str(self) + ")"
z = Var(n, other ** self.value)
self.children.append((math.log(other)*other**self.value,z))
return z
def sin(x):
""" calculates sin of the formula/var x """
if not isinstance(x, Var):
return math.sin(x)
n = "sin(" + str(x) + ")"
z = Var(n, math.sin(x.value))
x.children.append((math.cos(x.value), z))
return z
def cos(x):
""" calculates cos of the formula/var x """
if not isinstance(x, Var):
return math.cos(x)
n = "cos(" + str(x) + ")"
z = Var(n, math.cos(x.value))
x.children.append((-math.sin(x.value), z))
return z
def tan(x):
""" calculates tan of the formula/var x """
if not isinstance(x, Var):
return math.tan(x)
n = "tan(" + str(x) + ")"
z = Var(n, math.tan(x.value))
x.children.append((1.0+math.tan(x.value)**2, z))
return z
def sqrt(x):
""" calculates sqrt of the formula/var x """
if not isinstance(x, Var):
return math.sqrt(x)
n = "sqrt(" + str(x) + ")"
z = Var(n, math.sqrt(x.value))
x.children.append((0.5/(x.value)**0.5, z))
return z
def ln(x):
""" calculates ln of the formula/var x """
if not isinstance(x, Var):
return math.log(x)
n = "ln(" + str(x) + ")"
z = Var(n, math.log(x.value))
x.children.append((1.0/x.value, z))
return z
def log(x, base):
""" calculates log(x, base) of the formula/var x """
if not isinstance(x, Var):
return math.log(x)/math.log(base)
n = "ln(" + str(x) + ")/ln(" + str(base) + ")"
z = Var(n, math.log(x.value,base))
x.children.append((1.0/(x.value*math.log(base)), z))
return z
def arcsin(x):
""" calculates arcsin of the formula/var x """
if not isinstance(x, Var):
return math.asin(x)
n = "arcsin(" + str(x) + ")"
z = Var(n, math.asin(x.value))
x.children.append((1.0/math.sqrt(1.0-x.value**2), z))
return z
def arccos(x):
""" calculates arccos of the formula/var x """
if not isinstance(x, Var):
return math.acos(x)
n = "arccos(" + str(x) + ")"
z = Var(n, math.acos(x.value))
x.children.append((-1.0/math.sqrt(1.0-x.value**2), z))
return z
def arctan(x):
""" calculates arctan of the formula/var x """
if not isinstance(x, Var):
return math.atan(x)
n = "arctan(" + str(x) + ")"
z = Var(n, math.atan(x.value))
x.children.append((1.0/(1.0+x.value**2), z))
return z
def sinh(x):
""" calculates sinh of the formula/var x """
if not isinstance(x, Var):
return math.sinh(x)
n = "sinh(" + str(x) + ")"
z = Var(n, math.sinh(x.value))
x.children.append((math.cosh(x.value), z))
return z
def cosh(x):
""" calculates cosh of the formula/var x """
if not isinstance(x, Var):
return math.cosh(x)
n = "cosh(" + str(x) + ")"
z = Var(n, math.cosh(x.value))
x.children.append((math.sinh(x.value), z))
return z
def tanh(x):
""" calculates tanh of the formula/var x """
if not isinstance(x, Var):
return math.tanh(x)
n = "tanh(" + str(x) + ")"
z = Var(n, math.tanh(x.value))
x.children.append((1.0-math.tanh(x.value)**2, z))
return z | AD-cs207 | /AD-cs207-1.0.0.tar.gz/AD-cs207-1.0.0/AD/rev_ad.py | rev_ad.py |
import AD.interpreter as ast
import sympy
class SD():
"""
User friendly interface for the AST interpreter.
"""
def __init__(self, frmla):
self.formula = frmla
self.lexer = ast.Lexer(frmla)
self.parser = ast.Parser(self.lexer)
self.interpreter = ast.Interpreter(self.parser)
self.vd = None
def set_point(self, vd):
"""
sets the point to derive at
"""
if vd is not None:
self.vd = vd
if self.vd is None:
raise NameError("Must set point to evaluate")
def diff(self, dv, vd=None, order=1):
"""
returns numeric derivative with respect to variable dv
vd is used to set a new point
order is the order of the derivative to take
"""
self.set_point(vd)
new_interpreter = self.interpreter
for i in range(order-1):
new_frmla = new_interpreter.symbolic_diff(self.vd, dv)
new_lexer = ast.Lexer(new_frmla)
new_parser = ast.Parser(new_lexer)
new_interpreter = ast.Interpreter(new_parser)
return new_interpreter.differentiate(self.vd, dv)
def symbolic_diff(self, dv, vd=None, order=1, output='default'):
"""
returns symbolic derivative with respect to variable dv
vd is used to set a new point
order is the order of the derivative to take
"""
self.set_point(vd)
new_interpreter = self.interpreter
for i in range(order-1):
new_frmla = new_interpreter.symbolic_diff(self.vd, dv)
new_lexer = ast.Lexer(new_frmla)
new_parser = ast.Parser(new_lexer)
new_interpreter = ast.Interpreter(new_parser)
formul = new_interpreter.symbolic_diff(self.vd, dv)
simplified = self.symplify(formul, output)
return simplified
def diff_all(self, vd=None):
"""
returns numeric derivative of all variables
"""
self.set_point(vd)
return self.interpreter.diff_all(self.vd)
def val(self, vd=None):
"""
returns the value of the function at the point
"""
self.set_point(vd)
return self.interpreter.interpret(self.vd)
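    # Hypothetical usage sketch (not part of the original file). The variable
    # dictionary string follows the "x:3, y:2" format expected by AD.interpreter:
    #
    #   f = SD("POW(x,2) * y")
    #   f.val("x:3, y:2")           # value at the point: 18.0
    #   f.diff("x", "x:3, y:2")     # numeric d/dx = 2*x*y = 12.0
    #   f.diff_all("x:3, y:2")      # {'d_x': 12.0, 'd_y': 9.0}
    #   f.symbolic_diff("x")        # sympy-simplified symbolic derivative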
def new_formula(self, frmla):
"""
sets a new formula for the object
"""
self.formula = frmla
self.lexer = ast.Lexer(frmla)
self.parser = ast.Parser(self.lexer)
self.interpreter = ast.Interpreter(self.parser)
self.vd = None
def symplify(self, formul, output):
""" simplifies a formula string, output changes output format """
def POW(a, b):
return a ** b
def EXP(a):
return sympy.exp(a)
def LOG(a):
return sympy.log(a)
def COS(a):
return sympy.cos(a)
def SIN(a):
return sympy.sin(a)
def TAN(a): # Tangent Function
return sympy.tan(a)
        def SINH(a): # Hyperbolic sine
            return sympy.sinh(a)
        def COSH(a): # Hyperbolic cosine
            return sympy.cosh(a)
        def TANH(a): # Hyperbolic tangent
            return sympy.tanh(a)
def ARCSIN(a): # Inverse trigonometric functions: inverse sine or arcsine
return sympy.asin(a)
def ARCCOS(a): # Inverse trigonometric functions: inverse cosine or arccosine
return sympy.acos(a)
def ARCTAN(a): # Inverse trigonometric functions: inverse tangent or arctangent
return sympy.atan(a)
string_for_sympy=""
string_for_sympy2=""
split_variables=self.vd.split(",")
for var in split_variables:
l=var.split(":")
string_for_sympy=string_for_sympy+l[0]+" "
string_for_sympy2=string_for_sympy2+l[0]+", "
exec(string_for_sympy2[:-2] + "= sympy.symbols('" + string_for_sympy+ "')")
if output == 'default':
return sympy.simplify(eval(formul))
if output == 'latex':
return sympy.latex(sympy.simplify(eval(formul)))
if output == 'pretty':
sympy.pprint(sympy.simplify(eval(formul)))
return sympy.simplify(eval(formul))
if output == 'all':
print('\nSymbolic differentiation result:')
print(formul)
print('\nSimplified Pretty Print:\n') ; sympy.pprint(sympy.simplify(eval(formul)))
print('\nSimplified Latex code:')
print(sympy.latex(sympy.simplify(eval(formul))))
print('\nSimplified Default:')
print(sympy.simplify(eval(formul)),'\n')
return sympy.simplify(eval(formul)) | AD-cs207 | /AD-cs207-1.0.0.tar.gz/AD-cs207-1.0.0/AD/symdif.py | symdif.py |
import copy
import math
import unicodedata
###############################################################################
# #
# LEXER #
# #
###############################################################################
# Token types
#
# EOF (end-of-file) token is used to indicate that
# there is no more input left for lexical analysis
INTEGER, PLUS, MINUS, MUL, DIV, LPAREN, RPAREN, EOF, VAR, COS, SIN, EXP,POW, LOG, COMMA, TAN, ARCSIN, ARCCOS, ARCTAN, SINH, COSH, TANH = (
'INTEGER', 'PLUS', 'MINUS', 'MUL', 'DIV', '(', ')', 'EOF', 'VAR', 'COS', 'SIN', 'EXP', 'POW', 'LOG', ',', 'TAN', 'ARCSIN', 'ARCCOS', 'ARCTAN', 'SINH', 'COSH', 'TANH'
)
def is_number(s):
""" checks if passed in variable is a float """
try:
float(s)
return True
except:
pass
return False
# Inputted strings are broken down into tokens by Lexer
class Token(object):
def __init__(self, type, value):
self.type = type
self.value = value
# Lexer takes a string and parses it into tokens
class Lexer(object):
def __init__(self, text):
# client string input, e.g. "4 + 2 * 3 - 6 / 2"
self.text = text
# self.pos is an index into self.text
self.pos = 0
self.current_char = self.text[self.pos]
def error(self):
raise NameError('Invalid character')
def advance(self):
"""Advance the `pos` pointer and set the `current_char` variable."""
self.pos += 1
if self.pos > len(self.text) - 1:
self.current_char = None # Indicates end of input
else:
self.current_char = self.text[self.pos]
def skip_whitespace(self):
""" Skips any spaces """
while self.current_char is not None and self.current_char.isspace():
self.advance()
def integer(self):
"""Return a (multidigit) float consumed from the input."""
index = 1
cur = self.text[self.pos:self.pos+index]
while(True):
rem = len(self.text) - self.pos - index
if rem > 2:
a = cur + self.text[self.pos+index:self.pos+index+1]
b = cur + self.text[self.pos+index:self.pos+index+2]
c = cur + self.text[self.pos+index:self.pos+index+3]
elif rem > 1:
a = cur + self.text[self.pos+index:self.pos+index+1]
b = cur + self.text[self.pos+index:self.pos+index+2]
c = None
elif rem > 0:
a = cur + self.text[self.pos+index:self.pos+index+1]
b = None
c = None
else:
while index > 0:
self.advance()
index -= 1
return float(cur)
if is_number(c):
# handles 1e-1
cur = c
index += 3
elif is_number(b):
# handles 1e1 / 1.1
cur = b
index += 2
elif is_number(a):
cur = a
index += 1
else:
while index > 0:
self.advance()
index -= 1
return float(cur)
def word(self):
"""Return a multichar integer consumed from the input."""
result = ''
while self.current_char is not None and self.current_char.isalpha():
result += self.current_char
self.advance()
return result
def get_next_token(self):
"""Lexical analyzer (also known as scanner or tokenizer)
This method is responsible for breaking a sentence
apart into tokens. One token at a time.
"""
while self.current_char is not None:
if self.current_char.isspace():
self.skip_whitespace()
continue
if self.current_char.isdigit() or self.current_char == ".":
return Token(INTEGER, self.integer())
            # parse words: constants and function names
if self.current_char.isalpha():
wo = self.word()
w = wo.upper()
if(w == "E"):
return Token(INTEGER, math.e)
elif(w == "PI"):
return Token(INTEGER, math.pi)
elif(w == "COS"):
return Token(COS, self.word())
elif(w == "SIN"):
return Token(SIN, self.word())
elif(w == "EXP"):
return Token(EXP, self.word())
elif(w == "POW"):
return Token(POW, self.word())
elif(w == "LOG"):
return Token(LOG, self.word())
elif(w == "TAN"):
return Token(TAN, self.word())
elif(w == "ARCSIN"):
return Token(ARCSIN, self.word())
elif(w == "ARCCOS"):
return Token(ARCCOS, self.word())
elif(w == "ARCTAN"):
return Token(ARCTAN, self.word())
elif(w == "SINH"):
return Token(SINH, self.word())
elif(w == "COSH"):
return Token(COSH, self.word())
elif(w == "TANH"):
return Token(TANH, self.word())
else:
return Token(VAR, wo)
if self.current_char == '+':
self.advance()
return Token(PLUS, '+')
if self.current_char == '-':
self.advance()
return Token(MINUS, '-')
if self.current_char == '*':
self.advance()
return Token(MUL, '*')
if self.current_char == '/':
self.advance()
return Token(DIV, '/')
if self.current_char == '(':
self.advance()
return Token(LPAREN, '(')
if self.current_char == ')':
self.advance()
return Token(RPAREN, ')')
if self.current_char == ',':
self.advance()
return Token(COMMA, ',')
self.error()
return Token(EOF, None)
###############################################################################
# #
# PARSER #
# #
###############################################################################
# AST objects combine tokens into an abstract syntax tree
class AST(object):
pass
class BinOp(AST):
def __init__(self, left, op, right):
self.left = left
self.token = self.op = op
self.right = right
class Num(AST):
def __init__(self, token):
self.token = token
self.value = token.value
class Var(AST):
def __init__(self, token):
self.token = token
self.name = token.value
class UnaryOp(AST):
def __init__(self, op, expr):
self.token = self.op = op
self.expr = expr
# parses tokens generated by a lexer to create an abstract syntax tree
class Parser(object):
def __init__(self, lexer):
self.lexer = lexer
# set current token to the first token taken from the input
self.current_token = self.lexer.get_next_token()
def error(self):
raise NameError('Invalid syntax')
def eat(self, token_type):
# compare the current token type with the passed token
# type and if they match then "eat" the current token
# and assign the next token to the self.current_token,
# otherwise raise an exception.
if self.current_token.type == token_type:
self.current_token = self.lexer.get_next_token()
else:
self.error()
# parses "factors" which are defined using the context free grammer in the docstring
def factor(self):
"""factor : (PLUS | MINUS) factor | INTEGER | VAR | LPAREN expr RPAREN"""
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
node = UnaryOp(token, self.factor())
return node
elif token.type == MINUS:
self.eat(MINUS)
node = UnaryOp(token, self.factor())
return node
elif token.type == INTEGER:
self.eat(INTEGER)
return Num(token)
elif token.type == VAR:
self.eat(VAR)
return Var(token)
elif token.type == COS:
self.eat(COS)
self.eat(LPAREN)
x = self.expr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node
elif token.type == SIN:
self.eat(SIN)
self.eat(LPAREN)
x = self.expr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node
elif token.type == EXP:
self.eat(EXP)
self.eat(LPAREN)
x = self.expr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node
elif token.type == POW:
self.eat(POW)
self.eat(LPAREN)
x = self.expr()
self.eat(COMMA)
y = self.expr()
self.eat(RPAREN)
return BinOp(left = x, op = token, right = y)
elif token.type == LOG:
self.eat(LOG)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == TAN:
self.eat(TAN)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == ARCSIN:
self.eat(ARCSIN)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == ARCCOS:
self.eat(ARCCOS)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == ARCTAN:
self.eat(ARCTAN)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == SINH:
self.eat(SINH)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == COSH:
self.eat(COSH)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == TANH:
self.eat(TANH)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == LPAREN:
self.eat(LPAREN)
node = self.expr()
self.eat(RPAREN)
return node
else:
raise NameError('Invalid character')
# parses terms defined with the context free grammar in the docstring
def term(self):
"""term : factor ((MUL | DIV) factor)*"""
node = self.factor()
while self.current_token.type in (MUL, DIV):
token = self.current_token
if token.type == MUL:
self.eat(MUL)
elif token.type == DIV:
self.eat(DIV)
node = BinOp(left=node, op=token, right=self.factor())
return node
# parses exprs defined with the context free grammar in the docstring
def expr(self):
"""
expr : term ((PLUS | MINUS) term)*
term : factor ((MUL | DIV) factor)*
factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN
"""
node = self.term()
while self.current_token.type in (PLUS, MINUS):
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
elif token.type == MINUS:
self.eat(MINUS)
node = BinOp(left=node, op=token, right=self.term())
return node
# parses the lexer to return an abstract syntax tree
def parse(self):
node = self.expr()
if self.current_token.type != EOF:
self.error()
return node
# similar to factor, but returns the symbolic derivative of a factor
def dfactor(self):
"""factor : (PLUS | MINUS) factor | INTEGER | VAR | LPAREN expr RPAREN"""
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
x, dx = self.dfactor()
node = UnaryOp(token, x)
dnode = UnaryOp(token, dx)
return node, dnode
elif token.type == MINUS:
self.eat(MINUS)
x, dx = self.dfactor()
node = UnaryOp(token, x)
dnode = UnaryOp(token, dx)
return node, dnode
elif token.type == INTEGER:
self.eat(INTEGER)
return Num(token), Num(Token(INTEGER, 0))
elif token.type == VAR:
self.eat(VAR)
return Var(token), Var(Token(VAR, "d_" + token.value))
elif token.type == COS:
self.eat(COS)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = UnaryOp(Token(MINUS, "-"), UnaryOp(Token(SIN, "sin"), x)), op=Token(MUL,'*'), right=dx)
elif token.type == SIN:
self.eat(SIN)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = UnaryOp(Token(COS, "cos"), x), op=Token(MUL,'*'), right=dx)
elif token.type == TAN:
self.eat(TAN)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = BinOp(left = Num(Token(INTEGER, 1)), op = Token(PLUS, '+'),right = BinOp(left = UnaryOp(Token(TAN, "tan"), x), op = Token(MUL, '*'), right = UnaryOp(Token(TAN, "tan"), x))), op=Token(MUL,'*'), right = dx)
elif token.type == ARCSIN:
self.eat(ARCSIN)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = BinOp(left = BinOp(left = Num(Token(INTEGER, 1)), op = Token(MINUS, '-'), right = BinOp(left = x, op = Token(MUL, '*'), right = x)), op = Token(POW, 'pow'), right = Num(Token(INTEGER, -0.5))), op=Token(MUL,'*'), right = dx)
elif token.type == ARCCOS:
self.eat(ARCCOS)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, UnaryOp(Token(MINUS, "-"), BinOp(left = BinOp(left = BinOp(left = Num(Token(INTEGER, 1)), op = Token(MINUS, '-'), right = BinOp(left = x, op = Token(MUL, '*'), right = x)), op = Token(POW, 'pow'), right = Num(Token(INTEGER, -0.5))), op=Token(MUL,'*'), right = dx))
elif token.type == ARCTAN:
self.eat(ARCTAN)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = BinOp(left = BinOp(left = Num(Token(INTEGER, 1)), op = Token(PLUS, '+'), right = BinOp(left = x, op = Token(MUL, '*'), right = x)), op = Token(POW, 'pow'), right = Num(Token(INTEGER, -1.0))), op=Token(MUL,'*'), right = dx)
elif token.type == SINH:
self.eat(SINH)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = UnaryOp(Token(COSH, "cosh"), x), op=Token(MUL,'*'), right=dx)
elif token.type == COSH:
self.eat(COSH)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = UnaryOp(Token(SINH, "sinh"), x), op=Token(MUL,'*'), right=dx)
elif token.type == TANH:
self.eat(TANH)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = BinOp(left = Num(Token(INTEGER, 1.0)), op = Token(MINUS, '-'), right = BinOp(left = node,op = Token(MUL, '*'), right = node)), op=Token(MUL,'*'), right=dx)
elif token.type == EXP:
self.eat(EXP)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = node, op=Token(MUL,'*'), right=dx)
elif token.type == POW:
self.eat(POW)
self.eat(LPAREN)
x_cur = copy.deepcopy(self)
x = self.expr()
dx = x_cur.dexpr()
self.eat(COMMA)
y_cur = copy.deepcopy(self)
y = self.expr()
dy = y_cur.dexpr()
self.eat(RPAREN)
node = BinOp(left = x, op = token, right = y)
return node, BinOp(left = node, op = Token(MUL, '*'), right = BinOp(left = BinOp(left = BinOp(left = y, op = Token(DIV,'/'), right = x), op = Token(MUL,'*'), right = dx), op = Token(PLUS, '+'), right = BinOp(left = dy, op = Token(MUL, '*'),right = UnaryOp(Token(LOG, 'LOG'), x))))
elif token.type == LOG:
self.eat(LOG)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = dx, op=Token(DIV,'/'), right=x)
elif token.type == LPAREN:
self.eat(LPAREN)
cur = copy.deepcopy(self)
node = self.expr()
dnode = cur.dexpr()
self.eat(RPAREN)
return node, dnode
else:
raise NameError('Invalid character')
# similar to term, but returns the symbolic derivative of a term
def dterm(self):
"""term : factor ((MUL | DIV) factor)*"""
node, dnode = self.dfactor()
while self.current_token.type in (MUL, DIV):
token = self.current_token
if token.type == MUL:
self.eat(MUL)
elif token.type == DIV:
self.eat(DIV)
rnode, rdnode = self.dfactor()
lowdhi = BinOp(left=dnode, op=Token(MUL,'*'), right=rnode)
hidlow = BinOp(left=node, op=Token(MUL,'*'), right=rdnode)
if token.type == MUL:
# chain rule
dnode = BinOp(left=lowdhi, op=Token(PLUS,'+'), right=hidlow)
node = BinOp(left=node, op=Token(MUL,'*'), right=rnode)
else:
# quotient rule
topnode = BinOp(left=lowdhi, op=Token(MINUS, '-'), right=hidlow)
botnode = BinOp(left=rnode, op=Token(MUL,'*'), right=rnode)
dnode = BinOp(left=topnode, op=Token(DIV,'/'), right=botnode)
node = BinOp(left=node, op=Token(DIV,'/'), right=rnode)
return dnode
# similar to expr, but returns the symbolic derivative of an expr
def dexpr(self):
"""
expr : term ((PLUS | MINUS) term)*
term : factor ((MUL | DIV) factor)*
factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN
"""
dnode = self.dterm()
while self.current_token.type in (PLUS, MINUS):
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
elif token.type == MINUS:
self.eat(MINUS)
dnode = BinOp(left=dnode, op=token, right=self.dterm())
return dnode
# similar to parse, but returns an abstract syntax tree representing the symbolic derivative
def dparse(self):
node = self.dexpr()
if self.current_token.type != EOF:
self.error()
return node
###############################################################################
# #
# INTERPRETER #
# #
###############################################################################
class NodeVisitor(object):
"""
determines the correct visit method for nodes in the abstract syntax tree
visit_ used to evaluate the numeric value of an abstract syntax tree
str_visit_ used to evaluate the string form of an abstract syntax tree
"""
def visit(self, node):
method_name = 'visit_' + type(node).__name__
visitor = getattr(self, method_name, self.generic_visit)
return visitor(node)
def str_visit(self, node):
method_name = 'str_visit_' + type(node).__name__
str_visitor = getattr(self, method_name, self.generic_visit)
return str_visitor(node)
def generic_visit(self, node):
raise Exception('No visit_{} method'.format(type(node).__name__))
class Interpreter(NodeVisitor):
"""
Interpreter utilizes visit_ and str_visit_ to evaluate the abstract syntax tree
"""
def __init__(self, parser):
self.parser = parser
self.dtree = copy.deepcopy(parser).dparse()
self.tree = copy.deepcopy(parser).parse()
def visit_BinOp(self, node):
if node.op.type == PLUS:
return self.visit(node.left) + self.visit(node.right)
elif node.op.type == MINUS:
return self.visit(node.left) - self.visit(node.right)
elif node.op.type == MUL:
return self.visit(node.left) * self.visit(node.right)
elif node.op.type == DIV:
return self.visit(node.left) / self.visit(node.right)
elif node.op.type == POW:
return math.pow(self.visit(node.left), self.visit(node.right))
def str_visit_BinOp(self, node):
if node.op.type == PLUS:
l = self.str_visit(node.left)
r = self.str_visit(node.right)
if l == "0":
return r
if r == "0":
return l
return "(" + l + '+' + r + ")"
elif node.op.type == MINUS:
l = self.str_visit(node.left)
r = self.str_visit(node.right)
if r == "0":
return l
if l == "0":
return "(-" + r + ")"
return "(" + self.str_visit(node.left) + '-' + self.str_visit(node.right) + ")"
elif node.op.type == MUL:
l = self.str_visit(node.left)
r = self.str_visit(node.right)
if l == "0" or r == "0":
return "0"
if l == "1":
return r
if r == "1":
return l
else:
return "(" + l + "*" + r + ")"
elif node.op.type == DIV:
return "(" + self.str_visit(node.left) + '/' + self.str_visit(node.right) + ")"
elif node.op.type == POW:
return 'POW(' + self.str_visit(node.left) + ',' + self.str_visit(node.right) + ')'
def visit_Num(self, node):
return node.value
def str_visit_Num(self, node):
return str(node.value)
def visit_Var(self, node):
if self.vardict is None:
raise NameError("no var dict passed in")
if node.name not in self.vardict:
raise NameError("var {} not in var dict".format(node.name))
return self.vardict[node.name]
def str_visit_Var(self, node):
name = node.name
if name[:2] == "d_":
if self.vardict is None:
raise NameError("no var dict passed in")
if name not in self.vardict:
raise NameError("var {} not in var dict".format(name))
return str(self.vardict[name])
else:
return str(name)
def visit_UnaryOp(self, node):
op = node.op.type
if op == PLUS:
return +self.visit(node.expr)
elif op == MINUS:
return -self.visit(node.expr)
elif op == COS:
return math.cos(self.visit(node.expr))
elif op == SIN:
return math.sin(self.visit(node.expr))
elif op == TAN:
return math.tan(self.visit(node.expr))
elif op == ARCSIN:
return math.asin(self.visit(node.expr))
elif op == ARCCOS:
return math.acos(self.visit(node.expr))
elif op == ARCTAN:
return math.atan(self.visit(node.expr))
elif op == SINH:
return math.sinh(self.visit(node.expr))
elif op == COSH:
return math.cosh(self.visit(node.expr))
elif op == TANH:
return math.tanh(self.visit(node.expr))
elif op == EXP:
return math.exp(self.visit(node.expr))
elif op == LOG:
return math.log(self.visit(node.expr))
def str_visit_UnaryOp(self, node):
op = node.op.type
if op == PLUS:
return "+" + self.str_visit(node.expr)
elif op == MINUS:
return "-" + self.str_visit(node.expr)
elif op == COS:
return "COS(" + self.str_visit(node.expr) + ")"
elif op == SIN:
return "SIN(" + self.str_visit(node.expr) + ")"
elif op == TAN:
return "TAN(" + self.str_visit(node.expr) + ")"
elif op == ARCSIN:
return "ARCSIN(" + self.str_visit(node.expr) + ")"
elif op == ARCCOS:
return "ARCCOS(" + self.str_visit(node.expr) + ")"
elif op == ARCTAN:
return "ARCTAN(" + self.str_visit(node.expr) + ")"
elif op == SINH:
return "SINH(" + self.str_visit(node.expr) + ")"
elif op == COSH:
return "COSH(" + self.str_visit(node.expr) + ")"
elif op == TANH:
return "TANH(" + self.str_visit(node.expr) + ")"
elif op == EXP:
return "EXP(" + self.str_visit(node.expr) + ")"
elif op == LOG:
return "LOG(" + self.str_visit(node.expr) + ")"
def interpret(self, vd=None):
""" numerical evaluation """
self.get_vardict(vd)
tree = self.tree
if tree is None:
return ''
return self.visit(tree)
def differentiate(self, vd=None, dv=None):
""" evaluate numerical derivative, vd is the variable to derive on """
self.get_vardict(vd)
self.get_diffvar(dv)
tree = self.dtree
if tree is None:
return ''
return self.visit(tree)
def symbolic_diff(self, vd=None, dv=None):
""" evaluate symbolic derivative (return a string), vd is the variable to derive on """
original_vd = vd
self.get_vardict(vd)
self.get_diffvar(dv)
tree = self.dtree
if tree is None:
return ''
return self.str_visit(tree)
def diff_all(self, vd=None):
""" returns all partial derivatives """
self.get_vardict(vd)
tree = self.dtree
if tree is None:
return ''
variables = list(self.vardict.keys())
ret = {}
for v in variables:
self.vardict["d_"+v] = 0
for v in variables:
self.vardict["d_"+v] = 1
ret["d_{}".format(v)]=self.visit(tree)
self.vardict["d_"+v] = 0
return ret
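    # Hypothetical usage sketch (not part of the original file):
    #
    #   lexer = Lexer("POW(x,2) + SIN(y)")
    #   parser = Parser(lexer)
    #   interp = Interpreter(parser)
    #   interp.interpret("x:3, y:0")             # 9.0
    #   interp.differentiate("x:3, y:0", "x")    # d/dx = 2*x = 6.0
    #   interp.diff_all("x:3, y:0")              # {'d_x': 6.0, 'd_y': 1.0}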
def get_vardict(self, vd=None):
""" expects vardict to be formatted as x:10, y:20, z:3 """
vdict = {}
if vd is None:
text = input('vardict> ')
if not text:
self.vardict = None
return
else:
text = vd
text = text.replace(" ", "")
for var in text.split(','):
vals = var.split(':')
vdict[str(vals[0])] = float(vals[1])
self.vardict = vdict
return
def get_diffvar(self, dv=None):
""" sets the variable to derive on """
if dv is None:
text = input('d_var> ')
else:
text = dv
text = text.replace(" ", "")
if text not in self.vardict.keys():
raise NameError("d_var not in vardict")
for v in list(self.vardict.keys()):
self.vardict["d_"+v]=0
self.vardict["d_"+text]=1
return
# def main():
# if run as main, can take inputs from command line
# while True:
# try:
# try:
# text = raw_input('spi> ')
# except NameError: # Python3
# text = input('spi> ')
# except EOFError:
# break
# if not text:
# continue
# lexer = Lexer(text)
# parser = Parser(lexer)
# interpreter = Interpreter(parser)
# result = interpreter.differentiate()
# print(result)
# if __name__ == '__main__':
# main()
'''
Based off of the open source tutorial: Let's Build a Simple Interpreter
https://github.com/rspivak/lsbasi/tree/master/part8/python
''' | AD-cs207 | /AD-cs207-1.0.0.tar.gz/AD-cs207-1.0.0/AD/interpreter.py | interpreter.py |
# CS207 Final Project Repository
[](https://travis-ci.com/cs207-f18-WIRS/cs207-FinalProject)
[](https://coveralls.io/github/cs207-f18-WIRS/cs207-FinalProject?branch=master)
This repository contains the Final Project Deliverable on Automatic Differentiation for the Harvard Course CS 207: Systems Development for Computational Science.
## Project information:
- Specific information can be found at `docs/milestone1.md`.
- [Course Project description](https://iacs-cs-207.github.io/cs207-F18/project.html) : Overview of the instructions for the project on automatic differentiation (AD).
## Course information:
- [Main course website](https://iacs-cs-207.github.io/cs207-F18/) : Check this site for all course-related policies including the syllabus, course schedule, and project policies.
- [GitHub Repo](https://github.com/IACS-CS-207/cs207-F18) : All course materials will be released on GitHub.
## Contributors:
- FELDHAUS Isabelle
- JIANG Shenghao
- STRUYVEN Robbert
- WANG William | AD-testing-packaging-CS207 | /AD_testing_packaging_CS207-0.1.5.tar.gz/AD_testing_packaging_CS207-0.1.5/README.md | README.md |
import copy
import math
import unicodedata
###############################################################################
# #
# LEXER #
# #
###############################################################################
# Token types
#
# EOF (end-of-file) token is used to indicate that
# there is no more input left for lexical analysis
INTEGER, PLUS, MINUS, MUL, DIV, LPAREN, RPAREN, EOF, VAR, COS, SIN, EXP,POW, LOG, COMMA = (
'INTEGER', 'PLUS', 'MINUS', 'MUL', 'DIV', '(', ')', 'EOF', 'VAR', 'COS', 'SIN', 'EXP', 'POW', 'LOG', ','
)
class Token(object):
def __init__(self, type, value):
self.type = type
self.value = value
def __str__(self):
"""String representation of the class instance.
Examples:
Token(INTEGER, 3)
Token(PLUS, '+')
Token(MUL, '*')
"""
return 'Token({type}, {value})'.format(
type=self.type,
value=repr(self.value)
)
def __repr__(self):
return self.__str__()
class Lexer(object):
def __init__(self, text):
# client string input, e.g. "4 + 2 * 3 - 6 / 2"
self.text = text
# self.pos is an index into self.text
self.pos = 0
self.current_char = self.text[self.pos]
def error(self):
raise Exception('Invalid character')
def advance(self):
"""Advance the `pos` pointer and set the `current_char` variable."""
self.pos += 1
if self.pos > len(self.text) - 1:
self.current_char = None # Indicates end of input
else:
self.current_char = self.text[self.pos]
def skip_whitespace(self):
while self.current_char is not None and self.current_char.isspace():
self.advance()
def is_number(self, s):
try:
float(s)
return True
except ValueError:
pass
try:
unicodedata.numeric(s)
return True
except (TypeError, ValueError):
pass
return False
def integer(self):
"""Return a (multidigit) integer consumed from the input."""
index = 0
while(self.is_number(self.text[self.pos:len(self.text)-index])==False):
index += 1
number = self.text[self.pos:len(self.text)-index]
index = 0
while(index < len(number)):
self.advance()
index += 1
return float(number)
def word(self):
"""Return a multichar integer consumed from the input."""
result = ''
while self.current_char is not None and self.current_char.isalpha():
result += self.current_char
self.advance()
return result
def get_next_token(self):
"""Lexical analyzer (also known as scanner or tokenizer)
This method is responsible for breaking a sentence
apart into tokens. One token at a time.
"""
while self.current_char is not None:
if self.current_char.isspace():
self.skip_whitespace()
continue
if self.current_char.isdigit():
return Token(INTEGER, self.integer())
if self.current_char.isalpha():
w = self.word()
if(w.upper() == "COS"):
return Token(COS, self.word())
elif(w.upper() == "SIN"):
return Token(SIN, self.word())
elif(w.upper() == "EXP"):
return Token(EXP, self.word())
elif(w.upper() == "POW"):
return Token(POW, self.word())
elif(w.upper() == "LOG"):
return Token(LOG, self.word())
else:
return Token(VAR, w)
if self.current_char == '+':
self.advance()
return Token(PLUS, '+')
if self.current_char == '-':
self.advance()
return Token(MINUS, '-')
if self.current_char == '*':
self.advance()
return Token(MUL, '*')
if self.current_char == '/':
self.advance()
return Token(DIV, '/')
if self.current_char == '(':
self.advance()
return Token(LPAREN, '(')
if self.current_char == ')':
self.advance()
return Token(RPAREN, ')')
if self.current_char == ',':
self.advance()
return Token(COMMA, ',')
self.error()
return Token(EOF, None)
###############################################################################
# #
# PARSER #
# #
###############################################################################
class AST(object):
pass
class BinOp(AST):
def __init__(self, left, op, right):
self.left = left
self.token = self.op = op
self.right = right
class Num(AST):
def __init__(self, token):
self.token = token
self.value = token.value
class Var(AST):
def __init__(self, token):
self.token = token
self.name = token.value
class UnaryOp(AST):
def __init__(self, op, expr):
self.token = self.op = op
self.expr = expr
class Parser(object):
def __init__(self, lexer):
self.lexer = lexer
# set current token to the first token taken from the input
self.current_token = self.lexer.get_next_token()
def error(self):
raise Exception('Invalid syntax')
def eat(self, token_type):
# compare the current token type with the passed token
# type and if they match then "eat" the current token
# and assign the next token to the self.current_token,
# otherwise raise an exception.
if self.current_token.type == token_type:
self.current_token = self.lexer.get_next_token()
else:
self.error()
def factor(self):
"""factor : (PLUS | MINUS) factor | INTEGER | VAR | LPAREN expr RPAREN"""
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
node = UnaryOp(token, self.factor())
return node
elif token.type == MINUS:
self.eat(MINUS)
node = UnaryOp(token, self.factor())
return node
elif token.type == INTEGER:
self.eat(INTEGER)
return Num(token)
elif token.type == VAR:
self.eat(VAR)
return Var(token)
elif token.type == COS:
self.eat(COS)
self.eat(LPAREN)
x = self.expr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node
elif token.type == SIN:
self.eat(SIN)
self.eat(LPAREN)
x = self.expr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node
elif token.type == EXP:
self.eat(EXP)
self.eat(LPAREN)
x = self.expr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node
elif token.type == POW:
self.eat(POW)
self.eat(LPAREN)
x = self.expr()
self.eat(COMMA)
y = self.expr()
self.eat(RPAREN)
return BinOp(left = x, op = token, right = y)
elif token.type == LOG:
self.eat(LOG)
self.eat(LPAREN)
x = self.expr()
self.eat(RPAREN)
return UnaryOp(token, x)
elif token.type == LPAREN:
self.eat(LPAREN)
node = self.expr()
self.eat(RPAREN)
return node
def term(self):
"""term : factor ((MUL | DIV) factor)*"""
node = self.factor()
while self.current_token.type in (MUL, DIV):
token = self.current_token
if token.type == MUL:
self.eat(MUL)
elif token.type == DIV:
self.eat(DIV)
node = BinOp(left=node, op=token, right=self.factor())
return node
def expr(self):
"""
expr : term ((PLUS | MINUS) term)*
term : factor ((MUL | DIV) factor)*
factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN
"""
node = self.term()
while self.current_token.type in (PLUS, MINUS):
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
elif token.type == MINUS:
self.eat(MINUS)
node = BinOp(left=node, op=token, right=self.term())
return node
def parse(self):
node = self.expr()
if self.current_token.type != EOF:
self.error()
return node
def dfactor(self):
"""factor : (PLUS | MINUS) factor | INTEGER | VAR | LPAREN expr RPAREN"""
token = self.current_token
        if token.type == PLUS:
            self.eat(PLUS)
            x, dx = self.dfactor()   # unpack (node, derivative-node) from the recursive call
            return UnaryOp(token, x), UnaryOp(token, dx)
        elif token.type == MINUS:
            self.eat(MINUS)
            x, dx = self.dfactor()
            return UnaryOp(token, x), UnaryOp(token, dx)
elif token.type == INTEGER:
self.eat(INTEGER)
return Num(token), Num(Token(INTEGER, 0))
elif token.type == VAR:
self.eat(VAR)
return Var(token), Var(Token(VAR, "d_" + token.value))
elif token.type == COS:
self.eat(COS)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = UnaryOp(Token(MINUS, "-"), UnaryOp(Token(SIN, "sin"), x)), op=Token(MUL,'*'), right=dx)
elif token.type == SIN:
self.eat(SIN)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = UnaryOp(Token(COS, "cos"), x), op=Token(MUL,'*'), right=dx)
elif token.type == EXP:
self.eat(EXP)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = node, op=Token(MUL,'*'), right=dx)
elif token.type == POW:
self.eat(POW)
self.eat(LPAREN)
x_cur = copy.deepcopy(self)
x = self.expr()
dx = x_cur.dexpr()
self.eat(COMMA)
y_cur = copy.deepcopy(self)
y = self.expr()
dy = y_cur.dexpr()
self.eat(RPAREN)
node = BinOp(left = x, op = token, right = y)
return node, BinOp(left = node, op = Token(MUL, '*'), right = BinOp(left = BinOp(left = BinOp(left = y, op = Token(DIV,'/'), right = x), op = Token(MUL,'*'), right = dx), op = Token(PLUS, '+'), right = BinOp(left = dy, op = Token(MUL, '*'),right = UnaryOp(Token(LOG, 'LOG'), x))))
elif token.type == LOG:
self.eat(LOG)
self.eat(LPAREN)
cur = copy.deepcopy(self)
x = self.expr()
dx = cur.dexpr()
node = UnaryOp(token, x)
self.eat(RPAREN)
return node, BinOp(left = dx, op=Token(DIV,'/'), right=x)
elif token.type == LPAREN:
self.eat(LPAREN)
cur = copy.deepcopy(self)
node = self.expr()
dnode = cur.dexpr()
self.eat(RPAREN)
return node, dnode
def dterm(self):
"""term : factor ((MUL | DIV) factor)*"""
node, dnode = self.dfactor()
while self.current_token.type in (MUL, DIV):
token = self.current_token
if token.type == MUL:
self.eat(MUL)
elif token.type == DIV:
self.eat(DIV)
rnode, rdnode = self.dfactor()
lowdhi = BinOp(left=dnode, op=Token(MUL,'*'), right=rnode)
hidlow = BinOp(left=node, op=Token(MUL,'*'), right=rdnode)
if token.type == MUL:
# chain rule
dnode = BinOp(left=lowdhi, op=Token(PLUS,'+'), right=hidlow)
node = BinOp(left=node, op=Token(MUL,'*'), right=rnode)
else:
# quotient rule
topnode = BinOp(left=lowdhi, op=Token(MINUS, '-'), right=hidlow)
botnode = BinOp(left=rnode, op=Token(MUL,'*'), right=rnode)
dnode = BinOp(left=topnode, op=Token(DIV,'/'), right=botnode)
node = BinOp(left=node, op=Token(DIV,'/'), right=rnode)
return dnode
def dexpr(self):
"""
expr : term ((PLUS | MINUS) term)*
term : factor ((MUL | DIV) factor)*
factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN
"""
dnode = self.dterm()
while self.current_token.type in (PLUS, MINUS):
token = self.current_token
if token.type == PLUS:
self.eat(PLUS)
elif token.type == MINUS:
self.eat(MINUS)
dnode = BinOp(left=dnode, op=token, right=self.dterm())
return dnode
def dparse(self):
node = self.dexpr()
if self.current_token.type != EOF:
self.error()
return node
###############################################################################
# #
# INTERPRETER #
# #
###############################################################################
class NodeVisitor(object):
def visit(self, node):
method_name = 'visit_' + type(node).__name__
visitor = getattr(self, method_name, self.generic_visit)
return visitor(node)
def generic_visit(self, node):
raise Exception('No visit_{} method'.format(type(node).__name__))
class Interpreter(NodeVisitor):
def __init__(self, parser):
self.parser = parser
self.dtree = copy.deepcopy(parser).dparse()
self.tree = copy.deepcopy(parser).parse()
def visit_BinOp(self, node):
if node.op.type == PLUS:
return self.visit(node.left) + self.visit(node.right)
elif node.op.type == MINUS:
return self.visit(node.left) - self.visit(node.right)
elif node.op.type == MUL:
return self.visit(node.left) * self.visit(node.right)
elif node.op.type == DIV:
return self.visit(node.left) / self.visit(node.right)
elif node.op.type == POW:
return math.pow(self.visit(node.left), self.visit(node.right))
def visit_Num(self, node):
return node.value
def visit_Var(self, node):
if self.vardict is None:
raise NameError("no var dict passed in")
if node.name not in self.vardict:
raise NameError("var {} not in var dict".format(node.name))
return self.vardict[node.name]
def visit_UnaryOp(self, node):
op = node.op.type
if op == PLUS:
return +self.visit(node.expr)
elif op == MINUS:
return -self.visit(node.expr)
elif op == COS:
return math.cos(self.visit(node.expr))
elif op == SIN:
return math.sin(self.visit(node.expr))
elif op == EXP:
return math.exp(self.visit(node.expr))
elif op == LOG:
return math.log(self.visit(node.expr))
def interpret(self, vd=None):
self.get_vardict(vd)
tree = self.tree
if tree is None:
return ''
return self.visit(tree)
def differentiate(self, vd=None, dv=None):
self.get_vardict(vd)
self.get_diffvar(dv)
tree = self.dtree
if tree is None:
return ''
return self.visit(tree)
def diff_all(self, vd=None):
self.get_vardict(vd)
tree = self.dtree
if tree is None:
return ''
variables = list(self.vardict.keys())
ret = {}
for v in variables:
self.vardict["d_"+v] = 0
for v in variables:
self.vardict["d_"+v] = 1
ret["d_{}".format(v)]=self.visit(tree)
self.vardict["d_"+v] = 0
return ret
def get_vardict(self, vd=None):
""" expects vardict to be formatted as x:10, y:20, z:3 """
vdict = {}
if vd is None:
text = input('vardict> ')
if not text:
self.vardict = None
return
else:
text = vd
text = text.replace(" ", "")
for var in text.split(','):
vals = var.split(':')
vdict[str(vals[0])] = float(vals[1])
self.vardict = vdict
return
def get_diffvar(self, dv=None):
if dv is None:
text = input('d_var> ')
else:
text = dv
text = text.replace(" ", "")
if text not in self.vardict.keys():
raise NameError("d_var not in vardict")
for v in list(self.vardict.keys()):
self.vardict["d_"+v]=0
self.vardict["d_"+text]=1
return
# def main():
# if run as main, can take inputs from command line
# while True:
# try:
# try:
# text = raw_input('spi> ')
# except NameError: # Python3
# text = input('spi> ')
# except EOFError:
# break
# if not text:
# continue
# lexer = Lexer(text)
# parser = Parser(lexer)
# interpreter = Interpreter(parser)
# result = interpreter.differentiate()
# print(result)
# if __name__ == '__main__':
# main()
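# A minimal usage sketch (hypothetical expression and values), mirroring the commented-out
# main() above; interpret()/differentiate() take the variable dict and the differentiation
# variable as strings:
# lexer = Lexer("x*x + 3*x")
# parser = Parser(lexer)
# interpreter = Interpreter(parser)
# interpreter.interpret(vd="x:2")              # evaluate: 2*2 + 3*2 -> 10.0
# interpreter.differentiate(vd="x:2", dv="x")  # d/dx = 2*x + 3 at x=2 -> 7.0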
'''
Based on the open source tutorial: Let's Build a Simple Interpreter
https://github.com/rspivak/lsbasi/tree/master/part8/python
''' | AD-testing-packaging-CS207 | /AD_testing_packaging_CS207-0.1.5.tar.gz/AD_testing_packaging_CS207-0.1.5/AD/interpreter.py | interpreter.py |
# cs107-FinalProject
[](https://codecov.io/gh/ZLYEPJ20/cs107-FinalProject)
[](https://travis-ci.com/ZLYEPJ20/cs107-FinalProject)
Group #20
<ul>
<li> Zhufeng Kang - [email protected]</li>
<li> Yuxi Liu - [email protected]</li>
<li> Esther Brown - [email protected]</li>
</ul>
| AD2020 | /AD2020-0.0.2.tar.gz/AD2020-0.0.2/README.md | README.md |
import numpy as np
from autodiff.dual import Dual
class Node:
"""
Node class to implement the reverse mode auto differentiation. Elementary operations are overloaded to create the tree structure
to represent the function. A forward pass is implemented in the _eval method and the reverse (adjoint) pass in _sens.
"""
_supported_scalars = (int, float, np.float64)
def __init__(self, key, *, value = None, left_partial = None , right_partial = None, operation = None, left = None, right = None, sensitivity = 0):
self.key = key
self.left = left
self.right = right
self.value = value
self.left_partial = left_partial ## save partial at the self level is not the best choice. => does not account for recycled nodes unless leaf nodes are redefined
self.right_partial = right_partial
self.operation = operation # the elementary operation performed at each node
self.sensitivity = sensitivity
self._eval()
def __add__(self, other):
"""
overload addition operation
"""
#self.partial = 1 #### Calculate partial at the creation step will not work for sin, cos, etc!!!
#other.partial = 1
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
if isinstance(other, self._supported_scalars):
operation = lambda x: x + other
return Node('add', left = self, right = None, operation = operation)
else:
operation = lambda x,y: x+y
return Node('add', left = self, right = other, operation = operation)
def __radd__(self, other):
"""
overload reverse addition operation
"""
return self.__add__(other)
def __sub__(self, other):
#self.partial = 1
#other.partial = -1
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
if isinstance(other, self._supported_scalars):
operation = lambda x: x - other
return Node('sub', left = self, right = None, operation = operation)
else:
operation = lambda x,y: x-y
return Node('sub', left = self, right = other, operation = operation)
def __rsub__(self, other):
"""
overload reverse subtraction operation
"""
return -self.__sub__(other)
def __mul__(self, other):
#self.partial = other.value
#other.partial = self.value
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
if isinstance(other, self._supported_scalars):
operation = lambda x: x*other
return Node('mul', left = self, right = None, operation = operation)
else:
operation = lambda x,y: x*y
return Node('mul', left = self, right = other, operation = operation)
def __rmul__(self, other):
"""
overload reverse multiplication operation
"""
return self.__mul__(other)
def __truediv__(self, other):
"""
overload division operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
if isinstance(other, self._supported_scalars):
operation = lambda x: x/other
return Node('div', left = self, right = None, operation = operation)
else:
operation = lambda x,y: x/y
return Node('div', left = self, right = other, operation = operation)
def __rtruediv__(self, other):
"""
overload reverse division operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
else:
operation = lambda x: other/x
return Node('div', left = self, right = None, operation = operation)
def __pow__(self, other):
"""
overload the power operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
if isinstance(other, self._supported_scalars):
operation = lambda x: x**other
return Node('pow', left = self, right = None, operation = operation)
else:
operation = lambda x,y: x**y
return Node('pow', left = self, right = other, operation = operation)
def __rpow__(self, other):
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
else:
operation = lambda x: other**x
return Node('exp', left = self, right = None, operation = operation)
def __neg__(self):
"""
overload the unary negation operation
"""
operation = lambda x: -x
return Node('neg', left = self, right = None, operation = operation)
def __lt__(self, other):
"""
overload the < operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
elif isinstance(other, Node):
return self.value < other.value
else:
return self.value < other
def __gt__(self, other):
"""
overload the > operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
elif isinstance(other, Node):
return self.value > other.value
else:
return self.value > other
def __eq__(self, other):
"""
overload the = operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
elif isinstance(other, Node):
return self.value == other.value and self.sensitivity == other.sensitivity
else:
return self.value == other
def __ne__(self, other):
"""
overload the != operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
elif isinstance(other, Node):
return self.value != other.value or self.sensitivity != other.sensitivity
else:
return self.value != other
def __le__(self, other):
"""
overload the <= operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
elif isinstance(other, Node):
return self.value <= other.value
else:
return self.value <= other
def __ge__(self, other):
"""
overload the >= operation
"""
if not isinstance(other, (*self._supported_scalars, Node)):
raise TypeError(f'Type not supported for reverse mode auto differentiation')
elif isinstance(other, Node):
return self.value >= other.value
else:
return self.value >= other
def __str__(self):
return self._pretty(self)
def _eval(self):
"""
Forward pass of the reverse mode auto differentiation.
Calculate the value of all nodes of the tree, as well as the partial derivative of the current node wrt all child nodes.
"""
if (self.left is None) and (self.right is None):
return self.value
elif self.value is not None:
return self.value
elif self.right is None:
dual = self.operation(Dual(self.left._eval())) # real part evaluates the current node, dual part evaluates the partial derivative
self.value = dual.real
self.left_partial = dual.dual
return self.value
else:
self.left._eval()
self.right._eval()
dual1 = Dual(self.left.value, 1)
dual2 = Dual(self.right.value, 0)
dual = self.operation(dual1, dual2)
self.value = dual.real
self.left_partial = dual.dual
self.right_partial = self.operation(Dual(self.left.value, 0), Dual(self.right.value, 1)).dual
return self.value
def _sens(self):
"""
Reverse pass of the reverse mode auto differentiation.
Calculate the sensitivity (adjoint) of all child nodes with respect to the current node
"""
if (self.left is None) and (self.right is None):
pass
elif self.right is None:
self.left.sensitivity += self.sensitivity*self.left_partial
self.left._sens()
else:
self.left.sensitivity += self.sensitivity*self.left_partial
self.right.sensitivity += self.sensitivity*self.right_partial
self.left._sens()
self.right._sens()
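# A minimal usage sketch (hypothetical values): leaf Nodes hold values, the overloaded
# operators build the expression tree (evaluating via _eval as nodes are created), and
# seeding sensitivity = 1 on the root before calling _sens() accumulates the adjoints:
# x = Node('x', value=2.0)
# y = Node('y', value=3.0)
# f = x * y + x                    # f.value -> 8.0
# f.sensitivity = 1
# f._sens()
# x.sensitivity, y.sensitivity     # -> (4.0, 2.0), i.e. df/dx and df/dy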
def _reset(self):
"""
Reset the sensitivity of the child nodes to zero to allow the reverse mode auto differentiation of the next component of a vector function.
"""
if (self.left is None) and (self.right is None):
pass
elif self.right is None:
self.left.sensitivity = 0
self.left._reset()
else:
self.left.sensitivity = 0
self.right.sensitivity = 0
self.left._reset()
self.right._reset()
@staticmethod
def _pretty(node):
"""Pretty print the expression tree (called recursively)"""
if node.left is None and node.right is None:
return f'{node.key}' + f': value = {node.value}'
if node.left is not None and node.right is None:
return f'{node.key}({node._pretty(node.left)})' + f': value = {node.value}'
return f'{node.key}({node._pretty(node.left)}, {node._pretty(node.right)})' + f': value = {node.value}' | AD27 | /ad27-0.0.1-py3-none-any.whl/autodiff/reverse.py | reverse.py |
import numpy as np
from autodiff.dual import Dual
from autodiff.reverse import Node
def sin(x):
"""
overwrite sine function
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.sin(x.real), np.cos(x.real)*x.dual)
elif type(x) is Node:
return Node('sin', left = x, operation = lambda x:sin(x))
else:
return np.sin(x)
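# A small sketch (hypothetical value) of the Dual branch: the real part carries the function
# value and the dual part the propagated derivative, so with the default seed Dual(x, 1):
# sin(Dual(0.5))   # -> Dual(np.sin(0.5), np.cos(0.5)), i.e. sin(0.5) and d/dx sin(x) at 0.5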
def cos(x):
"""
overwrite cosine function
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.cos(x.real), -np.sin(x.real)*x.dual)
elif type(x) is Node:
return Node('cos', left = x, operation = lambda x:cos(x))
else:
return np.cos(x)
def tan(x):
"""
overwrite tangent
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.tan(x.real), 1/(np.cos(x.real))**2*x.dual)
elif type(x) is Node:
return Node('tan', left = x, operation = lambda x:tan(x))
else:
return np.tan(x)
def log(x):
"""
overwrite log
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.log(x.real), 1/x.real*x.dual)
elif type(x) is Node:
return Node('log', left = x, operation = lambda x:log(x))
else:
return np.log(x)
def log2(x):
"""
overwrite log base 2
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.log2(x.real), (1/(x.real*np.log(2)))*x.dual)
elif type(x) is Node:
return Node('log2', left = x, operation = lambda x:log2(x))
else:
return np.log2(x)
def log10(x):
"""
overwrite log10
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.log10(x.real), (1/(x.real*np.log(10)))*x.dual)
elif type(x) is Node:
return Node('log10', left = x, operation = lambda x:log10(x))
else:
return np.log10(x)
def sinh(x):
"""
overwrite hyperbolic sine
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.sinh(x.real), np.cosh(x.real) * x.dual)
elif type(x) is Node:
return Node('sinh', left = x, operation = lambda x:sinh(x))
else:
return np.sinh(x)
def cosh(x):
"""
overwrite hyperbolic cosine
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.cosh(x.real), np.sinh(x.real) * x.dual)
elif type(x) is Node:
return Node('cosh', left = x, operation = lambda x:cosh(x))
else:
return np.cosh(x)
def tanh(x):
"""
overwrite hyperbolic tangent
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.tanh(x.real), x.dual / np.cosh(x.real)**2)
elif type(x) is Node:
return Node('tanh', left = x, operation = lambda x:tanh(x))
else:
return np.tanh(x)
def exp(x):
"""
overwrite exponential
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.exp(x.real), np.exp(x.real) * x.dual)
elif type(x) is Node:
return Node('exp', left = x, operation = lambda x:exp(x))
else:
return np.exp(x)
def sqrt(x):
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.sqrt(x.real), 1/2/np.sqrt(x.real) * x.dual)
elif type(x) is Node:
return Node('sqrt', left = x, operation = lambda x:sqrt(x))
else:
return np.sqrt(x)
def power(x, other):
if type(x) is Node:
return Node('pow', left = x, operation = lambda x: power(x, other))
else:
return x.__pow__(other)
def arcsin(x):
"""
overwrite arc sine
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.arcsin(x.real), 1 / np.sqrt(1 - x.real ** 2) * x.dual)
elif type(x) is Node:
return Node('arcsin', left = x, operation = lambda x:arcsin(x))
else:
return np.arcsin(x)
def arccos(x):
"""
overwrite arc cosine
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.arccos(x.real), -1 / np.sqrt(1 - x.real**2) * x.dual)
elif type(x) is Node:
return Node('arccos', left = x, operation = lambda x:arccos(x))
else:
return np.arccos(x)
def arctan(x):
"""
overwrite arc tangent
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.arctan(x.real), 1 / (1 + x.real**2) * x.dual)
elif type(x) is Node:
return Node('arctan', left = x, operation = lambda x:arctan(x))
else:
return np.arctan(x)
def logist(x, loc=0, scale=1):
"""
overwrite logistic
default set loc and scale to be 0 and 1
"""
supported_types = (int, float, np.float64, Dual, Node)
if type(x) not in supported_types:
raise TypeError('type of input argument not supported')
elif type(x) is Dual:
return Dual(np.exp((loc-x.real)/scale)/(scale*(1+np.exp((loc-x.real)/scale))**2),
np.exp((loc-x.real)/scale)/(scale*(1+np.exp((loc-x.real)/scale))**2)/ \
(scale*(1+np.exp((loc-x.real)/scale))**2)**2* \
((-1/scale)*(scale*(1+np.exp((loc-x.real)/scale))**2)- \
((loc-x.real)/scale)*(scale*2*(1+np.exp((loc-x.real)/scale)))*np.exp((loc-x.real)/scale)*(-1)/scale)*x.dual)
elif type(x) is Node:
return Node('logist', left = x, operation = lambda x:logist(x))
else:
return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) | AD27 | /ad27-0.0.1-py3-none-any.whl/autodiff/trig.py | trig.py |
import numpy as np
class Dual:
_supported_scalars = (int, float, np.float64)
def __init__(self, real, dual = 1):
self.real = real
self.dual = dual
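# A minimal sketch (hypothetical values): arithmetic on Dual numbers carries the
# derivative in the dual part, e.g. f(x) = x*x + 3 at x = 3 with the default seed dual = 1:
# x = Dual(3)
# f = x * x + 3
# f.real, f.dual   # -> (12, 6), i.e. f(3) and f'(3)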
def __add__(self, other):
"""
overload add operation
"""
if not isinstance(other, (*self._supported_scalars, Dual)):
raise TypeError(f'Type not supported for Dual number operations')
if isinstance(other, self._supported_scalars):
return Dual(self.real + other, self.dual)
else:
return Dual(self.real + other.real, self.dual + other.dual)
def __radd__(self, other):
"""
overload reverse addition operation
"""
return self.__add__(other)
def __sub__(self, other):
"""
overload subtraction operation
"""
if not isinstance(other, (*self._supported_scalars, Dual)):
raise TypeError(f'Type not supported for Dual number operations')
if isinstance(other, self._supported_scalars):
return Dual(self.real - other, self.dual)
else:
return Dual(self.real - other.real, self.dual - other.dual)
def __rsub__(self, other):
"""
overload reverse subtraction operation
"""
return Dual(other - self.real, -self.dual)
def __mul__(self, other):
"""
overwrite multiplication operation
"""
if not isinstance(other, (*self._supported_scalars, Dual)):
raise TypeError(f'Type not supported for Dual number operations')
if isinstance(other, self._supported_scalars):
return Dual(other*self.real, other*self.dual)
else:
return Dual(self.real*other.real, self.dual*other.real + self.real*other.dual)
def __rmul__(self, other):
"""
overwrite reverse multiplication operation
"""
return self.__mul__(other)
def __pow__(self, other):
"""
overwrite power law operation
"""
if not isinstance(other, self._supported_scalars):
raise TypeError(f'Type not supported for Dual number operations')
if isinstance(other, self._supported_scalars):
return Dual(self.real**other, other*self.real**(other - 1)*self.dual)
def __rpow__(self, other):
"""
overwrite reverse power law operation
"""
if not isinstance(other, self._supported_scalars):
raise TypeError(f'Type not supported for Dual number operations')
if isinstance(other, self._supported_scalars):
return Dual(other**self.real, np.log(other)*other**self.real*self.dual)
def __truediv__(self, other):
"""
Overload the division operator (/) to handle Dual class
"""
if not isinstance(other, (*self._supported_scalars, Dual)):
raise TypeError(f'Type not supported for Dual number operations')
if isinstance(other, self._supported_scalars):
return Dual(self.real/other,self.dual/other)
else:
return Dual(self.real/other.real, self.dual/other.real - self.real*other.dual/other.real/other.real)
def __rtruediv__(self, other):
"""
Overload the reverse division operator (/) to handle Dual class
"""
return Dual(other/self.real, -other*self.dual/self.real/self.real )
def __neg__(self):
"""
Overload the negative operator to handle Dual class
"""
return Dual(-self.real, -self.dual)
def __ne__(self, other):
"""
Overload the inequality operator (!=) to handle Dual class
"""
if isinstance(other, Dual):
return self.real != other.real
return self.real != other
def __lt__(self, other):
"""
Overload the less than operator to handle Dual class
"""
if isinstance(other, Dual):
return self.real < other.real
return self.real < other
def __gt__(self, other):
"""
Overload the greater than operator to handle Dual class
"""
if isinstance(other, Dual):
return self.real > other.real
return self.real > other
def __le__(self, other):
"""
Overload the <= operator to handle Dual class
"""
if isinstance(other, Dual):
return self.real <= other.real
return self.real <= other
def __ge__(self, other):
"""
Overload the >= operator to handle Dual class
"""
if isinstance(other, Dual):
return self.real >= other.real
return self.real >= other
def __repr__(self):
"""
Print class definition
"""
return f'Dual({self.real},{self.dual})'
def __str__(self):
"""
prettier string representation
"""
return f'Forward mode dual number object(real: {self.real}, dual: {self.dual})'
def __len__(self):
"""
Return 1 (True) when both the real and dual parts are plain scalars
"""
return (type(self.real) in (int, float)) and (type(self.dual) in (int, float))
def __eq__(self,other):
if isinstance(other, Dual):
return (self.real == other.real and self.dual == other.dual)
return self.real == other | AD27 | /ad27-0.0.1-py3-none-any.whl/autodiff/dual.py | dual.py |
import numpy as np
import autodiff.trig as tr
from autodiff.dual import Dual
from autodiff.reverse import Node
class ForwardDiff:
def __init__(self, f):
self.f = f
def derivative(self, x, p=[1]):
"""
Parameters
==========
x : value(s) of the independent variable(s) at which the derivative is evaluated
p : direction in which the directional derivative is evaluated
Returns
=======
the dual part of a Dual number, i.e. the directional derivative
Example:
=======
z_i = Dual(x_i, p_i)
f(z).real = f(x)
f(z).dual = D_p_{f}
"""
scalars = [float, int, np.float64]
if type(x) in scalars:
z = Dual(x)
elif isinstance(x, list) or isinstance(x, np.ndarray):
if len(p)!=len(x):
raise Exception('length of p should be the same as length of x')
if len(x)==1:
z=Dual(x[0])
else:
z = [0] * len(x)
for i in range(len(x)):
z[i] = Dual(x[i], p[i])
else:
raise TypeError(f'Unsupported type for derivative function. X is of type {type(x)}')
if type(self.f(z)) is Dual:
return self.f(z).dual
else:
output=[]
for i in self.f(z):
output.append(i.dual)
return output
def Jacobian(self, x):
# construct a dual number
deri_array = []
for i in range(len(x)):
p = np.zeros(len(x))
p[i] = 1
deri_array.append(self.derivative(x, p))
return np.array(deri_array).T
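# A minimal usage sketch (hypothetical values) for the forward-mode driver: the Jacobian of
# f(x, y) = [x*y, sin(x)] at (1.0, 2.0) is assembled column by column from directional
# derivatives along the unit vectors p.
# f = ForwardDiff(lambda v: [v[0] * v[1], tr.sin(v[0])])
# f.Jacobian([1.0, 2.0])   # -> approximately [[2.0, 1.0], [cos(1.0), 0.0]]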
class ReverseDiff:
def __init__(self, f):
self.f = f
def Jacobian(self, vector):
iv_nodes = [Node(1-k) for k in range(len(vector))]  # nodes of the independent variables, key values numbered according to the input variables
for i, iv_node in enumerate(iv_nodes):
iv_node.value = vector[i]
tree = self.f([*iv_nodes])
# print(type(tree))
if type(tree) is Node:
#tree._eval()
#print(tree)
tree.sensitivity = 1
tree._sens()
return [iv_node.sensitivity for iv_node in iv_nodes]
else:
deri_array = []
for line in tree:
#line._eval()
line._reset()
line.sensitivity=1
line._sens()
line_partials = [iv_node.sensitivity for iv_node in iv_nodes]
deri_array.append(line_partials )
return deri_array | AD27 | /ad27-0.0.1-py3-none-any.whl/autodiff/autoDiff.py | autoDiff.py |
import time
import os
import re
from ada.features import Execution, TestCase, UploadImage
def get_keyword_failed(data, keyword=""):
for func in data:
if func["status"] != "PASS":
if keyword:
keyword += "."
keyword += func["kwname"]
keyword = get_keyword_failed(func["functions"], keyword)
break
return keyword
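# A small sketch (hypothetical data) of the nested keyword structure this walks: each entry
# carries "status", "kwname" and its child keywords under "functions".
# data = [{"status": "FAIL", "kwname": "Open Page",
#          "functions": [{"status": "FAIL", "kwname": "Click Element", "functions": []}]}]
# get_keyword_failed(data)   # -> "Open Page.Click Element"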
class BaseListener:
ROBOT_LISTENER_API_VERSION = 2
API_KEY = ""
PROJECT_ID = ""
def __init__(self):
"""Is used to init variables, objects, ... to support for generating report
Args:
sampleVar: TBD
Returns:
NA
"""
self.execution = None
self.dict_exe = {}
self.arr_exe = []
self.step = {}
def start_suite(self, name, attrs):
"""This event will be trigger at the beginning of test suite.
Args:
name: TCs name
attrs: Attribute of test case can be query as dictionary type
Returns:
NA
"""
self.image = []
self.name = name
self.index = -1
parent = None
if self.arr_exe:
functions = []
id_parent = self.arr_exe[-1]
parent = self.dict_exe[id_parent]
if id_parent in self.step and self.step[id_parent]:
functions = self.step[id_parent][0]
try:
Execution(self.API_KEY).up_log(
parent, functions=functions
)
except Exception:
pass
try:
self.execution = Execution(self.API_KEY).create(
attrs["longname"], self.PROJECT_ID, attrs["totaltests"],
parent=parent, starttime=attrs["starttime"], endtime="",
doc=attrs["doc"], source=attrs["source"]
)
except Exception as e:
pass
self.dict_exe[attrs["id"]] = self.execution
self.step[attrs["id"]] = {}
self.arr_exe.append(attrs["id"])
def end_suite(self, name, attrs):
function = []
if attrs["id"] in self.step and self.step[attrs["id"]]:
function = self.step[attrs["id"]][0]
try:
Execution(self.API_KEY).up_log(
self.dict_exe[attrs["id"]], log_teardown="this is log teardown",
status="complete", functions=function, duration=attrs["elapsedtime"],
endtime=attrs["endtime"]
)
except Exception:
pass
del self.arr_exe[-1]
def start_test(self, name, attrs):
self.image = []
self.step[name] = {}
self.index = -1
self.start_time = time.time()
self.arr_exe.append(name)
def end_test(self, name, attrs):
failed_keyword = ""
if attrs['status'] == 'PASS':
status = "passed"
else:
status = "failed"
failed_keyword = get_keyword_failed(self.step[name][0])
result = None
try:
if not result:
result = TestCase(self.API_KEY).create(
name, status, attrs["elapsedtime"], self.execution,
failed_reason=attrs["message"], functions=self.step[name][0],
starttime=attrs["starttime"], endtime=attrs["endtime"],
failed_keyword=failed_keyword
)
if result and self.image:
UploadImage(self.API_KEY).create(result, self.image)
except Exception:
pass
self.step[name] = {}
del self.arr_exe[-1]
self.image = []
def start_keyword(self, name, attrs):
self.log_test = []
self.index += 1
self.step[self.arr_exe[-1]].setdefault(self.index, [])
def end_keyword(self, name, attrs):
# print("end key ", attrs)
attrs["functions"] = []
attrs["log"] = self.log_test
index = self.index + 1
key = self.arr_exe[-1]
if index in self.step[key] and self.step[key][index]:
attrs["functions"] = self.step[key][index]
self.step[key][index] = []
self.step[key][self.index].append(attrs)
self.index -= 1
self.log_test = []
self.check = True
def log_message(self, msg):
message = msg["message"]
result = re.search(r"(([<]([\w\W\-.\/]+\.(png|jpg))[>])|([\w-]+\.(png|jpg)))", message)
real_image = None
if result:
data = result.group(1).strip("<>")
if "/" in data or "\\" in data:
image_path = data
if "/" in data:
image = data.split("/")[-1]
else:
image = data.split("\\")[-1]
else:
image_path = os.path.join(os.getcwd(), data.strip())
image = data
try:
if os.path.isfile(image_path):
self.image.append(('screenshot', open(image_path, "rb")))
real_image = image
except:
pass
msg["image"] = real_image
self.log_test.append(msg)
# def message(self, msg):
# print('\n Listener detect message: %s' %(msg))
def close(self):
print('\n Close Suite') | ADA-sdk | /ADA-sdk-2.9.tar.gz/ADA-sdk-2.9/ada/listener.py | listener.py |
from collections import OrderedDict
import functools
import torch.nn as nn
####################
# Basic blocks
####################
def get_norm_layer(norm_type, adafm_ksize=1):
# helper selecting normalization layer
if norm_type == 'batch':
layer = functools.partial(nn.BatchNorm2d, affine=True, track_running_stats=True)
elif norm_type == 'instance':
layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False)
elif norm_type == 'basic':
layer = functools.partial(Basic)
elif norm_type == 'adafm':
layer = functools.partial(AdaptiveFM, kernel_size=adafm_ksize)
else:
raise NotImplementedError('normalization layer [{:s}] is not found'.format(norm_type))
return layer
def act(act_type, inplace=True, neg_slope=0.2, n_prelu=1):
# helper selecting activation
# neg_slope: for leakyrelu and init of prelu
# n_prelu: for p_relu num_parameters
act_type = act_type.lower()
if act_type == 'relu':
layer = nn.ReLU(inplace)
elif act_type == 'leakyrelu':
layer = nn.LeakyReLU(neg_slope, inplace)
elif act_type == 'prelu':
layer = nn.PReLU(num_parameters=n_prelu, init=neg_slope)
else:
raise NotImplementedError('activation layer [{:s}] is not found'.format(act_type))
return layer
def pad(pad_type, padding):
# helper selecting padding layer
# if padding is 'zero', do by conv layers
pad_type = pad_type.lower()
if padding == 0:
return None
if pad_type == 'reflect':
layer = nn.ReflectionPad2d(padding)
elif pad_type == 'replicate':
layer = nn.ReplicationPad2d(padding)
else:
raise NotImplementedError('padding layer [{:s}] is not implemented'.format(pad_type))
return layer
def get_valid_padding(kernel_size, dilation):
kernel_size = kernel_size + (kernel_size - 1) * (dilation - 1)
padding = (kernel_size - 1) // 2
return padding
class ShortcutBlock(nn.Module):
#Elementwise sum the output of a submodule to its input
def __init__(self, submodule):
super(ShortcutBlock, self).__init__()
self.sub = submodule
def forward(self, x):
output = x + self.sub(x)
return output
def __repr__(self):
tmpstr = 'Identity + \n|'
modstr = self.sub.__repr__().replace('\n', '\n|')
tmpstr = tmpstr + modstr
return tmpstr
def sequential(*args):
# Flatten Sequential. It unwraps nn.Sequential.
if len(args) == 1:
if isinstance(args[0], OrderedDict):
raise NotImplementedError('sequential does not support OrderedDict input.')
return args[0] # No sequential is needed.
modules = []
for module in args:
if isinstance(module, nn.Sequential):
for submodule in module.children():
modules.append(submodule)
elif isinstance(module, nn.Module):
modules.append(module)
return nn.Sequential(*modules)
def conv_block(in_nc, out_nc, kernel_size, stride=1, dilation=1, groups=1, bias=True, \
pad_type='zero', norm_layer=None, act_type='relu'):
'''
Conv layer with padding, normalization, activation
'''
padding = get_valid_padding(kernel_size, dilation)
p = pad(pad_type, padding) if pad_type and pad_type != 'zero' else None
padding = padding if pad_type == 'zero' else 0
c = nn.Conv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding, \
dilation=dilation, bias=bias, groups=groups)
a = act(act_type) if act_type else None
n = norm_layer(out_nc) if norm_layer else None
return sequential(p, c, n, a)
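# A minimal usage sketch (hypothetical sizes): build a 3x3 conv block whose normalization
# slot holds an AdaFM layer selected through get_norm_layer.
# norm_layer = get_norm_layer('adafm', adafm_ksize=1)
# block = conv_block(64, 64, kernel_size=3, norm_layer=norm_layer, act_type='relu')
# # -> nn.Sequential of Conv2d -> AdaptiveFM -> ReLU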
####################
# Useful blocks
####################
class ResNetBlock(nn.Module):
'''
ResNet Block, 3-3 style
'''
def __init__(self, in_nc, mid_nc, out_nc, kernel_size=3, stride=1, dilation=1, groups=1, \
bias=True, pad_type='zero', norm_layer=None, act_type='relu', res_scale=1):
super(ResNetBlock, self).__init__()
conv0 = conv_block(in_nc, mid_nc, kernel_size, stride, dilation, groups, bias, pad_type, \
norm_layer, act_type)
act_type = None
conv1 = conv_block(mid_nc, out_nc, kernel_size, stride, dilation, groups, bias, pad_type, \
norm_layer, act_type)
self.res = sequential(conv0, conv1)
self.res_scale = res_scale
def forward(self, x):
res = self.res(x).mul(self.res_scale)
return x + res
####################
# AdaFM
####################
class AdaptiveFM(nn.Module):
def __init__(self, in_channel, kernel_size):
super(AdaptiveFM, self).__init__()
padding = get_valid_padding(kernel_size, 1)
self.transformer = nn.Conv2d(in_channel, in_channel, kernel_size,
padding=padding, groups=in_channel)
def forward(self, x):
return self.transformer(x) + x
class Basic(nn.Module):
def __init__(self, in_channel):
super(Basic, self).__init__()
self.in_channel = in_channel
def forward(self, x):
return x
####################
# Upsampler
####################
def pixelshuffle_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True, \
pad_type='zero', norm_layer=None, act_type='relu'):
'''
Pixel shuffle layer
(Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional
Neural Network, CVPR17)
'''
conv = conv_block(in_nc, out_nc * (upscale_factor ** 2), kernel_size, stride, bias=bias, \
pad_type=pad_type, norm_layer=None, act_type=None)
pixel_shuffle = nn.PixelShuffle(upscale_factor)
n = norm_layer(out_nc) if norm_layer else None
a = act(act_type) if act_type else None
return sequential(conv, pixel_shuffle, n, a)
def upconv_blcok(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True, \
pad_type='zero', norm_layer=None, act_type='relu', mode='nearest'):
# Up conv
# described in https://distill.pub/2016/deconv-checkerboard/
upsample = nn.Upsample(scale_factor=upscale_factor, mode=mode)
conv = conv_block(in_nc, out_nc, kernel_size, stride, bias=bias, \
pad_type=pad_type, norm_layer=norm_layer, act_type=act_type)
return sequential(upsample, conv) | ADAFMNoiseReducer | /models/modules/block.py | block.py |
from ADB_Easy_Control import touch_event
from ADB_Easy_Control import data_model
def point_touch_withobj(position: data_model.Point, sleep_time: float):
touch_event.point_touch(position.position_x, position.position_y, sleep_time)
def point_swipe_withobj(start_position: data_model.Point, end_position: data_model.Point, swipe_time: float,
sleep_time: float):
touch_event.point_swipe(start_position.position_x, start_position.position_y, end_position.position_x,
end_position.position_y, swipe_time, sleep_time)
def point_longtime_touch_withobj(position: data_model.Point, touch_time: float, sleep_time: float):
touch_event.point_longtime_touch(position.position_x, position.position_y, touch_time, sleep_time)
def rectangle_area_touch_withobj(area: data_model.RectangleArea, sleep_time: float):
touch_event.rectangle_area_touch(area.beginarea_x, area.finisharea_x, area.beginarea_y, area.finisharea_y,
sleep_time)
def rectangle_area_longtime_touch_withobj(area: data_model.RectangleArea, touch_time: float, sleep_time: float):
touch_event.rectangle_area_longtime_touch(area.beginarea_x, area.finisharea_x, area.beginarea_y, area.finisharea_y,
touch_time, sleep_time)
def rectangle_area_swipe_withobj(start_area: data_model.RectangleArea, end_area: data_model.RectangleArea,
swipe_time: float,
sleep_time: float):
touch_event.rectangle_area_swipe(start_area.beginarea_x, start_area.finisharea_x, start_area.beginarea_y,
start_area.finisharea_y, end_area.beginarea_x, end_area.finisharea_x,
end_area.beginarea_y, end_area.finisharea_y, swipe_time, sleep_time)
def rectangle_inarea_rand_swipe_withobj(area: data_model.RectangleArea, min_swipe_distance: int,
max_swipe_distance: int,
swipe_time: float, sleep_time: float):
touch_event.rectangle_inarea_rand_swipe(area.beginarea_x, area.finisharea_x, area.beginarea_y, area.finisharea_y,
min_swipe_distance, max_swipe_distance, swipe_time, sleep_time) | ADB-Easy-Control | /ADB_Easy_Control-1.0.1-py3-none-any.whl/ADB_Easy_Control/touch_event_withobj.py | touch_event_withobj.py |
import os
def get_dpi() -> str:
dpi_string = os.popen("adb" + multi_devices_helper() + " shell wm density").read().split(" ")[2][:-1]
return dpi_string
def get_size() -> str:
size_string = os.popen("adb" + multi_devices_helper() + " shell wm size").read().split(" ")[2][:-1]
return size_string
def get_size_x() -> str:
size_x_string = os.popen("adb" + multi_devices_helper() + " shell wm size").read().split(" ")[2][:-1].split("x")[0]
return size_x_string
def get_size_y() -> str:
size_y_string = os.popen("adb" + multi_devices_helper() + " shell wm size").read().split(" ")[2][:-1].split("x")[1]
return size_y_string
def reboot():
os.system("adb" + multi_devices_helper() + " shell reboot")
def shutdown():
os.system("adb" + multi_devices_helper() + " shell reboot -p")
def turn_on_wifi():
os.system("adb" + multi_devices_helper() + " shell svc wifi enable")
def turn_off_wifi():
os.system("adb" + multi_devices_helper() + " shell svc wifi disable")
def wifi_prefer():
os.system("adb" + multi_devices_helper() + " shell svc wifi prefer")
def turn_on_data():
os.system("adb" + multi_devices_helper() + " shell svc data enable")
def turn_off_data():
os.system("adb" + multi_devices_helper() + " shell svc data disable")
def data_prefer():
os.system("adb" + multi_devices_helper() + " shell svc data prefer")
def power_stay_on(mode: str):
os.system("adb" + multi_devices_helper() + " shell svc power stayon " + format(mode))
def kill_adb_server():
os.system("adb kill-server")
def start_adb_server():
os.system("adb start_server")
def get_connected_devices() -> list:
devices_info = os.popen("adb devices -l").read().split()[4:]
devices_list = []
for i in range(0, len(devices_info), 7):
devices_list.append(devices_info[i])
return devices_list
def get_connected_device_info() -> list:
devices_info = os.popen("adb devices -l").read().split()[4:]
return devices_info
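# A small usage sketch (hypothetical): with several devices attached, set the module-level
# flags defined just below so that every adb call is routed to one device via "-s <serial>":
# device_assistant.is_multi_devices = 1
# device_assistant.current_device = device_assistant.get_connected_devices()[0]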
is_multi_devices = 0
current_device = ""
def multi_devices_helper() -> str:
if is_multi_devices == 1 and not current_device == "":
return format(" -s " + current_device)
else:
return "" | ADB-Easy-Control | /ADB_Easy_Control-1.0.1-py3-none-any.whl/ADB_Easy_Control/device_assistant.py | device_assistant.py |
import os
import datetime
from ADB_Easy_Control import device_assistant
def screen_capture() -> str:
now_time = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
os.system("adb" + device_assistant.multi_devices_helper() + " shell screencap -p /sdcard/screencap.png")
if not os.path.exists(format(os.getcwd()) + "/ScreenCapture"):
os.mkdir(format(os.getcwd()) + "/ScreenCapture/")
os.system(
"adb" + device_assistant.multi_devices_helper() + " pull /sdcard/screencap.png" + " " + format(
os.getcwd()) + "/ScreenCapture/" + format(now_time) + ".png")
return format(now_time) + ".png"
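# A minimal usage sketch: the capture is pulled into ./ScreenCapture/ and the timestamped
# file name is returned, e.g. "2021-03-05-14:30:00.png" (hypothetical time).
# shot_name = screen_capture()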
def custompath_screen_capture(filename: str, path: str) -> str:
os.system("adb" + device_assistant.multi_devices_helper() + " shell screencap -p /sdcard/screencap.png")
os.system("adb" + device_assistant.multi_devices_helper() + " pull /sdcard/screencap.png" + " " + format(
path) + "/ScreenCapture/" + format(filename) + ".png")
return format(filename) + ".png"
def screen_record(time_limit: float) -> str:
now_time = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
os.system("adb" + device_assistant.multi_devices_helper() + " shell screenrecord --time-limit " + format(
time_limit) + " /sdcard/screenrecord.mp4")
if not os.path.exists(format(os.getcwd()) + "/ScreenRecord"):
os.mkdir(format(os.getcwd()) + "/ScreenRecord/")
os.system(
"adb" + device_assistant.multi_devices_helper() + " pull /sdcard/screenrecord.mp4" + " " + format(
os.getcwd()) + "/ScreenRecord/" + format(now_time) + ".mp4")
return format(now_time) + ".mp4"
def custompath_screen_record(time_limit: float, filename: str, path: str) -> str:
os.system("adb" + device_assistant.multi_devices_helper() + " shell screenrecord --time-limit " + format(
time_limit) + " /sdcard/screenrecord.mp4")
os.system(
"adb" + device_assistant.multi_devices_helper() + " pull /sdcard/screenrecord.mp4" + " " + format(
path) + "/ScreenRecord/" + format(filename) + ".mp4")
return format(filename) + ".mp4"
def custom_screen_record(time_limit: float, size: str, bit_rate: int, filename: str, path: str):
os.system("adb" + device_assistant.multi_devices_helper() + " shell screenrecord --time-limit " + format(
time_limit) + " --size " + format(size) + " --bit-rate " + format(bit_rate) + " /sdcard/screenrecord.mp4")
os.system(
"adb" + device_assistant.multi_devices_helper() + " pull /sdcard/screenrecord.mp4" + " " + format(
path) + "/ScreenRecord/" + format(filename) + ".mp4")
return format(filename) + ".mp4"
def pull_file_to_computer(droid_path: str, computer_path: str):
os.system("adb" + device_assistant.multi_devices_helper() + " pull " + droid_path + " " + computer_path)
def push_file_to_droid(computer_path: str, droid_path: str):
os.system("adb" + device_assistant.multi_devices_helper() + " push " + computer_path + " " + droid_path) | ADB-Easy-Control | /ADB_Easy_Control-1.0.1-py3-none-any.whl/ADB_Easy_Control/screen_and_file.py | screen_and_file.py |
import os
import time
import random
import math
from ADB_Easy_Control import data_model, device_assistant
from functools import singledispatch
@singledispatch
def point_touch(position_x: int, position_y: int, sleep_time: float):
os.system('adb' + device_assistant.multi_devices_helper() + ' shell input tap ' + format(position_x) + ' ' + format(
position_y))
time.sleep(sleep_time)
@point_touch.register(data_model.Point)
def _(position: data_model.Point, sleep_time: float):
point_touch(position.position_x, position.position_y, sleep_time)
@singledispatch
def point_swipe(start_position_x: int, start_position_y: int, end_position_x: int, end_position_y: int,
swipe_time: float, sleep_time: float):
if swipe_time == 0:
os.system('adb' + device_assistant.multi_devices_helper() + ' shell input swipe ' + format(
start_position_x) + ' ' + format(start_position_y) + ' ' + format(
end_position_x) + ' ' + format(end_position_y))
if swipe_time != 0:
if swipe_time > 5:
print('You may have entered too long a slide time of ' + format(
swipe_time) + ' seconds.\nNote that the sliding time is in seconds and not milliseconds.')
os.system('adb' + device_assistant.multi_devices_helper() + ' shell input swipe ' + format(
start_position_x) + ' ' + format(start_position_y) + ' ' + format(
end_position_x) + ' ' + format(end_position_y) + ' ' + format(swipe_time * 1000))
time.sleep(sleep_time)
@point_swipe.register(data_model.Point)
def _(start_position: data_model.Point, end_position: data_model.Point, swipe_time: float,
sleep_time: float):
point_swipe(start_position.position_x, start_position.position_y, end_position.position_x,
end_position.position_y, swipe_time, sleep_time)
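# A small usage sketch (hypothetical coordinates): singledispatch lets the same call accept
# raw coordinates or data_model.Point objects (see the registered overload above), assuming
# Point is constructed from x and y positions:
# point_swipe(100, 500, 100, 1000, 0.3, 1.0)
# point_swipe(data_model.Point(100, 500), data_model.Point(100, 1000), 0.3, 1.0)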
@singledispatch
def point_longtime_touch(position_x: int, position_y: int, touch_time: float, sleep_time: float):
if touch_time > 5:
# print('You may have entered too long a swipe time of ' + format(touch_time) + ' seconds.\nNote that the swipe time is in seconds and not milliseconds.')
print('You may have entered too long a touch time of ' + format(
touch_time) + ' seconds.\nNote that the touching time is in seconds and not milliseconds.')
os.system(
'adb' + device_assistant.multi_devices_helper() + ' shell input swipe ' + format(position_x) + ' ' + format(
position_y) + ' ' + format(
position_x) + ' ' + format(position_y) + ' ' + format(touch_time * 1000))
time.sleep(sleep_time)
@point_longtime_touch.register(data_model.Point)
def _(position: data_model.Point, touch_time: float, sleep_time: float):
point_longtime_touch(position.position_x, position.position_y, touch_time, sleep_time)
@singledispatch
def rectangle_area_touch(beginarea_x: int, finisharea_x: int, beginarea_y: int, finisharea_y: int, sleep_time: float):
rand_position_x = random.randint(beginarea_x, finisharea_x)
rand_position_y = random.randint(beginarea_y, finisharea_y)
os.system(
'adb' + device_assistant.multi_devices_helper() + ' shell input tap ' + format(rand_position_x) + ' ' + format(
rand_position_y))
time.sleep(sleep_time)
@rectangle_area_touch.register(data_model.RectangleArea)
def _(area: data_model.RectangleArea, sleep_time: float):
rectangle_area_touch(area.beginarea_x, area.finisharea_x, area.beginarea_y, area.finisharea_y,
sleep_time)
@singledispatch
def rectangle_area_longtime_touch(beginarea_x: int, finisharea_x: int, beginarea_y: int, finisharea_y: int,
touch_time: float, sleep_time: float):
rand_position_x = random.randint(beginarea_x, finisharea_x)
rand_position_y = random.randint(beginarea_y, finisharea_y)
os.system(
'adb' + device_assistant.multi_devices_helper() + ' shell input swipe ' + format(rand_position_x) + ' ' + format(
rand_position_y) + ' ' + format(
rand_position_x) + ' ' + format(rand_position_y) + ' ' + format(touch_time * 1000))
time.sleep(sleep_time)
@rectangle_area_longtime_touch.register(data_model.RectangleArea)
def _(area: data_model.RectangleArea, touch_time: float, sleep_time: float):
rectangle_area_longtime_touch(area.beginarea_x, area.finisharea_x, area.beginarea_y, area.finisharea_y,
touch_time, sleep_time)
@singledispatch
def rectangle_area_swipe(start_beginarea_x: int, start_finisharea_x: int, start_beginarea_y: int,
start_finisharea_y: int, end_beginarea_x: int, end_finisharea_x: int, end_beginarea_y: int,
end_finisharea_y: int, swipe_time: float, sleep_time: float):
rand_start_position_x = random.randint(start_beginarea_x, start_finisharea_x)
rand_start_position_y = random.randint(start_beginarea_y, start_finisharea_y)
rand_end_position_x = random.randint(end_beginarea_x, end_finisharea_x)
rand_end_position_y = random.randint(end_beginarea_y, end_finisharea_y)
point_swipe(rand_start_position_x, rand_start_position_y, rand_end_position_x, rand_end_position_y, swipe_time,
sleep_time)
@rectangle_area_swipe.register(data_model.RectangleArea)
def _(start_area: data_model.RectangleArea, end_area: data_model.RectangleArea,
swipe_time: float,
sleep_time: float):
rectangle_area_swipe(start_area.beginarea_x, start_area.finisharea_x, start_area.beginarea_y,
start_area.finisharea_y, end_area.beginarea_x, end_area.finisharea_x,
end_area.beginarea_y, end_area.finisharea_y, swipe_time, sleep_time)
@singledispatch
def rectangle_inarea_rand_swipe(beginarea_x: int, finisharea_x: int, beginarea_y: int, finisharea_y: int,
min_swipe_distance: int, max_swipe_distance: int, swipe_time: float, sleep_time: float):
if min_swipe_distance > max_swipe_distance:
print("Minimum swipe distance " + format(min_swipe_distance) + " is greater than maximum swipe distance " + format(max_swipe_distance))
return
diagonal_distance = math.hypot(finisharea_x - beginarea_x, finisharea_y - beginarea_y)
if max_swipe_distance > diagonal_distance:
print("The specified maximum swipe distance " + format(max_swipe_distance) + " is greater than the diagonal distance of the area " + format(diagonal_distance))
max_swipe_distance = diagonal_distance
if min_swipe_distance > max_swipe_distance:
print("The specified minimum swipe distance " + format(min_swipe_distance) + " is greater than the diagonal distance of the area " + format(diagonal_distance))
min_swipe_distance = max_swipe_distance
rand_distance = random.randint(min_swipe_distance, max_swipe_distance)
rand_degree = random.randint(0, 90)
x_move_distance = math.cos(math.radians(rand_degree)) * rand_distance
y_move_distance = math.sin(math.radians(rand_degree)) * rand_distance
rand_direction = random.randint(1, 4)
if rand_direction == 1:
rand_start_position_x = random.randint(beginarea_x, int(finisharea_x - x_move_distance))
rand_start_position_y = random.randint(beginarea_y, int(finisharea_y - y_move_distance))
rand_end_position_x = rand_start_position_x + x_move_distance
rand_end_position_y = rand_start_position_y + y_move_distance
elif rand_direction == 2:
rand_start_position_x = random.randint(beginarea_x, int(finisharea_x - x_move_distance))
rand_start_position_y = random.randint(int(beginarea_y + y_move_distance), finisharea_y)
rand_end_position_x = rand_start_position_x + x_move_distance
rand_end_position_y = rand_start_position_y - y_move_distance
elif rand_direction == 3:
rand_start_position_x = random.randint(int(beginarea_x + x_move_distance), finisharea_x)
rand_start_position_y = random.randint(beginarea_y, int(finisharea_y - y_move_distance))
rand_end_position_x = rand_start_position_x - x_move_distance
rand_end_position_y = rand_start_position_y + y_move_distance
else:
rand_start_position_x = random.randint(int(beginarea_x + x_move_distance), finisharea_x)
rand_start_position_y = random.randint(int(beginarea_y + y_move_distance), finisharea_y)
rand_end_position_x = rand_start_position_x - x_move_distance
rand_end_position_y = rand_start_position_y - y_move_distance
point_swipe(rand_start_position_x, rand_start_position_y, int(rand_end_position_x), int(rand_end_position_y),
swipe_time, sleep_time)
@rectangle_inarea_rand_swipe.register(data_model.RectangleArea)
def _(area: data_model.RectangleArea, min_swipe_distance: int, max_swipe_distance: int,
swipe_time: float, sleep_time: float):
rectangle_inarea_rand_swipe(area.beginarea_x, area.finisharea_x, area.beginarea_y, area.finisharea_y,
min_swipe_distance, max_swipe_distance, swipe_time, sleep_time) | ADB-Easy-Control | /ADB_Easy_Control-1.0.1-py3-none-any.whl/ADB_Easy_Control/touch_event.py | touch_event.py |
import os
from ADB_Easy_Control import device_assistant
def get_grep_or_findstr() -> str:
if os.name == "nt":
return "findstr"
else:
return "grep"
def get_current_activity() -> str:
package_and_activity_string = os.popen(
"adb" + device_assistant.multi_devices_helper() + " shell dumpsys activity activities | " + get_grep_or_findstr() + " mCurrentFocus").read().split(
" ")[4]
separator = "/"
activity_string = package_and_activity_string[package_and_activity_string.index(separator) + 1:-2]
return activity_string
def get_current_package() -> str:
package_and_activity_string = os.popen(
"adb" + device_assistant.multi_devices_helper() + " shell dumpsys activity activities | " + get_grep_or_findstr() + " mCurrentFocus").read().split(
" ")[4]
separator = "/"
package_string = package_and_activity_string[:package_and_activity_string.index(separator)]
return package_string
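# A small usage sketch (hypothetical component names): launch an activity explicitly, or
# check which package is currently in the foreground.
# start_activity("com.android.settings", ".Settings")
# get_current_package()   # -> e.g. "com.android.settings"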
def start_activity(target_package: str, target_activity: str):
os.system(
"adb" + device_assistant.multi_devices_helper() + " shell am start -n " + target_package + "/" + target_activity)
def start_activity_with_parameter(target_package: str, target_activity: str, parameter: str):
os.system(
"adb" + device_assistant.multi_devices_helper() + " shell am start -n " + target_package + "/" + target_activity + " -d " + parameter)
def start_activity_by_action(target_intent_action: str):
os.system("adb" + device_assistant.multi_devices_helper() + " shell am start -a " + target_intent_action)
def start_activity_by_action_parameter(target_intent_action: str, parameter: str):
os.system(
"adb" + device_assistant.multi_devices_helper() + " shell am start -a " + target_intent_action + " -d " + parameter)
def start_service(target_package: str, target_service: str):
os.system(
"adb" + device_assistant.multi_devices_helper() + " shell am startservice -n " + target_package + "/" + target_service)
def start_service_with_parameter(target_package: str, target_service: str, parameter: str):
os.system(
"adb" + device_assistant.multi_devices_helper() + " shell am start -n " + target_package + "/" + target_service + " -d " + parameter)
def send_broadcast(parameter_and_action: str):
os.system("adb" + device_assistant.multi_devices_helper() + " shell am broadcast " + parameter_and_action)
def stop_app(target_package: str):
os.system("adb" + device_assistant.multi_devices_helper() + " shell am force-stop " + target_package) | ADB-Easy-Control | /ADB_Easy_Control-1.0.1-py3-none-any.whl/ADB_Easy_Control/app_assistant.py | app_assistant.py |
# ADB-Wifi
[](https://badge.fury.io/py/ADB-Wifi)
A script to automatically connect android devices in debug mode using WIFI
<p align="center">
<img src="extras/example.gif" width="100%" />
</p>
## Motivation
Every day I need to connect a lot of different devices to my computer.
Some devices have Micro-USB ports and others USB Type-C ports, so I lose time plugging in the devices and waiting for ADB.
So, I created this script to auto-connect a device using WIFI.
**The difference from other scripts and plugins:** the script saves the connections in a configuration file and tries to reconnect when you boot your computer or when your device loses its WIFI connection.
## Requirements
* Python 3
* ADB
## Installation
Using pip you can install ```adb-wifi```
### Linux and macOS:
```$ sudo pip install adb-wifi```
## Usage
1. Run ```$ adb-wifi```
You can add the ```adb-wifi``` to your startup applications.
2. Connect the devices to your computer and authorize the debug prompt.
**Attention:** If your device turns off (battery, etc.), you need to plug the device into the computer again because ADB needs to open the ```tcpip port```!
If your device is rooted you can use this [application](https://play.google.com/store/apps/details?id=com.ttxapps.wifiadb)
to turn on the ```tcpip port``` and ignore this step.
## Created & Maintained By
[Jorge Costa](https://github.com/extmkv)
| ADB-Wifi | /ADB-Wifi-0.4.2.tar.gz/ADB-Wifi-0.4.2/README.md | README.md |
# ADB Wrapper
```
python3 -m pip install ADBWrapper
```
```
from ADBWrapper import ADBWrapper
import time
if __name__ == "__main__":
adb = ADBWrapper( { "ip": "192.168.4.57" , "port": "5555" } )
adb.take_screen_shot()
adb.screen_shot.show()
adb.open_uri( "https://www.youtube.com/watch?v=naOsvWxeYgo&list=PLcW8xNfZoh7fCLYJi0m3JXLs0LdcAsc0R&index=1" )
adb.press_key_sequence( [ 22 , 22 , 22 , 22 ] )
time.sleep( 10 )
adb.press_keycode( "KEYCODE_MEDIA_PAUSE" )
adb.press_keycode( "KEYCODE_MEDIA_FAST_FORWARD" )
adb.press_keycode( "KEYCODE_MEDIA_PLAY" )
``` | ADBWrapper | /ADBWrapper-0.0.3.tar.gz/ADBWrapper-0.0.3/README.md | README.md |
# ADCPy - code to work with ADCP data from the raw binary using python 3.x
[](https://adcpy.readthedocs.io/en/latest/?badge=latest)
[](https://travis-ci.org/mmartini-usgs/ADCPy)
### Purpose
This code prepares large amounts of single ping ADCP data from the raw binary for use with xarray by converting it to netCDF.
### Motivation
The code was written for the TRDI ADCP when I discovered that TRDI's Velocity software could not easily export single ping data. While there are other packages out there, at the time of writing this code I had yet to find one that saved the data in netCDF format (so it can be accessed with xarray and dask), could be run on linux, windows and mac, and did not load it into memory (the files I have are > 2GB).
The code is written as a module of functions, rather than classes, ensemble information is stored as nested dicts, in order to be more readable and to make the structure of the raw data (particularly the TRDI instruments) understandable.
### Status
As the code stands now, a 3.5 GB, single ping Workhorse ADCP .pd0 file with 3 Million ensembles will take 4-5 hours to convert. I live with this, because I can just let the conversion happen overnight on such large data sets, and once my data is in netCDF, everything else is convenient and fast. I suspect that more speed might be acheived by making use of xarray and dask to write the netCDF output, and I may do this if time allows, and I invite an enterprising soul to beat me to it. I use this code myself on a routine basis in my work, and continue to make it better as I learn more about python.
At USGS Coastal and Marine Geology we use the PMEL EPIC convention for netCDF as we started doing this back in the early 1990's. Downstream we do convert to more current CF conventions; however, our diagnostic and other legacy code for processing instrument data from binary and other raw formats depends on the EPIC convention for time, so you will see a time (Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968) and time2 (msec since 0:00 GMT) variable created by default. This may confuse your code. If you want the more python friendly CF time (seconds since 1970-01-01T00:00:00 UTC) set timetype to CF.
Use at your own risk - this is a work in progress and a python learning project.
Enjoy,
Marinna
| ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/README.md | README.md |
# 10/4/2018 MM remove valid_range as it causes too many downstream problems
import sys, math
from netCDF4 import Dataset
from netCDF4 import num2date
import datetime as dt
# noinspection PyUnresolvedReferences
# import adcpy
from adcpy.TRDIstuff.TRDIpd0tonetcdf import julian
from adcpy.EPICstuff.EPICmisc import cftime2EPICtime
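# A minimal call sketch (hypothetical file names) for the converter defined below: pass an
# empty IBurst name for a MIDAS export, and goodens[1] < 0 to convert through the last ensemble.
# doNortekRawFile('burstfile.nc', '', 'outfile.cdf', [0, -1], timetype='EPIC')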
def doNortekRawFile(infileBName, infileIName, outfileName, goodens, timetype="CF"):
"""
Converts Nortek exported netcdf files of Signature data to raw current profile data to a netCDF4 file.
:param str infileBName: netCDF4 input Burst file from a Nortek Signature
:param str infileIName: path of the netCDF4 input IBurst file from a Nortek Signature
:param str outfileName: path of the netCDF4 output file
:param list[int] goodens: [start, end] ensemble indices to export
:param str timetype: "CF" or "EPIC" time format to use for the "time" variable
"""
# an empty IBurst file name means the IBurst data will be read from the Burst file (MIDAS export)
if infileIName == '':
midasdata = True
else:
midasdata = False
# TODO do this more elegantly, and make the default all ensembles
# this is necessary so that this function does not change the value
# in the calling function
ens2process = goodens[:]
nc = Dataset(infileBName, mode='r', format='NETCDF4')
# so the MIDAS output and the Contour output are different, here we hope
# to handle both, since MIDAS has been more tolerant of odd data
if midasdata:
ncI = nc
else:
ncI = Dataset(infileIName, mode='r', format='NETCDF4')
maxens = len(nc['Data']['Burst']['time'])
print('%s has %d ensembles' % (infileBName,maxens))
# TODO - ens2process[1] has the file size from the previous file run when multiple files are processed!
if ens2process[1] < 0:
ens2process[1] = maxens
# we are good to go, get the output file ready
print('Setting up netCDF output file %s' % outfileName)
# set up some pointers to the netCDF groups
config = nc['Config']
data = nc['Data']['Burst']
if 'IBurstHR' in ncI['Data'].groups:
idata = ncI['Data']['IBurstHR']
HRdata = True
else:
idata = ncI['Data']['IBurst']
HRdata = False
# TODO - pay attention to the possible number of bursts.
# we are assuming here that the first burst is the primary sample set of
# the four slant beams
# note that
# f4 = 4 byte, 32 bit float
# maxfloat = 3.402823*10**38;
intfill = -32768
floatfill = 1E35
nens = ens2process[1]-ens2process[0]
print('creating netCDF file %s with %d records' % (outfileName, nens))
cdf = Dataset(outfileName, 'w', clobber=True, format='NETCDF4')
# dimensions, in EPIC order
cdf.createDimension('time', nens)
if midasdata:
cdf.createDimension('depth', config.burst_nCells)
else:
cdf.createDimension('depth', config.Instrument_burst_nCells)
cdf.createDimension('lat', 1)
cdf.createDimension('lon', 1)
# write global attributes
cdf.history = "translated to USGS netCDF by Norteknc2USGScdf.py"
cdf.sensor_type = 'Nortek'
if midasdata:
cdf.serial_number = config.serialNumberDoppler
else:
cdf.serial_number = config.Instrument_serialNumberDoppler
# TODO - reduce the number if attributes we copy from the nc file
# build a dictionary of global attributes is a faster way to load attributes
# into a netcdf file http://unidata.github.io/netcdf4-python/#netCDF4.Dataset.setncatts
# put the "sensor_type" in front of the attributes that come directly
# from the instrument data
Nortek_config = dictifyatts(config, 'Nortek_')
cdf.setncatts(Nortek_config)
# it's not yet clear which way to go with this. python tools like xarray
# and panoply demand that time be a CF defined time.
# USGS CMG MATLAB tools need time and time2
# create the datetime object from the CF time
tobj = num2date(data['time'][:], data['time'].units, calendar=data['time'].calendar)
CFcount = data['time'][:]
CFunits = data['time'].units
EPICtime, EPICtime2 = cftime2EPICtime(CFcount,CFunits)
print('CFcount[0] = %f, CFunits = %s' % (CFcount[0], CFunits))
print('EPICtime[0] = %f, EPICtime2[0] = %f' % (EPICtime[0], EPICtime2[0]))
elapsed_sec = []
for idx in range(len(tobj)):
tdelta = tobj[idx]-tobj[0] # timedelta
elapsed_sec.append(tdelta.total_seconds())
# from the datetime object convert to time and time2
jd = []
time = []
time2 = []
# using u2 rather than u4 here because when EPIC time is written from this
# cdf to the nc file, it was getting messed up
# file is 1108sig001.cdf
# EPIC first time stamp = 25-Sep-2017 15:00:00
# seconds since 1970-01-01T00:00:00 UTC
# CF first time stamp = 25-Sep-2017 15:00:00
# file is 1108sig001.nc
# EPIC first time stamp = 08-Oct-5378 00:01:04
# seconds since 1970-01-01T00:00:00 UTC
# CF first time stamp = 25-Sep-2017 15:00:00
timevartype = 'u2'
for idx in range(len(tobj)):
j = julian(tobj[idx].year, tobj[idx].month, tobj[idx].day,
tobj[idx].hour, tobj[idx].minute, tobj[idx].second,
math.floor(tobj[idx].microsecond/10))
jd.append(j)
time.append(int(math.floor(j)))
time2.append(int((j - math.floor(j))*(24*3600*1000)))
if timetype == 'CF':
# cf_time for cf compliance and use by python packages like xarray
# if f8, 64 bit is not used, time is clipped
# TODO test this theory, because downstream 64 bit time is a problem
# for ADCP fast sampled, single ping data, need millisecond resolution
# cf_time = data['time'][:]
# cdf.createVariable('time','f8',('time'))
# cdf['time'].setncatts(dictifyatts(data['time'],''))
# cdf['time'][:] = cf_time[:]
varobj = cdf.createVariable('time', 'f8', ('time'))
varobj.setncatts(dictifyatts(data['time'], ''))
varobj[:] = data['time'][:]
varobj = cdf.createVariable('EPIC_time', timevartype,('time'))
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
# varobj[:] = time[:]
varobj[:] = EPICtime[:]
varobj = cdf.createVariable('EPIC_time2', timevartype, ('time'))
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
# varobj[:] = time2[:]
varobj[:] = EPICtime2[:]
else:
# we include cf_time for cf compliance and use by python packages like xarray
# if f8, 64 bit is not used, time is clipped
# for ADCP fast sampled, single ping data, need millisecond resolution
varobj = cdf.createVariable('cf_time', 'f8', ('time'))
varobj.setncatts(dictifyatts(data['time'], ''))
varobj[:] = data['time'][:]
# we include time and time2 for EPIC compliance
varobj = cdf.createVariable('time', timevartype, ('time'))
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
# varobj[:] = time[:]
varobj[:] = EPICtime[:]
varobj = cdf.createVariable('time2', timevartype, ('time'))
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
# varobj[:] = time2[:]
varobj[:] = EPICtime2[:]
cdf.start_time = '%s' % num2date(data['time'][0], data['time'].units)
cdf.stop_time = '%s' % num2date(data['time'][-1], data['time'].units)
print('times from the input file')
print(cdf.start_time)
print(cdf.stop_time)
print('times from the output file')
print('%s' % num2date(cdf['time'][0], cdf['time'].units))
print('%s' % num2date(cdf['time'][-1], cdf['time'].units))
varobj = cdf.createVariable('Rec', 'u4', ('time'), fill_value=intfill)
varobj.units = "count"
varobj.long_name = "Ensemble Count for each burst"
varobj[:] = data['EnsembleCount'][:]
varobj = cdf.createVariable('sv', 'f4', ('time'), fill_value=floatfill)
varobj.units = "m s-1"
varobj.long_name = "sound velocity (m s-1)"
varobj[:] = data['SpeedOfSound'][:]
# get the number of bins and the bin distances
# there are separate Amplitude_Range, Correlation_Range and Velocity_Range
# we will pass on Velocity_Range as bindist
varobj = cdf.createVariable('bindist', 'f4', ('depth'), fill_value=floatfill)
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "bindist")
varobj.units = "m"
varobj.long_name = "bin distance from instrument for slant beams"
varobj.epic_code = 0
varobj.NOTE = "distance is not specified by Nortek as along beam or vertical"
if midasdata:
vardata = data.variables['Velocity Range'][:] # in raw data
else:
# because this is a coordinate variable, one can't just say data['Burst_Velocity_Beam_Range'][:]
try:
vardata = data.variables['Burst Velocity_Range'][:] # in raw data
except:
vardata = data.variables['Burst Velocity Beam_Range'][:] # in processed data
varobj[:] = vardata
nbbins = vardata.size
# map the Nortek beams onto TRDI order since later code expects TRDI order
TRDInumber = [3, 1, 4, 2]
for i in range(4):
varname = "vel%d" % TRDInumber[i]
if midasdata:
key = 'VelocityBeam%d' % (i+1)
else:
key = 'Vel_Beam%d' % (i+1)
varobj = cdf.createVariable(varname, 'f4', ('time', 'depth'), fill_value=floatfill)
varobj.units = "m s-1"
varobj.long_name = "Beam %d velocity (m s-1)" % TRDInumber[i]
varobj.epic_code = 1277+i
varobj.NOTE = "beams reordered from Nortek 1-2-3-4 to TRDI 3-1-4-2, as viewed clockwise from compass 0 " + \
"degree reference, when instrument is up-looking"
varobj[:, :] = data[key][:, :]
for i in range(4):
varname = "cor%d" % (i+1)
if midasdata:
key = 'CorrelationBeam%d' % (i+1)
else:
key = 'Cor_Beam%d' % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "percent"
varobj.long_name = "Beam %d correlation" % (i+1)
# varobj.epic_code = 1285+i
varobj[:, :] = data[key][:, :]
for i in range(4):
varname = "att%d" % (i+1)
if midasdata:
key = 'AmplitudeBeam%d' % (i+1)
else:
key = 'Amp_Beam%d' % (i+1)
varobj = cdf.createVariable(varname, 'f4', ('time', 'depth'), fill_value=intfill)
varobj.units = "dB"
# varobj.epic_code = 1281+i
varobj.long_name = "ADCP amplitude of beam %d" % (i+1)
varobj[:, :] = data[key][:, :]
varname = 'Heading'
varobj = cdf.createVariable('Hdg', 'f4', ('time'), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "INST Heading"
varobj.epic_code = 1215
# TODO can we tell on a Signature if a magvar was applied at deployment?
# no metadata found in the .nc file global attributes
# varobj.NOTE_9 = "no heading bias was applied during deployment"
varobj[:] = data[varname][:]
varname = 'Pitch'
varobj = cdf.createVariable('Ptch', 'f4', ('time'), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "INST Pitch"
varobj.epic_code = 1216
varobj[:] = data[varname][:]
varname = 'Roll'
varobj = cdf.createVariable('Roll', 'f4', ('time'), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "INST Roll"
varobj.epic_code = 1217
varobj[:] = data[varname][:]
# The Signature records magnetometer data we are not converting at this time
varname = 'WaterTemperature'
varobj = cdf.createVariable('Tx', 'f4', ('time'), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "Water temperature at ADCP"
# TODO - verify if Signature is IPTS-1990
# 20:T :TEMPERATURE (C) :temp:C:f10.2:IPTS-1990 standard
# varobj.epic_code = 28
varobj[:] = data[varname][:]
varname = 'Pressure'
varobj = cdf.createVariable('Pressure', 'f4', ('time'), fill_value=floatfill)
varobj.units = "dbar"
varobj.long_name = "ADCP Transducer Pressure"
varobj.epic_code = 4
varobj[:] = data[varname][:]
# TODO - Signature can bottom track, and we don't have an example yet
# we will want to model it on the TRDI ADCP format that already exists
# it is possible in a Signature for the vertical beam data to be on a
# different time base. Test for this. If it is the same time base we can
# include it now. If it isn't we will have to add it later by some other
# code. 5th beam Signature data is stored under the IBurst group
# it is also possible for the number of bins to be different
if midasdata:
vrkey = 'Velocity Range'
else:
vrkey = 'IBurstHR Velocity_Range'
if data['time'].size == idata['time'].size:
if nbbins == idata.variables[vrkey].size:
varobj = cdf.createVariable("vel5", 'f4', ('time', 'depth'), fill_value=floatfill)
varobj.units = "m s-1"
varobj.long_name = "Beam 5 velocity (m s-1)"
# varobj.valid_range = [-32767, 32767]
varobj[:, :] = idata['VelocityBeam5'][:, :]
# varobj[:,:] = idata['Vel_Beam5'][:,:] # contour version
varobj = cdf.createVariable("cor5", 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "percent"
varobj.long_name = "Beam 5 correlation"
# varobj.valid_range = [0, 100]
varobj[:, :] = idata['CorrelationBeam5'][: ,:]
# varobj[:,:] = idata['Cor_Beam5'][:,:] # contour version
varobj = cdf.createVariable("att5", 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "dB"
varobj.long_name = "ADCP amplitude of beam 5"
# varobj.valid_range = [0, 255]
varobj[:, :] = idata['AmplitudeBeam5'][:,:]
# varobj[:,:] = idata['Amp_Beam5'][:,:] # contour version
else:
print("Vertical beam data found with different number of cells.")
cdf.Nortek_VBeam_note = "Vertical beam data found with different number of cells. Vertical beam data " + \
"not exported to netCDF"
print("Vertical beam data not exported to netCDF")
else:
print("Vertical beam data found with different number of ensembles.")
cdf.Nortek_VBeam_note = "Vertical beam data found with different number of ensembles. Vertical beam data " + \
"not exported to netCDF"
print("Vertical beam data not exported to netCDF")
nc.close()
cdf.close()
print("%d ensembles copied" % maxens)
def dictifyatts(varptr, tag):
"""
read netcdf attributes and return them as a dict
:param varptr: pointer to a netcdf variable object
:param tag: string to add to the beginning of the keys in the dict
:return: the attributes as a dict where the keys are the attribute names and the values are the attribute data
"""
theDict = {}
for key in varptr.ncattrs():
if key.startswith('Instrument_'):
# we need to strip the 'Instrument_' off the beginning
n = key.find('_')
newkey = tag+key[n+1:]
else:
newkey = tag+key
theDict[newkey] = varptr.getncattr(key)
return theDict
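# usage sketch (as used above when copying the Nortek Config group attributes):
#   Nortek_config = dictifyatts(nc['Config'], 'Nortek_')
#   cdf.setncatts(Nortek_config)
# an attribute such as 'Instrument_serialNumberDoppler' comes back keyed as
# 'Nortek_serialNumberDoppler' because the 'Instrument_' prefix is stripped first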
# TODO add - and -- types of command line arguments
def __main():
print('%s running on python %s' % (sys.argv[0], sys.version))
if len(sys.argv) < 3:
print("%s usage:" % sys.argv[0])
print("Norteknc2USGScdf infileBName infileIName outfilename [startingensemble endingensemble]" )
sys.exit(1)
try:
infileBName = sys.argv[1]
except:
print('error - Burst input file name missing')
sys.exit(1)
try:
infileIName = sys.argv[2]
except:
print('error - IBurst input file name missing')
sys.exit(1)
try:
outfileName = sys.argv[3]
except:
print('error - output file name missing')
sys.exit(1)
print('Converting %s to %s' % (infileBName, outfileName))
try:
goodens = [int(sys.argv[4]), int(sys.argv[5])]
except:
print("No starting and ending ensembles specified, processing entire file")
goodens = [0, -1]
try:
timetype = sys.argv[6]
except:
print('Time type will be CF')
timetype = "CF"
print('Start file conversion at ', dt.datetime.now())
doNortekRawFile(infileBName, infileIName, outfileName, goodens, timetype)
print('Finished file conversion at ', dt.datetime.now())
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/Nortekstuff/Norteknc2USGScdf.py | Norteknc2USGScdf.py |
import sys
import os
import datetime as dt
import netCDF4 as nc
from netCDF4 import num2date
import numpy as np
import math
# noinspection PyUnboundLocalVariable,SpellCheckingInspection
def reshapeEPIC(cont_file, burst_file, burst_length, dim='time', edges=None, drop=None,
variable_attributes_to_omit=None, verbose=False):
"""
apportion a continuous time series file into bursts (e.g. reshape)
:usage: issue_flags = reshapeEPIC(cont_file, burst_file, burst_length,
dim=None, edges=None, drop=None)
:param str cont_file: name of netCDF file with continuous data
:param str burst_file: name of file to store the reshaped data, attributes will be copied
:param int burst_length: maximum number of samples in each burst
:param str dim: name of dimension along which we will split the data, usually 'time' or 'Rec'
:param list[tuple] edges: [(start0, end0), (start1, end1), ...] index pairs defining the extent of each burst
:param set drop: set of variable names to omit from the output file
:param dict variable_attributes_to_omit: variable attributes to omit from output file
:param bool verbose: get lots of feedback to STDOUT
:return: dictionary of problem types and status
"""
print('%s running on python %s' % (sys.argv[0], sys.version))
print('Start file conversion at ', dt.datetime.now())
# check for the output file's existence before we try to delete it.
try:
os.remove(burst_file)
print('{} removed'.format(burst_file))
except FileNotFoundError:
pass
continuous_cdf = nc.Dataset(cont_file, format="NETCDF4")
if dim in continuous_cdf.dimensions:
print('the dimension we are operating on is {}'.format(dim))
else:
print('{} not found in input file, aborting'.format(dim))
continuous_cdf.close()
return None
# create the new file
burst_cdf = nc.Dataset(burst_file, mode="w", clobber=True, format='NETCDF4')
# incoming data may be uneven, we need proper fill to be added
burst_cdf.set_fill_on()
# copy the global attributes
# first get a dict of them so that we can iterate
gatts = {}
for attr in continuous_cdf.ncattrs():
# print('{} = {}'.format(attr,getattr(continuous_cdf, attr)))
gatts[attr] = getattr(continuous_cdf, attr)
# add a few more important ones we will fill in later
gatts['start_time'] = ""
gatts['stop_time'] = ""
gatts['DELTA_T'] = ""
gatts['history'] = getattr(continuous_cdf, 'history') + '; converted to bursts by reshapeEPIC.py'
burst_cdf.setncatts(gatts)
print('Finished copying global attributes\n')
for item in continuous_cdf.dimensions.items():
print('Defining dimension {} which is {} long in continuous file'.format(item[0], len(item[1])))
if item[0] == dim:
# this is the dimension along which we will reshape
if len(edges) > 1:
nbursts = len(edges)
else:
nbursts = math.floor(len(item[1]) / burst_length)
burst_cdf.createDimension(dim, nbursts)
print('Reshaped dimension {} created for {} bursts'.format(item[0], nbursts))
else:
burst_cdf.createDimension(item[0], len(item[1]))
burst_cdf.createDimension('sample', burst_length)
# ---------------- set up the variables
# order of dimensions matters.
# per https://cmgsoft.repositoryhosting.com/trac/cmgsoft_m-cmg/wiki/EPIC_compliant
# for a burst file dimension order needs to be time, sample, depth, [lat], [lon]
for cvar in continuous_cdf.variables.items():
cvarobj = cvar[1]
print('{} is data type {}'.format(cvarobj.name, cvarobj.dtype))
try:
fill_value = cvarobj.getncattr('_FillValue')
if verbose:
print('\tthe fill value is {}'.format(fill_value))
except AttributeError:
print('\tfailed to read the fill value')
fill_value = False # do not use None here!!!
if verbose:
print('\tfillValue in burst file will be set to {} (if None, then False will be used)'.format(fill_value))
if cvarobj.name not in drop: # are we copying this variable?
dtype = cvarobj.dtype
if dim in cvarobj.dimensions: # are we reshaping this variable?
vdims_cont = cvarobj.dimensions
vdims_burst = []
for t in enumerate(vdims_cont):
vdims_burst.append(t[1])
if t[1] == dim:
vdims_burst.append('sample')
print('\tappending sample in {}'.format(cvarobj.name))
varobj = burst_cdf.createVariable(cvarobj.name, dtype, tuple(vdims_burst), fill_value=fill_value)
else:
# for a normal copy, no reshape
varobj = burst_cdf.createVariable(cvarobj.name, dtype, cvarobj.dimensions, fill_value=fill_value)
# copy the variable attributes
# first get a dict of them so that we can iterate
vatts = {}
for attr in cvarobj.ncattrs():
# print('{} = {}'.format(attr,getattr(continuous_cdf,attr)))
if attr not in variable_attributes_to_omit:
vatts[attr] = getattr(cvarobj, attr)
try:
varobj.setncatts(vatts)
except AttributeError:
print('AttributeError for {}'.format(cvarobj.name))
# not a coordinate but a fill value of None might cause problems
burst_cdf.createVariable('burst', 'uint16', ('time',), fill_value=False)
# these are coordinates and thus cannot have fill as their values
varobj = burst_cdf.createVariable('sample', 'uint16', ('sample',), fill_value=False)
varobj.units = "count"
try:
burst_cdf.createVariable('depth', 'float32', ('depth',), fill_value=False)
except:
pass # likely depth was already set up if this happens
# TODO - if edges is length 1, then we need to create the burst edges here
# --------- populate the file
# note that we don't have to change from a define to a read mode here
# coordinate variables are small(er) and can be done at once, be sure to use generative methods
print(f'\nNow populating data for {nbursts} bursts')
burst_cdf['burst'][:] = list(range(nbursts))
burst_cdf['sample'][:] = list(range(burst_length))
nbins = len(burst_cdf['depth'])
try:
binsize = continuous_cdf['depth'].bin_size_m
except AttributeError:
try:
binsize = continuous_cdf['depth'].bin_size
except AttributeError:
print('Warning: no bin size information found, assuming 1 m')
binsize = 1
try:
bin1distance = continuous_cdf['depth'].center_first_bin_m
except AttributeError:
try:
bin1distance = continuous_cdf['depth'].center_first_bin
except AttributeError:
print('Warning: no depth center of first bin information found, assuming 0.5 bins ')
bin1distance = binsize / 2
ranges_m = list(map(lambda ibin: bin1distance / 100 + ibin * binsize / 100, range(nbins)))
burst_cdf['depth'][:] = ranges_m
issue_flags = {}
diagnosticvars = {} # vars to generate diagnostic output
for cvar in continuous_cdf.variables.items():
varname = cvar[1].name
issue_flags[varname] = []
if varname not in drop:
# the variable objects in Continuous and Burst files
cvarobj = continuous_cdf[varname]
bvarobj = burst_cdf[varname]
vdims_cont = cvarobj.dimensions
vshapes_cont = cvarobj.shape
vndims_cont = len(cvarobj.dimensions)
vdims_burst = burst_cdf[varname].dimensions
vshapes_burst = burst_cdf[varname].shape
vndims_burst = len(burst_cdf[varname].dimensions)
if verbose:
print('{}\tin Continuous file is data type {} shape {}'.format(
varname, cvarobj.dtype, cvarobj.shape))
print('\tin Burst file it is data type {} shape {}'.format(
bvarobj.dtype, bvarobj.shape))
try:
fillval_burst = burst_cdf[varname].getncattr('_FillValue')
except:
if ('EPIC' in varname) and verbose:
# EPIC was ending up with odd fill values in the raw file
# this will avoid the typerror when EPIC_time is written
# not sure it is the best solution, for now it works
fillval_burst = 0
print('\tfillval_burst {}'.format(fillval_burst))
# this will prevent EPIC_time from being written
# fillval_burst = None
else:
fillval_burst = None
if 'sample' not in vdims_burst:
bvarobj[:] = continuous_cdf[varname][:]
else:
for iburst in range(nbursts):
continuous_cdf_corner = np.zeros(vndims_cont)
continuous_cdf_edges = np.ones(vndims_cont)
# look up data in the continuous file according to the user's indices
continuous_cdf_corner[vdims_cont.index('time')] = edges[iburst][0]
ndatasamples = edges[iburst][1] - edges[iburst][0]
continuous_cdf_edges[vdims_cont.index('time')] = ndatasamples
if 'depth' in vdims_cont:
continuous_cdf_edges[vdims_cont.index('depth')] = vshapes_cont[vdims_cont.index('depth')]
if (iburst == 0) and verbose:
print('\tcontinuous_cdf_corner = {}, continuous_cdf_edges = {}'.format(
continuous_cdf_corner, continuous_cdf_edges))
# get the data, and this will be contingent on the number of dims
if vndims_cont == 1:
data = continuous_cdf[varname][int(continuous_cdf_corner[0]):int(continuous_cdf_corner[0]) +
int(continuous_cdf_edges[0])]
elif vndims_cont == 2:
if varname in diagnosticvars:
data = continuous_cdf[varname]
data = continuous_cdf[varname][int(continuous_cdf_corner[0]):int(continuous_cdf_corner[0]) +
int(continuous_cdf_edges[0]),
int(continuous_cdf_corner[1]):int(continuous_cdf_corner[1]) +
int(continuous_cdf_edges[1])]
elif vndims_cont == 3:
data = continuous_cdf[varname][int(continuous_cdf_corner[0]):int(continuous_cdf_corner[0]) +
int(continuous_cdf_edges[0]),
int(continuous_cdf_corner[1]):int(continuous_cdf_corner[1]) +
int(continuous_cdf_edges[1]),
int(continuous_cdf_corner[2]):int(continuous_cdf_corner[2]) +
int(continuous_cdf_edges[2])]
elif vndims_cont == 4:
data = continuous_cdf[varname][int(continuous_cdf_corner[0]):int(continuous_cdf_corner[0]) +
int(continuous_cdf_edges[0]),
int(continuous_cdf_corner[1]):int(continuous_cdf_corner[1]) +
int(continuous_cdf_edges[1]),
int(continuous_cdf_corner[2]):int(continuous_cdf_corner[2]) +
int(continuous_cdf_edges[2]),
int(continuous_cdf_corner[3]):int(continuous_cdf_corner[3]) +
int(continuous_cdf_edges[3])]
else:
if iburst == 0:
print('did not read data')
burstcorner = np.zeros(vndims_burst)
burstedges = np.ones(vndims_burst)
burstcorner[vdims_burst.index('time')] = iburst
burstedges[vdims_burst.index('time')] = burst_length
# since we don't have regular and recurring indices, we need to handle
# situations where the data read is not the maximum number of samples
# samples MUST be the second dimension!
if ndatasamples < burst_length:
issue_flags[varname].append(ndatasamples)
if len(data.shape) == 1:
# start with a filled array
burstdata = np.full((1, vshapes_burst[1]), fillval_burst)
burstdata[:, 0:ndatasamples] = data[:]
elif len(data.shape) == 2:
# start with a filled array
burstdata = np.full((1, vshapes_burst[1], vshapes_burst[2]), fillval_burst)
burstdata[:, 0:ndatasamples] = data[:, :]
elif len(data.shape) == 3:
# start with a filled array
burstdata = np.full((1, vshapes_burst[1], vshapes_burst[2], vshapes_burst[3]),
fillval_burst)
burstdata[:, 0:ndatasamples, :] = data[:, :, :]
elif len(data.shape) == 4:
# start with a filled array
burstdata = np.full((1, vshapes_burst[1], vshapes_burst[2],
vshapes_burst[3], vshapes_burst[4]), fillval_burst)
burstdata[:, 0:ndatasamples, :, :] = data[:, :, :, :]
elif len(data.shape) == 5:
# start with a filled array
burstdata = np.full((1, vshapes_burst[1], vshapes_burst[2],
vshapes_burst[3], vshapes_burst[4],
vshapes_burst[5]), fillval_burst)
burstdata[:, 0:ndatasamples, :, :, :] = data[:, :, :, :, :]
else:
burstdata = data
if ('EPIC' in varname) and (iburst == 0) and verbose:
print('\tdata {}'.format(data[1:10]))
print('\tburstdata {}'.format(burstdata[1:10]))
if (iburst == 0) and verbose:
print('\tburstdata.shape = {} burst file dims {}'.format(
burstdata.shape, vdims_burst))
print('\tvndims_burst = {}'.format(vndims_burst))
print('\tdata.shape = {}'.format(data.shape))
# TODO -- can we simplify this code by making a function object?
if len(burstdata.shape) == 1:
try:
burst_cdf[varname][iburst] = burstdata[:]
except TypeError:
# TypeError: int() argument must be a string,
# a bytes-like object or a number, not 'NoneType'
# EPIC_time was given a fill value in the raw file.
# this was solved by making sure coordinate variables had fill value set to False
if iburst == 0:
print('\t{} in Burst file is data type {}, burstdata is type {}, '.format(
varname, bvarobj.dtype, type(burstdata)))
print(' and got a TypeError when writing')
except IndexError: # too many indices for array
if iburst == 0:
print('too many indices for array')
print('iburst = {}'.format(iburst))
print('burstdata = {}'.format(burstdata))
except ValueError:
if iburst == 0:
print('ValueError ')
elif len(burstdata.shape) == 2:
try:
burst_cdf[varname][iburst, :] = burstdata[:, :]
except TypeError:
# TypeError: int() argument must be a string,
# a bytes-like object or a number, not 'NoneType'
# EPIC_time was given a fill value in the raw file.
# this was solved by making sure coordinate variables had fill value set to False
if iburst == 0:
print('\t{} in Burst file is data type {}, burstdata is type {} '.format(
varname, bvarobj.dtype, type(burstdata)))
print('and got a TypeError when writing')
except IndexError: # too many indices for array
if iburst == 0:
print('too many indices for array')
print('iburst = {}'.format(iburst))
print('burstdata = {}'.format(burstdata))
except ValueError:
if iburst == 0:
print('ValueError ')
elif len(burstdata.shape) == 3:
try:
burst_cdf[varname][iburst, :, :] = burstdata[:, :, :]
except TypeError:
if iburst == 0:
print('\t{} is data type {} and got a TypeError when writing'.format(
varname, cvarobj.dtype))
except IndexError: # too many indices for array
if iburst == 0:
print('too many indices for array')
print('iburst = {}'.format(iburst))
print('burstdata = {}'.format(burstdata))
except ValueError:
if iburst == 0:
print('ValueError cannot reshape array of size 1 into shape (1,150,1,1)')
# here we have shapes [time lat lon]
elif len(burstdata.shape) == 4:
try:
burst_cdf[varname][iburst, :, :, :] = burstdata[:, :, :, :]
except TypeError:
if iburst == 0:
print('\t{} is data type {} and got a TypeError when writing'.format(
varname, cvarobj.dtype))
except IndexError: # too many indices for array
if iburst == 0:
print('too many indices for array')
print('iburst = {}'.format(iburst))
print('burstdata = {}'.format(burstdata))
except ValueError:
if iburst == 0:
print('ValueError cannot reshape array of size 1 into shape (1,150,1,1)')
# here we have shapes [time lat lon]
elif len(burstdata.shape) == 5:
try:
burst_cdf[varname][iburst, :, :, :] = burstdata[:, :, :, :, :]
except TypeError:
if iburst == 0:
print('\t{} is data type {} and got a TypeError when writing'.format(
varname, cvarobj.dtype))
except IndexError: # too many indices for array
if iburst == 0:
print('too many indices for array')
print('iburst = {}'.format(iburst))
print('burstdata = {}'.format(burstdata))
except ValueError:
if iburst == 0:
print('got a value error')
else:
if iburst == 0:
print('\tnot set up to write {} dimensions to burst file'.format(
len(burstdata.shape)))
# end of for iburst in range(nbursts):
# end of if 'sample' not in vdims_burst:
# end of if varname not in drop:
# for cvar in continuous_cdf.variables.items():
burst_cdf.start_time = str(num2date(burst_cdf['time'][0, 0], burst_cdf['time'].units))
burst_cdf.stop_time = str(num2date(burst_cdf['time'][-1, 0], burst_cdf['time'].units))
burst_cdf.close()
continuous_cdf.close()
print('Finished file conversion at ', dt.datetime.now())
return issue_flags
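# usage sketch (file names are placeholders; values mirror the waves example script):
#   issue_flags = reshapeEPIC('continuous.nc', 'bursts.nc', 2048, dim='time',
#                             edges=edge_tuples, drop={'EPIC_time', 'EPIC_time2'},
#                             variable_attributes_to_omit={'valid_range'})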
# utility functions for creating and managing indexes
# make a function to identify the indices
# and this is where tuples are nice
def find_boundaries(data, edges):
"""
using a list of start and end timestamps (edges) that delineate the beginning and ending times
of bursts of measurements, find the indices into the data that correspond to these edges.
The time base may be irregular, it does not matter.
:param list data: time stamps from the data
:param list[tuple] edges: start and end times
:return: list of indices
"""
nparray = np.array(data)  # make the data a numpy array for access to numpy's methods
idx = []
for edge in edges:
s = np.where(nparray >= edge[0])
e = np.where(nparray >= edge[1])
idx.append((int(s[0][0]), int(e[0][0])))
if not (len(idx) % 100):
print('.', end='')
print('\n')
return idx
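# usage sketch (hypothetical second values): edges are (start, end) pairs in the same
# units as the time variable; the return is the matching (start, end) index pairs
#   edges = find_boundaries(continuous_cdf['time'][:], [(0, 1024), (3600, 4624)])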
def find_first_masked_value(x):
"""
helper function to find the first occurrence of a masked value in a numpy masked array
returns None if no masked values are found
:param numpy array x:
:return: index
"""
for tpl in enumerate(x):
# print(type(tpl[1]))
if type(tpl[1]) == np.ma.core.MaskedConstant:
# print(tpl[0])
return tpl[0]
return None
def generate_expected_start_times(cdffile, dim, burst_start_offset,
burst_interval, burst_length, sample_rate):
"""
generate a regular and recurring set of start and end timestamps that
delineate the beginning and ending times of bursts of measurements
:param str cdffile: name of a continuous time series data file
:param str dim: the unlimited or time dimension which we will find the indices to reshape
:param int burst_start_offset: when to start to make bursts in the continuous data, seconds
:param int burst_interval: time between start of bursts, seconds
:param int burst_length: number of samples in a burst
:param int sample_rate: Hertz
:return: list of tuples of start and end times for each burst
"""
# TODO - do this from a first burst first sample time stamp
print('the file we are looking up times in is {}'.format(cdffile))
cdf = nc.Dataset(cdffile, format="NETCDF4")
if dim in cdf.dimensions:
print('the dimension we are operating on is {}'.format(dim))
else:
print('{} not found in {} file, aborting'.format(dim, cdffile))
cdf.close()
return None
print('loading the time variable data')
t = cdf['time'][:]
# check for masked/bad values
good_times = np.ma.masked_invalid(t)
print('there are {} times of which {} are good, searching for the first masked time'.format(
len(t), good_times.count()))
start_of_bad = find_first_masked_value(t)
if start_of_bad is None:
print('all times are good!')
else:
print('masked times start after {}'.format(num2date(t[start_of_bad - 1], cdf['time'].units)))
# get the number of bursts based on the elapsed time
print('len t = {} {} {} to {}'.format(len(t), type(t), t[0], t[-1]))
tfirst = num2date(t[0], cdf['time'].units)
if start_of_bad is None:
tlast = num2date(t[-1], cdf['time'].units)
else:
tlast = num2date(t[start_of_bad - 1], cdf['time'].units)
nbursts = int((tlast - tfirst).total_seconds() / burst_interval)
burst_start_times = []
for x in range(nbursts):
burst_start_times.append(burst_start_offset + x * burst_interval)
burst_duration = (1 / sample_rate) * burst_length # seconds
burst_end_times = list(map(lambda x: x + burst_duration, burst_start_times))
print('start times {} such as {}...'.format(len(burst_start_times), burst_start_times[0:5]))
print('end times {} such as {}...'.format(len(burst_end_times), burst_end_times[0:5]))
print('the last time is {} seconds from the start of the experiment'.format(cdf['time'][-1]))
# it turns out later to be convenient to have this as a list of tuples of start and end
slices = list(map(lambda s, e: (s, e), burst_start_times, burst_end_times))
print('edge tuples {} such as {}...'.format(len(slices), slices[0:5]))
cdf.close()
return slices
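# usage sketch (values taken from the waves example script in this package):
#   slices = generate_expected_start_times('WAV17M2T02whV.nc', 'time', burst_start_offset=0,
#                                          burst_interval=3600, burst_length=2048, sample_rate=2)
# each element of slices is a (start, end) pair in the units of the file's time variable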
def save_indexes_to_file(cdffile, edge_tuples, index_file=None):
"""
write indexes to a file with the time stamps for QA/QC
:param str cdffile: the continuous time series netCDF file being operated upon
:param list[tuple] edge_tuples: the bursts to output
:param str index_file: a file to output a string listing of time stamps
"""
cdf = nc.Dataset(cdffile, format="NETCDF4")
tunits = cdf['time'].units
if index_file is None:
index_file = cdffile.split('.')[0] + 'indices.txt'
with open(index_file, 'w') as outfile:
outfile.write('Burst Indexes for {}\n'.format(cdffile))
outfile.write('Burst, start index, end index, number of samples, start time, end time\n')
for x in enumerate(edge_tuples):
t0 = num2date(cdf['time'][x[1][0]], tunits)
t1 = num2date(cdf['time'][x[1][1]]-1, tunits)
try:
s = '{}, {}, {}, {}, {}, {}\n'.format(x[0], x[1][0], x[1][1],
x[1][1] - x[1][0], t0, t1)
except:
s = '{}, {}, {}, , , \n'.format(x[0], x[1][0], x[1][1])
outfile.write(s)
cdf.close()
print('Indexes written to {}'.format(index_file))
if __name__ == "__main__":
# then we have been run from the command line
if len(sys.argv) < 4:
print("%s \n usage:" % sys.argv[0])
print("reshapeEPIC(ContFile, burst_file, burst_length, [dim2changename], [edges], [vars2omit])")
sys.exit(1)
if len(sys.argv) > 4:
# TODO the keyword pairs do not get into reshapeEPIC correctly,
reshapeEPIC(sys.argv[1], sys.argv[2], int(sys.argv[3]), sys.argv[4:])
else:
reshapeEPIC(sys.argv[1], sys.argv[2], int(sys.argv[3]))
else:
# we have been imported
# the argument passing here works fine
pass | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/EPICstuff/reshapeEPIC.py | reshapeEPIC.py |
from netCDF4 import Dataset
from netCDF4 import num2date
import datetime as dt
from pandas import Timestamp
import numpy as np
import math
import os
import sys
def s2hms(secs):
"""
convert seconds to hours, minutes and seconds
:param int secs:
:return: hours, minutes and seconds
"""
hour = math.floor(secs/3600)
mn = math.floor((secs % 3600)/60)
sec = secs % 60
return hour, mn, sec
def jdn(dto):
"""
convert datetime object to Julian Day Number
:param object dto: datetime
:return: int Julian Day Number
"""
year = dto.year
month = dto.month
day = dto.day
not_march = month < 3
if not_march:
year -= 1
month += 12
fr_y = math.floor(year / 100)
reform = 2 - fr_y + math.floor(fr_y / 4)
jjs = day + (
math.floor(365.25 * (year + 4716)) + math.floor(30.6001 * (month + 1)) + reform - 1524)
return jjs
def ajd(dto):
"""
Given datetime object returns Astronomical Julian Day.
Day is from midnight 00:00:00+00:00 with day fractional
value added.
:param object dto: datetime
:return: int Astronomical Julian Day
"""
jdd = jdn(dto)
day_fraction = dto.hour / 24.0 + dto.minute / 1440.0 + dto.second / 86400.0
return jdd + day_fraction - 0.5
# noinspection SpellCheckingInspection
def cftime2EPICtime(timecount, timeunits):
"""
take a CF time variable and convert to EPIC time and time2
:param timecount: count of time units (e.g. minutes) since the time stamp given in timeunits
:param str timeunits: CF time units string, e.g. 'minutes since 2017-01-01 00:00:00.00'
:return: EPIC time (True Julian Day) and time2 (milliseconds since midnight UTC)
"""
buf = timeunits.split()
t0 = dt.datetime.strptime(buf[2]+' '+buf[3], '%Y-%m-%d %H:%M:%S.%f')
t0j = ajd(t0)
# julian day for EPIC is the beginning of the day e.g. midnight
t0j = t0j+0.5 # add 0.5 because ajd() subtracts 0.5
if buf[0] == 'hours':
tj = timecount/24
elif buf[0] == 'minutes':
tj = timecount/(24*60)
elif buf[0] == 'seconds':
tj = timecount/(24*60*60)
elif buf[0] == 'milliseconds':
tj = timecount/(24*60*60*1000)
elif buf[0] == 'microseconds':
tj = timecount/(24*60*60*1000*1000)
else:
# TODO add a warning here, we're here because no units were recognized
tj = timecount
tj = t0j+tj
time = math.floor(tj)
time2 = math.floor((tj-time)*(24*3600*1000))
return time, time2
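# usage sketch (hypothetical count): note that the units string must carry fractional
# seconds because of the strptime format used above
#   time, time2 = cftime2EPICtime(86400.0, 'seconds since 2017-09-25 00:00:00.00')
# time is the EPIC True Julian Day and time2 is milliseconds past midnight UTC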
def EPICtime2datetime(time, time2):
"""
convert EPIC time and time2 to python datetime object
:param numpy array time:
:param numpy array time2:
:return: gregorian time as a list of int, datetime object
"""
# TODO - is there a rollover problem with this algorithm?
dtos = []
gtime = []
for idx in range(len(time)):
# time and time2 are the julian day and milliseconds
# in the day as per PMEL EPIC convention for netCDF
jd = time[idx]+(time2[idx]/(24*3600*1000))
secs = (jd % 1)*(24*3600)
j = math.floor(jd) - 1721119
in1 = 4*j-1
y = math.floor(in1/146097)
j = in1 - 146097*y
in1 = math.floor(j/4)
in1 = 4*in1 + 3
j = math.floor(in1/1461)
d = math.floor(((in1 - 1461*j) + 4)/4)
in1 = 5*d - 3
m = math.floor(in1/153)
d = math.floor(((in1 - 153*m) + 5)/5)
y = y*100 + j
mo = m-9
yr = y+1
if m < 10:
mo = m+3
yr = y
hour, mn, sec = s2hms(secs)
ss = math.floor(sec)
hundredths = math.floor((sec-ss)*100)
gtime.append([yr, mo, d, hour, mn, ss, hundredths])
# centiseconds * 10000 = microseconds
dto = dt.datetime(yr, mo, d, hour, mn, ss, int(hundredths*10000))
dtos.append(dto)
return gtime, dtos
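# usage sketch: given EPIC time (True Julian Day) and time2 (msec past midnight) arrays,
#   gtime, dtos = EPICtime2datetime(nc['time'][:], nc['time2'][:])
# gtime holds [yr, mo, d, hr, mn, s, hundredths] lists and dtos the matching datetime objects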
def resample_cleanup(datafiles):
for file_name in datafiles:
print(f'Working on file {file_name}')
# re-open the dataset for numerical operations such as min and max
# we have to make attribute changes, etc. so need to open with the netCDF package
pyd = Dataset(file_name, mode="r+", format='NETCDF4')
dim_keys = pyd.dimensions.keys()
# add minimum and maximum attributes and replace NaNs with _FillValue
for var_key in pyd.variables.keys():
if (var_key not in dim_keys) & (var_key not in {'time', 'EPIC_time', 'EPIC_time2'}):
data = pyd[var_key][:]
nan_idx = np.isnan(pyd[var_key][:])
mn = np.min(data[~nan_idx])
mx = np.max(data[~nan_idx])
print('%s min = %f, max = %f' % (var_key, mn, mx))
pyd[var_key].minimum = mn
pyd[var_key].maximum = mx
fill = pyd.variables[var_key].getncattr('_FillValue')
data[nan_idx] = fill
pyd[var_key][:] = data
# Add back EPIC time
timecount = pyd['time']
timeunits = pyd['time'].units
time, time2 = cftime2EPICtime(timecount, timeunits)
print('first time = %7d and %8d' % (time[0], time2[0]))
# noinspection PyBroadException
try:
varobj = pyd.createVariable('EPIC_time', 'u4', ('time'))
except:
print('EPIC_time exists, updating.')
varobj = pyd['EPIC_time']
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj[:] = time[:]
try:
varobj = pyd.createVariable('EPIC_time2','u4',('time'))
except:
print('EPIC_time2 exists, updating.')
varobj = pyd['EPIC_time2']
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj[:] = time2[:]
# re-compute DELTA_T in seconds
dtime = np.diff(pyd['time'][:])
dtm = dtime.mean().astype('float').round()
u = timeunits.split()
if u[0] == 'minutes':
dtm = dtm*60
elif u[0] == 'hours':
dtm = dtm*60*60
elif u[0] == 'milliseconds':
dtm = dtm/1000
elif u[0] == 'microseconds':
dtm = dtm/1000000
DELTA_T = '%d' % int(dtm)
pyd.DELTA_T = DELTA_T
print(DELTA_T)
# check start and stop time
pyd.start_time = '%s' % num2date(pyd['time'][0],pyd['time'].units)
pyd.stop_time = '%s' % num2date(pyd['time'][-1],pyd['time'].units)
print('cf start time %s' % pyd.start_time)
print('cf stop time %s' % pyd.stop_time)
# add the variable descriptions
var_desc = ''
for var_key in pyd.variables.keys():
if var_key not in dim_keys:
var_desc = var_desc+':'+var_key
var_desc = var_desc[1:]
print(var_desc)
pyd.VAR_DESC = var_desc
pyd.close()
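# usage sketch (placeholder names): datafiles is a list of netCDF file names, typically the
# output of a resampling step, whose min/max, EPIC_time, DELTA_T and VAR_DESC are restored here
#   resample_cleanup(['resampled01.nc', 'resampled02.nc'])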
def catEPIC(datafiles, outfile):
nfiles = len(datafiles)
# use the first file to establish some information
nc0 = Dataset(datafiles[0], mode = 'r', format = 'NETCDF4')
varnames = nc0.variables.keys()
if 'time2' not in varnames:
CFtime = True
if 'calendar' not in nc0['time'].__dict__:
print('No calendar specified, using gregorian')
nccalendar = 'gregorian'
else:
nccalendar = nc0['time'].calendar
else:
CFtime = False
nc0.close()
# now glean time information from all the files
alldt = np.array([])
timelens = []
for ifile in range(nfiles):
print(datafiles[ifile])
nc = Dataset(datafiles[ifile], mode = 'r', format = 'NETCDF4')
timelens.append(nc.dimensions['time'].size)
if CFtime:
tobj = num2date(nc['time'][:],nc['time'].units,calendar=nccalendar)
alldt = np.concatenate((alldt,tobj))
else:
gtime, tobj = EPICtime2datetime(nc['time'][:], nc['time2'][:])
alldt = np.concatenate((alldt, tobj))
print('file %d is %s to %s' % (ifile, tobj[0], tobj[-1]))
print(' first time object nc[''time''][0] is %f' % nc['time'][0])
print(' time units are %s' % nc['time'].units)
#if 'corvert' in nc.variables.keys():
# print(' there is a corvert')
nc.close()
# it was the case in the MATLAB version that the schema technique
# would reorder the variables - not sure python does this
# reordering the variables might not be a problem here since they are
# iterated by name
# dimensions
ncid_out = Dataset(outfile, "w", clobber=True, format="NETCDF4")
ncid_in = Dataset(datafiles[0], mode = 'r', format = 'NETCDF4')
for dimname in ncid_in.dimensions.keys():
if 'time' in dimname:
ncid_out.createDimension('time',len(alldt))
else:
ncid_out.createDimension(dimname,ncid_in.dimensions[dimname].size)
# global attributes
for attname in ncid_in.__dict__:
ncid_out.setncattr(attname,ncid_in.getncattr(attname))
# variables with their attributes
for varname in ncid_in.variables.keys():
print('Creating %s as %s' % (varname, ncid_in[varname].datatype))
ncid_out.createVariable(varname, ncid_in[varname].datatype,
dimensions = ncid_in[varname].dimensions)
for attname in ncid_in[varname].__dict__:
ncid_out[varname].setncattr(attname, ncid_in[varname].getncattr(attname))
ncid_out.close()
ncid_in.close()
# load the data
ncid_out = Dataset(outfile, mode='r+', format="NETCDF4")
print(timelens)
for ifile in range(nfiles):
if ifile == 0:
outidx_start = 0
outidx_end = outidx_start+timelens[ifile]
else:
outidx_start = outidx_end
outidx_end = outidx_start+timelens[ifile]
print('getting data from file %s and writing %d to %d (pythonic indices)' % (datafiles[ifile], outidx_start, outidx_end))
ncid_in = Dataset(datafiles[ifile], mode="r", format="NETCDF4")
# TODO - check for the variable in the outfile
for varname in ncid_in.variables.keys():
dimnames = ncid_in[varname].dimensions
if 'time' in dimnames:
s = outidx_start
e = outidx_end
else:
s = 0
e = len(ncid_in[varname])
ndims = len(ncid_in[varname].dimensions)
#print('%s, %d' % (varname, ndims))
#print(len(ncid_in[varname]))
if ndims == 1:
ncid_out[varname][s:e] = ncid_in[varname][:]
elif ndims == 2:
ncid_out[varname][s:e,:] = ncid_in[varname][:,:]
elif ndims == 3:
ncid_out[varname][s:e,:,:] = ncid_in[varname][:,:,:]
elif ndims == 4:
ncid_out[varname][s:e,:,:,:] = ncid_in[varname][:,:,:,:]
ncid_in.close()
# finally, put the correct time span in the output file
units = "seconds since %d-%d-%d %d:%d:%f" % (alldt[0].year,
alldt[0].month,alldt[0].day,alldt[0].hour,alldt[0].minute,
alldt[0].second+alldt[0].microsecond/1000000)
# the 0:00 here was causing problems for xarray
units = "seconds since %d-%d-%d %d:%d:%f +0:00" % (alldt[0].year,
alldt[0].month,alldt[0].day,alldt[0].hour,alldt[0].minute,
alldt[0].second+alldt[0].microsecond/1000000)
elapsed = alldt-alldt[0] # output is a numpy array of timedeltas
# have to iterate to get at the timedelta objects in the numpy container
# seems crazy, someone please show me a better trick!
elapsed_sec = []
for e in elapsed: elapsed_sec.append(e.total_seconds())
t = np.zeros((len(alldt),1))
t2 = np.zeros((len(alldt),1))
for i in range(len(alldt)):
jd = ajd(alldt[i])
t[i] = int(math.floor(jd))
t2[i] = int((jd - math.floor(jd))*(24*3600*1000))
if CFtime:
ncid_out['time'][:] = elapsed_sec[:]
ncid_out['time'].units = units
ncid_out['EPIC_time'][:] = t[:]
ncid_out['EPIC_time2'][:] = t2[:]
else:
ncid_out['CFtime'][:] = elapsed_sec[:]
ncid_out['CFtime'].units = units
ncid_out['time'][:] = t[:]
ncid_out['time2'][:] = t2[:]
# recompute start_time and end_time
ncid_out.start_time = '%s' % num2date(ncid_out['time'][0],ncid_out['time'].units)
print(ncid_out.start_time)
ncid_out.stop_time = '%s' % num2date(ncid_out['time'][-1],ncid_out['time'].units)
print(ncid_out.stop_time)
# TODO update history
ncid_out.close()
def check_fill_value_encoding(ds):
"""
restore encoding to what it needs to be for EPIC and CF compliance
variables' encoding will be examined for the correct _FillValue
:param ds: xarray Dataset
:return: xarray Dataset with corrected encoding, dict with encoding that can be used with xarray.to_netcdf
"""
encoding_dict = {}
for var in ds.variables.items():
encoding_dict[var[0]] = ds[var[0]].encoding
# is it a coordinate?
if var[0] in ds.coords:
# coordinates do not have a _FillValue
if '_FillValue' in var[1].encoding:
encoding_dict[var[0]]['_FillValue'] = False
else:
# _FillValue cannot be NaN and must match the data type so just make sure it matches the data type.
# xarray changes ints to floats, not sure why yet
if '_FillValue' in var[1].encoding:
if np.isnan(var[1].encoding['_FillValue']):
print('NaN found in _FillValue, correcting')
if var[1].encoding['dtype'] in {'float32', 'float64'}:
var[1].encoding['_FillValue'] = 1E35
encoding_dict[var[0]]['_FillValue'] = 1E35
elif var[1].encoding['dtype'] in {'int32', 'int64'}:
var[1].encoding['_FillValue'] = 32768
encoding_dict[var[0]]['_FillValue'] = 32768
return ds, encoding_dict
def fix_missing_time(ds, delta_t):
"""
fix missing time values
change any NaT values in 'time' to a time value based on the last known good time, iterating to cover
larger gaps by constructing time as we go along.
xarray.DataArray.dropna is one way to do this, automated and convenient, and will leave an uneven time series,
so if you don't mind time gaps, that is a better tool.
:param ds: xarray Dataset, time units are in seconds
:param int delta_t: inter-burst time, sec, for the experiment's sampling scheme
:return:
"""
# TODO This could be improved by using an index mapping method - when I know python better.
dsnew = ds
count = 0
tbad = dsnew['time'][:].data # returns a numpy array of numpy.datetime64
tgood = tbad
# TODO - what if the first time is bad? need to look forward, then work backward
for t in enumerate(tbad):
if np.isnat(t[1]):
count += 1
prev_time = tbad[t[0] - 1]
new_time = prev_time + np.timedelta64(delta_t, 's')
tgood[t[0]] = new_time
print('bad time at {} will be given {}'.format(t[0], tgood[t[0]]))
return dsnew, count
def apply_timezone(cf_units):
"""
In xarray, the presence of time zone information in the units was causing decode_cf to ignore the hour,
minute and second information. This function applies the time zone information and removes it from the units
:param str cf_units:
:return: str
"""
if len(cf_units.split()) > 4:
# there is a time zone included
print(f'time zone information found in {cf_units}')
split_units = cf_units.split()
hrs, mins = split_units[4].split(':')
if '-' in hrs:
hrs = hrs[1:]
sign = -1
else:
sign = 1
dtz = dt.timedelta(0, 0, 0, 0, int(mins), int(hrs)) # this will return seconds
ts = Timestamp(split_units[2] + ' ' + split_units[3], tzinfo=None)
if sign < 0:
new_ts = ts - dtz
else:
new_ts = ts + dtz
if 'seconds' in cf_units:
new_units = '{} since {}'.format(split_units[0], new_ts)
else:
new_units = cf_units
print('unrecognized time units, units not changed')
print(f'new_units = {new_units}')
return new_units
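# usage sketch (hypothetical units string): the zone offset is folded into the time stamp
# and dropped from the units so that xarray's decode_cf keeps hours, minutes and seconds
#   new_units = apply_timezone('seconds since 2018-01-01 00:00:00 -5:00')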
def make_encoding_dict(ds):
"""
prepare encoding dictionary for writing a netCDF file later using xarray.to_netcdf
:param ds: xarray Dataset
:return: dict with encoding prepared for xarray.to_netcdf to EPIC/CF conventions
"""
encoding_dict = {}
for item in ds.variables.items():
# print(item)
var_name = item[0]
var_encoding = ds[var_name].encoding
encoding_dict[var_name] = var_encoding
# print('encoding for {} is {}'.format(var_name, encoding_dict[var_name]))
# is it a coordinate?
if var_name in ds.coords:
# coordinates do not have a _FillValue
if '_FillValue' in encoding_dict[var_name]:
print(f'encoding {var_name} fill value to False')
else:
print(f'encoding {var_name} is missing fill value, now added and set to False')
encoding_dict[var_name]['_FillValue'] = False
else:
# _FillValue cannot be NaN and must match the data type
# so just make sure it matches the data type.
if '_FillValue' in encoding_dict[var_name]:
print('{} fill value is {}'.format(var_name, encoding_dict[var_name]['_FillValue']))
if np.isnan(encoding_dict[var_name]['_FillValue']):
if 'float' in str(encoding_dict[var_name]['dtype']):
encoding_dict[var_name]['_FillValue'] = 1E35
elif 'int' in str(encoding_dict[var_name]['dtype']):
encoding_dict[var_name]['_FillValue'] = 32768
print('NaN found in _FillValue of {}, corrected to {}'.format(
var_name, encoding_dict[var_name]['_FillValue']))
elif encoding_dict[var_name]['_FillValue'] is None:
if 'float' in str(encoding_dict[var_name]['dtype']):
encoding_dict[var_name]['_FillValue'] = 1E35
elif 'int' in str(encoding_dict[var_name]['dtype']):
encoding_dict[var_name]['_FillValue'] = 32768
print('None found in _FillValue of {}, corrected to {}'.format(
var_name, encoding_dict[var_name]['_FillValue']))
else:
print('encoding found in _FillValue of {} remains {}'.format(var_name,
encoding_dict[var_name]['_FillValue']))
return encoding_dict
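# usage sketch: the returned dict plugs straight into xarray when writing, for example
#   encoding = make_encoding_dict(ds)
#   ds.to_netcdf('burstfile.nc', encoding=encoding)  # 'burstfile.nc' is a placeholder name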
# TODO this is coded only for catEPIC, expand for other methods in this file
def __main():
print('%s running on python %s' % (sys.argv[0], sys.version))
if len(sys.argv) < 2:
print("%s usage:" % sys.argv[0])
print("catEPIC file_list outfile\n")
sys.exit(1)
try:
datafiles = sys.argv[1]
except:
print('error - input file list missing')
sys.exit(1)
try:
outfile = sys.argv[2]
except:
print('error - output file name missing')
sys.exit(1)
# some input testing
if not os.path.isfile(datafiles[0]):
print('error - input file not found')
sys.exit(1)
if os.path.isfile(outfile):
print('%s will be overwritten' % outfile)
print('Start concatenation at ', dt.datetime.now())
catEPIC(datafiles, outfile)
print('Finished file concatenation at ', dt.datetime.now())
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/EPICstuff/EPICmisc.py | EPICmisc.py |
import netCDF4 as nc
from netCDF4 import num2date
import math
import sys
import datetime as dt
import adcpy.EPICstuff.reshapeEPIC as reshape
# making the indices
input_path = r'.\\'
output_path = r'.\\'
# continuous file
continuous_file = 'WAV17M2T02whV.nc' # rotated file with a 1D, continuous time series
number_of_output_files = 4
burst_file_name = 'WAV17M2T02whVwaves.nc'
index_file_name = 'WAV17M2T02whVindecesnc.txt'
"""
From the wavesmon config file
'Continuous waves 20171124T225553.pd0Wvs.cfg'
we have
[ADCP Setup]
EPB=2048
TBE=50
TBB=3600
"""
sample_rate = 2
# burst_length = 4096
burst_length = 2048
burst_interval = 3600 # 60 min interval
burst_start_offset = 0
# note we are dropping EPIC time as it is causing problems
variables_to_omit = {'EPIC_time', 'EPIC_time2'}
attributes_to_omit = {'valid_range'} # this is in older converted files and needs to be removed
dry_run = True
# ------------------- the rest of this should be automatic, no user settings below
operation_start = dt.datetime.now()
print('Start script run at ', operation_start)
dim = 'time'
# ----------- execute
all_slices = reshape.generate_expected_start_times(input_path + continuous_file, dim,
burst_start_offset, burst_interval, burst_length, sample_rate)
# here we limit the slices for testing
# print('** reducing the number of slices')
slices = all_slices # [200:300]
continuous_netcdf_object = nc.Dataset(input_path + continuous_file, format="NETCDF4")
print('the last time is {} seconds from the start of the experiment'.format(continuous_netcdf_object['time'][-1]))
print('looking up the boundaries... this takes about 10 minutes on a 12 GB file')
edges = reshape.find_boundaries(continuous_netcdf_object['time'][:], slices)
for x in edges[0:5]:
print('at indices {} to {} we found times {} to {}'.format(x[0], x[1], continuous_netcdf_object['time'][x[0]],
continuous_netcdf_object['time'][x[1]]))
burst_lengths = list(map(lambda t: t[1] - t[0], edges))
for x in burst_lengths[0:5]:
print('bursts are {} long'.format(x))
continuous_netcdf_object.close()
print('elapsed time is {} min'.format((dt.datetime.now() - operation_start).total_seconds() / 60))
reshape.save_indexes_to_file(input_path + continuous_file, edges, output_path + index_file_name)
number_of_bursts_per_file = int(math.floor(len(edges) / number_of_output_files))
# now iterate through the number of output files
# for ifile in range(1):
for file_index in range(number_of_output_files):
s = burst_file_name.split('.')
burstFile = s[0] + (f'%02d.' % file_index) + s[1]
print('making burst file {}'.format(burstFile))
burst_start_index = file_index * number_of_bursts_per_file
burst_end_index = burst_start_index + number_of_bursts_per_file
if burst_end_index > len(edges):
burst_end_index = len(edges)
edges_this_file = edges[burst_start_index:burst_end_index]
samples_in_each_burst = list(map(lambda t: t[1] - t[0], edges_this_file))
# if there are no samples in a burst, we will skip the burst
# skip them by removing them from this index list
# this cleans up the tail end of the last file
# TODO - use None to signal
idx_empty_bursts = list(map(lambda y: False if y == 0 else True, samples_in_each_burst))
print('Zero samples in {} bursts, these will be omitted'.format(idx_empty_bursts.count(0)))
continuous_netcdf_object = nc.Dataset(input_path + continuous_file, format="NETCDF4")
time_units = continuous_netcdf_object['time'].units
number_to_display = 5
if number_of_bursts_per_file < number_to_display or number_of_bursts_per_file < number_to_display * 2:
number_to_display = number_of_bursts_per_file
x = list(range(number_to_display))
else:
x = list(range(number_to_display)) + list(
range(len(edges_this_file) - number_to_display - 1, len(edges_this_file) - 1))
for i in x:
print('burst {} will be {} samples from {} to {}'.format(
i, samples_in_each_burst[i],
num2date(continuous_netcdf_object['time'][edges_this_file[i][0]], time_units),
num2date(continuous_netcdf_object['time'][edges_this_file[i][1]], time_units)))
continuous_netcdf_object.close()
if not dry_run:
reshape.reshapeEPIC(input_path + continuous_file, output_path + burstFile, burst_length,
dim='time', edges=edges_this_file, drop=variables_to_omit, variable_attributes_to_omit=attributes_to_omit)
print('End script run at ', dt.datetime.now())
print('elapsed time is {} min'.format((dt.datetime.now() - operation_start).total_seconds() / 60)) | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/EPICstuff/doreshape.py | doreshape.py |
import os
import sys
import numpy as np
from netCDF4 import Dataset
import netCDF4 as netcdf
import datetime as dt
from datetime import datetime
# noinspection PyPep8Naming
def doEPIC_ADCPfile(cdfFile, ncFile, attFile, settings):
"""
Convert a raw netcdf file containing data from any 4 beam Janus acoustic doppler profiler,
with or without a center beam, and transform the data into Earth coordinates. Data are output to netCDF
using controlled vocabulary for the variable names, following the EPIC convention wherever possible.
:param str cdfFile: raw netCDF input data file name
:param str ncFile: output file name
:param str attFile: text file containing metadata
:param dict settings: a dict of settings as follows::
'good_ensembles': [] # starting and ending indices of the input file. For all data use [0,np.inf]
'orientation': 'UP' # uplooking ADCP, for downlooking, use DOWN
'transducer_offset_from_bottom': 2.02 # in meters
'transformation': 'EARTH' # | BEAM | INST
'adjust_to_UTC': 5 # for EST to UTC, if no adjustment, set to 0 or omit
"""
# check some of the settings we can't live without
# set flags, then remove from the settings list if we don't want them in metadata
if 'good_ensembles' not in settings.keys():
settings['good_ensembles'] = [0, np.inf] # nothing from user, do them all
print('No starting and ending ensembles specified, processing entire file')
if 'orientation' not in settings.keys():
settings['orientation'] = "UP"
settings['orientation_note'] = "assumed by program"
print('No orientation specified, assuming up-looking')
else:
settings['orientation_note'] = "user provided orientation"
if 'transducer_offset_from_bottom' not in settings.keys():
settings['transducer_offset_from_bottom'] = 0
print('No transducer_offset_from_bottom, assuming 0')
if 'transformation' not in settings.keys():
settings['transformation'] = "EARTH"
if 'adjust_to_UTC' not in settings.keys():
settings['adjust_to_UTC'] = 0
# TODO implement full time_type_out selection
# a default is needed because settings['time_type_out'] is read unconditionally
# when the time variables and DELTA_T are written below
if 'time_type_out' not in settings.keys():
settings['time_type_out'] = "CF"
if 'use_pressure_for_WATER_DEPTH' in settings.keys():
if settings['use_pressure_for_WATER_DEPTH']:
usep4waterdepth = True
settings.pop('use_pressure_for_WATER_DEPTH')
else:
usep4waterdepth = False
else:
usep4waterdepth = True
rawcdf = Dataset(cdfFile, mode='r', format='NETCDF4')
rawvars = []
for key in rawcdf.variables.keys():
rawvars.append(key)
# this function will operate on the files using the netCDF package
nc = setupEPICnc(ncFile, rawcdf, attFile, settings)
nbeams = nc.number_of_slant_beams # what if this isn't 4?
nbins = len(rawcdf.dimensions['depth'])
nens = len(rawcdf.dimensions['time'])
ncvars = []
for key in nc.variables.keys():
ncvars.append(key)
declination = nc.magnetic_variation_at_site
# start and end indices
s = settings['good_ensembles'][0]
if settings['good_ensembles'][1] < 0:
e = nens
else:
e = settings['good_ensembles'][1]
print('Converting from index %d to %d of %s' % (s, e, cdfFile))
# many variables do not need processing and can just be copied to the
# new EPIC convention
varlist = {'sv': 'SV_80', 'Rec': 'Rec'}
for key in varlist:
varobj = nc.variables[varlist[key]]
varobj[:] = rawcdf.variables[key][s:e]
# check the time zone, Nortek data are usually set to UTC, no matter what
# the actual time zone of deployment might have been
toffset = 0
if abs(settings['adjust_to_UTC']) > 0:
nc.time_zone_change_applied = settings['adjust_to_UTC']
nc.time_zone_change_applied_note = "adjust time to UTC requested by user"
toffset = settings['adjust_to_UTC']*3600
# determine what kind of time setup we have in the raw file
timevars = ['time', 'time2', 'EPIC_time', 'EPIC_time2', 'cf_time']
timevars_in_file = [item for item in timevars if item in rawvars]
if timevars_in_file == ['time', 'time2']:
time_type = "EPIC"
elif timevars_in_file == ['time', 'time2', 'cf_time']:
time_type = "EPIC_with_CF"
elif timevars_in_file == ['time', 'EPIC_time', 'EPIC_time2']:
time_type = "CF_with_EPIC"
elif timevars_in_file == ['time']:
time_type = "CF"
else:
time_type = None
print("Unrecognized time arrangement, known variables found: {}".format(timevars_in_file))
print("The raw netCDF file has time_type {}".format(time_type))
    # raw variable name : EPIC variable name; default to CF time on output if the user did not specify it
    if 'time_type_out' not in settings.keys():
        settings['time_type_out'] = "CF"
    if settings['time_type_out'] == 'EPIC':
varlist = {'time': 'time', 'time2': 'time2'}
elif settings['time_type_out'] == 'CF_with_EPIC':
varlist = {'time': 'time', 'EPIC_time': 'EPIC_time', 'EPIC_time2': 'EPIC_time2'}
elif settings['time_type_out'] == 'EPIC_with_CF':
varlist = {'time': 'time', 'time2': 'time2', 'cf_time': 'cf_time'}
else: # only CF time, the default
varlist = {'time': 'time'}
# TODO let user select type of time output, right now it uses what is in the netCDF file
for key in varlist:
print(key)
varobj = nc.variables[varlist[key]]
varobj[:] = rawcdf.variables[key][s:e]+toffset
# TRDI instruments have heading, pitch, roll and temperature in hundredths of degrees
if rawcdf.sensor_type == "TRDI":
degree_factor = 100
else:
degree_factor = 1
varlist = {'Ptch': 'Ptch_1216', 'Roll': 'Roll_1217', 'Tx': 'Tx_1211'}
for key in varlist:
varobj = nc.variables[varlist[key]]
varobj[:] = rawcdf.variables[key][s:e]/degree_factor
# TODO will need an instrument dependent methodology to check for any previous adjustments to heading
# prior to this correction. for instance, with TRDI instruments, Velocity or the EB command might have applied
# a correction. If EB is set, then that value was applied to the raw data seen by TRDIpd0tonetcdf.py
nc.magnetic_variation_applied = declination
nc.magnetic_variation_applied_note = "as provided by user"
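    # apply the declination to the raw heading and wrap the result into the range [0, 360) degrees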
heading = rawcdf.variables['Hdg'][s:e]/degree_factor + declination
heading[heading >= 360] = heading[heading >= 360] - 360
heading[heading < 0] = heading[heading < 0] + 360
nc['Hdg_1215'][:] = heading
# pressure needs to be in db or m
if 'Pressure' in rawvars:
pconvconst = 1 # when in doubt, do nothing
punits = rawcdf['Pressure'].units
if 'deca-pascals' in punits:
pconvconst = 1000 # decapascals to dbar = /1000
print('Pressure in deca-pascals will be converted to db')
nc['P_1'][:] = rawcdf.variables['Pressure'][s:e]/pconvconst
# check units of current velocity and convert to cm/s
vunits = rawcdf['vel1'].units
vconvconst = 1 # when in doubt, do nothing
if (vunits == 'mm s-1') | (vunits == 'mm/s'):
vconvconst = 0.1 # mm/s to cm/s
elif (vunits == 'm s-1') | (vunits == 'm/s'):
vconvconst = 100 # m/s to cm/s
print('Velocity in {} will be converted using a multiplier of {}'.format(vunits, vconvconst))
if 'vel5' in rawvars:
nc['Wvert'][:] = rawcdf.variables['vel5'][s:e, :] * vconvconst
if 'cor5' in rawvars:
nc['corvert'][:] = rawcdf.variables['cor5'][s:e, :]
if 'att5' in rawvars:
nc['AGCvert'][:] = rawcdf.variables['att5'][s:e, :]
if 'PGd4' in rawvars:
nc['PGd_1203'][:, :, 0, 0] = rawcdf.variables['PGd4'][s:e, :]
if 'PressVar' in rawvars:
nc['SDP_850'][:, 0, 0] = rawcdf.variables['PressVar'][s:e]
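    # distance to each bin center along the profile = center_first_bin + bin_number * bin_size, in meters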
bindist = np.arange(len(nc['bindist']))
bindist = bindist*nc.bin_size+nc.center_first_bin
nc['bindist'][:] = bindist
    # figure out DELTA_T - we need to use the CF (elapsed seconds) time, which is more convenient
    if settings['time_type_out'] == 'CF':
        # the main 'time' variable is in CF convention
        timekey = 'time'
    else:
        timekey = 'cf_time'
    # this calculation uses the CF time in seconds
dtime = np.diff(nc[timekey][:])
delta_t = '%s' % int((dtime.mean().astype('float')).round()) # needs to be a string
nc.DELTA_T = delta_t
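    # e.g. profiles recorded every 15 minutes give DELTA_T = '900' (mean sample interval in seconds, as a string)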
# depths and heights
nc.initial_instrument_height = settings['transducer_offset_from_bottom']
nc.initial_instrument_height_note = "height in meters above bottom: accurate for tripod mounted instruments"
    # compute depth: make a guess by averaging all pressure records measured deeper than
    # half the user supplied water depth (i.e. those taken while the instrument was on the bottom)
# idx is returned as a tuple, the first of which is the actual index values
# set the water depth here, this will be used throughout
# the user may have put units next to the depth
if type(nc.WATER_DEPTH) is str:
water_depth = nc.WATER_DEPTH.split()
water_depth = float(water_depth[0])
else:
water_depth = nc.WATER_DEPTH
if ('Pressure' in rawvars) and usep4waterdepth:
idx = np.where(nc['P_1'][:] > water_depth/2)
        # now take the mean of only the on-bottom pressure measurements
if len(idx[0]) > 0:
pmean = nc['P_1'][idx[0]].mean()
else:
pmean = 0 # this could be if the ADCP is in air the whole time
print('Site WATER_DEPTH given is %f' % water_depth)
print('Calculated mean water level from P_1 is %f m' % pmean)
print('Updating site WATER_DEPTH to %f m' % pmean)
nc.WATER_DEPTH = pmean+nc.transducer_offset_from_bottom
nc.WATER_DEPTH_source = "water depth = MSL from pressure sensor, (meters), nominal"
nc.WATER_DEPTH_NOTE = nc.WATER_DEPTH_source
nc.nominal_sensor_depth = nc.WATER_DEPTH-settings['transducer_offset_from_bottom']
nc.nominal_sensor_depth_note = "inst_depth = (water_depth - inst_height); nominal depth below surface, meters"
varnames = ['bindist', 'depth']
# WATER_DEPTH_datum is not used in this circumstance.
else:
print('Site WATER_DEPTH given is %f' % water_depth)
print('No pressure data available, so no adjustment to water depth made')
nc.WATER_DEPTH_source = "water depth as given by user, (meters), nominal"
nc.WATER_DEPTH_NOTE = nc.WATER_DEPTH_source
nc.nominal_sensor_depth = water_depth-settings['transducer_offset_from_bottom']
nc.nominal_sensor_depth_note = "inst_depth = (water_depth - inst_height); nominal depth below surface, meters"
varnames = ['bindist', 'depth']
# WATER_DEPTH_datum is not used in this circumstance.
for varname in varnames:
nc[varname].WATER_DEPTH = water_depth
nc[varname].WATER_DEPTH_source = nc.WATER_DEPTH_source
nc[varname].transducer_offset_from_bottom = nc.transducer_offset_from_bottom
# update depth variable for location of bins based on WATER_DEPTH information
if "UP" in nc.orientation:
depths = water_depth-nc.transducer_offset_from_bottom-nc['bindist']
else:
depths = -1 * (water_depth-nc.transducer_offset_from_bottom+nc['bindist'])
nc['depth'][:] = depths
nc.start_time = '%s' % netcdf.num2date(nc[timekey][0], nc[timekey].units)
nc.stop_time = '%s' % netcdf.num2date(nc[timekey][-1], nc[timekey].units)
# some of these repeating attributes depended on depth calculations
# these are the same for all variables because all sensors are in the same
# package, as of now, no remote sensors being logged by this ADCP
ncvarnames = []
for key in nc.variables.keys():
ncvarnames.append(key)
omitnames = []
for key in nc.dimensions.keys():
omitnames.append(key)
omitnames.append("Rec")
omitnames.append("depth")
for varname in ncvarnames:
if varname not in omitnames:
varobj = nc.variables[varname]
varobj.sensor_type = nc.INST_TYPE
varobj.sensor_depth = nc.nominal_sensor_depth
varobj.initial_sensor_height = nc.initial_instrument_height
varobj.initial_sensor_height_note = "height in meters above bottom: " +\
"accurate for tripod mounted instruments"
varobj.height_depth_units = "m"
print('finished copying data, starting computations at %s' % (dt.datetime.now()))
print('averaging cor at %s' % (dt.datetime.now()))
# this will be a problem - it loads all into memory
cor = (rawcdf.variables['cor1'][s:e, :] + rawcdf.variables['cor2'][s:e, :] +
rawcdf.variables['cor3'][s:e, :] + rawcdf.variables['cor4'][s:e, :]) / 4
nc['cor'][:, :, 0, 0] = cor[:, :]
print('averaging AGC at %s' % (dt.datetime.now()))
# this will be a problem - it loads all into memory
agc = (rawcdf.variables['att1'][s:e, :] + rawcdf.variables['att2'][s:e, :] +
rawcdf.variables['att3'][s:e, :]+rawcdf.variables['att4'][s:e, :]) / 4
nc['AGC_1202'][:, :, 0, 0] = agc[:, :]
print('converting %d ensembles from beam to earth %s' % (len(nc[timekey]), dt.datetime.now()))
# check our indexing
print('magnetic variation at site = %f' % nc.magnetic_variation_at_site)
print('magnetic variation applied = %f' % nc.magnetic_variation_applied)
print('magnetic variation applied note = %s' % nc.magnetic_variation_applied_note)
n = int(len(heading)/2)
print('From the middle of the time series at ensemble #%d, we have:' % n)
print('heading variable in this python process = %f' % heading[n])
print('rawcdf Hdg[n] = %f' % rawcdf['Hdg'][n])
print('nc Hdg_1215[n] = %f' % nc['Hdg_1215'][n, 0, 0])
# TODO add depth bin mapping
# this beam arrangement is for TRDI Workhorse and V, other instruments
# should be re-ordered to match
rawvarnames = ["vel1", "vel2", "vel3", "vel4"]
ncidx = 0
if settings['transformation'].upper() == "BEAM":
ncvarnames = ["Beam1", "Beam2", "Beam3", "Beam4"]
for idx in range(s, e):
for beam in range(nbeams):
nc[ncvarnames[beam]][ncidx, :, 0, 0] = \
rawcdf.variables[rawvarnames[beam]][idx, :] * vconvconst
ncidx = ncidx + 1
elif (settings['transformation'].upper() == "INST") or (settings['transformation'].upper() == "EARTH"):
ncvarnames = ["X", "Y", "Z", "Error"]
# the dolfyn way (https://github.com/lkilcher/dolfyn)
# load the ADCP data object - we have converted this from a class object to nested dictionaries for use here
adcpo = {
'props': {
'coord_sys': "beam",
'inst2earth:fixed': False,
},
'config': {
'beam_angle': nc.beam_angle,
'beam_pattern': nc.beam_pattern,
'orientation': nc.orientation,
},
# note declination is applied immediately when heading is read from the raw data file
'declination_in_heading': True,
# dolfyn shape for ensemble data is [bins x beams x ens]
'vel': np.ones([nbins, nbeams], dtype='float') * np.nan,
}
# vels has to be pre-defined to get the shapes to broadcast
# noinspection PyUnusedLocal
vels = np.ones([nbins, 1], dtype='float') * np.nan
# Nortek and TRDI do their along beam velocity directions opposite for
# slant beams. Vertical beam directions are the same.
if rawcdf.sensor_type == 'Nortek':
beam_vel_multiplier = -1
else:
beam_vel_multiplier = 1
for idx in range(s, e):
for beam in range(nbeams):
# load data of one ensemble to dolfyn shape, in cm/s
# adcpo['vel'][:,beam,0] = rawcdf.variables[rawvarnames[beam]][idx,:] * 0.1
vels = rawcdf.variables[rawvarnames[beam]][idx, :] * vconvconst * beam_vel_multiplier
adcpo['vel'][:, beam] = vels
# need to keep setting this with new beam data since we are iterating
adcpo['props']['coord_sys'] = "beam"
beam2inst(adcpo) # adcpo['vel'] is returned in inst coordinates
if settings['transformation'].upper() == "EARTH":
ncvarnames = ["u_1205", "v_1206", "w_1204", "Werr_1201"]
adcpo['heading_deg'] = nc.variables['Hdg_1215'][ncidx]
adcpo['pitch_deg'] = nc.variables['Ptch_1216'][ncidx]
adcpo['roll_deg'] = nc.variables['Roll_1217'][ncidx]
inst2earth(adcpo)
for beam in range(nbeams):
nc[ncvarnames[beam]][ncidx, :, 0, 0] = adcpo['vel'][:, beam]
ncidx = ncidx + 1
# immediate - then less feedback
ensf, ensi = np.modf(ncidx/1000)
if (ensf == 0) and (ncidx < 10000):
print('%d of %d ensembles read' % (ncidx, nens))
else:
ensf, ensi = np.modf(ncidx/10000)
if ensf == 0:
print('%d of %d ensembles read' % (ncidx, nens))
nc.transform = settings['transformation'].upper()
print('closing files at %s' % (dt.datetime.now()))
rawcdf.close()
nc.close()
def cal_earth_rotmatrix(heading=0, pitch=0, roll=0, declination=0):
"""
this transformation matrix is from the R.D. Instruments Coordinate Transformation booklet.
It presumes the beams are in the same position as RDI Workhorse ADCP beams, where,
when looking down on the transducers::
Beam 3 is in the direction of the compass' zero reference
Beam 1 is to the right
Beam 2 is to the left
Beam 4 is opposite beam 3
Pitch is about the beam 2-1 axis and is positive when beam 3 is raised
Roll is about the beam 3-4 axis and is positive when beam 2 is raised
Heading increases when beam 3 is rotated towards beam 1
Nortek Signature differs in these ways::
TRDI beam 3 = Nortek beam 1
TRDI beam 1 = Nortek beam 2
TRDI beam 4 = Nortek beam 3
TRDI beam 2 = Nortek beam 4
Heading, pitch and roll behave the same as TRDI
:param float heading: ADCP heading in degrees
:param float pitch: ADCP pitch in degrees
:param float roll: ADCP roll in degrees
:param float declination: heading offset from true, Westerly is negative
:return:
"""
heading = heading + declination
ch = np.cos(heading)
sh = np.sin(heading)
cp = np.cos(pitch)
sp = np.sin(pitch)
cr = np.cos(roll)
sr = np.sin(roll)
return np.asmatrix(np.array([
[(ch*cr+sh*sp*sr), (sh*cp), (ch*sr-sh*sp*cr)],
[(-sh*cr+ch*sp*sr), (ch*cp), (-sh*sr-ch*sp*cr)],
[(-cp*sr), sp, (cp*cr)]
]))
# start code from dolfyn
def calc_beam_rotmatrix(theta=20, convex=True, degrees=True):
"""
Calculate the rotation matrix from beam coordinates to
instrument head coordinates.
per dolfyn rotate.py code here: https://github.com/lkilcher/dolfyn
:param float theta: is the angle of the heads (usually 20 or 30 degrees)
:param int convex: is a flag for convex or concave head configuration.
:param bool degrees: is a flag which specifies whether theta is in degrees or radians (default: degrees=True)
"""
deg2rad = np.pi / 180.
if degrees:
theta = theta * deg2rad
if convex == 0 or convex == -1:
c = -1
else:
c = 1
a = 1 / (2. * np.sin(theta))
b = 1 / (4. * np.cos(theta))
d = a / (2. ** 0.5)
return np.array([[c * a, -c * a, 0, 0],
[0, 0, -c * a, c * a],
[b, b, b, b],
[d, d, -d, -d]])
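# usage sketch (not part of the original dolfyn code): for a convex 4-beam head with a
# 20 degree beam angle, one bin of instrument-frame velocities can be obtained as
#   vels_inst = calc_beam_rotmatrix(20, convex=True) @ vels_beam
# where vels_beam is a length-4 array of along-beam velocities (beams 1-4) and the result
# is ordered [X, Y, Z, error velocity]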
def _cat4rot(tpl):
# TODO do we need this function _cat4rot
"""
helper function
:param tpl:
:return: numpy array
"""
tmp = []
for vl in tpl:
tmp.append(vl[:, None, :])
return np.concatenate(tuple(tmp), axis=1)
def beam2inst(adcpo, reverse=False, force=False):
"""
Rotate velocities from beam to instrument coordinates.
:param dict adcpo: containing the beam velocity data.
:param bool reverse: If True, this function performs the inverse rotation (inst->beam).
:param bool force: When true do not check which coordinate system the data is in prior to performing this rotation.
"""
if not force:
if not reverse and adcpo['props']['coord_sys'] != 'beam':
raise ValueError('The input must be in beam coordinates.')
if reverse and adcpo['props']['coord_sys'] != 'inst':
raise ValueError('The input must be in inst coordinates.')
if 'rotmat' in adcpo['config'].keys():
rotmat = adcpo['config']['rotmat']
else:
rotmat = calc_beam_rotmatrix(adcpo['config']['beam_angle'],
adcpo['config']['beam_pattern'].lower()
== 'convex')
cs = 'inst'
if reverse:
# Can't use transpose because rotation is not between
# orthogonal coordinate systems
rotmat = np.linalg.inv(rotmat)
cs = 'beam'
# raw = adcpo['vel'].transpose()
raw = np.asmatrix(adcpo['vel'])
# here I end up with an extra dimension of 4
# vels = np.einsum('ij,jkl->ikl', rotmat, raw)
# vels = np.einsum('ij,jk->ik', rotmat, raw)
vels = np.array(np.asmatrix(rotmat)*raw.transpose())
# vels = np.einsum('ij,jkl->ikl', rotmat, adcpo['vel'])
# ValueError: operands could not be broadcast together with remapped
# shapes [original->remapped]: (4,4)->(4,newaxis,newaxis,4) (16,4,1)->(4,1,16)
# adcpo['vel'] = np.einsum('ij,jkl->ikl', rotmat, adcpo['vel'])
adcpo['vel'] = vels.transpose()
adcpo['props']['coord_sys'] = cs
def inst2earth(adcpo, reverse=False, fixed_orientation=False, force=False):
"""
Rotate velocities from the instrument to earth coordinates.
:param dict adcpo: containing the data in instrument coordinates
:param bool reverse: If True, this function performs the inverse rotation (earth->inst).
:param bool fixed_orientation: When true, take the average orientation and apply it over the whole record.
:param bool force: When true do not check which coordinate system the data is in prior to performing this rotation.
Notes
-----
The rotation matrix is taken from the Teledyne RDI ADCP Coordinate Transformation manual January 2008
    When performing the forward rotation, this function sets the 'inst2earth:fixed' flag to the value of
    `fixed_orientation`. When performing the reverse rotation, that value is 'popped' from the props dict and
    the input value of `fixed_orientation` to this function has no effect. If 'inst2earth:fixed' is not in the
    props dict then the input value *is* used.
"""
deg2rad = np.pi / 180.
if not force:
if not reverse and adcpo['props']['coord_sys'] != 'inst':
raise ValueError('The input must be in inst coordinates.')
if reverse and adcpo['props']['coord_sys'] != 'earth':
raise ValueError('The input must be in earth coordinates.')
if not reverse and 'declination' in adcpo['props'].keys() and not adcpo['props']['declination_in_heading']:
# Only do this if making the forward rotation.
adcpo['heading_deg'] += adcpo['props']['declination']
adcpo['props']['declination_in_heading'] = True
r = adcpo['roll_deg'] * deg2rad
p = np.arctan(np.tan(adcpo['pitch_deg'] * deg2rad) * np.cos(r))
h = adcpo['heading_deg'] * deg2rad
if adcpo['config']['orientation'].lower() == 'up':
r += np.pi
ch = np.cos(h)
sh = np.sin(h)
cr = np.cos(r)
sr = np.sin(r)
cp = np.cos(p)
sp = np.sin(p)
rotmat = np.empty((3, 3, len(r)))
rotmat[0, 0, :] = ch * cr + sh * sp * sr
rotmat[0, 1, :] = sh * cp
rotmat[0, 2, :] = ch * sr - sh * sp * cr
rotmat[1, 0, :] = -sh * cr + ch * sp * sr
rotmat[1, 1, :] = ch * cp
rotmat[1, 2, :] = -sh * sr - ch * sp * cr
rotmat[2, 0, :] = -cp * sr
rotmat[2, 1, :] = sp
rotmat[2, 2, :] = cp * cr
# Only operate on the first 3-components, b/c the 4th is err_vel
# ess = 'ijk,jlk->ilk'
cs = 'earth'
if reverse:
cs = 'inst'
fixed_orientation = adcpo['props'].pop('inst2earth:fixed', fixed_orientation)
# ess = ess.replace('ij', 'ji')
else:
adcpo['props']['inst2earth:fixed'] = fixed_orientation
if fixed_orientation:
# ess = ess.replace('k,', ',')
rotmat = rotmat.mean(-1)
# todo is the einsum method better? If so, uncomment the ess statements above
# vels = np.einsum(ess, rotmat, adcpo['vel'][:,:3])
vels = np.asmatrix(rotmat) * np.asmatrix(adcpo['vel'][:, :3].transpose())
adcpo['vel'][:, :3] = vels.transpose()
adcpo['props']['coord_sys'] = cs
# end code from dolfyn
def setupEPICnc(fname, rawcdf, attfile, settings):
"""
Construct an empty netCDF output file to EPIC conventions
:param str fname: output netCDF file name
:param Dataset rawcdf: input netCDF raw data file object
:param str attfile: metadata text file
:param dict settings: settings as follows::
'good_ensembles': [] # starting and ending indices of the input file. For all data use [0,np.inf]
'orientation': 'UP' # uplooking ADCP, for downlooking, use DOWN
'transducer_offset_from_bottom': 2.02 # in meters
'transformation': 'EARTH' # | BEAM | INST
'adjust_to_UTC': 5 # for EST to UTC, if no adjustment, set to 0 or omit
:return: netCDF file object
"""
# note that
# f4 = 4 byte, 32 bit float
# maximum value for 32 bit float = 3.402823*10**38;
intfill = -32768
floatfill = 1E35
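    # 1E35 is the conventional EPIC fill value for floating point variables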
# check the ensemble limits asked for by the user
nens = rawcdf.variables['Rec'].size
if settings['good_ensembles'][1] < 0:
settings['good_ensembles'][1] = nens
if settings['good_ensembles'][0] < 0:
settings['good_ensembles'][0] = 0
if settings['good_ensembles'][1] > nens:
settings['good_ensembles'][1] = nens-1
nens2write = settings['good_ensembles'][1]-settings['good_ensembles'][0]
print('creating netCDF file %s with %d records' % (fname, nens2write))
rawvars = []
for key in rawcdf.variables.keys():
rawvars.append(key)
nbins = len(rawcdf.dimensions['depth'])
cdf = Dataset(fname, "w", clobber=True, format="NETCDF4")
# dimensions, in EPIC order
cdf.createDimension('time', nens2write)
cdf.createDimension('depth', nbins)
cdf.createDimension('lat', 1)
cdf.createDimension('lon', 1)
# write global attributes
cdf.history = rawcdf.history + "rotations calculated and converted to EPIC format by ADCPcdf2ncEPIC.py"
# these get returned as a dictionary
gatts = read_globalatts(attfile)
if 'WATER_DEPTH' not in gatts.keys():
# noinspection PyTypeChecker
gatts['WATER_DEPTH'] = 0.0 # nothing from user
print('No WATER_DEPTH found, check depths of bins and WATER_DEPTH!')
gatts['orientation'] = settings['orientation'].upper()
if 'serial_number' not in gatts.keys():
gatts['serial_number'] = "unknown"
if 'magnetic_variation' not in gatts.keys():
# noinspection PyTypeChecker
gatts['magnetic_variation_at_site'] = 0.0
print('No magnetic_variation, assuming magnetic_variation_at_site = 0')
else:
gatts['magnetic_variation_at_site'] = gatts['magnetic_variation']
gatts.pop('magnetic_variation')
if type(gatts['MOORING']) != str:
gatts['MOORING'] = str(int(np.floor(gatts['MOORING'])))
writeDict2atts(cdf, gatts, "")
# more standard attributes
cdf.latitude_units = "degree_north"
cdf.longitude_units = "degree_east"
cdf.CREATION_DATE = "%s" % datetime.now()
cdf.DATA_TYPE = "ADCP"
cdf.FILL_FLAG = 0
cdf.COMPOSITE = 0
# attributes that the names will vary depending on the ADCP vendor
if rawcdf.sensor_type == "TRDI":
# TRDI attributes
if any('VBeam' in item for item in rawcdf.ncattrs()):
cdf.INST_TYPE = "TRDI Workhorse V"
else:
cdf.INST_TYPE = "TRDI Workhorse"
cdf.bin_size = rawcdf.TRDI_Depth_Cell_Length_cm/100
cdf.bin_count = rawcdf.TRDI_Number_of_Cells
cdf.center_first_bin = rawcdf.TRDI_Bin_1_distance_cm/100
cdf.blanking_distance = rawcdf.TRDI_Blank_after_Transmit_cm/100
cdf.transform = rawcdf.TRDI_Coord_Transform
cdf.beam_angle = rawcdf.TRDI_Beam_Angle
cdf.number_of_slant_beams = rawcdf.TRDI_Number_of_Beams
cdf.heading_bias_applied_EB = rawcdf.TRDI_Heading_Bias_Hundredths_of_Deg
cdf.beam_angle = rawcdf.TRDI_Beam_Angle
cdf.beam_pattern = rawcdf.TRDI_Beam_Pattern
elif rawcdf.sensor_type == "Nortek":
# Nortek attributes
# TODO - what to do about multiple sampling schemes?
cdf.INST_TYPE = "Nortek Signature"
cdf.bin_size = rawcdf.Nortek_burst_cellSize
cdf.bin_count = rawcdf.Nortek_burst_nCells
# Nortek Signature does not seem to have an odd offset to center of
# bin 1. This value comes from the Velocity Range provided by Nortek
cdf.center_first_bin = rawcdf['bindist'][0]
cdf.blanking_distance = rawcdf.Nortek_burst_blankingDistance
cdf.transform = rawcdf.Nortek_burst_coordSystem
# Nortek provides two angles for each beam, theta being from the vertical
cdf.beam_angle = rawcdf.Nortek_beamConfiguration1_theta
cdf.number_of_slant_beams = 4
# there's no indication from Nortek's metadata that magvar is applied or not
# TODO have to include information from the user
cdf.heading_bias_applied_EB = 0
# hard coded based on known Signature design
# Could be deduced from the theta and phi beam angles
cdf.beam_pattern = "Convex"
# attributes requiring user input
cdf.transducer_offset_from_bottom = settings['transducer_offset_from_bottom']
cdf.initial_instrument_height = settings['transducer_offset_from_bottom']
# TODO check on orientation, using user input for now
# rawcdf.TRDI_Orientation
# need translation to UP from "Up-facing beams"
cdf.orientation = settings['orientation'].upper()
cdf.orientation_note = settings['orientation_note']
if settings['orientation'].upper() == 'UP':
cdf.depth_note = "uplooking bin depths = WATER_DEPTH-transducer_offset_from_bottom-bindist"
else:
cdf.depth_note = "downlooking bin depths = WATER_DEPTH-transducer_offset_from_bottom+bindist"
cdf.serial_number = rawcdf.serial_number
# TODO consider using a float for time since less common integers are causing issues
# the problem is, CF time is a count, so integer is appropriate
timetype = 'u2' # u4 may be causing downstream problems with NCO
# u2 caused rollover problems when EPIC time was stored or read:
# file is 1108sig001.nc
# EPIC first time stamp = 08-Oct-5378 00:01:04
# seconds since 1970-01-01T00:00:00 UTC
# CF first time stamp = 25-Sep-2017 15:00:00
# the bad EPIC time is because a u4 datatype in the cdf file
    # is being sent to a u2 datatype in the nc file. Changing u2 to u4, etc.
# causes other problems
# timetype = 'u4' # u4 causes problems downstream in catEPIC with fill values
# for now choosing to live with the u2 problems
varobj = cdf.createVariable('Rec', timetype, ('time',), fill_value=intfill)
varobj.units = "count"
varobj.long_name = "Ensemble Number"
    # accept the output time convention from the caller; some call sites use 'timetype', others 'time_type_out'
    if settings.get('timetype', settings.get('time_type_out', 'CF')) == 'CF':
# if f8, 64 bit is not used, time is clipped
# TODO test this theory, because downstream 64 bit time is a problem
# for ADCP fast sampled, single ping data, need millisecond resolution
# for CF convention
varobj = cdf.createVariable('time', 'f8', ('time',))
varobj.units = rawcdf.variables['time'].units
# for EPIC convention
varobj = cdf.createVariable('EPIC_time', timetype, ('time',))
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj = cdf.createVariable('EPIC_time2', timetype, ('time',))
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
else:
# if f8, 64 bit is not used, time is clipped
# for ADCP fast sampled, single ping data, need millisecond resolution
# for CF convention
varobj = cdf.createVariable('cf_time', 'f8', ('time',))
varobj.units = rawcdf.variables['cf_time'].units
# for EPIC convention
varobj = cdf.createVariable('time', timetype, ('time',))
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj = cdf.createVariable('time2', timetype, ('time',))
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj = cdf.createVariable('depth', 'f4', ('depth',))
varobj.units = "m"
varobj.long_name = "DEPTH (M)"
varobj.epic_code = 3
varobj.center_first_bin = cdf.center_first_bin
varobj.blanking_distance = cdf.blanking_distance
varobj.bin_size = cdf.bin_size
varobj.bin_count = nbins
varobj.transducer_offset_from_bottom = cdf.transducer_offset_from_bottom
varobj = cdf.createVariable('lat', 'f8', ('lat',))
varobj.units = "degree_north"
varobj.epic_code = 500
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "LAT")
varobj.long_name = "LATITUDE"
varobj.datum = "NAD83"
varobj[:] = float(gatts['latitude'])
varobj = cdf.createVariable('lon', 'f8', ('lon',))
varobj.units = "degree_east"
varobj.epic_code = 502
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "LON")
varobj.long_name = "LONGITUDE"
varobj.datum = "NAD83"
varobj[:] = float(gatts['longitude'])
varobj = cdf.createVariable('bindist', 'f4', ('depth',), fill_value=floatfill)
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "bindist")
varobj.units = "m"
varobj.long_name = "bin distance from instrument"
varobj.epic_code = 0
varobj.center_first_bin = cdf.center_first_bin
varobj.blanking_distance = cdf.blanking_distance
varobj.bin_size = cdf.bin_size
varobj.bin_count = nbins
varobj.transducer_offset_from_bottom = cdf.transducer_offset_from_bottom
varobj.NOTE = "distance is along profile from instrument head to center of bin"
varobj = cdf.createVariable('SV_80', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "m s-1"
varobj.epic_code = 80
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "SV")
varobj.long_name = "SOUND VELOCITY (M/S)"
varobj = cdf.createVariable('Hdg_1215', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "degrees"
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "Hdg")
varobj.long_name = "INST Heading"
varobj.epic_code = 1215
# varobj.heading_alignment = rawvarobj.heading_alignment
# varobj.heading_bias = rawvarobj.heading_bias
varobj = cdf.createVariable('Ptch_1216', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "INST Pitch"
varobj.epic_code = 1216
varobj = cdf.createVariable('Roll_1217', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "INST Roll"
varobj.epic_code = 1217
varobj = cdf.createVariable('Tx_1211', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "C"
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "T")
varobj.long_name = "instrument Transducer Temp."
varobj.epic_code = 1211
if 'Pressure' in rawvars:
# rawvarobj = rawcdf.variables['Pressure']
varobj = cdf.createVariable('P_1', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "dbar"
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "P")
varobj.long_name = "PRESSURE (DB)"
varobj.epic_code = 1
if 'PressVar' in rawvars:
varobj = cdf.createVariable('SDP_850', 'f4', ('time', 'lat', 'lon'), fill_value=floatfill)
varobj.setncattr('name', 'SDP')
varobj.long_name = "STAND. DEV. (PRESS)"
varobj.units = "mbar"
varobj.epic_code = 850
varobj = cdf.createVariable('cor', 'u2', ('time', 'depth', 'lat', 'lon'), fill_value=intfill)
varobj.setncattr('name', 'cor')
varobj.long_name = "Slant Beam Average Correlation (cor)"
varobj.units = "counts"
varobj.epic_code = 1202
varobj.NOTE = "Calculated from the slant beams"
if 'PGd4' in rawvars:
varobj = cdf.createVariable('PGd_1203', 'u2', ('time', 'depth', 'lat', 'lon'), fill_value=intfill)
varobj.setncattr('name', 'Pgd')
varobj.long_name = "Percent Good Pings"
varobj.units = "percent"
varobj.epic_code = 1203
varobj.NOTE = "Percentage of good 4-bem solutions (Field #4)"
varobj = cdf.createVariable('AGC_1202', 'u2', ('time', 'depth', 'lat', 'lon'), fill_value=intfill)
varobj.setncattr('name', 'AGC')
varobj.long_name = "Average Echo Intensity (AGC)"
varobj.units = "counts"
varobj.epic_code = 1202
varobj.NOTE = "Calculated from the slant beams"
if 'cor5' in rawvars:
varobj = cdf.createVariable('corvert', 'u2', ('time', 'depth', 'lat', 'lon'), fill_value=intfill)
varobj.setncattr('name', 'cor')
varobj.long_name = "Vertical Beam Correlation (cor)"
varobj.units = "counts"
varobj.epic_code = 1202
varobj.NOTE = "From the center vertical beam"
if 'att5' in rawvars:
varobj = cdf.createVariable('AGCvert', 'u2', ('time', 'depth', 'lat', 'lon'), fill_value=intfill)
varobj.setncattr('name', 'AGC')
varobj.long_name = "Vertical Beam Echo Intensity (AGC)"
varobj.units = "counts"
varobj.epic_code = 1202
varobj.NOTE = "From the center vertical beam"
# repeating attributes that do not depend on height or depth calculations
cdfvarnames = []
for key in cdf.variables.keys():
cdfvarnames.append(key)
omitnames = []
for key in cdf.dimensions.keys():
omitnames.append(key)
omitnames.append("Rec")
for varname in cdfvarnames:
if varname not in omitnames:
varobj = cdf.variables[varname]
varobj.serial_number = cdf.serial_number
if settings['transformation'].upper() == "BEAM":
varnames = ["Beam1", "Beam2", "Beam3", "Beam4"]
codes = [0, 0, 0, 0]
elif settings['transformation'].upper() == "INST":
varnames = ["X", "Y", "Z", "Error"]
codes = [0, 0, 0, 0]
else:
varnames = ["u_1205", "v_1206", "w_1204", "Werr_1201"]
codes = [1205, 1206, 1204, 1201]
for i in range(4):
varname = varnames[i]
varobj = cdf.createVariable(varname, 'f4', ('time', 'depth', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "cm s-1"
varobj.long_name = "%s velocity (cm s-1)" % varnames[i]
varobj.epic_code = codes[i]
if 'vel5' in rawvars:
varobj = cdf.createVariable('Wvert', 'f4', ('time', 'depth', 'lat', 'lon'), fill_value=floatfill)
varobj.units = "cm s-1"
varobj.long_name = "Vertical velocity (cm s-1)"
# TODO do we do bottom track data here? Later? Or as a separate thing?
add_VAR_DESC(cdf)
return cdf
# noinspection PyUnresolvedReferences
def add_VAR_DESC(cdf):
"""
add the VAR_DESC global attribute constructed from variable names found in the file
:param object cdf: netCDF file object
"""
# cdf is an netcdf file object (e.g. pointer to open netcdf file)
varkeys = cdf.variables.keys() # get the names
dimkeys = cdf.dimensions.keys()
varnames = []
for key in varkeys:
varnames.append(key)
dimnames = []
for key in dimkeys:
dimnames.append(key)
buf = ""
for varname in varnames:
if varname not in dimnames:
buf = "%s:%s" % (buf, varname)
cdf.VAR_DESC = buf
def read_globalatts(fname):
"""
    read_globalatts: read in a file of metadata for a tripod or mooring.
    Reads global attributes for an experiment from a text file (fname).  Called by all data processing
    programs to get uniform metadata input.  One argument is required, the name of the file to read;
    it should have this form::
SciPi; J.Q. Scientist
PROJECT; USGS Coastal Marine Geology Program
EXPERIMENT; MVCO 2015 Stress Comparison
DESCRIPTION; Quadpod 13.9m
DATA_SUBTYPE; MOORED
COORD_SYSTEM; GEOGRAPHIC + SAMPLE
Conventions; PMEL/EPIC
MOORING; 1057
WATER_DEPTH; 13.9
WATER_DEPTH_NOTE; (meters), nominal
WATER_DEPTH_source; ship fathometer
latitude; 41.3336633
longitude; -70.565877
magnetic_variation; -14.7
Deployment_date; 17-Nov-2015
Recovery_date; 14-Dec-2015
DATA_CMNT;
platform_type; USGS aluminum T14 quadpod
DRIFTER; 0
POS_CONST; 0
DEPTH_CONST; 0
Conventions; PMEL/EPIC
institution; United States Geological Survey, Woods Hole Coastal and Marine Science Center
institution_url; http://woodshole.er.usgs.gov
:param str fname: input file name
:return: dict of metadata
"""
gatts = {}
f = open(fname, 'r')
    for line in f:
        line = line.strip()
        if len(line) == 0:
            continue  # skip blank lines in the metadata file
        cols = line.split(";")
        gatts[cols[0]] = cols[1].strip()
f.close()
return gatts
# noinspection PyUnresolvedReferences
def writeDict2atts(cdfobj, d, tag):
"""
write a dictionary to netCDF attributes
:param object cdfobj: netcdf file object
:param dict d: metadata
    :param str tag: tag to add before each attribute name
:return: dict of metadata as written to file
"""
i = 0
# first, convert as many of the values in d to numbers as we can
for key in iter(d):
if type(d[key]) == str:
try:
d[key] = float(d[key])
except ValueError:
i += 1
for key in iter(d):
newkey = tag + key
try:
cdfobj.setncattr(newkey, d[key])
except:
print('can\'t set %s attribute' % key)
return d
def floor(dec):
"""
convenience function to round down
provided to avoid loading the math package
and because np.floor was causing unexpected behavior w.r.t ints
:param float dec:
:return: rounded number
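    e.g. floor(3.2) returns 3 and floor(-3.2) returns -4 (a true floor, like math.floor)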
"""
return int(dec - (dec % 1))
def __main():
print('%s running on python %s' % (sys.argv[0], sys.version))
if len(sys.argv) < 2:
print("%s useage:" % sys.argv[0])
print("ADCPcdf2ncEPIC rawcdfname ncEPICname USGSattfile [startingensemble endingensemble]\n")
print("starting and ending ensemble are netcdf file indeces, NOT TRDI ensemble numbers")
print("USGSattfile is a file containing EPIC metadata")
sys.exit(1)
try:
infile_name = sys.argv[1]
except:
print('error - input file name missing')
sys.exit(1)
try:
outfile_name = sys.argv[2]
except:
print('error - output file name missing')
sys.exit(1)
try:
attfile_name = sys.argv[3]
except:
print('error - global attribute file name missing')
sys.exit(1)
try:
settings = sys.argv[4]
except:
print('error - settings missing - need dictionary of:')
print('settings[\'good_ensembles\'] = [0, np.inf] # use np.inf for all ensembles or omit')
print('settings[\'transducer_offset_from_bottom\'] = 1 # m')
print('settings[\'transformation\'] = "EARTH" # | BEAM | INST')
sys.exit(1)
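    # note: when invoked from the command line, settings arrives here as a plain string; callers that
    # import this module should pass the settings dict described in the docstring above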
# some input testing
    if not os.path.isfile(infile_name):
print('error - input file not found')
sys.exit(1)
    if not os.path.isfile(attfile_name):
print('error - attribute file not found')
sys.exit(1)
print('Converting %s to %s' % (infile_name, outfile_name))
print('Start file conversion at ', dt.datetime.now())
# noinspection PyTypeChecker
doEPIC_ADCPfile(infile_name, outfile_name, attfile_name, settings)
print(f'Finished file conversion at {dt.datetime.now()}')
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/EPICstuff/ADCPcdf2ncEPIC.py | ADCPcdf2ncEPIC.py |
# 10/15/2018 MM was using np.nan to pre-fill arrays and this was causing
# NaNs in final output, a problem for CF. Replace np.nan with _FillValue
import os
import sys
import datetime as dt
import netCDF4 as nc
import numpy as np
def repopulateEPIC(*args, **kwargs):
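    """
    Redistribute burst samples along the 'sample' dimension according to their time stamps.
    This docstring is a sketch inferred from the code below:
    args[0] = burst-shaped netCDF input file name,
    args[1] = output netCDF file name,
    args[2] = sample rate in Hz;
    optional kwargs: 'start' = 'left' (default) or 'right', which burst time stamp to keep,
    'drop' = names of variables not to copy to the output file
    """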
# the argument passing here works fine
print('%s running on python %s' % (sys.argv[0], sys.version))
print('Start file conversion at ', dt.datetime.now())
shaped_file = args[0]
new_file = args[1]
sample_rate = args[2]
if 'start' in kwargs.keys():
start = kwargs['start']
else:
start = 'left'
if 'drop' in kwargs.keys():
drop = kwargs['drop']
else:
drop = {}
for key, value in kwargs.items():
print('{} = {}'.format(key, value))
print('Start file conversion at ', dt.datetime.now())
    # remove the output file if it already exists
try:
os.remove(new_file)
print('{} removed'.format(new_file))
except FileNotFoundError:
pass
shapedcdf = nc.Dataset(shaped_file, format="NETCDF4")
ndims = len(shapedcdf.dimensions)
print(ndims)
nvars = len(shapedcdf.variables)
print(nvars)
# shapedcdf.getncattr('sensor_type')
ngatts = len(shapedcdf.ncattrs())
print(ngatts)
newcdf = nc.Dataset(new_file, mode="w", clobber=True, format='NETCDF4')
newcdf.set_fill_off()
# copy the global attributes
# first get a dict of them so that we can iterate
gatts = {}
for attr in shapedcdf.ncattrs():
gatts[attr] = getattr(shapedcdf, attr)
gatts['history'] = getattr(shapedcdf, 'history')+'; distributing time using redistributeSamples.py'
newcdf.setncatts(gatts)
for item in shapedcdf.dimensions.items():
print('Defining dimension {} which is {} long'.format(item[0], len(item[1])))
newcdf.createDimension(item[0], len(item[1]))
# this is the dimension along which we will redistribute the burst samples
# this is also the dimension we will drop from 'time'
dim = 'sample'
for var in shapedcdf.variables.items():
varobj = var[1]
try:
fill_value = varobj.getncattr('_FillValue')
except AttributeError:
fill_value = False # do not use None here!!!
print('{} is data type {} with fill {}'.format(varobj.name, varobj.dtype,
fill_value))
if varobj.name not in drop: # are we copying this variable?
if varobj.name == 'time': # is this time which we need to drop a dimension?
vdims_shaped = varobj.dimensions
vdims_new = []
for d in vdims_shaped:
if d == dim:
print('\tskipping sample in {}'.format(varobj.name))
else:
vdims_new.append(d)
newvarobj = newcdf.createVariable(varobj.name, varobj.dtype, tuple(vdims_new), fill_value=fill_value)
else:
# for a normal copy, no dimension drop
newvarobj = newcdf.createVariable(varobj.name, varobj.dtype, varobj.dimensions, fill_value=fill_value)
print('\t{} to {}'.format(varobj.dimensions, newvarobj.dimensions))
# copy the variable attributes
# first get a dict of them so that we can iterate
vatts = {}
for attr in varobj.ncattrs():
vatts[attr] = getattr(varobj, attr)
try:
newvarobj.setncatts(vatts)
except AttributeError:
print('Unable to copy atts for {}'.format(varobj.name))
# --------------- populate the data
# time is special, take care of it after time is populated
# we know because we are doing this that it is [time, sample]
if start == 'right':
print('copying time, using the last time stamp in each burst')
newcdf['time'][:] = shapedcdf['time'][:, -1]
# TODO -- bring into the loop and implement
# elif start == 'center':
# print('copying time, using the middle in each burst')
# #t =
# i = int(np.floor(len(shapedcdf['time'][0,:]/2)))
# newcdf['time'][:] = shapedcdf['time'][:,-1]
else:
print('copying time, using the first time stamp in each burst')
newcdf['time'][:] = shapedcdf['time'][:, 0]
    drop = set(drop) | {'time'}  # time is already done; keep any user-requested drops as well
nbursts = len(newcdf['time'])
# Note we are dependent on the shape [time, sample, depth, lat, lon]
for svar in shapedcdf.variables.items():
varname = svar[1].name
print('{} is data type {}'.format(svar[0], svar[1].dtype))
if varname not in drop:
ndims = len(svar[1].dimensions)
print('\t{} dims to {} dims'.format(shapedcdf[varname].shape, newcdf[varname].shape))
if ('time' in svar[1].dimensions) and ('sample' in svar[1].dimensions):
print('\tdistributing samples, iterating through bursts')
try:
fill_value = svar[1].getncattr('_FillValue')
except AttributeError:
fill_value = None
for iburst in range(nbursts):
# get the data
if ndims == 2:
data = shapedcdf[varname][iburst, :]
elif ndims == 3:
data = shapedcdf[varname][iburst, :, :]
elif ndims == 4:
data = shapedcdf[varname][iburst, :, :, :]
elif ndims == 5:
data = shapedcdf[varname][iburst, :, :, :, :]
else:
data = None
if iburst == 0:
print('{} dims found - too many'.format(ndims))
# TODO: what do we do when this fails?
if iburst == 0 and data is not None:
print('\t data is {}'.format(data.shape))
# do not need to iterate over depth if shapes are correct!
# set up the index using the time stamps
t = shapedcdf['time'][iburst, :]
tidx = np.array(t-t[0])*sample_rate
# incoming data is represented as NaN, how we find them
tidxgood = ~np.isnan(tidx)
# reset the new container, same shape as old data
# new_data = np.full(data.shape,np.nan) # don't use NaN!
new_data = np.full(data.shape, fill_value)
# need an integer representation of the indices
# to make this assignment work: new_data[idxasint] = data[tidxgood]
idxasint = tidx[tidxgood].astype(int)
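                    # worked example (a sketch): with sample_rate = 2 Hz and burst time stamps
                    # t = [t0, t0 + 0.5, t0 + 1.5], tidx = [0., 1., 3.], so the three samples land in
                    # slots 0, 1 and 3 of new_data and slot 2 keeps the fill value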
# different index types because these are different values
if iburst == 0:
print('\tnumber of dimensions = {}'.format(ndims))
if ndims == 2:
new_data[idxasint] = data[tidxgood]
newcdf[varname][iburst, :] = new_data
elif ndims == 3:
new_data[idxasint, :] = data[tidxgood, :]
newcdf[varname][iburst, :, :] = new_data
elif ndims == 4:
new_data[idxasint, :, :] = data[tidxgood, :, :]
newcdf[varname][iburst, :, :, :] = new_data
elif ndims == 5:
new_data[idxasint, :, :, :] = data[tidxgood, :, :, :]
newcdf[varname][iburst, :, :, :, :] = new_data
# no need to redistribute time
            else:  # 'time' and 'sample' do not appear together in this variable's dimensions
                print('\tno time and sample combination found, simple copy')
if ndims == 1:
newcdf[varname][:] = shapedcdf[varname][:]
elif ndims == 2:
newcdf[varname][:, :] = shapedcdf[varname][:, :]
elif ndims == 3:
newcdf[varname][:, :, :] = shapedcdf[varname][:, :, :]
elif ndims == 4:
newcdf[varname][:, :, :, :] = shapedcdf[varname][:, :, :, :]
elif ndims == 5:
newcdf[varname][:, :, :, :, :] = shapedcdf[varname][:, :, :, :, :]
else:
print('Not coded for more than 5 dimensions')
print('\n')
shapedcdf.close()
newcdf.close()
print('Finished writing new file {}'.format(new_file))
def __main():
print('%s running on python %s' % (sys.argv[0], sys.version))
if len(sys.argv) < 3:
print(__doc__)
return
try:
shaped_file = sys.argv[1]
except:
print('error - shaped netCDF input file name missing')
sys.exit(1)
try:
new_file = sys.argv[2]
except:
print('error - output file name missing')
sys.exit(1)
try:
        sample_rate = float(sys.argv[3])
except:
print('error - sample_rate missing')
sys.exit(1)
print('repopulating {} to {} with {} sample_rate'.format(shaped_file, new_file, sample_rate))
repopulateEPIC(shaped_file, new_file, sample_rate, sys.argv[4:])
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/EPICstuff/repopulateEPIC.py | repopulateEPIC.py |
import struct
#
# The rawfile is assumed to be in PD0 format.
#
# PD0 format assumes the file is a succession of ensembles.
#
# Each ensemble starts with a two byte header identifying the type of data
# contained in the ensemble.
#
# Following the header is a two byte length field specifying the length of
# the header, length field, and data combined
#
# Following the length field is raw data for the number of bytes indicated by
# the length field
#
# Following the raw data is a checksum field which is the two least
# significant bytes of the sum of the byte values of the header, length field,
# and raw data.
#
# updated to run in python 3x, Marinna Martini 1/12/2017
# adapted from pd0.py by Gregory P. Dusek
# http://trac.nccoos.org/dataproc/wiki/DPWP/docs
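#
# Layout sketch of one ensemble (multi-byte values little-endian), following the description
# above, where N is the value stored in the two byte length field:
#
#   bytes 0-1     header ID (0x7F7F for currents, 0x797F for waves)
#   bytes 2-3     N = number of bytes in header + length field + data
#   bytes 4..N-1  raw data
#   bytes N,N+1   checksum = (sum of bytes 0 through N-1) & 0xFFFF
#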
def split(raw_file, waves_file, currents_file):
"""
    Split PD0 format data into separate waves and currents files
:param binaryIO raw_file:
:param binaryIO waves_file:
:param binaryIO currents_file:
:return:
"""
# header IDs
waves_id = 0x797F
currents_id = 0x7F7F
# convenience function reused for header, length, and checksum
def __nextLittleEndianUnsignedShort(file):
"""Get next little endian unsigned short from file"""
raw = file.read(2)
"""for python 3.5, struct.unpack('<H', raw)[0] needs to return a
byte, not an int
"""
return raw, struct.unpack("<H", raw)[0]
# factored for readability
def __computeChecksum(data, nbytes, ensemble):
"""Compute a checksum from header, length, and ensemble"""
cs = 0
for byte in data:
# since the for loop returns an int to byte, use as-is
# value = struct.unpack('B', byte)[0]
# cs += value
cs += byte
for byte in nbytes:
# value = struct.unpack('B', byte)[0]
# cs += value
cs += byte
for byte in ensemble:
# value = struct.unpack('B', byte)[0]
# cs += value
cs += byte
return cs & 0xFFFF
# find the first instance of a waves or currents header
raw_data = raw_file.read()
first_waves = raw_data.find(struct.pack("<H", waves_id))
first_currents = raw_data.find(struct.pack("<H", currents_id))
# bail if neither waves nor currents found
if (first_waves < 0) and (first_currents < 0):
# raise IOError, "Neither waves nor currents header found"
raise IOError("Neither waves nor currents header found")
# get the starting point by throwing out unfound headers
# and selecting the minimum
first_ensemble = min([x for x in (first_waves, first_currents) if x >= 0])
# seeks to the first occurrence of a waves or currents data
raw_file.seek(first_ensemble)
# loop through raw data
raw_header, header = __nextLittleEndianUnsignedShort(raw_file)
while (header == waves_id) or (header == currents_id):
# get ensemble length
raw_length, length = __nextLittleEndianUnsignedShort(raw_file)
# read up to the checksum
raw_ensemble = raw_file.read(length - 4)
# get checksum
raw_checksum, checksum = __nextLittleEndianUnsignedShort(raw_file)
computed_checksum = __computeChecksum(raw_header, raw_length, raw_ensemble)
if checksum != computed_checksum:
raise IOError("Checksum error")
# append to output stream
if header == waves_id:
waves_file.write(raw_header)
waves_file.write(raw_length)
waves_file.write(raw_ensemble)
waves_file.write(raw_checksum)
elif header == currents_id:
currents_file.write(raw_header)
currents_file.write(raw_length)
currents_file.write(raw_ensemble)
currents_file.write(raw_checksum)
try:
raw_header, header = __nextLittleEndianUnsignedShort(raw_file)
except struct.error:
break
def test():
"""Execute test suite"""
try:
import adcp.tests.runalltests as runalltests
except:
# possible if executed as script
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), "tests"))
import runalltests
runalltests.runalltests(subset="pd0")
class __TestException(Exception):
"""Flow control for running as script"""
pass
# wrapper function
def __test():
"""Execute test suite from command line"""
test()
# raise __TestException, 'Wrapper function for command line testing only'
raise __TestException("Wrapper function for command line testing only")
def __main():
"""Process as script from command line"""
import getopt
import os
import sys
# get the command line options and arguments
path = ""
raw_name, waves_name, currents_name = 3 * [None]
try:
opts, args = getopt.gnu_getopt(
sys.argv[1:],
"htp:r:w:c:",
["help", "test", "path=", "raw=", "waves=", "currents="],
)
for opt, arg in opts:
if opt in ["-h", "--help"]:
raise getopt.GetoptError("")
if opt in ["-t", "--test"]:
__test()
elif opt in ["-p", "--path"]:
path = arg
elif opt in ["-r", "--raw"]:
raw_name = arg
elif opt in ["-w", "--waves"]:
waves_name = arg
elif opt in ["-c", "--currents"]:
currents_name = arg
else:
raise getopt.GetoptError("")
if (raw_name is None) or (waves_name is None) or (currents_name is None):
if len(args) not in [3, 4]:
raise getopt.GetoptError("")
else:
if (
(raw_name is not None)
or (waves_name is not None)
or (currents_name is not None)
):
raise getopt.GetoptError("")
else:
if len(args) == 4:
path = args[0]
del args[0]
raw_name = args[0]
waves_name = args[1]
currents_name = args[2]
elif len(args) != 0:
raise getopt.GetoptError("")
except getopt.GetoptError:
print(__doc__)
return
except __TestException:
return
# split a raw PD0 file
raw_name = os.path.join(path, raw_name)
print(("Raw file path:", raw_name))
waves_name = os.path.join(path, waves_name)
print(("Waves file path:", waves_name))
currents_name = os.path.join(path, currents_name)
print(("Currents file path:", currents_name))
raw_file = open(raw_name, "rb")
try:
waves_file = open(waves_name, "wb")
try:
currents_file = open(currents_name, "wb")
try:
split(raw_file, waves_file, currents_file)
finally:
currents_file.close()
finally:
waves_file.close()
finally:
raw_file.close()
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/TRDIstuff/pd0.py | pd0.py |
import sys
import struct
def split(pd0file, packets_file, currents_file, first_ensemble, last_ensemble):
"""
split ADCP data in pd0 format into current profiles and wave packets
:param str pd0file: path and name of raw PD0 format input file
:param str packets_file: path and name of waves PD0 format output file
:param str currents_file: path and name of currents PD0 format output file
:param int first_ensemble: ensemble number of the first ensemble to read
:param int last_ensemble: ensemble number of the last ensemble to read
"""
try:
pd0file = open(pd0file, 'rb')
except:
print('Cannot open %s' % pd0file)
try:
packets_file = open(packets_file, 'wb')
except:
print('Cannot open %s' % packets_file)
try:
currents_file = open(currents_file, 'wb')
except:
print('Cannot open %s' % currents_file)
# header IDs
waves_id = 0x797f
currents_id = 0x7f7f
if last_ensemble < 0:
last_ensemble = 1E35
print('Reading from %d to the last ensemble found\n' % first_ensemble)
else:
print('Reading from ensemble %d to %d\n' % (first_ensemble, last_ensemble))
# find the first instance of a waves or currents header
raw_data = pd0file.read()
first_waves = raw_data.find(struct.pack('<H', waves_id))
first_currents = raw_data.find(struct.pack('<H', currents_id))
# bail if neither waves nor currents found
if (first_waves < 0) and (first_currents < 0):
# raise IOError, "Neither waves nor currents header found"
raise IOError('Neither waves nor currents header found')
# get the starting point by throwing out unknown headers
# and selecting the minimum
first_file_ensemble = min([x for x in (first_waves, first_currents) if x >= 0])
# seeks to the first occurrence of a waves or currents data
pd0file.seek(first_file_ensemble)
# loop through raw data
raw_header, header = __nextLittleEndianUnsignedShort(pd0file)
wave_count = 0
current_count = 0
while (header == waves_id) or (header == currents_id):
# get ensemble length
raw_length, length = __nextLittleEndianUnsignedShort(pd0file)
# read up to the checksum
raw_ensemble = pd0file.read(length-4)
# get checksum
raw_checksum, checksum = __nextLittleEndianUnsignedShort(pd0file)
computed_checksum = __computeChecksum(raw_header, raw_length, raw_ensemble)
if checksum != computed_checksum:
raise IOError('Checksum error')
# append to output stream
if header == waves_id:
wave_count = wave_count+1
packets_file.write(raw_header)
packets_file.write(raw_length)
packets_file.write(raw_ensemble)
packets_file.write(raw_checksum)
elif header == currents_id:
current_count = current_count+1
if (current_count >= first_ensemble) & (current_count < last_ensemble):
currents_file.write(raw_header)
currents_file.write(raw_length)
currents_file.write(raw_ensemble)
currents_file.write(raw_checksum)
elif current_count > last_ensemble:
break
try:
raw_header, header = __nextLittleEndianUnsignedShort(pd0file)
except struct.error:
break
if (current_count > 0) & ((current_count % 100) == 0):
print('%d current ensembles read' % current_count)
if (wave_count > 0) & ((wave_count % 1000) == 0):
print('%d wave ensembles read' % wave_count)
print('wave Ensemble count = %d\n' % wave_count)
print('current Ensemble count = %d\n' % current_count)
currents_file.close()
packets_file.close()
pd0file.close()
# convenience function reused for header, length, and checksum
def __nextLittleEndianUnsignedShort(file):
"""
Get next little endian unsigned short from file
:param file: file object open for reading as binary
:return: a tuple of raw bytes and unpacked data
"""
raw = file.read(2)
# for python 3.5, struct.unpack('<H', raw)[0] needs to return a byte, not an int
return raw, struct.unpack('<H', raw)[0]
# factored for readability
def __computeChecksum(file_header, ensemble_length, ensemble):
"""
Compute a checksum
:param file_header: file header
:param ensemble_length: ensemble ensemble_length
:param ensemble: ensemble raw data
:return: checksum for ensemble
"""
cs = 0
for byte in file_header:
# since the for loop returns an int to byte, use as-is
# value = struct.unpack('B', byte)[0]
# cs += value
cs += byte
for byte in ensemble_length:
# value = struct.unpack('B', byte)[0]
# cs += value
cs += byte
for byte in ensemble:
# value = struct.unpack('B', byte)[0]
# cs += value
cs += byte
return cs & 0xffff
def __main():
print('%s running on python %s' % (sys.argv[0], sys.version))
if len(sys.argv) < 3:
print(__doc__)
return
try:
pd0file = sys.argv[1]
except:
print('error - pd0 input file name missing')
sys.exit(1)
try:
packets_file = sys.argv[2]
except:
print('error - packets output file name missing')
sys.exit(1)
try:
currents_file = sys.argv[3]
except:
print('error - current profile output file name missing')
sys.exit(1)
print('Splitting %s to %s and %s' % (pd0file, packets_file, currents_file))
    try:
        first_ensemble = int(sys.argv[4])
    except (IndexError, ValueError):
        first_ensemble = 1
    try:
        last_ensemble = int(sys.argv[5])
    except (IndexError, ValueError):
        last_ensemble = -1
split(pd0file, packets_file, currents_file, first_ensemble, last_ensemble)
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/TRDIstuff/pd0splitter.py | pd0splitter.py |
import sys
import struct
import math
import numpy as np
# this line works in my local environment, fails in Travis
from netCDF4 import Dataset
import datetime as dt
from adcpy.EPICstuff.EPICmisc import cftime2EPICtime
from adcpy.EPICstuff.EPICmisc import ajd
def convert_pd0_to_netcdf(pd0File, cdfFile, good_ens, serial_number, time_type, delta_t):
"""
convert from binary pd0 format to netcdf
:param str pd0File: is path of raw PD0 format input file with current ensembles
:param str cdfFile: is path of a netcdf4 EPIC compliant output file
:param list good_ens: [start, end] ensembles to export. end = -1 for all ensembles in file
:param str serial_number: serial number of the instrument
:param str time_type: "CF" for CF conventions, "EPIC" for EPIC conventions
:param str delta_t: time between ensembles, in seconds. 15 min profiles would be 900
:return: count of ensembles read, ending index of netCDF file, error type if file could not be read
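    A minimal calling sketch (the file names are hypothetical placeholders)::

        convert_pd0_to_netcdf('currents.pd0', 'rawdata.cdf', [0, -1], '12345', 'CF', '900')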
"""
# TODO figure out a better way to handle this situation
# need this check in case this function is used as a stand alone function
# this is necessary so that this function does not change the value
# in the calling function
ens2process = good_ens[:]
verbose = True # diagnostic, True = turn on output, False = silent
maxens, ens_len, ens_data, data_start_posn = analyzepd0file(pd0File, verbose)
infile = open(pd0File, 'rb')
infile.seek(data_start_posn)
if (ens2process[1] < 0) or ens2process[1] == np.inf:
ens2process[1] = maxens
# we are good to go, get the output file ready
print('Setting up netCDF file %s' % cdfFile)
cdf, cf_units = setup_netcdf_file(cdfFile, ens_data, ens2process, serial_number, time_type, delta_t)
# we want to save the time stamp from this ensemble since it is the
# time from which all other times in the file will be relative to
t0 = ens_data['VLeader']['dtobj']
netcdf_index = 0
ensemble_count = 0
verbose = False # diagnostic, True = turn on output, False = silent
nslantbeams = 4
# priming read - for the while loop
# note that ensemble lengths can change in the middle of the file!
# horribly inefficient, but here we go, one step backward, two forward...
bookmark = infile.tell() # save beginning of next ensemble
# need to read the header from the file to know the ensemble size
header = read_TRDI_header(infile)
if header['sourceID'] != b'\x7f':
print('non-currents ensemble found at %d' % bookmark)
if ens_len != header['nbytesperens']+2:
ens_len = header['nbytesperens']+2 # update to what we have
# go back to where this ensemble started before we checked the header
infile.seek(bookmark)
ens = infile.read(ens_len)
ens_error = None
while len(ens) > 0:
# print('-- ensemble %d length %g, file position %g' % (ensemble_count, len(ens), infile.tell()))
# print(ens_data['header'])
ens_data, ens_error = parse_TRDI_ensemble(ens, verbose)
if (ens_error is None) and (ensemble_count >= ens2process[0]):
# write to netCDF
if netcdf_index == 0:
print('--- first ensembles read at %s and TRDI #%d' % (
ens_data['VLeader']['timestr'], ens_data['VLeader']['Ensemble_Number']))
varobj = cdf.variables['Rec']
try:
varobj[netcdf_index] = ens_data['VLeader']['Ensemble_Number']
except:
# here we have reached the end of the netCDF file
cdf.close()
infile.close()
                return ensemble_count, netcdf_index, ens_error
# time calculations done when vleader is read
if time_type == 'EPIC_with_CF':
varobj = cdf.variables['time']
varobj[netcdf_index] = ens_data['VLeader']['EPIC_time']
varobj = cdf.variables['time2']
varobj[netcdf_index] = ens_data['VLeader']['EPIC_time2']
varobj = cdf.variables['cf_time']
elapsed = ens_data['VLeader']['dtobj']-t0 # timedelta
elapsed_sec = elapsed.total_seconds()
varobj[netcdf_index] = elapsed_sec
elif time_type == 'CF_with_EPIC':
varobj = cdf.variables['time']
elapsed = ens_data['VLeader']['dtobj'] - t0 # timedelta
elapsed_sec = elapsed.total_seconds()
if elapsed_sec == 0:
print('elapsed seconds from ensemble {} is {}'.format(ensemble_count, elapsed_sec))
varobj[netcdf_index] = elapsed_sec
t1, t2 = cftime2EPICtime(elapsed_sec, cf_units)
varobj = cdf.variables['EPIC_time']
varobj[netcdf_index] = t1
varobj = cdf.variables['EPIC_time2']
varobj[netcdf_index] = t2
elif time_type == 'EPIC':
varobj = cdf.variables['time']
varobj[netcdf_index] = ens_data['VLeader']['EPIC_time']
varobj = cdf.variables['time2']
varobj[netcdf_index] = ens_data['VLeader']['EPIC_time2']
else: # only CF time, the default
varobj = cdf.variables['time']
elapsed = ens_data['VLeader']['dtobj']-t0 # timedelta
elapsed_sec = elapsed.total_seconds()
varobj[netcdf_index] = elapsed_sec
# diagnostic
if (ens2process[1]-ens2process[0]-1) < 100:
print('%d %15.8f %s' % (ens_data['VLeader']['Ensemble_Number'],
ens_data['VLeader']['julian_day_from_julian'],
ens_data['VLeader']['timestr']))
varobj = cdf.variables['sv']
varobj[netcdf_index] = ens_data['VLeader']['Speed_of_Sound']
for i in range(nslantbeams):
varname = "vel%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index, :] = ens_data['VData'][i, :]
for i in range(nslantbeams):
varname = "cor%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index, :] = ens_data['CData'][i, :]
for i in range(nslantbeams):
varname = "att%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index, :] = ens_data['IData'][i, :]
if 'GData' in ens_data:
for i in range(nslantbeams):
varname = "PGd%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index, :] = ens_data['GData'][i, :]
varobj = cdf.variables['Rec']
varobj[netcdf_index] = ens_data['VLeader']['Ensemble_Number']
varobj = cdf.variables['Hdg']
varobj[netcdf_index] = ens_data['VLeader']['Heading']
varobj = cdf.variables['Ptch']
varobj[netcdf_index] = ens_data['VLeader']['Pitch']
varobj = cdf.variables['Roll']
varobj[netcdf_index] = ens_data['VLeader']['Roll']
varobj = cdf.variables['HdgSTD']
varobj[netcdf_index] = ens_data['VLeader']['H/Hdg_Std_Dev']
varobj = cdf.variables['PtchSTD']
varobj[netcdf_index] = ens_data['VLeader']['P/Pitch_Std_Dev']
varobj = cdf.variables['RollSTD']
varobj[netcdf_index] = ens_data['VLeader']['R/Roll_Std_Dev']
varobj = cdf.variables['Tx']
varobj[netcdf_index] = ens_data['VLeader']['Temperature']
varobj = cdf.variables['S']
varobj[netcdf_index] = ens_data['VLeader']['Salinity']
varobj = cdf.variables['xmitc']
varobj[netcdf_index] = ens_data['VLeader']['Xmit_Current']
varobj = cdf.variables['xmitv']
varobj[netcdf_index] = ens_data['VLeader']['Xmit_Voltage']
varobj = cdf.variables['Ambient_Temp']
varobj[netcdf_index] = ens_data['VLeader']['Ambient_Temp']
varobj = cdf.variables['Pressure+']
varobj[netcdf_index] = ens_data['VLeader']['Pressure_(+)']
varobj = cdf.variables['Pressure-']
varobj[netcdf_index] = ens_data['VLeader']['Pressure_(-)']
varobj = cdf.variables['Attitude_Temp']
varobj[netcdf_index] = ens_data['VLeader']['Attitude_Temp']
varobj = cdf.variables['EWD1']
varobj[netcdf_index] = int(ens_data['VLeader']['Error_Status_Word_Low_16_bits_LSB'])
varobj = cdf.variables['EWD2']
varobj[netcdf_index] = int(ens_data['VLeader']['Error_Status_Word_Low_16_bits_MSB'])
varobj = cdf.variables['EWD3']
varobj[netcdf_index] = int(ens_data['VLeader']['Error_Status_Word_High_16_bits_LSB'])
varobj = cdf.variables['EWD4']
varobj[netcdf_index] = int(ens_data['VLeader']['Error_Status_Word_High_16_bits_MSB'])
if ens_data['FLeader']['Depth_sensor_available'] == 'Yes':
varobj = cdf.variables['Pressure']
varobj[netcdf_index] = ens_data['VLeader']['Pressure_deca-pascals']
varobj = cdf.variables['PressVar']
varobj[netcdf_index] = ens_data['VLeader']['Pressure_variance_deca-pascals']
# add bottom track data write to cdf here
if 'BTData' in ens_data:
if ens_data['BTData']['Mode'] == 0:
varobj = cdf.variables['BTRmin']
varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_Min']
varobj = cdf.variables['BTRnear']
varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_Near']
varobj = cdf.variables['BTRfar']
varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_Far']
varnames = ('BTWe', 'BTWu', 'BTWv', 'BTWd')
for i in range(nslantbeams):
varname = "BTR%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['BT_Range'][i]
if ens_data['FLeader']['Coord_Transform'] == 'EARTH':
varobj = cdf.variables[varnames[i]]
else:
varname = "BTV%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['BT_Vel'][i]
varname = "BTc%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['BT_Corr'][i]
varname = "BTe%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['BT_Amp'][i]
varname = "BTp%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['BT_PGd'][i]
varname = "BTRSSI%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['RSSI_Amp'][i]
                    if ens_data['BTData']['Mode'] == 0:
                        varname = "BTRv%d" % (i+1)
                        varobj = cdf.variables[varname]
                        varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_Vel'][i]
varname = "BTRc%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_Corr'][i]
varname = "BTRi%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_Amp'][i]
varname = "BTRp%d" % (i+1)
varobj = cdf.variables[varname]
varobj[netcdf_index] = ens_data['BTData']['Ref_Layer_PGd'][i]
if 'VBeamVData' in ens_data:
if ens_data['VBeamLeader']['Vertical_Depth_Cells'] == ens_data['FLeader']['Number_of_Cells']:
varobj = cdf.variables['vel5']
varobj[netcdf_index, :] = ens_data['VBeamVData']
varobj = cdf.variables['cor5']
varobj[netcdf_index, :] = ens_data['VBeamCData']
varobj = cdf.variables['att5']
varobj[netcdf_index, :] = ens_data['VBeamIData']
if 'VBeamGData' in ens_data:
varobj = cdf.variables['PGd5']
varobj[netcdf_index, :] = ens_data['VBeamGData']
if 'WaveParams' in ens_data:
# we can get away with this because the key names and var names are the same
for key in ens_data['WaveParams']:
varobj = cdf.variables[key]
varobj[netcdf_index] = ens_data['WaveParams'][key]
if 'WaveSeaSwell' in ens_data:
# we can get away with this because the key names and var names are the same
for key in ens_data['WaveSeaSwell']:
varobj = cdf.variables[key]
varobj[netcdf_index] = ens_data['WaveSeaSwell'][key]
netcdf_index += 1
elif ens_error == 'no ID':
print('Stopping because ID tracking lost')
infile.close()
cdf.close()
sys.exit(1)
ensemble_count += 1
if ensemble_count > maxens:
            print('stopping at estimated end of file ensemble %d' % maxens)
break
n = 10000
ensf, ensi = math.modf(ensemble_count/n)
if ensf == 0:
print('%d ensembles read at %s and TRDI #%d' % (ensemble_count, ens_data['VLeader']['dtobj'],
ens_data['VLeader']['Ensemble_Number']))
if ensemble_count >= ens2process[1]-1:
print('stopping at requested ensemble %d' % ens2process[1])
break
# note that ensemble lengths can change in the middle of the file!
# TODO - is there a faster way to do this??
bookmark = infile.tell() # save beginning of next ensemble
# TODO - since we are jumping around, we should check here to see
# how close to the end of the file we are - if it is within one
# header length - we are done
# need to read the header from the file to know the ensemble size
header = read_TRDI_header(infile)
if header is None:
# we presume this is the end of the file, since we don't have header info
print('end of file reached with incomplete header')
break
if header['sourceID'] != b'\x7f':
print('non-currents ensemble found at %d' % bookmark)
if ens_len != header['nbytesperens']+2:
ens_len = header['nbytesperens']+2 # update to what we have
# TODO - fix this so that we aren't going back and forth, it is really slow
# go back to where this ensemble started before we checked the header
infile.seek(bookmark)
ens = infile.read(ens_len)
else: # while len(ens) > 0:
print('end of file reached')
if ensemble_count < maxens:
print('end of file reached after %d ensembles, less than estimated in the file' % ensemble_count)
elif ensemble_count > maxens:
print('end of file reached after %d ensembles, more than estimated in the file' % ensemble_count)
infile.close()
cdf.close()
print('%d ensembles read, %d records written' % (ensemble_count, netcdf_index))
return ensemble_count, netcdf_index, ens_error
# TODO this is not used - consider removing
def transpose_rotation_matrix(matrix):
"""
transpose the rotation matrix
:param matrix: rotation matrix from file
:return: transposed matrix
"""
if not matrix:
return []
return [[row[i] for row in matrix] for i in range(len(matrix[0]))]
def write_dict_to_cdf_attributes(netcdf_object, d, tag):
"""
write a dictionary to netCDF attributes
:param netcdf_object: netcdf file object
:param dict d: dictionary of attribute names and values
:param str tag: an identifier to prepend to the attribute name
:return: the dictionary d with any strings that can be changed to numbers, as numbers
"""
i = 0
# first, convert as many of the values in d to numbers as we can
for key in iter(d):
if type(d[key]) == str:
try:
d[key] = float(d[key])
except ValueError:
# we really don't need to print here,
# but python insists we do something
# print(' can\'t convert %s to float' % key)
i += 1
for key in iter(d):
newkey = tag + key
try:
netcdf_object.setncattr(newkey, d[key])
except:
print('can\'t set %s attribute' % key)
return d
def parse_TRDI_ensemble(ensbytes, verbose):
"""
convert the binary data for one ensemble to a dictionary of readable data
:param binary ensbytes: the raw binary data for the ensemble
:param verbose: print out the data as it is converted
:return: a dictionary of the data, a string describing any errors
"""
ens_data = {}
ens_error = None
ens_data['Header'] = parse_TRDI_header(ensbytes)
for i in range(ens_data['Header']['ndatatypes']):
# go to each offset and parse depending on what we find
offset = ens_data['Header']['offsets'][i]
# raw, val = __parseTRDIushort(ensbytes, offset)
val = struct.unpack('<H', ensbytes[offset:offset+2])[0]
if val == 0: # \x00\x00
if verbose:
print('Fixed Leader found at %g' % offset)
ens_data['FLeader'] = parse_TRDI_fixed_leader(ensbytes, offset)
# we need this to decode the other data records
ncells = int(ens_data['FLeader']['Number_of_Cells'])
            nbeams = 4 # the 5th beam has its own record
elif val == 128: # \x80\x00
if verbose:
print('Variable Leader found at %g' % offset)
ens_data['VLeader'] = parse_TRDI_variable_leader(ensbytes, offset)
# print(VLeader)
elif val == 256: # raw == b'\x00\x01': 256
if verbose:
print('Velocity found at %g' % offset)
ens_data['VData'] = parse_TRDI_velocity(ensbytes, offset, ncells, nbeams)
elif val == 512: # raw == b'\x00\x02':
if verbose:
print('Correlation found at %g' % offset)
ens_data['CData'] = parse_TRDI_correlation(ensbytes, offset, ncells, nbeams)
elif val == 768: # raw == b'\x00\x03':
if verbose:
print('Intensity found at %g' % offset)
ens_data['IData'] = parse_TRDI_intensity(ensbytes, offset, ncells, nbeams)
elif val == 1024: # raw == b'\x00\x04':
if verbose:
print('PGood found at %g' % offset)
ens_data['GData'] = parse_TRDI_percent_good(ensbytes, offset, ncells, nbeams)
elif val == 1280: # raw == b'\x00\x05':
if verbose:
print('Status profile found at %g' % offset)
elif val == 1536: # raw == b'\x00\x06':
if verbose:
print('BT found at %g' % offset)
ens_data['BTData'] = parse_TRDI_bottom_track(ensbytes, offset, nbeams)
elif val == 1792: # raw == b'\x00\x07':
# this not defined in TRDI docs
pass
elif val == 2048: # raw == b'\x00\x08':
if verbose:
print('MicroCAT data found at %g' % offset)
elif val == 12800: # raw == b'\x00\x32': #12800
if verbose:
print('Instrument transformation found at %g' % offset)
ens_data['XformMatrix'] = parse_TRDI_transformation_matrix(ensbytes, offset, nbeams)
elif val == 28672: # raw == b'\x00\x70':
if verbose:
print('V Series system config found at %g' % offset)
ens_data['VSysConfig'] = parse_TRDI_vertical_system_configuration(ensbytes, offset)
elif val == 28673: # raw == b'\x01\x70':
if verbose:
print('V Series ping setup found at %g' % offset)
ens_data['VPingSetup'] = parse_TRDI_vertical_ping_setup(ensbytes, offset)
elif val == 28674: # raw == b'\x02\x70':
if verbose:
print('V Series ADC Data found at %g' % offset)
# currently not defined well in TRDI docs
elif val == 28675: # raw == b'\x03\x70':
if verbose:
print('V Series System Configuration Data found at %g' % offset)
# currently not defined well in TRDI docs
elif val == 3841: # raw == b'\x01\x0f':
if verbose:
print('Vertical Beam Leader Data found at %g' % offset)
ens_data['VBeamLeader'] = parse_TRDI_vertical_beam_leader(ensbytes, offset)
elif val == 2560: # raw == b'\x00\x0a':
if verbose:
print('Vertical Beam Velocity Data found at %g' % offset)
ens_data['VBeamVData'] = parse_TRDI_vertical_velocity(ensbytes, offset,
ens_data['VBeamLeader']['Vertical_Depth_Cells'])
elif val == 2816: # raw == b'\x00\x0b':
if verbose:
print('Vertical Beam Correlation Data found at %g' % offset)
ens_data['VBeamCData'] = parse_TRDI_vertical_correlation(ensbytes, offset,
ens_data['VBeamLeader']['Vertical_Depth_Cells'])
elif val == 3072: # raw == b'\x00\x0c':
if verbose:
print('Vertical Beam Amplitude Data found at %g' % offset)
ens_data['VBeamIData'] = parse_TRDI_vertical_intensity(ensbytes, offset,
ens_data['VBeamLeader']['Vertical_Depth_Cells'])
elif val == 3328: # raw == b'\x00\x0d':
if verbose:
print('Vertical Beam Percent Good Data found at %g' % offset)
ens_data['VBeamGData'] = parse_TRDI_vertical_percent_good(ensbytes, offset,
ens_data['VBeamLeader']['Vertical_Depth_Cells'])
        elif val == 28676: # raw == b'\x04\x70':
if verbose:
print('V Series Event Log Data found at %g' % offset)
elif val == 11: # raw == b'\x0b\x00':
if verbose:
print('Wavesmon 4 Wave Parameters found at %g' % offset)
ens_data['WaveParams'] = parse_TRDI_wave_parameters(ensbytes, offset)
elif val == 12: # raw == b'\x0c\x00':
if verbose:
print('Wavesmon 4 Sea and Swell found at %g' % offset)
ens_data['WaveSeaSwell'] = parse_TRDI_wave_sea_swell(ensbytes, offset)
else:
print('ID %d unrecognized at %g' % (val, offset))
ens_error = 'no ID'
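    # the last two bytes of the ensemble hold the checksum, stored little-endian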
csum = __computeChecksum(ensbytes)
if csum != (ensbytes[-2]+(ensbytes[-1] << 8)):
ens_error = 'checksum failure'
return ens_data, ens_error
def setup_netcdf_file(fname, ens_data, gens, serial_number, time_type, delta_t):
"""
create the netcdf output file, define dimensions and variables
:param str fname: path and name of netcdf file
:param dict ens_data: data from the first ensemble to be read
:param tuple gens: start and end ensemble indices
:param str serial_number: instrument serial number
:param str time_type: indicate if "CF", "CF_with_EPIC", "EPIC_with_CF" or "EPIC" timebase for "time"
:param str delta_t: time between ensembles
:return: netcdf file object, string describing the time units for CF time
"""
# note that
# f4 = 4 byte, 32 bit float
# maxfloat = 3.402823*10**38;
    # where the variable is based on a single dimension, usually time, it is still expressed as a tuple ("time") and
# needs to be kept that way, even though pylint complains
intfill = -32768
floatfill = 1E35
    # delta_t may arrive as None or as an int; handle both cases here
if delta_t is None:
delta_t = "none"
if isinstance(delta_t, int):
delta_t = str(delta_t)
nens = gens[1]-gens[0]-1
print('creating netCDF file %s with %d records' % (fname, nens))
cdf = Dataset(fname, "w", clobber=True, format="NETCDF4")
# dimensions, in EPIC order
cdf.createDimension('time', nens)
cdf.createDimension('depth', ens_data['FLeader']['Number_of_Cells'])
cdf.createDimension('lat', 1)
cdf.createDimension('lon', 1)
# write global attributes
cdf.history = "translated to netCDF by TRDIpd0tonetcdf.py"
cdf.sensor_type = "TRDI"
cdf.serial_number = serial_number
cdf.DELTA_T = delta_t
cdf.sample_rate = ens_data['FLeader']['Time_Between_Ping Groups']
write_dict_to_cdf_attributes(cdf, ens_data['FLeader'], "TRDI_")
varobj = cdf.createVariable('Rec', 'u4', 'time', fill_value=intfill)
varobj.units = "count"
varobj.long_name = "Ensemble Number"
# the ensemble number is a two byte LSB and a one byte MSB (for the rollover)
# varobj.valid_range = [0, 2**23]
# it's not yet clear which way to go with this. python tools like xarray
# and panoply demand that time be a CF defined time.
# USGS CMG MATLAB tools need time and time2
# TODO - CF_time can come out as YYYY-M-D for dates with single digit months and days, check to see if this is ISO
# and fix if it is not. This is a better way:
# d = datetime.datetime(2010, 7, 4, 12, 15, 58)
# '{:%Y-%m-%d %H:%M:%S}'.format(d)
if time_type == 'EPIC_with_CF':
# we include time and time2 for EPIC compliance
varobj = cdf.createVariable('time', 'u4', ('time',))
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj = cdf.createVariable('time2', 'u4', ('time',))
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
cf_units = ""
# we include cf_time for cf compliance and use by python packages like xarray
# if f8, 64 bit is not used, time is clipped
# for ADCP fast sampled, single ping data, need millisecond resolution
varobj = cdf.createVariable('cf_time', 'f8', 'time')
# for cf convention, always assume UTC for now, and use the UNIX Epoch as the reference
varobj.units = "seconds since %d-%d-%d %d:%d:%f 0:00" % (ens_data['VLeader']['Year'],
ens_data['VLeader']['Month'],
ens_data['VLeader']['Day'],
ens_data['VLeader']['Hour'],
ens_data['VLeader']['Minute'],
ens_data['VLeader']['Second'] +
ens_data['VLeader']['Hundredths'] / 100)
varobj.standard_name = "time"
varobj.axis = "T"
elif time_type == "CF_with_EPIC":
# cf_time for cf compliance and use by python packages like xarray
# if f8, 64 bit is not used, time is clipped
# for ADCP fast sampled, single ping data, need millisecond resolution
varobj = cdf.createVariable('time', 'f8', ('time',))
# for cf convention, always assume UTC for now, and use the UNIX Epoch as the reference
varobj.units = "seconds since %d-%d-%d %d:%d:%f 0:00" % (ens_data['VLeader']['Year'],
ens_data['VLeader']['Month'],
ens_data['VLeader']['Day'],
ens_data['VLeader']['Hour'],
ens_data['VLeader']['Minute'],
ens_data['VLeader']['Second'] +
ens_data['VLeader']['Hundredths'] / 100)
cf_units = "seconds since %d-%d-%d %d:%d:%f 0:00" % (ens_data['VLeader']['Year'], ens_data['VLeader']['Month'],
ens_data['VLeader']['Day'], ens_data['VLeader']['Hour'],
ens_data['VLeader']['Minute'],
ens_data['VLeader']['Second']
+ ens_data['VLeader']['Hundredths'] / 100)
varobj.standard_name = "time"
varobj.axis = "T"
varobj.type = "UNEVEN"
# we include time and time2 for EPIC compliance
# this statement resulted in a fill value of -1??
# varobj = cdf.createVariable('EPIC_time','u4',('time',))
varobj = cdf.createVariable('EPIC_time', 'u4', ('time',), fill_value=False)
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
# this statement resulted in a fill value of -1??
# varobj = cdf.createVariable('EPIC_time2','u4',('time',))
varobj = cdf.createVariable('EPIC_time2', 'u4', ('time',), fill_value=False)
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
elif time_type == "EPIC":
varobj = cdf.createVariable('time', 'u4', ('time',))
varobj.units = "True Julian Day"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
varobj = cdf.createVariable('time2', 'u4', ('time',))
varobj.units = "msec since 0:00 GMT"
varobj.epic_code = 624
varobj.datum = "Time (UTC) in True Julian Days: 2440000 = 0000 h on May 23, 1968"
varobj.NOTE = "Decimal Julian day [days] = time [days] + ( time2 [msec] / 86400000 [msec/day] )"
cf_units = ""
else: # only CF time
# this is best for use by python packages like xarray
# if f8, 64 bit is not used, time is clipped
# for ADCP fast sampled, single ping data, need millisecond resolution
varobj = cdf.createVariable('time', 'f8', ('time',))
# for cf convention, always assume UTC for now, and use the UNIX Epoch as the reference
varobj.units = "seconds since %d-%d-%d %d:%d:%f 0:00" % (ens_data['VLeader']['Year'],
ens_data['VLeader']['Month'],
ens_data['VLeader']['Day'],
ens_data['VLeader']['Hour'],
ens_data['VLeader']['Minute'],
ens_data['VLeader']['Second'] +
ens_data['VLeader']['Hundredths'] / 100)
cf_units = "seconds since %d-%d-%d %d:%d:%f 0:00" % (ens_data['VLeader']['Year'], ens_data['VLeader']['Month'],
ens_data['VLeader']['Day'], ens_data['VLeader']['Hour'],
ens_data['VLeader']['Minute'],
ens_data['VLeader']['Second']
+ ens_data['VLeader']['Hundredths'] / 100)
varobj.standard_name = "time"
varobj.axis = "T"
varobj.type = "UNEVEN"
varobj = cdf.createVariable('bindist', 'f4', ('depth',), fill_value=floatfill)
# note name is one of the netcdf4 reserved attributes, use setncattr
varobj.setncattr('name', "bindist")
varobj.units = "m"
varobj.long_name = "bin distance from instrument for slant beams"
varobj.epic_code = 0
# varobj.valid_range = [0 0]
varobj.NOTE = "distance is calculated from center of bin 1 and bin size"
bindist = []
for idx in range(ens_data['FLeader']['Number_of_Cells']):
bindist.append(idx * (ens_data['FLeader']['Depth_Cell_Length_cm'] / 100) +
ens_data['FLeader']['Bin_1_distance_cm'] / 100)
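    # worked example (hypothetical settings): with Bin_1_distance_cm = 211 and
    # Depth_Cell_Length_cm = 100, bindist = [2.11, 3.11, 4.11, ...] metres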
varobj[:] = bindist[:]
varobj = cdf.createVariable('depth', 'f4', ('depth',)) # no fill for ordinates
varobj.units = "m"
varobj.long_name = "distance from transducer, depth placeholder"
varobj.center_first_bin_m = ens_data['FLeader']['Bin_1_distance_cm'] / 100
varobj.blanking_distance_m = ens_data['FLeader']['Blank_after_Transmit_cm'] / 100
varobj.bin_size_m = ens_data['FLeader']['Depth_Cell_Length_cm'] / 100
varobj.bin_count = ens_data['FLeader']['Number_of_Cells']
varobj[:] = bindist[:]
varobj = cdf.createVariable('sv', 'f4', ('time',), fill_value=floatfill)
varobj.units = "m s-1"
varobj.long_name = "sound velocity (m s-1)"
# varobj.valid_range = [1400, 1600]
for i in range(4):
varname = "vel%d" % (i+1)
varobj = cdf.createVariable(varname, 'f4', ('time', 'depth'), fill_value=floatfill)
varobj.units = "mm s-1"
varobj.long_name = "Beam %d velocity (mm s-1)" % (i+1)
varobj.epic_code = 1277+i
# varobj.valid_range = [-32767, 32767]
for i in range(4):
varname = "cor%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "Beam %d correlation" % (i+1)
varobj.epic_code = 1285+i
# varobj.valid_range = [0, 255]
for i in range(4):
varname = "att%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "counts"
varobj.epic_code = 1281+i
varobj.long_name = "ADCP attenuation of beam %d" % (i+1)
# varobj.valid_range = [0, 255]
if 'GData' in ens_data:
for i in range(4):
varname = "PGd%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "Percent Good Beam %d" % (i+1)
varobj.epic_code = 1241+i
# varobj.valid_range = [0, 100]
varobj = cdf.createVariable('Hdg', 'f4', ('time',), fill_value=floatfill)
varobj.units = "hundredths of degrees"
varobj.long_name = "INST Heading"
varobj.epic_code = 1215
varobj.heading_alignment = ens_data['FLeader']['Heading_Alignment_Hundredths_of_Deg']
varobj.heading_bias = ens_data['FLeader']['Heading_Bias_Hundredths_of_Deg']
# varobj.valid_range = [0, 36000]
if ens_data['FLeader']['Heading_Bias_Hundredths_of_Deg'] == 0:
varobj.NOTE_9 = "no heading bias was applied by EB during deployment or by wavesmon"
else:
varobj.NOTE_9 = "a heading bias was applied by EB during deployment or by wavesmon"
varobj = cdf.createVariable('Ptch', 'f4', ('time',), fill_value=floatfill)
varobj.units = "hundredths of degrees"
varobj.long_name = "INST Pitch"
varobj.epic_code = 1216
# varobj.valid_range = [-18000, 18000] # physical limit, not sensor limit
varobj = cdf.createVariable('Roll', 'f4', ('time',), fill_value=floatfill)
varobj.units = "hundredths of degrees"
varobj.long_name = "INST Roll"
varobj.epic_code = 1217
# varobj.valid_range = [-18000, 18000] # physical limit, not sensor limit
varobj = cdf.createVariable('HdgSTD', 'f4', ('time',), fill_value=floatfill)
varobj.units = "degrees"
varobj.long_name = "Heading Standard Deviation"
varobj = cdf.createVariable('PtchSTD', 'f4', ('time',), fill_value=floatfill)
varobj.units = "tenths of degrees"
varobj.long_name = "Pitch Standard Deviation"
varobj = cdf.createVariable('RollSTD', 'f4', ('time',), fill_value=floatfill)
varobj.units = "tenths of degrees"
varobj.long_name = "Roll Standard Deviation"
varobj = cdf.createVariable('Tx', 'f4', ('time',), fill_value=floatfill)
varobj.units = "hundredths of degrees"
varobj.long_name = "ADCP Transducer Temperature"
varobj.epic_code = 3017
# varobj.valid_range = [-500, 4000]
varobj = cdf.createVariable('S', 'f4', ('time',), fill_value=floatfill)
varobj.units = "PPT"
varobj.long_name = "SALINITY (PPT)"
varobj.epic_code = 40
# varobj.valid_range = [0, 40]
varobj = cdf.createVariable('xmitc', 'f4', ('time',), fill_value=floatfill)
varobj.units = "amps"
varobj.long_name = "transmit current"
varobj = cdf.createVariable('xmitv', 'f4', ('time',), fill_value=floatfill)
varobj.units = "volts"
varobj.long_name = "transmit voltage"
varobj = cdf.createVariable('Ambient_Temp', 'i2', ('time',), fill_value=intfill)
varobj.units = "C"
varobj.long_name = "Ambient_Temp"
varobj = cdf.createVariable('Pressure+', 'i2', ('time',), fill_value=intfill)
varobj.units = "unknown"
varobj.long_name = "Pressure+"
varobj = cdf.createVariable('Pressure-', 'i2', ('time',), fill_value=intfill)
varobj.units = "unknown"
varobj.long_name = "Pressure-"
varobj = cdf.createVariable('Attitude_Temp', 'i2', ('time',), fill_value=intfill)
varobj.units = "C"
varobj.long_name = "Attitude_Temp"
for i in range(4):
varname = "EWD%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "binary flag"
varobj.long_name = "Error Status Word %d" % (i+1)
if ens_data['FLeader']['Depth_sensor_available'] == 'Yes':
varobj = cdf.createVariable('Pressure', 'f4', ('time',), fill_value=floatfill)
varobj.units = "deca-pascals"
varobj.long_name = "ADCP Transducer Pressure"
varobj.epic_code = 4
varobj = cdf.createVariable('PressVar', 'f4', ('time',), fill_value=floatfill)
varobj.units = "deca-pascals"
varobj.long_name = "ADCP Transducer Pressure Variance"
if 'BTData' in ens_data:
# write globals attributable to BT setup
cdf.setncattr('TRDI_BT_pings_per_ensemble', ens_data['BTData']['Pings_per_ensemble'])
cdf.setncattr('TRDI_BT_reacquire_delay', ens_data['BTData']['delay_before_reacquire'])
cdf.setncattr('TRDI_BT_min_corr_mag', ens_data['BTData']['Corr_Mag_Min'])
cdf.setncattr('TRDI_BT_min_eval_mag', ens_data['BTData']['Eval_Amp_Min'])
cdf.setncattr('TRDI_BT_min_percent_good', ens_data['BTData']['PGd_Minimum'])
cdf.setncattr('TRDI_BT_mode', ens_data['BTData']['Mode'])
cdf.setncattr('TRDI_BT_max_err_vel', ens_data['BTData']['Err_Vel_Max'])
# cdf.setncattr('TRDI_BT_max_tracking_depth',ens_data['BTData'][''])
# cdf.setncattr('TRDI_BT_shallow_water_gain',ens_data['BTData'][''])
for i in range(4):
varname = "BTR%d" % (i+1)
varobj = cdf.createVariable(varname, 'u8', ('time',), fill_value=intfill)
varobj.units = "cm"
varobj.long_name = "BT Range %d" % (i+1)
for i in range(4):
varnames = ('BTWe', 'BTWu', 'BTWv', 'BTWd')
longnames = ('BT Error Velocity', 'BT Eastward Velocity', 'BT Northward Velocity', 'BT Vertical Velocity')
if ens_data['FLeader']['Coord_Transform'] == 'EARTH':
                varobj = cdf.createVariable(varnames[i], 'i2', ('time',), fill_value=intfill)
                varobj.units = "mm s-1"
                varobj.long_name = "%s, mm s-1" % longnames[i]
else:
varname = "BTV%d" % (i+1)
varobj = cdf.createVariable(varname, 'i2', ('time',), fill_value=intfill)
varobj.units = "mm s-1"
varobj.long_name = "BT velocity, mm s-1 %d" % (i+1)
for i in range(4):
varname = "BTc%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "BT correlation %d" % (i+1)
for i in range(4):
varname = "BTe%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "BT evaluation amplitude %d" % (i+1)
for i in range(4):
varname = "BTp%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "percent"
varobj.long_name = "BT percent good %d" % (i+1)
# varobj.valid_range = [0, 100]
for i in range(4):
varname = "BTRSSI%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "BT Receiver Signal Strength Indicator %d" % (i+1)
if ens_data['BTData']['Mode'] == 0: # water reference layer was used
varobj = cdf.createVariable('BTRmin', 'f4', ('time',), fill_value=floatfill)
varobj.units = 'dm'
varobj.long_name = "BT Ref. min"
varobj = cdf.createVariable('BTRnear', 'f4', ('time',), fill_value=floatfill)
varobj.units = 'dm'
varobj.long_name = "BT Ref. near"
varobj = cdf.createVariable('BTRfar', 'f4', ('time',), fill_value=floatfill)
varobj.units = 'dm'
varobj.long_name = "BT Ref. far"
for i in range(4):
varname = "BTRv%d" % (i+1)
varobj = cdf.createVariable(varname, 'i2', ('time',), fill_value=intfill)
varobj.units = "mm s-1"
varobj.long_name = "BT Ref. velocity, mm s-1 %d" % (i+1)
for i in range(4):
varname = "BTRc%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "BT Ref. correlation %d" % (i+1)
for i in range(4):
varname = "BTRi%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "BT Ref. intensity %d" % (i+1)
for i in range(4):
varname = "BTRp%d" % (i+1)
varobj = cdf.createVariable(varname, 'u2', ('time',), fill_value=intfill)
varobj.units = "percent"
varobj.long_name = "BT Ref. percent good %d" % (i+1)
varobj.epic_code = 1269+i
if 'VPingSetup' in ens_data:
write_dict_to_cdf_attributes(cdf, ens_data['VPingSetup'], "TRDI_VBeam_")
if 'VBeamLeader' in ens_data:
write_dict_to_cdf_attributes(cdf, ens_data['VBeamLeader'], "TRDI_VBeam_")
if 'VBeamVData' in ens_data:
if ens_data['VBeamLeader']['Vertical_Depth_Cells'] == ens_data['FLeader']['Number_of_Cells']:
varobj = cdf.createVariable("vel5", 'f4', ('time', 'depth'), fill_value=floatfill)
varobj.units = "mm s-1"
varobj.long_name = "Beam 5 velocity (mm s-1)"
varobj = cdf.createVariable("cor5", 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "Beam 5 correlation"
varobj = cdf.createVariable("att5", 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "ADCP attenuation of beam 5"
if 'VBeamGData' in ens_data:
varobj = cdf.createVariable("PGd5", 'u2', ('time', 'depth'), fill_value=intfill)
varobj.units = "counts"
varobj.long_name = "Percent Good Beam 5"
else:
cdf.TRDI_VBeam_note1 = 'Vertical beam data found without Percent Good'
else:
print("Vertical beam data found with different number of cells.")
cdf.TRDI_VBeam_note = "Vertical beam data found with different number of cells. " + \
"Vertical beam data not exported to netCDF"
print("Vertical beam data not exported to netCDF")
if 'WaveParams' in ens_data:
# no units given for any of these in the TRDI docs
varobj = cdf.createVariable("Hs", 'f4', ('time',), fill_value=floatfill)
varobj.units = "m"
varobj.long_name = "Significant Wave Height (m)"
varobj = cdf.createVariable("Tp", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Peak Wave Period (s)"
varobj = cdf.createVariable("Dp", 'f4', ('time',), fill_value=floatfill)
varobj.units = "Deg."
varobj.long_name = "Peak Wave Direction (Deg.)"
varobj = cdf.createVariable("Dm", 'f4', ('time',), fill_value=floatfill)
varobj.units = "Deg."
varobj.long_name = "Mea Peak Wave Direction (Deg.)"
varobj = cdf.createVariable("SHmax", 'f4', ('time',), fill_value=floatfill)
varobj.units = "m"
varobj.long_name = "Maximum Wave Height (m)"
varobj.note = "from zero crossing analysis of surface track time series"
varobj = cdf.createVariable("SH13", 'f4', ('time',), fill_value=floatfill)
varobj.units = "m"
varobj.long_name = "Significant Wave Height of the largest 1/3 of the waves (m)"
varobj.note = "in the field from zero crossing anaylsis of surface track time series"
varobj = cdf.createVariable("SH10", 'f4', ('time',), fill_value=floatfill)
varobj.units = "m"
varobj.long_name = "Significant Wave Height of the largest 1/10 of the waves (m)"
varobj.note = "in the field from zero crossing anaylsis of surface track time series"
varobj = cdf.createVariable("STmax", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Maximum Peak Wave Period (s)"
varobj.note = "from zero crossing analysis of surface track time series"
varobj = cdf.createVariable("ST13", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Period associated with the peak wave height of the largest 1/3 of the waves (s)"
varobj.note = "in the field from zero crossing analysis of surface track time series"
varobj = cdf.createVariable("ST10", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Period associated with the peak wave height of the largest 1/10 of the waves (s)"
varobj.note = "in the field from zero crossing anaylsis of surface track time series"
varobj = cdf.createVariable("T01", 'f4', ('time',), fill_value=floatfill)
varobj.units = " "
varobj = cdf.createVariable("Tz", 'f4', ('time',), fill_value=floatfill)
varobj.units = " "
varobj = cdf.createVariable("Tinv1", 'f4', ('time',), fill_value=floatfill)
varobj.units = " "
varobj = cdf.createVariable("S0", 'f4', ('time',), fill_value=floatfill)
varobj.units = " "
varobj = cdf.createVariable("Source", 'f4', ('time',), fill_value=floatfill)
varobj.units = " "
if 'WaveSeaSwell' in ens_data:
# no units given for any of these in the TRDI docs
varobj = cdf.createVariable("HsSea", 'f4', ('time',), fill_value=floatfill)
varobj.units = "m"
varobj.long_name = "Significant Wave Height (m)"
varobj.note = "in the sea region of the power spectrum"
varobj = cdf.createVariable("HsSwell", 'f4', ('time',), fill_value=floatfill)
varobj.units = "m"
varobj.long_name = "Significant Wave Height (m)"
varobj.note = "in the swell region of the power spectrum"
varobj = cdf.createVariable("TpSea", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Peak Wave Period (s)"
varobj.note = "in the sea region of the power spectrum"
varobj = cdf.createVariable("TpSwell", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Peak Wave Period (s)"
varobj.note = "in the swell region of the power spectrum"
varobj = cdf.createVariable("DpSea", 'f4', ('time',), fill_value=floatfill)
varobj.units = "Deg."
varobj.long_name = "Peak Wave Direction (Deg.)"
varobj.note = "in the sea region of the power spectrum"
varobj = cdf.createVariable("DpSwell", 'f4', ('time',), fill_value=floatfill)
varobj.units = "Deg."
varobj.long_name = "Peak Wave Direction (Deg.)"
varobj.note = "in the swell region of the power spectrum"
varobj = cdf.createVariable("SeaSwellPeriod", 'f4', ('time',), fill_value=floatfill)
varobj.units = "s"
varobj.long_name = "Transition Period between Sea and Swell (s)"
return cdf, cf_units
def bitstrLE(byte):
"""
make a bit string from little endian byte
:param byte byte: a byte
:return: a string of ones and zeros, the bits in the byte
"""
# surely there's a better way to do this!!
bits = ""
for i in [7, 6, 5, 4, 3, 2, 1, 0]: # Little Endian
if (byte >> i) & 1:
bits += "1"
else:
bits += "0"
return bits
def bitstrBE(byte):
"""
make a bit string from big endian byte
:param byte byte: a byte
    :return: a string of ones and zeros, the bits in the byte
"""
# surely there's a better way to do this!!
bits = ""
for i in range(8): # Big Endian
if (byte[0] >> i) & 1:
bits += "1"
else:
bits += "0"
return bits
def read_TRDI_header(infile):
"""
read the TRDI header bytes directly from a file pointer position and test for end of file
:param infile: pointer to a file open for reading
:return: a dictionary of the TRDI Header data
"""
header_data = {}
try:
header_data['headerID'] = infile.read(1)
except:
return None
try:
header_data['sourceID'] = infile.read(1)
except:
return None
try:
header_data['nbytesperens'] = struct.unpack('<H', infile.read(2))[0]
except:
return None
infile.read(1) # spare, skip it
header_data['ndatatypes'] = infile.read(1)[0] # remember, bytes objects are arrays
offsets = [0]*header_data['ndatatypes'] # predefine a list of ints to fill
for i in range(header_data['ndatatypes']):
offsets[i] = struct.unpack('<H', infile.read(2))[0]
header_data['offsets'] = offsets
return header_data
def parse_TRDI_header(bstream):
"""
parse the TRDI header data for the number of data types and byte offsets to each
:param bytes bstream: the raw binary header information
:return: dictionary of readable header data
"""
header_data = {
'headerID': bstream[0], # byte 1
'sourceID': bstream[1], # byte 2
'nbytesperens': struct.unpack('<H', bstream[2:4])[0],
# spare, skip it, byte 5
'ndatatypes': bstream[5] # byte 6
}
offsets = [0]*header_data['ndatatypes'] # predefine a list of ints to fill
for i in range(header_data['ndatatypes']):
offsets[i] = struct.unpack('<H', bstream[6+i*2:6+i*2+2])[0]
header_data['offsets'] = offsets
return header_data
def parse_TRDI_fixed_leader(bstream, offset):
"""
parse the Fixed Leader section of data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: dictionary of readable fixed leader data
"""
f_leader_data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 0:
print("expected fixed leader ID, instead found %g", leader_id)
return -1
    f_leader_data['CPU_Version'] = "%s.%s" % (bstream[offset+2], bstream[offset+3])
f_leader_data['System_Configuration_LSB'] = bitstrLE(bstream[offset+4])
# anyone who has a better way to convert these bits, please tell me!
f_leader_data['System_Frequency'] = int(f_leader_data['System_Configuration_LSB'][5:8], 2)
sys_freqs = (75, 150, 300, 600, 1200, 2400)
f_leader_data['System_Frequency'] = sys_freqs[f_leader_data['System_Frequency']]
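    # a mask-based alternative, as a sketch: bstream[offset+4] & 0b00000111
    # (the low three bits select the entry in sys_freqs)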
if f_leader_data['System_Configuration_LSB'][4] == "1":
f_leader_data['Beam_Pattern'] = 'Convex'
else:
f_leader_data['Beam_Pattern'] = 'Concave'
f_leader_data['Sensor_Configuration'] = int(f_leader_data['System_Configuration_LSB'][2:4], 2) + 1
if f_leader_data['System_Configuration_LSB'][1] == "1":
f_leader_data['Transducer_Head_Is_Attached'] = 'Yes'
else:
f_leader_data['Transducer_Head_Is_Attached'] = 'No'
if f_leader_data['System_Configuration_LSB'][0] == "1":
f_leader_data['Orientation'] = 'Up-facing beams'
else:
f_leader_data['Orientation'] = 'Down-facing beams'
f_leader_data['System_Configuration_MSB'] = bitstrLE(bstream[offset+5])
f_leader_data['Beam_Angle'] = int(f_leader_data['System_Configuration_MSB'][5:8], 2)
# the angles 15, 20, and 30 are used by the Workhorse
# the angle 25 is used by the Sentinel V, and so far, is always 25
angles = (15, 20, 30, 0, 0, 0, 0, 25)
f_leader_data['Beam_Angle'] = angles[f_leader_data['Beam_Angle']]
f_leader_data['Beam_Configuration'] = int(f_leader_data['System_Configuration_MSB'][0:4], 2)
if f_leader_data['Beam_Configuration'] == 4:
f_leader_data['Beam_Configuration'] = '4-bm janus'
elif f_leader_data['Beam_Configuration'] == 5:
f_leader_data['Beam_Configuration'] = '5-bm janus cfig demod'
elif f_leader_data['Beam_Configuration'] == 15:
f_leader_data['Beam_Configuration'] = '5-bm janus cfig (2 demod)'
else:
f_leader_data['Beam_Configuration'] = 'unknown'
f_leader_data['Simulated_Data'] = bstream[offset+6]
f_leader_data['Lag_Length'] = bstream[offset+7]
f_leader_data['Number_of_Beams'] = bstream[offset+8]
f_leader_data['Number_of_Cells'] = bstream[offset+9]
f_leader_data['Pings_Per_Ensemble'] = struct.unpack('<h', bstream[offset+10:offset+12])[0]
f_leader_data['Depth_Cell_Length_cm'] = struct.unpack('<h', bstream[offset+12:offset+14])[0]
f_leader_data['Blank_after_Transmit_cm'] = struct.unpack('<h', bstream[offset+14:offset+16])[0]
f_leader_data['Signal_Processing_Mode'] = bstream[offset+16]
f_leader_data['Low_Corr_Threshold'] = bstream[offset+17]
f_leader_data['No._Code_Reps'] = bstream[offset+18]
f_leader_data['PGd_Minimum'] = bstream[offset+19]
f_leader_data['Error_Velocity_Threshold'] = struct.unpack('<h', bstream[offset+20:offset+22])[0]
# TODO ping group time needs to be formatted better
f_leader_data['Time_Between_Ping Groups'] = "%03d:%02d:%02d" % (bstream[offset+22], bstream[offset+23],
bstream[offset+24])
f_leader_data['Coord_Transform_LSB'] = bitstrLE(bstream[offset+25])
f_leader_data['Coord_Transform'] = int(f_leader_data['Coord_Transform_LSB'][3:5], 2)
xforms = ('BEAM', 'INST', 'SHIP', 'EARTH')
f_leader_data['Coord_Transform'] = xforms[f_leader_data['Coord_Transform']]
if f_leader_data['Coord_Transform_LSB'][5] == '1':
f_leader_data['Tilts_Used'] = 'Yes'
else:
f_leader_data['Tilts_Used'] = 'No'
if f_leader_data['Coord_Transform_LSB'][6] == '1':
f_leader_data['3-Beam_Solution_Used'] = 'Yes'
else:
f_leader_data['3-Beam_Solution_Used'] = 'No'
if f_leader_data['Coord_Transform_LSB'][7] == '1':
f_leader_data['Bin_Mapping_Used'] = 'Yes'
else:
f_leader_data['Bin_Mapping_Used'] = 'No'
f_leader_data['Heading_Alignment_Hundredths_of_Deg'] = struct.unpack('<h', bstream[offset+26:offset+28])[0]
f_leader_data['Heading_Bias_Hundredths_of_Deg'] = struct.unpack('<h', bstream[offset+28:offset+30])[0]
f_leader_data['Sensor_Source_Byte'] = bitstrLE(bstream[offset+30])
if f_leader_data['Sensor_Source_Byte'][1] == '1':
f_leader_data['Calculate_EC_from_ED_ES_and_ET'] = 'Yes'
else:
f_leader_data['Calculate_EC_from_ED_ES_and_ET'] = 'No'
if f_leader_data['Sensor_Source_Byte'][2] == '1':
f_leader_data['Uses_ED_from_depth_sensor'] = 'Yes'
else:
f_leader_data['Uses_ED_from_depth_sensor'] = 'No'
if f_leader_data['Sensor_Source_Byte'][3] == '1':
f_leader_data['Uses_EH_from_transducer_heading_sensor'] = 'Yes'
else:
f_leader_data['Uses_EH_from_transducer_heading_sensor'] = 'No'
if f_leader_data['Sensor_Source_Byte'][4] == '1':
f_leader_data['Uses_EP_from_transducer_pitch_sensor'] = 'Yes'
else:
        f_leader_data['Uses_EP_from_transducer_pitch_sensor'] = 'No'
if f_leader_data['Sensor_Source_Byte'][5] == '1':
f_leader_data['Uses_ER_from_transducer_roll_sensor'] = 'Yes'
else:
f_leader_data['Uses_ER_from_transducer_roll_sensor'] = 'No'
if f_leader_data['Sensor_Source_Byte'][6] == '1':
f_leader_data['Uses_ES_from_conductivity_sensor'] = 'Yes'
else:
f_leader_data['Uses_ES_from_conductivity_sensor'] = 'No'
if f_leader_data['Sensor_Source_Byte'][7] == '1':
f_leader_data['Uses_ET_from_transducer_temperature_sensor'] = 'Yes'
else:
f_leader_data['Uses_ET_from_transducer_temperature_sensor'] = 'No'
f_leader_data['Sensor_Avail_Byte'] = bitstrLE(bstream[offset+31])
if f_leader_data['Sensor_Avail_Byte'][1] == '1':
f_leader_data['Speed_of_sound_sensor_available'] = 'Yes'
else:
f_leader_data['Speed_of_sound_sensor_available'] = 'No'
if f_leader_data['Sensor_Avail_Byte'][2] == '1':
f_leader_data['Depth_sensor_available'] = 'Yes'
else:
f_leader_data['Depth_sensor_available'] = 'No'
if f_leader_data['Sensor_Avail_Byte'][3] == '1':
f_leader_data['Heading_sensor_available'] = 'Yes'
else:
f_leader_data['Heading_sensor_available'] = 'No'
if f_leader_data['Sensor_Avail_Byte'][4] == '1':
f_leader_data['Pitch_sensor_available'] = 'Yes'
else:
f_leader_data['Pitch_sensor_available'] = 'No'
if f_leader_data['Sensor_Avail_Byte'][5] == '1':
f_leader_data['Roll_sensor_available'] = 'Yes'
else:
f_leader_data['Roll_sensor_available'] = 'No'
if f_leader_data['Sensor_Avail_Byte'][6] == '1':
f_leader_data['Conductivity_sensor_available'] = 'Yes'
else:
f_leader_data['Conductivity_sensor_available'] = 'No'
if f_leader_data['Sensor_Avail_Byte'][7] == '1':
f_leader_data['Temperature_sensor_available'] = 'Yes'
else:
f_leader_data['Temperature_sensor_available'] = 'No'
f_leader_data['Bin_1_distance_cm'] = struct.unpack('<h', bstream[offset+32:offset+34])[0]
f_leader_data['Xmit_pulse_length_cm'] = struct.unpack('<h', bstream[offset+34:offset+36])[0]
f_leader_data['Ref_Lyr_Avg_Starting_cell'] = bstream[offset+36]
f_leader_data['Ref_Lyr_Avg_Ending_cell'] = bstream[offset+37]
f_leader_data['False_Target_Threshold'] = bstream[offset+38]
f_leader_data['Transmit_lag_distance_cm'] = struct.unpack('<h', bstream[offset+40:offset+42])[0]
f_leader_data['CPU_Board_Serial_Number'] = ""
for i in range(8):
f_leader_data['CPU_Board_Serial_Number'] = f_leader_data['CPU_Board_Serial_Number'] + \
("%x" % bstream[offset+42+i])
f_leader_data['System_Bandwidth'] = struct.unpack('<h', bstream[offset+50:offset+52])[0]
f_leader_data['System_Power'] = bstream[offset+52]
f_leader_data['Base_Frequency_Index'] = bstream[offset+53]
# TODO these two need to be interpreted as spare if WH ADCP
# rawBytes, f_leader_data['Serial Number for Remus only'] = struct.unpack('<H',infile.read(2))[0]
# f_leader_data['Beam Angle for H-ADCP only'] = "%g" % infile.read(1)[0]
return f_leader_data
def parse_TRDI_variable_leader(bstream, offset):
"""
parse the Variable Leader section of data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: dictionary of readable variable leader data
"""
v_leader_data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 128:
print("expected variable leader ID, instead found %g", leader_id)
return -1
v_leader_data['Ensemble_Number'] = struct.unpack('<H', bstream[offset+2:offset+4])[0]
v_leader_data['Year'] = bstream[offset+4]
if v_leader_data['Year'] < 50: # circa 2000
v_leader_data['Year'] += 2000
else:
v_leader_data['Year'] += 1900
v_leader_data['Month'] = bstream[offset+5]
v_leader_data['Day'] = bstream[offset+6]
v_leader_data['Hour'] = bstream[offset+7]
v_leader_data['Minute'] = bstream[offset+8]
v_leader_data['Second'] = bstream[offset+9]
v_leader_data['Hundredths'] = bstream[offset+10]
v_leader_data['Ensemble_#_MSB'] = bstream[offset+11]
v_leader_data['Ensemble_Number'] = v_leader_data['Ensemble_Number']+(v_leader_data['Ensemble_#_MSB'] << 16)
v_leader_data['timestr'] = "%04d:%02d:%02d %02d:%02d:%02d.%03d" % (
v_leader_data['Year'], v_leader_data['Month'],
v_leader_data['Day'], v_leader_data['Hour'], v_leader_data['Minute'],
v_leader_data['Second'], v_leader_data['Hundredths'])
# compute time and time2
jd = julian(v_leader_data['Year'], v_leader_data['Month'], v_leader_data['Day'],
v_leader_data['Hour'], v_leader_data['Minute'], v_leader_data['Second'],
v_leader_data['Hundredths'])
v_leader_data['dtobj'] = dt.datetime(v_leader_data['Year'], v_leader_data['Month'], v_leader_data['Day'],
v_leader_data['Hour'], v_leader_data['Minute'], v_leader_data['Second'],
v_leader_data['Hundredths']*10000)
# centiseconds * 10000 = microseconds
jddt = ajd(v_leader_data['dtobj'])
v_leader_data['julian_day_from_as_datetime_object'] = jddt
v_leader_data['julian_day_from_julian'] = jd
# v_leader_data['time'] = jd
v_leader_data['EPIC_time'] = int(math.floor(jd))
v_leader_data['EPIC_time2'] = int((jd - math.floor(jd))*(24*3600*1000))
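    # EPIC convention sketch: time = whole Julian days, time2 = milliseconds into the day,
    # e.g. jd = 2457000.5 gives EPIC_time = 2457000 and EPIC_time2 = 43200000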
v_leader_data['BIT_Result_Byte_13'] = bitstrLE(bstream[offset+12])
v_leader_data['Demod_1_error_bit'] = int(v_leader_data['BIT_Result_Byte_13'][3])
v_leader_data['Demod_0_error_bit'] = int(v_leader_data['BIT_Result_Byte_13'][4])
v_leader_data['Timing_Card_error_bit'] = int(v_leader_data['BIT_Result_Byte_13'][6])
v_leader_data['Speed_of_Sound'] = struct.unpack('<H', bstream[offset+14:offset+16])[0]
v_leader_data['Depth_of_Transducer'] = struct.unpack('<H', bstream[offset+16:offset+18])[0]
v_leader_data['Heading, Pitch, Roll units'] = "hundredths_of_a_degree"
v_leader_data['Heading'] = struct.unpack('<H', bstream[offset+18:offset+20])[0]
v_leader_data['Pitch'] = struct.unpack('<h', bstream[offset+20:offset+22])[0]
v_leader_data['Roll'] = struct.unpack('<h', bstream[offset+22:offset+24])[0]
v_leader_data['Salinity'] = struct.unpack('<H', bstream[offset+24:offset+26])[0]
v_leader_data['Temperature'] = struct.unpack('<H', bstream[offset+26:offset+28])[0]
v_leader_data['MPT_minutes'] = bstream[offset+28]
v_leader_data['MPT_seconds'] = bstream[offset+29]
v_leader_data['MPT_hundredths'] = bstream[offset+30]
v_leader_data['H/Hdg_Std_Dev'] = bstream[offset+31]
v_leader_data['P/Pitch_Std_Dev'] = bstream[offset+32]
v_leader_data['R/Roll_Std_Dev'] = bstream[offset+33]
# the V Series PDO Output is different for the ADC channels
# V PD0 this is ADC Channel 0 not used
v_leader_data['Xmit_Current'] = bstream[offset+34] # ADC Channel 0
# V PD0 this is ADC Channel 1 XMIT Voltage
v_leader_data['Xmit_Voltage'] = bstream[offset+35] # ADC Channel 1
# V PD0 this is ADC Channel 2 not used
v_leader_data['Ambient_Temp'] = bstream[offset+36] # ADC Channel 2
# V PD0 this is ADC Channel 3 not used
v_leader_data['Pressure_(+)'] = bstream[offset+37] # ADC Channel 3
# V PD0 this is ADC Channel 4 not used
v_leader_data['Pressure_(-)'] = bstream[offset+38] # ADC Channel 4
# V PD0 this is ADC Channel 5 not used
v_leader_data['Attitude_Temp'] = bstream[offset+39] # ADC Channel 5
# V PD0 this is ADC Channel 6 not used
v_leader_data['Attitude'] = bstream[offset+40] # ADC Channel 6
# V PD0 this is ADC Channel 7 not used
v_leader_data['Contamination_Sensor'] = bstream[offset+41] # ADC Channel 7
v_leader_data['Error_Status_Word_Low_16_bits_LSB'] = bitstrLE(bstream[offset+42])
v_leader_data['Bus_Error_exception'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][7])
v_leader_data['Address_Error_exception'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][6])
v_leader_data['Illegal_Instruction_exception'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][5])
v_leader_data['Zero_Divide_exception'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][4])
v_leader_data['Emulator_exception'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][3])
v_leader_data['Unassigned_exception'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][2])
v_leader_data['Watchdog_restart_occurred'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][1])
v_leader_data['Battery_Saver_power'] = int(v_leader_data['Error_Status_Word_Low_16_bits_LSB'][0])
v_leader_data['Error_Status_Word_Low_16_bits_MSB'] = bitstrLE(bstream[offset+43])
v_leader_data['Pinging'] = int(v_leader_data['Error_Status_Word_Low_16_bits_MSB'][7])
v_leader_data['Cold_Wakeup_occurred'] = int(v_leader_data['Error_Status_Word_Low_16_bits_MSB'][1])
v_leader_data['Unknown_Wakeup_occurred'] = int(v_leader_data['Error_Status_Word_Low_16_bits_MSB'][0])
v_leader_data['Error_Status_Word_High_16_bits_LSB'] = bitstrLE(bstream[offset+44])
v_leader_data['Clock_Read_error_occurred'] = int(v_leader_data['Error_Status_Word_High_16_bits_LSB'][7])
v_leader_data['Unexpected_alarm'] = int(v_leader_data['Error_Status_Word_High_16_bits_LSB'][6])
v_leader_data['Clock_jump_forward'] = int(v_leader_data['Error_Status_Word_High_16_bits_LSB'][5])
v_leader_data['Clock_jump_backward'] = int(v_leader_data['Error_Status_Word_High_16_bits_LSB'][4])
    v_leader_data['Error_Status_Word_High_16_bits_MSB'] = bitstrLE(bstream[offset+45])
v_leader_data['Power_Fail_(Unrecorded)'] = int(v_leader_data['Error_Status_Word_High_16_bits_MSB'][4])
v_leader_data['Spurious_level_4_intr_(DSP)'] = int(v_leader_data['Error_Status_Word_High_16_bits_MSB'][3])
v_leader_data['Spurious_level_5_intr_(UART)'] = int(v_leader_data['Error_Status_Word_High_16_bits_MSB'][2])
v_leader_data['Spurious_level_6_intr_(CLOCK)'] = int(v_leader_data['Error_Status_Word_High_16_bits_MSB'][1])
v_leader_data['Level_7_interrupt_occurred'] = int(v_leader_data['Error_Status_Word_High_16_bits_MSB'][0])
# pressure of the water at the transducer head relative to one atmosphere (sea level)
# v_leader_data['Pressure word byte 1'] = bitstrLE(bstream[offset+48])
# v_leader_data['Pressure word byte 2'] = bitstrLE(bstream[offset+49])
# v_leader_data['Pressure word byte 3'] = bitstrLE(bstream[offset+50])
# v_leader_data['Pressure word byte 4'] = bitstrLE(bstream[offset+51])
v_leader_data['Pressure_deca-pascals'] = bstream[offset+48]+(bstream[offset+49] << 8)+(bstream[offset+50] << 16) + \
(bstream[offset+51] << 24)
v_leader_data['Pressure_variance_deca-pascals'] = bstream[offset+52]+(bstream[offset+53] << 8) + \
(bstream[offset+54] << 16)+(bstream[offset+55] << 24)
v_leader_data['RTC_Century'] = bstream[offset+57]
v_leader_data['RTC_Year'] = bstream[offset+58]
v_leader_data['RTC_Month'] = bstream[offset+59]
v_leader_data['RTC_Day'] = bstream[offset+60]
v_leader_data['RTC_Hour'] = bstream[offset+61]
v_leader_data['RTC_Minute'] = bstream[offset+62]
v_leader_data['RTC_Second'] = bstream[offset+63]
v_leader_data['RTC_Hundredths'] = bstream[offset+64]
return v_leader_data
def parse_TRDI_velocity(bstream, offset, ncells, nbeams):
"""
parse the velocity data, each velocity value is stored as a two byte, twos complement integer [-32768 to 32767]
with the LSB sent first. Units are mm/s. A value of -32768 = 0x8000 is a bad velocity value
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:param int nbeams: number of acoustic beams
:return: velocity data as a beam x cell numpy array of ints
"""
if bstream[offset+1] != 1:
print("expected velocity ID, instead found %g", bstream[offset+1])
return -1
# start with a numpy array of bad values
data = np.ones((nbeams, ncells), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
for ibeam in range(nbeams):
data[ibeam, icell] = struct.unpack('<h', bstream[offset+ibyte:offset+ibyte+2])[0]
ibyte = ibyte+2
return data
def parse_TRDI_correlation(bstream, offset, ncells, nbeams):
"""
parse the correlation data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:param int nbeams: number of acoustic beams
:return: correlation data as a beam x cell numpy array of ints
"""
if bstream[offset+1] != 2:
print("expected correlation ID, instead found %g", bstream[offset+1])
return -1
# start with a numpy array of bad values
data = np.ones((nbeams, ncells), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
for ibeam in range(nbeams):
data[ibeam, icell] = bstream[offset+ibyte]
ibyte = ibyte+1
return data
def parse_TRDI_intensity(bstream, offset, ncells, nbeams):
"""
parse the intensity data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:param int nbeams: number of acoustic beams
:return: intensity data as a beam x cell numpy array of ints
"""
if bstream[offset+1] != 3:
print("expected intensity ID, instead found %g", bstream[offset+1])
return -1
# start with a numpy array of bad values
data = np.ones((nbeams, ncells), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
for ibeam in range(nbeams):
data[ibeam, icell] = bstream[offset+ibyte]
ibyte = ibyte+1
return data
def parse_TRDI_percent_good(bstream, offset, ncells, nbeams):
"""
parse the Percent Good data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:param int nbeams: number of acoustic beams
:return: percent good data as a beam x cell numpy array of ints
"""
if bstream[offset+1] != 4:
print("expected intensity ID, instead found %g", bstream[offset+1])
return -1
# start with a numpy array of bad values
data = np.ones((nbeams, ncells), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
for ibeam in range(nbeams):
data[ibeam, icell] = bstream[offset+ibyte]
ibyte = ibyte+1
return data
def parse_TRDI_transformation_matrix(bstream, offset, nbeams):
"""
parse the transformation matrix data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int nbeams: number of acoustic beams
:return: transformation matrix data as a beam x 3 numpy array of ints
"""
if bstream[offset+1] != 50: # \x00\x32
print("expected transformation matrix ID, instead found %g", bstream[offset+1])
return -1
# start with a numpy array of bad values
data = np.zeros((nbeams, 3), dtype=int)
ibyte = 2
for iaxis in range(3):
for ibeam in range(nbeams):
data[ibeam, iaxis] = struct.unpack('<h', bstream[offset+ibyte:offset+ibyte+2])[0]
ibyte = ibyte+2
return data
def parse_TRDI_vertical_ping_setup(bstream, offset):
"""
parse the TRDI V ping setup data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: a dict of readable ping setup settings
"""
v_ping_setup_data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 28673: # \x70\x01 stored little endian
print("expected V Series Ping Setup ID, instead found %g" % leader_id)
return -1
v_ping_setup_data['Ensemble_Interval_ms'] = bstream[offset+4]+(bstream[offset+5] << 8) + (
bstream[offset+6] << 16)+(bstream[offset+7] << 24)
v_ping_setup_data['Number_of_Pings'] = struct.unpack('<H', bstream[offset+10:offset+12])[0]
v_ping_setup_data['Time_Between_Pings_ms'] = bstream[offset+10]+(bstream[offset+11] << 8) + (
bstream[offset+12] << 16)+(bstream[offset+13] << 24)
v_ping_setup_data['Offset_Between_Ping_Groups_ms'] = bstream[offset+14]+(bstream[offset+15] << 8) + (
bstream[offset+16] << 16)+(bstream[offset+17] << 24)
v_ping_setup_data['Ping_Sequence_Number'] = struct.unpack('<h', bstream[offset+22:offset+24])[0]
v_ping_setup_data['Ambiguity_Velocity'] = struct.unpack('<h', bstream[offset+24:offset+26])[0]
v_ping_setup_data['RX_Gain'] = bstream[offset+26]
v_ping_setup_data['RX_Beam_Mask'] = bstream[offset+27]
v_ping_setup_data['TX_Beam_Mask'] = bstream[offset+28]
v_ping_setup_data['Ensemble_Offset'] = bstream[offset+30]+(bstream[offset+31] << 8)+(bstream[offset+32] << 16) + (
bstream[offset+33] << 24)
v_ping_setup_data['Ensemble_Count'] = bstream[offset+34]+(bstream[offset+35] << 8)
v_ping_setup_data['Deployment_Start_Century'] = bstream[offset+36]
v_ping_setup_data['Deployment_Start_Year'] = bstream[offset+37]
v_ping_setup_data['Deployment_Start_Month'] = bstream[offset+38]
v_ping_setup_data['Deployment_Start_Day'] = bstream[offset+39]
v_ping_setup_data['Deployment_Start_Hour'] = bstream[offset+40]
v_ping_setup_data['Deployment_Start_Minute'] = bstream[offset+41]
v_ping_setup_data['Deployment_Start_Second'] = bstream[offset+42]
v_ping_setup_data['Deployment_Start_Hundredths'] = bstream[offset+43]
return v_ping_setup_data
def parse_TRDI_vertical_system_configuration(bstream, offset):
"""
parse the TRDI V system configuration data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: a dict of readable system configuration settings
"""
v_sys_config_data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 28672: # \x70\x00 stored little endian
print("expected V Series System Config ID, instead found %g" % leader_id)
return -1
v_sys_config_data['Firmware_Version'] = "%02d:%02d:%02d:%02d" % (bstream[offset+2], bstream[offset+3],
bstream[offset+4], bstream[offset+5])
v_sys_config_data['System_Frequency'] = bstream[offset+6]+(bstream[offset+7] << 8) + (
bstream[offset+8] << 16)+(bstream[offset+9] << 24)
v_sys_config_data['Pressure_Rating'] = struct.unpack('<H', bstream[offset+10:offset+12])[0]
return v_sys_config_data
def parse_TRDI_vertical_beam_leader(bstream, offset):
"""
parse the TRDI V beam leader data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: a dict of readable beam leader settings
"""
v_beam_leader_data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 3841: # \x0f\x01 stored little endian
print("expected Vertical Beam Leader ID, instead found %g" % leader_id)
return -1
v_beam_leader_data['Vertical_Depth_Cells'] = struct.unpack('<H', bstream[offset+2:offset+4])[0]
v_beam_leader_data['Vertical_Pings'] = struct.unpack('<H', bstream[offset+4:offset+6])[0]
v_beam_leader_data['Vertical_Depth_Cell_Size_cm'] = struct.unpack('<H', bstream[offset+6:offset+8])[0]
v_beam_leader_data['Vertical_First_Cell_Range_cm'] = struct.unpack('<H', bstream[offset+8:offset+10])[0]
v_beam_leader_data['Vertical_Mode'] = struct.unpack('<H', bstream[offset+10:offset+12])[0]
# 1 = low resolution slant beam cells = vertical beam cells
# 2 = High resolution, dedicated surface tracking ping with 4:1 transmit/receive ratio or larger
v_beam_leader_data['Vertical_Transmit_cm'] = struct.unpack('<H', bstream[offset+12:offset+14])[0]
v_beam_leader_data['Vertical_Lag_Length_cm'] = struct.unpack('<H', bstream[offset+14:offset+16])[0]
v_beam_leader_data['Transmit_Code_Elements'] = struct.unpack('<H', bstream[offset+16:offset+18])[0]
v_beam_leader_data['Ping_Offset_Time'] = struct.unpack('<H', bstream[offset+30:offset+32])[0]
return v_beam_leader_data
def parse_TRDI_vertical_velocity(bstream, offset, ncells):
"""
parse the vertical beam velocity data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:return: vertical beam velocity data as a numpy array of ints
"""
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 2560: # \x0a\x00 stored little endian
print("expected Vertical Beam velocity ID, instead found %g" % leader_id)
return -1
# start with a numpy array of bad values
data = np.ones(ncells, dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
data[icell] = struct.unpack('<h', bstream[offset+ibyte:offset+ibyte+2])[0]
ibyte += 2
return data
def parse_TRDI_vertical_correlation(bstream, offset, ncells):
"""
parse the vertical beam correlation data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:return: vertical beam correlation data as a numpy array of ints
"""
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 2816: # \x0b\x00 stored little endian
print("expected Vertical Beam correlation ID, instead found %g" % leader_id)
return -1
# start with a numpy array of bad values
data = np.ones((ncells,), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
data[icell] = bstream[offset+ibyte]
ibyte += 1
return data
def parse_TRDI_vertical_intensity(bstream, offset, ncells):
"""
parse the vertical beam intensity data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:return: vertical beam intensity data as a numpy array of ints
"""
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 3072: # \x0c\x00 stored little endian
print("expected Vertical Beam intensity ID, instead found %g" % leader_id)
return -1
# start with a numpy array of bad values
data = np.ones((ncells, ), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
data[icell] = bstream[offset+ibyte]
ibyte += 1
return data
def parse_TRDI_vertical_percent_good(bstream, offset, ncells):
"""
parse the vertical beam percent good data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int ncells: number of cells in the profile
:return: vertical beam percent good data as a numpy array of ints
"""
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 3328: # \x0d\x00 stored little endian
print("expected Vertical Beam percent good ID, instead found %g" % leader_id)
return -1
# start with a numpy array of bad values
data = np.ones((ncells,), dtype=int) * -32768
ibyte = 2
for icell in range(ncells):
data[icell] = bstream[offset+ibyte]
ibyte += 1
return data
def parse_TRDI_event_log(bstream, offset):
"""
parse the event log data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: event log data as a dict
"""
v_event_log_data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 28676: # \x70\x04 stored little endian
print("expected V Series Event Log ID, instead found %g" % leader_id)
return -1
v_event_log_data['Fault_Count'] = struct.unpack('<H', bstream[offset+2:offset+4])[0]
# TODO read the fault codes and output to a text file
return v_event_log_data
def parse_TRDI_wave_parameters(bstream, offset):
"""
parse the wave parameters (wave statistics)
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: wave data as a dict
"""
data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 11: # \x00\x0b stored little endian
print("expected Wave Parameters ID, instead found %g" % leader_id)
return -1
data['Hs'] = struct.unpack('<H', bstream[offset+2:offset+4])[0]
data['Tp'] = struct.unpack('<H', bstream[offset+4:offset+6])[0]
data['Dp'] = struct.unpack('<H', bstream[offset+6:offset+8])[0]
data['Dm'] = struct.unpack('<H', bstream[offset+16:offset+18])[0]
data['SHmax'] = struct.unpack('<H', bstream[offset+30:offset+32])[0]
data['SH13'] = struct.unpack('<H', bstream[offset+32:offset+34])[0]
data['SH10'] = struct.unpack('<H', bstream[offset+34:offset+36])[0]
data['STmax'] = struct.unpack('<H', bstream[offset+36:offset+38])[0]
data['ST13'] = struct.unpack('<H', bstream[offset+38:offset+40])[0]
data['ST10'] = struct.unpack('<H', bstream[offset+40:offset+42])[0]
data['T01'] = struct.unpack('<H', bstream[offset+42:offset+44])[0]
data['Tz'] = struct.unpack('<H', bstream[offset+44:offset+46])[0]
data['Tinv1'] = struct.unpack('<H', bstream[offset+46:offset+48])[0]
data['S0'] = struct.unpack('<H', bstream[offset+48:offset+50])[0]
data['Source'] = bstream[offset+52]
return data
def parse_TRDI_wave_sea_swell(bstream, offset):
"""
parse the wave sea swell parameters (wave statistics)
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:return: wave sea swell data as a dict
"""
data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 12: # \x00\x0c stored little endian
print("expected Wave Sea and Swell ID, instead found %g" % leader_id)
return -1
data['HsSea'] = struct.unpack('<H', bstream[offset+2:offset+4])[0]
data['HsSwell'] = struct.unpack('<H', bstream[offset+4:offset+6])[0]
data['TpSea'] = struct.unpack('<H', bstream[offset+6:offset+8])[0]
data['TpSwell'] = struct.unpack('<H', bstream[offset+8:offset+10])[0]
data['DpSea'] = struct.unpack('<H', bstream[offset+10:offset+12])[0]
data['DpSwell'] = struct.unpack('<H', bstream[offset+12:offset+14])[0]
data['SeaSwellPeriod'] = struct.unpack('<H', bstream[offset+44:offset+46])[0]
return data
def parse_TRDI_bottom_track(bstream, offset, nbeams):
"""
parse the bottom track data
:param bytes bstream: an entire ensemble
:param int offset: the location in the bytes object of the first byte of this data format
:param int nbeams: number of acoustic beams
:return: bottom track data as a dict
"""
data = {}
leader_id = struct.unpack('<H', bstream[offset:offset+2])[0]
if leader_id != 1536: # \x00\x06 stored little endian
print("expected Bottom Track ID, instead found %g" % leader_id)
return -1
data['Pings_per_ensemble'] = struct.unpack('<H', bstream[offset+2:offset+4])[0]
data['delay_before_reacquire'] = struct.unpack('<H', bstream[offset+4:offset+6])[0]
data['Corr_Mag_Min'] = bstream[offset+6]
data['Eval_Amp_Min'] = bstream[offset+7]
data['PGd_Minimum'] = bstream[offset+8]
data['Mode'] = bstream[offset+9]
data['Err_Vel_Max'] = struct.unpack('<H', bstream[offset+10:offset+12])[0]
data['BT_Range_LSB'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 16
for ibeam in range(nbeams):
data['BT_Range_LSB'][ibeam] = struct.unpack('<h', bstream[offset+ibyte:offset+ibyte+2])[0]
ibyte = ibyte+2
# the meaning and direction depends on the coordinate system used
data['BT_Vel'] = np.ones(nbeams, dtype=float) * 1e35
ibyte = 24
for ibeam in range(nbeams):
data['BT_Vel'][ibeam] = struct.unpack('<h', bstream[offset+ibyte:offset+ibyte+2])[0]
ibyte = ibyte+2
data['BT_Corr'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 32
for ibeam in range(nbeams):
data['BT_Corr'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['BT_Amp'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 36
for ibeam in range(nbeams):
data['BT_Amp'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['BT_PGd'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 40
for ibeam in range(nbeams):
data['BT_PGd'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['Ref_Layer_Min'] = struct.unpack('<H', bstream[offset+44:offset+46])[0]
data['Ref_Layer_Near'] = struct.unpack('<H', bstream[offset+46:offset+48])[0]
data['Ref_Layer_Far'] = struct.unpack('<H', bstream[offset+48:offset+50])[0]
data['Ref_Layer_Vel'] = np.ones(nbeams, dtype=float) * 1e35
ibyte = 50
for ibeam in range(nbeams):
data['Ref_Layer_Vel'][ibeam] = struct.unpack('<h', bstream[offset+ibyte:offset+ibyte+2])[0]
ibyte = ibyte+2
data['Ref_Layer_Corr'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 58
for ibeam in range(nbeams):
data['Ref_Layer_Corr'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['Ref_Layer_Amp'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 62
for ibeam in range(nbeams):
data['Ref_Layer_Amp'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['Ref_Layer_PGd'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 66
for ibeam in range(nbeams):
data['Ref_Layer_PGd'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['BT_Max_Depth'] = struct.unpack('<H', bstream[offset+70:offset+72])[0]
data['RSSI_Amp'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 72
for ibeam in range(nbeams):
data['RSSI_Amp'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['GAIN'] = bstream[offset+76]
data['BT_Range_MSB'] = np.ones(nbeams, dtype=int) * -32768
ibyte = 77
for ibeam in range(nbeams):
data['BT_Range_MSB'][ibeam] = bstream[offset+ibyte]
ibyte = ibyte+1
data['BT_Range'] = np.ones(nbeams, dtype=int) * -32768
for ibeam in range(nbeams):
data['BT_Range'][ibeam] = data['BT_Range_LSB'][ibeam]+(data['BT_Range_MSB'][ibeam] << 16)
return data
def __computeChecksum(ensemble):
"""Compute a checksum from header, length, and ensemble"""
cs = 0
for byte in range(len(ensemble)-2):
cs += ensemble[byte]
return cs & 0xffff
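# Example (added for illustration): if the summed bytes total 0x12345, the 16-bit value kept
# after masking is 0x2345.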
def julian(year, month, day, hour, mn, sec, hund):
"""
    convert a calendar date and time (year through hundredths of a second) to a decimal Julian day
reference:
http://stackoverflow.com/questions/31142181/calculating-julian-date-in-python/41769526#41769526
and R. Signell's old matlab conversion code julian.m and hms2h.m
:param int year: year
:param int month: month
:param int day: day
:param int hour: hour
:param int mn: minute
:param int sec: second
:param int hund: hundredth of second
:return: julian day
"""
#
#
decimalsec = sec+hund/100
decimalhrs = hour+mn/60+decimalsec/3600
mo = month+9
yr = year-1
if month > 2:
        mo = month - 3  # per julian.m: months Mar..Dec map to 0..9 (Jan/Feb use month+9 above)
yr = year
c = math.floor(yr/100)
yr = yr - c*100
d = day
j = math.floor((146097*c)/4)+math.floor((1461*yr)/4) + \
math.floor((153*mo + 2)/5)+d+1721119
# If you want julian days to start and end at noon,
# replace the following line with:
# j=j+(decimalhrs-12)/24;
j = j+decimalhrs/24
return j
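# Example (added for illustration, using the midnight-based convention coded above):
# julian(2000, 1, 1, 12, 0, 0, 0) evaluates to 2451545.5; with the noon-based alternative
# mentioned in the comment it would be 2451545.0, the conventional astronomical Julian date.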
def analyzepd0file(pd0file, verbose=False):
"""
determine the input file size, read some ensembles, make an estimate of the number of ensembles within, return the
data from the first ensemble.
    :param str pd0file: path and file name to raw ADCP data file in pd0 format
:param bool verbose: output ensemble information
:return: number of ensembles in file, number of bytes in each ensemble, data from the first ensemble,
number of bytes to the start of the data
"""
infile = open(pd0file, 'rb')
while infile.tell() < 3000:
b1 = infile.read(1)
if b1 == b'\x7f':
b2 = infile.read(1)
if b2 == b'\x7f':
break
else:
print('Desired TRDI 7f7f ID not found within 3 kB from beginning of the file')
infile.close()
sys.exit(1)
start_of_data = infile.tell()-2
if start_of_data != 0:
print('data starts %d bytes into the file' % start_of_data)
infile.seek(start_of_data)
# need to read the header from the file to know the ensemble size
header = read_TRDI_header(infile)
    if header['sourceID'] != b'\x7f':
        print('error - this is not a currents file')
        infile.close()
        sys.exit(1)
# number of bytes per ensemble in the header does not include the checksum
ens_len = header['nbytesperens']+2
print('ensemble length = %g' % ens_len)
print(header)
# it is faster to define the netCDF file with a known length
# for this we need to estimate how many ensembles we will be reading
# for some reason, sys.getsizeof(infile) does not report the true length
# of the input file, so we will go to the end and see how far we have gone
# there is a problem though. While TRDI's documentation says the V Series
# System Configuration data is always sent, this is not the case, so reading
# only the first ensemble will not give the ensemble size typical over the
# entire file
# rewind and read this several ensembles because further in the ensemble
# length can change on files output from Velocity
infile.seek(start_of_data)
nens2check = 5
nbytesperens = [0 for i in range(nens2check)]
ndatatypes = [0 for i in range(nens2check)]
for i in range(nens2check):
fileposn = infile.tell()
header = read_TRDI_header(infile)
ens_len = header['nbytesperens']+2
infile.seek(fileposn)
ens_data, ens_error = parse_TRDI_ensemble(infile.read(ens_len), verbose)
if ens_error is not None:
print('problem reading the first ensemble: ' + ens_error)
# infile.close()
# sys.exit(1)
if i == 0:
first_ens_data = ens_data
print('ensemble %d has %d bytes and %d datatypes' % (ens_data['VLeader']['Ensemble_Number'],
ens_data['Header']['nbytesperens'],
ens_data['Header']['ndatatypes']))
nbytesperens[i] = ens_data['Header']['nbytesperens']+2
ndatatypes[i] = ens_data['Header']['ndatatypes']
# the guess here is that if the first two ensembles are not the same,
# it's the second ensemble that is representative of the data
if nbytesperens[0] != nbytesperens[1]:
ens_len = nbytesperens[1]
else:
ens_len = nbytesperens[0]
infile.seek(0, 2)
nbytesinfile = infile.tell()
max_ens = (nbytesinfile/ens_len)-1
print('estimating %g ensembles in file using a %d ensemble size' % (max_ens, ens_len))
infile.close()
print(ens_data['Header'])
print('ensemble length = %g' % ens_len)
print('estimating %g ensembles in file' % max_ens)
# return max_ens, ens_len, ens_data, start_of_data
return max_ens, ens_len, first_ens_data, start_of_data
def __main():
print('%s running on python %s' % (sys.argv[0], sys.version))
if len(sys.argv) < 2:
print("%s usage:" % sys.argv[0])
print("TRDIpd0tonetcdf pd0file cdfFile [good_ens] [serial_number] [time_type] [delta_t]")
sys.exit(1)
try:
pd0file = sys.argv[1]
    except IndexError:
print('error - pd0 input file name missing')
sys.exit(1)
try:
cdfFile = sys.argv[2]
    except IndexError:
print('error - netcdf output file name missing')
sys.exit(1)
print('Converting %s to %s' % (pd0file, cdfFile))
try:
good_ens = [int(sys.argv[3]), int(sys.argv[4])]
    except (IndexError, ValueError):
print('No starting and ending ensembles specified, processing entire file')
good_ens = [0, -1]
try:
serial_number = sys.argv[5]
    except IndexError:
print('No serial number provided')
serial_number = "unknown"
try:
time_type = sys.argv[6]
    except IndexError:
print('Time type will be CF')
time_type = "CF"
try:
delta_t = sys.argv[7]
    except IndexError:
print('delta_t will be None')
delta_t = None
print('Start file conversion at ', dt.datetime.now())
convert_pd0_to_netcdf(pd0file, cdfFile, good_ens, serial_number, time_type, delta_t)
print('Finished file conversion at ', dt.datetime.now())
if __name__ == "__main__":
__main() | ADCPy | /ADCPy-0.1.1.tar.gz/ADCPy-0.1.1/adcpy/TRDIstuff/TRDIpd0tonetcdf.py | TRDIpd0tonetcdf.py |
ADDPIO project
==================
This project allows the Raspberry Pi* to access the sensors (accelerometer, gyroscope, ...)
and other IO of an Android* device, similar to the GPIO library. A corresponding Android app
(ADDPIO on the Google Play Store) runs on the Android device(s). The Raspberry Pi and all
Android devices must be connected to the same network; communication uses UDP port 6297.
Create a new ADDPIO object, passing the IP address displayed in the Android app. The object
has input and output functions that take a type number and a value. See below for the
standard type number symbols, or use the number displayed in the Android app. The Android
sensors return an array of values (e.g. x, y, z). For ADDPIO sensor input, the value
parameter is the index into the array of values returned by the sensor; for other input,
the value is ignored.
The Android app has several widgets for IO:
buttons, LEDs, a touchpad, an alarm, a notification, and text.
Read the IP address and the list of available sensors from the Android app.
from ADDPIO import ADDPIO
myHost = ADDPIO("192.168.0.0")
myValue = myHost.input(ADDPIO.SENSOR_ACCELEROMETER,1)
myValue = myHost.input(12345,47)
myHost.output(ADDPIO.ALARM,1)
myHost.output(ADDPIO.ALARM,0)
See the testADDPIO.py program for an example.
# Android sensors
SENSOR_ACCELEROMETER
SENSOR_AMBIENT_TEMPERATURE
SENSOR_GAME_ROTATION_VECTOR
SENSOR_GEOMAGNETIC_ROTATION_VECTOR
SENSOR_GRAVITY
SENSOR_GYROSCOPE
SENSOR_GYROSCOPE_UNCALIBRATED
SENSOR_HEART_BEAT
SENSOR_HEART_RATE
SENSOR_LIGHT
SENSOR_LINEAR_ACCELERATION
SENSOR_MAGNETIC_FIELD
SENSOR_MAGNETIC_FIELD_UNCALIBRATED
SENSOR_MOTION_DETECT
SENSOR_ORIENTATION
SENSOR_POSE_6DOF
SENSOR_PRESSURE
SENSOR_PROXIMITY
SENSOR_RELATIVE_HUMIDITY
SENSOR_ROTATION_VECTOR
SENSOR_SIGNIFICANT_MOTION
SENSOR_STATIONARY_DETECT
SENSOR_STEP_COUNTER
SENSOR_STEP_DETECTOR
SENSOR_TEMPERATURE
# Android input/output
BUTTON_1 input 0/1
BUTTON_2 input 0/1
LED_RED output 0/1
LED_GREEN output 0/1
LED_BLUE output 0/1
ALARM output 0/1
NOTIFICATION output any number
TEXT output any number
TOUCH_PAD_X_IN input 0-255
TOUCH_PAD_Y_IN input 0-255
TOUCH_PAD_X_OUT output 0-255
TOUCH_PAD_Y_OUT output 0-255
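For example (a minimal sketch; the IP address is a placeholder for the one shown in your app),
the touchpad position can be read from the app and echoed back to it:
from ADDPIO import ADDPIO
myHost = ADDPIO("192.168.0.0")
x = myHost.input(ADDPIO.TOUCH_PAD_X_IN, 0)
y = myHost.input(ADDPIO.TOUCH_PAD_Y_IN, 0)
myHost.output(ADDPIO.TOUCH_PAD_X_OUT, x)
myHost.output(ADDPIO.TOUCH_PAD_Y_OUT, y)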
* Raspberry Pi is a trademark of the Raspberry Pi Foundation - http://www.raspberrypi.org
* Android is a trademark of Google Inc.
| ADDPIO | /ADDPIO-1.0.3b1.tar.gz/ADDPIO-1.0.3b1/README.rst | README.rst |
import socket
class ADDPIO(object):
global TIME_OUT,ATTEMPTS,DATA_SIZE,IO_PORT
TIME_OUT = 3
ATTEMPTS = 3
DATA_SIZE = 1024
IO_PORT = 6297
# Android sensors
SENSOR_ACCELEROMETER = 1
SENSOR_AMBIENT_TEMPERATURE = 13
SENSOR_GAME_ROTATION_VECTOR = 15
SENSOR_GEOMAGNETIC_ROTATION_VECTOR = 20
SENSOR_GRAVITY = 9
SENSOR_GYROSCOPE = 4
SENSOR_GYROSCOPE_UNCALIBRATED = 16
SENSOR_HEART_BEAT = 31
SENSOR_HEART_RATE = 21
SENSOR_LIGHT = 5
SENSOR_LINEAR_ACCELERATION = 10
SENSOR_MAGNETIC_FIELD = 2
SENSOR_MAGNETIC_FIELD_UNCALIBRATED = 14
SENSOR_MOTION_DETECT = 30
SENSOR_ORIENTATION = 3
SENSOR_POSE_6DOF = 28
SENSOR_PRESSURE = 6
SENSOR_PROXIMITY = 8
SENSOR_RELATIVE_HUMIDITY = 12
SENSOR_ROTATION_VECTOR = 11
SENSOR_SIGNIFICANT_MOTION = 17
SENSOR_STATIONARY_DETECT = 29
SENSOR_STEP_COUNTER = 19
SENSOR_STEP_DETECTOR = 18
SENSOR_TEMPERATURE = 7
# Android input/output
BUTTON_1 = 10001
BUTTON_2 = 10002
LED_RED = 10101
LED_GREEN = 10102
LED_BLUE = 10103
ALARM = 10201
NOTIFICATION = 10301
TEXT = 10401
TOUCH_PAD_X_IN = 10501
TOUCH_PAD_Y_IN = 10502
TOUCH_PAD_X_OUT = 10601
TOUCH_PAD_Y_OUT = 10602
def __init__(self, ipAddress):
self.ipAddress = ipAddress
self.port = IO_PORT
def comm(self, direction, pin, value):
complete = False
count = 0
# try ATTEMPTS times, then fail
while not complete:
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(TIME_OUT)
                # protocol message format is "direction:pin:value"; encode to bytes so the
                # send also works under Python 3 (Python 2 str passes through unchanged)
                message = direction + ":" + str(pin) + ":" + str(value)
                sock.sendto(message.encode(), (self.ipAddress, self.port))
data, addr = sock.recvfrom(DATA_SIZE)
sock.close()
complete = True
except socket.error:
complete = False
count = count + 1
if count == ATTEMPTS:
complete = True
data = "comm fail"
return data
def input(self, pin, value):
return self.comm("in", pin, value)
def output(self, pin, value):
return self.comm("out", pin, value) | ADDPIO | /ADDPIO-1.0.3b1.tar.gz/ADDPIO-1.0.3b1/ADDPIO.py | ADDPIO.py |
ADISON is an artificial-intelligence bot that can perform 50+ functions.
It can: search Wikipedia and Google; crack a joke; play a song; generate a strong password; send a WhatsApp message; play a game; take a note; open YouTube, Okular, Pinterest, Notion, Google Classroom, Notepad, Google Drive, Google Calendar, WhatsApp, Microsoft Whiteboard, Microsoft To Do, Zoom, Aesthetic Timer, Spotify, Stack Overflow, and Gmail; give information about the weather, the time, itself, and who made it; show you the news and collections of short stories and novels; take your picture; and hibernate or log off your PC!
Ask her "What can you do?" or try the following commands:
"Get me a password"
"Play a song"
"Repeat after me"
"Crack a joke" or "Tell me a joke"
"Get me a list of jokes"
"Open Calculator"
"Play a game" or "Play a doodle game" or "Play Rock Paper Scissors"
"Take a photo"
"Take a note"
"Open Netflix"
"Open Drive"
"Show me my events"
"Open Okular"
"Open spotify"
"Send a Whatsapp message"
"Open Classroom"
"Open Notion"
"Open Pinterest"
"Tell me the time"
"Start the timer"
"Tasklist"
"Open youtube"
"Open Gmail"
"Open Whiteboard"
"Hibernate"
"Sign out"
"Shut Down"
"Restart"
"Show some short stories"
"Novels please"
THANK YOU
-------------------------------------------------------------- | ADISON | /ADISON-0.0.3.tar.gz/ADISON-0.0.1/README.txt | README.txt |
import logging
from typing import Any, Dict, List, Optional, Set, Tuple, cast
import requests
import requests_toolbelt
import rich.progress
import urllib3
from requests import adapters
from requests_toolbelt.utils import user_agent
from rich import print
import twine
from twine import package as package_file
KEYWORDS_TO_NOT_FLATTEN = {"gpg_signature", "content"}
LEGACY_PYPI = "https://pypi.python.org/"
LEGACY_TEST_PYPI = "https://testpypi.python.org/"
WAREHOUSE = "https://upload.pypi.org/"
OLD_WAREHOUSE = "https://upload.pypi.io/"
TEST_WAREHOUSE = "https://test.pypi.org/"
WAREHOUSE_WEB = "https://pypi.org/"
logger = logging.getLogger(__name__)
class Repository:
def __init__(
self,
repository_url: str,
username: Optional[str],
password: Optional[str],
disable_progress_bar: bool = False,
) -> None:
self.url = repository_url
self.session = requests.session()
# requests.Session.auth should be Union[None, Tuple[str, str], ...]
# But username or password could be None
# See TODO for utils.RepositoryConfig
self.session.auth = (
(username or "", password or "") if username or password else None
)
logger.info(f"username: {username if username else '<empty>'}")
logger.info(f"password: <{'hidden' if password else 'empty'}>")
self.session.headers["User-Agent"] = self._make_user_agent_string()
for scheme in ("http://", "https://"):
self.session.mount(scheme, self._make_adapter_with_retries())
# Working around https://github.com/python/typing/issues/182
self._releases_json_data: Dict[str, Dict[str, Any]] = {}
self.disable_progress_bar = disable_progress_bar
@staticmethod
def _make_adapter_with_retries() -> adapters.HTTPAdapter:
retry = urllib3.Retry(
allowed_methods=["GET"],
connect=5,
total=10,
status_forcelist=[500, 501, 502, 503],
)
return adapters.HTTPAdapter(max_retries=retry)
@staticmethod
def _make_user_agent_string() -> str:
user_agent_string = (
user_agent.UserAgentBuilder("twine", twine.__version__)
.include_implementation()
.build()
)
return cast(str, user_agent_string)
def close(self) -> None:
self.session.close()
@staticmethod
def _convert_data_to_list_of_tuples(data: Dict[str, Any]) -> List[Tuple[str, Any]]:
data_to_send = []
for key, value in data.items():
if key in KEYWORDS_TO_NOT_FLATTEN or not isinstance(value, (list, tuple)):
data_to_send.append((key, value))
else:
for item in value:
data_to_send.append((key, item))
return data_to_send
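    # Illustrative note (not part of the upstream docs): list- or tuple-valued metadata is
    # flattened into repeated form fields, e.g. {"name": "pkg", "classifiers": ["A", "B"]}
    # becomes [("name", "pkg"), ("classifiers", "A"), ("classifiers", "B")]; keys listed in
    # KEYWORDS_TO_NOT_FLATTEN are passed through unchanged.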
def set_certificate_authority(self, cacert: Optional[str]) -> None:
if cacert:
self.session.verify = cacert
def set_client_certificate(self, clientcert: Optional[str]) -> None:
if clientcert:
self.session.cert = clientcert
def register(self, package: package_file.PackageFile) -> requests.Response:
data = package.metadata_dictionary()
data.update({":action": "submit", "protocol_version": "1"})
print(f"Registering {package.basefilename}")
data_to_send = self._convert_data_to_list_of_tuples(data)
encoder = requests_toolbelt.MultipartEncoder(data_to_send)
resp = self.session.post(
self.url,
data=encoder,
allow_redirects=False,
headers={"Content-Type": encoder.content_type},
)
# Bug 28. Try to silence a ResourceWarning by releasing the socket.
resp.close()
return resp
def _upload(self, package: package_file.PackageFile) -> requests.Response:
data = package.metadata_dictionary()
data.update(
{
# action
":action": "file_upload",
"protocol_version": "1",
}
)
data_to_send = self._convert_data_to_list_of_tuples(data)
print(f"Uploading {package.basefilename}")
with open(package.filename, "rb") as fp:
data_to_send.append(
("content", (package.basefilename, fp, "application/octet-stream"))
)
encoder = requests_toolbelt.MultipartEncoder(data_to_send)
with rich.progress.Progress(
"[progress.percentage]{task.percentage:>3.0f}%",
rich.progress.BarColumn(),
rich.progress.DownloadColumn(),
"•",
rich.progress.TimeRemainingColumn(
compact=True,
elapsed_when_finished=True,
),
"•",
rich.progress.TransferSpeedColumn(),
disable=self.disable_progress_bar,
) as progress:
task_id = progress.add_task("", total=encoder.len)
monitor = requests_toolbelt.MultipartEncoderMonitor(
encoder,
lambda monitor: progress.update(
task_id,
completed=monitor.bytes_read,
),
)
resp = self.session.post(
self.url,
data=monitor,
allow_redirects=False,
headers={"Content-Type": monitor.content_type},
)
return resp
def upload(
self, package: package_file.PackageFile, max_redirects: int = 5
) -> requests.Response:
number_of_redirects = 0
while number_of_redirects < max_redirects:
resp = self._upload(package)
if resp.status_code == requests.codes.OK:
return resp
if 500 <= resp.status_code < 600:
number_of_redirects += 1
logger.warning(
f'Received "{resp.status_code}: {resp.reason}"'
"\nPackage upload appears to have failed."
f" Retry {number_of_redirects} of {max_redirects}."
)
else:
return resp
return resp
def package_is_uploaded(
self, package: package_file.PackageFile, bypass_cache: bool = False
) -> bool:
# NOTE(sigmavirus24): Not all indices are PyPI and pypi.io doesn't
# have a similar interface for finding the package versions.
if not self.url.startswith((LEGACY_PYPI, WAREHOUSE, OLD_WAREHOUSE)):
return False
safe_name = package.safe_name
releases = None
if not bypass_cache:
releases = self._releases_json_data.get(safe_name)
if releases is None:
url = f"{LEGACY_PYPI}pypi/{safe_name}/json"
headers = {"Accept": "application/json"}
response = self.session.get(url, headers=headers)
if response.status_code == 200:
releases = response.json()["releases"]
else:
releases = {}
self._releases_json_data[safe_name] = releases
packages = releases.get(package.metadata.version, [])
for uploaded_package in packages:
if uploaded_package["filename"] == package.basefilename:
return True
return False
def release_urls(self, packages: List[package_file.PackageFile]) -> Set[str]:
if self.url.startswith(WAREHOUSE):
url = WAREHOUSE_WEB
elif self.url.startswith(TEST_WAREHOUSE):
url = TEST_WAREHOUSE
else:
return set()
return {
f"{url}project/{package.safe_name}/{package.metadata.version}/"
for package in packages
}
def verify_package_integrity(self, package: package_file.PackageFile) -> None:
# TODO(sigmavirus24): Add a way for users to download the package and
# check it's hash against what it has locally.
pass | ADISON | /ADISON-0.0.3.tar.gz/ADISON-0.0.1/twine/repository.py | repository.py |
# Copyright 2018 Ian Stapleton Cordasco
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import contextlib
import logging
from typing import Any, Optional, cast
from twine import auth
from twine import exceptions
from twine import repository
from twine import utils
class Settings:
"""Object that manages the configuration for Twine.
This object can only be instantiated with keyword arguments.
For example,
.. code-block:: python
Settings(True, username='fakeusername')
Will raise a :class:`TypeError`. Instead, you would want
.. code-block:: python
Settings(sign=True, username='fakeusername')
"""
def __init__(
self,
*,
sign: bool = False,
sign_with: str = "gpg",
identity: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
non_interactive: bool = False,
comment: Optional[str] = None,
config_file: str = utils.DEFAULT_CONFIG_FILE,
skip_existing: bool = False,
cacert: Optional[str] = None,
client_cert: Optional[str] = None,
repository_name: str = "pypi",
repository_url: Optional[str] = None,
verbose: bool = False,
disable_progress_bar: bool = False,
**ignored_kwargs: Any,
) -> None:
"""Initialize our settings instance.
:param sign:
Configure whether the package file should be signed.
:param sign_with:
The name of the executable used to sign the package with.
:param identity:
The GPG identity that should be used to sign the package file.
:param username:
The username used to authenticate to the repository (package
index).
:param password:
The password used to authenticate to the repository (package
index).
:param non_interactive:
Do not interactively prompt for username/password if the required
credentials are missing.
:param comment:
The comment to include with each distribution file.
:param config_file:
The path to the configuration file to use.
:param skip_existing:
Specify whether twine should continue uploading files if one
of them already exists. This primarily supports PyPI. Other
package indexes may not be supported.
:param cacert:
The path to the bundle of certificates used to verify the TLS
connection to the package index.
:param client_cert:
The path to the client certificate used to perform authentication to the
index. This must be a single file that contains both the private key and
the PEM-encoded certificate.
:param repository_name:
The name of the repository (package index) to interact with. This
should correspond to a section in the config file.
:param repository_url:
The URL of the repository (package index) to interact with. This
will override the settings inferred from ``repository_name``.
:param verbose:
Show verbose output.
:param disable_progress_bar:
Disable the progress bar.
"""
self.config_file = config_file
self.comment = comment
self.verbose = verbose
self.disable_progress_bar = disable_progress_bar
self.skip_existing = skip_existing
self._handle_repository_options(
repository_name=repository_name,
repository_url=repository_url,
)
self._handle_package_signing(
sign=sign,
sign_with=sign_with,
identity=identity,
)
# _handle_certificates relies on the parsed repository config
self._handle_certificates(cacert, client_cert)
self.auth = auth.Resolver.choose(not non_interactive)(
self.repository_config,
auth.CredentialInput(username, password),
)
@property
def username(self) -> Optional[str]:
# Workaround for https://github.com/python/mypy/issues/5858
return cast(Optional[str], self.auth.username)
@property
def password(self) -> Optional[str]:
with self._allow_noninteractive():
# Workaround for https://github.com/python/mypy/issues/5858
return cast(Optional[str], self.auth.password)
def _allow_noninteractive(self) -> "contextlib.AbstractContextManager[None]":
"""Bypass NonInteractive error when client cert is present."""
suppressed = (exceptions.NonInteractive,) if self.client_cert else ()
return contextlib.suppress(*suppressed)
@property
def verbose(self) -> bool:
return self._verbose
@verbose.setter
def verbose(self, verbose: bool) -> None:
"""Initialize a logger based on the --verbose option."""
self._verbose = verbose
twine_logger = logging.getLogger("twine")
twine_logger.setLevel(logging.INFO if verbose else logging.WARNING)
@staticmethod
def register_argparse_arguments(parser: argparse.ArgumentParser) -> None:
"""Register the arguments for argparse."""
parser.add_argument(
"-r",
"--repository",
action=utils.EnvironmentDefault,
env="TWINE_REPOSITORY",
default="pypi",
help="The repository (package index) to upload the package to. "
"Should be a section in the config file (default: "
"%(default)s). (Can also be set via %(env)s environment "
"variable.)",
)
parser.add_argument(
"--repository-url",
action=utils.EnvironmentDefault,
env="TWINE_REPOSITORY_URL",
default=None,
required=False,
help="The repository (package index) URL to upload the package to."
" This overrides --repository. "
"(Can also be set via %(env)s environment variable.)",
)
parser.add_argument(
"-s",
"--sign",
action="store_true",
default=False,
help="Sign files to upload using GPG.",
)
parser.add_argument(
"--sign-with",
default="gpg",
help="GPG program used to sign uploads (default: %(default)s).",
)
parser.add_argument(
"-i",
"--identity",
help="GPG identity used to sign files.",
)
parser.add_argument(
"-u",
"--username",
action=utils.EnvironmentDefault,
env="TWINE_USERNAME",
required=False,
help="The username to authenticate to the repository "
"(package index) as. (Can also be set via "
"%(env)s environment variable.)",
)
parser.add_argument(
"-p",
"--password",
action=utils.EnvironmentDefault,
env="TWINE_PASSWORD",
required=False,
help="The password to authenticate to the repository "
"(package index) with. (Can also be set via "
"%(env)s environment variable.)",
)
parser.add_argument(
"--non-interactive",
action=utils.EnvironmentFlag,
env="TWINE_NON_INTERACTIVE",
help="Do not interactively prompt for username/password if the "
"required credentials are missing. (Can also be set via "
"%(env)s environment variable.)",
)
parser.add_argument(
"-c",
"--comment",
help="The comment to include with the distribution file.",
)
parser.add_argument(
"--config-file",
default=utils.DEFAULT_CONFIG_FILE,
help="The .pypirc config file to use.",
)
parser.add_argument(
"--skip-existing",
default=False,
action="store_true",
help="Continue uploading files if one already exists. (Only valid "
"when uploading to PyPI. Other implementations may not "
"support this.)",
)
parser.add_argument(
"--cert",
action=utils.EnvironmentDefault,
env="TWINE_CERT",
default=None,
required=False,
metavar="path",
help="Path to alternate CA bundle (can also be set via %(env)s "
"environment variable).",
)
parser.add_argument(
"--client-cert",
metavar="path",
help="Path to SSL client certificate, a single file containing the"
" private key and the certificate in PEM format.",
)
parser.add_argument(
"--verbose",
default=False,
required=False,
action="store_true",
help="Show verbose output.",
)
parser.add_argument(
"--disable-progress-bar",
default=False,
required=False,
action="store_true",
help="Disable the progress bar.",
)
@classmethod
def from_argparse(cls, args: argparse.Namespace) -> "Settings":
"""Generate the Settings from parsed arguments."""
settings = vars(args)
settings["repository_name"] = settings.pop("repository")
settings["cacert"] = settings.pop("cert")
return cls(**settings)
def _handle_package_signing(
self, sign: bool, sign_with: str, identity: Optional[str]
) -> None:
if not sign and identity:
raise exceptions.InvalidSigningConfiguration(
"sign must be given along with identity"
)
self.sign = sign
self.sign_with = sign_with
self.identity = identity
def _handle_repository_options(
self, repository_name: str, repository_url: Optional[str]
) -> None:
self.repository_config = utils.get_repository_from_config(
self.config_file,
repository_name,
repository_url,
)
self.repository_config["repository"] = utils.normalize_repository_url(
cast(str, self.repository_config["repository"]),
)
def _handle_certificates(
self, cacert: Optional[str], client_cert: Optional[str]
) -> None:
self.cacert = utils.get_cacert(cacert, self.repository_config)
self.client_cert = utils.get_clientcert(client_cert, self.repository_config)
def check_repository_url(self) -> None:
"""Verify we are not using legacy PyPI.
:raises twine.exceptions.UploadToDeprecatedPyPIDetected:
The configured repository URL is for legacy PyPI.
"""
repository_url = cast(str, self.repository_config["repository"])
if repository_url.startswith(
(repository.LEGACY_PYPI, repository.LEGACY_TEST_PYPI)
):
raise exceptions.UploadToDeprecatedPyPIDetected.from_args(
repository_url, utils.DEFAULT_REPOSITORY, utils.TEST_REPOSITORY
)
def create_repository(self) -> repository.Repository:
"""Create a new repository for uploading."""
repo = repository.Repository(
cast(str, self.repository_config["repository"]),
self.username,
self.password,
self.disable_progress_bar,
)
repo.set_certificate_authority(self.cacert)
repo.set_client_certificate(self.client_cert)
return repo | ADISON | /ADISON-0.0.3.tar.gz/ADISON-0.0.1/twine/settings.py | settings.py |
import hashlib
import io
import logging
import os
import re
import subprocess
from typing import Dict, NamedTuple, Optional, Sequence, Tuple, Union
import importlib_metadata
import pkginfo
from rich import print
from twine import exceptions
from twine import wheel
from twine import wininst
DIST_TYPES = {
"bdist_wheel": wheel.Wheel,
"bdist_wininst": wininst.WinInst,
"bdist_egg": pkginfo.BDist,
"sdist": pkginfo.SDist,
}
DIST_EXTENSIONS = {
".whl": "bdist_wheel",
".exe": "bdist_wininst",
".egg": "bdist_egg",
".tar.bz2": "sdist",
".tar.gz": "sdist",
".zip": "sdist",
}
MetadataValue = Union[str, Sequence[str]]
logger = logging.getLogger(__name__)
def _safe_name(name: str) -> str:
"""Convert an arbitrary string to a standard distribution name.
Any runs of non-alphanumeric/. characters are replaced with a single '-'.
Copied from pkg_resources.safe_name for compatibility with warehouse.
See https://github.com/pypa/twine/issues/743.
"""
return re.sub("[^A-Za-z0-9.]+", "-", name)
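# Example (added for illustration): _safe_name("my package_name") returns "my-package-name",
# since every run of characters outside [A-Za-z0-9.] collapses to a single '-'.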
class PackageFile:
def __init__(
self,
filename: str,
comment: Optional[str],
metadata: pkginfo.Distribution,
python_version: Optional[str],
filetype: Optional[str],
) -> None:
self.filename = filename
self.basefilename = os.path.basename(filename)
self.comment = comment
self.metadata = metadata
self.python_version = python_version
self.filetype = filetype
self.safe_name = _safe_name(metadata.name)
self.signed_filename = self.filename + ".asc"
self.signed_basefilename = self.basefilename + ".asc"
self.gpg_signature: Optional[Tuple[str, bytes]] = None
hasher = HashManager(filename)
hasher.hash()
hexdigest = hasher.hexdigest()
self.md5_digest = hexdigest.md5
self.sha2_digest = hexdigest.sha2
self.blake2_256_digest = hexdigest.blake2
@classmethod
def from_filename(cls, filename: str, comment: Optional[str]) -> "PackageFile":
# Extract the metadata from the package
for ext, dtype in DIST_EXTENSIONS.items():
if filename.endswith(ext):
try:
meta = DIST_TYPES[dtype](filename)
except EOFError:
raise exceptions.InvalidDistribution(
"Invalid distribution file: '%s'" % os.path.basename(filename)
)
else:
break
else:
raise exceptions.InvalidDistribution(
"Unknown distribution format: '%s'" % os.path.basename(filename)
)
# If pkginfo encounters a metadata version it doesn't support, it may give us
# back empty metadata. At the very least, we should have a name and version,
# which could also be empty if, for example, a MANIFEST.in doesn't include
# setup.cfg.
missing_fields = [
f.capitalize() for f in ["name", "version"] if not getattr(meta, f)
]
if missing_fields:
supported_metadata = list(pkginfo.distribution.HEADER_ATTRS)
raise exceptions.InvalidDistribution(
"Metadata is missing required fields: "
f"{', '.join(missing_fields)}.\n"
"Make sure the distribution includes the files where those fields "
"are specified, and is using a supported Metadata-Version: "
f"{', '.join(supported_metadata)}."
)
py_version: Optional[str]
if dtype == "bdist_egg":
(dist,) = importlib_metadata.Distribution.discover( # type: ignore[no-untyped-call] # python/importlib_metadata#288 # noqa: E501
path=[filename]
)
py_version = dist.metadata["Version"]
elif dtype == "bdist_wheel":
py_version = meta.py_version
elif dtype == "bdist_wininst":
py_version = meta.py_version
else:
py_version = None
return cls(filename, comment, meta, py_version, dtype)
def metadata_dictionary(self) -> Dict[str, MetadataValue]:
"""Merge multiple sources of metadata into a single dictionary.
Includes values from filename, PKG-INFO, hashers, and signature.
"""
meta = self.metadata
data = {
# identify release
"name": self.safe_name,
"version": meta.version,
# file content
"filetype": self.filetype,
"pyversion": self.python_version,
# additional meta-data
"metadata_version": meta.metadata_version,
"summary": meta.summary,
"home_page": meta.home_page,
"author": meta.author,
"author_email": meta.author_email,
"maintainer": meta.maintainer,
"maintainer_email": meta.maintainer_email,
"license": meta.license,
"description": meta.description,
"keywords": meta.keywords,
"platform": meta.platforms,
"classifiers": meta.classifiers,
"download_url": meta.download_url,
"supported_platform": meta.supported_platforms,
"comment": self.comment,
"sha256_digest": self.sha2_digest,
# PEP 314
"provides": meta.provides,
"requires": meta.requires,
"obsoletes": meta.obsoletes,
# Metadata 1.2
"project_urls": meta.project_urls,
"provides_dist": meta.provides_dist,
"obsoletes_dist": meta.obsoletes_dist,
"requires_dist": meta.requires_dist,
"requires_external": meta.requires_external,
"requires_python": meta.requires_python,
# Metadata 2.1
"provides_extras": meta.provides_extras,
"description_content_type": meta.description_content_type,
# Metadata 2.2
"dynamic": meta.dynamic,
}
if self.gpg_signature is not None:
data["gpg_signature"] = self.gpg_signature
# FIPS disables MD5 and Blake2, making the digest values None. Some package
# repositories don't allow null values, so this only sends non-null values.
# See also: https://github.com/pypa/twine/issues/775
if self.md5_digest:
data["md5_digest"] = self.md5_digest
if self.blake2_256_digest:
data["blake2_256_digest"] = self.blake2_256_digest
return data
def add_gpg_signature(
self, signature_filepath: str, signature_filename: str
) -> None:
if self.gpg_signature is not None:
raise exceptions.InvalidDistribution("GPG Signature can only be added once")
with open(signature_filepath, "rb") as gpg:
self.gpg_signature = (signature_filename, gpg.read())
def sign(self, sign_with: str, identity: Optional[str]) -> None:
print(f"Signing {self.basefilename}")
gpg_args: Tuple[str, ...] = (sign_with, "--detach-sign")
if identity:
gpg_args += ("--local-user", identity)
gpg_args += ("-a", self.filename)
self.run_gpg(gpg_args)
self.add_gpg_signature(self.signed_filename, self.signed_basefilename)
@classmethod
def run_gpg(cls, gpg_args: Tuple[str, ...]) -> None:
try:
subprocess.check_call(gpg_args)
return
except FileNotFoundError:
if gpg_args[0] != "gpg":
raise exceptions.InvalidSigningExecutable(
f"{gpg_args[0]} executable not available."
)
logger.warning("gpg executable not available. Attempting fallback to gpg2.")
try:
subprocess.check_call(("gpg2",) + gpg_args[1:])
except FileNotFoundError:
raise exceptions.InvalidSigningExecutable(
"'gpg' or 'gpg2' executables not available.\n"
"Try installing one of these or specifying an executable "
"with the --sign-with flag."
)
class Hexdigest(NamedTuple):
md5: Optional[str]
sha2: Optional[str]
blake2: Optional[str]
class HashManager:
"""Manage our hashing objects for simplicity.
This will also allow us to better test this logic.
"""
def __init__(self, filename: str) -> None:
"""Initialize our manager and hasher objects."""
self.filename = filename
self._md5_hasher = None
try:
self._md5_hasher = hashlib.md5()
except ValueError:
            # FIPS mode disables MD5
pass
self._sha2_hasher = hashlib.sha256()
self._blake_hasher = None
try:
self._blake_hasher = hashlib.blake2b(digest_size=256 // 8)
except (ValueError, TypeError):
# FIPS mode disables blake2
pass
def _md5_update(self, content: bytes) -> None:
if self._md5_hasher is not None:
self._md5_hasher.update(content)
def _md5_hexdigest(self) -> Optional[str]:
if self._md5_hasher is not None:
return self._md5_hasher.hexdigest()
return None
def _sha2_update(self, content: bytes) -> None:
if self._sha2_hasher is not None:
self._sha2_hasher.update(content)
def _sha2_hexdigest(self) -> Optional[str]:
if self._sha2_hasher is not None:
return self._sha2_hasher.hexdigest()
return None
def _blake_update(self, content: bytes) -> None:
if self._blake_hasher is not None:
self._blake_hasher.update(content)
def _blake_hexdigest(self) -> Optional[str]:
if self._blake_hasher is not None:
return self._blake_hasher.hexdigest()
return None
def hash(self) -> None:
"""Hash the file contents."""
with open(self.filename, "rb") as fp:
for content in iter(lambda: fp.read(io.DEFAULT_BUFFER_SIZE), b""):
self._md5_update(content)
self._sha2_update(content)
self._blake_update(content)
def hexdigest(self) -> Hexdigest:
"""Return the hexdigest for the file."""
return Hexdigest(
self._md5_hexdigest(),
self._sha2_hexdigest(),
self._blake_hexdigest(),
) | ADISON | /ADISON-0.0.3.tar.gz/ADISON-0.0.1/twine/package.py | package.py |
import functools
import getpass
import logging
from typing import Callable, Optional, Type, cast
import keyring
from twine import exceptions
from twine import utils
logger = logging.getLogger(__name__)
class CredentialInput:
def __init__(
self, username: Optional[str] = None, password: Optional[str] = None
) -> None:
self.username = username
self.password = password
class Resolver:
def __init__(self, config: utils.RepositoryConfig, input: CredentialInput) -> None:
self.config = config
self.input = input
@classmethod
def choose(cls, interactive: bool) -> Type["Resolver"]:
return cls if interactive else Private
@property
@functools.lru_cache()
def username(self) -> Optional[str]:
return utils.get_userpass_value(
self.input.username,
self.config,
key="username",
prompt_strategy=self.username_from_keyring_or_prompt,
)
@property
@functools.lru_cache()
def password(self) -> Optional[str]:
return utils.get_userpass_value(
self.input.password,
self.config,
key="password",
prompt_strategy=self.password_from_keyring_or_prompt,
)
@property
def system(self) -> Optional[str]:
return self.config["repository"]
def get_username_from_keyring(self) -> Optional[str]:
try:
system = cast(str, self.system)
logger.info("Querying keyring for username")
creds = keyring.get_credential(system, None)
if creds:
return cast(str, creds.username)
except AttributeError:
# To support keyring prior to 15.2
pass
except Exception as exc:
logger.warning("Error getting username from keyring", exc_info=exc)
return None
def get_password_from_keyring(self) -> Optional[str]:
try:
system = cast(str, self.system)
username = cast(str, self.username)
logger.info("Querying keyring for password")
return cast(str, keyring.get_password(system, username))
except Exception as exc:
logger.warning("Error getting password from keyring", exc_info=exc)
return None
def username_from_keyring_or_prompt(self) -> str:
username = self.get_username_from_keyring()
if username:
logger.info("username set from keyring")
return username
return self.prompt("username", input)
def password_from_keyring_or_prompt(self) -> str:
password = self.get_password_from_keyring()
if password:
logger.info("password set from keyring")
return password
return self.prompt("password", getpass.getpass)
def prompt(self, what: str, how: Callable[..., str]) -> str:
return how(f"Enter your {what}: ")
class Private(Resolver):
def prompt(self, what: str, how: Optional[Callable[..., str]] = None) -> str:
raise exceptions.NonInteractive(f"Credential not found for {what}.") | ADISON | /ADISON-0.0.3.tar.gz/ADISON-0.0.1/twine/auth.py | auth.py |
import argparse
import collections
import configparser
import functools
import logging
import os
import os.path
import unicodedata
from typing import Any, Callable, DefaultDict, Dict, Optional, Sequence, Union
from urllib.parse import urlparse
from urllib.parse import urlunparse
import requests
import rfc3986
from twine import exceptions
# Shim for input to allow testing.
input_func = input
DEFAULT_REPOSITORY = "https://upload.pypi.org/legacy/"
TEST_REPOSITORY = "https://test.pypi.org/legacy/"
DEFAULT_CONFIG_FILE = "~/.pypirc"
# TODO: In general, it seems to be assumed that the values retrieved from
# instances of this type aren't None, except for username and password.
# Type annotations would be cleaner if this were Dict[str, str], but that
# requires reworking the username/password handling, probably starting with
# get_userpass_value.
RepositoryConfig = Dict[str, Optional[str]]
logger = logging.getLogger(__name__)
def get_config(path: str) -> Dict[str, RepositoryConfig]:
"""Read repository configuration from a file (i.e. ~/.pypirc).
Format: https://packaging.python.org/specifications/pypirc/
If the default config file doesn't exist, return a default configuration for
    pypi and testpypi.
"""
realpath = os.path.realpath(os.path.expanduser(path))
parser = configparser.RawConfigParser()
try:
with open(realpath) as f:
parser.read_file(f)
logger.info(f"Using configuration from {realpath}")
except FileNotFoundError:
# User probably set --config-file, but the file can't be read
if path != DEFAULT_CONFIG_FILE:
raise
# server-login is obsolete, but retained for backwards compatibility
defaults: RepositoryConfig = {
"username": parser.get("server-login", "username", fallback=None),
"password": parser.get("server-login", "password", fallback=None),
}
config: DefaultDict[str, RepositoryConfig]
config = collections.defaultdict(lambda: defaults.copy())
index_servers = parser.get(
"distutils", "index-servers", fallback="pypi testpypi"
).split()
# Don't require users to manually configure URLs for these repositories
config["pypi"]["repository"] = DEFAULT_REPOSITORY
if "testpypi" in index_servers:
config["testpypi"]["repository"] = TEST_REPOSITORY
# Optional configuration values for individual repositories
for repository in index_servers:
for key in [
"username",
"repository",
"password",
"ca_cert",
"client_cert",
]:
if parser.has_option(repository, key):
config[repository][key] = parser.get(repository, key)
# Convert the defaultdict to a regular dict to prevent surprising behavior later on
return dict(config)
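# A minimal ~/.pypirc that this parser understands might look like the following
# (illustrative placeholders only, not real credentials):
#
#   [distutils]
#   index-servers =
#       pypi
#       testpypi
#
#   [pypi]
#   username = __token__
#   password = pypi-xxxxxxxx
#
#   [testpypi]
#   username = __token__
#   password = pypi-yyyyyyyy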
def _validate_repository_url(repository_url: str) -> None:
"""Validate the given url for allowed schemes and components."""
# Allowed schemes are http and https, based on whether the repository
# supports TLS or not, and scheme and host must be present in the URL
validator = (
rfc3986.validators.Validator()
.allow_schemes("http", "https")
.require_presence_of("scheme", "host")
)
try:
validator.validate(rfc3986.uri_reference(repository_url))
except rfc3986.exceptions.RFC3986Exception as exc:
raise exceptions.UnreachableRepositoryURLDetected(
f"Invalid repository URL: {exc.args[0]}."
)
def get_repository_from_config(
config_file: str,
repository: str,
repository_url: Optional[str] = None,
) -> RepositoryConfig:
"""Get repository config command-line values or the .pypirc file."""
# Prefer CLI `repository_url` over `repository` or .pypirc
if repository_url:
_validate_repository_url(repository_url)
return {
"repository": repository_url,
"username": None,
"password": None,
}
try:
return get_config(config_file)[repository]
except OSError as exc:
raise exceptions.InvalidConfiguration(str(exc))
except KeyError:
raise exceptions.InvalidConfiguration(
f"Missing '{repository}' section from {config_file}.\n"
f"More info: https://packaging.python.org/specifications/pypirc/ "
)
_HOSTNAMES = {
"pypi.python.org",
"testpypi.python.org",
"upload.pypi.org",
"test.pypi.org",
}
def normalize_repository_url(url: str) -> str:
parsed = urlparse(url)
if parsed.netloc in _HOSTNAMES:
return urlunparse(("https",) + parsed[1:])
return urlunparse(parsed)
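# A brief sketch of normalize_repository_url() behavior (inputs are hypothetical):
#
#   normalize_repository_url("http://upload.pypi.org/legacy/")
#   # -> "https://upload.pypi.org/legacy/"   (known PyPI host, scheme upgraded)
#   normalize_repository_url("https://example.com/simple/")
#   # -> "https://example.com/simple/"       (unknown host, returned unchanged)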
def get_file_size(filename: str) -> str:
"""Return the size of a file in KB, or MB if >= 1024 KB."""
file_size = os.path.getsize(filename) / 1024
size_unit = "KB"
if file_size > 1024:
file_size = file_size / 1024
size_unit = "MB"
return f"{file_size:.1f} {size_unit}"
def check_status_code(response: requests.Response, verbose: bool) -> None:
"""Generate a helpful message based on the response from the repository.
Raise a custom exception for recognized errors. Otherwise, print the
response content (based on the verbose option) before re-raising the
HTTPError.
"""
if response.status_code == 410 and "pypi.python.org" in response.url:
raise exceptions.UploadToDeprecatedPyPIDetected(
f"It appears you're uploading to pypi.python.org (or "
f"testpypi.python.org). You've received a 410 error response. "
f"Uploading to those sites is deprecated. The new sites are "
f"pypi.org and test.pypi.org. Try using {DEFAULT_REPOSITORY} (or "
f"{TEST_REPOSITORY}) to upload your packages instead. These are "
f"the default URLs for Twine now. More at "
f"https://packaging.python.org/guides/migrating-to-pypi-org/."
)
elif response.status_code == 405 and "pypi.org" in response.url:
raise exceptions.InvalidPyPIUploadURL(
f"It appears you're trying to upload to pypi.org but have an "
f"invalid URL. You probably want one of these two URLs: "
f"{DEFAULT_REPOSITORY} or {TEST_REPOSITORY}. Check your "
f"--repository-url value."
)
try:
response.raise_for_status()
except requests.HTTPError as err:
if not verbose:
logger.warning(
"Error during upload. "
"Retry with the --verbose option for more details."
)
raise err
def get_userpass_value(
cli_value: Optional[str],
config: RepositoryConfig,
key: str,
prompt_strategy: Optional[Callable[[], str]] = None,
) -> Optional[str]:
"""Get a credential (e.g. a username or password) from the configuration.
Uses the following rules:
1. If ``cli_value`` is specified, use that.
2. If ``config[key]`` is specified, use that.
3. If ``prompt_strategy`` is specified, use its return value.
4. Otherwise return ``None``
:param cli_value:
The value supplied from the command line.
:param config:
A dictionary of repository configuration values.
:param key:
The credential to look up in ``config``, e.g. ``"username"`` or ``"password"``.
:param prompt_strategy:
An argumentless function to get the value, e.g. from keyring or by prompting
the user.
:return:
The credential value, i.e. the username or password.
"""
if cli_value is not None:
logger.info(f"{key} set by command options")
return cli_value
elif config.get(key) is not None:
logger.info(f"{key} set from config file")
return config[key]
elif prompt_strategy:
warning = ""
value = prompt_strategy()
if not value:
warning = f"Your {key} is empty"
elif any(unicodedata.category(c).startswith("C") for c in value):
# See https://www.unicode.org/reports/tr44/#General_Category_Values
# Most common case is "\x16" when pasting in Windows Command Prompt
warning = f"Your {key} contains control characters"
if warning:
logger.warning(f"{warning}. Did you enter it correctly?")
logger.warning(
"See https://twine.readthedocs.io/#entering-credentials "
"for more information."
)
return value
else:
return None
#: Get the CA bundle via :func:`get_userpass_value`.
get_cacert = functools.partial(get_userpass_value, key="ca_cert")
#: Get the client certificate via :func:`get_userpass_value`.
get_clientcert = functools.partial(get_userpass_value, key="client_cert")
class EnvironmentDefault(argparse.Action):
"""Get values from environment variable."""
def __init__(
self,
env: str,
required: bool = True,
default: Optional[str] = None,
**kwargs: Any,
) -> None:
default = os.environ.get(env, default)
self.env = env
if default:
required = False
super().__init__(default=default, required=required, **kwargs)
def __call__(
self,
parser: argparse.ArgumentParser,
namespace: argparse.Namespace,
values: Union[str, Sequence[Any], None],
option_string: Optional[str] = None,
) -> None:
setattr(namespace, self.dest, values)
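# A minimal usage sketch for EnvironmentDefault (the option and environment
# variable names here are illustrative, not necessarily what twine registers):
#
#   parser.add_argument(
#       "-u", "--username",
#       action=EnvironmentDefault,
#       env="TWINE_USERNAME",
#       required=False,
#       help="The username to authenticate to the repository with",
#   )
#
# If TWINE_USERNAME is set, its value becomes the argument's default; a truthy
# default also forces required=False, so the option need not be passed on the
# command line.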
class EnvironmentFlag(argparse.Action):
"""Set boolean flag from environment variable."""
def __init__(self, env: str, **kwargs: Any) -> None:
default = self.bool_from_env(os.environ.get(env))
self.env = env
super().__init__(default=default, nargs=0, **kwargs)
def __call__(
self,
parser: argparse.ArgumentParser,
namespace: argparse.Namespace,
values: Union[str, Sequence[Any], None],
option_string: Optional[str] = None,
) -> None:
setattr(namespace, self.dest, True)
@staticmethod
def bool_from_env(val: Optional[str]) -> bool:
"""Allow '0' and 'false' and 'no' to be False."""
falsey = {"0", "false", "no"}
        return bool(val and val.lower() not in falsey)
import io
import os
import re
import zipfile
from typing import List, Optional
from pkginfo import distribution
from twine import exceptions
# Monkeypatch Metadata 2.0 support
distribution.HEADER_ATTRS_2_0 = distribution.HEADER_ATTRS_1_2
distribution.HEADER_ATTRS.update({"2.0": distribution.HEADER_ATTRS_2_0})
wheel_file_re = re.compile(
r"""^(?P<namever>(?P<name>.+?)(-(?P<ver>\d.+?))?)
((-(?P<build>\d.*?))?-(?P<pyver>.+?)-(?P<abi>.+?)-(?P<plat>.+?)
\.whl|\.dist-info)$""",
re.VERBOSE,
)
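# For reference, a worked example of what the pattern above captures for a
# typical wheel filename (the filename itself is hypothetical):
#
#   wheel_file_re.match("twine-3.4.1-py3-none-any.whl")
#   # name  -> "twine"   ver -> "3.4.1"
#   # pyver -> "py3"     abi -> "none"    plat -> "any"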
class Wheel(distribution.Distribution):
def __init__(self, filename: str, metadata_version: Optional[str] = None) -> None:
self.filename = filename
self.basefilename = os.path.basename(self.filename)
self.metadata_version = metadata_version
self.extractMetadata()
@property
def py_version(self) -> str:
wheel_info = wheel_file_re.match(self.basefilename)
if wheel_info is None:
return "any"
else:
return wheel_info.group("pyver")
@staticmethod
def find_candidate_metadata_files(names: List[str]) -> List[List[str]]:
"""Filter files that may be METADATA files."""
tuples = [x.split("/") for x in names if "METADATA" in x]
return [x[1] for x in sorted((len(x), x) for x in tuples)]
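    # A small illustration with a hypothetical archive listing:
    #
    #   find_candidate_metadata_files([
    #       "pkg-1.0.data/extra/nested/METADATA",
    #       "pkg-1.0.dist-info/METADATA",
    #   ])
    #   # -> [["pkg-1.0.dist-info", "METADATA"],
    #   #     ["pkg-1.0.data", "extra", "nested", "METADATA"]]
    #
    # i.e. candidates are ordered by path depth, so the shallowest METADATA
    # file is tried first by read() below.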
def read(self) -> bytes:
fqn = os.path.abspath(os.path.normpath(self.filename))
if not os.path.exists(fqn):
raise exceptions.InvalidDistribution("No such file: %s" % fqn)
if fqn.endswith(".whl"):
archive = zipfile.ZipFile(fqn)
names = archive.namelist()
def read_file(name: str) -> bytes:
return archive.read(name)
else:
raise exceptions.InvalidDistribution(
"Not a known archive format for file: %s" % fqn
)
try:
for path in self.find_candidate_metadata_files(names):
candidate = "/".join(path)
data = read_file(candidate)
if b"Metadata-Version" in data:
return data
finally:
archive.close()
raise exceptions.InvalidDistribution("No METADATA in archive: %s" % fqn)
def parse(self, data: bytes) -> None:
super().parse(data)
fp = io.StringIO(data.decode("utf-8", errors="replace"))
msg = distribution.parse(fp)
        self.description = msg.get_payload()
# Copyright 2015 Ian Stapleton Cordasco
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class TwineException(Exception):
"""Base class for all exceptions raised by twine."""
pass
class RedirectDetected(TwineException):
"""A redirect was detected that the user needs to resolve.
In some cases, requests refuses to issue a new POST request after a
redirect. In order to prevent a confusing user experience, we raise this
exception to allow users to know the index they're uploading to is
redirecting them.
"""
@classmethod
def from_args(cls, repository_url: str, redirect_url: str) -> "RedirectDetected":
if redirect_url == f"{repository_url}/":
return cls(
f"{repository_url} attempted to redirect to {redirect_url}.\n"
f"Your repository URL is missing a trailing slash. "
"Please add it and try again.",
)
return cls(
f"{repository_url} attempted to redirect to {redirect_url}.\n"
f"If you trust these URLs, set {redirect_url} as your repository URL "
"and try again.",
)
class PackageNotFound(TwineException):
"""A package file was provided that could not be found on the file system.
This is only used when attempting to register a package_file.
"""
pass
class UploadToDeprecatedPyPIDetected(TwineException):
"""An upload attempt was detected to deprecated PyPI domains.
The sites pypi.python.org and testpypi.python.org are deprecated.
"""
@classmethod
def from_args(
cls, target_url: str, default_url: str, test_url: str
) -> "UploadToDeprecatedPyPIDetected":
"""Return an UploadToDeprecatedPyPIDetected instance."""
return cls(
"You're trying to upload to the legacy PyPI site '{}'. "
"Uploading to those sites is deprecated. \n "
"The new sites are pypi.org and test.pypi.org. Try using "
"{} (or {}) to upload your packages instead. "
"These are the default URLs for Twine now. \n More at "
"https://packaging.python.org/guides/migrating-to-pypi-org/"
" .".format(target_url, default_url, test_url)
)
class UnreachableRepositoryURLDetected(TwineException):
"""An upload attempt was detected to a URL without a protocol prefix.
All repository URLs must have a protocol (e.g., ``https://``).
"""
pass
class InvalidSigningConfiguration(TwineException):
"""Both the sign and identity parameters must be present."""
pass
class InvalidSigningExecutable(TwineException):
"""Signing executable must be installed on system."""
pass
class InvalidConfiguration(TwineException):
"""Raised when configuration is invalid."""
pass
class InvalidDistribution(TwineException):
"""Raised when a distribution is invalid."""
pass
class NonInteractive(TwineException):
"""Raised in non-interactive mode when credentials could not be found."""
pass
class InvalidPyPIUploadURL(TwineException):
"""Repository configuration tries to use PyPI with an incorrect URL.
For example, https://pypi.org instead of https://upload.pypi.org/legacy.
"""
    pass
import argparse
import logging.config
from typing import Any, List, Tuple
import importlib_metadata
import rich
import rich.highlighter
import rich.logging
import rich.theme
import twine
args = argparse.Namespace()
def configure_output() -> None:
# Configure the global Console, available via rich.get_console().
# https://rich.readthedocs.io/en/latest/reference/init.html
# https://rich.readthedocs.io/en/latest/console.html
rich.reconfigure(
# Setting force_terminal makes testing easier by ensuring color codes. This
# could be based on FORCE_COLORS or PY_COLORS in os.environ, since Rich
# doesn't support that (https://github.com/Textualize/rich/issues/343).
force_terminal=True,
no_color=getattr(args, "no_color", False),
highlight=False,
theme=rich.theme.Theme(
{
"logging.level.debug": "green",
"logging.level.info": "blue",
"logging.level.warning": "yellow",
"logging.level.error": "red",
"logging.level.critical": "reverse red",
}
),
)
# Using dictConfig to override existing loggers, which prevents failures in
# test_main.py due to capsys not being cleared.
logging.config.dictConfig(
{
"disable_existing_loggers": False,
"version": 1,
"handlers": {
"console": {
"class": "rich.logging.RichHandler",
"show_time": False,
"show_path": False,
"highlighter": rich.highlighter.NullHighlighter(),
}
},
"root": {
"handlers": ["console"],
},
}
)
def list_dependencies_and_versions() -> List[Tuple[str, str]]:
deps = (
"importlib-metadata",
"keyring",
"pkginfo",
"requests",
"requests-toolbelt",
"urllib3",
)
return [(dep, importlib_metadata.version(dep)) for dep in deps] # type: ignore[no-untyped-call] # python/importlib_metadata#288 # noqa: E501
def dep_versions() -> str:
return ", ".join(
"{}: {}".format(*dependency) for dependency in list_dependencies_and_versions()
)
def dispatch(argv: List[str]) -> Any:
registered_commands = importlib_metadata.entry_points(
group="twine.registered_commands"
)
parser = argparse.ArgumentParser(prog="twine")
parser.add_argument(
"--version",
action="version",
version=f"%(prog)s version {twine.__version__} ({dep_versions()})",
)
parser.add_argument(
"--no-color",
default=False,
required=False,
action="store_true",
help="disable colored output",
)
parser.add_argument(
"command",
choices=registered_commands.names,
)
parser.add_argument(
"args",
help=argparse.SUPPRESS,
nargs=argparse.REMAINDER,
)
parser.parse_args(argv, namespace=args)
configure_output()
main = registered_commands[args.command].load()
    return main(args.args)
# Copyright 2018 Dustin Ingram
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import cgi
import io
import logging
import re
from typing import List, Optional, Tuple, cast
import readme_renderer.rst
from rich import print
from twine import commands
from twine import package as package_file
logger = logging.getLogger(__name__)
_RENDERERS = {
None: readme_renderer.rst, # Default if description_content_type is None
"text/plain": None, # Rendering cannot fail
"text/x-rst": readme_renderer.rst,
"text/markdown": None, # Rendering cannot fail
}
# Regular expression used to capture and reformat docutils warnings into
# something that a human can understand. This is loosely borrowed from
# Sphinx: https://github.com/sphinx-doc/sphinx/blob
# /c35eb6fade7a3b4a6de4183d1dd4196f04a5edaf/sphinx/util/docutils.py#L199
_REPORT_RE = re.compile(
r"^<string>:(?P<line>(?:\d+)?): "
r"\((?P<level>DEBUG|INFO|WARNING|ERROR|SEVERE)/(\d+)?\) "
r"(?P<message>.*)",
re.DOTALL | re.MULTILINE,
)
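# A hedged example of the rewrite performed by _WarningStream below. A docutils
# report line such as
#
#   <string>:42: (WARNING/2) Title underline too short.
#
# is reformatted to
#
#   line 42: Warning: Title underline too short.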
class _WarningStream(io.StringIO):
def write(self, text: str) -> int:
matched = _REPORT_RE.search(text)
if matched:
line = matched.group("line")
level_text = matched.group("level").capitalize()
message = matched.group("message").rstrip("\r\n")
text = f"line {line}: {level_text}: {message}\n"
return super().write(text)
def __str__(self) -> str:
return self.getvalue().strip()
def _check_file(
filename: str, render_warning_stream: _WarningStream
) -> Tuple[List[str], bool]:
"""Check given distribution."""
warnings = []
is_ok = True
package = package_file.PackageFile.from_filename(filename, comment=None)
metadata = package.metadata_dictionary()
description = cast(Optional[str], metadata["description"])
description_content_type = cast(Optional[str], metadata["description_content_type"])
if description_content_type is None:
warnings.append(
"`long_description_content_type` missing. defaulting to `text/x-rst`."
)
description_content_type = "text/x-rst"
content_type, params = cgi.parse_header(description_content_type)
renderer = _RENDERERS.get(content_type, _RENDERERS[None])
if description is None or description.rstrip() == "UNKNOWN":
warnings.append("`long_description` missing.")
elif renderer:
rendering_result = renderer.render(
description, stream=render_warning_stream, **params
)
if rendering_result is None:
is_ok = False
return warnings, is_ok
def check(
dists: List[str],
strict: bool = False,
) -> bool:
"""Check that a distribution will render correctly on PyPI and display the results.
    This currently only validates ``long_description``, but more checks could be
added; see https://github.com/pypa/twine/projects/2.
:param dists:
The distribution files to check.
:param strict:
If ``True``, treat warnings as errors.
:return:
``True`` if there are rendering errors, otherwise ``False``.
"""
uploads = [i for i in commands._find_dists(dists) if not i.endswith(".asc")]
if not uploads: # Return early, if there are no files to check.
logger.error("No files to check.")
return False
failure = False
for filename in uploads:
print(f"Checking {filename}: ", end="")
render_warning_stream = _WarningStream()
warnings, is_ok = _check_file(filename, render_warning_stream)
# Print the status and/or error
if not is_ok:
failure = True
print("[red]FAILED[/red]")
logger.error(
"`long_description` has syntax errors in markup"
" and would not be rendered on PyPI."
f"\n{render_warning_stream}"
)
elif warnings:
if strict:
failure = True
print("[red]FAILED due to warnings[/red]")
else:
print("[yellow]PASSED with warnings[/yellow]")
else:
print("[green]PASSED[/green]")
# Print warnings after the status and/or error
for message in warnings:
logger.warning(message)
return failure
def main(args: List[str]) -> bool:
"""Execute the ``check`` command.
:param args:
The command-line arguments.
:return:
The exit status of the ``check`` command.
"""
parser = argparse.ArgumentParser(prog="twine check")
parser.add_argument(
"dists",
nargs="+",
metavar="dist",
help="The distribution files to check, usually dist/*",
)
parser.add_argument(
"--strict",
action="store_true",
default=False,
required=False,
help="Fail on warnings",
)
parsed_args = parser.parse_args(args)
# Call the check function with the arguments from the command line
    return check(parsed_args.dists, strict=parsed_args.strict)
# Copyright 2013 Donald Stufft
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os.path
from typing import Dict, List, cast
import requests
from rich import print
from twine import commands
from twine import exceptions
from twine import package as package_file
from twine import settings
from twine import utils
logger = logging.getLogger(__name__)
def skip_upload(
response: requests.Response, skip_existing: bool, package: package_file.PackageFile
) -> bool:
"""Determine if a failed upload is an error or can be safely ignored.
:param response:
The response from attempting to upload ``package`` to a repository.
:param skip_existing:
If ``True``, use the status and content of ``response`` to determine if the
package already exists on the repository. If so, then a failed upload is safe
to ignore.
:param package:
The package that was being uploaded.
:return:
``True`` if a failed upload can be safely ignored, otherwise ``False``.
"""
if not skip_existing:
return False
status = response.status_code
reason = getattr(response, "reason", "").lower()
text = getattr(response, "text", "").lower()
# NOTE(sigmavirus24): PyPI presently returns a 400 status code with the
# error message in the reason attribute. Other implementations return a
# 403 or 409 status code.
return (
# pypiserver (https://pypi.org/project/pypiserver)
status == 409
# PyPI / TestPyPI / GCP Artifact Registry
or (status == 400 and any("already exist" in x for x in [reason, text]))
# Nexus Repository OSS (https://www.sonatype.com/nexus-repository-oss)
or (status == 400 and any("updating asset" in x for x in [reason, text]))
# Artifactory (https://jfrog.com/artifactory/)
or (status == 403 and "overwrite artifact" in text)
# Gitlab Enterprise Edition (https://about.gitlab.com)
or (status == 400 and "already been taken" in text)
)
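# A rough illustration of the decision above (status codes and bodies are
# hypothetical responses, not captured traffic): with skip_existing=True, a
# 409 from pypiserver, or a 400 from PyPI whose body contains
# "File already exists", both yield True (safe to skip). Any other failure,
# or any failure at all when skip_existing=False, yields False and is treated
# as an error by the caller.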
def _make_package(
filename: str, signatures: Dict[str, str], upload_settings: settings.Settings
) -> package_file.PackageFile:
"""Create and sign a package, based off of filename, signatures and settings."""
package = package_file.PackageFile.from_filename(filename, upload_settings.comment)
signed_name = package.signed_basefilename
if signed_name in signatures:
package.add_gpg_signature(signatures[signed_name], signed_name)
elif upload_settings.sign:
package.sign(upload_settings.sign_with, upload_settings.identity)
file_size = utils.get_file_size(package.filename)
logger.info(f"{package.filename} ({file_size})")
if package.gpg_signature:
logger.info(f"Signed with {package.signed_filename}")
return package
def upload(upload_settings: settings.Settings, dists: List[str]) -> None:
"""Upload one or more distributions to a repository, and display the progress.
If a package already exists on the repository, most repositories will return an
error response. However, if ``upload_settings.skip_existing`` is ``True``, a message
will be displayed and any remaining distributions will be uploaded.
For known repositories (like PyPI), the web URLs of successfully uploaded packages
will be displayed.
:param upload_settings:
The configured options related to uploading to a repository.
:param dists:
The distribution files to upload to the repository. This can also include
``.asc`` files; the GPG signatures will be added to the corresponding uploads.
:raises twine.exceptions.TwineException:
The upload failed due to a configuration error.
:raises requests.HTTPError:
The repository responded with an error.
"""
dists = commands._find_dists(dists)
# Determine if the user has passed in pre-signed distributions
signatures = {os.path.basename(d): d for d in dists if d.endswith(".asc")}
uploads = [i for i in dists if not i.endswith(".asc")]
upload_settings.check_repository_url()
repository_url = cast(str, upload_settings.repository_config["repository"])
print(f"Uploading distributions to {repository_url}")
packages_to_upload = [
_make_package(filename, signatures, upload_settings) for filename in uploads
]
repository = upload_settings.create_repository()
uploaded_packages = []
for package in packages_to_upload:
skip_message = (
f"Skipping {package.basefilename} because it appears to already exist"
)
# Note: The skip_existing check *needs* to be first, because otherwise
# we're going to generate extra HTTP requests against a hardcoded
# URL for no reason.
if upload_settings.skip_existing and repository.package_is_uploaded(package):
logger.warning(skip_message)
continue
resp = repository.upload(package)
logger.info(f"Response from {resp.url}:\n{resp.status_code} {resp.reason}")
if resp.text:
logger.info(resp.text)
# Bug 92. If we get a redirect we should abort because something seems
# funky. The behaviour is not well defined and redirects being issued
# by PyPI should never happen in reality. This should catch malicious
# redirects as well.
if resp.is_redirect:
raise exceptions.RedirectDetected.from_args(
repository_url,
resp.headers["location"],
)
if skip_upload(resp, upload_settings.skip_existing, package):
logger.warning(skip_message)
continue
utils.check_status_code(resp, upload_settings.verbose)
uploaded_packages.append(package)
release_urls = repository.release_urls(uploaded_packages)
if release_urls:
print("\n[green]View at:")
for url in release_urls:
print(url)
# Bug 28. Try to silence a ResourceWarning by clearing the connection
# pool.
repository.close()
def main(args: List[str]) -> None:
"""Execute the ``upload`` command.
:param args:
The command-line arguments.
"""
parser = argparse.ArgumentParser(prog="twine upload")
settings.Settings.register_argparse_arguments(parser)
parser.add_argument(
"dists",
nargs="+",
metavar="dist",
help="The distribution files to upload to the repository "
"(package index). Usually dist/* . May additionally contain "
"a .asc file to include an existing signature with the "
"file upload.",
)
parsed_args = parser.parse_args(args)
upload_settings = settings.Settings.from_argparse(parsed_args)
# Call the upload function with the arguments from the command line
    return upload(upload_settings, parsed_args.dists)
# Copyright 2015 Ian Cordasco
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os.path
from typing import List, cast
from rich import print
from twine import exceptions
from twine import package as package_file
from twine import settings
def register(register_settings: settings.Settings, package: str) -> None:
"""Pre-register a package name with a repository before uploading a distribution.
Pre-registration is not supported on PyPI, so the ``register`` command is only
necessary if you are using a different repository that requires it.
:param register_settings:
The configured options relating to repository registration.
:param package:
The path of the distribution to use for package metadata.
:raises twine.exceptions.TwineException:
The registration failed due to a configuration error.
:raises requests.HTTPError:
The repository responded with an error.
"""
repository_url = cast(str, register_settings.repository_config["repository"])
print(f"Registering package to {repository_url}")
repository = register_settings.create_repository()
if not os.path.exists(package):
raise exceptions.PackageNotFound(
f'"{package}" does not exist on the file system.'
)
resp = repository.register(
package_file.PackageFile.from_filename(package, register_settings.comment)
)
repository.close()
if resp.is_redirect:
raise exceptions.RedirectDetected.from_args(
repository_url,
resp.headers["location"],
)
resp.raise_for_status()
def main(args: List[str]) -> None:
"""Execute the ``register`` command.
:param args:
The command-line arguments.
"""
parser = argparse.ArgumentParser(
prog="twine register",
description="register operation is not required with PyPI.org",
)
settings.Settings.register_argparse_arguments(parser)
parser.add_argument(
"package",
metavar="package",
help="File from which we read the package metadata.",
)
parsed_args = parser.parse_args(args)
register_settings = settings.Settings.from_argparse(parsed_args)
# Call the register function with the args from the command line
    register(register_settings, parsed_args.package)
[](https://pypi.org/project/ADLES/)
[](https://travis-ci.org/GhostofGoes/ADLES)
[](http://adles.readthedocs.io/en/latest/)
[](https://zenodo.org/badge/latestdoi/68841026)
# Overview
Automated Deployment of Lab Environments System (ADLES)
ADLES automates the deterministic creation of virtualized environments for use
in Cybersecurity and Information Technology (IT) education.
The system enables educators to easily build deterministic and portable
environments for their courses, saving significant amounts of time and effort,
and alleviates the requirement of possessing advanced IT knowledge.
Complete documentation can be found at [ReadTheDocs](https://adles.readthedocs.io).
[Publication describing the system.](https://doi.org/10.1016/j.cose.2017.12.007)
# Getting started
```bash
# Install
pip3 install adles
# Usage
adles -h
# Specification syntax
adles --print-spec exercise
adles --print-spec infra
# Examples
adles --list-examples
adles --print-example competition
```
# Usage
Creating an environment using ADLES:
* Read the exercise and infrastructure specifications and examples of them.
* Write an infrastructure specification for your platform. (Currently, VMware vSphere is the only platform supported)
* Write an exercise specification with the environment you want created.
* Check its syntax, run the mastering phase, make your changes, and then run the deployment phase.
```bash
# Validate spec
adles validate my-competition.yaml
# Create Master images
adles masters my-competition.yaml
# Deploy the exercise
adles deploy my-competition.yaml
# Cleanup the environment
adles cleanup my-competition.yaml
```
## Detailed usage
```bash
usage: adles [-h] [--version] [-v] [--syslog SERVER] [--no-color]
[--list-examples] [--print-spec NAME] [--print-example NAME]
[-i INFRA]
{validate,deploy,masters,package,cleanup} ...
Examples:
adles --list-examples
adles --print-example competition | adles validate -
adles validate examples/pentest-tutorial.yaml
adles masters examples/experiment.yaml
adles -v deploy examples/experiment.yaml
adles cleanup -t masters --cleanup-nets examples/competition.yaml
adles validate -t infra examples/infra.yaml
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose Emit debugging logs to terminal
--syslog SERVER Send logs to a Syslog server on port 514
--no-color Do not color terminal output
-i INFRA, --infra INFRA
Override the infrastructure specification to be used
Print examples and specs:
--list-examples Prints the list of available example scenarios
--print-spec NAME Prints the named specification
--print-example NAME Prints the named example
ADLES Subcommands:
{validate,deploy,masters,package,cleanup}
validate Validate the syntax of your specification
deploy Environment deployment phase of specification
masters Master creation phase of specification
package Create a package
cleanup Cleanup and remove existing environments
```
## vSphere Utility Scripts
There are a number of utility scripts to make certain vSphere tasks bearable.
```bash
# Basic usage
vsphere --help
vsphere <script> --help
vsphere <script> --version
# Running as a Python module
python -m adles.vsphere --help
```
### Detailed usage
```bash
usage: vsphere [-h] {cleanup,clone,power,info,snapshot} ...
Single-purpose CLI scripts for interacting with vSphere
optional arguments:
-h, --help show this help message and exit
vSphere scripts:
{cleanup,clone,power,info,snapshot}
cleanup Cleanup and Destroy Virtual Machines (VMs) and VM
Folders in a vSphere environment.
clone Clone multiple Virtual Machines in vSphere.
power Power operations for Virtual Machines in vSphere.
info Query information about a vSphere environment and
objects within it.
snapshot Perform Snapshot operations on Virtual Machines in a
vSphere environment.
```
# System requirements
Python: 3.6+
ADLES will run on any platform supported by Python. It has been tested on:
* Windows 10 (Anniversary and Creators)
* Ubuntu 14.04 and 16.04 (Including Bash on Ubuntu on Windows)
* CentOS 7
## Python packages
See ``setup.py`` for specific versions
* pyyaml
* colorlog
* humanfriendly
* tqdm
* pyvmomi (If you are using VMware vSphere)
* setuptools (If you are installing manually or developing)
## Platforms
**VMware vSphere**
* vCenter Server: 6.0+
* ESXi: 6.0+
# Reporting issues and getting help
If there is a bug in ADLES or you run into issues getting it working, please open an [issue on GitHub](https://github.com/GhostofGoes/ADLES/issues). I'm happy to help, though it may take a few days to a week for me to respond. If it's time-sensitive (I know the feeling well), feel free to contact me directly (see below).
If you have general questions, want help with using ADLES for your project or students, or just want to discuss the project, drop me a line via email (the address is in `adles/__about__.py`), Twitter (@GhostOfGoes), or on Discord (@KnownError). The [Python Discord server](https://discord.gg/python) is a good place
to ask questions or discuss the project.
# Contributing
Contributors are more than welcome! See the [contribution guide](CONTRIBUTING.md) to get started, and checkout the [TODO list](TODO.md) and [GitHub issues](https://github.com/GhostofGoes/ADLES/issues) for a full list of tasks and bugs.
## Contributors
* Christopher Goes (@GhostOfGoes)
* Daniel Conte de Leon (dcontedeleon)
# Goals and TODO
The overall goal of ADLES is to create an easy-to-use and rock-solid system that allows instructors
and students teaching using virtual environments to automate their workloads.
Long-term, I’d like to see the creation of an open-source repository, similar to
Hashicorp’s Atlas and Docker’s Hub, where educators can share packages
and contribute to improving cyber education globally.
Full list of TODOs are in `documentation/TODO.md` and the GitHub issues.
# License
This project is licensed under the Apache License, Version 2.0. See
LICENSE for the full license text, and NOTICES for attributions to
external projects that this project uses code from.
# Project History
The system began as a proof of concept implementation of my Master's thesis research at the
University of Idaho in Fall of 2016. It was originally designed to run on the RADICL lab.
# Contributing to ADLES
Thanks for taking an interest in this awesome little project. We love
to bring new members into the community, and can always use the help.
## Important resources
* Bug reports and issues: create an issue on [GitHub](https://github.com/GhostofGoes/ADLES/issues)
* Discussion: the [Python Discord server](https://discord.gg/python)
* [Documentation](https://adles.readthedocs.io/en/latest/)
* [PyPI](https://pypi.org/project/ADLES/)
# Where to contribute
## Good for beginners
* Documentation. This is especially important since this project is
focused on education. We need extensive documentation that *teaches*
the user (example: how to edit a file) in addition to the normal items.
* Adding unit tests
* Adding functional tests
* Development tooling: `pylint`, `tox`, `pipenv`, etc.
* Running the tests on your system and reporting if anything breaks...
* ...bug reports!
## Major areas
* Adding new deployment providers: Vagrant, Cloud providers (Apache Libcloud), etc.
* Adding to the existing specifications
* Getting the Package specification off the ground
* User interface: finishing the CLI overhaul, adding web GUI, visualizations, etc.
# Getting started
1. Create your own fork of the code through GitHub web interface ([Here's a Guide](https://gist.github.com/Chaser324/ce0505fbed06b947d962))
2. Clone the fork to your computer. This can be done using the
[GitHub desktop](https://desktop.github.com/) GUI, `git clone <fork-url>`,
or the Git tools in your favorite editor or IDE.
3. Create and checkout a new branch in the fork with either your username (e.g. "ghostofgoes"),
or the name of the feature or issue you're working on (e.g. "web-gui").
Again, this can be done using the GUI, your favorite editor, or `git checkout -b <branch> origin/<branch>`.
4. Create a virtual environment:
* Linux/OSX (Bash)
```bash
python -m pip install --user -U virtualenv
mkdir -p ~/.virtualenvs/
python -m virtualenv ~/.virtualenvs/ADLES
source ~/.virtualenvs/ADLES/bin/activate
```
* Windows (PowerShell)
```powershell
python -m pip install --user -U virtualenv
New-Item -ItemType directory -Path "$Env:USERPROFILE\.virtualenvs"
python -m virtualenv "$Env:USERPROFILE\.virtualenvs\ADLES"
& "$Env:USERPROFILE\.virtualenvs\ADLES\Scripts\Activate.ps1"
```
5. Install the package: `python -m pip install -e .`
6. Setup and run the tests (wait, what tests? ...yeah, hey, what a great area to contribute!)
7. Write some code! Git commit messages should include information about what changed,
and if it's relevant, the rationale (thinking) for the change.
8. Follow the checklist
9. Submit a pull request!
## Code requirements
* All methods must have type annotations
* Must work on Python 3.6+
* Must work on Windows 10+, Ubuntu 16.04+, and Kali Rolling 2017+
* Try to match the general code style (loosely PEP8)
* Be respectful.
Memes, references, and jokes are ok.
Explicit language (cursing/swearing), NSFW text/content, or racism are NOT ok.
## Checklist before submitting a pull request
* [ ] Update the [CHANGELOG](CHANGELOG.md) (For non-trivial changes, e.g. changing functionality or adding tests)
* [ ] Add your name to the contributors list in the [README](README.md)
* [ ] All tests pass locally
* [ ] Pylint is happy
# Bug reports
Filing a bug report:
1. Answer these questions:
* [ ] What version of `ADLES` are you using? (`adles --version`)
* [ ] What operating system and processor architecture are you using?
* [ ] What version of Python are you using?
* [ ] What did you do?
* [ ] What did you expect to see?
* [ ] What did you see instead?
2. Put any excessive output into a [GitHub Gist](https://gist.github.com/) and include a link in the issue.
3. Tag the issue with "Bug"
**NOTE**: If the issue is a potential security vulnerability, do *NOT* open an issue!
Instead, email: ghostofgoes(at)gmail(dot)com
# Features and ideas
Ideas for features or other things are welcomed. Open an issue on GitHub
detailing the idea, and tag it appropriately (e.g. "Feature" for a new feature).
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [1.4.0] - 2019-09-04
**Notable changes**
- New CLI command syntax, run `adles --help` for details or checkout the Usage section in the README
- Consolidated the vSphere helper scripts (e.g. `vm-power`) into a single command, `vsphere` . For usage, run `vsphere --help`.
- **ADLES now requires Python 3.6+**. It is included or easily installable on any modern Linux distribution, Windows, and OSX.
### Added
- The CLI can now be invoked as a Python module (e.g. `python -m adles`, `python -m adles.vsphere`)
- Added two new specification fields to all spec types: `spec-type` and `spec-version`
- New argument: `--syslog`. Configures saving of log output to the specified Syslog server.
- Added progress bars to the cloning, power, and snapshot vSphere helper commands
- Support the `NO_COLOR` environment variable (per [no-color.org](https://no-color.org/))
- New dependencies: [tqdm](https://github.com/tqdm/tqdm) and [humanfriendly](https://pypi.org/project/humanfriendly/)
- Debian package (See the [GitHub releases page](https://github.com/GhostofGoes/ADLES/releases))
### Changed
- Failing to import an optional dependency will now log an error instead
of raising an exception and terminating execution.
- Logs will no longer be emitted to a syslog server by default.
  A syslog server will now only be used if the `--syslog` argument is set.
- Behind the scenes changes to commandline argument parsing that will
make adding future functionality easier and enable usage of other
third-party libraries that use `argparse`.
- Lots of code cleanup and formatting
- Bumped dependency versions
- Various other minor changes, see the Git pull request diff for all the changes
### Removed
- Dropped support for Python < 3.6
- Removed `Libvirt` and `HyperV` interfaces
- Removed dependency: `netaddr`
### Dev
- Added Tox for test running and linting
- Added `.editorconfig`
- Added `.gitattributes`
- Reorganized some documentation
- Removed CodeClimate
- Moved the remaining examples in the project root into `examples/`
- Added unit tests to Travis CI
## [1.3.6] - 2017-12-19
### Fixed
- Fixed issue with any of the commandline scripts where just entering
the script name (e.g. "adles") on the commandline would error,
instead of printing help as a user would expect.
- Fixed vm_snapshots script outputting to the wrong log file.
### Changed
- Under-the-hood improvements to how arguments are parsed
and keyboard interrupts are handled.
## [1.3.5] - 2017-12-13
### Changed
- Move package dependencies into setup.py from requirements.txt.
## [1.3.4] - 2017-12-13
### Added
- Man page on Linux systems!
## [1.3.3] - 2017-11-25
### Added
- The ability to relocate a VM to a new host and/or datastore to the VM class.
- The ability to change the mode of a VM HDD to the VM class.
### Changed
- Cleaned up docstrings in vm.py.
## [1.3.2] - 2017-11-25
### Added
- The ability to resize a HDD to the VM class.
## [1.3.1] - 2017-11-24
### Fixed
- Fixed bug where interfaces (and their dependencies) would be imported,
even if they were not being used. This caused the "pip install docker" error.
### Changed
- Minor improvements to logging.
## [1.3.0] - 2017-07-02
### Added
- An interface for libvirt (LibvirtInterface).
- LibcloudInterface that is inherited by LibvirtInterface and CloudInterface.
- Libvirt to infrastructure-specification.
- libvirt to optional-requirements.
### Changed
- Significant changes to the class hierarchy. All interfaces now inherit from Interface and call its init method.
There is a separate PlatformInterface that has most of the functionality Interface did, and this is what is now called from main().
- Tweaked some boilerplate code in the interfaces.
- Updated parser.
- Formatting tweaks.
- Moved apache-libcloud to requirements.
## [1.2.0] - 2017-07-02
Initial stab at cloud interface using Apache libcloud
(quickly superseded by 1.3.0, so ignore this version).
import logging
import sys
from os.path import basename, exists, join, splitext
from adles.args import parse_cli_args
from adles.interfaces import PlatformInterface
from adles.parser import check_syntax, parse_yaml
from adles.utils import handle_keyboard_interrupt, setup_logging
def run_cli():
"""Parse command line interface arguments and run ADLES."""
args = parse_cli_args()
exit_status = main(args=args)
sys.exit(exit_status)
@handle_keyboard_interrupt
def main(args) -> int:
"""
    :param args: Namespace of parsed command-line arguments
:return: The exit status of the program
"""
# Configure logging, including console colors and syslog server
colors = (False if args.no_color else True)
syslog = (args.syslog, 514) if args.syslog else None
setup_logging(filename='adles.log', colors=colors,
console_verbose=args.verbose, server=syslog)
# Set the sub-command to execute
command = args.command
# Just validate syntax, no building of environment
if command == 'validate':
if check_syntax(args.spec, args.validate_type) is None:
return 1
# Build an environment using a specification
elif command in ['deploy', 'masters', 'cleanup', 'package']:
override = None
if command == 'package': # Package specification
package_spec = check_syntax(args.spec, spec_type='package')
if package_spec is None: # Ensure it passed the check
return 1
# Extract exercise spec filename
spec_filename = package_spec["contents"]["environment"]
if "infrastructure" in package_spec["contents"]:
# Extract infra spec filename
override = package_spec["contents"]["infrastructure"]
else:
spec_filename = args.spec
# Validate specification syntax before proceeding
spec = check_syntax(spec_filename)
if spec is None: # Ensure it passed the check
return 1
if "name" not in spec["metadata"]:
# Default name is the filename of the specification
spec["metadata"]["name"] = splitext(basename(args.spec))[0]
# Override the infra file defined in exercise/package specification
if args.infra:
infra_file = args.infra
if not exists(infra_file):
logging.error("Could not find infra file '%s' "
"to override with", infra_file)
else:
override = infra_file
if override is not None: # Override infra file in exercise config
logging.info("Overriding infrastructure config "
"file with '%s'", override)
spec["metadata"]["infra-file"] = override
# Instantiate the Interface and call functions for the specified phase
interface = PlatformInterface(infra=parse_yaml(
spec["metadata"]["infra-file"]), spec=spec)
if command == 'masters':
interface.create_masters()
logging.info("Finished Master creation for %s",
spec["metadata"]["name"])
elif command == 'deploy':
interface.deploy_environment()
logging.info("Finished deployment of %s",
spec["metadata"]["name"])
elif command == 'cleanup':
if args.cleanup_type == 'masters':
interface.cleanup_masters(args.cleanup_nets)
elif args.cleanup_type == 'environment':
interface.cleanup_environment(args.cleanup_nets)
logging.info("Finished %s cleanup of %s", args.cleanup_type,
spec["metadata"]["name"])
else:
logging.error("INTERNAL ERROR -- Invalid command: %s", command)
return 1
# Show examples on commandline
elif args.list_examples or args.print_example:
from pkg_resources import Requirement, resource_filename
from os import listdir
example_dir = resource_filename(Requirement.parse("ADLES"), "examples")
# Filter non-YAML files from the listdir output
examples = [x[:-5] for x in listdir(example_dir) if ".yaml" in x]
if args.list_examples: # List all examples and their metadata
print("Example scenarios that can be printed " # noqa: T001
"using --print-example <name>")
# Print header for the output
print("Name".ljust(25) + "Version".ljust(10) + "Description") # noqa: T001
for example in examples:
if "infra" in example:
continue
metadata = parse_yaml(
join(example_dir, example + ".yaml"))["metadata"]
name = str(example).ljust(25)
ver = str(metadata["version"]).ljust(10)
desc = str(metadata["description"])
print(name + ver + desc) # noqa: T001
else:
example = args.print_example
if example in examples:
# Print out the complete content of a named example
with open(join(example_dir, example + ".yaml")) as file:
print(file.read()) # noqa: T001
else:
logging.error("Invalid example: %s", example)
return 1
# Show specifications on commandline
elif args.print_spec:
from pkg_resources import Requirement, resource_filename
spec = args.print_spec
specs = ["exercise", "package", "infrastructure"]
# Find spec in package installation directory and print it
if spec in specs:
# Extract specifications from their package installation location
filename = resource_filename(Requirement.parse("ADLES"),
join("specifications",
spec + "-specification.yaml"))
with open(filename) as file:
print(file.read()) # noqa: T001
else:
logging.error("Invalid specification: %s", spec)
return 1
# Handle invalid arguments
else:
logging.error("Invalid arguments. Argument dump:\n%s", str(vars(args)))
return 1
# Finished successfully
    return 0
import argparse
import sys
from adles.__about__ import __version__
description = """
ADLES: Automated Deployment of Lab Environments System.
Uses formal YAML specifications to create virtual environments for educational purposes.
Examples:
adles --list-examples
adles --print-example competition | adles validate -
adles validate examples/pentest-tutorial.yaml
adles masters examples/experiment.yaml
adles -v deploy examples/experiment.yaml
adles cleanup -t masters --cleanup-nets examples/competition.yaml
adles validate -t infra examples/infra.yaml
"""
epilog = """
License: Apache 2.0
Author: Christopher Goes <[email protected]>
Project: https://github.com/GhostofGoes/ADLES
"""
# TODO: Gooey
def parse_cli_args() -> argparse.Namespace:
main_parser = argparse.ArgumentParser(
prog='adles', formatter_class=argparse.RawDescriptionHelpFormatter,
description=description, epilog=epilog
)
main_parser.set_defaults(command='main')
main_parser.add_argument('--version', action='version',
version='ADLES %s' % __version__)
main_parser.add_argument('-v', '--verbose', action='store_true',
help='Emit debugging logs to terminal')
# TODO: break out logging config into a separate group
main_parser.add_argument('--syslog', type=str, metavar='SERVER',
help='Send logs to a Syslog server on port 514')
main_parser.add_argument('--no-color', action='store_true',
help='Do not color terminal output')
# Example/spec printing
examples = main_parser.add_argument_group(title='Print examples and specs')
examples.add_argument('--list-examples', action='store_true',
help='Prints the list of available example scenarios')
examples.add_argument('--print-spec', type=str, default='exercise',
help='Prints the named specification', metavar='NAME',
choices=['exercise', 'package', 'infrastructure'])
examples.add_argument('--print-example', type=str, metavar='NAME',
help='Prints the named example')
main_parser.add_argument('-i', '--infra', type=str, metavar='INFRA',
help='Override the infrastructure '
'specification to be used')
# ADLES sub-commands (TODO)
adles_subs = main_parser.add_subparsers(title='ADLES Subcommands')
# Validate
validate = adles_subs.add_parser(name='validate',
help='Validate the syntax '
'of your specification')
validate.set_defaults(command='validate')
validate.add_argument('-t', '--validate-type', type=str,
metavar='TYPE', default='exercise',
choices=['exercise', 'package', 'infra'],
help='Type of specification to validate')
# TODO:
# type=argparse.FileType(encoding='UTF-8')
# '-' argument...
validate.add_argument('spec', help='The YAML specification file to validate')
# Deployment phase
deploy = adles_subs.add_parser(name='deploy', help='Environment deployment '
'phase of specification')
deploy.set_defaults(command='deploy')
deploy.add_argument('spec', help='The YAML specification file to deploy')
# Mastering phase
masters = adles_subs.add_parser(name='masters', help='Master creation phase '
'of specification')
masters.set_defaults(command='masters')
masters.add_argument('spec', help='The YAML specification file '
'to create masters from')
# TODO: packages
package = adles_subs.add_parser(name='package', help='Create a package')
package.set_defaults(command='package')
package.add_argument('spec', help='The package specification to use')
# Cleanup
cleanup = adles_subs.add_parser(name='cleanup', help='Cleanup and remove '
'existing environments')
cleanup.set_defaults(command='cleanup')
cleanup.add_argument('-t', '--cleanup-type', type=str, metavar='TYPE',
choices=['masters', 'environment'],
help='Type of cleanup to perform')
cleanup.add_argument('--cleanup-nets', action='store_true',
help='Cleanup networks created during either phase')
cleanup.add_argument('spec', help='The YAML specification file defining '
'the environment to cleanup')
# Default to printing usage if no arguments are provided
if len(sys.argv) == 1:
main_parser.print_usage()
sys.exit(1)
args = main_parser.parse_args()
    return args