# AGouTI - Annotation of Genomic and Transcriptomic Intervals
## Introduction
High-throughput sequencing techniques have become very popular in molecular biology research. In many cases, the obtained results are described by positions within a transcript, a gene, or the genome. To infer the biological function of such regions, it is usually necessary to annotate them with overlapping known genomic features, such as genes, transcripts, exons, UTRs, CDSs, etc. AGouTI is a tool designed to annotate any genomic or transcriptomic coordinates using known genome annotation data in GTF or GFF files.
#### Main features
1. AGouTI works with coordinates describing positions within the genome as well as within transcripts,
2. Ability to assign intragenic regions from the provided GTF/GFF annotation (UTRs, CDS, etc.) or de novo (5’ part, middle, 3’ part, whole),
3. Annotation of intervals in standard BED files or in custom column-based text files (TSV, CSV, etc.) of any non-standard format,
4. Flexible handling of multiple annotations for a single region,
5. Flexible selection of GTF/GFF attributes to include in the annotation.
<hr>
<br>
<br>
<br>
## Installation
Python >= 3.7 is required to run this software.
You can install AGouTI using `pip` as follows (recommended):
`pip install AGouTI`
or by running
`python setup.py install`
<br>
#### Having trouble with an older Python version?
You can easily manage Python versions using `conda` and the concept of virtual environments.
`Anaconda` can be downloaded from https://www.anaconda.com/distribution/#download-section (Python 3.x version)
To create a virtual environment with a specified Python version, you can type
`
conda create --name your_virtualenv_name python=3.8
`
Afterward, you need to activate your virtual environment.
`
conda activate your_virtualenv_name
`
Now you can install AGouTI using `pip` or `conda` and perform your analysis.
After your job is finished, you can leave your virtual environment.
`
conda deactivate
`
As an alternative to conda, you can use Python's built-in `venv` module, for example:
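The following is only a minimal sketch of the `venv` route, assuming Python >= 3.7 is available as `python3`; the environment name `agouti_env` is just a placeholder.
`python3 -m venv agouti_env`
`source agouti_env/bin/activate`
`pip install AGouTI`
When you are done, leave the environment with `deactivate`.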
<hr>
<br>
<br>
<br>
## Run AGouTI
You can now access <b>AGouTI</b> as follows:
`
agouti --help
`
<br/>
Running AGouTI is a two-step process. First, you need to create a dedicated database based on your annotation file (GTF/GFF3) using the `create_db` module and then annotate your intervals with the features of interest using `annotate`.
`
agouti create_db --help
`
<br/>
`
agouti annotate --help
`
<br>
<br>
### Step 1. Create the database
We have decided to rely on an SQLite database to efficiently store and access annotation data. Thus, the first step of our annotation pipeline is to create such a database from the information included in GTF or GFF3 files. All feature names and attributes from the GTF/GFF3 file are automatically converted to lowercase to make feature selection uniform during the annotation step. By default, the initial database is created in RAM. It is then inspected to provide the user with a list of features and attributes available in the GTF/GFF file and to visualize the hierarchy of those features in a graph-based manner. The inspection process is efficient while the initial database is stored in memory (the default), but on low-memory machines it can also be stored on the hard drive (an SSD drive is recommended for speed). Memory consumption depends on the number of features and attributes that must be stored; for example, the estimated memory use for Gencode annotation files of the human genome is up to 5 GB. Finally, the database is written to the hard drive and can be used for annotation. All those steps are performed automatically by the `create_db` mode. Example invocations:
`
agouti create_db -f GTF -a gtf_of_your_choice.gtf -d database_name
`
or
`
agouti create_db -f GFF3 -a gff3_of_your_choice.gff3 -d database_name
`
##### Required options
<b>-a</b>, <b>--annotation</b> : Input file containing gene annotations in GTF or GFF3 format; the file may be gzip-compressed.
<b>-f</b>, <b>--format</b> : Input file format (GTF or GFF3).
<b>-d</b>, <b>--db</b> : Name for the output database.
##### Additional options
<b>-l</b>, <b>--low-ram</b> : Create the database as an sqlite3 file directly on your disk. By default, the initial database is created in RAM so that its contents and the relations between features can be inspected quickly, and it is saved to a local drive afterwards. Using this option may significantly slow down database creation; use it only when your RAM is small compared to the expected size of the database.
<b>-i</b>, <b>--infer_genes</b> : Infer genes. Use only with GTF files that do not contain separate lines describing genes. This step might be very time-consuming.
<b>-j</b>, <b>--infer_transcripts</b> : Infer transcripts. Use only with GTF files that do not contain separate lines describing transcripts. This step might be very time-consuming. An example combining these options is shown after this list.
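As a sketch only, a hypothetical invocation that combines these options for a GTF file lacking separate gene and transcript lines, on a machine with limited RAM, could look as follows (the file and database names are placeholders):
`agouti create_db -f GTF -a annotation_without_gene_lines.gtf -d my_database -i -j -l`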
##### Output
The output files include:
1. <b>database_name</b> - the SQLite database file
2. <b>database_name.relations</b> - a text file storing relations between feature types. This file is required for annotation with AGouTI; therefore, it must be stored in the same directory as the database file.
3. <b>database_name.attributes_and_features.pickle</b> - a Python dictionary stored as a pickle file. This file is required for annotation with AGouTI; therefore, it must be stored in the same directory as the database file.
4. <b>database_name.database.structure.txt</b> - additional text file listing all the features and attributes present in the GTF/GFF3 file and showing relations between them in a tree-like structure
A tree-based structure representing the hierarchy of the features in the GTF or GFF3 file, along with a list of the attributes available for each feature type, is also displayed on stdout by default.
With this insight into the database contents, users can choose only a subset of features and attributes when annotating the dataset of interest.
<br>
<br>
### Step 2. Annotate your file
After creating the database, you can run annotation with AGouTI using the `agouti annotate` command. AGouTI can annotate intervals stored in standard BED files or in any column-based text file containing genomic or transcriptomic coordinates (see the `--custom` option).
The transcriptomic mode enables the annotation of intervals encoded as positions within transcripts instead of chromosomes. It is particularly useful for annotating results of transcript-focused analyses (e.g., RBP binding sites, identification of RNA structural motifs, etc.). When using it, it is essential to use the same source and version of the annotation files as were used to generate the intervals submitted for annotation. Transcript IDs often change between annotation releases as the transcript layout is improved (e.g., ENST00000613119.1 -> ENST00000613119.2 -> ENST00000613119.3, etc.). Since the transcript IDs are part of the coordinate system in transcriptomic mode, any difference will result in an annotation error.
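To illustrate the difference between the two coordinate systems, here is a hypothetical interval in genomic coordinates and an analogous interval expressed in transcriptomic coordinates; the positions are made up for illustration, and the transcript ID is only an example taken from the paragraph above.
Genomic mode: `chrX	10500	10550	my_region	0	+`
Transcriptomic mode (`-t`): `ENST00000613119.3	120	170	my_region	0	+`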
Basic command:
`agouti annotate -i input.bed -d database_name`
##### Required Options
<b>-i</b>, <b>--input</b> : Input file in BED or another column-based format (see --custom).
<b>-d</b>, <b>--database</b> : Database created by agouti create_db.
##### Additional options
<b>-m</b>, <b>--custom</b> : Specify that the input text file is in a custom column-based format other than BED. The file should contain columns with the feature id (id), chromosome (chr), start (s), and end (e) coordinates. Users can optionally specify a column with strand information (strand); otherwise, AGouTI will set it to '.'. The format should be specified as column indexes (starting from 1) in the following order: "id,chr,s,e,strand" or "id,chr,s,e", e.g. --custom 1,2,4,5,6. The field separator used in your file can be provided using the --separator option. An example using this option is shown after this list.
<b>-p</b>, <b>--separator</b> : Field separator for the --custom option. Default is a tab character.
<b>-b</b>, <b>--first_base_num</b> : The first base number in the input file (BED/CUSTOM). Either 0 (0-based coordinates) or 1 (1-based coordinates). Default is 0 (standard for genomic BED files).
<b>-n</b>, <b>--header_lines</b> : The number of header lines. 0 by default. If a single header line is present, set this parameter to 1, etc.
<b>-t</b>, <b>--transcriptomic</b> : Transcriptomic annotation mode. In this mode, transcript IDs from the GTF/GFF3 are expected in the first column of the provided BED file instead of chromosome names, and coordinates are assumed to reflect positions within the transcript. Optional.
<b>-f</b>, <b>--select_features</b> : Comma-separated list of feature names to be reported, e.g., "mRNA,CDS". Refer to the [db_name].database.structure.txt file written during database creation for a list of valid features. By default, all features are reported.
<b>-a</b>, <b>--select_attributes</b> : Comma-separated list of attribute names to be reported, e.g., "ID,description". Refer to the [db_name].database.structure.txt file written during database creation for a list of valid attributes. By default, all attributes are reported.
<b>-c</b>, <b>--combine</b> : List of specific feature-attribute combinations to be reported. The desired combinations should be specified in the format 'feature1-attribute1:attribute2,feature2-attribute1'; e.g., "mRNA:description,biotype" will, for each mRNA, report its description and biotype.
<b>-s</b>, <b>--strand_specific</b> : Strand-specific search.
<b>-w</b>, <b>--completly_within</b> : The input interval must lie entirely within the GTF/GFF3 feature to be reported.
<b>-l</b>, <b>--level</b> : Group results at a specific level (1 for the gene level, 2 for the transcript level, e.g. mRNA, tRNA). Must be one of [1,2]. Annotation may be done at the gene or transcript level so that each output line corresponds to a gene or a transcript. Please note that --level 1 cannot be combined with --transcriptomic. Default is 2.
<b>-r</b>, <b>--annotate_feature_region</b> : Report the region within the GTF or GFF3 feature that overlaps with an entry from the input file. Designed to work with `--transcriptomic`. Possible values:
1. `5 prime` - the annotated feature starts within the first quarter of the gene or transcript and ends in the first half
2. `middle` - starts and ends within the second and third quarter
3. `3 prime` - starts within the third quarter and ends in the last one
4. `whole` - starts within the first quarter and ends within the last one. The length of the annotated feature does <b>not</b> exceed 90% of transcript or gene length.
5. `full` - starts within the first quarter and ends within the last one. The length of the annotated feature does exceed 90% of transcript or gene length.
<b>--statistics</b>: Calculate additional statistics. These will be displayed at the end of the software's output (starting with #).
<b>--stats_only</b>: Display statistics only.
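As an example of several of these options working together, the hypothetical command below annotates a comma-separated file whose id, chromosome, start, end, and strand sit in columns 1, 2, 4, 5, and 6, performs a strand-specific search, requires intervals to lie completely within features, and reports only selected feature-attribute combinations; the file name and the attribute names are placeholders, so check your own [db_name].database.structure.txt for valid values.
`agouti annotate -i intervals.csv -d database_name -m 1,2,4,5,6 -p , -s -w -c "gene:name,biotype" --statistics > annotated_intervals.tsv`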
##### Output
By default, the output is displayed on stdout in the form of a self-explanatory `.tsv` table. A quick way to inspect it in the terminal is shown below.
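For example, assuming the output was redirected to a file named `output.tsv`, the columns can be aligned for reading with standard Unix tools (this is just a convenience suggestion, not part of AGouTI itself):
`column -t -s $'\t' output.tsv | less -S`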
<hr>
<br>
<br>
<br>
## Test case 1
All files used in this case are available at https://github.com/zywicki-lab/agouti (`agouti/agouti_pkg/sample_data.tar.gz`).
### Annotate human single nucleotide polymorphisms (SNPs) stored in the BED file (`common_snp.bed`).
We've downloaded a BED file from the UCSC Table Browser (<i>https://genome.ucsc.edu/cgi-bin/hgTables</i>) using the following filters:
<b>Clade:</b> Mammal
<b>Genome:</b> Human
<b>Assembly:</b> Dec. 2013 (GRCh38/hg38)
<b>Group:</b> Variation
<b>Track:</b> Common SNPs(151)
<b>Table:</b> snp151Common
<b>Position:</b> chrX
<b>Output format:</b> BED - browser extensible data
and saved only the first 1000 SNPs in `common_snp.bed`.
Furthermore, we've downloaded gene annotations (only the X chromosome, genome version <i>GRCh38.p13</i>) in GFF3 format from the Ensembl database (<i>https://www.ensembl.org</i>) and trimmed the file to contain only the first 18178 lines.
To run AGouTI, the user first needs to create a database based on the contents of the GFF3 file. This can be done with the command
`agouti create_db -f GFF3 -a Homo_sapiens.GRCh38.105.chromosome.X.gff3.gz -d Homo_sapiens.GRCh38.105.chromosome.X.db`
After the job completes, the user can explore the structure and contents of the database and GFF3 file by examining `Homo_sapiens.GRCh38.105.chromosome.X.db.database.structure.txt`.
Let’s say that we are not interested in pseudogenes, unconfirmed_transcripts, and non-coding RNAs. To annotate SNPs using our database and calculate additional statistics (`--statistics`), we can type
`agouti annotate -d Homo_sapiens.GRCh38.105.chromosome.X.db -i common_snp.bed -f gene,lnc_rna,exon,mrna,five_prime_utr,three_prime_utr,cds --statistics > annotated_snp.tsv`
You can examine the results stored in the `annotated_snp.tsv`
Please note that some SNPs lie in intergenic regions, and the closest gene upstream/downstream may be reported as `None`; this happens because we operate near the chromosome border, so no gene is annotated upstream or downstream of such SNPs.
To discard SNPs located in the intergenic regions, we can use `grep`, `awk`, or similar tools, e.g.
`grep -v intergenic annotated_snp.tsv > annotated_snp_intergenic_discarded.tsv`
You can also make annotations at the gene level instead of the transcript level (option `-l`)
`agouti annotate -d Homo_sapiens.GRCh38.105.chromosome.X.db -i common_snp.bed -f gene,lnc_rna,exon,mrna,five_prime_utr,three_prime_utr,cds -l 1 --statistics | grep -v intergenic > annotated_snp_l1.tsv`
You can explore the differences by examining the `annotated_snp_l1.tsv` file.
<hr>
<br>
<br>
## Test case 2
All files used in this case are available at https://github.com/zywicki-lab/agouti (`agouti/agouti_pkg/sample_data.tar.gz`).
### Annotate sample results obtained with the missRNA software (`missRNA.tsv`). Records represent coordinates of small RNA molecules excised from longer RNA molecules.
We've downloaded human gene annotations (only the 1st chromosome, genome version <i>GRCh38.p13</i>) in GFF3 format from the Ensembl database (<i>https://www.ensembl.org</i>) and trimmed the file to contain only the first 68571 lines.
To create the database, run:
`agouti create_db -f GFF3 -a Homo_sapiens.GRCh38.105.chromosome.1.gff3.gz -d Homo_sapiens.GRCh38.105.chr1.db`
To annotate a file in a format other than standard BED using the transcriptomic mode, use:
`agouti annotate -i missRNA.tsv -d Homo_sapiens.GRCh38.105.chr1.db -t -r -m 2,1,4,5,6 > missRNA-results.tsv`
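Here, `-m 2,1,4,5,6` means that the feature id is taken from column 2, the transcript ID from column 1, the start and end coordinates from columns 4 and 5, and the strand from column 6 of `missRNA.tsv`, while `-r` additionally reports which region of each transcript the small RNA originates from. A hypothetical, tab-separated input line could therefore look like this (all values are placeholders, not the actual content of the sample file):
`<transcript_id>	<feature_id>	<other_column>	35	55	+`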
The self-explanatory output file is stored as `missRNA-results.tsv` in the same directory as the sample datasets.
<hr>
<br>
<br>
<br>
### Contribute
If you notice any errors or mistakes, or would like to suggest new features, please use GitHub's issue tracking system to report them. You are also welcome to send a pull request with your corrections and suggestions.
### License
This project is licensed under the terms of the GNU General Public License v3.0.
Anytree (Copyright (c) 2016 c0fec0de) and gffutils (Copyright (c) 2013 Ryan Dale) packages are distributed with this software to ensure full compatibility. Documentation, authors, license and additional information can be found at https://anytree.readthedocs.io/en/latest/ and https://github.com/daler/gffutils.
| AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/README.md | README.md |
from agouti_pkg.read_input import read_header_line
def prepare_bed_header(bed_file, custom, transcriptomic, num_of_bed_fields, sep, header_line_num):
"""Prepares part of the header for the features from BED line
Arguments:
bed_file {str} -- name or path of the input file
custom {str} -- custom file format description - as in --custom arg
transcriptomic {bool} -- args.transcriptomic
num_of_bed_fields {int} -- number of columns in the input file
sep {str} -- separator
Returns:
[str] -- bed header of the output file
"""
rhl = read_header_line(bed_file, custom, sep, header_line_num)
bed_header = ""
if custom != "BED":
idx = 1
custom_format = custom.strip().split(",")
custom_format = list(map(int, custom_format))
if not transcriptomic:
bed_fields = ["feature_id", "chr", "feature_start", "feature_end",
"strand"]
else:
bed_fields = ["feature_id", "transcript_id", "feature_start",
"feature_end", "strand"]
indices = [index for index, value in sorted(enumerate(custom_format),
key=lambda x: x[1])]
temp = 0
for i in range(1, num_of_bed_fields + 1):
if i not in custom_format:
if not rhl[0]: # if there is no header
bed_header = "{}\tCUSTOM_field_{}".format(bed_header, idx)
else:
bed_header = "{}\t{}".format(bed_header, rhl[1][idx - 1])
idx = idx + 1
else:
bed_header = "{}\t{}".format(bed_header,
bed_fields[(indices[temp])])
temp = temp + 1
else:
bed_fields = []
if not transcriptomic:
bed_fields = ["chr", "feature_start", "feature_end",
"feature_name", "feature_score", "strand"]
else:
bed_fields = ["transcript_id", "feature_start", "feature_end",
"feature_name", "feature_score", "strand"]
for field in range(0, num_of_bed_fields):
if field < 6:
bed_header = "{}\t{}".format(bed_header, bed_fields[field])
else:
idx = field - 6
if not rhl[0]: # if there is no header
bed_header = "{}\tBED_field_{}".format(bed_header, idx + 1)
else:
bed_header = "{}\t{}".format(bed_header, rhl[1][idx])
return bed_header
def prepare_header(db, attributes_and_features, args, num_of_bed_fields, header_line_num):
"""Prepare header for the output file
Arguments:
db {Database} -- object of the Database class
attributes_and_features {dict} -- dictionary of attributes and\
features that need to be annotated
args {argparse} -- parsed command line argument using argparse
num_of_bed_fields {int} -- number of columns in the input file
Returns:
tuple -- header of the output file, attributes chosen by the user to\
be annotated and present in the database,\
attributes chosen by the user
"""
features_to_annotate = set(db.features_at_3_level)
features_choosen_by_user = set(attributes_and_features.keys())
intersection = features_to_annotate & features_choosen_by_user
if args.level == 1:
features_at_given_level = db.features_at_1_level
else:
features_at_given_level = db.features_at_2_level
attributes_choosen_by_user = set()
for f in attributes_and_features.keys():
if (f in features_at_given_level):
for a in attributes_and_features[f]:
attributes_choosen_by_user.add(a)
attributes_choosen_by_user = sorted(attributes_choosen_by_user)
intersection = list(intersection)
intersection.sort()
bed_header = prepare_bed_header(args.bed, args.custom, args.transcriptomic,
num_of_bed_fields, args.sep, header_line_num)
if (args.level == 1):
header = "{}\tannotated_gene_id\tannotated_featuretype\
\tannotated_gene_start\tannotated_gene_end\
\t{}\t{}".format(bed_header, "\t".join(intersection),
"\t".join(attributes_choosen_by_user))
elif (args.level == 2):
if args.transcriptomic:
header = "{}\tannotated_gene_id\tannotated_featuretype\
\tannotated_chromosome\tannotated_transcript_start\
\tannotated_transcript_end\t{}\t{}\
".format(bed_header, "\t".join(intersection),
"\t".join(attributes_choosen_by_user))
else:
header = "{}\tannotated_transcript_id\tannotated_gene_id\
\tannotated_featuretype\tannotated_transcript_start\
\tannotated_transcript_end\t{}\t{}\
".format(bed_header, "\t".join(intersection),
"\t".join(attributes_choosen_by_user))
if (args.annotate_relative_location):
header = header.strip() + "\tfeature_region"
return header, list(intersection), list(attributes_choosen_by_user)
| AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/header.py | header.py |
from __future__ import absolute_import
import functools
import itertools
import operator
import sys
import types
__author__ = "Benjamin Peterson <[email protected]>"
__version__ = "1.12.0"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
PY34 = sys.version_info[0:2] >= (3, 4)
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result) # Invokes __set__.
try:
# This is a bit ugly, but it avoids running this again by
# removing this descriptor.
delattr(obj.__class__, self.name)
except AttributeError:
pass
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
def __getattr__(self, attr):
_module = self._resolve()
value = getattr(_module, attr)
setattr(self, attr, value)
return value
class _LazyModule(types.ModuleType):
def __init__(self, name):
super(_LazyModule, self).__init__(name)
self.__doc__ = self.__class__.__doc__
def __dir__(self):
attrs = ["__doc__", "__name__"]
attrs += [attr.name for attr in self._moved_attributes]
return attrs
# Subclasses should override this
_moved_attributes = []
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _SixMetaPathImporter(object):
"""
A meta path importer to import six.moves and its submodules.
This class implements a PEP302 finder and loader. It should be compatible
with Python 2.5 and all existing versions of Python3
"""
def __init__(self, six_module_name):
self.name = six_module_name
self.known_modules = {}
def _add_module(self, mod, *fullnames):
for fullname in fullnames:
self.known_modules[self.name + "." + fullname] = mod
def _get_module(self, fullname):
return self.known_modules[self.name + "." + fullname]
def find_module(self, fullname, path=None):
if fullname in self.known_modules:
return self
return None
def __get_module(self, fullname):
try:
return self.known_modules[fullname]
except KeyError:
raise ImportError("This loader does not know module " + fullname)
def load_module(self, fullname):
try:
# in case of a reload
return sys.modules[fullname]
except KeyError:
pass
mod = self.__get_module(fullname)
if isinstance(mod, MovedModule):
mod = mod._resolve()
else:
mod.__loader__ = self
sys.modules[fullname] = mod
return mod
def is_package(self, fullname):
"""
Return true, if the named module is a package.
We need this method to get correct spec objects with
Python 3.4 (see PEP451)
"""
return hasattr(self.__get_module(fullname), "__path__")
def get_code(self, fullname):
"""Return None
Required, if is_package is implemented"""
self.__get_module(fullname) # eventually raises ImportError
return None
get_source = get_code # same as get_code
_importer = _SixMetaPathImporter(__name__)
class _MovedItems(_LazyModule):
"""Lazy loading of moved objects"""
__path__ = [] # mark as package
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("intern", "__builtin__", "sys"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"),
MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"),
MovedAttribute("getoutput", "commands", "subprocess"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("shlex_quote", "pipes", "shlex", "quote"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserDict", "UserDict", "collections"),
MovedAttribute("UserList", "UserList", "collections"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("dbm_gnu", "gdbm", "dbm.gnu"),
MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("_thread", "thread", "_thread"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"),
MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"),
]
# Add windows specific modules.
if sys.platform == "win32":
_moved_attributes += [
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
if isinstance(attr, MovedModule):
_importer._add_module(attr, "moves." + attr.name)
del attr
_MovedItems._moved_attributes = _moved_attributes
moves = _MovedItems(__name__ + ".moves")
_importer._add_module(moves, "moves")
class Module_six_moves_urllib_parse(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("SplitResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
MovedAttribute("splitquery", "urllib", "urllib.parse"),
MovedAttribute("splittag", "urllib", "urllib.parse"),
MovedAttribute("splituser", "urllib", "urllib.parse"),
MovedAttribute("splitvalue", "urllib", "urllib.parse"),
MovedAttribute("uses_fragment", "urlparse", "urllib.parse"),
MovedAttribute("uses_netloc", "urlparse", "urllib.parse"),
MovedAttribute("uses_params", "urlparse", "urllib.parse"),
MovedAttribute("uses_query", "urlparse", "urllib.parse"),
MovedAttribute("uses_relative", "urlparse", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes
_importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"),
"moves.urllib_parse", "moves.urllib.parse")
class Module_six_moves_urllib_error(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes
_importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"),
"moves.urllib_error", "moves.urllib.error")
class Module_six_moves_urllib_request(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
MovedAttribute("proxy_bypass", "urllib", "urllib.request"),
MovedAttribute("parse_http_list", "urllib2", "urllib.request"),
MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes
_importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"),
"moves.urllib_request", "moves.urllib.request")
class Module_six_moves_urllib_response(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes
_importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"),
"moves.urllib_response", "moves.urllib.response")
class Module_six_moves_urllib_robotparser(_LazyModule):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes
_importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"),
"moves.urllib_robotparser", "moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
__path__ = [] # mark as package
parse = _importer._get_module("moves.urllib_parse")
error = _importer._get_module("moves.urllib_error")
request = _importer._get_module("moves.urllib_request")
response = _importer._get_module("moves.urllib_response")
robotparser = _importer._get_module("moves.urllib_robotparser")
def __dir__(self):
return ['parse', 'error', 'request', 'response', 'robotparser']
_importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"),
"moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
def create_unbound_method(func, cls):
return func
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
def create_unbound_method(func, cls):
return types.MethodType(func, None, cls)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
viewkeys = operator.methodcaller("keys")
viewvalues = operator.methodcaller("values")
viewitems = operator.methodcaller("items")
else:
def iterkeys(d, **kw):
return d.iterkeys(**kw)
def itervalues(d, **kw):
return d.itervalues(**kw)
def iteritems(d, **kw):
return d.iteritems(**kw)
def iterlists(d, **kw):
return d.iterlists(**kw)
viewkeys = operator.methodcaller("viewkeys")
viewvalues = operator.methodcaller("viewvalues")
viewitems = operator.methodcaller("viewitems")
_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.")
_add_doc(itervalues, "Return an iterator over the values of a dictionary.")
_add_doc(iteritems,
"Return an iterator over the (key, value) pairs of a dictionary.")
_add_doc(iterlists,
"Return an iterator over the (key, [values]) pairs of a dictionary.")
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
import struct
int2byte = struct.Struct(">B").pack
del struct
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
_assertCountEqual = "assertCountEqual"
if sys.version_info[1] <= 1:
_assertRaisesRegex = "assertRaisesRegexp"
_assertRegex = "assertRegexpMatches"
else:
_assertRaisesRegex = "assertRaisesRegex"
_assertRegex = "assertRegex"
else:
def b(s):
return s
# Workaround for standalone backslash
def u(s):
return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
iterbytes = functools.partial(itertools.imap, ord)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_assertCountEqual = "assertItemsEqual"
_assertRaisesRegex = "assertRaisesRegexp"
_assertRegex = "assertRegexpMatches"
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
def assertCountEqual(self, *args, **kwargs):
return getattr(self, _assertCountEqual)(*args, **kwargs)
def assertRaisesRegex(self, *args, **kwargs):
return getattr(self, _assertRaisesRegex)(*args, **kwargs)
def assertRegex(self, *args, **kwargs):
return getattr(self, _assertRegex)(*args, **kwargs)
if PY3:
exec_ = getattr(moves.builtins, "exec")
def reraise(tp, value, tb=None):
try:
if value is None:
value = tp()
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
finally:
value = None
tb = None
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
try:
raise tp, value, tb
finally:
tb = None
""")
if sys.version_info[:2] == (3, 2):
exec_("""def raise_from(value, from_value):
try:
if from_value is None:
raise value
raise value from from_value
finally:
value = None
""")
elif sys.version_info[:2] > (3, 2):
exec_("""def raise_from(value, from_value):
try:
raise value from from_value
finally:
value = None
""")
else:
def raise_from(value, from_value):
raise value
print_ = getattr(moves.builtins, "print", None)
if print_ is None:
def print_(*args, **kwargs):
"""The new-style print function for Python 2.4 and 2.5."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
# If the file has an encoding, encode unicode with it.
if (isinstance(fp, file) and
isinstance(data, unicode) and
fp.encoding is not None):
errors = getattr(fp, "errors", None)
if errors is None:
errors = "strict"
data = data.encode(fp.encoding, errors)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
if sys.version_info[:2] < (3, 3):
_print = print_
def print_(*args, **kwargs):
fp = kwargs.get("file", sys.stdout)
flush = kwargs.pop("flush", False)
_print(*args, **kwargs)
if flush and fp is not None:
fp.flush()
_add_doc(reraise, """Reraise an exception.""")
if sys.version_info[0:2] < (3, 4):
def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS,
updated=functools.WRAPPER_UPDATES):
def wrapper(f):
f = functools.wraps(wrapped, assigned, updated)(f)
f.__wrapped__ = wrapped
return f
return wrapper
else:
wraps = functools.wraps
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
# This requires a bit of explanation: the basic idea is to make a dummy
# metaclass for one level of class instantiation that replaces itself with
# the actual metaclass.
class metaclass(type):
def __new__(cls, name, this_bases, d):
return meta(name, bases, d)
@classmethod
def __prepare__(cls, name, this_bases):
return meta.__prepare__(name, bases)
return type.__new__(metaclass, 'temporary_class', (), {})
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
slots = orig_vars.get('__slots__')
if slots is not None:
if isinstance(slots, str):
slots = [slots]
for slots_var in slots:
orig_vars.pop(slots_var)
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
if hasattr(cls, '__qualname__'):
orig_vars['__qualname__'] = cls.__qualname__
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
def ensure_binary(s, encoding='utf-8', errors='strict'):
"""Coerce **s** to six.binary_type.
For Python 2:
- `unicode` -> encoded to `str`
- `str` -> `str`
For Python 3:
- `str` -> encoded to `bytes`
- `bytes` -> `bytes`
"""
if isinstance(s, text_type):
return s.encode(encoding, errors)
elif isinstance(s, binary_type):
return s
else:
raise TypeError("not expecting type '%s'" % type(s))
def ensure_str(s, encoding='utf-8', errors='strict'):
"""Coerce *s* to `str`.
For Python 2:
- `unicode` -> encoded to `str`
- `str` -> `str`
For Python 3:
- `str` -> `str`
- `bytes` -> decoded to `str`
"""
if not isinstance(s, (text_type, binary_type)):
raise TypeError("not expecting type '%s'" % type(s))
if PY2 and isinstance(s, text_type):
s = s.encode(encoding, errors)
elif PY3 and isinstance(s, binary_type):
s = s.decode(encoding, errors)
return s
def ensure_text(s, encoding='utf-8', errors='strict'):
"""Coerce *s* to six.text_type.
For Python 2:
- `unicode` -> `unicode`
- `str` -> `unicode`
For Python 3:
- `str` -> `str`
- `bytes` -> decoded to `str`
"""
if isinstance(s, binary_type):
return s.decode(encoding, errors)
elif isinstance(s, text_type):
return s
else:
raise TypeError("not expecting type '%s'" % type(s))
def python_2_unicode_compatible(klass):
"""
A decorator that defines __unicode__ and __str__ methods under Python 2.
Under Python 3 it does nothing.
To support Python 2 and 3 with a single code base, define a __str__ method
returning text and apply this decorator to the class.
"""
if PY2:
if '__str__' not in klass.__dict__:
raise ValueError("@python_2_unicode_compatible cannot be applied "
"to %s because it doesn't define __str__()." %
klass.__name__)
klass.__unicode__ = klass.__str__
klass.__str__ = lambda self: self.__unicode__().encode('utf-8')
return klass
# Complete the moves implementation.
# This code is at the end of this module to speed up module loading.
# Turn this module into a package.
__path__ = [] # required for PEP 302 and PEP 451
__package__ = __name__ # see PEP 366 @ReservedAssignment
if globals().get("__spec__") is not None:
__spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable
# Remove other six meta path importers, since they cause problems. This can
# happen if six is removed from sys.modules and then reloaded. (Setuptools does
# this for some reason.)
if sys.meta_path:
for i, importer in enumerate(sys.meta_path):
# Here's some real nastiness: Another "instance" of the six module might
# be floating around. Therefore, we can't use isinstance() to check for
# the six meta path importer, since the other six instance will have
# inserted an importer with different class.
if (type(importer).__name__ == "_SixMetaPathImporter" and
importer.name == __name__):
del sys.meta_path[i]
break
del i, importer
# Finally, add the importer to the meta path import hook.
sys.meta_path.append(_importer)
| AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/six.py | six.py |
from agouti_pkg.miscallaneous import *
class ProcessingProduct(object):
"""Feature from the BED file. One line in BED corresponds\
to one ProcessingProduct"""
def __init__(self, coordinates, processing_product, score, strand,
bed_line, first_base_num, coords_outside_transcript="No"):
self.coordinates = coordinates
if (first_base_num == 0):
one_based_coordinates = (coordinates[0], coordinates[1]+1,
coordinates[2])
self.coordinates = one_based_coordinates
self.processing_product = processing_product
self.score = score
self.strand = strand
self.bed_line = bed_line
self.coords_outside_transcript = coords_outside_transcript
def convert_coordinates(self):
"""Converts coordinates into UCSC-like format
Returns:
str -- coordinates in UCSC-like format
"""
coord_formated = "{chr}:{start}-{end}".format(
chr=self.coordinates[0],
start=self.coordinates[1],
end=self.coordinates[2])
return coord_formated
def genomic_to_transcriptomic(self, database, featuretypes_from_db):
"""Converts genomic coordinates to transcriptomic coordinates of given\
mRNA. Require two args. One of them is a sqlite3 database created \
by gffutils, second is a generator object that yields featuretypes\
from database (as strings)
Arguments:
database {modules.database.Database} -- Database object
featuretypes_from_db {list} -- feature types in the database
Returns:
tuple -- first element is tuple of (5' utr len, cds len ,\
3' utr len), second is start coordinate of CDS within\
transcript
"""
transcript_id = self.coordinates[0]
mRNA = database.database[transcript_id]
three_prime_UTRs = get_three_prime_UTRs(mRNA,
list(featuretypes_from_db),
database)
five_prime_UTRs = get_five_prime_UTRs(mRNA, list(featuretypes_from_db),
database)
exons = get_exons(mRNA, database)
cds = get_cds(mRNA, list(featuretypes_from_db), database)
five_prime_utr_len = sum_features_length(five_prime_UTRs)
three_prime_utr_len = sum_features_length(three_prime_UTRs)
transcript_length = sum_features_length(exons)
cds_len = transcript_length - five_prime_utr_len - three_prime_utr_len
if len(cds) == 0:
cds_start = None
else:
cds_start = min(cds, key=attrgetter('start')).start
if (sum([cds_len, three_prime_utr_len, five_prime_utr_len]) !=
transcript_length and len(cds)):
left_side_of_cds_len = 0
for exon in exons:
if (cds_start >= exon.end):
left_side_of_cds_len += exon.end - exon.start
elif (cds_start < exon.end and cds_start > exon.start):
left_side_of_cds_len += cds_start - exon.start
five_prime_utr_len = left_side_of_cds_len
three_prime_utr_len = (transcript_length - five_prime_utr_len
- cds_len)
lengths_tuple = (five_prime_utr_len, cds_len,
three_prime_utr_len)
if lengths_tuple == (0, 0, 0):
lengths_tuple = (0, transcript_length, 0)
return lengths_tuple, cds_start
def check_overlapping_feature__position(self, lengths_dict, feature,
transcriptomic, offset=0):
"""Checks in which part of overlapping feature lies ProcessingProduct\
object
Arguments:
feature {Feature} -- overlapping feature
transcriptomic {bool} -- transcriptomic option (see help)
Keyword Arguments:
offset {int} -- offset option (see help) (default: {0})
Returns:
str -- localization within overlapping feature
"""
start, end = min(self.coordinates[1], self.coordinates[2]), max(self.coordinates[1], self.coordinates[2])
if (transcriptomic):
feature_length = sum(lengths_dict[feature.id])
else:
feature_length = max(feature.stop, feature.start) - min(feature.stop, feature.start)
pp_length = self.coordinates[2] - self.coordinates[1]
if (offset):
feature_start = offset
else:
feature_start = 0
if start > feature_length and end > feature_length:
loc = "downstream"
elif self.coords_outside_transcript == "both":
loc = "upstream"
elif (start < feature_start + (0.25 * feature_length)
and end < feature_start + (0.5 * feature_length)):
loc = "5 prime"
elif (start > feature_start + (0.25 * feature_length) and
end < feature_start + (0.75 * feature_length)):
loc = "middle"
elif (start > feature_start + (0.5 * feature_length) and
end > feature_start + (0.75 * feature_length)):
loc = "3 prime"
elif (start < feature_start + (0.25 * feature_length) and
end > feature_start + (0.75 * feature_length) and
pp_length < 0.9 * feature_length):
loc = "whole"
elif (start < feature_start + (0.25 * feature_length) and
end > feature_start + (0.75 * feature_length) and
pp_length >= 0.9 * feature_length):
loc = "full"
else:
loc = "other"
return loc
| AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/processing_product.py | processing_product.py |
from agouti_pkg.miscallaneous import handle_negative_coordinates
from agouti_pkg.processing_product import ProcessingProduct
from agouti_pkg.eprint import eprint
import sys
import codecs
from agouti_pkg.argument_parser import parse_arguments
def read_custom_format(input_file, custom, sep, first_base_num,
num_of_bed_fields, header_line_num):
"""Reads and parse input file in CUSTOM format. Returns dictionary of\
id (key) and ProcessingProduct's (value)
Arguments:
input_file {str} -- input file name or path
custom {str} -- custom file format description - as in --custom arg
sep {str} -- column separator
first_base_num {int} -- 0 for 0-based coordinates or 1 for 1-based
num_of_bed_fields {int} -- number of columns in the input file
header_line_num {int} -- number of header lines from the top of the file
Returns:
tuple -- first element stands for dictionary based on input file.\
Key stands for product ID, value is an object of\
ProcessingProduct class. Second element is of int type and\
describes number of columns in the input file
"""
products = {}
custom_format = custom.split(',')
custom_format = list(filter(None, custom_format))
try:
custom_format = list(map(int, custom_format))
except ValueError:
eprint("ERROR: incorrect format provided with --custom flag. Invalid literal for int() with base 10")
sys.exit()
if (len(custom_format) < 4):
eprint("ERROR: incorrect format provided with --custom flag or wrong separator used")
sys.exit()
try:
with open(input_file) as f:
for index, line in enumerate(f): ###
if header_line_num - index > 0:
if header_line_num - index > 1:
print(line.strip())
continue
if not line.startswith("#"):
splitted_line = line.split(codecs.decode(
sep,
'unicode_escape'))
if (num_of_bed_fields == -1):
num_of_bed_fields = len(splitted_line)
line_updated = ""
if line.strip().endswith(sep):
line_updated = "{}.\n".format(line.strip())
else:
line_updated = line
start_coord, end_coord, coord_outside_transcript =\
handle_negative_coordinates(int(splitted_line[
custom_format[2] - 1]),
int(splitted_line[
custom_format[3] - 1]))
coordinates = (splitted_line[custom_format[1] - 1],
start_coord, end_coord)
processing_product = splitted_line[custom_format[0] - 1]
score = 0
if len(custom_format) == 4:
strand = "."
else:
strand = splitted_line[custom_format[4] - 1]
if coord_outside_transcript != "No":
product = ProcessingProduct(coordinates,
processing_product,
score, strand,
line_updated.strip().replace(sep,
"\t"),
first_base_num,
coord_outside_transcript)
else:
product = ProcessingProduct(coordinates,
processing_product,
score, strand,
line_updated.strip().replace(sep,
"\t"),
first_base_num)
products[processing_product] = product
f.close()
except IndexError:
eprint("ERROR: incorrect format provided with --custom flag or wrong separator used")
sys.exit()
return products, num_of_bed_fields
def read_BED_file(bed_file, num_of_bed_fields, first_base_num, header_line_num):
"""Reads and parse input file in BED format. Returns dictionary of\
id (key) and ProcessingProduct's (value)
Arguments:
bed_file {str} -- input file name or path
num_of_bed_fields {int} -- number of columns in the input file
first_base_num {int} -- 0 for 0-based coordinates or 1 for 1-based
header_line_num {int} -- number of header lines from the top of the file
Returns:
[tuple] -- list of ProcessingProducts objects, number of columns in\
the input file
"""
products = {}
counter = 0
with open(bed_file) as f:
for index, line in enumerate(f): ###
if header_line_num - index > 0:
continue
if not line.startswith("#"):
tab = line.strip().split()
num_of_bed_fields = len(list(filter(None, tab))) if (
num_of_bed_fields == -1) else num_of_bed_fields
start_coord, end_coord, coord_outside_transcript = (
handle_negative_coordinates(int(tab[1]), int(tab[2])))
coordinates = (tab[0], start_coord, end_coord)
while (len(tab) < 6):
# in case of less than 6 columns in BED,
# fill the rest (up to 6) with blank fields
tab.append(".")
processing_product, score, strand = tab[3], tab[4], tab[5]
if processing_product == ".":
processing_product = f"unnamed_agouti_feature_{counter}"
counter += 1
if coord_outside_transcript != "No":
product = ProcessingProduct(coordinates,
processing_product,
score, strand, line.strip(),
first_base_num,
coord_outside_transcript)
else:
product = ProcessingProduct(coordinates,
processing_product,
score, strand, line.strip(),
0)
products[processing_product] = product
f.close()
return products, num_of_bed_fields
def read_header_line(bed_file, custom, sep, header_line_num):
"""Read and parse header line from the input file
Arguments:
bed_file {str} -- input file name or path
custom {str} -- custom file format description - as in --custom arg
sep {str} -- separator
header_line_num {int} -- number of header lines from the top of the file
Returns:
[tuple] -- header line, list of header fields
"""
f = open(bed_file)
lines = f.readlines()
header = False
field_list = []
potential_header = ""
if header_line_num != 0:
potential_header = lines[header_line_num - 1]
header = True
f.close()
if custom != "BED" and header:
custom_format = list(
map(int, list(filter(None, custom.split(',')))))
sh = list(filter(None, potential_header.strip().split(codecs.decode(
sep,
'unicode_escape'))))
for i in range(0, len(sh)):
if (i + 1 not in custom_format):
field_list.append(sh[i])
elif custom == "BED":
sh = list(filter(None, potential_header.strip().split()))
if len(sh) > 6:
field_list = sh[6:]
return header, field_list
| AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/read_input.py | read_input.py |
from agouti_pkg.miscallaneous import *
def prepare_output(lengths_dict, header, args, attributes_and_features,
database, processing_product, overlapping_features,
region, cds_start=None):
"""Prepares a single line of the output file. Parse ProcessingProduct\
objects.
Arguments:
lengths_dict {dict} -- 5'UTR length, CDS length, 3'UTR length
header {str} -- header
args {argparse.Namespace} -- argparse command-line arguments
attributes_and_features {dict} -- dictionary of attributes and\
features that need to be annotated
database {Database} -- Database
processing_product {ProcessingProduct} -- object of ProcessingProduct
class
overlapping_features {list} -- list of overlapping features
region {tuple} -- coordinates
Keyword Arguments:
cds_start {int} -- start of cds coordinate (default: {None})
Returns:
str -- single output line
"""
out = ""
if len(overlapping_features): # if there are overlapping features
for feature in overlapping_features:
temp_out = "."
temp_result = ""
temp_featuretype="."
temp_seqid="."
temp_overlapping_feature_start="."
temp_overlapping_feature_end="."
if ((feature.strand != processing_product.strand) and
args.strand_specific and not args.transcriptomic) or (args.transcriptomic and processing_product.strand == "-"):
continue
if (feature.featuretype in attributes_and_features.keys()):
if not args.transcriptomic:
if ((args.completly_within and not completly_within(
region[1], region[2], feature)) or not
infer_feature_level(feature, database) ==
args.level):
continue
if args.level == 2:
gene_id = "{}\t".format((get_level1_parent(database,
feature,
id=True)))
out += "{bed}\t{featureid}\t{gene_id}\t{featuretype}\t\
{overlapping_feature_start}\t\
{overlapping_feature_end}\
".format(bed=processing_product.bed_line.strip(),
featureid=feature.id.strip(),
gene_id=gene_id.strip(),
featuretype=feature.featuretype.strip(),
overlapping_feature_start=feature.start,
overlapping_feature_end=feature.end)
else:
gene_id = ""
out += "{bed}\t{featureid}\t{featuretype}\t\
{overlapping_feature_start}\t\
{overlapping_feature_end}\
".format(bed=processing_product.bed_line.strip(),
featureid=feature.id.strip(),
featuretype=feature.featuretype.strip(),
overlapping_feature_start=feature.start,
overlapping_feature_end=feature.end)
out += "\t{}\n".format(
filter_overlapping_features(attributes_and_features,
args, database, feature,
overlapping_features,
header[1], header[2]))
else:
comp_pos = completly_within_positive_coordinates # abbr
temp = comp_pos(lengths_dict, feature, processing_product,
cds_start)
if (not infer_feature_level(feature, database) ==
args.level or (args.completly_within and
processing_product.coords_outside_transcript ==
"just_one") or processing_product.coords_outside_transcript ==
"both") or (args.completly_within and temp[0] ==
"just_one") or temp == "both":
if args.level == 2:
temp_out = get_level1_parent(database, feature, id=True)
temp_featuretype=feature.featuretype
temp_seqid=feature.seqid
temp_overlapping_feature_start=feature.start
temp_overlapping_feature_end=feature.end
analyzed_set = set(attributes_and_features[feature.featuretype]).intersection(feature.attributes)
d = {}
for a in header[2]:
for attr in analyzed_set:
if attr == a:
d[attr] = ("").join(feature[attr])
# concatenation
temp_result = ""
for h in (header[1] + header[2]):
try:
temp_result += "{}\t".format(d[h].strip())
except KeyError:
temp_result += "{}\t".format(".")
continue
if args.level == 2:
gene_id = get_level1_parent(database, feature, id=True)
else:
gene_id = ""
out += "{bed}\t{gene_id}\t{featuretype}\t{seqid}\t\
{overlapping_feature_start}\t\
{overlapping_feature_end}\
".format(bed=processing_product.bed_line.strip(),
featuretype=feature.featuretype.strip(),
gene_id=gene_id, seqid=feature.seqid.strip(),
overlapping_feature_start=feature.start,
overlapping_feature_end=feature.end)
filt_ov = filter_overlapping_features # abbr
out += "\t{}\n".format(filt_ov(attributes_and_features,
args, database, feature, [],
header[1], header[2],
processing_product, region))
insert = processing_product.check_overlapping_feature__position(lengths_dict,
feature,
args.transcriptomic,
args.offset)
if (args.annotate_relative_location):
out = out.rstrip()
out += "\t{}\n".format(insert)
if (out == ""):
additional_fields = ""
header_lengths = (len(header[1]) + len(header[2])) if not (
args.annotate_relative_location) else (len(header[1]) +
len(header[2]) + 1)
if (args.transcriptomic):
feature = database.database[processing_product.coordinates[0]]
out += "{bed}\t{gene_id}\t{featuretype}\t{seqid}\t\
{overlapping_feature_start}\t{overlapping_feature_end}\
".format(bed=processing_product.bed_line.strip(),
featuretype=temp_featuretype, gene_id=temp_out, seqid=temp_seqid,
overlapping_feature_start=temp_overlapping_feature_start,
overlapping_feature_end=temp_overlapping_feature_end)
if args.level != 2 or ((feature.strand != processing_product.strand) and
args.strand_specific and not args.transcriptomic) or (args.transcriptomic and processing_product.strand == "-"):
out += "\t{}".format(".\t.\t.")
for _ in range(0, len(header[2])):
out += "\t."
out += "\n"
else:
out += "\t{}\n".format(temp_result)
if (args.annotate_relative_location):
out = out.rstrip()
if not temp_featuretype == ".":
out += "\t{}\n".format(
processing_product.check_overlapping_feature__position(lengths_dict, feature,
args.transcriptomic, args.offset))
else:
for _ in range(1, header_lengths):
additional_fields += "\t."
closest_gene = find_closest_gene(database, processing_product)
out += "{bed}\t{featureid}\t{gene_id}\t{featuretype}\t\
{overlapping_feature_start}\t{overlapping_feature_end}\
{add}".format(bed=processing_product.bed_line.strip(),
featureid=closest_gene, featuretype=".",
gene_id="intergenic",
overlapping_feature_start=".",
overlapping_feature_end=".",
add=additional_fields)
return out.strip() | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/output_processing.py | output_processing.py |
import argparse
from agouti_pkg.sequence_ontology import *
import pickle
from agouti_pkg.eprint import eprint
import sys
def validate_arguments(args, parser):
""" Additional argument validation. Applicable for
mutually exclusive arguments, etc.
Arguments:
args {argparse.Namespace} -- argparse command-line arguments
parser {ArgumentParser} - ArgumentParser object
"""
if (args.transcriptomic and args.level == 1):
parser.error("--level 1 cannot be combined with --transcriptomic")
elif (not args.transcriptomic and args.annotate_relative_location):
parser.error("--annotate_relative_location should be used with --transcriptomic")
elif ((args.combine and (args.attributes or args.features))):
parser.error("Use --select_features, --select_attributes OR --combine.")
elif (args.offset != 0 and not args.transcriptomic):
parser.error("--offset should be used with --transcriptomic")
elif (args.sep != "\t" and args.custom == "BED"):
parser.error("--separator must be used with --custom option")
return
def parse_arguments(argv):
"""Parse command-line arguments for agouti
Arguments:
argv {list} -- list of command-line arguments (sys.argv)
Returns:
argparse.Namespace -- parsed arguments
"""
parser = argparse.ArgumentParser('agouti', description='AGouTI - software for annotation of genomic and transcriptomic intervals',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    subparsers = parser.add_subparsers(dest='command')
    create_db = subparsers.add_parser('create_db', help='create the database for agouti')
create_db.add_argument('-a', '--annotation', type=str, help='input file with the genomic annotation in either GTF or GFF3',
required=True, dest='annotation')
create_db.add_argument('-f', '--format', type=str, help='format of the input file with the genomic annotation',
required=True, choices=['GTF', 'GFF3'], dest='format')
create_db.add_argument('-d', '--db', type=str, help='name for the output database',
required=True, dest='database')
create_db.add_argument('-l', '--low-ram', help='enable low-memory mode of the database creation process (warning: slow!)',
action="store_true", dest='save_on_disk')
create_db.add_argument('-i', '--infer_genes', help='infer gene features. Use only with GTF files that do not have lines describing genes (warning: slow!)',
action="store_true", dest='infer_genes')
create_db.add_argument('-j', '--infer_transcripts', help='infer transcript features. Use only with GTF files that do not have lines describing transcripts (warning: slow!)',
action="store_true", dest='infer_transcripts')
    annotate = subparsers.add_parser('annotate', help='run annotation with agouti')
annotate.add_argument('-i', '--input', type=str,
help='input file in BED or another column-based format (see --custom).', required=True,
dest='bed')
annotate.add_argument('-d', '--database', type=str, help='database file created with the agouti create_db run mode',
required=True, dest='database')
annotate.add_argument('-m', '--custom', type=str, help='the input text file is in custom format, other than BED. It should contain columns with information about feature id (id), chromosome (chr), start (s) and end (e) coordinates, and optionally about strand. User should provide the proper column indexes (starting from 1) in order: "id,chr,s,e[,strand]". The index of the strand column is optional. Example use: --custom 1,2,4,5,6 or --custom 1,2,4,5. The field separator used in a text file can be specified using the --separator option',
required=False, default="BED", dest='custom')
annotate.add_argument('-p', '--separator', type=str, help='field separator to be used with the --custom option. Default is "\\t"', required=False, default="\t",
dest='sep')
annotate.add_argument('-b', '--coordinates', type=int, help='indicate the coordinate system used in the input file (BED/CUSTOM). Either 0 (0-based coordinates) or 1 (1-based coordinates). Default is 0',
choices=[0, 1], required=False, default=0,
dest='first_base_num')
annotate.add_argument('-n', '--header_lines', type=int, help='the number of header lines in the input file. Default is 0',
required=False, default=0, dest='header_lines')
annotate.add_argument('-t', '--transcriptomic', action="store_true", help='transcriptomic annotation mode. In this mode, transcript IDs from the GTF/GFF3 are expected to be placed in the first column of provided BED file instead of chromosome names. Coordinates in this mode are assumed to reflect positions within the transcript')
annotate.add_argument('-f', '--select_features', type=str, help='comma-separated list of feature names to be reported, e.g., "mRNA,CDS". Refer to [db_name].database.structure.txt file for a list of valid features. By default, all features are reported',
required=False, dest='features')
annotate.add_argument('-a', '--select_attributes', type=str, help='comma-separated list of attribute names to be reported, e.g., "ID,description". Refer to [db_name].database.structure.txt file for a list of valid attributes. By default, all attributes are reported',
required=False, dest='attributes')
    annotate.add_argument('-c', '--combine', type=str, help='list of specific feature-attribute combinations to be reported. The combinations should be specified in the format: feature1-attribute1:attribute2,feature2-attribute1, e.g. "mRNA-ID:description,CDS-ID"',
required=False, dest='combine')
annotate.add_argument('-s', '--strand_specific', action="store_true",
help='strand-specific search')
annotate.add_argument('-w', '--completly_within', action="store_true",
help='the annotated BED interval must be located entirely within the GTF/GFF3 feature. By default, any overlap is sufficient to trigger annotation')
    annotate.add_argument('-l', '--level', type=int, help='annotate results on a specific level (1 for gene level, 2 for mRNA, tRNA level, etc.). For available levels, refer to the tree-like representation of features in [db_name].database.structure.txt file. Please note that --level 1 cannot be combined with --transcriptomic mode. Default is 2',
choices=[1, 2], required=False, default=2,
dest='level')
annotate.add_argument('-o', '--offset', type=int, help=argparse.SUPPRESS,
required=False, default=0, dest='offset')
annotate.add_argument('-r', '--annotate_relative_location', action="store_true",
                          help='annotate the relative location of the interval within the feature. Designed to work with --transcriptomic mode')
annotate.add_argument('--statistics', action="store_true", help='calculate additional feature statistics. Those will be displayed on the stderr',
dest='statistics')
annotate.add_argument('--stats_only', action="store_true", help='calculate and display only feature statistics. No annotation will be performed.', dest='stats_only')
if len(sys.argv)==1:
parser.print_help(sys.stderr)
sys.exit(1)
args = parser.parse_args()
if args.command == "create_db":
if ((args.infer_genes or args.infer_transcripts) and args.format == "GFF3"):
parser.error("Use option --infer_genes and/or --infer_transcripts only with the GTF file format")
elif args.command == "annotate":
args = parser.parse_args(argv[1:])
validate_arguments(args, parser)
return args
def create_attributes_and_features_dict(database, args, featuretypes_from_db):
"""Creates dictionary of features (keys) and attributes (values) provided \
by the user via the command-line. These combinations of features and \
attributes will then be reported and annotated.
Arguments:
database {modules.database.Database} -- object of Database class
args {argparse.Namespace} -- command-line arguments
featuretypes_from_db {list} -- list of feature types from the database
Returns:
dict -- dictionary of attributes and features to be annotated
"""
attributes_and_features = {}
if args.attributes and args.features:
for feature in list(filter(None, args.features.strip().split(","))):
attributes_and_features[feature.lower()] = tuple(
map(lambda x: x.lower(), list(filter(None, args.attributes.strip().split(",")))))
elif args.combine:
combined_str = args.combine.strip().split(",")
for combination in combined_str:
c = combination.strip().split("-")
attributes_and_features[c[0].lower()] = tuple(
map(lambda x: x.lower(), list(filter(None, c[1].strip().split(":")))))
else:
try:
with open('{}.attributes_and_features.pickle'.format(args.database), 'rb') as handle:
temp_attributes_and_features = pickle.load(handle)
except FileNotFoundError:
eprint("Cannot locate file 'attributes_and_features.pickle', which should be created during database creation")
sys.exit()
if args.features:
features_args_list = list(filter(None, args.features.strip().split(",")))
for feature, attributes in temp_attributes_and_features.items():
if feature in features_args_list:
attributes_and_features[feature.lower()] = attributes
elif args.attributes:
attributes_args_list = list(filter(None, args.attributes.strip().split(",")))
for feature, attributes in temp_attributes_and_features.items():
attr_list = []
for attr in attributes:
if attr in attributes_args_list:
attr_list.append(attr)
attr_list = list(filter(None, attr_list))
if len(attr_list):
attributes_and_features[feature.lower()] = attr_list
else:
attributes_and_features = temp_attributes_and_features
to_remove = []
if (args.transcriptomic):
for f in attributes_and_features.keys():
if f in database.features_at_3_level and not (
f in cds_synonyms or f in UTR_synonyms
or f in three_prime_UTR_synonyms or f
in five_prime_UTR_synonyms):
if (args.combine or args.features):
eprint("In your case (--transcriptomic), features at the 3rd level must be of CDS or UTR type")
sys.exit()
else:
to_remove.append(f)
for f in set(to_remove):
del attributes_and_features[f]
# whether attributes_and_features.keys() are at annotated level
validity = False
for a in attributes_and_features.keys():
if args.level == 1:
if a in database.features_at_1_level:
validity = True
elif args.level == 2:
if a in database.features_at_2_level:
validity = True
if not validity:
        eprint("You need to specify at least one feature at the selected annotation level (--level)")
sys.exit()
return attributes_and_features | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/argument_parser.py | argument_parser.py |
from itertools import tee
import sys
from agouti_pkg.eprint import eprint
class Database(object):
def __init__(self, database, db_name):
self.database = database
self.features_at_1_level = []
self.features_at_2_level = []
self.features_at_3_level = []
self.name = db_name
self.parent_child_relation = set()
self.create_parent_child_relation()
self.find_featuretypes_at_given_level(3)
self.find_featuretypes_at_given_level(2)
self.find_featuretypes_at_given_level(1)
    def create_parent_child_relation(self):
        """Creates parent-child relations between feature types
"""
parent_child = set()
try:
file = open("{}.relations".format(self.name))
lines = file.readlines()
for line in lines:
tab = line.strip().split("\t")
parent_child.add((tab[0], tab[1]))
file.close()
except (IOError, IndexError):
eprint("ERROR: Couldn't find the file {}.relations -> create the database again!".format(self.name))
sys.exit()
self.parent_child_relation = parent_child
    def find_featuretypes_at_given_level(self, level):
        """Finds feature types at the specified level and stores them on the object
Arguments:
level {int} -- level of the feature (1, 2 or 3)
"""
pc = self.parent_child_relation
unzipped_pc = list(zip(*pc))
featuretypes = set()
l1, l2, l3 = set(), set(), set()
for p in unzipped_pc[0]: # finding features at level 1 -> no parents
if (p not in unzipped_pc[1]):
l1.add(p)
for p, c in pc:
if p in l1:
l2.add(c)
for p, c in pc:
if p in l2:
l3.add(c)
if (level == 1):
self.features_at_1_level = l1
elif (level == 2):
self.features_at_2_level = l2
else:
self.features_at_3_level = l3
def get_children(self, parent, ids=False, featuretypes=(), level=None):
"""Get children or children ids of the given 'parent'
Arguments:
parent {Feature} -- parent Feature
Keyword Arguments:
ids {bool} -- True if ids instead of Features are to be returned\
(default: {False})
featuretypes {tuple} -- children of only these featuretypes\
will be returned (default: {()})
level {int} -- return children at this level (default: {None})
Returns:
            [list] -- list of children Features or ids
"""
children = []
if len(featuretypes):
child_iter = self.database.children(parent.id,
featuretype=featuretypes,
level=level)
else:
child_iter = self.database.children(parent.id, level=level)
for child in child_iter:
if(child.featuretype != parent.featuretype):
if (not ids):
children.append(child)
else:
children.append(child.id)
return children | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/database.py | database.py |
from agouti_pkg.eprint import eprint
import sys
import os
import pickle
import sqlite3
import agouti_pkg.gffutils
import agouti_pkg.gffutils.inspect as inspect
import argparse
from agouti_pkg.anytree import Node, RenderTree, importer
from agouti_pkg.anytree.exporter import DotExporter
from agouti_pkg.argument_parser import parse_arguments
import gzip
output_lines = []
def transform_func(x):
x.featuretype = x.featuretype.lower()
return x
def inspect_db(args, db):
try:
features = {}
if args.annotation.endswith(".gz"):
f = gzip.open(args.annotation, "rb")
else:
f = open(args.annotation)
lines = f.readlines()
for line in lines:
if isinstance(line, bytes):
line = line.decode()
splitted = line.strip().split("\t")
if (not line.startswith("#")):
feature = splitted[2].lower()
attributes = splitted[8].strip().replace('"', "").replace("'", "")
if feature not in features.keys():
features[feature] = set()
tab = attributes.split(db.dialect["field separator"])
tab = list(filter(None, tab))
for attribute in tab:
a = attribute.strip().split(db.dialect["keyval separator"])
a = list(filter(None, a))
features[feature].add(a[0]) # was a[0].lower()
f.close()
except IndexError:
eprint("{}ERROR: The file {} has the wrong format".format(os.linesep,
args.annotation))
os.system("rm -f {}".format(args.database))
sys.exit()
return features
def find_children(_parent, parent_child, prefix=""):
pre = prefix+" "
output_lines.append("{}{}".format(prefix, _parent))
for (parent, child) in parent_child:
if _parent == parent:
find_children(child, parent_child, pre)
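# Minimal usage sketch for find_children(): the (parent, child) relation list and the
# feature names below are made up for illustration. The function appends one indented
# line per feature type to the module-level output_lines list.
def _demo_find_children():
    global output_lines
    output_lines = []
    find_children("gene", [("gene", "mrna"), ("mrna", "exon")])
    # each nesting level adds one leading space
    assert output_lines == ["gene", " mrna", "  exon"]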
def find_roots(x):
"""takes parent_child list of tuples as an argument. The list should be in\
the format [(parent, child), (parent, child)], etc."""
roots = set()
parent_child_relation = list(zip(*x))
for parent in parent_child_relation[0]:
if (parent not in parent_child_relation[1]):
roots.add(parent)
return roots
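# Minimal usage sketch for find_roots(), using a made-up relation list: a "root" is any
# parent feature type that never appears as a child.
def _demo_find_roots():
    rel = [("gene", "mrna"), ("mrna", "exon"), ("mrna", "cds")]
    assert find_roots(rel) == {"gene"}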
def add_super_feature(parent_child):
    """Adds a synthetic super-feature (".") when there are multiple first-level features"""
unzipped = list(zip(*parent_child))
modified_parent_child = set()
first_level_features = []
for parent in unzipped[0]:
if parent not in unzipped[1]:
first_level_features.append(parent)
if len(first_level_features) > 1: # if more than one root
for i in first_level_features:
modified_parent_child.add((".", i))
return parent_child.union(modified_parent_child)
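# Minimal usage sketch for add_super_feature(): with two independent roots ("gene" and
# "region" are made-up names here), a synthetic "." parent is added so the relation
# graph has a single root again.
def _demo_add_super_feature():
    rel = {("gene", "mrna"), ("region", "repeat")}
    extended = add_super_feature(rel)
    assert (".", "gene") in extended and (".", "region") in extended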
def main(args):
"""Main function
Arguments:
argv {argparse.Namespace} -- command-line arguments parsed with\
argparse
"""
global output_lines
try:
if args.format == 'GFF3':
if (not args.save_on_disk):
db = agouti_pkg.gffutils.create_db(args.annotation, force=True,
keep_order=False,
sort_attribute_values=False,
merge_strategy="create_unique",
transform=transform_func,
dbfn=":memory:",
checklines=50)
else:
db = agouti_pkg.gffutils.create_db(args.annotation, force=True,
keep_order=False,
sort_attribute_values=False,
merge_strategy="create_unique",
transform=transform_func,
dbfn=args.database,
checklines=50)
else:
if (not args.save_on_disk):
db = agouti_pkg.gffutils.create_db(args.annotation, force=True,
keep_order=False,
sort_attribute_values=False,
merge_strategy="create_unique",
disable_infer_genes= not args.infer_genes,
disable_infer_transcripts=not args.infer_transcripts,
transform=transform_func,
dbfn=":memory:", checklines=50)
else:
db = agouti_pkg.gffutils.create_db(args.annotation, force=True,
keep_order=False,
sort_attribute_values=False,
merge_strategy="create_unique",
disable_infer_genes= not args.infer_genes,
disable_infer_transcripts=not args.infer_transcripts,
transform=transform_func,
dbfn=args.database, checklines=50)
except ValueError:
eprint("{}ERROR: The file {} has the wrong format or does not exist".format(os.linesep, args.annotation))
sys.exit()
database_struct = open("{}.database.structure.txt".format(args.database), "w")
database_struct.write("{}Attributes available for each feature type:{}\n".format(os.linesep, os.linesep))
attributes_and_features = inspect_db(args, db)
with open('{}.attributes_and_features.pickle'.format(args.database), 'wb') as handle:
pickle.dump(attributes_and_features, handle,
protocol=pickle.HIGHEST_PROTOCOL) # saving the dictionary
for feature, attributes in attributes_and_features.items():
database_struct.write("{}: {}\n".format(feature, ", ".join(set(attributes))))
if args.format == "GTF":
if (("gene" not in attributes_and_features.keys() and not args.infer_genes) or ("transcript" not in attributes_and_features.keys() and not args.infer_transcripts)):
eprint("{}ERROR: The file {} seems to be missing gene or transcript features. Please use --infer_genes and/or --infer_transcripts".format(os.linesep, args.annotation))
sys.exit()
parent_child = set()
    # build parent-child relations between feature types
if args.format == "GFF3":
for featuretype in db.featuretypes():
children = set()
for f in db.iter_by_parent_childs(featuretype, level=1):
children.update(set([x.featuretype for x in f if x.featuretype != featuretype]))
if len(children):
for c in children:
parent_child.add((featuretype,c))
if args.format == "GTF":
parent_child.add(("gene", "transcript"))
for i in db.featuretypes():
if i not in ["gene", "transcript"]:
parent_child.add(("transcript", i))
file = open("{}.relations".format(args.database), "w")
for relation in parent_child:
file.write("{}\t{}\n".format(relation[0], relation[1]))
file.close()
output_lines = []
database_struct.write(os.linesep)
database_struct.write(os.linesep)
parent_child = add_super_feature(parent_child)
imp = importer.IndentedStringImporter()
try:
for r in find_roots(parent_child):
output_lines = []
find_children(r, parent_child)
root = imp.import_(output_lines)
tree = RenderTree(root.children[0])
for pre, fill, node in tree:
database_struct.write("%s%s\n" % (pre, node.name))
except IndexError:
eprint("{}ERROR: File {} has the wrong format. Couldn't create parent-child relations. If you are using GTF file format, please check if gene and transcript feature lines are present - if not, use --infer_genes option or consider using another annotation file. Check if transcript and gene names differ and are correctly formatted.".format(os.linesep, args.annotation))
os.system("rm -f {}".format(args.database))
sys.exit()
if (not args.save_on_disk):
bck = sqlite3.connect(args.database)
with bck:
db.conn.backup(bck)
bck.close()
print("-"*10)
print("The pipeline finished successfully!")
print("Available attributes and relations are available in the file {}.database.structure.txt".format(args.database))
database_struct.close()
if __name__ == "__main__":
    # main() expects parsed arguments (argparse.Namespace), not raw sys.argv
    sys.exit(main(parse_arguments(sys.argv)))
import os
import sys
import codecs
from agouti_pkg.eprint import eprint
import agouti_pkg.gffutils
import agouti_pkg.gffutils.inspect as inspect
from operator import attrgetter
from agouti_pkg.database import Database
from agouti_pkg.argument_parser import (parse_arguments,
create_attributes_and_features_dict)
from agouti_pkg.sequence_ontology import *
from agouti_pkg.processing_product import ProcessingProduct
from agouti_pkg.miscallaneous import *
from agouti_pkg.read_input import *
from agouti_pkg.header import *
from agouti_pkg.output_processing import prepare_output
def main(args):
"""Main function
Arguments:
args {argparse.Namespace} -- command line arguments parsed with\
argparse
"""
chromosomes_not_found = set() ## chromosomes in the input file but not found in the reference annotations
chromosomes_found = set() ## chromosomes in the input file and found in the reference annotations
num_of_bed_fields = -1
lengths_dict = {} # stores length of UTRs and CDS of a given transcript
try:
db = Database(agouti_pkg.gffutils.FeatureDB(args.database, keep_order=False),
args.database) # reading the database from file
except ValueError:
        eprint("ERROR: the database provided does not exist")
sys.exit()
# list all featuretypes existing in db
featuretypes_from_db = list(db.database.featuretypes())
try:
attr_and_feat = create_attributes_and_features_dict # abbr
attributes_and_features = attr_and_feat(db, args, featuretypes_from_db)
except IndexError:
eprint("ERROR: arguments provided with --combine, --select_attributes or --select_features are incorrect")
sys.exit()
try:
if (args.custom == "BED"):
try:
products, num_of_bed_fields = read_BED_file(args.bed,
num_of_bed_fields,
args.first_base_num, args.header_lines)
except FileNotFoundError:
                eprint("ERROR: the input file does not exist")
sys.exit()
else:
try:
products, num_of_bed_fields = read_custom_format(args.bed, args.custom,
args.sep,
args.first_base_num,
num_of_bed_fields, args.header_lines)
except FileNotFoundError:
                eprint("ERROR: the input file does not exist")
sys.exit()
except IndexError:
eprint("ERROR: the input file has the wrong format")
sys.exit()
header = prepare_header(db, attributes_and_features, args,
num_of_bed_fields, args.header_lines)
whole_output = f"{header[0].strip()}"
for key, value in products.items():
if (args.transcriptomic):
try:
id = value.coordinates[0]
overlapping_features = [db.database[id]]
g2t, cds_start = value.genomic_to_transcriptomic(
db, featuretypes_from_db)
lengths_dict[value.coordinates[0]] = g2t
out = prepare_output(lengths_dict, header, args,
attributes_and_features, db, value,
overlapping_features, g2t, cds_start)
whole_output = f"{whole_output}\n{out}"
except (agouti_pkg.gffutils.exceptions.FeatureNotFoundError):
eprint("WARNING: Couldn't find transcript {}. Please make sure that it exists in your GTF/GFF3 file.".format(id))
else:
region = value.coordinates
## test whether chromosome is present in annotations
if value.coordinates[0] not in chromosomes_found:
if len(list(db.database.region(seqid=value.coordinates[0]))) == 0:
chromosomes_not_found.add(value.coordinates[0])
else:
chromosomes_found.add(value.coordinates[0])
##
if (not args.strand_specific):
overlapping_features = list(
db.database.region(region=region,
featuretype=attributes_and_features.keys(),
completely_within=False))
elif(args.strand_specific):
overlapping_features = list(db.database.region(region=region,
strand=value.strand,
featuretype=attributes_and_features.keys(),
completely_within=False))
out = prepare_output(lengths_dict, header, args,
attributes_and_features, db, value,
overlapping_features, region)
whole_output = f"{whole_output}\n{out}"
if args.statistics or args.stats_only:
statistics(whole_output)
if not args.stats_only:
print(whole_output)
if len(chromosomes_not_found):
eprint(f"WARNING: The following chromosomes were not found in the annotations: {', '.join(list(chromosomes_not_found))}")
return
if __name__ == "__main__":
    # main() expects parsed arguments (argparse.Namespace), not raw sys.argv
    sys.exit(main(parse_arguments(sys.argv)))
from operator import attrgetter
from agouti_pkg.sequence_ontology import *
from io import StringIO
import pandas as pd
from agouti_pkg.eprint import eprint
def statistics(agouti_output : str):
"""Create statistics based on the agouti output
Args:
agouti_output (str): output of the agouti software
"""
output = StringIO(agouti_output)
df = pd.read_csv(output, sep="\t", index_col=False)
columns = df.columns
#check the index of the first column for which stats will be calculated
for i in range(0, len(columns)):
if columns[i].startswith("annotated_") and not columns[i+1].startswith("annotated_"):
n = i+1
break
cols = list(columns[n:])
if "annotated_featuretype" in columns:
cols.append("annotated_featuretype")
eprint("##"*10)
eprint("##STATISTICS")
eprint("##"*10)
for col in cols:
try:
df[col] = df[col].str.strip()
except AttributeError:
pass
output = []
col_stats = df[col].value_counts(dropna=False).to_dict()
for key, value in col_stats.items():
output.append(f"'{key}' - {value}")
eprint(f"# statistics for the '{col}' column: {'; '.join(output)}")
def sum_features_length(features):
    """Returns the sum of the feature lengths. Features should be of the Feature\
type
Arguments:
features {list} -- list or other iterable object of Features
Returns:
int -- sum of features lengths
"""
length = 0
for f in features:
length += (f.end - f.start)
return length
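# Minimal usage sketch for sum_features_length(): the function only reads the .start and
# .end attributes, so a namedtuple stand-in for a gffutils Feature is enough here.
def _demo_sum_features_length():
    from collections import namedtuple
    _F = namedtuple("_F", "start end")  # illustrative stand-in, not a real Feature
    assert sum_features_length([_F(10, 20), _F(30, 35)]) == 15  # (20-10) + (35-30)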
def get_level1_parent(db, child, id=False):
    """Returns the 1st level parent or parent id of a given Feature.
    If the child has no parent, the child itself is returned
Arguments:
db {modules.database.Database} -- Database object
child {Feature} -- Feature object
Keyword Arguments:
id {bool} -- if True returns id of the object instead of Feature\
object (default: {False})
Returns:
Feature -- child Feature or id if id parameter is set to True
"""
for p in db.database.parents(child):
if sum(1 for a in db.database.parents(p)) == 0:
return p.id if id else p
elif sum(1 for c in db.database.parents(p)) == 1:
for a in db.database.parents(p):
if p.featuretype == a.featuretype:
return p.id if id else p
return child.id if id else child
def completly_within(start, end, feature):
    """Checks if product described by start and end coordinates is completely\
within feature.
Arguments:
start {int} -- start coordinate of annotated product
end {int} -- end coordinate of annotated product
feature {Feature} -- Feature object
Returns:
        bool -- True if product is completely within Feature object,\
False otherwise
"""
return True if start >= feature.start and end <= feature.end else False
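# Minimal usage sketch for completly_within(), again with a namedtuple stand-in for a
# gffutils Feature: the interval must start and end inside the feature.
def _demo_completly_within():
    from collections import namedtuple
    _F = namedtuple("_F", "start end")  # illustrative stand-in, not a real Feature
    assert completly_within(120, 180, _F(100, 200))      # fully inside
    assert not completly_within(90, 150, _F(100, 200))   # starts before the feature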
def get_min_start(featurelist):
"""Returns the smallest feature start coordinate from the list of Feature\
objects
Arguments:
featurelist {list} -- list of Feature objects
Returns:
int -- min start coordinate from a given set
"""
starts = []
for feature in featurelist:
starts.append(feature.start)
return min(starts)
def get_max_end(featurelist):
"""Returns the biggest feature end coordinate from the list of Feature\
objects
Arguments:
featurelist {list} -- list of Feature objects
Returns:
int -- max end coordinate from a given set
"""
ends = []
for feature in featurelist:
        ends.append(feature.end)
return max(ends)
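# Minimal usage sketch for the two helpers above, with namedtuple stand-ins for
# gffutils Features; it assumes get_max_end collects .end coordinates, per its docstring.
def _demo_min_start_max_end():
    from collections import namedtuple
    _F = namedtuple("_F", "start end")  # illustrative stand-in, not a real Feature
    feats = [_F(100, 180), _F(150, 250)]
    assert get_min_start(feats) == 100
    assert get_max_end(feats) == 250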
def distinguish_UTRs(utr, cds):
"""Distinguish 3' from 5' UTRs if not annotated
Arguments:
utr {list} -- list of UTRs
cds {list} -- list of CDS
Returns:
tuple -- tuple of sets containing 3' and 5' UTRs
"""
cds_start, cds_end = get_min_start(cds), get_max_end(cds)
three_primes, five_primes = set(), set()
for u in utr:
if (((u.end <= cds_start) and u.strand == "+")
or ((u.start >= cds_end) and u.strand == "-")):
five_primes.add(u)
elif (((u.start >= cds_end) and u.strand == "+")
or ((u.end <= cds_start) and u.strand == "-")):
three_primes.add(u)
return (five_primes, three_primes)
def get_exons(feature, db):
"""Returns exons of a given transcript.
Arguments:
feature {Feature} -- transcript
db {modules.database.Database} -- Database object
Returns:
list -- list of exons
"""
exons_children = db.get_children(feature, featuretypes=["exon"], level=1)
return list(exons_children)
def get_cds(feature, featuretypes_from_db, db):
"""Returns CDS of a given transcript
Arguments:
feature {Feature} -- transcript
featuretypes_from_db {list} -- list of feature types from the database
db {modules.database.Database} -- Database object
Returns:
tuple -- tuple of CDS
"""
synonyms = cds_synonyms
intersection = set(synonyms) & set(featuretypes_from_db)
children = db.get_children(feature, featuretypes=intersection, level=1)
return children
def get_three_prime_UTRs(feature, featuretypes_from_db, db):
"""Returns 3' UTR regions of a given transcript
Arguments:
feature {Feature} -- transcript
featuretypes_from_db {list} -- list of feature types from the database
db {modules.database.Database} -- Database object
Returns:
list -- list of 3 prime UTRs. Usually of length equal to 1
"""
synonyms = [three_prime_UTR_synonyms, UTR_synonyms]
intersection = set(synonyms[0]) & set(featuretypes_from_db)
utr3_children = db.get_children(feature, featuretypes=intersection,
level=1)
intersection = set(synonyms[1]) & set(featuretypes_from_db)
utr_children = db.get_children(feature, featuretypes=intersection, level=1)
cds = get_cds(feature, featuretypes_from_db, db)
five_primes, three_primes = [], []
if len(utr_children) and len(cds) and len(intersection):
five_primes, three_primes = distinguish_UTRs(utr_children, cds)
result_list = []
try:
for a in (list(utr3_children) + list(three_primes)):
if "+" in feature.strand:
if not a.start < get_max_end(cds):
result_list.append(a)
elif "-" in feature.strand:
if not a.start > get_min_start(cds):
result_list.append(a)
except ValueError:
return []
return result_list
def get_five_prime_UTRs(feature, featuretypes_from_db, db):
"""Returns 5' UTR regions of a given transcript
Arguments:
feature {Feature} -- transcript
featuretypes_from_db {list} -- list of feature types from the database
db {modules.database.Database} -- Database object
Returns:
        list -- list of 5 prime UTRs. Usually of length equal to 1
"""
synonyms = [five_prime_UTR_synonyms, UTR_synonyms]
intersection = set(synonyms[0]) & set(featuretypes_from_db)
utr5_children = db.get_children(
feature, featuretypes=intersection, level=1)
intersection = set(synonyms[1]) & set(featuretypes_from_db)
utr_children = db.get_children(feature, featuretypes=intersection, level=1)
cds = get_cds(feature, featuretypes_from_db, db)
five_primes, three_primes = [], []
if len(utr_children) and len(cds) and len(intersection):
five_primes, three_primes = distinguish_UTRs(utr_children, cds)
result_list = []
try:
for a in (list(utr5_children) + list(five_primes)):
if "+" in feature.strand:
if not a.start > get_min_start(cds):
result_list.append(a)
elif "-" in feature.strand:
if not a.start < get_max_end(cds):
result_list.append(a)
except ValueError:
return []
return result_list
def handle_negative_coordinates(start, end):
"""Checks if coordinates are outside transcript boundaries
Arguments:
start {int} -- start coordinate
end {int} -- end coordinate
Returns:
tuple -- updated start coordinate, updated end coordinate,
string describing which coordinates are outside
transcript boundaries
"""
coord_outside_transcript = "No"
if (start < 0 and end > 0):
start_coord, end_coord = 0, end
coord_outside_transcript = "just_one"
elif (start < 0 and end < 0):
start_coord, end_coord = 0, 0
coord_outside_transcript = "both"
else:
start_coord, end_coord = start, end
return start_coord, end_coord, coord_outside_transcript
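# Minimal usage sketch for handle_negative_coordinates(), based only on the code above:
# negative coordinates are clipped to 0 and flagged.
def _demo_handle_negative_coordinates():
    assert handle_negative_coordinates(-5, 10) == (0, 10, "just_one")  # start clipped
    assert handle_negative_coordinates(-5, -2) == (0, 0, "both")       # both clipped
    assert handle_negative_coordinates(3, 10) == (3, 10, "No")         # unchanged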
def infer_feature_level(feature, db):
"""Infers feature level
Arguments:
feature {Feature} -- feature which level is to be inferred
db {Database} -- object of the Database class
Returns:
int -- level
"""
level = 1
for parent in db.database.parents(feature):
if (parent.featuretype != feature.featuretype):
level = level + 1
return level
def completly_within_positive_coordinates(lengths_dict, feature,
processing_product, cds_start):
"""Checks if coordinates of ProcessingProduct object don't exceed\
transcript length.
Arguments:
lengths_dict {dict} -- dictionary of utr and cds lengths
feature {Feature} -- feature, transcript
processing_product {ProcessingProduct} -- object of the\
ProcessingProduct class
cds_start {int} -- start of the CDS
Returns:
        tuple|str -- "both" if both coordinates exceed the transcript length,
        otherwise a tuple of (status, transcript length, 5'UTR length),
        where status is "just_one" or "No"
"""
length = sum(lengths_dict[feature.id])
five_prime_UTR_len = lengths_dict[feature.id][0]
if (processing_product.coordinates[2] >
length and processing_product.coordinates[1] > length):
return ("both")
elif (processing_product.coordinates[2] > length):
return ("just_one", length, five_prime_UTR_len)
return ("No", length, five_prime_UTR_len)
def add_quotation_marks(l):
"""Add quotation marks for each string in list
Arguments:
l {list} -- list of strings
Returns:
[list] -- returns list of strings with added quotation marks
"""
return ["'" + x + "'" for x in l]
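# Minimal usage sketch for add_quotation_marks(): the quoted names are used verbatim in
# the SQL IN (...) clause built by find_closest_gene(); the feature names are made up.
def _demo_add_quotation_marks():
    assert add_quotation_marks(["gene", "pseudogene"]) == ["'gene'", "'pseudogene'"]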
def check_position_within_transcript(args, lengths, num, processing_product):
"""Checks whether processing_product lies within 3' UTR, CDS or 5' UTR
Arguments:
args {argparse.Namespace} -- argparse command-line arguments
lengths {tuple} -- 5'UTR length, CDS length, 3'UTR length
        num {int} -- index of the region: 0 (5'UTR), 1 (CDS) or 2 (3'UTR)
processing_product {ProcessingProduct} -- object of ProcessingProduct\
class
Returns:
        bool -- True if the ProcessingProduct lies within the region selected by num
"""
if (args.completly_within):
if (num == 0 and processing_product.coordinates[1] >= args.offset and
processing_product.coordinates[2] < args.offset + lengths[0]):
# if inside 5'UTR
return True
# if inside CDS
elif (num == 1 and processing_product.coordinates[1] >= args.offset +
lengths[0] and processing_product.coordinates[2] <=
(args.offset + lengths[0] + lengths[1])):
return True
# if inside 3'UTR
elif (num == 2 and processing_product.coordinates[1] > (args.offset +
lengths[0] + lengths[1]) and processing_product.coordinates[2] <
args.offset + lengths[0] + lengths[1] + lengths[2]):
return True
else:
return False
else:
if (num == 0 and processing_product.coordinates[2] > args.offset
and processing_product.coordinates[1] < args.offset + lengths[0]):
return True
elif (num == 1 and (processing_product.coordinates[1] in
range(args.offset + lengths[0] + 1, args.offset + lengths[0] +
lengths[1])
or processing_product.coordinates[2] in
range(args.offset + lengths[0] + 1, args.offset +
lengths[0] + lengths[1])
or processing_product.coordinates[1] in
range(args.offset + lengths[0] + 1, args.offset +
lengths[0] + lengths[1])
or (processing_product.coordinates[1] <=
args.offset + lengths[0] and processing_product.coordinates[2] >=
args.offset + lengths[0] + lengths[1]))):
return True
elif (num == 2 and processing_product.coordinates[2] > args.offset +
lengths[0] + lengths[1] and processing_product.coordinates[1] <
args.offset + lengths[0] + lengths[1] + lengths[2]):
return True
else:
return False
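# Minimal usage sketch for check_position_within_transcript(): the args and product
# objects are SimpleNamespace stand-ins, and the lengths tuple describes a made-up
# transcript with a 100 nt 5'UTR, 300 nt CDS and 50 nt 3'UTR.
def _demo_check_position_within_transcript():
    from types import SimpleNamespace
    args = SimpleNamespace(completly_within=False, offset=0)
    lengths = (100, 300, 50)
    prod = SimpleNamespace(coordinates=("tx1", 150, 200))
    assert check_position_within_transcript(args, lengths, 1, prod)      # overlaps the CDS
    assert not check_position_within_transcript(args, lengths, 2, prod)  # not in the 3'UTR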
def find_closest_gene(database, processing_product):
"""Find the closest gene to the processing_product
Arguments:
database {Database} -- Database object
processing_product {ProcessingProduct} -- ProcessingProduct object
Returns:
[str] -- string to display in the output file
"""
f1 = ", ".join(add_quotation_marks(list(database.features_at_1_level)))
if "-" not in processing_product.strand:
upstream = database.database.execute(
'SELECT max(end), id from features where seqid == "{}" AND \
featuretype IN ({}) AND end <= {}'.format(
processing_product.coordinates[0], f1,
processing_product.coordinates[1]))
downstream = database.database.execute(
'SELECT min(start), id from features where seqid == "{}" AND \
featuretype IN ({}) AND start >= {}'.format(
processing_product.coordinates[0], f1,
processing_product.coordinates[2]))
else:
downstream = database.database.execute(
'SELECT max(end), id from features where seqid == "{}" AND \
featuretype IN ({}) AND end <= {}'.format(
processing_product.coordinates[0], f1,
processing_product.coordinates[1]))
upstream = database.database.execute(
'SELECT min(start), id from features where seqid == "{}" AND \
featuretype IN ({}) AND start >= {}'.format(
processing_product.coordinates[0], f1,
processing_product.coordinates[2]))
values = []
for i in upstream.fetchone():
values.append(i)
for i in downstream.fetchone():
values.append(i)
if "-" not in processing_product.strand:
return "closest gene upstream: {} - distance {} bp; closest gene downstream: {} - distance {} bp".format(values[1], processing_product.coordinates[1] -
values[0] if values[0] is not None else None,
values[3], values[2] -
processing_product.coordinates[2]
if values[2] is not None else None)
else:
return "closest gene upstream: {} - distance {} bp; closest gene downstream: {} - distance {} bp".format(values[1], values[0]-processing_product.coordinates[2]
if values[0] is not None else None, values[3],
processing_product.coordinates[1] - values[2]
if values[2] is not None else None)
def filter_overlapping_features(attributes_and_features, args, db, feature,
overlapping_features, header_features,
header_attr, processing_product=None,
lengths=None):
"""Filter overlapping features so that they meet criteria specified by\
user.
Parse the results to be displayed in the output file.
Arguments:
attributes_and_features {dict} -- dictionary of attributes and\
features that need to be annotated
args {argparse.Namespace} -- argparse command-line arguments
db {Database} -- Database object
feature {Feature} -- single overlapping Feature object
overlapping_features {list} -- list of overlapping features
header_features {list} -- features present in the header
header_attr {list} -- attributes present in the header
Keyword Arguments:
processing_product {ProcessingProduct} -- object of the
ProcessingProduct class (default: {None})
lengths {dict} -- dictionary of utr and cds lengths (default: {None})
Returns:
str -- results to be displayed
"""
children_ids = db.get_children(feature, ids=True)
d = {}
position = check_position_within_transcript # abbreviation
for h in (header_features + header_attr):
d[h] = "."
if (lengths and feature.featuretype in mRNA_synonyms): # if transcriptomic
if lengths[1] != 0:
for f in d.keys():
if f in three_prime_UTR_synonyms and lengths[2] != 0:
d[f] = "y" if position(args, lengths, 2,
processing_product) else "."
                elif f in five_prime_UTR_synonyms and lengths[0] != 0:
d[f] = "y" if position(args, lengths, 0,
processing_product) else "."
elif f in UTR_synonyms:
d[f] = "y" if (
position(args, lengths, 2, processing_product) and
lengths[2] != 0) or (position(args, lengths, 0,
processing_product) and
                                   lengths[0] != 0) else "."
elif f in cds_synonyms:
d[f] = "y" if position(args, lengths, 1,
processing_product) else "."
else:
for f in d.keys():
if (f in three_prime_UTR_synonyms or
f in five_prime_UTR_synonyms or f in UTR_synonyms or
f in cds_synonyms):
d[f] = "NA"
else:
for o in overlapping_features:
if (o.id in children_ids and
o.featuretype in attributes_and_features.keys()):
d[o.featuretype] = "y"
# attributes
analyzed_set = set(attributes_and_features[feature.featuretype]
).intersection(list(feature.attributes))
for a in header_attr:
for attr in analyzed_set:
if attr == a:
d[attr] = ("").join(feature[attr])
# concatenation
result = ""
for h in (header_features + header_attr):
result += "{}\t".format(d[h])
return result.strip() | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/miscallaneous.py | miscallaneous.py |
from __future__ import absolute_import
import re
from operator import itemgetter
# Do not import Decimal directly to avoid reload issues
import decimal
from .compat import unichr, binary_type, text_type, string_types, integer_types, PY3
def _import_speedups():
try:
from . import _speedups
return _speedups.encode_basestring_ascii, _speedups.make_encoder
except ImportError:
return None, None
c_encode_basestring_ascii, c_make_encoder = _import_speedups()
from .decoder import PosInf
from .raw_json import RawJSON
ESCAPE = re.compile(r'[\x00-\x1f\\"]')
ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
HAS_UTF8 = re.compile(r'[\x80-\xff]')
ESCAPE_DCT = {
'\\': '\\\\',
'"': '\\"',
'\b': '\\b',
'\f': '\\f',
'\n': '\\n',
'\r': '\\r',
'\t': '\\t',
}
for i in range(0x20):
#ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i))
ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))
FLOAT_REPR = repr
def encode_basestring(s, _PY3=PY3, _q=u'"'):
"""Return a JSON representation of a Python string
"""
if _PY3:
if isinstance(s, bytes):
s = str(s, 'utf-8')
elif type(s) is not str:
# convert an str subclass instance to exact str
# raise a TypeError otherwise
s = str.__str__(s)
else:
if isinstance(s, str) and HAS_UTF8.search(s) is not None:
s = unicode(s, 'utf-8')
elif type(s) not in (str, unicode):
# convert an str subclass instance to exact str
# convert a unicode subclass instance to exact unicode
# raise a TypeError otherwise
if isinstance(s, str):
s = str.__str__(s)
else:
s = unicode.__getnewargs__(s)[0]
def replace(match):
return ESCAPE_DCT[match.group(0)]
return _q + ESCAPE.sub(replace, s) + _q
def py_encode_basestring_ascii(s, _PY3=PY3):
"""Return an ASCII-only JSON representation of a Python string
"""
if _PY3:
if isinstance(s, bytes):
s = str(s, 'utf-8')
elif type(s) is not str:
# convert an str subclass instance to exact str
# raise a TypeError otherwise
s = str.__str__(s)
else:
if isinstance(s, str) and HAS_UTF8.search(s) is not None:
s = unicode(s, 'utf-8')
elif type(s) not in (str, unicode):
# convert an str subclass instance to exact str
# convert a unicode subclass instance to exact unicode
# raise a TypeError otherwise
if isinstance(s, str):
s = str.__str__(s)
else:
s = unicode.__getnewargs__(s)[0]
def replace(match):
s = match.group(0)
try:
return ESCAPE_DCT[s]
except KeyError:
n = ord(s)
if n < 0x10000:
#return '\\u{0:04x}'.format(n)
return '\\u%04x' % (n,)
else:
# surrogate pair
n -= 0x10000
s1 = 0xd800 | ((n >> 10) & 0x3ff)
s2 = 0xdc00 | (n & 0x3ff)
#return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)
return '\\u%04x\\u%04x' % (s1, s2)
return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'
encode_basestring_ascii = (
c_encode_basestring_ascii or py_encode_basestring_ascii)
class JSONEncoder(object):
"""Extensible JSON <http://json.org> encoder for Python data structures.
Supports the following objects and types by default:
+-------------------+---------------+
| Python | JSON |
+===================+===============+
| dict, namedtuple | object |
+-------------------+---------------+
| list, tuple | array |
+-------------------+---------------+
| str, unicode | string |
+-------------------+---------------+
| int, long, float | number |
+-------------------+---------------+
| True | true |
+-------------------+---------------+
| False | false |
+-------------------+---------------+
| None | null |
+-------------------+---------------+
To extend this to recognize other objects, subclass and implement a
``.default()`` method with another method that returns a serializable
object for ``o`` if possible, otherwise it should call the superclass
implementation (to raise ``TypeError``).
"""
item_separator = ', '
key_separator = ': '
def __init__(self, skipkeys=False, ensure_ascii=True,
check_circular=True, allow_nan=True, sort_keys=False,
indent=None, separators=None, encoding='utf-8', default=None,
use_decimal=True, namedtuple_as_object=True,
tuple_as_array=True, bigint_as_string=False,
item_sort_key=None, for_json=False, ignore_nan=False,
int_as_string_bitcount=None, iterable_as_array=False):
"""Constructor for JSONEncoder, with sensible defaults.
If skipkeys is false, then it is a TypeError to attempt
encoding of keys that are not str, int, long, float or None. If
skipkeys is True, such items are simply skipped.
If ensure_ascii is true, the output is guaranteed to be str
objects with all incoming unicode characters escaped. If
ensure_ascii is false, the output will be unicode object.
If check_circular is true, then lists, dicts, and custom encoded
objects will be checked for circular references during encoding to
prevent an infinite recursion (which would cause an OverflowError).
Otherwise, no such check takes place.
If allow_nan is true, then NaN, Infinity, and -Infinity will be
encoded as such. This behavior is not JSON specification compliant,
but is consistent with most JavaScript based encoders and decoders.
Otherwise, it will be a ValueError to encode such floats.
If sort_keys is true, then the output of dictionaries will be
sorted by key; this is useful for regression tests to ensure
that JSON serializations can be compared on a day-to-day basis.
If indent is a string, then JSON array elements and object members
will be pretty-printed with a newline followed by that string repeated
for each level of nesting. ``None`` (the default) selects the most compact
representation without any newlines. For backwards compatibility with
versions of simplejson earlier than 2.1.0, an integer is also accepted
and is converted to a string with that many spaces.
If specified, separators should be an (item_separator, key_separator)
tuple. The default is (', ', ': ') if *indent* is ``None`` and
(',', ': ') otherwise. To get the most compact JSON representation,
you should specify (',', ':') to eliminate whitespace.
If specified, default is a function that gets called for objects
that can't otherwise be serialized. It should return a JSON encodable
version of the object or raise a ``TypeError``.
If encoding is not None, then all input strings will be
transformed into unicode using that encoding prior to JSON-encoding.
The default is UTF-8.
If use_decimal is true (default: ``True``), ``decimal.Decimal`` will
be supported directly by the encoder. For the inverse, decode JSON
with ``parse_float=decimal.Decimal``.
If namedtuple_as_object is true (the default), objects with
``_asdict()`` methods will be encoded as JSON objects.
If tuple_as_array is true (the default), tuple (and subclasses) will
be encoded as JSON arrays.
If *iterable_as_array* is true (default: ``False``),
any object not in the above table that implements ``__iter__()``
will be encoded as a JSON array.
If bigint_as_string is true (not the default), ints 2**53 and higher
or lower than -2**53 will be encoded as strings. This is to avoid the
rounding that happens in Javascript otherwise.
If int_as_string_bitcount is a positive number (n), then int of size
greater than or equal to 2**n or lower than or equal to -2**n will be
encoded as strings.
If specified, item_sort_key is a callable used to sort the items in
each dictionary. This is useful if you want to sort items other than
in alphabetical order by key.
If for_json is true (not the default), objects with a ``for_json()``
method will use the return value of that method for encoding as JSON
instead of the object.
If *ignore_nan* is true (default: ``False``), then out of range
:class:`float` values (``nan``, ``inf``, ``-inf``) will be serialized
as ``null`` in compliance with the ECMA-262 specification. If true,
this will override *allow_nan*.
"""
self.skipkeys = skipkeys
self.ensure_ascii = ensure_ascii
self.check_circular = check_circular
self.allow_nan = allow_nan
self.sort_keys = sort_keys
self.use_decimal = use_decimal
self.namedtuple_as_object = namedtuple_as_object
self.tuple_as_array = tuple_as_array
self.iterable_as_array = iterable_as_array
self.bigint_as_string = bigint_as_string
self.item_sort_key = item_sort_key
self.for_json = for_json
self.ignore_nan = ignore_nan
self.int_as_string_bitcount = int_as_string_bitcount
if indent is not None and not isinstance(indent, string_types):
indent = indent * ' '
self.indent = indent
if separators is not None:
self.item_separator, self.key_separator = separators
elif indent is not None:
self.item_separator = ','
if default is not None:
self.default = default
self.encoding = encoding
def default(self, o):
"""Implement this method in a subclass such that it returns
a serializable object for ``o``, or calls the base implementation
(to raise a ``TypeError``).
For example, to support arbitrary iterators, you could
implement default like this::
def default(self, o):
try:
iterable = iter(o)
except TypeError:
pass
else:
return list(iterable)
return JSONEncoder.default(self, o)
"""
raise TypeError('Object of type %s is not JSON serializable' % o.__class__.__name__)
def encode(self, o):
"""Return a JSON string representation of a Python data structure.
>>> from simplejson import JSONEncoder
>>> JSONEncoder().encode({"foo": ["bar", "baz"]})
'{"foo": ["bar", "baz"]}'
"""
# This is for extremely simple cases and benchmarks.
if isinstance(o, binary_type):
_encoding = self.encoding
if (_encoding is not None and not (_encoding == 'utf-8')):
o = text_type(o, _encoding)
if isinstance(o, string_types):
if self.ensure_ascii:
return encode_basestring_ascii(o)
else:
return encode_basestring(o)
# This doesn't pass the iterator directly to ''.join() because the
# exceptions aren't as detailed. The list call should be roughly
# equivalent to the PySequence_Fast that ''.join() would do.
chunks = self.iterencode(o, _one_shot=True)
if not isinstance(chunks, (list, tuple)):
chunks = list(chunks)
if self.ensure_ascii:
return ''.join(chunks)
else:
return u''.join(chunks)
def iterencode(self, o, _one_shot=False):
"""Encode the given object and yield each string
representation as available.
For example::
for chunk in JSONEncoder().iterencode(bigobject):
mysocket.write(chunk)
"""
if self.check_circular:
markers = {}
else:
markers = None
if self.ensure_ascii:
_encoder = encode_basestring_ascii
else:
_encoder = encode_basestring
if self.encoding != 'utf-8' and self.encoding is not None:
def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding):
if isinstance(o, binary_type):
o = text_type(o, _encoding)
return _orig_encoder(o)
def floatstr(o, allow_nan=self.allow_nan, ignore_nan=self.ignore_nan,
_repr=FLOAT_REPR, _inf=PosInf, _neginf=-PosInf):
# Check for specials. Note that this type of test is processor
# and/or platform-specific, so do tests which don't depend on
# the internals.
if o != o:
text = 'NaN'
elif o == _inf:
text = 'Infinity'
elif o == _neginf:
text = '-Infinity'
else:
if type(o) != float:
# See #118, do not trust custom str/repr
o = float(o)
return _repr(o)
if ignore_nan:
text = 'null'
elif not allow_nan:
raise ValueError(
"Out of range float values are not JSON compliant: " +
repr(o))
return text
key_memo = {}
int_as_string_bitcount = (
53 if self.bigint_as_string else self.int_as_string_bitcount)
if (_one_shot and c_make_encoder is not None
and self.indent is None):
_iterencode = c_make_encoder(
markers, self.default, _encoder, self.indent,
self.key_separator, self.item_separator, self.sort_keys,
self.skipkeys, self.allow_nan, key_memo, self.use_decimal,
self.namedtuple_as_object, self.tuple_as_array,
int_as_string_bitcount,
self.item_sort_key, self.encoding, self.for_json,
self.ignore_nan, decimal.Decimal, self.iterable_as_array)
else:
_iterencode = _make_iterencode(
markers, self.default, _encoder, self.indent, floatstr,
self.key_separator, self.item_separator, self.sort_keys,
self.skipkeys, _one_shot, self.use_decimal,
self.namedtuple_as_object, self.tuple_as_array,
int_as_string_bitcount,
self.item_sort_key, self.encoding, self.for_json,
self.iterable_as_array, Decimal=decimal.Decimal)
try:
return _iterencode(o, 0)
finally:
key_memo.clear()
class JSONEncoderForHTML(JSONEncoder):
"""An encoder that produces JSON safe to embed in HTML.
To embed JSON content in, say, a script tag on a web page, the
characters &, < and > should be escaped. They cannot be escaped
with the usual entities (e.g. &) because they are not expanded
within <script> tags.
This class also escapes the line separator and paragraph separator
characters U+2028 and U+2029, irrespective of the ensure_ascii setting,
as these characters are not valid in JavaScript strings (see
http://timelessrepo.com/json-isnt-a-javascript-subset).
"""
def encode(self, o):
# Override JSONEncoder.encode because it has hacks for
# performance that make things more complicated.
chunks = self.iterencode(o, True)
if self.ensure_ascii:
return ''.join(chunks)
else:
return u''.join(chunks)
def iterencode(self, o, _one_shot=False):
chunks = super(JSONEncoderForHTML, self).iterencode(o, _one_shot)
for chunk in chunks:
chunk = chunk.replace('&', '\\u0026')
chunk = chunk.replace('<', '\\u003c')
chunk = chunk.replace('>', '\\u003e')
if not self.ensure_ascii:
chunk = chunk.replace(u'\u2028', '\\u2028')
chunk = chunk.replace(u'\u2029', '\\u2029')
yield chunk
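# Illustrative note (added comment, not part of the upstream simplejson
# source): the HTML-safe encoder above replaces the three HTML-significant
# characters with JavaScript unicode escapes, so its output can be embedded
# in a <script> block verbatim. A doctest-style sketch of the expected result:
#
#     >>> JSONEncoderForHTML().encode("</script>")
#     '"\\u003c/script\\u003e"'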
def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
_key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot,
_use_decimal, _namedtuple_as_object, _tuple_as_array,
_int_as_string_bitcount, _item_sort_key,
_encoding,_for_json,
_iterable_as_array,
## HACK: hand-optimized bytecode; turn globals into locals
_PY3=PY3,
ValueError=ValueError,
string_types=string_types,
Decimal=None,
dict=dict,
float=float,
id=id,
integer_types=integer_types,
isinstance=isinstance,
list=list,
str=str,
tuple=tuple,
iter=iter,
):
if _use_decimal and Decimal is None:
Decimal = decimal.Decimal
if _item_sort_key and not callable(_item_sort_key):
raise TypeError("item_sort_key must be None or callable")
elif _sort_keys and not _item_sort_key:
_item_sort_key = itemgetter(0)
if (_int_as_string_bitcount is not None and
(_int_as_string_bitcount <= 0 or
not isinstance(_int_as_string_bitcount, integer_types))):
raise TypeError("int_as_string_bitcount must be a positive integer")
def _encode_int(value):
skip_quoting = (
_int_as_string_bitcount is None
or
_int_as_string_bitcount < 1
)
if type(value) not in integer_types:
# See #118, do not trust custom str/repr
value = int(value)
if (
skip_quoting or
(-1 << _int_as_string_bitcount)
< value <
(1 << _int_as_string_bitcount)
):
return str(value)
return '"' + str(value) + '"'
def _iterencode_list(lst, _current_indent_level):
if not lst:
yield '[]'
return
if markers is not None:
markerid = id(lst)
if markerid in markers:
raise ValueError("Circular reference detected")
markers[markerid] = lst
buf = '['
if _indent is not None:
_current_indent_level += 1
newline_indent = '\n' + (_indent * _current_indent_level)
separator = _item_separator + newline_indent
buf += newline_indent
else:
newline_indent = None
separator = _item_separator
first = True
for value in lst:
if first:
first = False
else:
buf = separator
if isinstance(value, string_types):
yield buf + _encoder(value)
elif _PY3 and isinstance(value, bytes) and _encoding is not None:
yield buf + _encoder(value)
elif isinstance(value, RawJSON):
yield buf + value.encoded_json
elif value is None:
yield buf + 'null'
elif value is True:
yield buf + 'true'
elif value is False:
yield buf + 'false'
elif isinstance(value, integer_types):
yield buf + _encode_int(value)
elif isinstance(value, float):
yield buf + _floatstr(value)
elif _use_decimal and isinstance(value, Decimal):
yield buf + str(value)
else:
yield buf
for_json = _for_json and getattr(value, 'for_json', None)
if for_json and callable(for_json):
chunks = _iterencode(for_json(), _current_indent_level)
elif isinstance(value, list):
chunks = _iterencode_list(value, _current_indent_level)
else:
_asdict = _namedtuple_as_object and getattr(value, '_asdict', None)
if _asdict and callable(_asdict):
chunks = _iterencode_dict(_asdict(),
_current_indent_level)
elif _tuple_as_array and isinstance(value, tuple):
chunks = _iterencode_list(value, _current_indent_level)
elif isinstance(value, dict):
chunks = _iterencode_dict(value, _current_indent_level)
else:
chunks = _iterencode(value, _current_indent_level)
for chunk in chunks:
yield chunk
if first:
# iterable_as_array misses the fast path at the top
yield '[]'
else:
if newline_indent is not None:
_current_indent_level -= 1
yield '\n' + (_indent * _current_indent_level)
yield ']'
if markers is not None:
del markers[markerid]
def _stringify_key(key):
if isinstance(key, string_types): # pragma: no cover
pass
elif _PY3 and isinstance(key, bytes) and _encoding is not None:
key = str(key, _encoding)
elif isinstance(key, float):
key = _floatstr(key)
elif key is True:
key = 'true'
elif key is False:
key = 'false'
elif key is None:
key = 'null'
elif isinstance(key, integer_types):
if type(key) not in integer_types:
# See #118, do not trust custom str/repr
key = int(key)
key = str(key)
elif _use_decimal and isinstance(key, Decimal):
key = str(key)
elif _skipkeys:
key = None
else:
raise TypeError('keys must be str, int, float, bool or None, '
'not %s' % key.__class__.__name__)
return key
def _iterencode_dict(dct, _current_indent_level):
if not dct:
yield '{}'
return
if markers is not None:
markerid = id(dct)
if markerid in markers:
raise ValueError("Circular reference detected")
markers[markerid] = dct
yield '{'
if _indent is not None:
_current_indent_level += 1
newline_indent = '\n' + (_indent * _current_indent_level)
item_separator = _item_separator + newline_indent
yield newline_indent
else:
newline_indent = None
item_separator = _item_separator
first = True
if _PY3:
iteritems = dct.items()
else:
iteritems = dct.iteritems()
if _item_sort_key:
items = []
for k, v in dct.items():
if not isinstance(k, string_types):
k = _stringify_key(k)
if k is None:
continue
items.append((k, v))
items.sort(key=_item_sort_key)
else:
items = iteritems
for key, value in items:
if not (_item_sort_key or isinstance(key, string_types)):
key = _stringify_key(key)
if key is None:
# _skipkeys must be True
continue
if first:
first = False
else:
yield item_separator
yield _encoder(key)
yield _key_separator
if isinstance(value, string_types):
yield _encoder(value)
elif _PY3 and isinstance(value, bytes) and _encoding is not None:
yield _encoder(value)
elif isinstance(value, RawJSON):
yield value.encoded_json
elif value is None:
yield 'null'
elif value is True:
yield 'true'
elif value is False:
yield 'false'
elif isinstance(value, integer_types):
yield _encode_int(value)
elif isinstance(value, float):
yield _floatstr(value)
elif _use_decimal and isinstance(value, Decimal):
yield str(value)
else:
for_json = _for_json and getattr(value, 'for_json', None)
if for_json and callable(for_json):
chunks = _iterencode(for_json(), _current_indent_level)
elif isinstance(value, list):
chunks = _iterencode_list(value, _current_indent_level)
else:
_asdict = _namedtuple_as_object and getattr(value, '_asdict', None)
if _asdict and callable(_asdict):
chunks = _iterencode_dict(_asdict(),
_current_indent_level)
elif _tuple_as_array and isinstance(value, tuple):
chunks = _iterencode_list(value, _current_indent_level)
elif isinstance(value, dict):
chunks = _iterencode_dict(value, _current_indent_level)
else:
chunks = _iterencode(value, _current_indent_level)
for chunk in chunks:
yield chunk
if newline_indent is not None:
_current_indent_level -= 1
yield '\n' + (_indent * _current_indent_level)
yield '}'
if markers is not None:
del markers[markerid]
def _iterencode(o, _current_indent_level):
if isinstance(o, string_types):
yield _encoder(o)
elif _PY3 and isinstance(o, bytes) and _encoding is not None:
yield _encoder(o)
elif isinstance(o, RawJSON):
yield o.encoded_json
elif o is None:
yield 'null'
elif o is True:
yield 'true'
elif o is False:
yield 'false'
elif isinstance(o, integer_types):
yield _encode_int(o)
elif isinstance(o, float):
yield _floatstr(o)
else:
for_json = _for_json and getattr(o, 'for_json', None)
if for_json and callable(for_json):
for chunk in _iterencode(for_json(), _current_indent_level):
yield chunk
elif isinstance(o, list):
for chunk in _iterencode_list(o, _current_indent_level):
yield chunk
else:
_asdict = _namedtuple_as_object and getattr(o, '_asdict', None)
if _asdict and callable(_asdict):
for chunk in _iterencode_dict(_asdict(),
_current_indent_level):
yield chunk
elif (_tuple_as_array and isinstance(o, tuple)):
for chunk in _iterencode_list(o, _current_indent_level):
yield chunk
elif isinstance(o, dict):
for chunk in _iterencode_dict(o, _current_indent_level):
yield chunk
elif _use_decimal and isinstance(o, Decimal):
yield str(o)
else:
while _iterable_as_array:
# Markers are not checked here because it is valid for
# an iterable to return self.
try:
o = iter(o)
except TypeError:
break
for chunk in _iterencode_list(o, _current_indent_level):
yield chunk
return
if markers is not None:
markerid = id(o)
if markerid in markers:
raise ValueError("Circular reference detected")
markers[markerid] = o
o = _default(o)
for chunk in _iterencode(o, _current_indent_level):
yield chunk
if markers is not None:
del markers[markerid]
    return _iterencode

# ---- agouti_pkg/simplejson/encoder.py (end of file) ----
from UserDict import DictMixin
class OrderedDict(dict, DictMixin):
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.__map = {} # key --> [key, prev, next]
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next = self.__map.pop(key)
prev[2] = next
next[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
key = reversed(self).next() if last else iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
return len(self)==len(other) and \
all(p==q for p, q in zip(self.items(), other.items()))
return dict.__eq__(self, other)
def __ne__(self, other):
        return not self == other

# ---- agouti_pkg/simplejson/ordered_dict.py (end of file) ----
from __future__ import absolute_import
import re
import sys
import struct
from .compat import PY3, unichr
from .scanner import make_scanner, JSONDecodeError
def _import_c_scanstring():
try:
from ._speedups import scanstring
return scanstring
except ImportError:
return None
c_scanstring = _import_c_scanstring()
# NOTE (3.1.0): JSONDecodeError may still be imported from this module for
# compatibility, but it was never in the __all__
__all__ = ['JSONDecoder']
FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL
def _floatconstants():
if sys.version_info < (2, 6):
_BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
nan, inf = struct.unpack('>dd', _BYTES)
else:
nan = float('nan')
inf = float('inf')
return nan, inf, -inf
NaN, PosInf, NegInf = _floatconstants()
_CONSTANTS = {
'-Infinity': NegInf,
'Infinity': PosInf,
'NaN': NaN,
}
STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
'"': u'"', '\\': u'\\', '/': u'/',
'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}
DEFAULT_ENCODING = "utf-8"
def py_scanstring(s, end, encoding=None, strict=True,
_b=BACKSLASH, _m=STRINGCHUNK.match, _join=u''.join,
_PY3=PY3, _maxunicode=sys.maxunicode):
"""Scan the string s for a JSON string. End is the index of the
character in s after the quote that started the JSON string.
Unescapes all valid JSON string escape sequences and raises ValueError
on attempt to decode an invalid string. If strict is False then literal
control characters are allowed in the string.
Returns a tuple of the decoded string and the index of the character in s
after the end quote."""
if encoding is None:
encoding = DEFAULT_ENCODING
chunks = []
_append = chunks.append
begin = end - 1
while 1:
chunk = _m(s, end)
if chunk is None:
raise JSONDecodeError(
"Unterminated string starting at", s, begin)
end = chunk.end()
content, terminator = chunk.groups()
        # Content contains zero or more unescaped string characters
if content:
if not _PY3 and not isinstance(content, unicode):
content = unicode(content, encoding)
_append(content)
# Terminator is the end of string, a literal control character,
# or a backslash denoting that an escape sequence follows
if terminator == '"':
break
elif terminator != '\\':
if strict:
msg = "Invalid control character %r at"
raise JSONDecodeError(msg, s, end)
else:
_append(terminator)
continue
try:
esc = s[end]
except IndexError:
raise JSONDecodeError(
"Unterminated string starting at", s, begin)
# If not a unicode escape sequence, must be in the lookup table
if esc != 'u':
try:
char = _b[esc]
except KeyError:
msg = "Invalid \\X escape sequence %r"
raise JSONDecodeError(msg, s, end)
end += 1
else:
# Unicode escape sequence
msg = "Invalid \\uXXXX escape sequence"
esc = s[end + 1:end + 5]
escX = esc[1:2]
if len(esc) != 4 or escX == 'x' or escX == 'X':
raise JSONDecodeError(msg, s, end - 1)
try:
uni = int(esc, 16)
except ValueError:
raise JSONDecodeError(msg, s, end - 1)
end += 5
# Check for surrogate pair on UCS-4 systems
# Note that this will join high/low surrogate pairs
# but will also pass unpaired surrogates through
if (_maxunicode > 65535 and
uni & 0xfc00 == 0xd800 and
s[end:end + 2] == '\\u'):
esc2 = s[end + 2:end + 6]
escX = esc2[1:2]
if len(esc2) == 4 and not (escX == 'x' or escX == 'X'):
try:
uni2 = int(esc2, 16)
except ValueError:
raise JSONDecodeError(msg, s, end)
if uni2 & 0xfc00 == 0xdc00:
uni = 0x10000 + (((uni - 0xd800) << 10) |
(uni2 - 0xdc00))
end += 6
char = unichr(uni)
# Append the unescaped character
_append(char)
return _join(chunks), end
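# Illustrative doctest-style sketch (added comment, not upstream code):
# `end` is the index just after the opening quote, and the returned index
# points just past the closing quote, e.g. on Python 3:
#
#     >>> py_scanstring('"hello" tail', 1)
#     ('hello', 7)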
# Use speedup if available
scanstring = c_scanstring or py_scanstring
WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'
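# Descriptive comment (added): JSONObject parses a JSON object literal.
# `state` is a (string, index) pair with the index pointing just past the
# opening '{'; the function returns the decoded mapping (or the result of
# object_pairs_hook/object_hook) together with the index just past the
# closing '}'.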
def JSONObject(state, encoding, strict, scan_once, object_hook,
object_pairs_hook, memo=None,
_w=WHITESPACE.match, _ws=WHITESPACE_STR):
(s, end) = state
# Backwards compatibility
if memo is None:
memo = {}
memo_get = memo.setdefault
pairs = []
# Use a slice to prevent IndexError from being raised, the following
# check will raise a more specific ValueError if the string is empty
nextchar = s[end:end + 1]
# Normally we expect nextchar == '"'
if nextchar != '"':
if nextchar in _ws:
end = _w(s, end).end()
nextchar = s[end:end + 1]
# Trivial empty object
if nextchar == '}':
if object_pairs_hook is not None:
result = object_pairs_hook(pairs)
return result, end + 1
pairs = {}
if object_hook is not None:
pairs = object_hook(pairs)
return pairs, end + 1
elif nextchar != '"':
raise JSONDecodeError(
"Expecting property name enclosed in double quotes",
s, end)
end += 1
while True:
key, end = scanstring(s, end, encoding, strict)
key = memo_get(key, key)
# To skip some function call overhead we optimize the fast paths where
# the JSON key separator is ": " or just ":".
if s[end:end + 1] != ':':
end = _w(s, end).end()
if s[end:end + 1] != ':':
raise JSONDecodeError("Expecting ':' delimiter", s, end)
end += 1
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
value, end = scan_once(s, end)
pairs.append((key, value))
try:
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar == '}':
break
elif nextchar != ',':
raise JSONDecodeError("Expecting ',' delimiter or '}'", s, end - 1)
try:
nextchar = s[end]
if nextchar in _ws:
end += 1
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar != '"':
raise JSONDecodeError(
"Expecting property name enclosed in double quotes",
s, end - 1)
if object_pairs_hook is not None:
result = object_pairs_hook(pairs)
return result, end
pairs = dict(pairs)
if object_hook is not None:
pairs = object_hook(pairs)
return pairs, end
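# Descriptive comment (added): JSONArray parses a JSON array literal.
# `state` is a (string, index) pair with the index pointing just past the
# opening '['; it returns the decoded list and the index just past the
# closing ']'.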
def JSONArray(state, scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
(s, end) = state
values = []
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
# Look-ahead for trivial empty array
if nextchar == ']':
return values, end + 1
elif nextchar == '':
raise JSONDecodeError("Expecting value or ']'", s, end)
_append = values.append
while True:
value, end = scan_once(s, end)
_append(value)
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
end += 1
if nextchar == ']':
break
elif nextchar != ',':
raise JSONDecodeError("Expecting ',' delimiter or ']'", s, end - 1)
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
return values, end
class JSONDecoder(object):
"""Simple JSON <http://json.org> decoder
Performs the following translations in decoding by default:
+---------------+-------------------+
| JSON | Python |
+===============+===================+
| object | dict |
+---------------+-------------------+
| array | list |
+---------------+-------------------+
| string | str, unicode |
+---------------+-------------------+
| number (int) | int, long |
+---------------+-------------------+
| number (real) | float |
+---------------+-------------------+
| true | True |
+---------------+-------------------+
| false | False |
+---------------+-------------------+
| null | None |
+---------------+-------------------+
It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
their corresponding ``float`` values, which is outside the JSON spec.
"""
def __init__(self, encoding=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, strict=True,
object_pairs_hook=None):
"""
*encoding* determines the encoding used to interpret any
:class:`str` objects decoded by this instance (``'utf-8'`` by
default). It has no effect when decoding :class:`unicode` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as :class:`unicode`.
*object_hook*, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given :class:`dict`. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
*object_pairs_hook* is an optional function that will be called with
the result of any object literal decode with an ordered list of pairs.
The return value of *object_pairs_hook* will be used instead of the
:class:`dict`. This feature can be used to implement custom decoders
that rely on the order that the key and value pairs are decoded (for
example, :func:`collections.OrderedDict` will remember the order of
insertion). If *object_hook* is also defined, the *object_pairs_hook*
takes priority.
*parse_float*, if specified, will be called with the string of every
JSON float to be decoded. By default, this is equivalent to
``float(num_str)``. This can be used to use another datatype or parser
for JSON floats (e.g. :class:`decimal.Decimal`).
*parse_int*, if specified, will be called with the string of every
JSON int to be decoded. By default, this is equivalent to
``int(num_str)``. This can be used to use another datatype or parser
for JSON integers (e.g. :class:`float`).
*parse_constant*, if specified, will be called with one of the
following strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``. This
can be used to raise an exception if invalid JSON numbers are
encountered.
*strict* controls the parser's behavior when it encounters an
invalid control character in a string. The default setting of
``True`` means that unescaped control characters are parse errors, if
``False`` then control characters will be allowed in strings.
"""
if encoding is None:
encoding = DEFAULT_ENCODING
self.encoding = encoding
self.object_hook = object_hook
self.object_pairs_hook = object_pairs_hook
self.parse_float = parse_float or float
self.parse_int = parse_int or int
self.parse_constant = parse_constant or _CONSTANTS.__getitem__
self.strict = strict
self.parse_object = JSONObject
self.parse_array = JSONArray
self.parse_string = scanstring
self.memo = {}
self.scan_once = make_scanner(self)
def decode(self, s, _w=WHITESPACE.match, _PY3=PY3):
"""Return the Python representation of ``s`` (a ``str`` or ``unicode``
instance containing a JSON document)
"""
if _PY3 and isinstance(s, bytes):
s = str(s, self.encoding)
obj, end = self.raw_decode(s)
end = _w(s, end).end()
if end != len(s):
raise JSONDecodeError("Extra data", s, end, len(s))
return obj
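    # Illustrative sketch (added comment, not upstream code): decode()
    # requires the whole input to be a single JSON document, while
    # raw_decode() below also reports where parsing stopped, e.g.
    #
    #     >>> JSONDecoder().decode('{"a": 1}')
    #     {'a': 1}
    #     >>> JSONDecoder().raw_decode('{"a": 1} trailing')
    #     ({'a': 1}, 8)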
def raw_decode(self, s, idx=0, _w=WHITESPACE.match, _PY3=PY3):
"""Decode a JSON document from ``s`` (a ``str`` or ``unicode``
beginning with a JSON document) and return a 2-tuple of the Python
representation and the index in ``s`` where the document ended.
Optionally, ``idx`` can be used to specify an offset in ``s`` where
the JSON document begins.
This can be used to decode a JSON document from a string that may
have extraneous data at the end.
"""
if idx < 0:
# Ensure that raw_decode bails on negative indexes, the regex
# would otherwise mask this behavior. #98
raise JSONDecodeError('Expecting value', s, idx)
if _PY3 and not isinstance(s, str):
raise TypeError("Input string must be text, not bytes")
# strip UTF-8 bom
if len(s) > idx:
ord0 = ord(s[idx])
if ord0 == 0xfeff:
idx += 1
elif ord0 == 0xef and s[idx:idx + 3] == '\xef\xbb\xbf':
idx += 3
        return self.scan_once(s, idx=_w(s, idx).end())

# ---- agouti_pkg/simplejson/decoder.py (end of file) ----
import re
from .errors import JSONDecodeError
def _import_c_make_scanner():
try:
from ._speedups import make_scanner
return make_scanner
except ImportError:
return None
c_make_scanner = _import_c_make_scanner()
__all__ = ['make_scanner', 'JSONDecodeError']
NUMBER_RE = re.compile(
r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?',
(re.VERBOSE | re.MULTILINE | re.DOTALL))
def py_make_scanner(context):
parse_object = context.parse_object
parse_array = context.parse_array
parse_string = context.parse_string
match_number = NUMBER_RE.match
encoding = context.encoding
strict = context.strict
parse_float = context.parse_float
parse_int = context.parse_int
parse_constant = context.parse_constant
object_hook = context.object_hook
object_pairs_hook = context.object_pairs_hook
memo = context.memo
def _scan_once(string, idx):
errmsg = 'Expecting value'
try:
nextchar = string[idx]
except IndexError:
raise JSONDecodeError(errmsg, string, idx)
if nextchar == '"':
return parse_string(string, idx + 1, encoding, strict)
elif nextchar == '{':
return parse_object((string, idx + 1), encoding, strict,
_scan_once, object_hook, object_pairs_hook, memo)
elif nextchar == '[':
return parse_array((string, idx + 1), _scan_once)
elif nextchar == 'n' and string[idx:idx + 4] == 'null':
return None, idx + 4
elif nextchar == 't' and string[idx:idx + 4] == 'true':
return True, idx + 4
elif nextchar == 'f' and string[idx:idx + 5] == 'false':
return False, idx + 5
m = match_number(string, idx)
if m is not None:
integer, frac, exp = m.groups()
if frac or exp:
res = parse_float(integer + (frac or '') + (exp or ''))
else:
res = parse_int(integer)
return res, m.end()
elif nextchar == 'N' and string[idx:idx + 3] == 'NaN':
return parse_constant('NaN'), idx + 3
elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity':
return parse_constant('Infinity'), idx + 8
elif nextchar == '-' and string[idx:idx + 9] == '-Infinity':
return parse_constant('-Infinity'), idx + 9
else:
raise JSONDecodeError(errmsg, string, idx)
def scan_once(string, idx):
if idx < 0:
# Ensure the same behavior as the C speedup, otherwise
# this would work for *some* negative string indices due
# to the behavior of __getitem__ for strings. #98
raise JSONDecodeError('Expecting value', string, idx)
try:
return _scan_once(string, idx)
finally:
memo.clear()
return scan_once
make_scanner = c_make_scanner or py_make_scanner

# ---- agouti_pkg/simplejson/scanner.py (end of file) ----
r"""JSON (JavaScript Object Notation) <http://json.org> is a subset of
JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
interchange format.
:mod:`simplejson` exposes an API familiar to users of the standard library
:mod:`marshal` and :mod:`pickle` modules. It is the externally maintained
version of the :mod:`json` library contained in Python 2.6, but maintains
compatibility back to Python 2.5 and (currently) has significant performance
advantages, even without using the optional C extension for speedups.
Encoding basic Python object hierarchies::
>>> import simplejson as json
>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
'["foo", {"bar": ["baz", null, 1.0, 2]}]'
>>> print(json.dumps("\"foo\bar"))
"\"foo\bar"
>>> print(json.dumps(u'\u1234'))
"\u1234"
>>> print(json.dumps('\\'))
"\\"
>>> print(json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True))
{"a": 0, "b": 0, "c": 0}
>>> from simplejson.compat import StringIO
>>> io = StringIO()
>>> json.dump(['streaming API'], io)
>>> io.getvalue()
'["streaming API"]'
Compact encoding::
>>> import simplejson as json
>>> obj = [1,2,3,{'4': 5, '6': 7}]
>>> json.dumps(obj, separators=(',',':'), sort_keys=True)
'[1,2,3,{"4":5,"6":7}]'
Pretty printing::
>>> import simplejson as json
>>> print(json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=' '))
{
"4": 5,
"6": 7
}
Decoding JSON::
>>> import simplejson as json
>>> obj = [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
>>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') == obj
True
>>> json.loads('"\\"foo\\bar"') == u'"foo\x08ar'
True
>>> from simplejson.compat import StringIO
>>> io = StringIO('["streaming API"]')
>>> json.load(io)[0] == 'streaming API'
True
Specializing JSON object decoding::
>>> import simplejson as json
>>> def as_complex(dct):
... if '__complex__' in dct:
... return complex(dct['real'], dct['imag'])
... return dct
...
>>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
... object_hook=as_complex)
(1+2j)
>>> from decimal import Decimal
>>> json.loads('1.1', parse_float=Decimal) == Decimal('1.1')
True
Specializing JSON object encoding::
>>> import simplejson as json
>>> def encode_complex(obj):
... if isinstance(obj, complex):
... return [obj.real, obj.imag]
... raise TypeError('Object of type %s is not JSON serializable' %
... obj.__class__.__name__)
...
>>> json.dumps(2 + 1j, default=encode_complex)
'[2.0, 1.0]'
>>> json.JSONEncoder(default=encode_complex).encode(2 + 1j)
'[2.0, 1.0]'
>>> ''.join(json.JSONEncoder(default=encode_complex).iterencode(2 + 1j))
'[2.0, 1.0]'
Using simplejson.tool from the shell to validate and pretty-print::
$ echo '{"json":"obj"}' | python -m simplejson.tool
{
"json": "obj"
}
$ echo '{ 1.2:3.4}' | python -m simplejson.tool
    Expecting property name enclosed in double quotes: line 1 column 3 (char 2)
"""
from __future__ import absolute_import
__version__ = '3.16.0'
__all__ = [
'dump', 'dumps', 'load', 'loads',
'JSONDecoder', 'JSONDecodeError', 'JSONEncoder',
'OrderedDict', 'simple_first', 'RawJSON'
]
__author__ = 'Bob Ippolito <[email protected]>'
from decimal import Decimal
from .errors import JSONDecodeError
from .raw_json import RawJSON
from .decoder import JSONDecoder
from .encoder import JSONEncoder, JSONEncoderForHTML
def _import_OrderedDict():
import collections
try:
return collections.OrderedDict
except AttributeError:
from . import ordered_dict
return ordered_dict.OrderedDict
OrderedDict = _import_OrderedDict()
def _import_c_make_encoder():
try:
from ._speedups import make_encoder
return make_encoder
except ImportError:
return None
_default_encoder = JSONEncoder(
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
encoding='utf-8',
default=None,
use_decimal=True,
namedtuple_as_object=True,
tuple_as_array=True,
iterable_as_array=False,
bigint_as_string=False,
item_sort_key=None,
for_json=False,
ignore_nan=False,
int_as_string_bitcount=None,
)
def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, use_decimal=True,
namedtuple_as_object=True, tuple_as_array=True,
bigint_as_string=False, sort_keys=False, item_sort_key=None,
for_json=False, ignore_nan=False, int_as_string_bitcount=None,
iterable_as_array=False, **kw):
"""Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
``.write()``-supporting file-like object).
If *skipkeys* is true then ``dict`` keys that are not basic types
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
will be skipped instead of raising a ``TypeError``.
    If *ensure_ascii* is false, then some of the chunks written to ``fp``
may be ``unicode`` instances, subject to normal Python ``str`` to
``unicode`` coercion rules. Unless ``fp.write()`` explicitly
understands ``unicode`` (as in ``codecs.getwriter()``) this is likely
to cause an error.
If *check_circular* is false, then the circular reference check
for container types will be skipped and a circular reference will
result in an ``OverflowError`` (or worse).
If *allow_nan* is false, then it will be a ``ValueError`` to
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
in strict compliance of the original JSON specification, instead of using
the JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``). See
*ignore_nan* for ECMA-262 compliant behavior.
If *indent* is a string, then JSON array elements and object members
will be pretty-printed with a newline followed by that string repeated
for each level of nesting. ``None`` (the default) selects the most compact
representation without any newlines. For backwards compatibility with
versions of simplejson earlier than 2.1.0, an integer is also accepted
and is converted to a string with that many spaces.
If specified, *separators* should be an
``(item_separator, key_separator)`` tuple. The default is ``(', ', ': ')``
if *indent* is ``None`` and ``(',', ': ')`` otherwise. To get the most
compact JSON representation, you should specify ``(',', ':')`` to eliminate
whitespace.
*encoding* is the character encoding for str instances, default is UTF-8.
*default(obj)* is a function that should return a serializable version
of obj or raise ``TypeError``. The default simply raises ``TypeError``.
If *use_decimal* is true (default: ``True``) then decimal.Decimal
will be natively serialized to JSON with full precision.
If *namedtuple_as_object* is true (default: ``True``),
:class:`tuple` subclasses with ``_asdict()`` methods will be encoded
as JSON objects.
If *tuple_as_array* is true (default: ``True``),
:class:`tuple` (and subclasses) will be encoded as JSON arrays.
If *iterable_as_array* is true (default: ``False``),
any object not in the above table that implements ``__iter__()``
will be encoded as a JSON array.
If *bigint_as_string* is true (default: ``False``), ints 2**53 and higher
or lower than -2**53 will be encoded as strings. This is to avoid the
rounding that happens in Javascript otherwise. Note that this is still a
lossy operation that will not round-trip correctly and should be used
sparingly.
If *int_as_string_bitcount* is a positive number (n), then int of size
greater than or equal to 2**n or lower than or equal to -2**n will be
encoded as strings.
If specified, *item_sort_key* is a callable used to sort the items in
each dictionary. This is useful if you want to sort items other than
in alphabetical order by key. This option takes precedence over
*sort_keys*.
If *sort_keys* is true (default: ``False``), the output of dictionaries
will be sorted by item.
If *for_json* is true (default: ``False``), objects with a ``for_json()``
method will use the return value of that method for encoding as JSON
instead of the object.
If *ignore_nan* is true (default: ``False``), then out of range
:class:`float` values (``nan``, ``inf``, ``-inf``) will be serialized as
``null`` in compliance with the ECMA-262 specification. If true, this will
override *allow_nan*.
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
``.default()`` method to serialize additional types), specify it with
the ``cls`` kwarg. NOTE: You should use *default* or *for_json* instead
of subclassing whenever possible.
"""
# cached encoder
if (not skipkeys and ensure_ascii and
check_circular and allow_nan and
cls is None and indent is None and separators is None and
encoding == 'utf-8' and default is None and use_decimal
and namedtuple_as_object and tuple_as_array and not iterable_as_array
and not bigint_as_string and not sort_keys
and not item_sort_key and not for_json
and not ignore_nan and int_as_string_bitcount is None
and not kw
):
iterable = _default_encoder.iterencode(obj)
else:
if cls is None:
cls = JSONEncoder
iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
separators=separators, encoding=encoding,
default=default, use_decimal=use_decimal,
namedtuple_as_object=namedtuple_as_object,
tuple_as_array=tuple_as_array,
iterable_as_array=iterable_as_array,
bigint_as_string=bigint_as_string,
sort_keys=sort_keys,
item_sort_key=item_sort_key,
for_json=for_json,
ignore_nan=ignore_nan,
int_as_string_bitcount=int_as_string_bitcount,
**kw).iterencode(obj)
# could accelerate with writelines in some versions of Python, at
# a debuggability cost
for chunk in iterable:
fp.write(chunk)
def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, use_decimal=True,
namedtuple_as_object=True, tuple_as_array=True,
bigint_as_string=False, sort_keys=False, item_sort_key=None,
for_json=False, ignore_nan=False, int_as_string_bitcount=None,
iterable_as_array=False, **kw):
"""Serialize ``obj`` to a JSON formatted ``str``.
    If ``skipkeys`` is true then ``dict`` keys that are not basic types
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
will be skipped instead of raising a ``TypeError``.
If ``ensure_ascii`` is false, then the return value will be a
``unicode`` instance subject to normal Python ``str`` to ``unicode``
coercion rules instead of being escaped to an ASCII ``str``.
If ``check_circular`` is false, then the circular reference check
for container types will be skipped and a circular reference will
result in an ``OverflowError`` (or worse).
If ``allow_nan`` is false, then it will be a ``ValueError`` to
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
strict compliance of the JSON specification, instead of using the
JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
If ``indent`` is a string, then JSON array elements and object members
will be pretty-printed with a newline followed by that string repeated
for each level of nesting. ``None`` (the default) selects the most compact
representation without any newlines. For backwards compatibility with
versions of simplejson earlier than 2.1.0, an integer is also accepted
and is converted to a string with that many spaces.
If specified, ``separators`` should be an
``(item_separator, key_separator)`` tuple. The default is ``(', ', ': ')``
if *indent* is ``None`` and ``(',', ': ')`` otherwise. To get the most
compact JSON representation, you should specify ``(',', ':')`` to eliminate
whitespace.
``encoding`` is the character encoding for str instances, default is UTF-8.
``default(obj)`` is a function that should return a serializable version
of obj or raise TypeError. The default simply raises TypeError.
If *use_decimal* is true (default: ``True``) then decimal.Decimal
will be natively serialized to JSON with full precision.
If *namedtuple_as_object* is true (default: ``True``),
:class:`tuple` subclasses with ``_asdict()`` methods will be encoded
as JSON objects.
If *tuple_as_array* is true (default: ``True``),
:class:`tuple` (and subclasses) will be encoded as JSON arrays.
If *iterable_as_array* is true (default: ``False``),
any object not in the above table that implements ``__iter__()``
will be encoded as a JSON array.
If *bigint_as_string* is true (not the default), ints 2**53 and higher
or lower than -2**53 will be encoded as strings. This is to avoid the
rounding that happens in Javascript otherwise.
If *int_as_string_bitcount* is a positive number (n), then int of size
greater than or equal to 2**n or lower than or equal to -2**n will be
encoded as strings.
If specified, *item_sort_key* is a callable used to sort the items in
each dictionary. This is useful if you want to sort items other than
    in alphabetical order by key. This option takes precedence over
*sort_keys*.
If *sort_keys* is true (default: ``False``), the output of dictionaries
will be sorted by item.
If *for_json* is true (default: ``False``), objects with a ``for_json()``
method will use the return value of that method for encoding as JSON
instead of the object.
If *ignore_nan* is true (default: ``False``), then out of range
:class:`float` values (``nan``, ``inf``, ``-inf``) will be serialized as
``null`` in compliance with the ECMA-262 specification. If true, this will
override *allow_nan*.
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
``.default()`` method to serialize additional types), specify it with
the ``cls`` kwarg. NOTE: You should use *default* instead of subclassing
whenever possible.
"""
# cached encoder
if (not skipkeys and ensure_ascii and
check_circular and allow_nan and
cls is None and indent is None and separators is None and
encoding == 'utf-8' and default is None and use_decimal
and namedtuple_as_object and tuple_as_array and not iterable_as_array
and not bigint_as_string and not sort_keys
and not item_sort_key and not for_json
and not ignore_nan and int_as_string_bitcount is None
and not kw
):
return _default_encoder.encode(obj)
if cls is None:
cls = JSONEncoder
return cls(
skipkeys=skipkeys, ensure_ascii=ensure_ascii,
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
separators=separators, encoding=encoding, default=default,
use_decimal=use_decimal,
namedtuple_as_object=namedtuple_as_object,
tuple_as_array=tuple_as_array,
iterable_as_array=iterable_as_array,
bigint_as_string=bigint_as_string,
sort_keys=sort_keys,
item_sort_key=item_sort_key,
for_json=for_json,
ignore_nan=ignore_nan,
int_as_string_bitcount=int_as_string_bitcount,
**kw).encode(obj)
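# Illustrative sketch (added comment, not upstream code): the big-integer
# options documented above only change how large integers are rendered, e.g.
#
#     >>> dumps(2**53)
#     '9007199254740992'
#     >>> dumps(2**53, bigint_as_string=True)
#     '"9007199254740992"'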
_default_decoder = JSONDecoder(encoding=None, object_hook=None,
object_pairs_hook=None)
def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, object_pairs_hook=None,
use_decimal=False, namedtuple_as_object=True, tuple_as_array=True,
**kw):
"""Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
a JSON document) to a Python object.
*encoding* determines the encoding used to interpret any
:class:`str` objects decoded by this instance (``'utf-8'`` by
default). It has no effect when decoding :class:`unicode` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as :class:`unicode`.
*object_hook*, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given :class:`dict`. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
*object_pairs_hook* is an optional function that will be called with
the result of any object literal decode with an ordered list of pairs.
The return value of *object_pairs_hook* will be used instead of the
:class:`dict`. This feature can be used to implement custom decoders
that rely on the order that the key and value pairs are decoded (for
example, :func:`collections.OrderedDict` will remember the order of
insertion). If *object_hook* is also defined, the *object_pairs_hook*
takes priority.
*parse_float*, if specified, will be called with the string of every
JSON float to be decoded. By default, this is equivalent to
``float(num_str)``. This can be used to use another datatype or parser
for JSON floats (e.g. :class:`decimal.Decimal`).
*parse_int*, if specified, will be called with the string of every
JSON int to be decoded. By default, this is equivalent to
``int(num_str)``. This can be used to use another datatype or parser
for JSON integers (e.g. :class:`float`).
*parse_constant*, if specified, will be called with one of the
following strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``. This
can be used to raise an exception if invalid JSON numbers are
encountered.
If *use_decimal* is true (default: ``False``) then it implies
parse_float=decimal.Decimal for parity with ``dump``.
To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
kwarg. NOTE: You should use *object_hook* or *object_pairs_hook* instead
of subclassing whenever possible.
"""
return loads(fp.read(),
encoding=encoding, cls=cls, object_hook=object_hook,
parse_float=parse_float, parse_int=parse_int,
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook,
use_decimal=use_decimal, **kw)
def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, object_pairs_hook=None,
use_decimal=False, **kw):
"""Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
document) to a Python object.
*encoding* determines the encoding used to interpret any
:class:`str` objects decoded by this instance (``'utf-8'`` by
default). It has no effect when decoding :class:`unicode` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as :class:`unicode`.
*object_hook*, if specified, will be called with the result of every
JSON object decoded and its return value will be used in place of the
given :class:`dict`. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
*object_pairs_hook* is an optional function that will be called with
the result of any object literal decode with an ordered list of pairs.
The return value of *object_pairs_hook* will be used instead of the
:class:`dict`. This feature can be used to implement custom decoders
that rely on the order that the key and value pairs are decoded (for
example, :func:`collections.OrderedDict` will remember the order of
insertion). If *object_hook* is also defined, the *object_pairs_hook*
takes priority.
*parse_float*, if specified, will be called with the string of every
JSON float to be decoded. By default, this is equivalent to
``float(num_str)``. This can be used to use another datatype or parser
for JSON floats (e.g. :class:`decimal.Decimal`).
*parse_int*, if specified, will be called with the string of every
JSON int to be decoded. By default, this is equivalent to
``int(num_str)``. This can be used to use another datatype or parser
for JSON integers (e.g. :class:`float`).
*parse_constant*, if specified, will be called with one of the
following strings: ``'-Infinity'``, ``'Infinity'``, ``'NaN'``. This
can be used to raise an exception if invalid JSON numbers are
encountered.
If *use_decimal* is true (default: ``False``) then it implies
parse_float=decimal.Decimal for parity with ``dump``.
To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
kwarg. NOTE: You should use *object_hook* or *object_pairs_hook* instead
of subclassing whenever possible.
"""
if (cls is None and encoding is None and object_hook is None and
parse_int is None and parse_float is None and
parse_constant is None and object_pairs_hook is None
and not use_decimal and not kw):
return _default_decoder.decode(s)
if cls is None:
cls = JSONDecoder
if object_hook is not None:
kw['object_hook'] = object_hook
if object_pairs_hook is not None:
kw['object_pairs_hook'] = object_pairs_hook
if parse_float is not None:
kw['parse_float'] = parse_float
if parse_int is not None:
kw['parse_int'] = parse_int
if parse_constant is not None:
kw['parse_constant'] = parse_constant
if use_decimal:
if parse_float is not None:
raise TypeError("use_decimal=True implies parse_float=Decimal")
kw['parse_float'] = Decimal
return cls(encoding=encoding, **kw).decode(s)
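# Illustrative sketch (added comment, not upstream code): use_decimal gives
# lossless float parsing on the way in, mirroring its behaviour in dump/dumps:
#
#     >>> loads('1.1', use_decimal=True)
#     Decimal('1.1')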
def _toggle_speedups(enabled):
from . import decoder as dec
from . import encoder as enc
from . import scanner as scan
c_make_encoder = _import_c_make_encoder()
if enabled:
dec.scanstring = dec.c_scanstring or dec.py_scanstring
enc.c_make_encoder = c_make_encoder
enc.encode_basestring_ascii = (enc.c_encode_basestring_ascii or
enc.py_encode_basestring_ascii)
scan.make_scanner = scan.c_make_scanner or scan.py_make_scanner
else:
dec.scanstring = dec.py_scanstring
enc.c_make_encoder = None
enc.encode_basestring_ascii = enc.py_encode_basestring_ascii
scan.make_scanner = scan.py_make_scanner
dec.make_scanner = scan.make_scanner
global _default_decoder
_default_decoder = JSONDecoder(
encoding=None,
object_hook=None,
object_pairs_hook=None,
)
global _default_encoder
_default_encoder = JSONEncoder(
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
encoding='utf-8',
default=None,
)
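# Illustrative sketch for the helper defined below (added comment, not
# upstream code): passing it as item_sort_key emits scalar members before
# container members, each group sorted by key:
#
#     >>> dumps({'b': [1], 'c': {}, 'a': 3}, item_sort_key=simple_first)
#     '{"a": 3, "b": [1], "c": {}}'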
def simple_first(kv):
"""Helper function to pass to item_sort_key to sort simple
elements to the top, then container elements.
"""
    return (isinstance(kv[1], (list, dict, tuple)), kv[0])

# ---- agouti_pkg/simplejson/__init__.py (end of file) ----
from __future__ import print_function
import re
_MAXCACHE = 20
class Resolver(object):
_match_cache = {}
def __init__(self, pathattr='name'):
"""Resolve :any:`NodeMixin` paths using attribute `pathattr`."""
super(Resolver, self).__init__()
self.pathattr = pathattr
def get(self, node, path):
"""
Return instance at `path`.
An example module tree:
>>> from anytree import Node
>>> top = Node("top", parent=None)
>>> sub0 = Node("sub0", parent=top)
>>> sub0sub0 = Node("sub0sub0", parent=sub0)
>>> sub0sub1 = Node("sub0sub1", parent=sub0)
>>> sub1 = Node("sub1", parent=top)
A resolver using the `name` attribute:
>>> r = Resolver('name')
Relative paths:
>>> r.get(top, "sub0/sub0sub0")
Node('/top/sub0/sub0sub0')
>>> r.get(sub1, "..")
Node('/top')
>>> r.get(sub1, "../sub0/sub0sub1")
Node('/top/sub0/sub0sub1')
>>> r.get(sub1, ".")
Node('/top/sub1')
>>> r.get(sub1, "")
Node('/top/sub1')
>>> r.get(top, "sub2")
Traceback (most recent call last):
...
anytree.resolver.ChildResolverError: Node('/top') has no child sub2. Children are: 'sub0', 'sub1'.
Absolute paths:
>>> r.get(sub0sub0, "/top")
Node('/top')
>>> r.get(sub0sub0, "/top/sub0")
Node('/top/sub0')
>>> r.get(sub0sub0, "/")
Traceback (most recent call last):
...
anytree.resolver.ResolverError: root node missing. root is '/top'.
>>> r.get(sub0sub0, "/bar")
Traceback (most recent call last):
...
anytree.resolver.ResolverError: unknown root node '/bar'. root is '/top'.
"""
node, parts = self.__start(node, path)
for part in parts:
if part == "..":
node = node.parent
elif part in ("", "."):
pass
else:
node = self.__get(node, part)
return node
def __get(self, node, name):
for child in node.children:
if _getattr(child, self.pathattr) == name:
return child
raise ChildResolverError(node, name, self.pathattr)
def glob(self, node, path):
"""
Return instances at `path` supporting wildcards.
Behaves identical to :any:`get`, but accepts wildcards and returns
a list of found nodes.
* `*` matches any characters, except '/'.
* `?` matches a single character, except '/'.
An example module tree:
>>> from anytree import Node
>>> top = Node("top", parent=None)
>>> sub0 = Node("sub0", parent=top)
>>> sub0sub0 = Node("sub0", parent=sub0)
>>> sub0sub1 = Node("sub1", parent=sub0)
>>> sub1 = Node("sub1", parent=top)
>>> sub1sub0 = Node("sub0", parent=sub1)
A resolver using the `name` attribute:
>>> r = Resolver('name')
Relative paths:
>>> r.glob(top, "sub0/sub?")
[Node('/top/sub0/sub0'), Node('/top/sub0/sub1')]
>>> r.glob(sub1, ".././*")
[Node('/top/sub0'), Node('/top/sub1')]
>>> r.glob(top, "*/*")
[Node('/top/sub0/sub0'), Node('/top/sub0/sub1'), Node('/top/sub1/sub0')]
>>> r.glob(top, "*/sub0")
[Node('/top/sub0/sub0'), Node('/top/sub1/sub0')]
>>> r.glob(top, "sub1/sub1")
Traceback (most recent call last):
...
anytree.resolver.ChildResolverError: Node('/top/sub1') has no child sub1. Children are: 'sub0'.
Non-matching wildcards are no error:
>>> r.glob(top, "bar*")
[]
>>> r.glob(top, "sub2")
Traceback (most recent call last):
...
anytree.resolver.ChildResolverError: Node('/top') has no child sub2. Children are: 'sub0', 'sub1'.
Absolute paths:
>>> r.glob(sub0sub0, "/top/*")
[Node('/top/sub0'), Node('/top/sub1')]
>>> r.glob(sub0sub0, "/")
Traceback (most recent call last):
...
anytree.resolver.ResolverError: root node missing. root is '/top'.
>>> r.glob(sub0sub0, "/bar")
Traceback (most recent call last):
...
anytree.resolver.ResolverError: unknown root node '/bar'. root is '/top'.
"""
node, parts = self.__start(node, path)
return self.__glob(node, parts)
def __start(self, node, path):
sep = node.separator
parts = path.split(sep)
if path.startswith(sep):
node = node.root
rootpart = _getattr(node, self.pathattr)
parts.pop(0)
if not parts[0]:
msg = "root node missing. root is '%s%s'."
raise ResolverError(node, "", msg % (sep, str(rootpart)))
elif parts[0] != rootpart:
msg = "unknown root node '%s%s'. root is '%s%s'."
raise ResolverError(node, "", msg % (sep, parts[0], sep, str(rootpart)))
parts.pop(0)
return node, parts
def __glob(self, node, parts):
nodes = []
name = parts[0]
remainder = parts[1:]
# handle relative
if name == "..":
nodes += self.__glob(node.parent, remainder)
elif name in ("", "."):
nodes += self.__glob(node, remainder)
else:
matches = self.__find(node, name, remainder)
if not matches and not Resolver.is_wildcard(name):
raise ChildResolverError(node, name, self.pathattr)
nodes += matches
return nodes
def __find(self, node, pat, remainder):
matches = []
for child in node.children:
name = _getattr(child, self.pathattr)
try:
if Resolver.__match(name, pat):
if remainder:
matches += self.__glob(child, remainder)
else:
matches.append(child)
except ResolverError as exc:
if not Resolver.is_wildcard(pat):
raise exc
return matches
@staticmethod
def is_wildcard(path):
"""Return `True` is a wildcard."""
return "?" in path or "*" in path
@staticmethod
def __match(name, pat):
try:
re_pat = Resolver._match_cache[pat]
except KeyError:
res = Resolver.__translate(pat)
if len(Resolver._match_cache) >= _MAXCACHE:
Resolver._match_cache.clear()
Resolver._match_cache[pat] = re_pat = re.compile(res)
return re_pat.match(name) is not None
@staticmethod
def __translate(pat):
re_pat = ''
for char in pat:
if char == "*":
re_pat += ".*"
elif char == "?":
re_pat += "."
else:
re_pat += re.escape(char)
return re_pat + r'\Z(?ms)'
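# Illustrative note (added comment, not part of the upstream anytree source):
# __translate converts a glob pattern into an anchored regular expression
# that __match applies to each child's name attribute, e.g. "sub*" becomes
# r"sub.*\Z(?ms)" and "sub?" becomes r"sub.\Z(?ms)".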
class ResolverError(RuntimeError):
def __init__(self, node, child, msg):
"""Resolve Error at `node` handling `child`."""
super(ResolverError, self).__init__(msg)
self.node = node
self.child = child
class ChildResolverError(ResolverError):
def __init__(self, node, child, pathattr):
"""Child Resolve Error at `node` handling `child`."""
names = [repr(_getattr(c, pathattr)) for c in node.children]
msg = "%r has no child %s. Children are: %s."
msg = msg % (node, child, ", ".join(names))
super(ChildResolverError, self).__init__(node, child, msg)
def _getattr(node, name):
    return getattr(node, name, None)

# ---- agouti_pkg/anytree/resolver.py (end of file) ----
from agouti_pkg.anytree.iterators import PreOrderIter
def findall(node, filter_=None, stop=None, maxlevel=None, mincount=None, maxcount=None):
"""
Search nodes matching `filter_` but stop at `maxlevel` or `stop`.
Return tuple with matching nodes.
Args:
node: top node, start searching.
Keyword Args:
filter_: function called with every `node` as argument, `node` is returned if `True`.
stop: stop iteration at `node` if `stop` function returns `True` for `node`.
        maxlevel (int): maximum depth to descend in the node hierarchy.
mincount (int): minimum number of nodes.
maxcount (int): maximum number of nodes.
Example tree:
>>> from anytree import Node, RenderTree, AsciiStyle
>>> f = Node("f")
>>> b = Node("b", parent=f)
>>> a = Node("a", parent=b)
>>> d = Node("d", parent=b)
>>> c = Node("c", parent=d)
>>> e = Node("e", parent=d)
>>> g = Node("g", parent=f)
>>> i = Node("i", parent=g)
>>> h = Node("h", parent=i)
>>> print(RenderTree(f, style=AsciiStyle()).by_attr())
f
|-- b
| |-- a
| +-- d
| |-- c
| +-- e
+-- g
+-- i
+-- h
>>> findall(f, filter_=lambda node: node.name in ("a", "b"))
(Node('/f/b'), Node('/f/b/a'))
>>> findall(f, filter_=lambda node: d in node.path)
(Node('/f/b/d'), Node('/f/b/d/c'), Node('/f/b/d/e'))
The number of matches can be limited:
>>> findall(f, filter_=lambda node: d in node.path, mincount=4) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
anytree.search.CountError: Expecting at least 4 elements, but found 3. ... Node('/f/b/d/e'))
>>> findall(f, filter_=lambda node: d in node.path, maxcount=2) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
anytree.search.CountError: Expecting 2 elements at maximum, but found 3. ... Node('/f/b/d/e'))
"""
return _findall(node, filter_=filter_, stop=stop,
maxlevel=maxlevel, mincount=mincount, maxcount=maxcount)
def findall_by_attr(node, value, name="name", maxlevel=None, mincount=None, maxcount=None):
"""
Search nodes with attribute `name` having `value` but stop at `maxlevel`.
Return tuple with matching nodes.
Args:
node: top node, start searching.
value: value which need to match
Keyword Args:
name (str): attribute name need to match
        maxlevel (int): maximum depth to descend in the node hierarchy.
mincount (int): minimum number of nodes.
maxcount (int): maximum number of nodes.
Example tree:
>>> from anytree import Node, RenderTree, AsciiStyle
>>> f = Node("f")
>>> b = Node("b", parent=f)
>>> a = Node("a", parent=b)
>>> d = Node("d", parent=b)
>>> c = Node("c", parent=d)
>>> e = Node("e", parent=d)
>>> g = Node("g", parent=f)
>>> i = Node("i", parent=g)
>>> h = Node("h", parent=i)
>>> print(RenderTree(f, style=AsciiStyle()).by_attr())
f
|-- b
| |-- a
| +-- d
| |-- c
| +-- e
+-- g
+-- i
+-- h
>>> findall_by_attr(f, "d")
(Node('/f/b/d'),)
"""
return _findall(node, filter_=lambda n: _filter_by_name(n, name, value),
maxlevel=maxlevel, mincount=mincount, maxcount=maxcount)
def find(node, filter_=None, stop=None, maxlevel=None):
"""
Search for *single* node matching `filter_` but stop at `maxlevel` or `stop`.
Return matching node.
Args:
node: top node, start searching.
Keyword Args:
filter_: function called with every `node` as argument, `node` is returned if `True`.
stop: stop iteration at `node` if `stop` function returns `True` for `node`.
        maxlevel (int): maximum depth to descend in the node hierarchy.
Example tree:
>>> from anytree import Node, RenderTree, AsciiStyle
>>> f = Node("f")
>>> b = Node("b", parent=f)
>>> a = Node("a", parent=b)
>>> d = Node("d", parent=b)
>>> c = Node("c", parent=d)
>>> e = Node("e", parent=d)
>>> g = Node("g", parent=f)
>>> i = Node("i", parent=g)
>>> h = Node("h", parent=i)
>>> print(RenderTree(f, style=AsciiStyle()).by_attr())
f
|-- b
| |-- a
| +-- d
| |-- c
| +-- e
+-- g
+-- i
+-- h
>>> find(f, lambda node: node.name == "d")
Node('/f/b/d')
>>> find(f, lambda node: node.name == "z")
>>> find(f, lambda node: b in node.path) # doctest: +ELLIPSIS
Traceback (most recent call last):
...
anytree.search.CountError: Expecting 1 elements at maximum, but found 5. (Node('/f/b')... Node('/f/b/d/e'))
"""
return _find(node, filter_=filter_, stop=stop, maxlevel=maxlevel)
def find_by_attr(node, value, name="name", maxlevel=None):
"""
Search for *single* node with attribute `name` having `value` but stop at `maxlevel`.
Return tuple with matching nodes.
Args:
node: top node, start searching.
value: value which need to match
Keyword Args:
name (str): attribute name need to match
        maxlevel (int): maximum depth to descend in the node hierarchy.
Example tree:
>>> from anytree import Node, RenderTree, AsciiStyle
>>> f = Node("f")
>>> b = Node("b", parent=f)
>>> a = Node("a", parent=b)
>>> d = Node("d", parent=b)
>>> c = Node("c", parent=d, foo=4)
>>> e = Node("e", parent=d)
>>> g = Node("g", parent=f)
>>> i = Node("i", parent=g)
>>> h = Node("h", parent=i)
>>> print(RenderTree(f, style=AsciiStyle()).by_attr())
f
|-- b
| |-- a
| +-- d
| |-- c
| +-- e
+-- g
+-- i
+-- h
>>> find_by_attr(f, "d")
Node('/f/b/d')
>>> find_by_attr(f, name="foo", value=4)
Node('/f/b/d/c', foo=4)
>>> find_by_attr(f, name="foo", value=8)
"""
return _find(node, filter_=lambda n: _filter_by_name(n, name, value),
maxlevel=maxlevel)
def _find(node, filter_, stop=None, maxlevel=None):
items = _findall(node, filter_, stop=stop, maxlevel=maxlevel, maxcount=1)
return items[0] if items else None
def _findall(node, filter_, stop=None, maxlevel=None, mincount=None, maxcount=None):
result = tuple(PreOrderIter(node, filter_, stop, maxlevel))
resultlen = len(result)
if mincount is not None and resultlen < mincount:
msg = "Expecting at least %d elements, but found %d."
raise CountError(msg % (mincount, resultlen), result)
if maxcount is not None and resultlen > maxcount:
msg = "Expecting %d elements at maximum, but found %d."
raise CountError(msg % (maxcount, resultlen), result)
return result
def _filter_by_name(node, name, value):
try:
return getattr(node, name) == value
except AttributeError:
return False
class CountError(RuntimeError):
def __init__(self, msg, result):
"""Error raised on `mincount` or `maxcount` mismatch."""
if result:
msg += " " + repr(result)
super(CountError, self).__init__(msg) | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/search.py | search.py |
class Walker(object):
def __init__(self):
"""Walk from one node to another."""
super(Walker, self).__init__()
def walk(self, start, end):
"""
Walk from `start` node to `end` node.
Returns:
(upwards, common, downwards): `upwards` is a list of nodes to go upward to.
`common` top node. `downwards` is a list of nodes to go downward to.
Raises:
WalkError: on no common root node.
>>> from anytree import Node, RenderTree, AsciiStyle
>>> f = Node("f")
>>> b = Node("b", parent=f)
>>> a = Node("a", parent=b)
>>> d = Node("d", parent=b)
>>> c = Node("c", parent=d)
>>> e = Node("e", parent=d)
>>> g = Node("g", parent=f)
>>> i = Node("i", parent=g)
>>> h = Node("h", parent=i)
>>> print(RenderTree(f, style=AsciiStyle()))
Node('/f')
|-- Node('/f/b')
| |-- Node('/f/b/a')
| +-- Node('/f/b/d')
| |-- Node('/f/b/d/c')
| +-- Node('/f/b/d/e')
+-- Node('/f/g')
+-- Node('/f/g/i')
+-- Node('/f/g/i/h')
Create a walker:
>>> w = Walker()
This class is made for walking:
>>> w.walk(f, f)
((), Node('/f'), ())
>>> w.walk(f, b)
((), Node('/f'), (Node('/f/b'),))
>>> w.walk(b, f)
((Node('/f/b'),), Node('/f'), ())
>>> w.walk(h, e)
((Node('/f/g/i/h'), Node('/f/g/i'), Node('/f/g')), Node('/f'), (Node('/f/b'), Node('/f/b/d'), Node('/f/b/d/e')))
>>> w.walk(d, e)
((), Node('/f/b/d'), (Node('/f/b/d/e'),))
For a proper walking the nodes need to be part of the same tree:
>>> w.walk(Node("a"), Node("b"))
Traceback (most recent call last):
...
anytree.walker.WalkError: Node('/a') and Node('/b') are not part of the same tree.
"""
s = start.path
e = end.path
if start.root != end.root:
msg = "%r and %r are not part of the same tree." % (start, end)
raise WalkError(msg)
# common
c = Walker.__calc_common(s, e)
assert c[0] is start.root
len_c = len(c)
# up
if start is c[-1]:
up = tuple()
else:
up = tuple(reversed(s[len_c:]))
# down
if end is c[-1]:
down = tuple()
else:
down = e[len_c:]
return up, c[-1], down
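# __calc_common below returns the longest common prefix of the two node
# paths, compared by identity; its last element is the deepest shared
# ancestor, i.e. the turning point between the upward and downward legs.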
@staticmethod
def __calc_common(s, e):
return tuple([si for si, ei in zip(s, e) if si is ei])
class WalkError(RuntimeError):
"""Walk Error.""" | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/walker.py | walker.py |
import collections
import agouti_pkg.six
Row = collections.namedtuple("Row", ("pre", "fill", "node"))
class AbstractStyle(object):
def __init__(self, vertical, cont, end):
"""
Tree Render Style.
Args:
vertical: Sign for vertical line.
cont: Chars for a continued branch.
end: Chars for the last branch.
"""
super(AbstractStyle, self).__init__()
self.vertical = vertical
self.cont = cont
self.end = end
assert (len(cont) == len(vertical) and len(cont) == len(end)), (
"'%s', '%s' and '%s' need to have equal length" % (vertical, cont,
end))
@property
def empty(self):
"""Empty string as placeholder."""
return ' ' * len(self.end)
def __repr__(self):
classname = self.__class__.__name__
return "%s()" % classname
class AsciiStyle(AbstractStyle):
def __init__(self):
"""
Ascii style.
>>> from anytree import Node, RenderTree
>>> root = Node("root")
>>> s0 = Node("sub0", parent=root)
>>> s0b = Node("sub0B", parent=s0)
>>> s0a = Node("sub0A", parent=s0)
>>> s1 = Node("sub1", parent=root)
>>> print(RenderTree(root, style=AsciiStyle()))
Node('/root')
|-- Node('/root/sub0')
| |-- Node('/root/sub0/sub0B')
| +-- Node('/root/sub0/sub0A')
+-- Node('/root/sub1')
"""
super(AsciiStyle, self).__init__(u'| ', u'|-- ', u'+-- ')
class ContStyle(AbstractStyle):
def __init__(self):
u"""
Continued style, without gaps.
>>> from anytree import Node, RenderTree
>>> root = Node("root")
>>> s0 = Node("sub0", parent=root)
>>> s0b = Node("sub0B", parent=s0)
>>> s0a = Node("sub0A", parent=s0)
>>> s1 = Node("sub1", parent=root)
>>> print(RenderTree(root, style=ContStyle()))
Node('/root')
├── Node('/root/sub0')
│ ├── Node('/root/sub0/sub0B')
│ └── Node('/root/sub0/sub0A')
└── Node('/root/sub1')
"""
super(ContStyle, self).__init__(u'\u2502 ',
u'\u251c\u2500\u2500 ',
u'\u2514\u2500\u2500 ')
class ContRoundStyle(AbstractStyle):
def __init__(self):
u"""
Continued style, without gaps, round edges.
>>> from anytree import Node, RenderTree
>>> root = Node("root")
>>> s0 = Node("sub0", parent=root)
>>> s0b = Node("sub0B", parent=s0)
>>> s0a = Node("sub0A", parent=s0)
>>> s1 = Node("sub1", parent=root)
>>> print(RenderTree(root, style=ContRoundStyle()))
Node('/root')
├── Node('/root/sub0')
│ ├── Node('/root/sub0/sub0B')
│ ╰── Node('/root/sub0/sub0A')
╰── Node('/root/sub1')
"""
super(ContRoundStyle, self).__init__(u'\u2502 ',
u'\u251c\u2500\u2500 ',
u'\u2570\u2500\u2500 ')
class DoubleStyle(AbstractStyle):
def __init__(self):
u"""
Double line style, without gaps.
>>> from anytree import Node, RenderTree
>>> root = Node("root")
>>> s0 = Node("sub0", parent=root)
>>> s0b = Node("sub0B", parent=s0)
>>> s0a = Node("sub0A", parent=s0)
>>> s1 = Node("sub1", parent=root)
>>> print(RenderTree(root, style=DoubleStyle))
Node('/root')
╠══ Node('/root/sub0')
║ ╠══ Node('/root/sub0/sub0B')
║ ╚══ Node('/root/sub0/sub0A')
╚══ Node('/root/sub1')
"""
super(DoubleStyle, self).__init__(u'\u2551 ',
u'\u2560\u2550\u2550 ',
u'\u255a\u2550\u2550 ')
@agouti_pkg.six.python_2_unicode_compatible
class RenderTree(object):
def __init__(self, node, style=ContStyle(), childiter=list):
u"""
Render tree starting at `node`.
Keyword Args:
style (AbstractStyle): Render Style.
childiter: Child iterator.
:any:`RenderTree` is an iterator, returning a tuple with 3 items:
`pre`
tree prefix.
`fill`
filling for multiline entries.
`node`
:any:`NodeMixin` object.
It is up to the user to assemble these parts to a whole.
>>> from anytree import Node, RenderTree
>>> root = Node("root", lines=["c0fe", "c0de"])
>>> s0 = Node("sub0", parent=root, lines=["ha", "ba"])
>>> s0b = Node("sub0B", parent=s0, lines=["1", "2", "3"])
>>> s0a = Node("sub0A", parent=s0, lines=["a", "b"])
>>> s1 = Node("sub1", parent=root, lines=["Z"])
Simple one line:
>>> for pre, _, node in RenderTree(root):
... print("%s%s" % (pre, node.name))
root
├── sub0
│ ├── sub0B
│ └── sub0A
└── sub1
Multiline:
>>> for pre, fill, node in RenderTree(root):
... print("%s%s" % (pre, node.lines[0]))
... for line in node.lines[1:]:
... print("%s%s" % (fill, line))
c0fe
c0de
├── ha
│ ba
│ ├── 1
│ │ 2
│ │ 3
│ └── a
│ b
└── Z
The `childiter` is responsible for iterating over child nodes at the
same level. A reversed order can be achieved by using `reversed`.
>>> for row in RenderTree(root, childiter=reversed):
... print("%s%s" % (row.pre, row.node.name))
root
├── sub1
└── sub0
├── sub0A
└── sub0B
Or writing your own sort function:
>>> def mysort(items):
... return sorted(items, key=lambda item: item.name)
>>> for row in RenderTree(root, childiter=mysort):
... print("%s%s" % (row.pre, row.node.name))
root
├── sub0
│ ├── sub0A
│ └── sub0B
└── sub1
:any:`by_attr` simplifies attribute rendering and supports multiline:
>>> print(RenderTree(root).by_attr())
root
├── sub0
│ ├── sub0B
│ └── sub0A
└── sub1
>>> print(RenderTree(root).by_attr("lines"))
c0fe
c0de
├── ha
│ ba
│ ├── 1
│ │ 2
│ │ 3
│ └── a
│ b
└── Z
"""
if not isinstance(style, AbstractStyle):
style = style()
self.node = node
self.style = style
self.childiter = childiter
def __iter__(self):
return self.__next(self.node, tuple())
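# `continues` holds one boolean per ancestor level, recording whether further
# siblings follow at that level. __item uses it to pick vertical bars vs.
# blank padding for the indent/fill, and a continued vs. last branch glyph
# for the node itself.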
def __next(self, node, continues):
yield RenderTree.__item(node, continues, self.style)
children = node.children
if children:
lastidx = len(children) - 1
for idx, child in enumerate(self.childiter(children)):
for grandchild in self.__next(child, continues + (idx != lastidx, )):
yield grandchild
@staticmethod
def __item(node, continues, style):
if not continues:
return Row(u'', u'', node)
else:
items = [style.vertical if cont else style.empty for cont in continues]
indent = ''.join(items[:-1])
branch = style.cont if continues[-1] else style.end
pre = indent + branch
fill = ''.join(items)
return Row(pre, fill, node)
def __str__(self):
lines = ["%s%r" % (pre, node) for pre, _, node in self]
return "\n".join(lines)
def __repr__(self):
classname = self.__class__.__name__
args = [repr(self.node),
"style=%s" % repr(self.style),
"childiter=%s" % repr(self.childiter)]
return "%s(%s)" % (classname, ", ".join(args))
def by_attr(self, attrname="name"):
"""Return rendered tree with node attribute `attrname`."""
def get():
for pre, fill, node in self:
attr = getattr(node, attrname, "")
if isinstance(attr, (list, tuple)):
lines = attr
else:
lines = str(attr).split("\n")
yield u"%s%s" % (pre, lines[0])
for line in lines[1:]:
yield u"%s%s" % (fill, line)
return "\n".join(get()) | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/render.py | render.py |
import logging
import codecs
from os import path
from os import remove
from subprocess import check_call
from tempfile import NamedTemporaryFile
from agouti_pkg.anytree import PreOrderIter
class DotExporter(object):
def __init__(self, node, graph="digraph", name="tree", options=None,
indent=4, nodenamefunc=None, nodeattrfunc=None,
edgeattrfunc=None, edgetypefunc=None):
"""
Dot Language Exporter.
Args:
node (Node): start node.
Keyword Args:
graph: DOT graph type.
name: DOT graph name.
options: list of options added to the graph.
indent (int): number of spaces for indent.
nodenamefunc: Function to extract node name from `node` object.
The function shall accept one `node` object as
argument and return the name of it.
nodeattrfunc: Function to decorate a node with attributes.
The function shall accept one `node` object as
argument and return the attributes.
edgeattrfunc: Function to decorate an edge with attributes.
The function shall accept two `node` objects as
arguments: the parent node and the child node. It shall
return the attributes.
edgetypefunc: Function which gives the edge type.
The function shall accept two `node` objects as
arguments: the parent node and the child node. It shall
return the edge type (i.e. '->').
>>> from agouti_pkg.anytree import Node
>>> root = Node("root")
>>> s0 = Node("sub0", parent=root, edge=2)
>>> s0b = Node("sub0B", parent=s0, foo=4, edge=109)
>>> s0a = Node("sub0A", parent=s0, edge="")
>>> s1 = Node("sub1", parent=root, edge="")
>>> s1a = Node("sub1A", parent=s1, edge=7)
>>> s1b = Node("sub1B", parent=s1, edge=8)
>>> s1c = Node("sub1C", parent=s1, edge=22)
>>> s1ca = Node("sub1Ca", parent=s1c, edge=42)
A directed graph:
>>> from agouti_pkg.anytree.exporter import DotExporter
>>> for line in DotExporter(root):
... print(line)
digraph tree {
"root";
"sub0";
"sub0B";
"sub0A";
"sub1";
"sub1A";
"sub1B";
"sub1C";
"sub1Ca";
"root" -> "sub0";
"root" -> "sub1";
"sub0" -> "sub0B";
"sub0" -> "sub0A";
"sub1" -> "sub1A";
"sub1" -> "sub1B";
"sub1" -> "sub1C";
"sub1C" -> "sub1Ca";
}
An undirected graph:
>>> def nodenamefunc(node):
... return '%s:%s' % (node.name, node.depth)
>>> def edgeattrfunc(node, child):
... return 'label="%s:%s"' % (node.name, child.name)
>>> def edgetypefunc(node, child):
... return '--'
>>> from agouti_pkg.anytree.exporter import DotExporter
>>> for line in DotExporter(root, graph="graph",
... nodenamefunc=nodenamefunc,
... nodeattrfunc=lambda node: "shape=box",
... edgeattrfunc=edgeattrfunc,
... edgetypefunc=edgetypefunc):
... print(line)
graph tree {
"root:0" [shape=box];
"sub0:1" [shape=box];
"sub0B:2" [shape=box];
"sub0A:2" [shape=box];
"sub1:1" [shape=box];
"sub1A:2" [shape=box];
"sub1B:2" [shape=box];
"sub1C:2" [shape=box];
"sub1Ca:3" [shape=box];
"root:0" -- "sub0:1" [label="root:sub0"];
"root:0" -- "sub1:1" [label="root:sub1"];
"sub0:1" -- "sub0B:2" [label="sub0:sub0B"];
"sub0:1" -- "sub0A:2" [label="sub0:sub0A"];
"sub1:1" -- "sub1A:2" [label="sub1:sub1A"];
"sub1:1" -- "sub1B:2" [label="sub1:sub1B"];
"sub1:1" -- "sub1C:2" [label="sub1:sub1C"];
"sub1C:2" -- "sub1Ca:3" [label="sub1C:sub1Ca"];
}
"""
self.node = node
self.graph = graph
self.name = name
self.options = options
self.indent = indent
self.nodenamefunc = nodenamefunc
self.nodeattrfunc = nodeattrfunc
self.edgeattrfunc = edgeattrfunc
self.edgetypefunc = edgetypefunc
def __iter__(self):
# prepare
indent = " " * self.indent
nodenamefunc = self.nodenamefunc or DotExporter.__default_nodenamefunc
nodeattrfunc = self.nodeattrfunc or DotExporter.__default_nodeattrfunc
edgeattrfunc = self.edgeattrfunc or DotExporter.__default_edgeattrfunc
edgetypefunc = self.edgetypefunc or DotExporter.__default_edgetypefunc
return self.__iter(indent, nodenamefunc, nodeattrfunc, edgeattrfunc,
edgetypefunc)
@staticmethod
def __default_nodenamefunc(node):
return node.name
@staticmethod
def __default_nodeattrfunc(node):
return None
@staticmethod
def __default_edgeattrfunc(node, child):
return None
@staticmethod
def __default_edgetypefunc(node, child):
return "->"
def __iter(self, indent, nodenamefunc, nodeattrfunc, edgeattrfunc, edgetypefunc):
yield "{self.graph} {self.name} {{".format(self=self)
for option in self.__iter_options(indent):
yield option
for node in self.__iter_nodes(indent, nodenamefunc, nodeattrfunc):
yield node
for edge in self.__iter_edges(indent, nodenamefunc, edgeattrfunc, edgetypefunc):
yield edge
yield "}"
def __iter_options(self, indent):
options = self.options
if options:
for option in options:
yield "%s%s" % (indent, option)
def __iter_nodes(self, indent, nodenamefunc, nodeattrfunc):
for node in PreOrderIter(self.node):
nodename = nodenamefunc(node)
nodeattr = nodeattrfunc(node)
nodeattr = " [%s]" % nodeattr if nodeattr is not None else ""
yield '%s"%s"%s;' % (indent, nodename, nodeattr)
def __iter_edges(self, indent, nodenamefunc, edgeattrfunc, edgetypefunc):
for node in PreOrderIter(self.node):
nodename = nodenamefunc(node)
for child in node.children:
childname = nodenamefunc(child)
edgeattr = edgeattrfunc(node, child)
edgetype = edgetypefunc(node, child)
edgeattr = " [%s]" % edgeattr if edgeattr is not None else ""
yield '%s"%s" %s "%s"%s;' % (indent, nodename, edgetype,
childname, edgeattr)
def to_dotfile(self, filename):
"""
Write graph to `filename`.
>>> from agouti_pkg.anytree import Node
>>> root = Node("root")
>>> s0 = Node("sub0", parent=root)
>>> s0b = Node("sub0B", parent=s0)
>>> s0a = Node("sub0A", parent=s0)
>>> s1 = Node("sub1", parent=root)
>>> s1a = Node("sub1A", parent=s1)
>>> s1b = Node("sub1B", parent=s1)
>>> s1c = Node("sub1C", parent=s1)
>>> s1ca = Node("sub1Ca", parent=s1c)
>>> from agouti_pkg.anytree.exporter import DotExporter
>>> DotExporter(root).to_dotfile("tree.dot")
The generated file should be handed over to the `dot` tool from the
http://www.graphviz.org/ package::
$ dot tree.dot -T png -o tree.png
"""
with codecs.open(filename, "w", "utf-8") as file:
for line in self:
file.write("%s\n" % line)
def to_picture(self, filename):
"""
Write graph to a temporary file and invoke `dot`.
The output file type is automatically detected from the file suffix.
*`graphviz` needs to be installed before using this method.*
"""
fileformat = path.splitext(filename)[1][1:]
with NamedTemporaryFile("wb", delete=False) as dotfile:
dotfilename = dotfile.name
for line in self:
dotfile.write(("%s\n" % line).encode("utf-8"))
dotfile.flush()
cmd = ["dot", dotfilename, "-T", fileformat, "-o", filename]
check_call(cmd)
try:
remove(dotfilename)
except Exception:
msg = 'Could not remove temporary file %s' % dotfilename
logging.getLogger(__name__).warning(msg)
class DictExporter(object):
def __init__(self, dictcls=dict, attriter=None, childiter=list):
"""
Tree to dictionary exporter.
Every node is converted to a dictionary with all instance
attributes as key-value pairs.
Child nodes are exported to the `children` attribute as a list of dictionaries.
Keyword Args:
dictcls: class used as dictionary. :any:`dict` by default.
attriter: attribute iterator for sorting and/or filtering.
childiter: child iterator for sorting and/or filtering.
>>> from pprint import pprint # just for nice printing
>>> from anytree import AnyNode
>>> from anytree.exporter import DictExporter
>>> root = AnyNode(a="root")
>>> s0 = AnyNode(a="sub0", parent=root)
>>> s0a = AnyNode(a="sub0A", b="foo", parent=s0)
>>> s0b = AnyNode(a="sub0B", parent=s0)
>>> s1 = AnyNode(a="sub1", parent=root)
>>> exporter = DictExporter()
>>> pprint(exporter.export(root)) # order within dictionary might vary!
{'a': 'root',
'children': [{'a': 'sub0',
'children': [{'a': 'sub0A', 'b': 'foo'}, {'a': 'sub0B'}]},
{'a': 'sub1'}]}
Python's dictionary `dict` does not preserve order.
:any:`collections.OrderedDict` does.
In this case attributes can be ordered via `attriter`.
>>> from collections import OrderedDict
>>> exporter = DictExporter(dictcls=OrderedDict, attriter=sorted)
>>> pprint(exporter.export(root))
OrderedDict([('a', 'root'),
('children',
[OrderedDict([('a', 'sub0'),
('children',
[OrderedDict([('a', 'sub0A'), ('b', 'foo')]),
OrderedDict([('a', 'sub0B')])])]),
OrderedDict([('a', 'sub1')])])])
The attribute iterator `attriter` may be used for filtering too.
For example, just dump attributes named `a`:
>>> exporter = DictExporter(attriter=lambda attrs: [(k, v) for k, v in attrs if k == "a"])
>>> pprint(exporter.export(root))
{'a': 'root',
'children': [{'a': 'sub0', 'children': [{'a': 'sub0A'}, {'a': 'sub0B'}]},
{'a': 'sub1'}]}
The child iterator `childiter` can be used for sorting and filtering likewise:
>>> exporter = DictExporter(childiter=lambda children: [child for child in children if "0" in child.a])
>>> pprint(exporter.export(root))
{'a': 'root',
'children': [{'a': 'sub0',
'children': [{'a': 'sub0A', 'b': 'foo'}, {'a': 'sub0B'}]}]}
"""
self.dictcls = dictcls
self.attriter = attriter
self.childiter = childiter
def export(self, node):
"""Export tree starting at `node`."""
attriter = self.attriter or (lambda attr_values: attr_values)
return self.__export(node, self.dictcls, attriter, self.childiter)
def __export(self, node, dictcls, attriter, childiter):
attr_values = attriter(self._iter_attr_values(node))
attr_values = DictExporter.__filter_node_internals(attr_values)
data = dictcls(attr_values)
children = [self.__export(child, dictcls, attriter, childiter)
for child in childiter(node.children)]
if children:
data['children'] = children
return data
def _iter_attr_values(self, node):
return node.__dict__.items()
@staticmethod
def __filter_node_internals(attr_values):
for attr, value in attr_values:
if attr in ("_NodeMixin__parent", "_NodeMixin__children"):
continue
yield attr, value | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/exporter/dictexporter.py | dictexporter.py |
from agouti_pkg.anytree import AnyNode
def _get_indentation(line):
content = line.lstrip(' ')
# Split string using version without indentation; First item of result is the indentation itself.
indentation_length = len(line.split(content)[0])
return indentation_length, content
class IndentedStringImporter(object):
def __init__(self, nodecls=AnyNode):
u"""
Import Tree from a single string (with all the lines) or list of strings (lines) with indentation.
Every indented line is converted to an instance of `nodecls`. The string (without indentation) found on each line is set as the respective node name.
This importer does not constrain the indentation to a fixed number of whitespace characters (or a multiple of any number). A node is considered a child of a parent simply if its indentation is larger than its parent's.
This means that the tree can have siblings with different indentations, as long as each sibling's indentation is larger than that of the respective parent (the siblings' indentations do not need to match each other).
Keyword Args:
nodecls: class used for nodes.
Example using a string list:
>>> from anytree.importer import IndentedStringImporter
>>> from anytree import RenderTree
>>> importer = IndentedStringImporter()
>>> lines = [
... 'Node1',
... 'Node2',
... ' Node3',
... 'Node5',
... ' Node6',
... ' Node7',
... ' Node8',
... ' Node9',
... ' Node10',
... ' Node11',
... ' Node12',
... 'Node13',
...]
>>> root = importer.import_(lines)
>>> print(RenderTree(root))
AnyNode(name='root')
├── AnyNode(name='Node1')
├── AnyNode(name='Node2')
│ └── AnyNode(name='Node3')
├── AnyNode(name='Node5')
│ ├── AnyNode(name='Node6')
│ │ └── AnyNode(name='Node7')
│ ├── AnyNode(name='Node8')
│ │ ├── AnyNode(name='Node9')
│ │ └── AnyNode(name='Node10')
│ ├── AnyNode(name='Node11')
│ └── AnyNode(name='Node12')
└── AnyNode(name='Node13')
Example using a string:
>>> string = "Node1\n Node2\n Node3\n Node4"
>>> root = importer.import_(string)
>>> print(RenderTree(root))
AnyNode(name='root')
└── AnyNode(name='Node1')
├── AnyNode(name='Node2')
└── AnyNode(name='Node3')
└── AnyNode(name='Node4')
"""
self.nodecls = nodecls
def _tree_from_indented_str(self, data):
if isinstance(data, str):
lines = data.splitlines()
else:
lines = data
root = self.nodecls(name="root")
indentations = {}
for line in lines:
current_indentation, name = _get_indentation(line)
if len(indentations) == 0:
parent = root
elif current_indentation not in indentations:
# parent is the next lower indentation
keys = [key for key in indentations.keys() if key < current_indentation]
parent = indentations[max(keys)]
else:
# current line uses the parent of the last line with same indentation and replaces
# it as the last line with this given indentation
parent = indentations[current_indentation].parent
indentations[current_indentation] = self.nodecls(name=name, parent=parent)
# delete all higher indentations
keys = [key for key in indentations.keys() if key > current_indentation]
for key in keys:
indentations.pop(key)
return root
def import_(self, data):
"""Import tree from `data`, which can be a single string or a list of lines."""
return self._tree_from_indented_str(data) | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/importer/indentedstringimporter.py | indentedstringimporter.py |
import warnings
from agouti_pkg.anytree.iterators import PreOrderIter
from .exceptions import LoopError
from .exceptions import TreeError
class NodeMixin(object):
__slots__ = ("__parent", "__children")
separator = "/"
u"""
The :any:`NodeMixin` class extends any Python class to a tree node.
The only tree relevant information is the `parent` attribute.
If `None` the :any:`NodeMixin` is root node.
If set to another node, the :any:`NodeMixin` becomes the child of it.
>>> from anytree import NodeMixin, RenderTree
>>> class MyBaseClass(object):
... foo = 4
>>> class MyClass(MyBaseClass, NodeMixin): # Add Node feature
... def __init__(self, name, length, width, parent=None):
... super(MyClass, self).__init__()
... self.name = name
... self.length = length
... self.width = width
... self.parent = parent
>>> my0 = MyClass('my0', 0, 0)
>>> my1 = MyClass('my1', 1, 0, parent=my0)
>>> my2 = MyClass('my2', 0, 2, parent=my0)
>>> for pre, _, node in RenderTree(my0):
... treestr = u"%s%s" % (pre, node.name)
... print(treestr.ljust(8), node.length, node.width)
my0 0 0
├── my1 1 0
└── my2 0 2
"""
@property
def parent(self):
u"""
Parent Node.
On set, the node is detached from any previous parent node and attached
to the new node.
>>> from anytree import Node, RenderTree
>>> udo = Node("Udo")
>>> marc = Node("Marc")
>>> lian = Node("Lian", parent=marc)
>>> print(RenderTree(udo))
Node('/Udo')
>>> print(RenderTree(marc))
Node('/Marc')
└── Node('/Marc/Lian')
**Attach**
>>> marc.parent = udo
>>> print(RenderTree(udo))
Node('/Udo')
└── Node('/Udo/Marc')
└── Node('/Udo/Marc/Lian')
**Detach**
To make a node to a root node, just set this attribute to `None`.
>>> marc.is_root
False
>>> marc.parent = None
>>> marc.is_root
True
"""
try:
return self.__parent
except AttributeError:
return None
@parent.setter
def parent(self, value):
if value is not None and not isinstance(value, NodeMixin):
msg = "Parent node %r is not of type 'NodeMixin'." % (value)
raise TreeError(msg)
try:
parent = self.__parent
except AttributeError:
parent = None
if parent is not value:
self.__check_loop(value)
self.__detach(parent)
self.__attach(value)
def __check_loop(self, node):
if node is not None:
if node is self:
msg = "Cannot set parent. %r cannot be parent of itself."
raise LoopError(msg % self)
if self in node.path:
msg = "Cannot set parent. %r is parent of %r."
raise LoopError(msg % (self, node))
def __detach(self, parent):
if parent is not None:
self._pre_detach(parent)
parentchildren = parent.__children_
assert any([child is self for child in parentchildren]), "Tree internal data is corrupt."
# ATOMIC START
parentchildren.remove(self)
self.__parent = None
# ATOMIC END
self._post_detach(parent)
def __attach(self, parent):
if parent is not None:
self._pre_attach(parent)
parentchildren = parent.__children_
assert not any([child is self for child in parentchildren]), "Tree internal data is corrupt."
# ATOMIC START
parentchildren.append(self)
self.__parent = parent
# ATOMIC END
self._post_attach(parent)
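# The internal child list is created lazily on first access; until then the
# slot is unset, so this property guards against the AttributeError.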
@property
def __children_(self):
try:
return self.__children
except AttributeError:
self.__children = []
return self.__children
@property
def children(self):
"""
All child nodes.
>>> from anytree import Node
>>> n = Node("n")
>>> a = Node("a", parent=n)
>>> b = Node("b", parent=n)
>>> c = Node("c", parent=n)
>>> n.children
(Node('/n/a'), Node('/n/b'), Node('/n/c'))
Modifying the children attribute modifies the tree.
**Detach**
The children attribute can be updated by setting to an iterable.
>>> n.children = [a, b]
>>> n.children
(Node('/n/a'), Node('/n/b'))
Node `c` is removed from the tree.
In case of an existing reference, the node `c` does not vanish and is the root of its own tree.
>>> c
Node('/c')
**Attach**
>>> d = Node("d")
>>> d
Node('/d')
>>> n.children = [a, b, d]
>>> n.children
(Node('/n/a'), Node('/n/b'), Node('/n/d'))
>>> d
Node('/n/d')
**Duplicate**
A node can only be a child once. Duplicates cause a :any:`TreeError`:
>>> n.children = [a, b, d, a]
Traceback (most recent call last):
...
anytree.node.exceptions.TreeError: Cannot add node Node('/n/a') multiple times as child.
"""
return tuple(self.__children_)
@staticmethod
def __check_children(children):
seen = set()
for child in children:
if not isinstance(child, NodeMixin):
msg = ("Cannot add non-node object %r. "
"It is not a subclass of 'NodeMixin'.") % child
raise TreeError(msg)
if child not in seen:
seen.add(child)
else:
msg = "Cannot add node %r multiple times as child." % child
raise TreeError(msg)
@children.setter
def children(self, children):
# convert iterable to tuple
children = tuple(children)
NodeMixin.__check_children(children)
# ATOMIC start
old_children = self.children
del self.children
try:
self._pre_attach_children(children)
for child in children:
child.parent = self
self._post_attach_children(children)
assert len(self.children) == len(children)
except Exception:
self.children = old_children
raise
# ATOMIC end
@children.deleter
def children(self):
children = self.children
self._pre_detach_children(children)
for child in self.children:
child.parent = None
assert len(self.children) == 0
self._post_detach_children(children)
def _pre_detach_children(self, children):
"""Method call before detaching `children`."""
pass
def _post_detach_children(self, children):
"""Method call after detaching `children`."""
pass
def _pre_attach_children(self, children):
"""Method call before attaching `children`."""
pass
def _post_attach_children(self, children):
"""Method call after attaching `children`."""
pass
@property
def path(self):
"""
Path of this `Node`.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.path
(Node('/Udo'),)
>>> marc.path
(Node('/Udo'), Node('/Udo/Marc'))
>>> lian.path
(Node('/Udo'), Node('/Udo/Marc'), Node('/Udo/Marc/Lian'))
"""
return self._path
@property
def _path(self):
path = []
node = self
while node:
path.insert(0, node)
node = node.parent
return tuple(path)
@property
def ancestors(self):
"""
All parent nodes and their parent nodes.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.ancestors
()
>>> marc.ancestors
(Node('/Udo'),)
>>> lian.ancestors
(Node('/Udo'), Node('/Udo/Marc'))
"""
return self._path[:-1]
@property
def anchestors(self):
"""
All parent nodes and their parent nodes - see :any:`ancestors`.
The attribute `anchestors` is just a typo of `ancestors`. Please use `ancestors`.
This attribute will be removed in the 3.0.0 release.
"""
warnings.warn(".anchestors was a typo and will be removed in version 3.0.0", DeprecationWarning)
return self.ancestors
@property
def descendants(self):
"""
All child nodes and all their child nodes.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> loui = Node("Loui", parent=marc)
>>> soe = Node("Soe", parent=lian)
>>> udo.descendants
(Node('/Udo/Marc'), Node('/Udo/Marc/Lian'), Node('/Udo/Marc/Lian/Soe'), Node('/Udo/Marc/Loui'))
>>> marc.descendants
(Node('/Udo/Marc/Lian'), Node('/Udo/Marc/Lian/Soe'), Node('/Udo/Marc/Loui'))
>>> lian.descendants
(Node('/Udo/Marc/Lian/Soe'),)
"""
return tuple(PreOrderIter(self))[1:]
@property
def root(self):
"""
Tree Root Node.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.root
Node('/Udo')
>>> marc.root
Node('/Udo')
>>> lian.root
Node('/Udo')
"""
if self.parent:
return self._path[0]
else:
return self
@property
def siblings(self):
"""
Tuple of nodes with the same parent.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> loui = Node("Loui", parent=marc)
>>> lazy = Node("Lazy", parent=marc)
>>> udo.siblings
()
>>> marc.siblings
()
>>> lian.siblings
(Node('/Udo/Marc/Loui'), Node('/Udo/Marc/Lazy'))
>>> loui.siblings
(Node('/Udo/Marc/Lian'), Node('/Udo/Marc/Lazy'))
"""
parent = self.parent
if parent is None:
return tuple()
else:
return tuple([node for node in parent.children if node != self])
@property
def is_leaf(self):
"""
`Node` has no children (External Node).
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.is_leaf
False
>>> marc.is_leaf
False
>>> lian.is_leaf
True
"""
return len(self.__children_) == 0
@property
def is_root(self):
"""
`Node` is tree root.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.is_root
True
>>> marc.is_root
False
>>> lian.is_root
False
"""
return self.parent is None
@property
def height(self):
"""
Number of edges on the longest path to a leaf `Node`.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.height
2
>>> marc.height
1
>>> lian.height
0
"""
if self.__children_:
return max([child.height for child in self.__children_]) + 1
else:
return 0
@property
def depth(self):
"""
Number of edges to the root `Node`.
>>> from anytree import Node
>>> udo = Node("Udo")
>>> marc = Node("Marc", parent=udo)
>>> lian = Node("Lian", parent=marc)
>>> udo.depth
0
>>> marc.depth
1
>>> lian.depth
2
"""
return len(self._path) - 1
def _pre_detach(self, parent):
"""Method call before detaching from `parent`."""
pass
def _post_detach(self, parent):
"""Method call after detaching from `parent`."""
pass
def _pre_attach(self, parent):
"""Method call before attaching to `parent`."""
pass
def _post_attach(self, parent):
"""Method call after attaching to `parent`."""
pass | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/node/nodemixin.py | nodemixin.py |
from .abstractiter import AbstractIter
class LevelOrderGroupIter(AbstractIter):
"""
Iterate over tree applying level-order strategy with grouping starting at `node`.
Return a tuple of nodes for each level. The first tuple contains the
nodes at level 0 (always `node`). The second tuple contains the nodes at level 1
(children of `node`). The next level contains the children of the children, and so on.
>>> from anytree import Node, RenderTree, AsciiStyle, LevelOrderGroupIter
>>> f = Node("f")
>>> b = Node("b", parent=f)
>>> a = Node("a", parent=b)
>>> d = Node("d", parent=b)
>>> c = Node("c", parent=d)
>>> e = Node("e", parent=d)
>>> g = Node("g", parent=f)
>>> i = Node("i", parent=g)
>>> h = Node("h", parent=i)
>>> print(RenderTree(f, style=AsciiStyle()).by_attr())
f
|-- b
| |-- a
| +-- d
| |-- c
| +-- e
+-- g
+-- i
+-- h
>>> [[node.name for node in children] for children in LevelOrderGroupIter(f)]
[['f'], ['b', 'g'], ['a', 'd', 'i'], ['c', 'e', 'h']]
>>> [[node.name for node in children] for children in LevelOrderGroupIter(f, maxlevel=3)]
[['f'], ['b', 'g'], ['a', 'd', 'i']]
>>> [[node.name for node in children]
... for children in LevelOrderGroupIter(f, filter_=lambda n: n.name not in ('e', 'g'))]
[['f'], ['b'], ['a', 'd', 'i'], ['c', 'h']]
>>> [[node.name for node in children]
... for children in LevelOrderGroupIter(f, stop=lambda n: n.name == 'd')]
[['f'], ['b', 'g'], ['a', 'i'], ['h']]
"""
@staticmethod
def _iter(children, filter_, stop, maxlevel):
level = 1
while children:
yield tuple([child for child in children if filter_(child)])
level += 1
if AbstractIter._abort_at_level(level, maxlevel):
break
children = LevelOrderGroupIter._get_grandchildren(children, stop)
@staticmethod
def _get_grandchildren(children, stop):
next_children = []
for child in children:
next_children = next_children + AbstractIter._get_children(child.children, stop)
return next_children | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/anytree/iterators/levelordergroupiter.py | levelordergroupiter.py |
import agouti_pkg.six
import collections
from agouti_pkg.gffutils import constants
# collections.MutableMapping is apparently the best way to provide dict-like
# interface (http://stackoverflow.com/a/3387975)
try:
class Attributes(collections.MutableMapping):
def __init__(self, *args, **kwargs):
"""
An Attributes object acts much like a dictionary. However, values are
always stored internally as lists, even if a single value is provided.
Whether or not you get a list back depends on the
`constants.always_return_list` setting, which can be set on-the-fly.
If True, then one-item lists are returned. This is best shown with an
example:
Set up an Attributes object:
>>> attr = Attributes()
Set the "Name" attribute with a string:
>>> attr['Name'] = 'gene1'
This is stored internally as a list, and by default, we'll get a list
back:
>>> assert attr['Name'] == ['gene1']
The same thing happens if we set it with a list in the first place:
>>> attr['Name'] = ['gene1']
>>> assert attr['Name'] == ['gene1']
Now, change the setting so that upon access, single-value lists are
returned as the first item.
>>> constants.always_return_list = False
>>> assert attr['Name'] == 'gene1'
Change it back again:
>>> constants.always_return_list = True
>>> assert attr['Name'] == ['gene1']
"""
self._d = dict()
self.update(*args, **kwargs)
def __setitem__(self, k, v):
if not isinstance(v, (list, tuple)):
v = [v]
self._d[k] = v
def __getitem__(self, k):
v = self._d[k]
if constants.always_return_list:
return v
if isinstance(v, list) and len(v) == 1:
v = v[0]
return v
def __delitem__(self, key):
del self._d[key]
def __iter__(self):
return iter(self.keys())
def __len__(self):
return len(self._d)
def keys(self):
return self._d.keys()
def values(self):
return [self.__getitem__(k) for k in self.keys()]
def items(self):
r = []
for k in self.keys():
r.append((k, self.__getitem__(k)))
return r
def __str__(self):
s = []
for i in self.items():
s.append("%s: %s" % i)
return '\n'.join(s)
def update(self, *args, **kwargs):
for k, v in agouti_pkg.six.iteritems(dict(*args, **kwargs)):
self[k] = v
except AttributeError:
class Attributes(collections.abc.MutableMapping):
def __init__(self, *args, **kwargs):
"""
An Attributes object acts much like a dictionary. However, values are
always stored internally as lists, even if a single value is provided.
Whether or not you get a list back depends on the
`constants.always_return_list` setting, which can be set on-the-fly.
If True, then one-item lists are returned. This is best shown with an
example:
Set up an Attributes object:
>>> attr = Attributes()
Set the "Name" attribute with a string:
>>> attr['Name'] = 'gene1'
This is stored internally as a list, and by default, we'll get a list
back:
>>> assert attr['Name'] == ['gene1']
The same thing happens if we set it with a list in the first place:
>>> attr['Name'] = ['gene1']
>>> assert attr['Name'] == ['gene1']
Now, change the setting so that upon access, single-value lists are
returned as the first item.
>>> constants.always_return_list = False
>>> assert attr['Name'] == 'gene1'
Change it back again:
>>> constants.always_return_list = True
>>> assert attr['Name'] == ['gene1']
"""
self._d = dict()
self.update(*args, **kwargs)
def __setitem__(self, k, v):
if not isinstance(v, (list, tuple)):
v = [v]
self._d[k] = v
def __getitem__(self, k):
v = self._d[k]
if constants.always_return_list:
return v
if isinstance(v, list) and len(v) == 1:
v = v[0]
return v
def __delitem__(self, key):
del self._d[key]
def __iter__(self):
return iter(self.keys())
def __len__(self):
return len(self._d)
def keys(self):
return self._d.keys()
def values(self):
return [self.__getitem__(k) for k in self.keys()]
def items(self):
r = []
for k in self.keys():
r.append((k, self.__getitem__(k)))
return r
def __str__(self):
s = []
for i in self.items():
s.append("%s: %s" % i)
return '\n'.join(s)
def update(self, *args, **kwargs):
for k, v in agouti_pkg.six.iteritems(dict(*args, **kwargs)):
self[k] = v
# Useful for profiling: which dictionary-like class to store attributes in.
# This is used in Feature below and in parser.py
dict_class = Attributes
#dict_class = dict
#dict_class = helper_classes.DefaultOrderedDict
#dict_class = collections.defaultdict
#dict_class = collections.OrderedDict
#dict_class = helper_classes.DefaultListOrderedDict | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/attributes.py | attributes.py |
SCHEMA = """
CREATE TABLE features (
id text,
seqid text,
source text,
featuretype text,
start int,
end int,
score text,
strand text,
frame text,
attributes text,
extra text,
bin int,
primary key (id)
);
CREATE TABLE relations (
parent text,
child text,
level int,
primary key (parent, child, level)
);
CREATE TABLE meta (
dialect text,
version text
);
CREATE TABLE directives (
directive text
);
CREATE TABLE autoincrements (
base text,
n int,
primary key (base)
);
CREATE TABLE duplicates (
idspecid text,
newid text,
primary key (newid)
);
"""
default_pragmas = {
'synchronous': 'NORMAL',
'journal_mode': 'MEMORY',
'main.page_size': 4096,
'main.cache_size': 10000,
}
_keys = ['id', 'seqid', 'source', 'featuretype', 'start', 'end', 'score',
'strand', 'frame', 'attributes', 'extra', 'bin']
_gffkeys = ['seqid', 'source', 'featuretype', 'start', 'end', 'score',
'strand', 'frame', 'attributes']
_gffkeys_extra = _gffkeys + ['extra']
_SELECT = "SELECT " \
+ ', '.join(_keys) \
+ ", features.rowid as file_order FROM features "
_INSERT = "INSERT INTO features (" \
+ ', '.join(_keys) + ") VALUES (" + ','.join(list('?' * len(_keys))) + ")"
_update_clause = ','.join(['%s = ?' % i for i in _keys])
_UPDATE = "UPDATE features SET " + _update_clause + " WHERE id = ?"
# TODO: create indexes once profiling figures out which ones work best....
INDEXES = []
# This dictionary keeps track of idiosyncrasies to [attempt to] maintain
# invariance of file->db->file round trips.
dialect = {
# Initial semicolon, e.g.,
#
# ;ID=001;
# vs
# ID=001;
'leading semicolon': False,
# Semicolon after the last value, e.g.,
#
# ID=001; Name=gene1;
# vs
# ID=001; Name=gene1
'trailing semicolon': False,
# e.g.,
#
# gene_id "GENE1"
# vs
# gene_id GENE1
'quoted GFF2 values': False,
# Sometimes there's extra space surrounding the semicolon, e.g.,
#
# ID=001;Name=gene1
# vs
# ID=001; Name=gene1
'field separator': ';',
# Usually "=" for GFF3; " " for GTF, e.g.,
#
# gene_id "GENE1"
# vs
# gene_id="GENE1"
'keyval separator': '=',
# Usually a comma, e.g.,
#
# Parent=gene1,gene2,gene3
'multival separator': ',',
# General GTF or GFF format
'fmt': 'gff3',
# How multiple values for the same key are handled, e.g.,
#
# Parent=gene1; Parent=gene2;
# vs
# Parent=gene1,gene2;
#
# (the first one has repeated keys)
'repeated keys': False,
# If these keys exist, then print them in this order.
'order': ['ID', 'Name', 'gene_id', 'transcript_id'],
}
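# Module-level behaviour flags: `always_return_list` controls whether attribute
# values are always returned as lists (see attributes.Attributes);
# `ignore_url_escape_characters`, when True, disables URL-unescaping (%XX
# decoding) of attribute values during parsing.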
always_return_list = True
ignore_url_escape_characters = False
# these keyword args are used by iterators.
_iterator_kwargs = (
'data',
'checklines', 'transform', 'force_dialect_check', 'dialect', 'from_string') | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/constants.py | constants.py |
import agouti_pkg.six
try:
from Bio.SeqFeature import SeqFeature, FeatureLocation
from Bio import SeqIO
except ImportError:
import warnings
warnings.warn("BioPython must be installed to use this module")
from .feature import Feature, feature_from_line
_biopython_strand = {
'+': 1,
'-': -1,
'.': 0,
}
_feature_strand = dict((v, k) for k, v in _biopython_strand.items())
def to_seqfeature(feature):
"""
Converts a gffutils.Feature object to a Bio.SeqFeature object.
The GFF fields `source`, `score`, `seqid`, and `frame` are stored as
qualifiers. GFF `attributes` are also stored as qualifiers.
Parameters
----------
feature : Feature object, or string
If string, assume it is a GFF or GTF-format line; otherwise just use
the provided feature directly.
"""
if isinstance(feature, agouti_pkg.six.string_types):
feature = feature_from_line(feature)
qualifiers = {
'source': [feature.source],
'score': [feature.score],
'seqid': [feature.seqid],
'frame': [feature.frame],
}
qualifiers.update(feature.attributes)
return SeqFeature(
# Convert from GFF 1-based to standard Python 0-based indexing used by
# BioPython
FeatureLocation(feature.start - 1, feature.stop),
id=feature.id,
type=feature.featuretype,
strand=_biopython_strand[feature.strand],
qualifiers=qualifiers
)
def from_seqfeature(s, **kwargs):
"""
Converts a Bio.SeqFeature object to a gffutils.Feature object.
The GFF fields `source`, `score`, `seqid`, and `frame` are assumed to be
stored as qualifiers. Any other qualifiers will be assumed to be GFF
attributes.
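A minimal round-trip sketch (assuming BioPython is installed and `line`
holds a single GFF3 feature line)::
    feat = feature_from_line(line)
    seqfeat = to_seqfeature(feat)
    back = from_seqfeature(seqfeat)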
"""
source = s.qualifiers.get('source', '.')[0]
score = s.qualifiers.get('score', '.')[0]
seqid = s.qualifiers.get('seqid', '.')[0]
frame = s.qualifiers.get('frame', '.')[0]
strand = _feature_strand[s.strand]
# BioPython parses 1-based GenBank positions into 0-based for use within
# Python. We need to convert back to 1-based GFF format here.
start = s.location.start.position + 1
stop = s.location.end.position
featuretype = s.type
id = s.id
attributes = dict(s.qualifiers)
attributes.pop('source', '.')
attributes.pop('score', '.')
attributes.pop('seqid', '.')
attributes.pop('frame', '.')
return Feature(seqid, source, featuretype, start, stop, score, strand,
frame, attributes, **kwargs) | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/biopython_integration.py | biopython_integration.py |
import os
import pybedtools
from pybedtools import featurefuncs
from agouti_pkg.gffutils import helpers
import agouti_pkg.six
def to_bedtool(iterator):
"""
Convert any iterator into a pybedtools.BedTool object.
Note that the supplied iterator is not consumed by this function. To save
to a temp file or to a known location, use the `.saveas()` method of the
returned BedTool object.
"""
def gen():
for i in iterator:
yield helpers.asinterval(i)
return pybedtools.BedTool(gen())
def tsses(db, merge_overlapping=False, attrs=None, attrs_sep=":",
merge_kwargs=None, as_bed6=False, bedtools_227_or_later=True):
"""
Create 1-bp transcription start sites for all transcripts in the database
and return as a sorted pybedtools.BedTool object pointing to a temporary
file.
To save the file to a known location, use the `.moveto()` method on the
resulting `pybedtools.BedTool` object.
To extend regions upstream/downstream, see the `.slop()` method on the
resulting `pybedtools.BedTool object`.
Requires pybedtools.
Parameters
----------
db : gffutils.FeatureDB
The database to use
as_bed6 : bool
If True, output file is in BED6 format; otherwise it remains in the
GFF/GTF format and dialect of the file used to create the database.
Note that the merge options below necessarily force `as_bed6=True`.
merge_overlapping : bool
If True, output will be in BED format. Overlapping TSSes will be merged
into a single feature, and their names will be collapsed using the merge
delimiter (the `delim` option of `merge_kwargs`) and placed in the new name field.
merge_kwargs : dict
If `merge_overlapping=True`, these keyword arguments are passed to
pybedtools.BedTool.merge(), which are in turn sent to `bedtools merge`.
The merge operates on a BED6 file which will have had the name field
constructed as specified by other arguments here. See the available
options for your installed version of BEDTools; the defaults used here
are `merge_kwargs=dict(o='distinct', c=4, s=True)`.
Any provided `merge_kwargs` are used to *update* the default. It is
recommended to not override `c=4` and `s=True`, otherwise the
post-merge fixing may not work correctly. Good candidates for tweaking
are `d` (merge distance), `o` (operation), `delim` (delimiter to use
for collapse operations).
attrs : str or list
Only has an effect when `as_bed6=True` or `merge_overlapping=True`.
Determines what goes in the name field of an output BED file. By
default, "gene_id" for GTF databases and "ID" for GFF. If a list of
attributes is supplied, e.g. ["gene_id", "transcript_id"], then these
will be joined by `attr_join_sep` and then placed in the name field.
attrs_sep: str
If `as_bed6=True` or `merge_overlapping=True`, then use this character
to separate attributes in the name field of the output BED. If also
using `merge_overlapping=True`, you'll probably want this to be
different than `merge_sep` in order to parse things out later.
bedtools_227_or_later : bool
In version 2.27, BEDTools changed the output for merge. By default,
this function expects BEDTools version 2.27 or later, but set this to
False to assume the older behavior.
For testing purposes, the environment variable
GFFUTILS_USES_BEDTOOLS_227_OR_LATER is set to either "true" or "false"
and is used to override this argument.
Examples
--------
>>> import gffutils
>>> db = gffutils.create_db(
... gffutils.example_filename('FBgn0031208.gtf'),
... ":memory:",
... keep_order=True,
... verbose=False)
Default settings -- no merging, and report a separate TSS on each line even
if they overlap (as in the first two):
>>> print(tsses(db)) # doctest: +NORMALIZE_WHITESPACE
chr2L gffutils_derived transcript_TSS 7529 7529 . + . gene_id "FBgn0031208"; transcript_id "FBtr0300689";
chr2L gffutils_derived transcript_TSS 7529 7529 . + . gene_id "FBgn0031208"; transcript_id "FBtr0300690";
chr2L gffutils_derived transcript_TSS 11000 11000 . - . gene_id "Fk_gene_1"; transcript_id "transcript_Fk_gene_1";
chr2L gffutils_derived transcript_TSS 12500 12500 . - . gene_id "Fk_gene_2"; transcript_id "transcript_Fk_gene_2";
<BLANKLINE>
Default merging, showing the first two TSSes merged and reported as
a single unique TSS for the gene. Note the conversion to BED:
>>> x = tsses(db, merge_overlapping=True)
>>> print(x) # doctest: +NORMALIZE_WHITESPACE
chr2L 7528 7529 FBgn0031208 . +
chr2L 10999 11000 Fk_gene_1 . -
chr2L 12499 12500 Fk_gene_2 . -
<BLANKLINE>
Report both gene ID and transcript ID in the name. In some cases this can
be easier to parse than the original GTF or GFF file. With no merging
specified, we must add `as_bed6=True` to see the names in BED format.
>>> x = tsses(db, attrs=['gene_id', 'transcript_id'], as_bed6=True)
>>> print(x) # doctest: +NORMALIZE_WHITESPACE
chr2L 7528 7529 FBgn0031208:FBtr0300689 . +
chr2L 7528 7529 FBgn0031208:FBtr0300690 . +
chr2L 10999 11000 Fk_gene_1:transcript_Fk_gene_1 . -
chr2L 12499 12500 Fk_gene_2:transcript_Fk_gene_2 . -
<BLANKLINE>
Use a 3kb merge distance so the last 2 features are merged together:
>>> x = tsses(db, merge_overlapping=True, merge_kwargs=dict(d=3000))
>>> print(x) # doctest: +NORMALIZE_WHITESPACE
chr2L 7528 7529 FBgn0031208 . +
chr2L 10999 12500 Fk_gene_1,Fk_gene_2 . -
<BLANKLINE>
The set of unique TSSes for each gene, +1kb upstream and 500bp downstream:
>>> x = tsses(db, merge_overlapping=True)
>>> x = x.slop(l=1000, r=500, s=True, genome='dm3')
>>> print(x) # doctest: +NORMALIZE_WHITESPACE
chr2L 6528 8029 FBgn0031208 . +
chr2L 10499 12000 Fk_gene_1 . -
chr2L 11999 13500 Fk_gene_2 . -
<BLANKLINE>
"""
_override = os.environ.get('GFFUTILS_USES_BEDTOOLS_227_OR_LATER', None)
if _override is not None:
if _override == 'true':
bedtools_227_or_later = True
elif _override == 'false':
bedtools_227_or_later = False
else:
raise ValueError(
"Unknown value for GFFUTILS_USES_BEDTOOLS_227_OR_LATER "
"environment variable: {0}".format(_override))
if bedtools_227_or_later:
_merge_kwargs = dict(o='distinct', s=True, c='4,5,6')
else:
_merge_kwargs = dict(o='distinct', s=True, c='4')
if merge_kwargs is not None:
_merge_kwargs.update(merge_kwargs)
def gen():
"""
Generator of pybedtools.Intervals representing TSSes.
"""
for gene in db.features_of_type('gene'):
for transcript in db.children(gene, level=1):
if transcript.strand == '-':
transcript.start = transcript.stop
else:
transcript.stop = transcript.start
transcript.featuretype = transcript.featuretype + '_TSS'
yield helpers.asinterval(transcript)
# GFF/GTF format
x = pybedtools.BedTool(gen()).sort()
# Figure out default attrs to use, depending on the original format.
if attrs is None:
if db.dialect['fmt'] == 'gtf':
attrs = 'gene_id'
else:
attrs = 'ID'
if merge_overlapping or as_bed6:
if isinstance(attrs, agouti_pkg.six.string_types):
attrs = [attrs]
def to_bed(f):
"""
Given a pybedtools.Interval, return a new Interval with the name
set according to the kwargs provided above.
"""
name = attrs_sep.join([f.attrs[i] for i in attrs])
return pybedtools.Interval(
f.chrom,
f.start,
f.stop,
name,
str(f.score),
f.strand)
x = x.each(to_bed).saveas()
if merge_overlapping:
if bedtools_227_or_later:
x = x.merge(**_merge_kwargs)
else:
def fix_merge(f):
f = featurefuncs.extend_fields(f, 6)
return pybedtools.Interval(
f.chrom,
f.start,
f.stop,
f[4],
'.',
f[3])
x = x.merge(**_merge_kwargs).saveas().each(fix_merge).saveas()
return x | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/pybedtools_integration.py | pybedtools_integration.py |
from agouti_pkg.gffutils import iterators
from agouti_pkg.gffutils import interface
from collections import Counter
import sys
def inspect(data, look_for=['featuretype', 'chrom', 'attribute_keys',
'feature_count'], limit=None, verbose=True):
"""
Inspect a GFF or GTF data source.
This function is useful for figuring out the different featuretypes found
in a file (for potential removal before creating a FeatureDB).
Returns a dictionary with a key for each item in `look_for` and
a corresponding value that is a dictionary of how many of each unique item
were found.
There will always be a `feature_count` key, indicating how many features
were looked at (if `limit` is provided, then `feature_count` will be the
same as `limit`).
For example, if `look_for` is ['chrom', 'featuretype'], then the result
will be a dictionary like::
{
'chrom': {
'chr1': 500,
'chr2': 435,
'chr3': 200,
...
...
}.
'featuretype': {
'gene': 150,
'exon': 324,
...
},
'feature_count': 5000
}
Parameters
----------
data : str, FeatureDB instance, or iterator of Features
If `data` is a string, assume it's a GFF or GTF filename. If it's
a FeatureDB instance, then its `all_features()` method will be
automatically called. Otherwise, assume it's an iterable of Feature
objects.
look_for : list
List of things to keep track of. Options are:
- any attribute of a Feature object, such as chrom, source, start,
stop, strand.
- "attribute_keys", which will look at all the individual
attribute keys of each feature
limit : int
Number of features to look at. Default is no limit.
verbose : bool
Report how many features have been processed.
Returns
-------
dict
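Examples
--------
A minimal usage sketch (assuming ``annotation.gff3`` is an existing
GFF3 file; actual counts depend on the data)::
    info = inspect("annotation.gff3", limit=1000, verbose=False)
    info["featuretype"]
    info["feature_count"]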
"""
results = {}
obj_attrs = []
for i in look_for:
if i not in ['attribute_keys', 'feature_count']:
obj_attrs.append(i)
results[i] = Counter()
attr_keys = 'attribute_keys' in look_for
d = iterators.DataIterator(data)
feature_count = 0
for f in d:
if verbose:
sys.stderr.write('\r%s features inspected' % feature_count)
sys.stderr.flush()
for obj_attr in obj_attrs:
results[obj_attr].update([getattr(f, obj_attr)])
if attr_keys:
results['attribute_keys'].update(f.attributes.keys())
feature_count += 1
if limit and feature_count == limit:
break
new_results = {}
for k, v in results.items():
new_results[k] = dict(v)
new_results['feature_count'] = feature_count
return new_results | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/inspection.py | inspection.py |
import agouti_pkg.six
import tempfile
import shutil
from time import strftime, localtime
from agouti_pkg.gffutils.version import version
class GFFWriter:
"""
Simple GFF writer class for serializing gffutils
records to a file.
Parameters:
-----------
out: string or file-like object
If a string, it is treated as a filename; otherwise, a file-like
object to write to.
with_header: bool
If True, output a header file for the GFF. The header indicates the
file was created with gffutils and includes a timestamp and version
info.
in_place: bool
If True and if `out` is a filename, then write the file in place (uses
named temporary files.)
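A minimal usage sketch (assuming `recs` is an iterable of gffutils
Feature objects)::
    writer = GFFWriter("out.gff3")
    writer.write_recs(recs)
    writer.close()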
TODO: Add a separate GTFWriter class or add support
for GTF here.
"""
def __init__(self, out, with_header=True, in_place=False):
self.out = out
self.with_header = with_header
self.in_place = in_place
# Temporary file to be used (only applies when in_place is True)
self.temp_file = None
# Output stream to write to
self.out_stream = None
if isinstance(out, agouti_pkg.six.string_types):
if self.in_place:
# Use temporary file
self.temp_file = tempfile.NamedTemporaryFile(delete=False)
self.out_stream = open(self.temp_file.name, "w")
else:
# Just use the filename given
self.out_stream = open(self.out, "w")
else:
# Assumed to be a write-able stream
if self.in_place:
# The in_place parameter is undefined for
# streams, since no filenames are involved
raise ValueError("Cannot use 'in_place' when writing to "
"a stream.")
self.out_stream = out
# write header if asked
if self.with_header:
timestamp = strftime("%Y-%m-%d %H:%M:%S", localtime())
header = "#GFF3 file (created by gffutils (v%s) on %s)" \
% (version, timestamp)
self.out_stream.write("%s\n" % (header))
def write_rec(self, rec):
"""
Output record to file.
"""
self.out_stream.write("%s\n" % rec)
def write_recs(self, recs):
"""
Output several records to file.
"""
for rec in recs:
self.write_rec(rec)
def write_gene_recs(self, db, gene_id):
"""
NOTE: The goal of this function is to have a canonical ordering when
outputting a gene and all of its records to a file. The order is
intended to be:
gene
# mRNAs sorted by length, with longest mRNA first
mRNA_1
# Exons of mRNA, sorted by start position (ascending)
exon_1
# Children of exon, sorted by start position
exon_child_1
exon_child_2
exon_2
...
# Non-exonic children here
...
mRNA_2
...
# Non-mRNA children here
...
Output records of a gene to a file, given a GFF database
and a gene_id. Outputs records in canonical order: gene record
first, then longest mRNA, followed by longest mRNA exons,
followed by rest, followed by next longest mRNA, and so on.
Includes the gene record itself in the output.
TODO: This probably doesn't handle deep GFF hierarchies.
"""
gene_rec = db[gene_id]
# Output gene record
self.write_rec(gene_rec)
# Get each mRNA's lengths
mRNA_lens = {}
c = list(db.children(gene_id, featuretype="mRNA"))
for mRNA in db.children(gene_id, featuretype="mRNA"):
mRNA_lens[mRNA.id] = \
sum(len(exon) for exon in db.children(mRNA,
featuretype="exon"))
# Sort mRNAs by length
sorted_mRNAs = \
sorted(mRNA_lens.items(), key=lambda x: x[1], reverse=True)
for curr_mRNA in sorted_mRNAs:
mRNA_id = curr_mRNA[0]
mRNA_rec = db[mRNA_id]
# Write mRNA record to file
self.write_rec(mRNA_rec)
# Write mRNA's children records to file
self.write_mRNA_children(db, mRNA_id)
# Write non-mRNA children of gene (only level1)
for gene_child in db.children(gene_id, level=1):
if gene_child.featuretype != "mRNA":
self.write_rec(gene_child)
def write_mRNA_children(self, db, mRNA_id):
"""
Write out the children records of the mRNA given by the ID
(not including the mRNA record itself) in a canonical
order, where exons are sorted by start position and given
first.
"""
mRNA_children = db.children(mRNA_id, order_by='start')
nonexonic_children = []
for child_rec in mRNA_children:
if child_rec.featuretype == "exon":
self.write_rec(child_rec)
self.write_exon_children(db, child_rec)
else:
nonexonic_children.append(child_rec)
self.write_recs(nonexonic_children)
def write_exon_children(self, db, exon_id):
"""
Write out the children records of the exon given by
the ID (not including the exon record itself).
"""
exon_children = db.children(exon_id, order_by='start')
for exon_child in exon_children:
self.write_rec(exon_child)
def close(self):
"""
Close the stream. Assumes stream has 'close' method.
"""
self.out_stream.close()
# If we're asked to write in place, substitute the named
# temporary file for the current file
if self.in_place:
shutil.move(self.temp_file.name, self.out) | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/gffwriter.py | gffwriter.py |
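# Illustrative sketch of GFFWriter usage. The filenames "annotation.db" and
# "subset.gff3" are hypothetical placeholders; any FeatureDB and writable
# output path would do.
from agouti_pkg.gffutils import FeatureDB
from agouti_pkg.gffutils.gffwriter import GFFWriter

def example_write_gene_records(db_path="annotation.db", out_path="subset.gff3"):
    db = FeatureDB(db_path)
    writer = GFFWriter(out_path, with_header=True)
    for gene in db.features_of_type("gene"):
        # gene record first, then each mRNA with its exons in canonical order
        writer.write_gene_recs(db, gene.id)
    writer.close()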
import copy
import warnings
import collections
import tempfile
import sys
import os
import sqlite3
import agouti_pkg.six
from textwrap import dedent
from agouti_pkg.gffutils import constants
from agouti_pkg.gffutils import version
from agouti_pkg.gffutils import bins
from agouti_pkg.gffutils import helpers
from agouti_pkg.gffutils import feature
from agouti_pkg.gffutils import interface
from agouti_pkg.gffutils import iterators
import logging
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(formatter)
logger.addHandler(ch)
def deprecation_handler(kwargs):
"""
As things change from version to version, deal with them here.
"""
# After reconsidering, let's leave `infer_gene_extent` for another release.
# But when it's time to deprecate it, use this code:
if 0:
if 'infer_gene_extent' in kwargs:
raise ValueError(
"'infer_gene_extent' is deprecated as of version 0.8.4 in "
"favor of more granular control over inferring genes and/or "
"transcripts. The previous default was "
"'infer_gene_extent=True`, which corresponds to the new "
"defaults "
"'disable_infer_genes=False' and "
"'disable_infer_transcripts=False'. Please see the docstring "
"for gffutils.create_db for details.")
if len(kwargs) > 0:
raise TypeError("unhandled kwarg in %s" % kwargs)
class _DBCreator(object):
def __init__(self, data, dbfn, force=False, verbose=False, id_spec=None,
merge_strategy='merge', checklines=10, transform=None,
force_dialect_check=False, from_string=False, dialect=None,
default_encoding='utf-8',
disable_infer_genes=False,
disable_infer_transcripts=False,
infer_gene_extent=True,
force_merge_fields=None,
text_factory=sqlite3.OptimizedUnicode,
pragmas=constants.default_pragmas, _keep_tempfiles=False,
directives=None,
**kwargs):
"""
Base class for _GFFDBCreator and _GTFDBCreator; see create_db()
function for docs
"""
self._keep_tempfiles = _keep_tempfiles
if force_merge_fields is None:
force_merge_fields = []
if merge_strategy == 'merge':
if set(['start', 'end']).intersection(force_merge_fields):
raise ValueError("Can't merge start/end fields since "
"they must be integers")
warn = set(force_merge_fields)\
.intersection(['frame', 'strand'])
for w in warn:
warnings.warn(
"%s field will be merged for features with the same ID; "
"this may result in unusable features." % w)
self.force_merge_fields = force_merge_fields
self.pragmas = pragmas
self.merge_strategy = merge_strategy
self.default_encoding = default_encoding
if directives is None:
directives = []
self.directives = directives
if not infer_gene_extent:
warnings.warn("'infer_gene_extent' will be deprecated. For now, "
"the following equivalent values were automatically "
"set: 'disable_infer_genes=True', "
"'disable_infer_transcripts=True'. Please use these "
"instead in the future.")
disable_infer_genes = True
disable_infer_transcripts = True
self.disable_infer_genes = disable_infer_genes
self.disable_infer_transcripts = disable_infer_transcripts
self._autoincrements = collections.defaultdict(int)
if force:
if os.path.exists(dbfn):
os.unlink(dbfn)
self.dbfn = dbfn
self.id_spec = id_spec
if isinstance(dbfn, agouti_pkg.six.string_types):
conn = sqlite3.connect(dbfn)
else:
conn = dbfn
self.conn = conn
self.conn.row_factory = sqlite3.Row
self.set_verbose(verbose)
if text_factory is not None:
if self.verbose == 'debug':
logger.debug('setting text factory to %s' % text_factory)
self.conn.text_factory = text_factory
self._data = data
self._orig_logger_level = logger.level
self.iterator = iterators.DataIterator(
data=data, checklines=checklines, transform=transform,
force_dialect_check=force_dialect_check, from_string=from_string,
dialect=dialect
)
def set_verbose(self, verbose=None):
if verbose == 'debug':
logger.setLevel(logging.DEBUG)
elif verbose:
logger.setLevel(logging.INFO)
else:
logger.setLevel(logging.ERROR)
self.verbose = verbose
def _increment_featuretype_autoid(self, key):
self._autoincrements[key] += 1
return '%s_%s' % (key, self._autoincrements[key])
def _id_handler(self, f):
"""
Given a Feature from self.iterator, figure out what the ID should be.
This uses `self.id_spec` identify the ID.
"""
# If id_spec is a string, convert to iterable for later
if isinstance(self.id_spec, agouti_pkg.six.string_types):
id_key = [self.id_spec]
elif hasattr(self.id_spec, '__call__'):
id_key = [self.id_spec]
# If dict, then assume it's a feature -> attribute mapping, e.g.,
# {'gene': 'gene_id'} for GTF
elif isinstance(self.id_spec, dict):
try:
id_key = self.id_spec[f.featuretype]
if isinstance(id_key, agouti_pkg.six.string_types):
id_key = [id_key]
# Otherwise, use default auto-increment.
except KeyError:
return self._increment_featuretype_autoid(f.featuretype)
# Otherwise assume it's an iterable.
else:
id_key = self.id_spec
# Then try them in order, returning the first one that works:
for k in id_key:
if hasattr(k, '__call__'):
_id = k(f)
if _id:
if _id.startswith('autoincrement:'):
return self._increment_featuretype_autoid(_id[14:])
return _id
else:
# use GFF fields rather than attributes for cases like :seqid:
# or :strand:
if (len(k) > 3) and (k[0] == ':') and (k[-1] == ':'):
# No [0] here -- only attributes key/vals are forced into
# lists, not standard GFF fields.
return getattr(f, k[1:-1])
else:
try:
return f.attributes[k][0]
except (KeyError, IndexError):
pass
# If we get here, then default autoincrement
return self._increment_featuretype_autoid(f.featuretype)
def _get_feature(self, ID):
c = self.conn.cursor()
results = c.execute(
constants._SELECT + ' WHERE id = ?', (ID,)).fetchone()
return feature.Feature(dialect=self.iterator.dialect, **results)
def _do_merge(self, f, merge_strategy, add_duplicate=False):
"""
Different merge strategies upon name conflicts.
"error":
Raise error
"warning"
Log a warning
"merge":
Combine old and new attributes -- but only if everything else
matches; otherwise error. This can be slow, but is thorough.
"create_unique":
Autoincrement based on the ID, always creating a new ID.
"replace":
Replaces existing database feature with `f`.
"""
if merge_strategy == 'error':
raise ValueError("Duplicate ID {0.id}".format(f))
if merge_strategy == 'warning':
logger.warning(
"Duplicate lines in file for id '{0.id}'; "
"ignoring all but the first".format(f))
return None, merge_strategy
elif merge_strategy == 'replace':
return f, merge_strategy
# This is by far the most complicated strategy.
elif merge_strategy == 'merge':
# Recall that if we made it to this method, there was at least one
# ID collision.
# This will eventually contain the features that match ID AND that
# match non-attribute fields like start, stop, strand, etc.
features_to_merge = []
# Iterate through all features that have the same ID according to
# the id_spec provided.
if self.verbose == "debug":
logger.debug('candidates with same idspec: %s'
% ([i.id for i in self._candidate_merges(f)]))
# If force_merge_fields was provided, don't pay attention to these
# fields if they're different. We are assuming attributes will be
# different, hence the [:-1]
_gffkeys_to_check = list(
set(constants._gffkeys[:-1])
.difference(self.force_merge_fields))
for existing_feature in self._candidate_merges(f):
# Check other GFF fields (if not specified in
# self.force_merge_fields) to make sure they match.
other_attributes_same = True
for k in _gffkeys_to_check:
if getattr(existing_feature, k) != getattr(f, k):
other_attributes_same = False
break
if other_attributes_same:
# All the other GFF fields match. So this existing feature
# should be merged.
features_to_merge.append(existing_feature)
if self.verbose == 'debug':
logger.debug(
'same attributes between:\nexisting: %s'
'\nthis : %s'
% (existing_feature, f))
else:
# The existing feature's GFF fields don't match, so don't
# append anything.
if self.verbose == 'debug':
logger.debug(
'different attributes between:\nexisting: %s\n'
'this : %s'
% (existing_feature, f))
if (len(features_to_merge) == 0):
# No merge candidates found, so we should make a new ID for
# this feature. This can happen when idspecs match, but other
# fields (like start/stop) are different. Call this method
# again, but using the "create_unique" strategy, and then
# record the newly-created ID in the duplicates table.
orig_id = f.id
uniqued_feature, merge_strategy = self._do_merge(
f, merge_strategy='create_unique')
self._add_duplicate(orig_id, uniqued_feature.id)
return uniqued_feature, merge_strategy
# Whoo! Found some candidates to merge.
else:
if self.verbose == 'debug':
logger.debug('num candidates: %s' % len(features_to_merge))
# This is the attributes dictionary we'll be modifying.
merged_attributes = copy.deepcopy(f.attributes)
# Keep track of non-attribute fields (this will be an empty
# dict if no force_merge_fields)
final_fields = dict(
[(field, set([getattr(f, field)]))
for field in self.force_merge_fields])
# Update the attributes
for existing_feature in features_to_merge:
if self.verbose == 'debug':
logger.debug(
'\nmerging\n\n%s\n%s\n' % (f, existing_feature))
for k in existing_feature.attributes.keys():
v = merged_attributes.setdefault(k, [])
v.extend(existing_feature[k])
merged_attributes[k] = v
# Update the set of non-attribute fields found so far
for field in self.force_merge_fields:
final_fields[field].update(
[getattr(existing_feature, field)])
# Set the merged attributes
for k, v in merged_attributes.items():
merged_attributes[k] = list(set(v))
existing_feature.attributes = merged_attributes
# Set the final merged non-attributes
for k, v in final_fields.items():
setattr(existing_feature, k, ','.join(sorted(map(str, v))))
if self.verbose == 'debug':
logger.debug('\nMERGED:\n%s' % existing_feature)
return existing_feature, merge_strategy
elif merge_strategy == 'create_unique':
f.id = self._increment_featuretype_autoid(f.id)
return f, merge_strategy
else:
raise ValueError("Invalid merge strategy '%s'"
% (merge_strategy))
def _add_duplicate(self, idspecid, newid):
"""
Adds a duplicate ID (as identified by id_spec) and its new ID to the
duplicates table so that they can be later searched for merging.
Parameters
----------
newid : str
The primary key used in the features table
idspecid : str
The ID identified by id_spec
"""
c = self.conn.cursor()
try:
c.execute(
'''
INSERT INTO duplicates
(idspecid, newid)
VALUES (?, ?)''',
(idspecid, newid))
except sqlite3.ProgrammingError:
c.execute(
'''
INSERT INTO duplicates
(idspecid, newid)
VALUES (?, ?)''',
(idspecid.decode(self.default_encoding),
newid.decode(self.default_encoding))
)
if self.verbose == 'debug':
logger.debug('added id=%s; new=%s' % (idspecid, newid))
self.conn.commit()
def _candidate_merges(self, f):
"""
Identifies those features that originally had the same ID as `f`
(according to the id_spec), but were modified because of duplicate
IDs.
"""
candidates = [self._get_feature(f.id)]
c = self.conn.cursor()
results = c.execute(
constants._SELECT + '''
JOIN duplicates ON
duplicates.newid = features.id WHERE duplicates.idspecid = ?''',
(f.id,)
)
for i in results:
candidates.append(
feature.Feature(dialect=self.iterator.dialect, **i))
return list(set(candidates))
def _populate_from_lines(self, lines):
raise NotImplementedError
def _update_relations(self):
raise NotImplementedError
def _drop_indexes(self):
c = self.conn.cursor()
for index in constants.INDEXES:
c.execute("DROP INDEX IF EXISTS ?", (index,))
self.conn.commit()
def set_pragmas(self, pragmas):
"""
Set pragmas for the current database connection.
Parameters
----------
pragmas : dict
Dictionary of pragmas; see constants.default_pragmas for a template
and http://www.sqlite.org/pragma.html for a full list.
"""
self.pragmas = pragmas
c = self.conn.cursor()
c.executescript(
';\n'.join(
['PRAGMA %s=%s' % i for i in self.pragmas.items()]
)
)
self.conn.commit()
def _init_tables(self):
"""
Table creation
"""
c = self.conn.cursor()
v = sqlite3.sqlite_version_info
self.set_pragmas(self.pragmas)
c.executescript(constants.SCHEMA)
self.conn.commit()
def _finalize(self):
"""
Various last-minute stuff to perform after file has been parsed and
imported.
In general, if you'll be adding stuff to the meta table, do it here.
"""
c = self.conn.cursor()
directives = self.directives + self.iterator.directives
c.executemany('''
INSERT INTO directives VALUES (?)
''', ((i,) for i in directives))
c.execute(
'''
INSERT INTO meta (version, dialect)
VALUES (:version, :dialect)''',
dict(version=version.version,
dialect=helpers._jsonify(self.iterator.dialect))
)
c.executemany(
'''
INSERT OR REPLACE INTO autoincrements VALUES (?, ?)
''', list(self._autoincrements.items()))
# These indexes are *well* worth the effort and extra storage: over
# 500x speedup on code like this:
#
# genes = []
# for i in db.features_of_type('snoRNA'):
# for k in db.parents(i, level=1, featuretype='gene'):
# genes.append(k.id)
#
logger.info("Creating relations(parent) index")
c.execute('DROP INDEX IF EXISTS relationsparent')
c.execute('CREATE INDEX relationsparent ON relations (parent)')
logger.info("Creating relations(child) index")
c.execute('DROP INDEX IF EXISTS relationschild')
c.execute('CREATE INDEX relationschild ON relations (child)')
logger.info("Creating features(featuretype) index")
c.execute('DROP INDEX IF EXISTS featuretype')
c.execute('CREATE INDEX featuretype ON features (featuretype)')
logger.info("Creating features (seqid, start, end) index")
c.execute('DROP INDEX IF EXISTS seqidstartend')
c.execute('CREATE INDEX seqidstartend ON features (seqid, start, end)')
logger.info("Creating features (seqid, start, end, strand) index")
c.execute('DROP INDEX IF EXISTS seqidstartendstrand')
c.execute('CREATE INDEX seqidstartendstrand ON features (seqid, start, end, strand)')
# speeds computation 1000x in some cases
logger.info("Running ANALYSE features")
c.execute('ANALYZE features')
self.conn.commit()
self.warnings = self.iterator.warnings
def create(self):
"""
Calls various methods sequentially in order to fully build the
database.
"""
# Calls each of these methods in order. _populate_from_lines and
# _update_relations must be implemented in subclasses.
self._init_tables()
self._populate_from_lines(self.iterator)
self._update_relations()
self._finalize()
def update(self, iterator):
self._populate_from_lines(iterator)
self._update_relations()
def execute(self, query):
"""
Execute a query directly on the database.
"""
c = self.conn.cursor()
result = c.execute(query)
for i in result:
yield i
def _insert(self, feature, cursor):
"""
Insert a feature into the database.
"""
try:
cursor.execute(constants._INSERT, feature.astuple())
except sqlite3.ProgrammingError:
cursor.execute(
constants._INSERT, feature.astuple(self.default_encoding))
def _replace(self, feature, cursor):
"""
Insert a feature into the database.
"""
try:
cursor.execute(
constants._UPDATE,
list(feature.astuple()) + [feature.id])
except sqlite3.ProgrammingError:
cursor.execute(
constants._INSERT,
list(feature.astuple(self.default_encoding)) + [feature.id])
class _GFFDBCreator(_DBCreator):
def __init__(self, *args, **kwargs):
"""
_DBCreator subclass specifically for working with GFF files.
create_db() delegates to this class -- see that function for docs
"""
super(_GFFDBCreator, self).__init__(*args, **kwargs)
def _populate_from_lines(self, lines):
c = self.conn.cursor()
self._drop_indexes()
last_perc = 0
logger.info("Populating features")
msg = ("Populating features table and first-order relations: "
"%d features\r")
# c.executemany() was not as much of an improvement as I had expected.
#
# Compared to a benchmark of doing each insert separately:
# executemany using a list of dicts to iterate over is ~15% slower
# executemany using a list of tuples to iterate over is ~8% faster
features_seen = None
_features, _relations = [], []
for i, f in enumerate(lines):
features_seen = i
# Percent complete
if self.verbose:
if i % 1000 == 0:
sys.stderr.write(msg % i)
sys.stderr.flush()
# TODO: handle ID creation here...should be combined with the
# INSERT below (that is, don't IGNORE below but catch the error and
# re-try with a new ID). However, is this doable with an
# execute-many?
f.id = self._id_handler(f)
try:
self._insert(f, c)
except sqlite3.IntegrityError:
fixed, final_strategy = self._do_merge(f, self.merge_strategy)
if final_strategy == 'merge':
c.execute(
'''
UPDATE features SET attributes = ?
WHERE id = ?
''', (helpers._jsonify(fixed.attributes),
fixed.id))
# For any additional fields we're merging, update those as
# well.
if self.force_merge_fields:
_set_clause = ', '.join(
['%s = ?' % field
for field in self.force_merge_fields])
values = [
getattr(fixed, field)
for field in self.force_merge_fields] + [fixed.id]
c.execute(
'''
UPDATE features SET %s
WHERE id = ?
''' % _set_clause, tuple(values))
elif final_strategy == 'replace':
self._replace(f, c)
elif final_strategy == 'create_unique':
self._insert(f, c)
if 'Parent' in f.attributes:
for parent in f.attributes['Parent']:
c.execute(
'''
INSERT OR IGNORE INTO relations VALUES
(?, ?, 1)
''', (parent, f.id))
if features_seen is None:
raise ValueError("No lines parsed -- was an empty file provided?")
self.conn.commit()
if self.verbose:
logger.info(msg % i)
def _update_relations(self):
logger.info("Updating relations")
c = self.conn.cursor()
c2 = self.conn.cursor()
c3 = self.conn.cursor()
# TODO: pre-compute indexes?
# c.execute('CREATE INDEX ids ON features (id)')
# c.execute('CREATE INDEX parentindex ON relations (parent)')
# c.execute('CREATE INDEX childindex ON relations (child)')
# self.conn.commit()
if isinstance(self._keep_tempfiles, agouti_pkg.six.string_types):
suffix = self._keep_tempfiles
else:
suffix = '.gffutils'
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=suffix).name
fout = open(tmp, 'w')
# Here we look for "grandchildren" -- for each ID, get the child
# (parenthetical subquery below); then for each of those get *its*
# child (main query below).
#
# Results are written to temp file so that we don't read and write at
# the same time, which would slow things down considerably.
c.execute('SELECT id FROM features')
for parent in c:
c2.execute('''
SELECT child FROM relations WHERE parent IN
(SELECT child FROM relations WHERE parent = ?)
''', tuple(parent))
for grandchild in c2:
fout.write('\t'.join((parent[0], grandchild[0])) + '\n')
fout.close()
def relations_generator():
for line in open(fout.name):
parent, child = line.strip().split('\t')
yield dict(parent=parent, child=child, level=2)
c.executemany(
'''
INSERT OR IGNORE INTO relations VALUES
(:parent, :child, :level)
''', relations_generator())
# TODO: Index creation. Which ones affect performance?
c.execute("DROP INDEX IF EXISTS binindex")
c.execute("CREATE INDEX binindex ON features (bin)")
self.conn.commit()
if not self._keep_tempfiles:
os.unlink(fout.name)
class _GTFDBCreator(_DBCreator):
def __init__(self, *args, **kwargs):
"""
create_db() delegates to this class -- see that function for docs
"""
self.transcript_key = kwargs.pop('transcript_key', 'transcript_id')
self.gene_key = kwargs.pop('gene_key', 'gene_id')
self.subfeature = kwargs.pop('subfeature', 'exon')
super(_GTFDBCreator, self).__init__(*args, **kwargs)
def _populate_from_lines(self, lines):
msg = (
"Populating features table and first-order relations: %d "
"features\r"
)
c = self.conn.cursor()
# Only check this many features to see if it's a gene or transcript and
# issue the appropriate warning.
gene_and_transcript_check_limit = 1000
last_perc = 0
lines_seen = 0
for i, f in enumerate(lines):
# See issues #48 and #20.
if lines_seen < gene_and_transcript_check_limit:
if (
f.featuretype == 'transcript' and
not self.disable_infer_transcripts
):
warnings.warn(
"It appears you have a transcript feature in your GTF "
"file. You may want to use the "
"`disable_infer_transcripts` "
"option to speed up database creation")
elif (
f.featuretype == 'gene' and
not self.disable_infer_genes
):
warnings.warn(
"It appears you have a gene feature in your GTF "
"file. You may want to use the "
"`disable_infer_genes` "
"option to speed up database creation")
lines_seen = i + 1
# Percent complete
if self.verbose:
if i % 1000 == 0:
sys.stderr.write(msg % i)
sys.stderr.flush()
f.id = self._id_handler(f)
# Insert the feature itself...
try:
self._insert(f, c)
except sqlite3.IntegrityError:
fixed, final_strategy = self._do_merge(f, self.merge_strategy)
if final_strategy == 'merge':
c.execute(
'''
UPDATE features SET attributes = ?
WHERE id = ?
''', (helpers._jsonify(fixed.attributes),
fixed.id))
# For any additional fields we're merging, update those as
# well.
if self.force_merge_fields:
_set_clause = ', '.join(
['%s = ?' % field
for field in self.force_merge_fields])
values = [getattr(fixed, field)
for field in self.force_merge_fields]\
+ [fixed.id]
c.execute(
'''
UPDATE features SET %s
WHERE id = ?
''' % _set_clause, values)
elif final_strategy == 'replace':
self._replace(f, c)
elif final_strategy == 'create_unique':
self._insert(f, c)
# For an on-spec GTF file,
# self.transcript_key = "transcript_id"
# self.gene_key = "gene_id"
relations = []
parent = None
grandparent = None
if self.transcript_key in f.attributes:
parent = f.attributes[self.transcript_key][0]
relations.append((parent, f.id, 1))
if self.gene_key in f.attributes:
grandparent = f.attributes[self.gene_key]
if len(grandparent) > 0:
grandparent = grandparent[0]
relations.append((grandparent, f.id, 2))
if parent is not None:
relations.append((grandparent, parent, 1))
# Note the IGNORE, so relationships defined many times in the file
# (e.g., the transcript-gene relation on pretty much every line in
# a GTF) will only be included once.
c.executemany(
'''
INSERT OR IGNORE INTO relations (parent, child, level)
VALUES (?, ?, ?)
''', relations
)
if lines_seen == 0:
raise ValueError("No lines parsed -- was an empty file provided?")
logger.info('Committing changes')
self.conn.commit()
if self.verbose:
logger.info(msg % i)
def _update_relations(self):
if self.disable_infer_genes and self.disable_infer_transcripts:
return
# TODO: do any indexes speed this up?
c = self.conn.cursor()
c2 = self.conn.cursor()
logger.info("Creating relations(parent) index")
c.execute('DROP INDEX IF EXISTS relationsparent')
c.execute('CREATE INDEX relationsparent ON relations (parent)')
logger.info("Creating relations(child) index")
c.execute('DROP INDEX IF EXISTS relationschild')
c.execute('CREATE INDEX relationschild ON relations (child)')
if not (self.disable_infer_genes or self.disable_infer_transcripts):
msg = 'gene and transcript'
elif self.disable_infer_transcripts:
msg = 'gene'
elif self.disable_infer_genes:
msg = 'transcript'
logger.info('Inferring %s extents '
'and writing to tempfile' % msg)
if isinstance(self._keep_tempfiles, agouti_pkg.six.string_types):
suffix = self._keep_tempfiles
else:
suffix = '.gffutils'
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=suffix).name
fout = open(tmp, 'w')
self._tmpfile = tmp
# This takes some explanation...
#
# First, the nested subquery gets the level-1 parents of
# self.subfeature featuretypes. For an on-spec GTF file,
# self.subfeature = "exon". So this subquery translates to getting the
# distinct level-1 parents of exons -- which are transcripts.
#
# OK, so this first subquery is now a list of transcripts; call it
# "firstlevel".
#
# Then join firstlevel on relations, but the trick is to now consider
# each transcript a *child* -- so that relations.parent (on the first
# line of the query) will be the first-level parent of the transcript
# (the gene).
#
#
# The result is something like:
#
# transcript1 gene1
# transcript2 gene1
# transcript3 gene2
#
# Note that genes are repeated; below we need to ensure that only one
# is added. To ensure this, the results are ordered by the gene ID.
#
# By the way, we do this even if we're only looking for transcripts or
# only looking for genes.
c.execute(
'''
SELECT DISTINCT firstlevel.parent, relations.parent
FROM (
SELECT DISTINCT parent
FROM relations
JOIN features ON features.id = relations.child
WHERE features.featuretype = ?
AND relations.level = 1
)
AS firstlevel
JOIN relations ON firstlevel.parent = child
WHERE relations.level = 1
ORDER BY relations.parent
''', (self.subfeature,))
# Now we iterate through those results (using a new cursor) to infer
# the extent of transcripts and/or genes.
last_gene_id = None
n_features = 0
for transcript_id, gene_id in c:
if not self.disable_infer_transcripts:
# transcript extent
c2.execute(
'''
SELECT MIN(start), MAX(end), strand, seqid
FROM features
JOIN relations ON
features.id = relations.child
WHERE parent = ? AND featuretype == ?
''', (transcript_id, self.subfeature))
transcript_start, transcript_end, strand, seqid = c2.fetchone()
transcript_attributes = {
self.transcript_key: [transcript_id],
self.gene_key: [gene_id]
}
transcript_bin = bins.bins(
transcript_start, transcript_end, one=True)
# Write out to file; we'll be reading it back in shortly. Omit
# score, frame, source, and extra since they will always have
# the same default values (".", ".", "gffutils_derived", and []
# respectively)
fout.write('\t'.join(map(str, [
transcript_id,
seqid,
transcript_start,
transcript_end,
strand,
'transcript',
transcript_bin,
helpers._jsonify(transcript_attributes)
])) + '\n')
n_features += 1
if not self.disable_infer_genes:
# Infer gene extent, but only if we haven't done so already
if gene_id != last_gene_id:
c2.execute(
'''
SELECT MIN(start), MAX(end), strand, seqid
FROM features
JOIN relations ON
features.id = relations.child
WHERE parent = ? AND featuretype == ?
''', (gene_id, self.subfeature))
gene_start, gene_end, strand, seqid = c2.fetchone()
gene_attributes = {self.gene_key: [gene_id]}
gene_bin = bins.bins(gene_start, gene_end, one=True)
fout.write('\t'.join(map(str, [
gene_id,
seqid,
gene_start,
gene_end,
strand,
'gene',
gene_bin,
helpers._jsonify(gene_attributes)
])) + '\n')
last_gene_id = gene_id
n_features += 1
fout.close()
def derived_feature_generator():
"""
Generator of items from the file that was just created...
"""
keys = ['parent', 'seqid', 'start', 'end', 'strand',
'featuretype', 'bin', 'attributes']
for line in open(fout.name):
d = dict(list(zip(keys, line.strip().split('\t'))))
d.pop('parent')
d['score'] = '.'
d['source'] = 'gffutils_derived'
d['frame'] = '.'
d['extra'] = []
d['attributes'] = helpers._unjsonify(d['attributes'])
f = feature.Feature(**d)
f.id = self._id_handler(f)
yield f
# Drop the indexes so the inserts are faster
c.execute('DROP INDEX IF EXISTS relationsparent')
c.execute('DROP INDEX IF EXISTS relationschild')
# Insert the just-inferred transcripts and genes. TODO: should we
# *always* use "merge" here for the merge_strategy?
logger.info("Importing inferred features into db")
last_perc = None
for i, f in enumerate(derived_feature_generator()):
perc = int(i / float(n_features) * 100)
if perc != last_perc:
sys.stderr.write('%s of %s (%s%%)\r' % (i, n_features, perc))
sys.stderr.flush()
last_perc = perc
try:
self._insert(f, c)
except sqlite3.IntegrityError:
fixed, final_strategy = self._do_merge(f, 'merge')
c.execute(
'''
UPDATE features SET attributes = ?
WHERE id = ?
''', (helpers._jsonify(fixed.attributes),
fixed.id))
logger.info("Committing changes")
self.conn.commit()
if not self._keep_tempfiles:
os.unlink(fout.name)
# TODO: recreate indexes?
def create_db(data, dbfn, id_spec=None, force=False, verbose=False,
checklines=10, merge_strategy='error', transform=None,
gtf_transcript_key='transcript_id', gtf_gene_key='gene_id',
gtf_subfeature='exon', force_gff=False,
force_dialect_check=False, from_string=False, keep_order=False,
text_factory=sqlite3.OptimizedUnicode, force_merge_fields=None,
pragmas=constants.default_pragmas, sort_attribute_values=False,
dialect=None, _keep_tempfiles=False, infer_gene_extent=True,
disable_infer_genes=False, disable_infer_transcripts=False,
**kwargs):
"""
Create a database from a GFF or GTF file.
For more details on when and how to use the kwargs below, see the examples
in the online documentation (:ref:`examples`).
Parameters
----------
data : string or iterable
If a string (and `from_string` is False), then `data` is the path to
the original GFF or GTF file.
If a string and `from_string` is True, then assume `data` is the actual
data to use.
Otherwise, it's an iterable of Feature objects.
dbfn : string
Path to the database that will be created. Can be the special string
":memory:" to create an in-memory database.
id_spec : string, list, dict, callable, or None
This parameter guides what will be used as the primary key for the
database, which in turn determines how you will access individual
features by name from the database.
If `id_spec=None`, then auto-increment primary keys based on the
feature type (e.g., "gene_1", "gene_2"). This is also the fallback
behavior for the other values below.
If `id_spec` is a string, then look for this key in the attributes. If
it exists, then use its value as the primary key, otherwise
autoincrement based on the feature type. For many GFF3 files, "ID"
usually works well.
If `id_spec` is a list or tuple of keys, then check for each one in
order, using the first one found. For GFF3, this might be ["ID",
"Name"], which would use the ID if it exists, otherwise the Name,
otherwise autoincrement based on the feature type.
If `id_spec` is a dictionary, then it is a mapping of feature types to
what should be used as the ID. For example, for GTF files, `{'gene':
'gene_id', 'transcript': 'transcript_id'}` may be useful. The values
of this dictionary can also be a list, e.g., `{'gene': ['gene_id',
'geneID']}`
If `id_spec` is a callable object, then it accepts a dictionary from
the iterator and returns one of the following:
* None (in which case the feature type will be auto-incremented)
* string (which will be used as the primary key)
* special string starting with "autoincrement:X", where "X" is
a string that will be used for auto-incrementing. For example,
if "autoincrement:chr10", then the first feature will be
"chr10_1", the second "chr10_2", and so on.
force : bool
If `False` (default), then raise an exception if `dbfn` already exists.
Use `force=True` to overwrite any existing databases.
verbose : bool
Report percent complete and other feedback on how the db creation is
progressing.
In order to report percent complete, the entire file needs to be read
once to see how many items there are; for large files you may want to
use `verbose=False` to avoid this.
checklines : int
Number of lines to check the dialect.
merge_strategy : str
One of {merge, create_unique, error, warning, replace}.
This parameter specifies the behavior when two items have an identical
primary key.
Using `merge_strategy="merge"`, then there will be a single entry in
the database, but the attributes of all features with the same primary
key will be merged.
Using `merge_strategy="create_unique"`, then the first entry will use
the original primary key, but the second entry will have a unique,
autoincremented primary key assigned to it
Using `merge_strategy="error"`, a :class:`gffutils.DuplicateID`
exception will be raised. This means you will have to edit the file
yourself to fix the duplicated IDs.
Using `merge_strategy="warning"`, a warning will be printed to the
logger, and the duplicate feature will be skipped.
Using `merge_strategy="replace"` will replace the entire existing
feature with the new feature.
transform : callable
Function (or other callable object) that accepts a `Feature` object and
returns a (possibly modified) `Feature` object.
gtf_transcript_key, gtf_gene_key : string
Which attribute to use as the transcript ID and gene ID respectively
for GTF files. Default is `transcript_id` and `gene_id` according to
the GTF spec.
gtf_subfeature : string
Feature type to use as a "gene component" when inferring gene and
transcript extents for GTF files. Default is `exon` according to the
GTF spec.
force_gff : bool
If True, do not do automatic format detection -- only use GFF.
force_dialect_check : bool
        If True, the dialect will be checked for every feature (instead of just
`checklines` features). This can be slow, but may be necessary for
inconsistently-formatted input files.
from_string : bool
If True, then treat `data` as actual data (rather than the path to
a file).
keep_order : bool
If True, all features returned from this instance will have the
order of their attributes maintained. This can be turned on or off
database-wide by setting the `keep_order` attribute or with this
kwarg, or on a feature-by-feature basis by setting the `keep_order`
attribute of an individual feature.
Note that a single order of attributes will be used for all features.
Specifically, the order will be determined by the order of attribute
keys in the first `checklines` of the input data. See
helpers._choose_dialect for more information on this.
Default is False, since this includes a sorting step that can get
time-consuming for many features.
infer_gene_extent : bool
DEPRECATED in version 0.8.4. See `disable_infer_transcripts` and
`disable_infer_genes` for more granular control.
disable_infer_transcripts, disable_infer_genes : bool
Only used for GTF files. By default -- and according to the GTF spec --
we assume that there are no transcript or gene features in the file.
gffutils then infers the extent of each transcript based on its
        constituent exons and infers the extent of each gene based on its
constituent transcripts.
This default behavior is problematic if the input file already contains
transcript or gene features (like recent GENCODE GTF files for human),
since 1) the work to infer extents is unnecessary, and 2)
trying to insert an inferred feature back into the database triggers
gffutils' feature-merging routines, which can get time consuming.
The solution is to use `disable_infer_transcripts=True` if your GTF
already has transcripts in it, and/or `disable_infer_genes=True` if it
already has genes in it. This can result in dramatic (100x) speedup.
        Prior to version 0.8.4, setting `infer_gene_extent=False` would
        disable both transcript and gene inference simultaneously. As of
        version 0.8.4, these arguments allow more granular control.
force_merge_fields : list
If merge_strategy="merge", then features will only be merged if their
non-attribute values are identical (same chrom, source, start, stop,
score, strand, phase). Using `force_merge_fields`, you can override
this behavior to allow merges even when fields are different. This
list can contain one or more of ['seqid', 'source', 'featuretype',
'score', 'strand', 'frame']. The resulting merged fields will be
strings of comma-separated values. Note that 'start' and 'end' are not
available, since these fields need to be integers.
text_factory : callable
Text factory to use for the sqlite3 database. See
https://docs.python.org/2/library/\
sqlite3.html#sqlite3.Connection.text_factory
for details. The default sqlite3.OptimizedUnicode will return Unicode
objects only for non-ASCII data, and bytestrings otherwise.
pragmas : dict
Dictionary of pragmas used when creating the sqlite3 database. See
http://www.sqlite.org/pragma.html for a list of available pragmas. The
defaults are stored in constants.default_pragmas, which can be used as
a template for supplying a custom dictionary.
sort_attribute_values : bool
All features returned from the database will have their attribute
values sorted. Typically this is only useful for testing, since this
can get time-consuming for large numbers of features.
_keep_tempfiles : bool or string
False by default to clean up intermediate tempfiles created during GTF
import. If True, then keep these tempfile for testing or debugging.
If string, then keep the tempfile for testing, but also use the string
        as the suffix of the tempfile. This can be useful for testing in
parallel environments.
Returns
-------
New :class:`FeatureDB` object.
"""
_locals = locals()
# Check if any older kwargs made it in
deprecation_handler(kwargs)
kwargs = dict((i, _locals[i]) for i in constants._iterator_kwargs)
# First construct an iterator so that we can identify the file format.
# DataIterator figures out what kind of data was provided (string of lines,
# filename, or iterable of Features) and checks `checklines` lines to
# identify the dialect.
iterator = iterators.DataIterator(**kwargs)
kwargs.update(**_locals)
if dialect is None:
dialect = iterator.dialect
# However, a side-effect of this is that if `data` was a generator, then
# we've just consumed `checklines` items (see
# iterators.BaseIterator.__init__, which calls iterators.peek).
#
# But it also chains those consumed items back onto the beginning, and the
    # result is available as iterator._iter.
    #
    # That's what we should be using now for `data`:
kwargs['data'] = iterator._iter
kwargs['directives'] = iterator.directives
# Since we've already checked lines, we don't want to do it again
kwargs['checklines'] = 0
if force_gff or (dialect['fmt'] == 'gff3'):
cls = _GFFDBCreator
id_spec = id_spec or 'ID'
add_kwargs = dict(
id_spec=id_spec,
)
elif dialect['fmt'] == 'gtf':
cls = _GTFDBCreator
id_spec = id_spec or {'gene': 'gene_id', 'transcript': 'transcript_id'}
add_kwargs = dict(
transcript_key=gtf_transcript_key,
gene_key=gtf_gene_key,
subfeature=gtf_subfeature,
id_spec=id_spec,
)
kwargs.update(**add_kwargs)
kwargs['dialect'] = dialect
c = cls(**kwargs)
c.create()
if dbfn == ':memory:':
db = interface.FeatureDB(c.conn,
keep_order=keep_order,
pragmas=pragmas,
sort_attribute_values=sort_attribute_values,
text_factory=text_factory)
else:
db = interface.FeatureDB(c,
keep_order=keep_order,
pragmas=pragmas,
sort_attribute_values=sort_attribute_values,
text_factory=text_factory)
return db | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/create.py | create.py |
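# Illustrative sketch of create_db() for a GTF that already contains explicit
# gene and transcript lines (e.g. GENCODE-style annotation), as discussed in
# the docstring above. The filenames are hypothetical placeholders.
import agouti_pkg.gffutils as gffutils

def example_create_gtf_db(gtf="annotation.gtf", dbfn="annotation.db"):
    return gffutils.create_db(
        gtf,
        dbfn=dbfn,
        force=True,                      # overwrite an existing database file
        merge_strategy="merge",          # merge attributes of duplicated IDs
        disable_infer_genes=True,        # gene features are already present
        disable_infer_transcripts=True,  # transcript features are already present
    )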
# kent src: "How much to shift to get to next larger bin."
# So each level splits by 2**3 = 8 fold (octree?)
NEXT_SHIFT = 3
# kent src: "How much to shift to get to finest bin."
# 2 ** FIRST_SHIFT is the size of the smallest bin.
FIRST_SHIFT = 17
# These offsets are the bin numbers at the beginning of each level. In Fig 7,
# these would be 6, 2, and 1.
#
# Bins with higher numbers (e.g. 585 to 4681) are the smaller-sized bins. Run
# print_bin_sizes() for more info on this.
OFFSETS = [
4096 + 512 + 64 + 8 + 1, # bins 4681-585
512 + 64 + 8 + 1, # bins 585-73
64 + 8 + 1, # bins 73-9
8 + 1, # bins 9-1
1 # bin 0
]
# for BED (0-based, half-open) or GFF (1-based, closed intervals)
COORD_OFFSETS = {'bed': 0, 'gff': 1}
def bins(start, stop, fmt='gff', one=True):
"""
Uses the definition of a "genomic bin" described in Fig 7 of
http://genome.cshlp.org/content/12/6/996.abstract.
Parameters
----------
one : boolean
If `one=True` (default), then only return the smallest bin that
completely contains these coordinates (useful for assigning a single
bin).
If `one=False`, then return the set of *all* bins that overlap these
coordinates (useful for looking for features that could intersect)
fmt : 'gff' | 'bed'
This specifies 1-based start coords (gff) or 0-based start coords (bed)
"""
# Jump to highest resolution bin that will fit these coords (depending on
# whether we have a BED or GFF-style coordinate).
#
# Some GFF files include negative coords, which will throw off this
# calculation. If negative coords, then set the bin to the largest
# possible.
if start < 0:
if one:
return 1
else:
return set([1])
if stop < 0:
if one:
return 1
else:
return set([1])
start = (start - COORD_OFFSETS[fmt]) >> FIRST_SHIFT
stop = (stop) >> FIRST_SHIFT
# We always at least fit within the chrom, which is bin 1.
bins = set([1])
for offset in OFFSETS:
# Since we're going from smallest to largest bins, the first one where
# the feature's start and stop positions are both within the same bin
# is the smallest one these coords fit within.
if one:
if start == stop:
# Note that at this point, because of the bit-shifting, `start`
# is the number of bins (at this current level). So we need to
# add it to `offset` to get the actual bin ID.
return offset + start
# See the Fig 7 reproduction above to see why range().
bins.update(list(range(offset + start, offset + stop + 1)))
# Move to the next level (8x larger bin size; i.e., 2**NEXT_SHIFT
# larger bin size)
start >>= NEXT_SHIFT
stop >>= NEXT_SHIFT
return bins
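# Illustrative sketch (arbitrary coordinates): the single smallest containing
# bin versus the full set of candidate bins for the same interval.
def _bins_example(start=1000, stop=2000):
    smallest = bins(start, stop, fmt='gff', one=True)      # one bin ID (int)
    candidates = bins(start, stop, fmt='gff', one=False)   # set of bin IDs
    return smallest, candidates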
def print_bin_sizes():
"""
Useful for debugging: how large is each bin, and what are the bin IDs?
"""
for i, offset in enumerate(OFFSETS):
binstart = offset
try:
binstop = OFFSETS[i + 1]
except IndexError:
binstop = binstart
bin_size = 2 ** (FIRST_SHIFT + (i * NEXT_SHIFT))
actual_size = bin_size
# nice formatting
bin_size, suffix = bin_size / 1024, 'Kb'
if bin_size >= 1024:
bin_size, suffix = bin_size / 1024, 'Mb'
if bin_size >= 1024:
bin_size, suffix = bin_size / 1024, 'Gb'
size = '(%s %s)' % (bin_size, suffix)
actual_size = '%s bp' % (actual_size)
print('level: {i:1}; bins {binstart:<4} to {binstop:<4}; '
'size: {actual_size:<12} {size:<6}'.format(**locals()))
def test():
# These should obviously fit inside the first bin
assert bins(0, 1, fmt='bed') == 4681
assert bins(1, 1, fmt='gff') == 4681
# All bins that overlap:
assert bins(0, 1, fmt='bed', one=False) == set([1, 73, 9, 585, 4681])
# Or, more generally,
assert bins(0, 1, fmt='bed', one=False) == set(OFFSETS)
# Since the smallest bin size is 2**17, this rather large feature should
# fit completely inside the first bin, too
assert bins(2 ** 17 - 1, 2 ** 17 - 1, fmt='bed') == 4681
# or, more generally,
assert bins(
2 ** FIRST_SHIFT - 1,
2 ** FIRST_SHIFT - 1,
fmt='bed') == OFFSETS[0]
# At exactly 2**17 in BED coords it moves over to the next bin
assert bins(2 ** 17, 2 ** 17, fmt='bed') == 4682
# or
assert bins(
2 ** FIRST_SHIFT,
2 ** FIRST_SHIFT,
fmt='bed') == OFFSETS[0] + 1
# The other bins it overlaps should be similar to before, with the
# exception of 4682
assert bins(2 ** 17, 2 ** 17, fmt='bed', one=False) \
== set([1, 9, 73, 585, 4682])
assert bins(2 ** 17, 2 ** 17, fmt='bed', one=False)\
.difference(bins(2 ** 17 - 1, 2 ** 17 - 1, fmt='bed', one=False)) \
== set([4682])
# Spanning the first and second smallest bins should cause it to bump up
# a level
assert bins(2 ** 17 - 1, 2 ** 17, fmt='bed') == 585
# or
assert bins(
2 ** FIRST_SHIFT - 1,
2 ** FIRST_SHIFT,
fmt='bed') == OFFSETS[1]
# Make it as big as it can get in this bin:
assert bins(
2 ** FIRST_SHIFT - 1,
2 ** (FIRST_SHIFT + NEXT_SHIFT) - 1,
fmt='bed') == OFFSETS[1]
# Then make it big enough to jump another level
assert bins(
2 ** FIRST_SHIFT - 1,
2 ** (FIRST_SHIFT + NEXT_SHIFT),
fmt='bed') == OFFSETS[2]
# When start or stop or both are negative, bin should be the largest
# possible.
assert bins(-1, 1000, one=True) == 1
assert bins(-1, 1000, one=False) == set([1])
assert bins(-1, -1000, one=True) == 1
assert bins(-1, -1000, one=False) == set([1])
assert bins(1, -1000, one=True) == 1
assert bins(1, -1000, one=False) == set([1])
if __name__ == "__main__":
print_bin_sizes()
test() | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/bins.py | bins.py |
from agouti_pkg.gffutils import iterators
from agouti_pkg.gffutils import interface
from collections import Counter
import sys
def inspect(data, look_for=['featuretype', 'chrom', 'attribute_keys',
'feature_count'], limit=None, verbose=True):
"""
Inspect a GFF or GTF data source.
This function is useful for figuring out the different featuretypes found
in a file (for potential removal before creating a FeatureDB).
Returns a dictionary with a key for each item in `look_for` and
a corresponding value that is a dictionary of how many of each unique item
were found.
There will always be a `feature_count` key, indicating how many features
were looked at (if `limit` is provided, then `feature_count` will be the
same as `limit`).
For example, if `look_for` is ['chrom', 'featuretype'], then the result
will be a dictionary like::
{
'chrom': {
'chr1': 500,
'chr2': 435,
'chr3': 200,
...
...
            },
'featuretype': {
'gene': 150,
'exon': 324,
...
},
'feature_count': 5000
}
Parameters
----------
data : str, FeatureDB instance, or iterator of Features
If `data` is a string, assume it's a GFF or GTF filename. If it's
a FeatureDB instance, then its `all_features()` method will be
automatically called. Otherwise, assume it's an iterable of Feature
objects.
look_for : list
List of things to keep track of. Options are:
- any attribute of a Feature object, such as chrom, source, start,
stop, strand.
- "attribute_keys", which will look at all the individual
attribute keys of each feature
limit : int
Number of features to look at. Default is no limit.
verbose : bool
Report how many features have been processed.
Returns
-------
dict
"""
results = {}
obj_attrs = []
for i in look_for:
if i not in ['attribute_keys', 'feature_count']:
obj_attrs.append(i)
results[i] = Counter()
attr_keys = 'attribute_keys' in look_for
d = iterators.DataIterator(data)
feature_count = 0
for f in d:
if verbose:
sys.stderr.write('\r%s features inspected' % feature_count)
sys.stderr.flush()
for obj_attr in obj_attrs:
results[obj_attr].update([getattr(f, obj_attr)])
if attr_keys:
results['attribute_keys'].update(f.attributes.keys())
feature_count += 1
if limit and feature_count == limit:
break
new_results = {}
for k, v in results.items():
new_results[k] = dict(v)
new_results['feature_count'] = feature_count
return new_results | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/gffutils/inspect.py | inspect.py |
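# Illustrative sketch of inspect(); "annotation.gtf" is a hypothetical
# placeholder. The returned dict is keyed by the requested categories plus
# 'feature_count', as described in the docstring above.
from agouti_pkg.gffutils.inspect import inspect

def example_inspect(gtf="annotation.gtf"):
    report = inspect(gtf,
                     look_for=['featuretype', 'chrom', 'attribute_keys'],
                     limit=10000, verbose=False)
    # e.g. counts per featuretype seen among the first 10,000 features
    return report['featuretype'], report['feature_count']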
import copy
import sys
import os
import agouti_pkg.simplejson
import time
import tempfile
import agouti_pkg.six
from agouti_pkg.gffutils import constants
from agouti_pkg.gffutils import bins
import agouti_pkg.gffutils
from agouti_pkg.gffutils import gffwriter
from agouti_pkg.gffutils import parser
from agouti_pkg.gffutils.attributes import dict_class
HERE = os.path.dirname(os.path.abspath(__file__))
def example_filename(fn):
"""
Return the full path of a data file that ships with gffutils.
"""
return os.path.join(HERE, 'test', 'data', fn)
def infer_dialect(attributes):
"""
Infer the dialect based on the attributes.
Parameters
----------
attributes : str or iterable
A single attributes string from a GTF or GFF line, or an iterable of
such strings.
Returns
-------
Dictionary representing the inferred dialect
"""
if isinstance(attributes, agouti_pkg.six.string_types):
attributes = [attributes]
dialects = [parser._split_keyvals(i)[1] for i in attributes]
return _choose_dialect(dialects)
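# Illustrative sketch: inferring the dialect from a single GFF3-style
# attribute string versus a GTF-style one (arbitrary example attributes).
def _infer_dialect_example():
    gff3_dialect = infer_dialect('ID=gene0;Name=EDEN')
    gtf_dialect = infer_dialect('gene_id "gene0"; transcript_id "t0";')
    return gff3_dialect, gtf_dialect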
def _choose_dialect(dialects):
"""
Given a list of dialects, choose the one to use as the "canonical" version.
If `dialects` is an empty list, then use the default GFF3 dialect
Parameters
----------
dialects : iterable
iterable of dialect dictionaries
Returns
-------
dict
"""
# NOTE: can use helpers.dialect_compare if you need to make this more
# complex....
# For now, this function favors the first dialect, and then appends the
# order of additional fields seen in the attributes of other lines giving
# priority to dialects that come first in the iterable.
if len(dialects) == 0:
return constants.dialect
final_order = []
for dialect in dialects:
for o in dialect['order']:
if o not in final_order:
final_order.append(o)
dialect = dialects[0]
dialect['order'] = final_order
return dialect
def make_query(args, other=None, limit=None, strand=None, featuretype=None,
extra=None, order_by=None, reverse=False,
completely_within=False):
"""
Multi-purpose, bare-bones ORM function.
This function composes queries given some commonly-used kwargs that can be
passed to FeatureDB methods (like .parents(), .children(), .all_features(),
.features_of_type()). It handles, in one place, things like restricting to
featuretype, limiting to a genomic range, limiting to one strand, or
returning results ordered by different criteria.
Additional filtering/subsetting/sorting behavior should be added here.
(Note: this ended up having better performance (and flexibility) than
sqlalchemy)
This function also provides support for additional JOINs etc (supplied via
the `other` kwarg) and extra conditional clauses (`extra` kwarg). See the
`_QUERY` var below for the order in which they are used.
For example, FeatureDB._relation uses `other` to supply the JOIN
substatment, and that same method also uses `extra` to supply the
"relations.level = ?" substatment (see the source for FeatureDB._relation
for more details).
`args` contains the arguments that will ultimately be supplied to the
sqlite3.connection.execute function. It may be further populated below --
for example, if strand="+", then the query will include a strand clause,
and the strand will be appended to the args.
`args` can be pre-filled with args that are passed to `other` and `extra`.
"""
_QUERY = ("{_SELECT} {OTHER} {EXTRA} {FEATURETYPE} "
"{LIMIT} {STRAND} {ORDER_BY}")
# Construct a dictionary `d` that will be used later as _QUERY.format(**d).
# Default is just _SELECT, which returns all records in the features table.
# (Recall that constants._SELECT gets the fields in the order needed to
# reconstruct a Feature)
d = dict(_SELECT=constants._SELECT, OTHER="", FEATURETYPE="", LIMIT="",
STRAND="", ORDER_BY="", EXTRA="")
if other:
d['OTHER'] = other
if extra:
d['EXTRA'] = extra
# If `other` and `extra` take args (that is, they have "?" in them), then
# they should have been provided in `args`.
required_args = (d['EXTRA'] + d['OTHER']).count('?')
if len(args) != required_args:
raise ValueError('Not enough args (%s) for subquery' % args)
# Below, if a kwarg is specified, then we create sections of the query --
# appending to args as necessary.
#
# IMPORTANT: the order in which things are processed here is the same as
# the order of the placeholders in _QUERY. That is, we need to build the
# args in parallel with the query to avoid putting the wrong args in the
# wrong place.
if featuretype:
# Handle single or iterables of featuretypes.
#
# e.g., "featuretype = 'exon'"
#
# or, "featuretype IN ('exon', 'CDS')"
if isinstance(featuretype, agouti_pkg.six.string_types):
d['FEATURETYPE'] = "features.featuretype = ?"
args.append(featuretype)
else:
d['FEATURETYPE'] = (
"features.featuretype IN (%s)"
% (','.join(["?" for _ in featuretype]))
)
args.extend(featuretype)
if limit:
# Restrict to a genomic region. Makes use of the UCSC binning strategy
# for performance.
#
# `limit` is a string or a tuple of (chrom, start, stop)
#
# e.g., "seqid = 'chr2L' AND start > 1000 AND end < 5000"
if isinstance(limit, agouti_pkg.six.string_types):
seqid, startstop = limit.split(':')
start, end = startstop.split('-')
else:
seqid, start, end = limit
# Identify possible bins
_bins = bins.bins(int(start), int(end), one=False)
# Use different overlap conditions
if completely_within:
d['LIMIT'] = (
"features.seqid = ? AND features.start >= ? "
"AND features.end <= ?"
)
args.extend([seqid, start, end])
else:
d['LIMIT'] = (
"features.seqid = ? AND features.start <= ? "
"AND features.end >= ?"
)
# Note order (end, start)
args.extend([seqid, end, start])
# Add bin clause. See issue #45.
if len(_bins) < 900:
d['LIMIT'] += " AND features.bin IN (%s)" % (','.join(map(str, _bins)))
if strand:
# e.g., "strand = '+'"
d['STRAND'] = "features.strand = ?"
args.append(strand)
# TODO: implement file_order!
valid_order_by = constants._gffkeys_extra + ['file_order', 'length']
_order_by = []
if order_by:
# Default is essentially random order.
#
# e.g. "ORDER BY seqid, start DESC"
if isinstance(order_by, agouti_pkg.six.string_types):
_order_by.append(order_by)
else:
for k in order_by:
if k not in valid_order_by:
raise ValueError("%s not a valid order-by value in %s"
% (k, valid_order_by))
# There's no length field, so order by end - start
if k == 'length':
k = '(end - start)'
_order_by.append(k)
_order_by = ','.join(_order_by)
if reverse:
direction = 'DESC'
else:
direction = 'ASC'
d['ORDER_BY'] = 'ORDER BY %s %s' % (_order_by, direction)
# Ensure only one "WHERE" is included; the rest get "AND ". This is ugly.
where = False
if "where" in d['OTHER'].lower():
where = True
for i in ['EXTRA', 'FEATURETYPE', 'LIMIT', 'STRAND']:
if d[i]:
if not where:
d[i] = "WHERE " + d[i]
where = True
else:
d[i] = "AND " + d[i]
return _QUERY.format(**d), args
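# Illustrative sketch of make_query(): compose a SELECT restricted to genes on
# the plus strand of an arbitrary region, ordered by start coordinate.
def _make_query_example():
    query, args = make_query(
        args=[],
        featuretype='gene',
        limit=('chr1', 10000, 50000),
        strand='+',
        order_by='start',
    )
    # `query` contains '?' placeholders; `args` holds the values to bind, in
    # the same order the clauses were appended above.
    return query, args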
def _bin_from_dict(d):
"""
Given a dictionary yielded by the parser, return the genomic "UCSC" bin
"""
try:
start = int(d['start'])
end = int(d['end'])
return bins.bins(start, end, one=True)
# e.g., if "."
except ValueError:
return None
def _jsonify(x):
"""Use most compact form of JSON"""
if isinstance(x, dict_class):
return agouti_pkg.simplejson.dumps(x._d, separators=(',', ':'))
return agouti_pkg.simplejson.dumps(x, separators=(',', ':'))
def _unjsonify(x, isattributes=False):
"""Convert JSON string to an ordered defaultdict."""
if isattributes:
obj = agouti_pkg.simplejson.loads(x)
return dict_class(obj)
return agouti_pkg.simplejson.loads(x)
def _feature_to_fields(f, jsonify=True):
"""
Convert feature to tuple, for faster sqlite3 import
"""
x = []
for k in constants._keys:
v = getattr(f, k)
if jsonify and (k in ('attributes', 'extra')):
x.append(_jsonify(v))
else:
x.append(v)
return tuple(x)
def _dict_to_fields(d, jsonify=True):
"""
Convert dict to tuple, for faster sqlite3 import
"""
x = []
for k in constants._keys:
v = d[k]
if jsonify and (k in ('attributes', 'extra')):
x.append(_jsonify(v))
else:
x.append(v)
return tuple(x)
def asinterval(feature):
"""
Converts a gffutils.Feature to a pybedtools.Interval
"""
import pybedtools
return pybedtools.create_interval_from_list(str(feature).split('\t'))
def merge_attributes(attr1, attr2):
"""
Merges two attribute dictionaries into a single dictionary.
Parameters
----------
`attr1`, `attr2` : dict
Returns
-------
dict
"""
new_d = copy.deepcopy(attr1)
new_d.update(attr2)
    # all of attr2's key/value pairs just overwrote attr1's; fix that below
for k, v in new_d.items():
if not isinstance(v, list):
new_d[k] = [v]
for k, v in agouti_pkg.six.iteritems(attr1):
if k in attr2:
if not isinstance(v, list):
v = [v]
new_d[k].extend(v)
return dict((k, list(set(v))) for k, v in new_d.items())
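# Illustrative sketch of merge_attributes(): values from both dictionaries end
# up in de-duplicated lists (ordering within each list is not guaranteed).
def _merge_attributes_example():
    attr1 = {'gene_id': ['gene0'], 'source': ['havana']}
    attr2 = {'gene_id': ['gene0'], 'tag': ['basic']}
    merged = merge_attributes(attr1, attr2)
    # merged == {'gene_id': ['gene0'], 'source': ['havana'], 'tag': ['basic']}
    return merged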
def dialect_compare(dialect1, dialect2):
"""
Compares two dialects.
"""
orig = set(dialect1.items())
new = set(dialect2.items())
return dict(
added=dict(list(new.difference(orig))),
removed=dict(list(orig.difference(new)))
)
def sanitize_gff_db(db, gid_field="gid"):
"""
Sanitize given GFF db. Returns a sanitized GFF db.
Sanitizing means:
- Ensuring that start < stop for all features
- Standardizing gene units by adding a 'gid' attribute
that makes the file grep-able
TODO: Do something with negative coordinates?
"""
def sanitized_iterator():
# Iterate through the database by each gene's records
for gene_recs in db.iter_by_parent_childs():
# The gene's ID
gene_id = gene_recs[0].id
for rec in gene_recs:
# Fixup coordinates if necessary
if rec.start > rec.stop:
rec.start, rec.stop = rec.stop, rec.start
# Add a gene id field to each gene's records
rec.attributes[gid_field] = [gene_id]
yield rec
# Return sanitized GFF database
sanitized_db = \
agouti_pkg.gffutils.create_db(sanitized_iterator(), ":memory:",
verbose=False)
return sanitized_db
def sanitize_gff_file(gff_fname,
in_memory=True,
in_place=False):
"""
Sanitize a GFF file.
"""
db = None
if is_gff_db(gff_fname):
# It's a database filename, so load it
db = agouti_pkg.gffutils.FeatureDB(gff_fname)
else:
# Need to create a database for file
if in_memory:
db = agouti_pkg.gffutils.create_db(gff_fname, ":memory:",
verbose=False)
else:
db = get_gff_db(gff_fname)
if in_place:
gff_out = gffwriter.GFFWriter(gff_fname,
in_place=in_place)
else:
gff_out = gffwriter.GFFWriter(sys.stdout)
sanitized_db = sanitize_gff_db(db)
for gene_rec in sanitized_db.all_features(featuretype="gene"):
gff_out.write_gene_recs(sanitized_db, gene_rec.id)
gff_out.close()
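# Illustrative sketch (not executed): sanitizing a GFF3 file and writing
# the fixed records to stdout; the filename is hypothetical.
#
# >>> sanitize_gff_file("annotation.gff3", in_memory=True, in_place=False)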
def annotate_gff_db(db):
"""
Annotate a GFF file by cross-referencing it with another GFF
file, e.g. one containing gene models.
"""
pass
def is_gff_db(db_fname):
"""
Return True if the given filename is a GFF database.
For now, rely on .db extension.
"""
if not os.path.isfile(db_fname):
return False
if db_fname.endswith(".db"):
return True
return False
def to_unicode(obj, encoding='utf-8'):
if isinstance(obj, agouti_pkg.six.string_types):
if not isinstance(obj, agouti_pkg.six.text_type):
obj = agouti_pkg.six.text_type(obj, encoding)
return obj
def canonical_transcripts(db, fasta_filename):
import pyfaidx
fasta = pyfaidx.Fasta(fasta_filename, as_raw=True)
for gene in db.features_of_type('gene'):
        # exon_list will contain (CDS_length, total_length, transcript, [exons]) tuples.
exon_list = []
for ti, transcript in enumerate(db.children(gene, level=1)):
cds_len = 0
total_len = 0
exons = list(db.children(transcript, level=1))
for exon in exons:
exon_length = len(exon)
if exon.featuretype == 'CDS':
cds_len += exon_length
total_len += exon_length
exon_list.append((cds_len, total_len, transcript, exons))
# If we have CDS, then use the longest coding transcript
        if max(i[0] for i in exon_list) > 0:
            # pick the transcript with the most coding sequence
            best = max(exon_list, key=lambda x: x[0])
        # Otherwise, just choose the longest transcript
        else:
            best = max(exon_list, key=lambda x: x[1])
canonical_exons = best[-1]
transcript = best[-2]
seqs = [i.sequence(fasta) for i in canonical_exons]
yield transcript, ''.join(seqs)
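# Illustrative sketch (not executed): writing the canonical transcript
# sequence for each gene; both filenames below are hypothetical.
#
# >>> db = agouti_pkg.gffutils.FeatureDB("annotation.db")
# >>> for transcript, seq in canonical_transcripts(db, "genome.fa"):
# ...     print(">%s\n%s" % (transcript.id, seq))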
##
## Helpers for gffutils-cli
##
## TODO: move clean_gff here?
##
def get_gff_db(gff_fname,
ext=".db"):
"""
Get db for GFF file. If the database has a .db file,
load that. Otherwise, create a named temporary file,
serialize the db to that, and return the loaded database.
"""
if not os.path.isfile(gff_fname):
# Not sure how we should deal with errors normally in
# gffutils -- Ryan?
raise ValueError("GFF %s does not exist." % (gff_fname))
    # `ext` already contains the leading dot
    candidate_db_fname = "%s%s" % (gff_fname, ext)
    if os.path.isfile(candidate_db_fname):
        # Standard .db file found, so load and return it
        return agouti_pkg.gffutils.FeatureDB(candidate_db_fname)
# Otherwise, we need to create a temporary but non-deleted
# file to store the db in. It'll be up to the user
    # of the function to delete the file when done.
## NOTE: Ryan must have a good scheme for dealing with this
## since pybedtools does something similar under the hood, i.e.
## creating temporary files as needed without over proliferation
db_fname = tempfile.NamedTemporaryFile(delete=False)
# Create the database for the gff file (suppress output
# when using function internally)
print("Creating db for %s" % (gff_fname))
t1 = time.time()
db = agouti_pkg.gffutils.create_db(gff_fname, db_fname.name,
merge_strategy="merge",
verbose=False)
t2 = time.time()
print(" - Took %.2f seconds" % (t2 - t1))
    return db
# end of agouti_pkg/gffutils/helpers.py
import os
import tempfile
import itertools
from agouti_pkg.gffutils.feature import feature_from_line
from agouti_pkg.gffutils.interface import FeatureDB
from agouti_pkg.gffutils import helpers
from textwrap import dedent
import agouti_pkg.six
from agouti_pkg.six.moves.urllib.request import urlopen
if agouti_pkg.six.PY3:
from urllib import parse as urlparse
else:
import urlparse
def peek(it, n):
_peek = []
for _ in range(n):
try:
_peek.append(agouti_pkg.six.next(it))
except StopIteration:
break
return _peek, itertools.chain(_peek, it)
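# Illustrative sketch (not executed): peek() returns the first n items and
# an iterator equivalent to the original (the peeked items are chained
# back on), which is how dialect detection inspects `checklines` features
# without consuming them.
#
# >>> first_two, restored = peek(iter([1, 2, 3]), 2)
# >>> first_two
# [1, 2]
# >>> list(restored)
# [1, 2, 3]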
class Directive(object):
def __init__(self, line):
self.info = line
class _BaseIterator(object):
def __init__(self, data, checklines=10, transform=None,
force_dialect_check=False, dialect=None):
"""
Base class for iterating over features. In general, you should use
DataIterator -- so see the docstring of class for argument
descriptions.
All subclasses -- _FileIterator, _URLIterator, _FeatureIterator,
_StringIterator -- gain the following behavior:
- self.current_item and self.current_item_number are set on every
iteration. This is very useful for debugging, or reporting to
the user exactly what item or line number caused the issue.
- transform a Feature before it gets yielded, filter out a Feature
- auto-detect dialect by peeking `checklines` items into the
iterator, and then re-reading those, applying the detected
dialect. If multiple dialects are found, use
helpers._choose_dialect to figure out the best one.
- keep track of directives
"""
self.data = data
self.checklines = checklines
self.current_item = None
self.current_item_number = None
self.dialect = None
self._observed_dialects = []
self.directives = []
self.transform = transform
self.warnings = []
if force_dialect_check and dialect is not None:
raise ValueError("force_dialect_check is True, but a dialect "
"is provided")
if force_dialect_check:
# In this case, self.dialect remains None. When
# parser._split_keyvals gets None as a dialect, it tries to infer
# a dialect.
self._iter = self._custom_iter()
elif dialect is not None:
self._observed_dialects = [dialect]
self.dialect = helpers._choose_dialect(self._observed_dialects)
self._iter = self._custom_iter()
else:
# Otherwise, check some lines to determine what the dialect should
# be
self.peek, self._iter = peek(self._custom_iter(), checklines)
self._observed_dialects = [i.dialect for i in self.peek]
self.dialect = helpers._choose_dialect(self._observed_dialects)
def _custom_iter(self):
raise NotImplementedError("Must define in subclasses")
def __iter__(self):
for i in self._iter:
i.dialect = self.dialect
if self.transform:
i = self.transform(i)
if i:
yield i
else:
yield i
def _directive_handler(self, directive):
self.directives.append(directive[2:])
class _FileIterator(_BaseIterator):
"""
Subclass for iterating over features provided as a filename
"""
def open_function(self, data):
data = os.path.expanduser(data)
if data.endswith('.gz'):
import gzip
return gzip.open(data)
return open(data)
def _custom_iter(self):
valid_lines = 0
for i, line in enumerate(self.open_function(self.data)):
if isinstance(line, agouti_pkg.six.binary_type):
line = line.decode('utf-8')
line = line.rstrip('\n\r')
self.current_item = line
self.current_item_number = i
if line == '##FASTA' or line.startswith('>'):
                # PEP 479: on Python 3.7+ a bare return, not StopIteration,
                # ends a generator cleanly
                return
if line.startswith('##'):
self._directive_handler(line)
continue
            if line.startswith('#') or len(line) == 0:
continue
# (If we got here it should be a valid line)
valid_lines += 1
yield feature_from_line(line, dialect=self.dialect)
class _UrlIterator(_FileIterator):
"""
Subclass for iterating over features provided as a URL
"""
def open_function(self, data):
response = urlopen(data)
# ideas from
# http://stackoverflow.com/a/17537107
# https://rationalpie.wordpress.com/2010/06/02/\
# python-streaming-gzip-decompression/
if data.endswith('.gz'):
import zlib
d = zlib.decompressobj(16 + zlib.MAX_WBITS)
READ_BLOCK_SIZE = 1024
def _iter():
last_line = ""
while True:
data = response.read(READ_BLOCK_SIZE)
if not data:
break
data = "".join((last_line, d.decompress(data).decode()))
lines = data.split('\n')
last_line = lines.pop()
for line in lines:
yield line + '\n'
yield last_line
return _iter()
else:
return response
class _FeatureIterator(_BaseIterator):
"""
Subclass for iterating over features that are already in an iterator
"""
def _custom_iter(self):
for i, feature in enumerate(self.data):
self.current_item = feature
self.current_item_number = i
yield feature
class _StringIterator(_FileIterator):
"""
Subclass for iterating over features provided as a string (e.g., from
file.read())
"""
def _custom_iter(self):
self.tmp = tempfile.NamedTemporaryFile(delete=False)
data = dedent(self.data)
if isinstance(data, agouti_pkg.six.text_type):
data = data.encode('utf-8')
self.tmp.write(data)
self.tmp.close()
self.data = self.tmp.name
for feature in super(_StringIterator, self)._custom_iter():
yield feature
os.unlink(self.tmp.name)
def is_url(url):
"""
Check to see if a URL has a valid protocol.
Parameters
----------
url : str or unicode
Returns
-------
True if `url` has a valid protocol False otherwise.
"""
try:
return urlparse.urlparse(url).scheme in urlparse.uses_netloc
except:
return False
def DataIterator(data, checklines=10, transform=None,
force_dialect_check=False, from_string=False, **kwargs):
"""
Iterate over features, no matter how they are provided.
Parameters
----------
data : str, iterable of Feature objs, FeatureDB
`data` can be a string (filename, URL, or contents of a file, if
from_string=True), any arbitrary iterable of features, or a FeatureDB
(in which case its all_features() method will be called).
checklines : int
Number of lines to check in order to infer a dialect.
transform : None or callable
If not None, `transform` should accept a Feature object as its only
argument and return either a (possibly modified) Feature object or
a value that evaluates to False. If the return value is False, the
feature will be skipped.
force_dialect_check : bool
If True, check the dialect of every feature. Thorough, but can be
slow.
from_string : bool
If True, `data` should be interpreted as the contents of a file rather
than the filename itself.
dialect : None or dict
Provide the dialect, which will override auto-detected dialects. If
provided, you should probably also use `force_dialect_check=False` and
`checklines=0` but this is not enforced.
"""
_kwargs = dict(data=data, checklines=checklines, transform=transform,
force_dialect_check=force_dialect_check, **kwargs)
if isinstance(data, agouti_pkg.six.string_types):
if from_string:
return _StringIterator(**_kwargs)
else:
if os.path.exists(data):
return _FileIterator(**_kwargs)
elif is_url(data):
return _UrlIterator(**_kwargs)
elif isinstance(data, FeatureDB):
_kwargs['data'] = data.all_features()
return _FeatureIterator(**_kwargs)
else:
        return _FeatureIterator(**_kwargs)
# end of agouti_pkg/gffutils/iterators.py
import os
import agouti_pkg.six
import sqlite3
import shutil
import warnings
from agouti_pkg.gffutils import bins
from agouti_pkg.gffutils import helpers
from agouti_pkg.gffutils import constants
from agouti_pkg.gffutils.feature import Feature
from agouti_pkg.gffutils.exceptions import FeatureNotFoundError
class FeatureDB(object):
# docstring to be filled in for methods that call out to
# helpers.make_query()
_method_doc = """
limit : string or tuple
Limit the results to a genomic region. If string, then of the form
"seqid:start-end"; if tuple, then (seqid, start, end).
strand : "-" | "+" | "."
Limit the results to one strand
featuretype : string or tuple
Limit the results to one or several featuretypes.
order_by : string or tuple
Order results by one or many fields; the string or tuple items must
be in: 'seqid', 'source', 'featuretype', 'start', 'end', 'score',
'strand', 'frame', 'attributes', 'extra'.
reverse : bool
Change sort order; only relevant if `order_by` is not None. By
default, results will be in ascending order, so use `reverse=True`
for descending.
completely_within : bool
If False (default), a single bp overlap with `limit` is sufficient
to return a feature; if True, then the feature must be completely
within `limit`. Only relevant when `limit` is not None.
"""
def __init__(self, dbfn, default_encoding='utf-8', keep_order=False,
pragmas=constants.default_pragmas,
sort_attribute_values=False,
text_factory=sqlite3.OptimizedUnicode):
"""
Connect to a database created by :func:`gffutils.create_db`.
Parameters
----------
dbfn : str
Path to a database created by :func:`gffutils.create_db`.
text_factory : callable
Optionally set the way sqlite3 handles strings. Default is
sqlite3.OptimizedUnicode, which returns ascii when possible,
unicode otherwise
encoding : str
When non-ASCII characters are encountered, assume they are in this
encoding.
keep_order : bool
If True, all features returned from this instance will have the
order of their attributes maintained. This can be turned on or off
database-wide by setting the `keep_order` attribute or with this
kwarg, or on a feature-by-feature basis by setting the `keep_order`
attribute of an individual feature.
Default is False, since this includes a sorting step that can get
time-consuming for many features.
pragmas : dict
Dictionary of pragmas to use when connecting to the database. See
http://www.sqlite.org/pragma.html for the full list of
possibilities, and constants.default_pragmas for the defaults.
These can be changed later using the :meth:`FeatureDB.set_pragmas`
method.
Notes
-----
`dbfn` can also be a subclass of :class:`_DBCreator`, useful for
when :func:`gffutils.create_db` is provided the ``dbfn=":memory:"``
kwarg.
"""
# Since specifying the string ":memory:" will actually try to connect
# to a new, separate (and empty) db in memory, we can alternatively
# pass in a sqlite connection instance to use its existing, in-memory
# db.
from agouti_pkg.gffutils import create
if isinstance(dbfn, create._DBCreator):
self.conn = dbfn.conn
self.dbfn = dbfn.dbfn
elif isinstance(dbfn, sqlite3.Connection):
self.conn = dbfn
self.dbfn = dbfn
# otherwise assume dbfn is a string.
elif dbfn == ':memory:':
raise ValueError(
"cannot connect to memory db; please provide the connection")
else:
if not os.path.exists(dbfn):
raise ValueError("Database file %s does not exist" % dbfn)
self.dbfn = dbfn
self.conn = sqlite3.connect(self.dbfn)
if text_factory is not None:
self.conn.text_factory = text_factory
self.conn.row_factory = sqlite3.Row
self.default_encoding = default_encoding
self.keep_order = keep_order
self.sort_attribute_values = sort_attribute_values
c = self.conn.cursor()
# Load some meta info
# TODO: this is a good place to check for previous versions, and offer
# to upgrade...
c.execute(
'''
SELECT version, dialect FROM meta
''')
version, dialect = c.fetchone()
self.version = version
self.dialect = helpers._unjsonify(dialect)
# Load directives from db
c.execute(
'''
SELECT directive FROM directives
''')
self.directives = [directive[0] for directive in c if directive]
# Load autoincrements so that when we add new features, we can start
# autoincrementing from where we last left off (instead of from 1,
# which would cause name collisions)
c.execute(
'''
SELECT base, n FROM autoincrements
''')
self._autoincrements = dict(c)
self.set_pragmas(pragmas)
if not self._analyzed():
warnings.warn(
"It appears that this database has not had the ANALYZE "
"sqlite3 command run on it. Doing so can dramatically "
"speed up queries, and is done by default for databases "
"created with gffutils >0.8.7.1 (this database was "
"created with version %s) Consider calling the analyze() "
"method of this object." % self.version)
def set_pragmas(self, pragmas):
"""
Set pragmas for the current database connection.
Parameters
----------
pragmas : dict
Dictionary of pragmas; see constants.default_pragmas for a template
and http://www.sqlite.org/pragma.html for a full list.
"""
self.pragmas = pragmas
c = self.conn.cursor()
c.executescript(
';\n'.join(
['PRAGMA %s=%s' % i for i in self.pragmas.items()]
)
)
self.conn.commit()
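    # Illustrative sketch (not executed): the pragma names below are
    # standard SQLite pragmas; the chosen values are only an example.
    #
    # >>> db.set_pragmas({'synchronous': 'NORMAL', 'journal_mode': 'MEMORY'})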
def _feature_returner(self, **kwargs):
"""
Returns a feature, adding additional database-specific defaults
"""
kwargs.setdefault('dialect', self.dialect)
kwargs.setdefault('keep_order', self.keep_order)
kwargs.setdefault('sort_attribute_values', self.sort_attribute_values)
return Feature(**kwargs)
def _analyzed(self):
res = self.execute(
"""
SELECT name FROM sqlite_master WHERE type='table'
AND name='sqlite_stat1';
""")
return len(list(res)) == 1
def schema(self):
"""
Returns the database schema as a string.
"""
c = self.conn.cursor()
c.execute(
'''
SELECT sql FROM sqlite_master
''')
results = []
for i, in c:
if i is not None:
results.append(i)
return '\n'.join(results)
def __getitem__(self, key):
if isinstance(key, Feature):
key = key.id
c = self.conn.cursor()
try:
c.execute(constants._SELECT + ' WHERE id = ?', (key,))
except sqlite3.ProgrammingError:
c.execute(
constants._SELECT + ' WHERE id = ?',
(key.decode(self.default_encoding),))
results = c.fetchone()
# TODO: raise error if more than one key is found
if results is None:
raise FeatureNotFoundError(key)
return self._feature_returner(**results)
def count_features_of_type(self, featuretype=None):
"""
Simple count of features.
Can be faster than "grep", and is faster than checking the length of
results from :meth:`gffutils.FeatureDB.features_of_type`.
Parameters
----------
featuretype : string
Feature type (e.g., "gene") to count. If None, then count *all*
features in the database.
Returns
-------
The number of features of this type, as an integer
"""
c = self.conn.cursor()
if featuretype is not None:
c.execute(
'''
SELECT count() FROM features
WHERE featuretype = ?
''', (featuretype,))
else:
c.execute(
'''
SELECT count() FROM features
''')
results = c.fetchone()
if results is not None:
results = results[0]
return results
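    # Illustrative sketch (not executed):
    #
    # >>> db.count_features_of_type("exon")   # number of exon features
    # >>> db.count_features_of_type()         # total number of features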
def features_of_type(self, featuretype, limit=None, strand=None,
order_by=None, reverse=False,
completely_within=False):
"""
Returns an iterator of :class:`gffutils.Feature` objects.
Parameters
----------
{_method_doc}
"""
query, args = helpers.make_query(
args=[],
limit=limit,
featuretype=featuretype,
order_by=order_by,
reverse=reverse,
strand=strand,
completely_within=completely_within,
)
for i in self._execute(query, args):
yield self._feature_returner(**i)
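    # Illustrative sketch (not executed): genes on the plus strand within a
    # region; the seqid and coordinates are hypothetical.
    #
    # >>> for gene in db.features_of_type("gene", limit="chr1:1-100000",
    # ...                                 strand="+", order_by="start"):
    # ...     print(gene.id)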
# TODO: convert this to a syntax similar to itertools.groupby
def iter_by_parent_childs(self, featuretype="gene", level=None,
order_by=None, reverse=False,
completely_within=False):
"""
For each parent of type `featuretype`, yield a list L of that parent
and all of its children (`[parent] + list(children)`). The parent will
always be L[0].
This is useful for "sanitizing" a GFF file for downstream tools.
Additional kwargs are passed to :meth:`FeatureDB.children`, and will
therefore only affect items L[1:] in each yielded list.
"""
# Get all the parent records of the requested feature type
parent_recs = self.all_features(featuretype=featuretype)
# Return a generator of these parent records and their
# children
for parent_rec in parent_recs:
unit_records = \
                [parent_rec] + list(self.children(parent_rec.id, level=level))
yield unit_records
def all_features(self, limit=None, strand=None, featuretype=None,
order_by=None, reverse=False, completely_within=False):
"""
Iterate through the entire database.
Returns
-------
A generator object that yields :class:`Feature` objects.
Parameters
----------
{_method_doc}
"""
query, args = helpers.make_query(
args=[],
limit=limit,
strand=strand,
featuretype=featuretype,
order_by=order_by,
reverse=reverse,
completely_within=completely_within
)
for i in self._execute(query, args):
yield self._feature_returner(**i)
def featuretypes(self):
"""
Iterate over feature types found in the database.
Returns
-------
A generator object that yields featuretypes (as strings)
"""
c = self.conn.cursor()
c.execute(
'''
SELECT DISTINCT featuretype from features
''')
for i, in c:
yield i
def _relation(self, id, join_on, join_to, level=None, featuretype=None,
order_by=None, reverse=False, completely_within=False,
limit=None):
# The following docstring will be included in the parents() and
# children() docstrings to maintain consistency, since they both
# delegate to this method.
"""
Parameters
----------
id : string or a Feature object
level : None or int
If `level=None` (default), then return all children regardless
of level. If `level` is an integer, then constrain to just that
level.
{_method_doc}
Returns
-------
A generator object that yields :class:`Feature` objects.
"""
if isinstance(id, Feature):
id = id.id
other = '''
JOIN relations
ON relations.{join_on} = features.id
WHERE relations.{join_to} = ?
'''.format(**locals())
args = [id]
level_clause = ''
if level is not None:
level_clause = 'relations.level = ?'
args.append(level)
query, args = helpers.make_query(
args=args,
other=other,
extra=level_clause,
featuretype=featuretype,
order_by=order_by,
reverse=reverse,
limit=limit,
completely_within=completely_within,
)
# modify _SELECT so that only unique results are returned
query = query.replace("SELECT", "SELECT DISTINCT")
for i in self._execute(query, args):
yield self._feature_returner(**i)
def children(self, id, level=None, featuretype=None, order_by=None,
reverse=False, limit=None, completely_within=False):
"""
Return children of feature `id`.
{_relation_docstring}
"""
return self._relation(
id, join_on='child', join_to='parent', level=level,
featuretype=featuretype, order_by=order_by, reverse=reverse,
limit=limit, completely_within=completely_within)
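    # Illustrative sketch (not executed): first-level children of a gene;
    # "gene0" is a hypothetical feature ID.
    #
    # >>> transcripts = list(db.children("gene0", level=1, featuretype="mRNA"))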
def parents(self, id, level=None, featuretype=None, order_by=None,
reverse=False, completely_within=False, limit=None):
"""
Return parents of feature `id`.
{_relation_docstring}
"""
return self._relation(
id, join_on='parent', join_to='child', level=level,
featuretype=featuretype, order_by=order_by, reverse=reverse,
limit=limit, completely_within=completely_within)
def _execute(self, query, args):
self._last_query = query
self._last_args = args
c = self.conn.cursor()
c.execute(query, tuple(args))
return c
def execute(self, query):
"""
Execute arbitrary queries on the db.
.. seealso::
:class:`FeatureDB.schema` may be helpful when writing your own
queries.
Parameters
----------
query : str
Query to execute -- trailing ";" optional.
Returns
-------
A sqlite3.Cursor object that can be iterated over.
"""
c = self.conn.cursor()
return c.execute(query)
def analyze(self):
"""
Runs the sqlite ANALYZE command to potentially speed up queries
dramatically.
"""
self.execute('ANALYZE features')
self.conn.commit()
def region(self, region=None, seqid=None, start=None, end=None,
strand=None, featuretype=None, completely_within=False):
"""
Return features within specified genomic coordinates.
Specifying genomic coordinates can be done in a flexible manner
Parameters
----------
region : string, tuple, or Feature instance
If string, then of the form "seqid:start-end". If tuple, then
(seqid, start, end). If :class:`Feature`, then use the features
seqid, start, and end values.
This argument is mutually exclusive with start/end/seqid.
*Note*: By design, even if a feature is provided, its strand will
be ignored. If you want to restrict the output by strand, use the
separate `strand` kwarg.
strand : + | - | . | None
If `strand` is provided, then only those features exactly matching
`strand` will be returned. So `strand='.'` will only return
unstranded features. Default is `strand=None` which does not
restrict by strand.
seqid, start, end, strand
Mutually exclusive with `region`. These kwargs can be used to
approximate slice notation; see "Details" section below.
featuretype : None, string, or iterable
If not None, then restrict output. If string, then only report
that feature type. If iterable, then report all featuretypes in
the iterable.
completely_within : bool
By default (`completely_within=False`), returns features that
partially or completely overlap `region`. If
`completely_within=True`, features that are completely within
`region` will be returned.
Notes
-------
The meaning of `seqid`, `start`, and `end` is interpreted as follows:
====== ====== ===== ======================================
seqid start end meaning
====== ====== ===== ======================================
str int int equivalent to `region` kwarg
None int int features from all chroms within coords
str None int equivalent to [:end] slice notation
str int None equivalent to [start:] slice notation
None None None equivalent to FeatureDB.all_features()
====== ====== ===== ======================================
If performance is a concern, use `completely_within=True`. This allows
the query to be optimized by only looking for features that fall in the
precise genomic bin (same strategy as UCSC Genome Browser and
BEDTools). Otherwise all features' start/stop coords need to be
searched to see if they partially overlap the region of interest.
Examples
--------
- `region(seqid="chr1", start=1000)` returns all features on chr1 that
start or extend past position 1000
- `region(seqid="chr1", start=1000, completely_within=True)` returns
all features on chr1 that start past position 1000.
- `region("chr1:1-100", strand="+", completely_within=True)` returns
only plus-strand features that completely fall within positions 1 to
100 on chr1.
Returns
-------
A generator object that yields :class:`Feature` objects.
"""
# Argument handling.
if region is not None:
if (seqid is not None) or (start is not None) or (end is not None):
raise ValueError(
"If region is supplied, do not supply seqid, "
"start, or end as separate kwargs")
if isinstance(region, agouti_pkg.six.string_types):
toks = region.split(':')
if len(toks) == 1:
seqid = toks[0]
start, end = None, None
else:
seqid, coords = toks[:2]
if len(toks) == 3:
strand = toks[2]
start, end = coords.split('-')
elif isinstance(region, Feature):
seqid = region.seqid
start = region.start
end = region.end
strand = region.strand
# otherwise assume it's a tuple
else:
seqid, start, end = region[:3]
# e.g.,
# completely_within=True..... start >= {start} AND end <= {end}
# completely_within=False.... start < {end} AND end > {start}
if completely_within:
start_op = '>='
end_op = '<='
else:
start_op = '<'
end_op = '>'
end, start = start, end
args = []
position_clause = []
if seqid is not None:
position_clause.append('seqid = ?')
args.append(seqid)
if start is not None:
start = int(start)
position_clause.append('start %s ?' % start_op)
args.append(start)
if end is not None:
end = int(end)
position_clause.append('end %s ?' % end_op)
args.append(end)
position_clause = ' AND '.join(position_clause)
# Only use bins if we have defined boundaries and completely_within is
# True. Otherwise you can't know how far away a feature stretches
# (which means bins are not computable ahead of time)
if (start is not None) and (end is not None) and completely_within:
_bins = list(bins.bins(start, end, one=False))
# See issue #45
if len(_bins) < 900:
                _bin_clause = ' or '.join(['bin = ?' for _ in _bins])
_bin_clause = 'AND ( %s )' % _bin_clause
args += _bins
else:
_bin_clause = ''
else:
_bin_clause = ''
query = ' '.join([
constants._SELECT,
'WHERE ',
position_clause,
_bin_clause])
# Add the featuretype clause
if featuretype is not None:
if isinstance(featuretype, agouti_pkg.six.string_types):
featuretype = [featuretype]
feature_clause = ' or '.join(
['featuretype = ?' for _ in featuretype])
query += ' AND (%s) ' % feature_clause
args.extend(featuretype)
if strand is not None:
strand_clause = ' and strand = ? '
query += strand_clause
args.append(strand)
c = self.conn.cursor()
self._last_query = query
self._last_args = args
self._context = {
'start': start,
'end': end,
'seqid': seqid,
'region': region,
}
c.execute(query, tuple(args))
for i in c:
yield self._feature_returner(**i)
def interfeatures(self, features, new_featuretype=None,
merge_attributes=True, dialect=None,
attribute_func=None, update_attributes=None):
"""
Construct new features representing the space between features.
For example, if `features` is a list of exons, then this method will
return the introns. If `features` is a list of genes, then this method
will return the intergenic regions.
Providing N features will return N - 1 new features.
This method purposefully does *not* do any merging or sorting of
coordinates, so you may want to use :meth:`FeatureDB.merge` first.
The new features' attributes will be a merge of the neighboring
features' attributes. This is useful if you have provided a list of
exons; the introns will then retain the transcript and/or gene parents.
Parameters
----------
features : iterable of :class:`feature.Feature` instances
Sorted, merged iterable
new_featuretype : string or None
The new features will all be of this type, or, if None (default)
then the featuretypes will be constructed from the neighboring
features, e.g., `inter_exon_exon`.
attribute_func : callable or None
If None, then nothing special is done to the attributes. If
callable, then the callable accepts two attribute dictionaries and
returns a single attribute dictionary. If `merge_attributes` is
True, then `attribute_func` is called before `merge_attributes`.
This could be useful for manually managing IDs for the new
features.
update_attributes : dict
After attributes have been modified and merged, this dictionary can
be used to replace parts of the attributes dictionary.
Returns
-------
A generator that yields :class:`Feature` objects
"""
for i, f in enumerate(features):
# no inter-feature for the first one
if i == 0:
interfeature_start = f.stop
last_feature = f
continue
interfeature_stop = f.start
if new_featuretype is None:
new_featuretype = 'inter_%s_%s' % (
last_feature.featuretype, f.featuretype)
assert last_feature.strand == f.strand
assert last_feature.chrom == f.chrom
strand = last_feature.strand
chrom = last_feature.chrom
# Shrink
interfeature_start += 1
interfeature_stop -= 1
if merge_attributes:
new_attributes = helpers.merge_attributes(
last_feature.attributes, f.attributes)
else:
new_attributes = {}
if update_attributes:
new_attributes.update(update_attributes)
new_bin = bins.bins(
interfeature_start, interfeature_stop, one=True)
_id = None
fields = dict(
seqid=chrom,
source='gffutils_derived',
featuretype=new_featuretype,
start=interfeature_start,
end=interfeature_stop,
score='.',
strand=strand,
frame='.',
attributes=new_attributes,
bin=new_bin)
if dialect is None:
# Support for @classmethod -- if calling from the class, then
# self.dialect is not defined, so defer to Feature's default
# (which will be constants.dialect, or GFF3).
try:
dialect = self.dialect
except AttributeError:
dialect = None
yield self._feature_returner(**fields)
            interfeature_start = f.stop
            # remember this feature so the next interfeature is built from
            # its true neighbors
            last_feature = f
def delete(self, features, make_backup=True, **kwargs):
"""
Delete features from database.
features : str, iterable, FeatureDB instance
If FeatureDB, all features will be used. If string, assume it's the
ID of the feature to remove. Otherwise, assume it's an iterable of
Feature objects. The classes in gffutils.iterators may be helpful
in this case.
make_backup : bool
If True, and the database you're about to update is a file on disk,
makes a copy of the existing database and saves it with a .bak
extension.
Returns
-------
FeatureDB object, with features deleted.
"""
if make_backup:
if isinstance(self.dbfn, agouti_pkg.six.string_types):
shutil.copy2(self.dbfn, self.dbfn + '.bak')
c = self.conn.cursor()
query1 = """
DELETE FROM features WHERE id = ?
"""
query2 = """
DELETE FROM relations WHERE parent = ? OR child = ?
"""
if isinstance(features, FeatureDB):
features = features.all_features()
if isinstance(features, agouti_pkg.six.string_types):
features = [features]
if isinstance(features, Feature):
features = [features]
for feature in features:
if isinstance(feature, agouti_pkg.six.string_types):
_id = feature
else:
_id = feature.id
c.execute(query1, (_id,))
c.execute(query2, (_id, _id))
self.conn.commit()
return self
def update(self, data, make_backup=True, **kwargs):
"""
Update database with features in `data`.
data : str, iterable, FeatureDB instance
If FeatureDB, all data will be used. If string, assume it's
a filename of a GFF or GTF file. Otherwise, assume it's an
iterable of Feature objects. The classes in gffutils.iterators may
be helpful in this case.
make_backup : bool
If True, and the database you're about to update is a file on disk,
makes a copy of the existing database and saves it with a .bak
extension.
Notes
-----
Other kwargs are used in the same way as in gffutils.create_db; see the
help for that function for details.
Returns
-------
FeatureDB with updated features.
"""
        from agouti_pkg.gffutils import create
        from agouti_pkg.gffutils import iterators
if make_backup:
if isinstance(self.dbfn, agouti_pkg.six.string_types):
shutil.copy2(self.dbfn, self.dbfn + '.bak')
# get iterator-specific kwargs
_iterator_kwargs = {}
for k, v in kwargs.items():
if k in constants._iterator_kwargs:
_iterator_kwargs[k] = v
# Handle all sorts of input
data = iterators.DataIterator(data, **_iterator_kwargs)
if self.dialect['fmt'] == 'gtf':
if 'id_spec' not in kwargs:
kwargs['id_spec'] = {
'gene': 'gene_id', 'transcript': 'transcript_id'}
db = create._GTFDBCreator(
data=data, dbfn=self.dbfn, dialect=self.dialect, **kwargs)
elif self.dialect['fmt'] == 'gff3':
if 'id_spec' not in kwargs:
kwargs['id_spec'] = 'ID'
db = create._GFFDBCreator(
data=data, dbfn=self.dbfn, dialect=self.dialect, **kwargs)
else:
raise ValueError
db._populate_from_lines(data)
db._update_relations()
db._finalize()
return db
def add_relation(self, parent, child, level, parent_func=None,
child_func=None):
"""
Manually add relations to the database.
Parameters
----------
parent : str or Feature instance
Parent feature to add.
child : str or Feature instance
Child feature to add
level : int
Level of the relation. For example, if parent is a gene and child
is an mRNA, then you might want level to be 1. But if child is an
exon, then level would be 2.
parent_func, child_func : callable
These optional functions control how attributes are updated in the
database. They both have the signature `func(parent, child)` and
must return a [possibly modified] Feature instance. For example,
we could add the child's database id as the "child" attribute in
the parent::
def parent_func(parent, child):
parent.attributes['child'] = child.id
and add the parent's "gene_id" as the child's "Parent" attribute::
def child_func(parent, child):
child.attributes['Parent'] = parent['gene_id']
Returns
-------
FeatureDB object with new relations added.
"""
if isinstance(parent, agouti_pkg.six.string_types):
parent = self[parent]
if isinstance(child, agouti_pkg.six.string_types):
child = self[child]
c = self.conn.cursor()
c.execute('''
INSERT INTO relations (parent, child, level)
VALUES (?, ?, ?)''', (parent.id, child.id, level))
if parent_func is not None:
parent = parent_func(parent, child)
self._update(parent, c)
if child_func is not None:
child = child_func(parent, child)
self._update(child, c)
self.conn.commit()
return self
def _update(self, feature, cursor):
values = [list(feature.astuple()) + [feature.id]]
cursor.execute(
constants._UPDATE, *tuple(values))
def _insert(self, feature, cursor):
"""
Insert a feature into the database.
"""
try:
cursor.execute(constants._INSERT, feature.astuple())
except sqlite3.ProgrammingError:
cursor.execute(
constants._INSERT, feature.astuple(self.default_encoding))
def create_introns(self, exon_featuretype='exon',
grandparent_featuretype='gene', parent_featuretype=None,
new_featuretype='intron', merge_attributes=True):
"""
Create introns from existing annotations.
Parameters
----------
exon_featuretype : string
Feature type to use in order to infer introns. Typically `"exon"`.
grandparent_featuretype : string
If `grandparent_featuretype` is not None, then group exons by
            children of this featuretype. If `grandparent_featuretype` is
"gene" (default), then introns will be created for all first-level
children of genes. This may include mRNA, rRNA, ncRNA, etc. If
you only want to infer introns from one of these featuretypes
(e.g., mRNA), then use the `parent_featuretype` kwarg which is
mutually exclusive with `grandparent_featuretype`.
parent_featuretype : string
If `parent_featuretype` is not None, then only use this featuretype
to infer introns. Use this if you only want a subset of
featuretypes to have introns (e.g., "mRNA" only, and not ncRNA or
rRNA). Mutually exclusive with `grandparent_featuretype`.
new_featuretype : string
Feature type to use for the inferred introns; default is
`"intron"`.
merge_attributes : bool
Whether or not to merge attributes from all exons. If False then no
attributes will be created for the introns.
Returns
-------
A generator object that yields :class:`Feature` objects representing
new introns
Notes
-----
The returned generator can be passed directly to the
:meth:`FeatureDB.update` method to permanently add them to the
database, e.g., ::
db.update(db.create_introns())
"""
if (grandparent_featuretype and parent_featuretype) or (
grandparent_featuretype is None and parent_featuretype is None
):
raise ValueError("exactly one of `grandparent_featuretype` or "
"`parent_featuretype` should be provided")
if grandparent_featuretype:
def child_gen():
for gene in self.features_of_type(grandparent_featuretype):
for child in self.children(gene, level=1):
yield child
elif parent_featuretype:
def child_gen():
for child in self.features_of_type(parent_featuretype):
yield child
for child in child_gen():
exons = self.children(child, level=1, featuretype=exon_featuretype,
order_by='start')
for intron in self.interfeatures(
exons, new_featuretype=new_featuretype,
merge_attributes=merge_attributes, dialect=self.dialect
):
yield intron
def merge(self, features, ignore_strand=False):
"""
Merge overlapping features together.
Parameters
----------
features : iterator of Feature instances
ignore_strand : bool
If True, features on multiple strands will be merged, and the final
strand will be set to '.'. Otherwise, ValueError will be raised if
            trying to merge features on different strands.
Returns
-------
A generator object that yields :class:`Feature` objects representing
the newly merged features.
"""
# Consume iterator up front...
features = list(features)
if len(features) == 0:
            # PEP 479: on Python 3.7+ an empty generator is produced by
            # returning, not by raising StopIteration
            return
# Either set all strands to '+' or check for strand-consistency.
if ignore_strand:
strand = '.'
else:
strands = [i.strand for i in features]
if len(set(strands)) > 1:
raise ValueError('Specify ignore_strand=True to force merging '
'of multiple strands')
strand = strands[0]
# Sanity check to make sure all features are from the same chromosome.
chroms = [i.chrom for i in features]
if len(set(chroms)) > 1:
raise NotImplementedError('Merging multiple chromosomes not '
'implemented')
chrom = chroms[0]
# To start, we create a merged feature of just the first feature.
current_merged_start = features[0].start
current_merged_stop = features[0].stop
# We don't need to check the first one, so start at feature #2.
for feature in features[1:]:
# Does this feature start within the currently merged feature?...
if feature.start <= current_merged_stop + 1:
# ...It starts within, so leave current_merged_start where it
# is. Does it extend any farther?
if feature.stop >= current_merged_stop:
# Extends further, so set a new stop position
current_merged_stop = feature.stop
else:
# If feature.stop < current_merged_stop, it's completely
# within the previous feature. Nothing more to do.
continue
else:
# The start position is outside the merged feature, so we're
# done with the current merged feature. Prepare for output...
merged_feature = dict(
seqid=feature.chrom,
source='.',
featuretype=feature.featuretype,
start=current_merged_start,
end=current_merged_stop,
score='.',
strand=strand,
frame='.',
attributes='')
yield self._feature_returner(**merged_feature)
# and we start a new one, initializing with this feature's
# start and stop.
current_merged_start = feature.start
current_merged_stop = feature.stop
# need to yield the last one.
        if len(features) == 1:
            feature = features[0]
        merged_feature = dict(
            seqid=feature.chrom,
            source='.',
            featuretype=feature.featuretype,
            start=current_merged_start,
            end=current_merged_stop,
            score='.',
            strand=strand,
            frame='.',
            attributes='')
        yield self._feature_returner(**merged_feature)
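    # Illustrative sketch (not executed): collapsing a gene's overlapping
    # exons into non-redundant blocks; "gene0" is a hypothetical ID.
    #
    # >>> exons = db.children("gene0", featuretype="exon", order_by="start")
    # >>> merged = list(db.merge(exons))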
def children_bp(self, feature, child_featuretype='exon', merge=False,
ignore_strand=False):
"""
Total bp of all children of a featuretype.
Useful for getting the exonic bp of an mRNA.
Parameters
----------
feature : str or Feature instance
child_featuretype : str
Which featuretype to consider. For example, to get exonic bp of an
mRNA, use `child_featuretype='exon'`.
merge : bool
Whether or not to merge child features together before summing
them.
ignore_strand : bool
If True, then overlapping features on different strands will be
merged together; otherwise, merging features with different strands
will result in a ValueError.
Returns
-------
Integer representing the total number of bp.
"""
children = self.children(feature, featuretype=child_featuretype,
order_by='start')
if merge:
children = self.merge(children, ignore_strand=ignore_strand)
total = 0
for child in children:
total += len(child)
return total
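    # Illustrative sketch (not executed): total exonic bp of a transcript;
    # "transcript0" is a hypothetical ID.
    #
    # >>> db.children_bp("transcript0", child_featuretype="exon", merge=True)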
def bed12(self, feature, block_featuretype=['exon'],
thick_featuretype=['CDS'], thin_featuretype=None,
name_field='ID', color=None):
"""
Converts `feature` into a BED12 format.
GFF and GTF files do not necessarily define genes consistently, so this
        method provides flexibility in specifying what to call a "transcript".
Parameters
----------
feature : str or Feature instance
In most cases, this feature should be a transcript rather than
a gene.
block_featuretype : str or list
Which featuretype to use as the exons. These are represented as
blocks in the BED12 format. Typically 'exon'.
Use the `thick_featuretype` and `thin_featuretype` arguments to
control the display of CDS as thicker blocks and UTRs as thinner
blocks.
Note that the features for `thick` or `thin` are *not*
automatically included in the blocks; if you do want them included,
            then those featuretypes should be added to this `block_featuretype`
list.
If no child features of type `block_featuretype` are found, then
the full `feature` is returned in BED12 format as if it had
a single exon.
thick_featuretype : str or list
Child featuretype(s) to use in order to determine the boundaries of
the "thick" blocks. In BED12 format, these represent coding
sequences; typically this would be set to "CDS". This argument is
mutually exclusive with `thin_featuretype`.
Specifically, the BED12 thickStart will be the start coord of the
first `thick` item and the thickEnd will be the stop coord of the
last `thick` item.
thin_featuretype : str or list
Child featuretype(s) to use in order to determine the boundaries of
the "thin" blocks. In BED12 format, these represent untranslated
regions. Typically "utr" or ['three_prime_UTR', 'five_prime_UTR'].
Mutually exclusive with `thick_featuretype`.
Specifically, the BED12 thickStart field will be the stop coord of
the first `thin` item and the thickEnd field will be the start
coord of the last `thin` item.
name_field : str
Which attribute of `feature` to use as the feature's name. If this
field is not present, a "." placeholder will be used instead.
color : None or str
If None, then use black (0,0,0) as the RGB color; otherwise this
should be a comma-separated string of R,G,B values each of which
are integers in the range 0-255.
"""
if thick_featuretype and thin_featuretype:
raise ValueError("Can only specify one of `thick_featuertype` or "
"`thin_featuretype`")
exons = list(self.children(feature, featuretype=block_featuretype,
order_by='start'))
if len(exons) == 0:
exons = [feature]
feature = self[feature]
first = exons[0].start
last = exons[-1].stop
if first != feature.start:
raise ValueError(
"Start of first exon (%s) does not match start of feature (%s)"
% (first, feature.start))
if last != feature.stop:
raise ValueError(
"End of last exon (%s) does not match end of feature (%s)"
% (last, feature.stop))
if color is None:
color = '0,0,0'
color = color.replace(' ', '').strip()
# Use field names as defined at
# http://genome.ucsc.edu/FAQ/FAQformat.html#format1
chrom = feature.chrom
chromStart = feature.start - 1
chromEnd = feature.stop
orig = constants.always_return_list
constants.always_return_list = True
try:
name = feature[name_field][0]
except KeyError:
name = "."
constants.always_return_list = orig
score = feature.score
if score == '.':
score = '0'
strand = feature.strand
itemRgb = color
blockCount = len(exons)
blockSizes = [len(i) for i in exons]
blockStarts = [i.start - 1 - chromStart for i in exons]
if thick_featuretype:
thick = list(self.children(feature, featuretype=thick_featuretype,
order_by='start'))
if len(thick) == 0:
thickStart = feature.start
thickEnd = feature.stop
else:
thickStart = thick[0].start - 1 # BED 0-based coords
thickEnd = thick[-1].stop
if thin_featuretype:
thin = list(self.children(feature, featuretype=thin_featuretype,
order_by='start'))
if len(thin) == 0:
thickStart = feature.start
thickEnd = feature.stop
else:
thickStart = thin[0].stop
thickEnd = thin[-1].start - 1 # BED 0-based coords
tst = chromStart + blockStarts[-1] + blockSizes[-1]
assert tst == chromEnd, "tst=%s; chromEnd=%s" % (tst, chromEnd)
fields = [
chrom,
chromStart,
chromEnd,
name,
score,
strand,
thickStart,
thickEnd,
itemRgb,
blockCount,
','.join(map(str, blockSizes)),
','.join(map(str, blockStarts))]
return '\t'.join(map(str, fields))
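    # Illustrative sketch (not executed): exporting a transcript as a BED12
    # line with CDS children as the thick block; "transcript0" is
    # a hypothetical ID.
    #
    # >>> line = db.bed12("transcript0", block_featuretype="exon",
    # ...                 thick_featuretype="CDS", name_field="ID")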
# Recycle the docs for _relation so they stay consistent between parents()
# and children()
children.__doc__ = children.__doc__.format(
_relation_docstring=_relation.__doc__)
parents.__doc__ = parents.__doc__.format(
_relation_docstring=_relation.__doc__)
# Add the docs for methods that call helpers.make_query()
for method in [parents, children, features_of_type, all_features]:
method.__doc__ = method.__doc__.format(
            _method_doc=_method_doc)
# end of agouti_pkg/gffutils/interface.py
import re
import copy
import collections
from agouti_pkg.six.moves import urllib
from agouti_pkg.gffutils import constants
from agouti_pkg.gffutils.exceptions import AttributeStringError
import logging
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
ch.setFormatter(formatter)
logger.addHandler(ch)
gff3_kw_pat = re.compile(r'\w+=')
# Encoding/decoding notes
# -----------------------
# From
# https://github.com/The-Sequence-Ontology/Specifications/blob/master/gff3.md#description-of-the-format:
#
# GFF3 files are nine-column, tab-delimited, plain text files.
# Literal use of tab, newline, carriage return, the percent (%) sign,
# and control characters must be encoded using RFC 3986
# Percent-Encoding; no other characters may be encoded. Backslash and
# other ad-hoc escaping conventions that have been added to the GFF
# format are not allowed. The file contents may include any character
# in the set supported by the operating environment, although for
# portability with other systems, use of Latin-1 or Unicode are
# recommended.
#
# tab (%09)
# newline (%0A)
# carriage return (%0D)
# % percent (%25)
# control characters (%00 through %1F, %7F)
#
# In addition, the following characters have reserved meanings in
# column 9 and must be escaped when used in other contexts:
#
# ; semicolon (%3B)
# = equals (%3D)
# & ampersand (%26)
# , comma (%2C)
#
#
# See also issue #98.
#
# Note that spaces are NOT encoded. Some GFF files have spaces encoded; in
# these cases round-trip invariance will not hold since the %20 will be decoded
# but not re-encoded.
_to_quote = '\n\t\r%;=&,'
_to_quote += ''.join([chr(i) for i in range(32)])
_to_quote += chr(127)
# Caching idea from urllib.parse.Quoter, which uses a defaultdict for
# efficiency. Here we're sort of doing the reverse of the "reserved" idea used
# there.
class Quoter(collections.defaultdict):
def __missing__(self, b):
if b in _to_quote:
res = '%{:02X}'.format(ord(b))
else:
res = b
self[b] = res
return res
quoter = Quoter()
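# Illustrative sketch (not executed): characters reserved in GFF3 column 9
# are percent-encoded one character at a time through the cache above.
#
# >>> ''.join(quoter[c] for c in 'a;b,c')
# 'a%3Bb%2Cc'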
def _reconstruct(keyvals, dialect, keep_order=False,
sort_attribute_values=False):
"""
Reconstructs the original attributes string according to the dialect.
Parameters
==========
keyvals : dict
Attributes from a GFF/GTF feature
dialect : dict
Dialect containing info on how to reconstruct a string version of the
attributes
keep_order : bool
If True, then perform sorting of attribute keys to ensure they are in
the same order as those provided in the original file. Default is
False, which saves time especially on large data sets.
sort_attribute_values : bool
If True, then sort values to ensure they will always be in the same
order. Mostly only useful for testing; default is False.
"""
if not dialect:
raise AttributeStringError()
if not keyvals:
return ""
parts = []
# Re-encode when reconstructing attributes
if constants.ignore_url_escape_characters or dialect['fmt'] != 'gff3':
attributes = keyvals
else:
attributes = {}
for k, v in keyvals.items():
attributes[k] = []
for i in v:
attributes[k].append(''.join([quoter[j] for j in i]))
# May need to split multiple values into multiple key/val pairs
if dialect['repeated keys']:
items = []
for key, val in attributes.items():
if len(val) > 1:
for v in val:
items.append((key, [v]))
else:
items.append((key, val))
else:
items = list(attributes.items())
def sort_key(x):
# sort keys by their order in the dialect; anything not in there will
# be in arbitrary order at the end.
try:
return dialect['order'].index(x[0])
except ValueError:
return 1e6
if keep_order:
items.sort(key=sort_key)
for key, val in items:
# Multival sep is usually a comma:
if val:
if sort_attribute_values:
val = sorted(val)
val_str = dialect['multival separator'].join(val)
if val_str:
# Surround with quotes if needed
if dialect['quoted GFF2 values']:
val_str = '"%s"' % val_str
# Typically "=" for GFF3 or " " otherwise
part = dialect['keyval separator'].join([key, val_str])
else:
if dialect['fmt'] == 'gtf':
part = dialect['keyval separator'].join([key, '""'])
else:
part = key
parts.append(part)
# Typically ";" or "; "
parts_str = dialect['field separator'].join(parts)
# Sometimes need to add this
if dialect['trailing semicolon']:
parts_str += ';'
return parts_str
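# Illustrative sketch (not executed): _reconstruct() and _split_keyvals()
# (defined below) are meant to round-trip an attribute string together
# with its inferred dialect.
#
# >>> attrs, dialect = _split_keyvals('ID=gene0;Name=AGO1')
# >>> attrs['ID']
# ['gene0']
# >>> _reconstruct(attrs, dialect, keep_order=True)
# 'ID=gene0;Name=AGO1'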
# TODO:
# Cythonize -- profiling shows that the bulk of the time is spent on this
# function...
def _split_keyvals(keyval_str, dialect=None):
"""
Given the string attributes field of a GFF-like line, split it into an
attributes dictionary and a "dialect" dictionary which contains information
needed to reconstruct the original string.
Lots of logic here to handle all the corner cases.
If `dialect` is None, then do all the logic to infer a dialect from this
attribute string.
Otherwise, use the provided dialect (and return it at the end).
"""
def _unquote_quals(quals, dialect):
"""
Handles the unquoting (decoding) of percent-encoded characters.
See notes on encoding/decoding above.
"""
if not constants.ignore_url_escape_characters and dialect['fmt'] == 'gff3':
for key, vals in quals.items():
unquoted = [urllib.parse.unquote(v) for v in vals]
quals[key] = unquoted
return quals
infer_dialect = False
if dialect is None:
# Make a copy of default dialect so it can be modified as needed
dialect = copy.copy(constants.dialect)
infer_dialect = True
from agouti_pkg.gffutils import feature
quals = feature.dict_class()
if not keyval_str:
return quals, dialect
# If a dialect was provided, then use that directly.
if not infer_dialect:
if dialect['trailing semicolon']:
keyval_str = keyval_str.rstrip(';')
parts = keyval_str.split(dialect['field separator'])
kvsep = dialect['keyval separator']
if dialect['leading semicolon']:
pieces = []
for p in parts:
if p and p[0] == ';':
p = p[1:]
pieces.append(p.strip().split(kvsep))
key_vals = [(p[0], " ".join(p[1:])) for p in pieces]
if dialect['fmt'] == 'gff3':
key_vals = [p.split(kvsep) for p in parts]
else:
leadingsemicolon = dialect['leading semicolon']
pieces = []
for i, p in enumerate(parts):
if i == 0 and leadingsemicolon:
p = p[1:]
pieces.append(p.strip().split(kvsep))
key_vals = [(p[0], " ".join(p[1:])) for p in pieces]
quoted = dialect['quoted GFF2 values']
for item in key_vals:
# Easy if it follows spec
if len(item) == 2:
key, val = item
# Only key provided?
elif len(item) == 1:
key = item[0]
val = ''
else:
key = item[0]
val = dialect['keyval separator'].join(item[1:])
try:
quals[key]
except KeyError:
quals[key] = []
if quoted:
if (len(val) > 0 and val[0] == '"' and val[-1] == '"'):
val = val[1:-1]
if val:
# TODO: if there are extra commas for a value, just use empty
# strings
# quals[key].extend([v for v in val.split(',') if v])
vals = val.split(',')
quals[key].extend(vals)
quals = _unquote_quals(quals, dialect)
return quals, dialect
# If we got here, then we need to infer the dialect....
#
# Reset the order to an empty list so that it will only be populated with
# keys that are found in the file.
dialect['order'] = []
# ensembl GTF has trailing semicolon
if keyval_str[-1] == ';':
keyval_str = keyval_str[:-1]
dialect['trailing semicolon'] = True
# GFF2/GTF has a semicolon with at least one space after it.
# Spaces can be on both sides (e.g. wormbase)
# GFF3 works with no spaces.
# So split on the first one we can recognize...
for sep in (' ; ', '; ', ';'):
parts = keyval_str.split(sep)
if len(parts) > 1:
dialect['field separator'] = sep
break
# Is it GFF3? They have key-vals separated by "="
if gff3_kw_pat.match(parts[0]):
key_vals = [p.split('=') for p in parts]
dialect['fmt'] = 'gff3'
dialect['keyval separator'] = '='
# Otherwise, key-vals separated by space. Key is first item.
else:
dialect['keyval separator'] = " "
pieces = []
for p in parts:
# Fix misplaced semicolons in keys in some GFF2 files
if p and p[0] == ';':
p = p[1:]
dialect['leading semicolon'] = True
pieces.append(p.strip().split(' '))
key_vals = [(p[0], " ".join(p[1:])) for p in pieces]
for item in key_vals:
# Easy if it follows spec
if len(item) == 2:
key, val = item
# Only key provided?
elif len(item) == 1:
key = item[0]
val = ''
# Pathological cases where values of a key have within them the key-val
# separator, e.g.,
# Alias=SGN-M1347;ID=T0028;Note=marker name(s): T0028 SGN-M1347 |identity=99.58|escore=2e-126
else:
key = item[0]
val = dialect['keyval separator'].join(item[1:])
# Is the key already in there?
if key in quals:
dialect['repeated keys'] = True
else:
quals[key] = []
# Remove quotes in GFF2
if len(val) > 0 and val[0] == '"' and val[-1] == '"':
val = val[1:-1]
dialect['quoted GFF2 values'] = True
if val:
# TODO: if there are extra commas for a value, just use empty
# strings
# quals[key].extend([v for v in val.split(',') if v])
vals = val.split(',')
if (len(vals) > 1) and dialect['repeated keys']:
raise AttributeStringError(
"Internally inconsistent attributes formatting: "
"some have repeated keys, some do not.")
quals[key].extend(vals)
# keep track of the order of keys
dialect['order'].append(key)
if (
(dialect['keyval separator'] == ' ') and
(dialect['quoted GFF2 values'])
):
dialect['fmt'] = 'gtf'
quals = _unquote_quals(quals, dialect)
    return quals, dialect
# end of agouti_pkg/gffutils/parser.py
from agouti_pkg.pyfaidx import Fasta
import agouti_pkg.six
import agouti_pkg.simplejson
from agouti_pkg.gffutils import constants
from agouti_pkg.gffutils import helpers
from agouti_pkg.gffutils import parser
from agouti_pkg.gffutils import bins
from agouti_pkg.gffutils.attributes import dict_class
_position_lookup = dict(enumerate(['seqid', 'source', 'featuretype', 'start',
'end', 'score', 'strand', 'frame',
'attributes']))
class Feature(object):
def __init__(self, seqid=".", source=".", featuretype=".",
start=".", end=".", score=".", strand=".", frame=".",
attributes=None, extra=None, bin=None, id=None, dialect=None,
file_order=None, keep_order=False,
sort_attribute_values=False):
"""
Represents a feature from the database.
Usually you won't want to use this directly, since it has various
implementation details needed for operating in the context of FeatureDB
objects. Instead, try the :func:`gffutils.feature.feature_from_line`
function.
When printed, reproduces the original line from the file as faithfully
as possible using `dialect`.
Parameters
----------
seqid : string
Name of the sequence (often chromosome)
source : string
Source of the feature; typically the originating database or
program that predicted the feature
featuretype : string
Type of feature. For example "gene", "exon", "TSS", etc
start, end : int or "."
1-based coordinates; start must be <= end. If "." (the default
placeholder for GFF files), then the corresponding attribute will
be None.
score : string
Stored as a string.
strand : "+" | "-" | "."
Strand of the feature; "." when strand is not relevant.
frame : "0" | "1" | "2"
Coding frame. 0 means in-frame; 1 means there is one extra base at
the beginning, so the first codon starts at the second base;
2 means two extra bases at the beginning. Interpretation is strand
specific; "beginning" for a minus-strand feature is at the end
coordinate.
attributes : string or dict
If a string, first assume it is serialized JSON; if this fails then
assume it's the original key/vals string. If it's a dictionary
already, then use as-is.
The end result is that this instance's `attributes` attribute will
always be a dictionary.
Upon printing, the attributes will be reconstructed based on this
dictionary and the dialect -- except if the original attributes
string was provided, in which case that will be used directly.
Notes on encoding/decoding: the only time unquoting
(e.g., "%2C" becomes ",") happens is if `attributes` is a string
and if `settings.ignore_url_escape_characters = False`. If dict or
JSON, the contents are used as-is.
Similarly, the only time characters are quoted ("," becomes "%2C")
is when the feature is printed (`__str__` method).
extra : string or list
Additional fields after the canonical 9 fields for GFF/GTF.
If a string, then first assume it's serialized JSON; if this fails
then assume it's a tab-delimited string of additional fields. If
it's a list already, then use as-is.
bin : int
UCSC genomic bin. If None, will be created based on provided
start/end; if start or end is "." then bin will be None.
id : None or string
Database-specific primary key for this feature. The only time this
should not be None is if this feature is coming from a database, in
which case it will be filled in automatically.
dialect : dict or None
The dialect to use when reconstructing attribute strings; defaults
to the GFF3 spec. :class:`FeatureDB` objects will automatically
attach the dialect from the original file.
file_order : int
This is the `rowid` special field used in a sqlite3 database; this
is provided by FeatureDB.
keep_order : bool
If True, then the attributes in the printed string will be in the
order specified in the dialect. Disabled by default, since this
sorting step is time-consuming over many features.
sort_attribute_values : bool
If True, then the values of each attribute will be sorted when the
feature is printed. Mostly useful for testing, where the order is
important for checking against expected values. Disabled by
default, since it can be time-consuming over many features.
"""
# start/end can be provided as int-like, ".", or None, but will be
# converted to int or None
if start == "." or start == "":
start = None
elif start is not None:
start = int(start)
if end == "." or end == "":
end = None
elif end is not None:
end = int(end)
# Flexible handling of attributes:
# If dict, then use that; otherwise assume JSON and convert to a dict;
# otherwise assume original string and convert to a dict.
#
# dict_class is set at the module level above...this is so you can swap
# in and out different dict implementations (ordered, defaultdict, etc)
# for testing.
attributes = attributes or dict_class()
if isinstance(attributes, agouti_pkg.six.string_types):
try:
attributes = helpers._unjsonify(attributes, isattributes=True)
# it's a string but not JSON: assume original attributes string.
except agouti_pkg.simplejson.JSONDecodeError:
# But Feature.attributes is still a dict
attributes, _dialect = parser._split_keyvals(attributes)
# Use this dialect if none provided.
dialect = dialect or _dialect
# If string, then try un-JSONifying it into a list; if that doesn't
# work then assume it's tab-delimited and convert to a list.
extra = extra or []
if isinstance(extra, agouti_pkg.six.string_types):
try:
extra = helpers._unjsonify(extra)
except agouti_pkg.simplejson.JSONDecodeError:
extra = extra.split('\t')
self.seqid = seqid
self.source = source
self.featuretype = featuretype
self.start = start
self.end = end
self.score = score
self.strand = strand
self.frame = frame
self.attributes = attributes
self.extra = extra
self.bin = self.calc_bin(bin)
self.id = id
self.dialect = dialect or constants.dialect
self.file_order = file_order
self.keep_order = keep_order
self.sort_attribute_values = sort_attribute_values
def calc_bin(self, _bin=None):
"""
Calculate the smallest UCSC genomic bin that will contain this feature.
"""
if _bin is None:
try:
_bin = bins.bins(self.start, self.end, one=True)
except TypeError:
_bin = None
return _bin
def __repr__(self):
memory_loc = hex(id(self))
# Reconstruct start/end as "."
if self.start is None:
start = '.'
else:
start = self.start
if self.end is None:
end = '.'
else:
end = self.end
return (
"<Feature {x.featuretype} ({x.seqid}:{start}-{end}"
"[{x.strand}]) at {loc}>".format(x=self, start=start, end=end,
loc=memory_loc))
def __getitem__(self, key):
if isinstance(key, int):
# TODO: allow access to "extra" fields
attr = _position_lookup[key]
return getattr(self, attr)
else:
return self.attributes[key]
def __setitem__(self, key, value):
if isinstance(key, int):
# TODO: allow setting of "extra" fields
attr = _position_lookup[key]
setattr(self, attr, value)
else:
self.attributes[key] = value
def __str__(self):
if agouti_pkg.six.PY3:
return self.__unicode__()
else:
return unicode(self).encode('utf-8')
def __unicode__(self):
# All fields but attributes (and extra).
items = [getattr(self, k) for k in constants._gffkeys[:-1]]
# Handle start/stop, which are either None or int
if items[3] is None:
items[3] = "."
else:
items[3] = str(items[3])
if items[4] is None:
items[4] = "."
else:
items[4] = str(items[4])
# Reconstruct from dict and dialect
reconstructed_attributes = parser._reconstruct(
self.attributes, self.dialect, keep_order=self.keep_order,
sort_attribute_values=self.sort_attribute_values)
# Final line includes reconstructed as well as any previously-added
# "extra" fields
items.append(reconstructed_attributes)
if self.extra:
items.append('\t'.join(self.extra))
return '\t'.join(items)
def __hash__(self):
return hash(str(self))
def __eq__(self, other):
return str(self) == str(other)
def __ne__(self, other):
return str(self) != str(other)
def __len__(self):
return self.stop - self.start + 1
# aliases for official GFF field names; this way x.chrom == x.seqid; and
# x.stop == x.end.
@property
def chrom(self):
return self.seqid
@chrom.setter
def chrom(self, value):
self.seqid = value
@property
def stop(self):
return self.end
@stop.setter
def stop(self, value):
self.end = value
def astuple(self, encoding=None):
"""
Return a tuple suitable for import into a database.
Attributes field and extra field jsonified into strings. The order of
fields is such that they can be supplied as arguments for the query
defined in :attr:`gffutils.constants._INSERT`.
If `encoding` is not None, then convert string fields to unicode using
the provided encoding.
Returns
-------
Tuple
"""
if not encoding:
return (
self.id, self.seqid, self.source, self.featuretype, self.start,
self.end, self.score, self.strand, self.frame,
helpers._jsonify(self.attributes),
helpers._jsonify(self.extra), self.calc_bin()
)
return (
self.id.decode(encoding), self.seqid.decode(encoding),
self.source.decode(encoding), self.featuretype.decode(encoding),
self.start, self.end, self.score.decode(encoding),
self.strand.decode(encoding), self.frame.decode(encoding),
helpers._jsonify(self.attributes).decode(encoding),
helpers._jsonify(self.extra).decode(encoding), self.calc_bin()
)
def sequence(self, fasta, use_strand=True):
"""
Retrieves the sequence of this feature as a string.
Uses the pyfaidx package.
Parameters
----------
fasta : str
If str, then it's a FASTA-format filename; otherwise assume it's
a pyfaidx.Fasta object.
use_strand : bool
If True (default), the sequence returned will be
reverse-complemented for minus-strand features.
Returns
-------
string
"""
if isinstance(fasta, agouti_pkg.six.string_types):
fasta = Fasta(fasta, as_raw=False)
# recall GTF/GFF is 1-based closed; pyfaidx uses Python slice notation
# and is therefore 0-based half-open.
seq = fasta[self.chrom][self.start-1:self.stop]
if use_strand and self.strand == '-':
seq = seq.reverse.complement
return seq.seq
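# Illustrative usage sketch (added for documentation; not part of the original
# gffutils sources). It uses feature_from_line(), defined just below, on a
# GFF3-style line; the specific line content here is only an example.
def _example_feature_usage():  # pragma: no cover
    line = 'chr2L\tFlyBase\tgene\t7529\t9484\t.\t+\t.\tID=FBgn0031208;Name=CG11023'
    f = feature_from_line(line)
    # Columns are available by name, by alias, or by attribute key.
    assert f.seqid == f.chrom == 'chr2L'
    assert (f.start, f.stop) == (7529, 9484)
    assert f['ID'] == ['FBgn0031208']   # attribute values are stored as lists
    assert len(f) == 9484 - 7529 + 1    # __len__ is the genomic span
    return str(f)                       # reconstructs a GFF line from the Feature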
def feature_from_line(line, dialect=None, strict=True, keep_order=False):
"""
Given a line from a GFF file, return a Feature object
Parameters
----------
line : string
strict : bool
If True (default), assume `line` is a single, tab-delimited string that
has at least 9 fields.
If False, then the input can have a more flexible format, useful for
creating single ad hoc features or for writing tests. In this case,
`line` can be a multi-line string (as long as it has a single non-empty
line), and, as long as there are only 9 fields (standard GFF/GTF), then
it's OK to use spaces instead of tabs to separate fields in `line`.
But if >9 fields are to be used, then tabs must be used.
keep_order, dialect
Passed directly to :class:`Feature`; see docstring for that class for
description
Returns
-------
A new :class:`Feature` object.
"""
if not strict:
lines = line.splitlines(False)
_lines = []
for i in lines:
i = i.strip()
if len(i) > 0:
_lines.append(i)
assert len(_lines) == 1, _lines
line = _lines[0]
if '\t' in line:
fields = line.rstrip('\n\r').split('\t')
else:
fields = line.rstrip('\n\r').split(None, 8)
else:
fields = line.rstrip('\n\r').split('\t')
try:
attr_string = fields[8]
except IndexError:
attr_string = ""
attrs, _dialect = parser._split_keyvals(attr_string, dialect=dialect)
d = dict(list(zip(constants._gffkeys, fields)))
d['attributes'] = attrs
d['extra'] = fields[9:]
d['keep_order'] = keep_order
if dialect is None:
dialect = _dialect
    return Feature(dialect=dialect, **d)

# --- end of agouti_pkg/gffutils/feature.py (AGouTI-1.0.3) ---
from agouti_pkg.gffutils.helpers import asinterval
try:
from pybedtools.contrib.plotting import Track
except ImportError:
import warnings
warnings.warn("Please install pybedtools for plotting.")
class Gene(object):
def __init__(self, db, gene_id, transcripts=['mRNA'], utrs=["3'UTR",
"5'UTR"], cds=['CDS'], ybase=0, **kwargs):
"""
Represents a gene, `gene_id`, as a collection of
pybedtools.contrib.plotting.Track objects.
This class is flexible in how transcripts/CDSs/UTRs are defined;
for example, you will usually have to adjust what featuretypes to
consider as UTRs based on your annotation.
For example, GTF files will typically define "3'UTR" but FlyBase GFF
files use "three_prime_UTR". In the former case you'd specify
`utrs=["3'UTR", "5'UTR"]` while in the latter,
`utrs=["three_prime_UTR", "five_prime_UTR"]`.
The full length of the transcript is represented by a thin line; UTRs
are slightly thicker, and CDSs are thickest.
`kwargs` are passed to pybedtools.contrib.plotting.Track.
`ybase` sets the bottom edge of the bottom isoform (sorted so that it's
the shortest)
Use the `max_y` attribute if you need to know the upper extent of the
plotted transcripts. This is useful if you are plotting multiple genes
on the same axes and need to know what to use for the next one's
`ybase`.
You can adjust the `heights` attribute to set how wide transcripts,
UTRs, CDSs are. Padding is essentially "full" minus the largest height
(CDS, 0.9, by default).
"""
self.heights = {
'transcript': 0.2,
'utrs': 0.5,
'cds': 0.9,
'full': 1.0}
self.kwargs = kwargs
self._transcripts = []
for transcript in db.children(gene_id, level=1):
if transcripts is None or transcript.featuretype in transcripts:
d = {}
d['transcript'] = [transcript]
d['utrs'] = []
d['cds'] = []
for child in db.children(transcript, level=1):
_utrs = []
if child.featuretype in utrs:
d['utrs'].append(child)
if child.featuretype in cds:
d['cds'].append(child)
self._transcripts.append(d)
self.tracks = []
self.ybase = ybase
for d in sorted(self._transcripts,
reverse=True, key=lambda x: len(x['transcript'])):
self.tracks.append(self._make_track(d, 'transcript'))
self.tracks.append(self._make_track(d, 'utrs'))
self.tracks.append(self._make_track(d, 'cds'))
self.ybase += self.heights['full']
self.max_y = ybase + self.heights['full']
def add_to_ax(self, ax):
"""
Add all the transcripts for this gene to axes `ax`
"""
for track in self.tracks:
ax.add_collection(track)
def _make_track(self, d, cls):
yheight = self.heights[cls]
ybase = self.ybase + (self.heights['full'] - yheight) * 0.5
return Track(
(asinterval(i) for i in d[cls]),
            ybase=ybase, yheight=yheight, **self.kwargs)

# --- end of agouti_pkg/gffutils/contrib/plotting.py (AGouTI-1.0.3) ---
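# Illustrative usage sketch for the Gene plotting helper above (added for
# documentation; not part of the original sources). It assumes an existing
# gffutils database file ("my.db"), matplotlib and pybedtools installed, and
# FlyBase-style featuretype names as used in the test data below.
#
#     import matplotlib.pyplot as plt
#     from agouti_pkg.gffutils import FeatureDB
#     db = FeatureDB('my.db')
#     fig, ax = plt.subplots()
#     gene = Gene(db, 'FBgn0031208',
#                 utrs=['three_prime_UTR', 'five_prime_UTR'])
#     gene.add_to_ax(ax)
#     ax.set_xlim(7000, 10000)
#     ax.set_ylim(-0.5, gene.max_y + 0.5)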
# list the children and their expected first-order parents for the GFF test file.
GFF_parent_check_level_1 = {'FBtr0300690':['FBgn0031208'],
'FBtr0300689':['FBgn0031208'],
'CG11023:1':['FBtr0300689','FBtr0300690'],
'five_prime_UTR_FBgn0031208:1_737':['FBtr0300689','FBtr0300690'],
'CDS_FBgn0031208:1_737':['FBtr0300689','FBtr0300690'],
'intron_FBgn0031208:1_FBgn0031208:2':['FBtr0300690'],
'intron_FBgn0031208:1_FBgn0031208:3':['FBtr0300689'],
'FBgn0031208:3':['FBtr0300689'],
'CDS_FBgn0031208:3_737':['FBtr0300689'],
'CDS_FBgn0031208:2_737':['FBtr0300690'],
'exon:chr2L:8193-8589:+':['FBtr0300690'],
'intron_FBgn0031208:2_FBgn0031208:4':['FBtr0300690'],
'three_prime_UTR_FBgn0031208:3_737':['FBtr0300689'],
'FBgn0031208:4':['FBtr0300690'],
'CDS_FBgn0031208:4_737':['FBtr0300690'],
'three_prime_UTR_FBgn0031208:4_737':['FBtr0300690'],
}
# and second-level . . . they should all be grandparents of the same gene.
GFF_parent_check_level_2 = {
'CG11023:1':['FBgn0031208'],
'five_prime_UTR_FBgn0031208:1_737':['FBgn0031208'],
'CDS_FBgn0031208:1_737':['FBgn0031208'],
'intron_FBgn0031208:1_FBgn0031208:2':['FBgn0031208'],
'intron_FBgn0031208:1_FBgn0031208:3':['FBgn0031208'],
'FBgn0031208:3':['FBgn0031208'],
'CDS_FBgn0031208:3_737':['FBgn0031208'],
'CDS_FBgn0031208:2_737':['FBgn0031208'],
'exon:chr2L:8193-8589:+':['FBgn0031208'],
'intron_FBgn0031208:2_FBgn0031208:4':['FBgn0031208'],
'three_prime_UTR_FBgn0031208:3_737':['FBgn0031208'],
'FBgn0031208:4':['FBgn0031208'],
'CDS_FBgn0031208:4_737':['FBgn0031208'],
'three_prime_UTR_FBgn0031208:4_737':['FBgn0031208'],
}
# Same thing for GTF test file . . .
GTF_parent_check_level_1 = {
'exon:chr2L:7529-8116:+':['FBtr0300689'],
'exon:chr2L:7529-8116:+_1':['FBtr0300690'],
'exon:chr2L:8193-9484:+':['FBtr0300689'],
'exon:chr2L:8193-8589:+':['FBtr0300690'],
'exon:chr2L:8668-9484:+':['FBtr0300690'],
'exon:chr2L:10000-11000:-':['transcript_Fk_gene_1'],
'exon:chr2L:11500-12500:-':['transcript_Fk_gene_2'],
'CDS:chr2L:7680-8116:+':['FBtr0300689'],
'CDS:chr2L:7680-8116:+_1':['FBtr0300690'],
'CDS:chr2L:8193-8610:+':['FBtr0300689'],
'CDS:chr2L:8193-8589:+':['FBtr0300690'],
'CDS:chr2L:8668-9276:+':['FBtr0300690'],
'CDS:chr2L:10000-11000:-':['transcript_Fk_gene_1'],
'FBtr0300689':['FBgn0031208'],
'FBtr0300690':['FBgn0031208'],
'transcript_Fk_gene_1':['Fk_gene_1'],
'transcript_Fk_gene_2':['Fk_gene_2'],
'start_codon:chr2L:7680-7682:+':['FBtr0300689'],
'start_codon:chr2L:7680-7682:+_1':['FBtr0300690'],
'start_codon:chr2L:10000-11002:-':['transcript_Fk_gene_1'],
'stop_codon:chr2L:8611-8613:+':['FBtr0300689'],
'stop_codon:chr2L:9277-9279:+':['FBtr0300690'],
'stop_codon:chr2L:11001-11003:-':['transcript_Fk_gene_1'],
}
GTF_parent_check_level_2 = {
'exon:chr2L:7529-8116:+':['FBgn0031208'],
'exon:chr2L:8193-9484:+':['FBgn0031208'],
'exon:chr2L:8193-8589:+':['FBgn0031208'],
'exon:chr2L:8668-9484:+':['FBgn0031208'],
'exon:chr2L:10000-11000:-':['Fk_gene_1'],
'exon:chr2L:11500-12500:-':['Fk_gene_2'],
'CDS:chr2L:7680-8116:+':['FBgn0031208'],
'CDS:chr2L:8193-8610:+':['FBgn0031208'],
'CDS:chr2L:8193-8589:+':['FBgn0031208'],
'CDS:chr2L:8668-9276:+':['FBgn0031208'],
'CDS:chr2L:10000-11000:-':['Fk_gene_1'],
'FBtr0300689':[],
'FBtr0300690':[],
'transcript_Fk_gene_1':[],
'transcript_Fk_gene_2':[],
'start_codon:chr2L:7680-7682:+':['FBgn0031208'],
'start_codon:chr2L:10000-11002:-':['Fk_gene_1'],
'stop_codon:chr2L:8611-8613:+':['FBgn0031208'],
'stop_codon:chr2L:9277-9279:+':['FBgn0031208'],
'stop_codon:chr2L:11001-11003:-':['Fk_gene_1'],
}
expected_feature_counts = {
'gff3':{'gene':3,
'mRNA':4,
'exon':6,
'CDS':5,
'five_prime_UTR':1,
'intron':3,
'pcr_product':1,
'protein':2,
'three_prime_UTR':2},
'gtf':{
#'gene':3,
# 'mRNA':4,
'CDS':6,
'exon':7,
'start_codon':3,
'stop_codon':3}
}
expected_features = {'gff3':['gene',
'mRNA',
'protein',
'five_prime_UTR',
'three_prime_UTR',
'pcr_product',
'CDS',
'exon',
'intron'],
'gtf':['gene',
'mRNA',
'CDS',
'exon',
'start_codon',
                            'stop_codon']}

# --- end of agouti_pkg/gffutils/test/expected.py (AGouTI-1.0.3) ---
from __future__ import division
import os
import sys
from os.path import getmtime
from agouti_pkg.six import PY2, PY3, string_types, integer_types
from agouti_pkg.six.moves import zip_longest
try:
from collections import OrderedDict
except ImportError: #python 2.6
from ordereddict import OrderedDict
from collections import namedtuple
import re
import string
import warnings
from math import ceil
from threading import Lock
if sys.version_info > (3, ):
buffer = memoryview
dna_bases = re.compile(r'([ACTGNactgnYRWSKMDVHBXyrwskmdvhbx]+)')
__version__ = '0.5.5.2'
class KeyFunctionError(ValueError):
"""Raised if the key_function argument is invalid."""
class FastaIndexingError(Exception):
"""Raised if we encounter malformed FASTA that prevents indexing."""
class IndexNotFoundError(IOError):
"""Raised if read_fai cannot open the index file."""
class FastaNotFoundError(IOError):
"""Raised if the fasta file cannot be opened."""
class FetchError(IndexError):
"""Raised if a request to fetch a FASTA sequence cannot be fulfilled."""
class BedError(ValueError):
"""Indicates a malformed BED entry."""
class RegionError(Exception):
# This exception class is currently unused, but has been retained for
# backwards compatibility.
"""A region error occurred."""
class UnsupportedCompressionFormat(IOError):
"""
Raised when a FASTA file is given with a recognized but unsupported
compression extension.
"""
class Sequence(object):
"""
name = FASTA entry name
seq = FASTA sequence
start, end = coordinates of subsequence (optional)
comp = boolean switch for complement property
"""
def __init__(self, name='', seq='', start=None, end=None, comp=False):
self.name = name
self.seq = seq
self.start = start
self.end = end
self.comp = comp
assert isinstance(name, string_types)
assert isinstance(seq, string_types)
def __getitem__(self, n):
""" Returns a sliced version of Sequence
>>> x = Sequence(name='chr1', seq='ATCGTA', start=1, end=6)
>>> x
>chr1:1-6
ATCGTA
>>> x[:3]
>chr1:1-3
ATC
>>> x[3:]
>chr1:4-6
GTA
>>> x[1:-1]
>chr1:2-5
TCGT
>>> x[::-1]
>chr1:6-1
ATGCTA
>>> x[::-3]
>chr1
AC
>>> x = Sequence(name='chr1', seq='ATCGTA', start=0, end=6)
>>> x
>chr1:0-6
ATCGTA
>>> x[:3]
>chr1:0-3
ATC
>>> x[3:]
>chr1:3-6
GTA
>>> x[1:-1]
>chr1:1-5
TCGT
>>> x[::-1]
>chr1:6-0
ATGCTA
>>> x[::-3]
>chr1
AC
"""
if self.start is None or self.end is None:
correction_factor = 0
elif len(
self.seq
) == abs(self.end - self.start) + 1: # determine coordinate system
one_based = True
correction_factor = -1
elif len(self.seq) == abs(self.end - self.start):
one_based = False
correction_factor = 0
elif len(self.seq) != abs(self.end - self.start):
raise ValueError(
"Coordinates (Sequence.start=%s and Sequence.end=%s) imply a different length than Sequence.seq (len=%s). Did you modify Sequence.seq?"
% (self.start, self.end, len(self.seq)))
if isinstance(n, slice):
slice_start, slice_stop, slice_step = n.indices(len(self))
if self.start is None or self.end is None: # there should never be self.start != self.end == None
start = None
end = None
return self.__class__(self.name, self.seq[n], start, end,
self.comp)
self_end, self_start = (self.end, self.start)
if abs(slice_step) > 1:
start = None
end = None
elif slice_step == -1: # flip the coordinates when we reverse
if slice_stop == -1:
slice_stop = 0
start = self_end - slice_stop
end = self_start + slice_stop
#print(locals())
else:
start = self_start + slice_start
end = self_start + slice_stop + correction_factor
return self.__class__(self.name, self.seq[n], start, end,
self.comp)
elif isinstance(n, integer_types):
if n < 0:
n = len(self) + n
if self.start:
return self.__class__(self.name, self.seq[n], self.start + n,
self.start + n, self.comp)
else:
                return self.__class__(self.name, self.seq[n], comp=self.comp)
def __str__(self):
return self.seq
def __neg__(self):
""" Returns the reverse compliment of sequence
>>> x = Sequence(name='chr1', seq='ATCGTA', start=1, end=6)
>>> x
>chr1:1-6
ATCGTA
>>> y = -x
>>> y
>chr1:6-1 (complement)
TACGAT
>>> -y
>chr1:1-6
ATCGTA
"""
return self[::-1].complement
def __repr__(self):
return '\n'.join([''.join(['>', self.fancy_name]), self.seq])
def __len__(self):
"""
>>> len(Sequence('chr1', 'ACT'))
3
"""
return len(self.seq)
@property
def fancy_name(self):
""" Return the fancy name for the sequence, including start, end, and complementation.
>>> x = Sequence(name='chr1', seq='ATCGTA', start=1, end=6, comp=True)
>>> x.fancy_name
'chr1:1-6 (complement)'
"""
name = self.name
if self.start is not None and self.end is not None:
name = ':'.join([name, '-'.join([str(self.start), str(self.end)])])
if self.comp:
name += ' (complement)'
return name
@property
def long_name(self):
""" DEPRECATED: Use fancy_name instead.
Return the fancy name for the sequence, including start, end, and complementation.
>>> x = Sequence(name='chr1', seq='ATCGTA', start=1, end=6, comp=True)
>>> x.long_name
'chr1:1-6 (complement)'
"""
msg = "The `Sequence.long_name` property is deprecated, and will be removed in future versions. Please use `Sequence.fancy_name` instead."
warnings.warn(msg, DeprecationWarning, stacklevel=2)
return self.fancy_name
@property
def complement(self):
""" Returns the compliment of self.
>>> x = Sequence(name='chr1', seq='ATCGTA')
>>> x.complement
>chr1 (complement)
TAGCAT
"""
comp = self.__class__(
self.name, complement(self.seq), start=self.start, end=self.end)
comp.comp = False if self.comp else True
return comp
@property
def reverse(self):
""" Returns the reverse of self.
>>> x = Sequence(name='chr1', seq='ATCGTA')
>>> x.reverse
>chr1
ATGCTA
"""
return self[::-1]
@property
def orientation(self):
""" get the orientation forward=1, reverse=-1
>>> x = Sequence(name='chr1', seq='ATCGTA', start=1, end=6)
>>> x.orientation
1
>>> x.complement.orientation is None
True
>>> x[::-1].orientation is None
True
>>> x = -x
>>> x.orientation
-1
"""
if self.start < self.end and not self.comp:
return 1
elif self.start > self.end and self.comp:
return -1
else:
return None
@property
def gc(self):
""" Return the GC content of seq as a float
>>> x = Sequence(name='chr1', seq='ATCGTA')
>>> y = round(x.gc, 2)
>>> y == 0.33
True
"""
g = self.seq.count('G')
g += self.seq.count('g')
c = self.seq.count('C')
c += self.seq.count('c')
return (g + c) / len(self.seq)
class IndexRecord(
namedtuple('IndexRecord',
['rlen', 'offset', 'lenc', 'lenb', 'bend', 'prev_bend'])):
__slots__ = ()
def __getitem__(self, key):
if type(key) == str:
return getattr(self, key)
return tuple.__getitem__(self, key)
def __str__(self):
return "{rlen:n}\t{offset:n}\t{lenc:n}\t{lenb:n}\n".format(
**self._asdict())
def __len__(self):
return self.rlen
class Faidx(object):
""" A python implementation of samtools faidx FASTA indexing """
def __init__(self,
filename,
default_seq=None,
key_function=lambda x: x,
as_raw=False,
strict_bounds=False,
read_ahead=None,
mutable=False,
split_char=None,
duplicate_action="stop",
filt_function=lambda x: True,
one_based_attributes=True,
read_long_names=False,
sequence_always_upper=False,
rebuild=True,
build_index=True):
"""
filename: name of fasta file
key_function: optional callback function which should return a unique
key for the self.index dictionary when given rname.
as_raw: optional parameter to specify whether to return sequences as a
Sequence() object or as a raw string.
Default: False (i.e. return a Sequence() object).
"""
self.filename = filename
if filename.lower().endswith('.bgz') or filename.lower().endswith(
'.gz'):
# Only try to import Bio if we actually need the bgzf reader.
try:
from Bio import bgzf
from Bio import __version__ as bgzf_version
from distutils.version import LooseVersion
if LooseVersion(bgzf_version) < LooseVersion('1.73'):
raise ImportError
except ImportError:
raise ImportError(
"BioPython >= 1.73 must be installed to read block gzip files.")
else:
self._fasta_opener = bgzf.open
self._bgzf = True
elif filename.lower().endswith('.bz2') or filename.lower().endswith(
'.zip'):
raise UnsupportedCompressionFormat(
"Compressed FASTA is only supported in BGZF format. Use "
"bgzip to compresss your FASTA.")
else:
self._fasta_opener = open
self._bgzf = False
try:
self.file = self._fasta_opener(filename, 'r+b'
if mutable else 'rb')
except (ValueError, IOError) as e:
if str(e).find('BGZF') > -1:
raise UnsupportedCompressionFormat(
"Compressed FASTA is only supported in BGZF format. Use "
"the samtools bgzip utility (instead of gzip) to "
"compress your FASTA.")
else:
raise FastaNotFoundError(
"Cannot read FASTA file %s" % filename)
self.indexname = filename + '.fai'
self.read_long_names = read_long_names
self.key_function = key_function
try:
key_fn_test = self.key_function(
"TestingReturnType of_key_function")
if not isinstance(key_fn_test, string_types):
raise KeyFunctionError(
"key_function argument should return a string, not {0}".
format(type(key_fn_test)))
except Exception as e:
pass
self.filt_function = filt_function
assert duplicate_action in ("stop", "first", "last", "longest",
"shortest", "drop")
self.duplicate_action = duplicate_action
self.as_raw = as_raw
self.default_seq = default_seq
if self._bgzf and self.default_seq is not None:
raise FetchError(
"The default_seq argument is not supported with using BGZF compression. Please decompress your FASTA file and try again."
)
if self._bgzf:
self.strict_bounds = True
else:
self.strict_bounds = strict_bounds
self.split_char = split_char
self.one_based_attributes = one_based_attributes
self.sequence_always_upper = sequence_always_upper
self.index = OrderedDict()
self.lock = Lock()
self.buffer = dict((('seq', None), ('name', None), ('start', None),
('end', None)))
if not read_ahead or isinstance(read_ahead, integer_types):
self.read_ahead = read_ahead
elif not isinstance(read_ahead, integer_types):
raise ValueError("read_ahead value must be int, not {0}".format(
type(read_ahead)))
self.mutable = mutable
with self.lock: # lock around index generation so only one thread calls method
try:
if os.path.exists(self.indexname) and getmtime(
self.indexname) >= getmtime(self.filename):
self.read_fai()
elif os.path.exists(self.indexname) and getmtime(
self.indexname) < getmtime(
self.filename) and not rebuild:
self.read_fai()
warnings.warn(
"Index file {0} is older than FASTA file {1}.".format(
self.indexname, self.filename), RuntimeWarning)
elif build_index:
self.build_index()
self.read_fai()
else:
self.read_fai()
except FastaIndexingError:
os.remove(self.indexname)
self.file.close()
raise
except Exception:
# Handle potential exceptions other than 'FastaIndexingError'
self.file.close()
raise
def __contains__(self, region):
if not self.buffer['name']:
return False
name, start, end = region
if self.buffer['name'] == name and self.buffer['start'] <= start and self.buffer['end'] >= end:
return True
else:
return False
def __repr__(self):
return 'Faidx("%s")' % (self.filename)
def _index_as_string(self):
""" Returns the string representation of the index as iterable """
for k, v in self.index.items():
yield '\t'.join([k, str(v)])
def read_fai(self):
try:
with open(self.indexname) as index:
prev_bend = 0
drop_keys = []
for line in index:
line = line.rstrip()
rname, rlen, offset, lenc, lenb = line.split('\t')
rlen, offset, lenc, lenb = map(int,
(rlen, offset, lenc, lenb))
newlines = int(ceil(rlen / lenc) * (lenb - lenc))
bend = offset + newlines + rlen
rec = IndexRecord(rlen, offset, lenc, lenb, bend,
prev_bend)
if self.read_long_names:
rname = self._long_name_from_index_record(rec)
if self.split_char:
rname = filter(self.filt_function,
self.key_function(rname).split(
self.split_char))
else:
# filter must act on an iterable
rname = filter(self.filt_function,
[self.key_function(rname)])
for key in rname: # mdshw5/pyfaidx/issues/64
if key in self.index:
if self.duplicate_action == "stop":
raise ValueError('Duplicate key "%s"' % key)
elif self.duplicate_action == "first":
continue
elif self.duplicate_action == "last":
self.index[key] = rec
elif self.duplicate_action == "longest":
if len(rec) > len(self.index[key]):
self.index[key] = rec
elif self.duplicate_action == "shortest":
if len(rec) < len(self.index[key]):
self.index[key] = rec
elif self.duplicate_action == "drop":
if key not in drop_keys:
drop_keys.append(key)
else:
self.index[key] = rec
prev_bend = bend
for dup in drop_keys:
self.index.pop(dup, None)
except IOError:
raise IndexNotFoundError(
"Could not read index file %s" % self.indexname)
def build_index(self):
try:
with self._fasta_opener(self.filename, 'rb') as fastafile:
with open(self.indexname, 'w') as indexfile:
rname = None # reference sequence name
offset = 0 # binary offset of end of current line
rlen = 0 # reference character length
blen = None # binary line length (includes newline)
clen = None # character line length
bad_lines = [] # lines > || < than blen
thisoffset = offset
valid_entry = False
lastline = None
for i, line in enumerate(fastafile):
line_blen = len(line)
line = line.decode()
line_clen = len(line.rstrip('\n\r'))
lastline = i
# write an index line
if line[0] == '>':
valid_entry = check_bad_lines(
rname, bad_lines, i - 1)
if valid_entry and i > 0:
indexfile.write(
"{0}\t{1:d}\t{2:d}\t{3:d}\t{4:d}\n".format(
rname, rlen, thisoffset, clen, blen))
elif not valid_entry:
raise FastaIndexingError(
"Line length of fasta"
" file is not "
"consistent! "
"Inconsistent line found in >{0} at "
"line {1:n}.".format(
rname, bad_lines[0][0] + 1))
blen = None
rlen = 0
clen = None
bad_lines = []
try: # must catch empty deflines (actually these might be okay: https://github.com/samtools/htslib/pull/258)
rname = line.rstrip('\n\r')[1:].split()[
0] # duplicates are detected with read_fai
except IndexError:
raise FastaIndexingError(
"Bad sequence name %s at line %s." %
(line.rstrip('\n\r'), str(i)))
offset += line_blen
thisoffset = fastafile.tell(
) if self._bgzf else offset
else: # check line and advance offset
if not blen:
blen = line_blen
if not clen:
clen = line_clen
# only one short line should be allowed
# before we hit the next header, and it
# should be the last line in the entry
if line_blen != blen or line_blen == 1:
bad_lines.append((i, line_blen))
offset += line_blen
rlen += line_clen
# check that we find at least 1 valid FASTA record
if not valid_entry:
raise FastaIndexingError(
"The FASTA file %s does not contain a valid sequence. "
"Check that sequence definition lines start with '>'." % self.filename)
# write the final index line, if there is one.
if lastline is not None:
valid_entry = check_bad_lines(
rname, bad_lines, lastline
) # advance index since we're at the end of the file
if valid_entry:
indexfile.write(
"{0:s}\t{1:d}\t{2:d}\t{3:d}\t{4:d}\n".format(
rname, rlen, thisoffset, clen, blen))
else:
raise FastaIndexingError(
"Line length of fasta"
" file is not "
"consistent! "
"Inconsistent line found in >{0} at "
"line {1:n}.".format(rname,
bad_lines[0][0] + 1))
except (IOError, FastaIndexingError) as e:
if isinstance(e, IOError):
raise IOError(
"%s may not be writable. Please use Fasta(rebuild=False), Faidx(rebuild=False) or faidx --no-rebuild."
% self.indexname)
elif isinstance(e, FastaIndexingError):
raise e
def write_fai(self):
with self.lock:
with open(self.indexname, 'w') as outfile:
for line in self._index_as_string:
outfile.write(line)
def from_buffer(self, start, end):
i_start = start - self.buffer['start'] # want [0, 1) coordinates from [1, 1] coordinates
i_end = end - self.buffer['start'] + 1
return self.buffer['seq'][i_start:i_end]
def fill_buffer(self, name, start, end):
try:
seq = self.from_file(name, start, end)
self.buffer['seq'] = seq
self.buffer['start'] = start
self.buffer['end'] = end
self.buffer['name'] = name
except FetchError:
pass
def fetch(self, name, start, end):
if self.read_ahead and not (name, start, end) in self:
self.fill_buffer(name, start, end + self.read_ahead)
if (name, start, end) in self:
seq = self.from_buffer(start, end)
else:
seq = self.from_file(name, start, end)
return self.format_seq(seq, name, start, end)
def from_file(self, rname, start, end, internals=False):
""" Fetch the sequence ``[start:end]`` from ``rname`` using 1-based coordinates
1. Count newlines before start
2. Count newlines to end
3. Difference of 1 and 2 is number of newlines in [start:end]
4. Seek to start position, taking newlines into account
5. Read to end position, return sequence
"""
assert start == int(start)
assert end == int(end)
try:
i = self.index[rname]
except KeyError:
raise FetchError("Requested rname {0} does not exist! "
"Please check your FASTA file.".format(rname))
start0 = start - 1 # make coordinates [0,1)
if start0 < 0:
raise FetchError(
"Requested start coordinate must be greater than 1.")
seq_len = end - start0
# Calculate offset (https://github.com/samtools/htslib/blob/20238f354894775ed22156cdd077bc0d544fa933/faidx.c#L398)
newlines_before = int(
(start0 - 1) / i.lenc * (i.lenb - i.lenc)) if start0 > 0 else 0
newlines_to_end = int(end / i.lenc * (i.lenb - i.lenc))
newlines_inside = newlines_to_end - newlines_before
seq_blen = newlines_inside + seq_len
bstart = i.offset + newlines_before + start0
if seq_blen < 0 and self.strict_bounds:
raise FetchError("Requested coordinates start={0:n} end={1:n} are "
"invalid.\n".format(start, end))
elif end > i.rlen and self.strict_bounds:
raise FetchError("Requested end coordinate {0:n} outside of {1}. "
"\n".format(end, rname))
with self.lock:
if self._bgzf: # We can't add to virtual offsets, so we need to read from the beginning of the record and trim the beginning if needed
self.file.seek(i.offset)
chunk = start0 + newlines_before + newlines_inside + seq_len
chunk_seq = self.file.read(chunk).decode()
seq = chunk_seq[start0 + newlines_before:]
else:
self.file.seek(bstart)
if bstart + seq_blen > i.bend and not self.strict_bounds:
seq_blen = i.bend - bstart
if seq_blen > 0:
seq = self.file.read(seq_blen).decode()
elif seq_blen <= 0 and not self.strict_bounds:
seq = ''
if not internals:
return seq.replace('\n', '').replace('\r', '')
else:
return (seq, locals())
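    # Worked example of the offset arithmetic above (illustrative; not part of
    # the original sources). Assume a record indexed with lenc=60 characters
    # and lenb=61 bytes per line whose sequence data starts at byte offset=500,
    # and a request for start=100, end=130 (1-based, inclusive):
    #     start0          = 99
    #     seq_len         = 130 - 99 = 31
    #     newlines_before = int(98 / 60 * 1) = 1
    #     newlines_to_end = int(130 / 60 * 1) = 2
    #     newlines_inside = 2 - 1 = 1
    #     seq_blen        = 1 + 31 = 32
    #     bstart          = 500 + 1 + 99 = 600
    # so 32 bytes are read starting at byte 600 and newlines are stripped out.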
def format_seq(self, seq, rname, start, end):
start0 = start - 1
if len(
seq
) < end - start0 and self.default_seq: # Pad missing positions with default_seq
pad_len = end - start0 - len(seq)
seq = ''.join([seq, pad_len * self.default_seq])
else: # Return less than requested range
end = start0 + len(seq)
if self.sequence_always_upper:
seq = seq.upper()
if not self.one_based_attributes:
start = start0
if self.as_raw:
return seq
else:
return Sequence(
name=rname, start=int(start), end=int(end), seq=seq)
def to_file(self, rname, start, end, seq):
""" Write sequence in region from start-end, overwriting current
contents of the FASTA file. """
if not self.mutable:
raise IOError(
"Write attempted for immutable Faidx instance. Set mutable=True to modify original FASTA."
)
file_seq, internals = self.from_file(rname, start, end, internals=True)
with self.lock:
if len(seq) != len(file_seq) - internals['newlines_inside']:
raise IOError(
"Specified replacement sequence needs to have the same length as original."
)
elif len(seq) == len(file_seq) - internals['newlines_inside']:
line_len = internals['i'].lenc
if '\r\n' in file_seq:
newline_char = '\r\n'
elif '\r' in file_seq:
newline_char = '\r'
else:
newline_char = '\n'
self.file.seek(internals['bstart'])
if internals['newlines_inside'] == 0:
self.file.write(seq.encode())
elif internals['newlines_inside'] > 0:
n = 0
m = file_seq.index(newline_char)
while m < len(seq):
self.file.write(''.join([seq[n:m], newline_char]).encode())
n = m
m += line_len
self.file.write(seq[n:].encode())
self.file.flush()
def get_long_name(self, rname):
""" Return the full sequence defline and description. External method using the self.index """
index_record = self.index[rname]
if self._bgzf:
return self._long_name_from_bgzf(index_record)
else:
return self._long_name_from_index_record(index_record)
def _long_name_from_index_record(self, index_record):
""" Return the full sequence defline and description. Internal method passing IndexRecord """
prev_bend = index_record.prev_bend
defline_end = index_record.offset
self.file.seek(prev_bend)
return self.file.read(defline_end - prev_bend).decode()[1:-1]
def _long_name_from_bgzf(self, index_record):
""" Return the full sequence defline and description. Internal method passing IndexRecord
This method is present for compatibility with BGZF files, since we cannot subtract their offsets.
It may be possible to implement a more efficient method. """
raise NotImplementedError(
"FastaRecord.long_name and Fasta(read_long_names=True) "
"are not supported currently for BGZF compressed files.")
prev_bend = index_record.prev_bend
self.file.seek(prev_bend)
defline = []
while True:
chunk = self.file.read(4096).decode()
defline.append(chunk)
if '\n' in chunk or '\r' in chunk:
break
return ''.join(defline)[1:].split('\n\r')[0]
def close(self):
self.__exit__()
def __enter__(self):
return self
def __exit__(self, *args):
self.file.close()
class FastaRecord(object):
__slots__ = ['name', '_fa']
def __init__(self, name, fa):
self.name = name
self._fa = fa
def __getitem__(self, n):
"""Return sequence from region [start, end)
Coordinates are 0-based, end-exclusive."""
try:
if isinstance(n, slice):
start, stop, step = n.start, n.stop, n.step
if start is None:
start = 0
if stop is None:
stop = len(self)
if stop < 0:
stop = len(self) + stop
if start < 0:
start = len(self) + start
return self._fa.get_seq(self.name, start + 1, stop)[::step]
elif isinstance(n, integer_types):
if n < 0:
n = len(self) + n
return self._fa.get_seq(self.name, n + 1, n + 1)
except FetchError:
raise
def __iter__(self):
""" Construct a line-based generator that respects the original line lengths. """
line_len = self._fa.faidx.index[self.name].lenc
start = 0
while True:
end = start + line_len
if end < len(self):
yield self[start:end]
else:
yield self[start:]
return
start += line_len
def __reversed__(self):
""" Reverse line-based generator """
line_len = self._fa.faidx.index[self.name].lenc
# have to determine last line length
last_line = len(self) % line_len
if last_line == 0:
last_line = line_len
end = len(self)
start = end - last_line
while True:
if start > 0:
yield self[start:end][::-1]
else:
yield self[:end][::-1]
return
if end == len(self): # first iteration
end -= last_line
else:
end -= line_len
start = end - line_len
def __repr__(self):
return 'FastaRecord("%s")' % (self.name)
def __len__(self):
return self._fa.faidx.index[self.name].rlen
@property
def unpadded_len(self):
""" Returns the length of the contig without 5' and 3' N padding.
Functions the same as contigNonNSize in Fasta.cpp at
https://github.com/Illumina/hap.py/blob/master/src/c%2B%2B/lib/tools/Fasta.cpp#L284
"""
length = len(self)
stop = False
for line in iter(self):
if stop:
break
if isinstance(line, Sequence):
line = line.seq
for base in line.upper():
if base == 'N':
length -= 1
else:
stop = True
break
stop = False
for line in reversed(self):
if stop:
break
if isinstance(line, Sequence):
line = line.seq
for base in line.upper():
if base == 'N':
length -= 1
else:
stop = True
break
return length
def __str__(self):
return str(self[:])
@property
def variant_sites(self):
if isinstance(self._fa, FastaVariant):
pos = []
var = self._fa.vcf.fetch(self.name, 0, len(self))
for site in var:
if site.is_snp:
sample = site.genotype(self._fa.sample)
if sample.gt_type in self._fa.gt_type and eval(
self._fa.filter):
pos.append(site.POS)
return tuple(pos)
else:
raise NotImplementedError(
"variant_sites() only valid for FastaVariant.")
@property
def long_name(self):
""" Read the actual defline from self._fa.faidx mdshw5/pyfaidx#54 """
return self._fa.faidx.get_long_name(self.name)
@property
def __array_interface__(self):
""" Implement numpy array interface for issue #139"""
return {
'shape': (len(self), ),
'typestr': '|S1',
'version': 3,
'data': buffer(str(self).encode('ascii'))
}
class MutableFastaRecord(FastaRecord):
def __init__(self, name, fa):
super(MutableFastaRecord, self).__init__(name, fa)
if self._fa.faidx._fasta_opener != open:
raise UnsupportedCompressionFormat(
"BGZF compressed FASTA is not supported for MutableFastaRecord. "
"Please decompress your FASTA file.")
def __setitem__(self, n, value):
"""Mutate sequence in region [start, end)
to value.
Coordinates are 0-based, end-exclusive."""
try:
if isinstance(n, slice):
start, stop, step = n.start, n.stop, n.step
if step:
raise IndexError("Step operator is not implemented.")
if not start:
start = 0
if not stop:
stop = len(self)
if stop < 0:
stop = len(self) + stop
if start < 0:
start = len(self) + start
self._fa.faidx.to_file(self.name, start + 1, stop, value)
elif isinstance(n, integer_types):
if n < 0:
n = len(self) + n
return self._fa.faidx.to_file(self.name, n + 1, n + 1, value)
except (FetchError, IOError):
raise
class Fasta(object):
def __init__(self,
filename,
default_seq=None,
key_function=lambda x: x,
as_raw=False,
strict_bounds=False,
read_ahead=None,
mutable=False,
split_char=None,
filt_function=lambda x: True,
one_based_attributes=True,
read_long_names=False,
duplicate_action="stop",
sequence_always_upper=False,
rebuild=True,
build_index=True):
"""
An object that provides a pygr compatible interface.
filename: name of fasta file
"""
self.filename = filename
self.mutable = mutable
self.faidx = Faidx(
filename,
key_function=key_function,
as_raw=as_raw,
default_seq=default_seq,
strict_bounds=strict_bounds,
read_ahead=read_ahead,
mutable=mutable,
split_char=split_char,
filt_function=filt_function,
one_based_attributes=one_based_attributes,
read_long_names=read_long_names,
duplicate_action=duplicate_action,
sequence_always_upper=sequence_always_upper,
rebuild=rebuild,
build_index=build_index)
self.keys = self.faidx.index.keys
if not self.mutable:
self.records = dict(
[(rname, FastaRecord(rname, self)) for rname in self.keys()])
elif self.mutable:
self.records = dict([(rname, MutableFastaRecord(rname, self))
for rname in self.keys()])
def __contains__(self, rname):
"""Return True if genome contains record."""
return rname in self.faidx.index
def __getitem__(self, rname):
"""Return a chromosome by its name, or its numerical index."""
if isinstance(rname, integer_types):
rname = tuple(self.keys())[rname]
try:
return self.records[rname]
except KeyError:
raise KeyError("{0} not in {1}.".format(rname, self.filename))
def __repr__(self):
return 'Fasta("%s")' % (self.filename)
def __iter__(self):
for rname in self.keys():
yield self[rname]
def get_seq(self, name, start, end, rc=False):
"""Return a sequence by record name and interval [start, end).
Coordinates are 1-based, end-exclusive.
If rc is set, reverse complement will be returned.
"""
# Get sequence from real genome object and save result.
seq = self.faidx.fetch(name, start, end)
if rc:
return -seq
else:
return seq
def get_spliced_seq(self, name, intervals, rc=False):
"""Return a sequence by record name and list of intervals
Interval list is an iterable of [start, end].
Coordinates are 1-based, end-exclusive.
If rc is set, reverse complement will be returned.
"""
# Get sequence for all intervals
chunks = [self.faidx.fetch(name, s, e) for s, e in intervals]
start = chunks[0].start
end = chunks[-1].end
        # reverse complement
if rc:
seq = "".join([(-chunk).seq for chunk in chunks[::-1]])
else:
seq = "".join([chunk.seq for chunk in chunks])
# Sequence coordinate validation wont work since
# len(Sequence.seq) != end - start
return Sequence(name=name, seq=seq, start=None, end=None)
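    # Illustrative example for get_spliced_seq (added for documentation; not
    # part of the original sources). Given a Fasta object `genome`, the exons
    # of a transcript can be stitched together like this:
    #     exons = [(7529, 8116), (8193, 9484)]
    #     plus  = genome.get_spliced_seq('chr2L', exons)
    #     minus = genome.get_spliced_seq('chr2L', exons, rc=True)
    # `plus.seq` is the concatenated exon sequence; with rc=True each chunk is
    # reverse-complemented and the chunks are joined in reverse order.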
def close(self):
self.__exit__()
def __enter__(self):
return self
def __exit__(self, *args):
self.faidx.__exit__(*args)
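# Illustrative usage sketch (added for documentation; not part of the original
# pyfaidx sources). Assumes a FASTA file named "genome.fa" with a record
# "chr1" exists on disk; the index file genome.fa.fai is built on first use.
def _example_fasta_usage():  # pragma: no cover
    genome = Fasta('genome.fa')
    seq = genome.get_seq('chr1', 100, 130)           # Sequence object
    rc = genome.get_seq('chr1', 100, 130, rc=True)   # reverse complement
    sub = genome['chr1'][99:130]                     # same region via 0-based slicing
    genome.close()
    return seq.seq, rc.seq, sub.seq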
class FastaVariant(Fasta):
""" Return consensus sequence from FASTA and VCF inputs
"""
expr = set(('>', '<', '=', '!'))
def __init__(self,
filename,
vcf_file,
sample=None,
het=True,
hom=True,
call_filter=None,
**kwargs):
super(FastaVariant, self).__init__(filename, **kwargs)
try:
import pysam
except ImportError:
raise ImportError("pysam must be installed for FastaVariant.")
try:
import vcf
except ImportError:
raise ImportError("PyVCF must be installed for FastaVariant.")
if call_filter is not None:
try:
key, expr, value = call_filter.split() # 'GQ > 30'
            except (IndexError, ValueError):
raise ValueError(
"call_filter must be a string in the format 'XX <>!= NN'")
assert all([x in self.expr for x in list(expr)])
assert all([x in string.ascii_uppercase for x in list(key)])
assert all([x in string.printable for x in list(value)])
self.filter = "sample['{key}'] {expr} {value}".format(**locals())
else:
self.filter = 'True'
if os.path.exists(vcf_file):
self.vcf = vcf.Reader(filename=vcf_file)
else:
raise IOError("File {0} does not exist.".format(vcf_file))
if sample is not None:
self.sample = sample
else:
self.sample = self.vcf.samples[0]
if len(self.vcf.samples) > 1 and sample is None:
warnings.warn("Using sample {0} genotypes.".format(
self.sample), RuntimeWarning)
if het and hom:
self.gt_type = set((1, 2))
elif het:
self.gt_type = set((1, ))
elif hom:
self.gt_type = set((2, ))
else:
self.gt_type = set()
def __repr__(self):
return 'FastaVariant("%s", "%s", gt="%s")' % (self.filename,
self.vcf.filename,
str(self.gt_type))
def get_seq(self, name, start, end):
"""Return a sequence by record name and interval [start, end).
Replace positions with polymorphism with variant.
Coordinates are 0-based, end-exclusive.
"""
seq = self.faidx.fetch(name, start, end)
if self.faidx.as_raw:
seq_mut = list(seq)
del seq
else:
seq_mut = list(seq.seq)
del seq.seq
var = self.vcf.fetch(name, start - 1, end)
for record in var:
if record.is_snp: # skip indels
sample = record.genotype(self.sample)
if sample.gt_type in self.gt_type and eval(self.filter):
alt = record.ALT[0]
i = (record.POS - 1) - (start - 1)
seq_mut[i:i + len(alt)] = str(alt)
# slice the list in case we added an MNP in last position
if self.faidx.as_raw:
return ''.join(seq_mut[:end - start + 1])
else:
seq.seq = ''.join(seq_mut[:end - start + 1])
return seq
def wrap_sequence(n, sequence, fillvalue=''):
args = [iter(sequence)] * n
for line in zip_longest(fillvalue=fillvalue, *args):
yield ''.join(line + ("\n", ))
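# For example (illustrative): ''.join(wrap_sequence(3, 'ATCGTA')) == 'ATC\nGTA\n',
# i.e. the sequence is re-wrapped into lines of length n, each ending in a newline.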
# To take a complement, we map each character in the first string in this pair
# to the corresponding character in the second string.
complement_map = ('ACTGNactgnYRWSKMDVHBXyrwskmdvhbx',
'TGACNtgacnRYWSMKHBDVXrywsmkhbdvx')
invalid_characters_set = set(
chr(x) for x in range(256) if chr(x) not in complement_map[0])
invalid_characters_string = ''.join(invalid_characters_set)
if PY3:
complement_table = str.maketrans(complement_map[0], complement_map[1],
invalid_characters_string)
translate_arguments = (complement_table, )
elif PY2:
complement_table = string.maketrans(complement_map[0], complement_map[1])
translate_arguments = (complement_table, invalid_characters_string)
def complement(seq):
""" Returns the complement of seq.
>>> seq = 'ATCGTA'
>>> complement(seq)
'TAGCAT'
"""
seq = str(seq)
result = seq.translate(*translate_arguments)
if len(result) != len(seq):
first_invalid_position = next(
i for i in range(len(seq)) if seq[i] in invalid_characters_set)
raise ValueError(
"Sequence contains non-DNA character '{0}' at position {1:n}\n".
format(seq[first_invalid_position], first_invalid_position + 1))
return result
def translate_chr_name(from_name, to_name):
chr_name_map = dict(zip(from_name, to_name))
def map_to_function(rname):
return chr_name_map[rname]
return map_to_function
def bed_split(bed_entry):
try:
rname, start, end = bed_entry.rstrip().split()[:3]
except (IndexError, ValueError):
raise BedError('Malformed BED entry! {0}\n'.format(bed_entry.rstrip()))
start, end = (int(start), int(end))
return (rname, start, end)
def ucsc_split(region):
try:
rname, interval = region.split(':')
except ValueError:
rname = region
interval = None
try:
start, end = interval.split('-')
start, end = (int(start) - 1, int(end))
except (AttributeError, ValueError):
start, end = (None, None)
return (rname, start, end)
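# Illustrative note (added for documentation; not part of the original sources):
# ucsc_split('chr1:100-200') returns ('chr1', 99, 200), converting a 1-based,
# end-inclusive UCSC region into the 0-based, end-exclusive coordinates used by
# fetch_sequence(), while bed_split('chr1\t99\t200') returns ('chr1', 99, 200)
# unchanged because BED intervals are already 0-based and end-exclusive.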
def check_bad_lines(rname, bad_lines, i):
""" Find inconsistent line lengths in the middle of an
entry. Allow blank lines between entries, and short lines
occurring at the last line of an entry. Returns boolean
validating the entry.
>>> check_bad_lines('chr0', [(10, 79)], 10)
True
>>> check_bad_lines('chr0', [(9, 79)], 10)
False
>>> check_bad_lines('chr0', [(9, 79), (10, 1)], 10)
True
"""
if len(bad_lines) == 0:
return True
elif len(bad_lines) == 1:
if bad_lines[0][0] == i: # must be last line
return True
else:
return False
elif len(bad_lines) == 2:
if bad_lines[0][0] == i: # must not be last line
return False
elif bad_lines[1][0] == i and bad_lines[1][1] == 1: # blank last line
if bad_lines[0][0] + 1 == i and bad_lines[0][1] > 1: # non-blank line
return True
else:
return False
if len(bad_lines) > 2:
return False
raise RuntimeError("Unhandled exception during fasta indexing at entry " + rname + \
"Please report this issue at https://github.com/mdshw5/pyfaidx/issues " + \
str(bad_lines))
if __name__ == "__main__":
import doctest
    doctest.testmod()

# --- end of agouti_pkg/pyfaidx/__init__.py (AGouTI-1.0.3) ---
import argparse
import sys
import os.path
import re
from agouti_pkg.pyfaidx import Fasta, wrap_sequence, FetchError, ucsc_split, bed_split
from collections import defaultdict
# The Counter backport at the bottom of this module relies on these helpers.
from heapq import nlargest
from itertools import repeat
from operator import itemgetter
keepcharacters = (' ', '.', '_')
def write_sequence(args):
_, ext = os.path.splitext(args.fasta)
if ext:
ext = ext[1:] # remove the dot from extension
filt_function = re.compile(args.regex).search
if args.invert_match:
filt_function = lambda x: not re.compile(args.regex).search(x)
fasta = Fasta(args.fasta, default_seq=args.default_seq, key_function=eval(args.header_function), strict_bounds=not args.lazy, split_char=args.delimiter, filt_function=filt_function, read_long_names=args.long_names, rebuild=not args.no_rebuild)
regions_to_fetch, split_function = split_regions(args)
if not regions_to_fetch:
regions_to_fetch = fasta.keys()
header = False
for region in regions_to_fetch:
name, start, end = split_function(region)
if args.size_range:
if start is not None and end is not None:
sequence_len = end - start
else:
sequence_len = len(fasta[name])
if args.size_range[0] > sequence_len or args.size_range[1] < sequence_len:
continue
if args.split_files: # open output file based on sequence name
filename = '.'.join(str(e) for e in (name, start, end, ext) if e)
filename = ''.join(c for c in filename if c.isalnum() or c in keepcharacters)
outfile = open(filename, 'w')
elif args.out:
outfile = args.out
else:
outfile = sys.stdout
try:
if args.transform:
if not header and args.transform == 'nucleotide':
outfile.write("name\tstart\tend\tA\tT\tC\tG\tN\tothers\n")
header = True
outfile.write(transform_sequence(args, fasta, name, start, end))
else:
for line in fetch_sequence(args, fasta, name, start, end):
outfile.write(line)
except FetchError as e:
raise FetchError(str(e) + " Try setting --lazy.\n")
if args.split_files:
outfile.close()
fasta.__exit__()
def fetch_sequence(args, fasta, name, start=None, end=None):
try:
line_len = fasta.faidx.index[name].lenc
        if args.auto_strand and start is not None and end is not None and start > end:
# flip (0, 1] coordinates
sequence = fasta[name][end - 1:start + 1]
sequence = sequence.reverse.complement
else:
sequence = fasta[name][start:end]
except KeyError:
sys.stderr.write("warning: {name} not found in file\n".format(**locals()))
return
if args.complement:
sequence = sequence.complement
if args.reverse:
sequence = sequence.reverse
if args.no_output:
return
if args.no_names:
pass
else:
if (start or end) and not args.no_coords:
yield ''.join(['>', sequence.fancy_name, '\n'])
else:
yield ''.join(['>', sequence.name, '\n'])
for line in wrap_sequence(line_len, sequence.seq):
yield line
def mask_sequence(args):
fasta = Fasta(args.fasta, mutable=True, split_char=args.delimiter)
regions_to_fetch, split_function = split_regions(args)
for region in regions_to_fetch:
rname, start, end = split_function(region)
if args.mask_with_default_seq:
if start and end:
span = end - start
elif not start and not end:
span = len(fasta[rname])
else:
span = len(fasta[rname][start:end])
fasta[rname][start:end] = span * args.default_seq
elif args.mask_by_case:
            fasta[rname][start:end] = str(fasta[rname][start:end]).lower()
def split_regions(args):
if args.bed:
regions_to_fetch = args.bed
split_function = bed_split
else:
regions_to_fetch = args.regions
split_function = ucsc_split
return (regions_to_fetch, split_function)
def transform_sequence(args, fasta, name, start=None, end=None):
line_len = fasta.faidx.index[name].lenc
s = fasta[name][start:end]
if args.no_output:
return
if args.transform == 'bed':
        return '{name}\t{start}\t{end}\n'.format(name=s.name, start=s.start - 1, end=s.end)
elif args.transform == 'chromsizes':
return '{name}\t{length}\n'.format(name=s.name, length=len(s))
elif args.transform == 'nucleotide':
ss = str(s).upper()
nucs = defaultdict(int)
nucs.update([(c, str(ss).count(c)) for c in set(str(ss))])
A = nucs.pop('A', 0)
T = nucs.pop('T', 0)
C = nucs.pop('C', 0)
G = nucs.pop('G', 0)
N = nucs.pop('N', 0)
        others = '|'.join([':'.join((k, str(v))) for k, v in nucs.items()])
return '{sname}\t{sstart}\t{send}\t{A}\t{T}\t{C}\t{G}\t{N}\t{others}\n'.format(sname=s.name, sstart=s.start, send=s.end, **locals())
elif args.transform == 'transposed':
return '{name}\t{start}\t{end}\t{seq}\n'.format(name=s.name, start=s.start, end=s.end, seq=str(s))
def main(ext_args=None):
from pyfaidx import __version__
parser = argparse.ArgumentParser(description="Fetch sequences from FASTA. If no regions are specified, all entries in the input file are returned. Input FASTA file must be consistently line-wrapped, and line wrapping of output is based on input line lengths.",
epilog="Please cite: Shirley MD, Ma Z, Pedersen BS, Wheelan SJ. (2015) Efficient \"pythonic\" access to FASTA files using pyfaidx. PeerJ PrePrints 3:e1196 https://dx.doi.org/10.7287/peerj.preprints.970v1")
parser.add_argument('fasta', type=str, help='FASTA file')
parser.add_argument('regions', type=str, nargs='*', help="space separated regions of sequence to fetch e.g. chr1:1-1000")
_input = parser.add_argument_group('input options')
output = parser.add_argument_group('output options')
header = parser.add_argument_group('header options')
_input.add_argument('-b', '--bed', type=argparse.FileType('r'), help="bed file of regions")
output.add_argument('-o', '--out', type=argparse.FileType('w'), help="output file name (default: stdout)")
output.add_argument('-i', '--transform', type=str, choices=('bed', 'chromsizes', 'nucleotide', 'transposed'), help="transform the requested regions into another format. default: %(default)s")
output.add_argument('-c', '--complement', action="store_true", default=False, help="complement the sequence. default: %(default)s")
output.add_argument('-r', '--reverse', action="store_true", default=False, help="reverse the sequence. default: %(default)s")
output.add_argument('-y', '--auto-strand', action="store_true", default=False, help="reverse complement the sequence when start > end coordinate. default: %(default)s")
output.add_argument('-a', '--size-range', type=parse_size_range, default=None, help='selected sequences are in the size range [low, high]. example: 1,1000 default: %(default)s')
names = header.add_mutually_exclusive_group()
names.add_argument('-n', '--no-names', action="store_true", default=False, help="omit sequence names from output. default: %(default)s")
names.add_argument('-f', '--long-names', action="store_true", default=False, help="output full (long) names from the input fasta headers. default: headers are truncated after the first whitespace")
header.add_argument('-t', '--no-coords', action="store_true", default=False, help="omit coordinates (e.g. chr:start-end) from output headers. default: %(default)s")
output.add_argument('-x', '--split-files', action="store_true", default=False, help="write each region to a separate file (names are derived from regions)")
output.add_argument('-l', '--lazy', action="store_true", default=False, help="fill in --default-seq for missing ranges. default: %(default)s")
output.add_argument('-s', '--default-seq', type=check_seq_length, default=None, help='default base for missing positions and masking. default: %(default)s')
header.add_argument('-d', '--delimiter', type=str, default=None, help='delimiter for splitting names to multiple values (duplicate names will be discarded). default: %(default)s')
header.add_argument('-e', '--header-function', type=str, default='lambda x: x.split()[0]', help='python function to modify header lines e.g: "lambda x: x.split("|")[0]". default: %(default)s')
header.add_argument('-u', '--duplicates-action', type=str, default="stop", choices=("stop", "first", "last", "longest", "shortest"), help='entry to take when duplicate sequence names are encountered. default: %(default)s')
matcher = parser.add_argument_group('matching arguments')
matcher.add_argument('-g', '--regex', type=str, default='.*', help='selected sequences are those matching regular expression. default: %(default)s')
matcher.add_argument('-v', '--invert-match', action="store_true", default=False, help="selected sequences are those not matching 'regions' argument. default: %(default)s")
masking = output.add_mutually_exclusive_group()
masking.add_argument('-m', '--mask-with-default-seq', action="store_true", default=False, help="mask the FASTA file using --default-seq default: %(default)s")
masking.add_argument('-M', '--mask-by-case', action="store_true", default=False, help="mask the FASTA file by changing to lowercase. default: %(default)s")
output.add_argument('--no-output', action="store_true", default=False, help="do not output any sequence. default: %(default)s")
parser.add_argument('--no-rebuild', action="store_true", default=False, help="do not rebuild the .fai index even if it is out of date. default: %(default)s")
parser.add_argument('--version', action="version", version=__version__, help="print pyfaidx version number")
# print help usage if no arguments are supplied
if len(sys.argv)==1 and not ext_args:
parser.print_help()
sys.exit(1)
elif ext_args:
args = parser.parse_args(ext_args)
else:
args = parser.parse_args()
if args.auto_strand:
if args.complement:
sys.stderr.write("--auto-strand and --complement are both set. Are you sure this is what you want?\n")
if args.reverse:
sys.stderr.write("--auto-strand and --reverse are both set. Are you sure this is what you want?\n")
if args.mask_with_default_seq or args.mask_by_case:
mask_sequence(args)
else:
write_sequence(args)
def check_seq_length(value):
if value is None:
pass # default value
elif len(value) != 1:
# user is passing a single character
raise argparse.ArgumentTypeError("--default-seq value must be a single character!")
return value
def parse_size_range(value):
""" Size range argument should be in the form start,end and is end-inclusive. """
if value is None:
return value
try:
start, end = value.replace(' ', '').replace('\t', '').split(',')
except (TypeError, ValueError, IndexError):
raise ValueError
return (int(start), int(end))
class Counter(dict):
'''Dict subclass for counting hashable objects. Sometimes called a bag
or multiset. Elements are stored as dictionary keys and their counts
are stored as dictionary values.
'''
def __init__(self, iterable=None, **kwds):
'''Create a new, empty Counter object. And if given, count elements
from an input iterable. Or, initialize the count from another mapping
of elements to their counts.
'''
self.update(iterable, **kwds)
def __missing__(self, key):
return 0
def most_common(self, n=None):
'''List the n most common elements and their counts from the most
common to the least. If n is None, then list all element counts.
'''
if n is None:
return sorted(self.iteritems(), key=itemgetter(1), reverse=True)
return nlargest(n, self.iteritems(), key=itemgetter(1))
def elements(self):
'''Iterator over elements repeating each as many times as its count.
If an element's count has been set to zero or is a negative number,
elements() will ignore it.
'''
for elem, count in self.iteritems():
for _ in repeat(None, count):
yield elem
# Override dict methods where the meaning changes for Counter objects.
@classmethod
def fromkeys(cls, iterable, v=None):
raise NotImplementedError(
'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
def update(self, iterable=None, **kwds):
'''Like dict.update() but add counts instead of replacing them.
Source can be an iterable, a dictionary, or another Counter instance.
'''
if iterable is not None:
if hasattr(iterable, 'iteritems'):
if self:
self_get = self.get
for elem, count in iterable.iteritems():
self[elem] = self_get(elem, 0) + count
else:
dict.update(self, iterable) # fast path when counter is empty
else:
self_get = self.get
for elem in iterable:
self[elem] = self_get(elem, 0) + 1
if kwds:
self.update(kwds)
def copy(self):
'Like dict.copy() but returns a Counter instance instead of a dict.'
return Counter(self)
def __delitem__(self, elem):
'Like dict.__delitem__() but does not raise KeyError for missing values.'
if elem in self:
dict.__delitem__(self, elem)
def __repr__(self):
if not self:
return '%s()' % self.__class__.__name__
items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
return '%s({%s})' % (self.__class__.__name__, items)
# Multiset-style mathematical operations discussed in:
# Knuth TAOCP Volume II section 4.6.3 exercise 19
# and at http://en.wikipedia.org/wiki/Multiset
#
# Outputs guaranteed to only include positive counts.
#
# To strip negative and zero counts, add-in an empty counter:
# c += Counter()
def __add__(self, other):
'''Add counts from two counters.
'''
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
for elem in set(self) | set(other):
newcount = self[elem] + other[elem]
if newcount > 0:
result[elem] = newcount
return result
def __sub__(self, other):
''' Subtract count, but keep only results with positive counts.
'''
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
for elem in set(self) | set(other):
newcount = self[elem] - other[elem]
if newcount > 0:
result[elem] = newcount
return result
def __or__(self, other):
'''Union is the maximum of value in either of the input counters.
'''
if not isinstance(other, Counter):
return NotImplemented
_max = max
result = Counter()
for elem in set(self) | set(other):
newcount = _max(self[elem], other[elem])
if newcount > 0:
result[elem] = newcount
return result
def __and__(self, other):
''' Intersection is the minimum of corresponding counts.
'''
if not isinstance(other, Counter):
return NotImplemented
_min = min
result = Counter()
if len(self) < len(other):
self, other = other, self
for elem in filter(self.__contains__, other):
newcount = _min(self[elem], other[elem])
if newcount > 0:
result[elem] = newcount
return result
if __name__ == "__main__":
main() | AGouTI | /AGouTI-1.0.3.tar.gz/AGouTI-1.0.3/agouti_pkg/pyfaidx/cli.py | cli.py |
# Amazon HealthLake Imaging DICOM Exporter module
This project is a multi-processed python 3.8+ module facilitating the load of DICOM datasets stored in Amazon HealthLake Imaging into the memory or exported to the file system .
## Getting started
This module can be installed with the python pip utility.
1. Clone this repository:
```terminal
git clone https://github.com/aws-samples/healthlake-imaging-to-dicom-python-module.git
```
2. Locate your terminal in the cloned folder.
3. Execute the below command to install the modudle via pip :
```terminal
pip install .
```
## How to use this module
To use this module you need to import the AHItoDICOM class and instantiate the AHItoDICOM helper:
```python
from AHItoDICOMInterface.AHItoDICOM import AHItoDICOM
helper = AHItoDICOM( AHI_endpoint= AHIEndpoint , fetcher_process_count=fetcher_count , dicomizer_process_count=dicomizer_count)
```
Once the helper is instanciated, you can call th DICOMize() function to export DICOM data from AHI into the memory, as pydicom dataset array.
```python
instances = helper.DICOMizeImageSet(datastore_id=datastoreId , image_set_id=imageSetId)
```
## Available functions
|Function|Description|
|--------|-----------|
AHItoDICOM(<br>aws_access_key : str = None,<br> aws_secret_key : str = None ,<br>AHI_endpoint : str = None,<br> fetcher_process_count : int = None,<br> dicomizer_process_count : int = None )| Use to instantiate the helper. All paraneters are non-mandatory.<br><br> <b>aws_access_key & aws_secret_key and</b> : Can be used if there is no default credentials configured in the aws client, or if the code runs in an environment not supporting IAM profile.<br> <b>AHI_endpoint</b> : Only useful to AWS employees. Other users should let this value set to None.<br><b>fetcher_process_count</b> : This parameter defines the number of fetcher processes to instanciate to fetch and uncompress the frames. By default the module will create 4 x the number of cores.<br><b>dicomizer_process_count</b> : This parameter defines the number of DICOMizer processes to instanciate to create the pydicom datasets. By default the module will create 1 x the number of cores.|
|DICOMizeImageSet(datastore_id: str, image_set_id: str)| Use to request the pydicom datasets to be loaded in memory. <br><br><b>datastore_id</b> : The AHI datastore where the ImageSet is stored.<br><b>image_set_id</b> : The AHI ImageSet Id of the image collection requested.<br>|
|DICOMizeByStudyInstanceUID(datastore_id: str, study_instance_uid: str)| Use to request the pydicom datasets to be loaded in memory. <br><br><b>datastore_id</b> : The AHI datastore where the ImageSet is stored.<br><b>study_instance_uid</b> : The DICOM study instance uid of the Study to export.<br>|
|getImageSetToSeriesUIDMap(datastore_id: str, study_instance_uid: str)| Returns an array of thes series descriptors for the given study, associated with theit ImageSetIds. Can be useful to decide which series to later load in memory. <br><br><b>datastore_id</b> : The AHI datastore where the ImageSet is stored.<br><b>study_instance_uid</b> : The study instance UID of the DICOM study.<br><br>Returns an array of series descriptors like his :<br>[{'SeriesNumber': '1', 'Modality': 'CT', 'SeriesDescription': 'CT series for liver tumor from nii 014', 'SeriesInstanceUID': '1.2.826.0.1.3680043.2.1125.1.34918616334750294149839565085991567'}]|
|saveAsDICOM(ds: Dataset,<br>destination : str)| Saves the DICOM in memory object on the filesystem destination.<br><br><b>ds</b> : The pydicom dataset representing the instance. Mostly one instance of the array returned by DICOMize().<br><b>destination</b> : The file path where to store the DIOCM P10 file.|
|saveAsPngPIL(ds: Dataset,<br>destination : str)| Saves a representation of the pixel raster of one instance on the filesystem as PNG.<br><br><b>ds</b> : The pydicom dataset representing the instance. Mostly one instance of the array returned by DICOMize().<br><b>destination</b> : The file path where to store the PNG file.|
## Code Example
The file `example/main.py` demonstrates how to use the various functions described above. To use it modifiy the `datastoreId` the `imageSetId` and the `studyInstanceUID` variables in the main function. You can also experiment by changing the `fetcher_count` and `dicomizer_count` parameters for better performance. Below is an example how the example can be started with an environment where the AWS CLI was configure with an IAM user and the region us-east-2 selected as default :
```
$ python3 main.py
python main.py
Getting ImageSet JSON metadata object.
5
Listing ImageSets and Series info by StudyInstanceUID
[{'ImageSetId': '0aaf9a3b6405bd6d393876806034b1c0', 'SeriesNumber': '3', 'Modality': 'CT', 'SeriesDescription': 'KneeHR 1.0 B60s', 'SeriesInstanceUID': '1.3.6.1.4.1.19291.2.1.2.1140133144321975855136128320349', 'InstanceCount': 74}, {'ImageSetId': '81bfc6aa3416912056e95188ab74870b', 'SeriesNumber': '2', 'Modality': 'CT', 'SeriesDescription': 'KneeHR 3.0 B60s', 'SeriesInstanceUID': '1.3.6.1.4.1.19291.2.1.2.1140133144321975855136128221126', 'InstanceCount': 222}]
DICOMizing by StudyInstanceUID
DICOMizebyStudyInstanceUID
0aaf9a3b6405bd6d393876806034b1c0
81bfc6aa3416912056e95188ab74870b
DICOMizing by ImageSetID
222 DICOMized in 3.3336379528045654.
Exporting images of the ImageSet in png format.
Exporting images of the ImageSet in DICOM P10 format.
```
After the example code has returned the file system now contains folders named with the `StudyInstanceUID` of the imageSet exported within the `out` folder. This fodler prefixed with `dcm_` holds the DICOM P10 files for the imageSet. The folder prefixed with `png_` holds PNG image representations of the imageSet.
## Using this module in Amazon SageMaker
This package can be used in Amazn SageMaker by adding the following code to the SageMaker notebook instance 2 first cells:
### Cell 1
```python
#Install the python packages
%%sh
pip install --upgrade pip --quiet
pip install boto3 botocore awscliv2 AHItoDICOMInterface --upgrade --quiet
```
### Cell 2
```python
#Restart the Kernel to take the new versions of awscliv2 in account.
import IPython
IPython.Application.instance().kernel.do_shutdown(True) #automatically restarts kernel
```
An example of a SageMaker Jupyter notebook using this module is available in the `example` folder of this repository : [jupyter-sagemaker-example.ipynb](./example/jupyter-sagemaker-example.ipynb)
| AHItoDICOMInterface | /AHItoDICOMInterface-0.1.3.1.tar.gz/AHItoDICOMInterface-0.1.3.1/README.md | README.md |
# AHP
层次分析法
## How to use
Install
`pip install ahp`
use
```python
from AHP import AHP
import numpy as np
# 准则重要性矩阵
criteria = np.array([[1, 2, 7, 5, 5],
[1 / 2, 1, 4, 3, 3],
[1 / 7, 1 / 4, 1, 1 / 2, 1 / 3],
[1 / 5, 1 / 3, 2, 1, 1],
[1 / 5, 1 / 3, 3, 1, 1]])
# 对每个准则,方案优劣排序
b1 = np.array([[1, 1 / 3, 1 / 8], [3, 1, 1 / 3], [8, 3, 1]])
b2 = np.array([[1, 2, 5], [1 / 2, 1, 2], [1 / 5, 1 / 2, 1]])
b3 = np.array([[1, 1, 3], [1, 1, 3], [1 / 3, 1 / 3, 1]])
b4 = np.array([[1, 3, 4], [1 / 3, 1, 1], [1 / 4, 1, 1]])
b5 = np.array([[1, 4, 1 / 2], [1 / 4, 1, 1 / 4], [2, 4, 1]])
b = [b1, b2, b3, b4, b5]
a = AHP(criteria, b).run()
```
打印:
```text
==========准则层==========
最大特征值5.072084,CR=0.014533,检验通过
准则层权重 = [0.47583538 0.26360349 0.0538146 0.09806829 0.10867824]
==========方案层==========
方案0 方案1 方案2 最大特征值 CR 一致性检验
准则0 0.081935 0.236341 0.681725 3.001542 8.564584e-04 True
准则1 0.595379 0.276350 0.128271 3.005535 3.075062e-03 True
准则2 0.428571 0.428571 0.142857 3.000000 -4.934325e-16 True
准则3 0.633708 0.191921 0.174371 3.009203 5.112618e-03 True
准则4 0.344545 0.108525 0.546931 3.053622 2.978976e-02 True
==========目标层==========
[[0.318586 0.23898522 0.44242878]]
最优选择是方案2
``` | AHP | /AHP-0.0.1.tar.gz/AHP-0.0.1/README.md | README.md |
import unittest
import datetime
import pkgutil
from io import StringIO
from typing import Union, NoReturn, Tuple, Dict
# Third-Party Dependencies
import numpy as np
from ..common.constants import *
def geodetic2spherical(lat: float, lon: float, h: float, a: float = EARTH_EQUATOR_RADIUS/1000.0, b: float = EARTH_POLAR_RADIUS/1000.0) -> Tuple[float, float, float]:
"""
Transform geodetic coordinates into spherical geocentric coordinates
The transformation cannot be a simple cylindric to spherical conversion, as
we must also consider a planet's ellipsoid form. With the aid of its
pre-defined flatness and eccentricity, we can better approximate the values
of the conversion.
In this function the Earth's major and minor semi-axis are considered.
However, we can convert the coordinates of different ellipsoidal bodies, by
giving the dimensions of its semi-axes.
Notice that the longitude between both systems remains the same.
Parameters
----------
lat : float
Latitude, in radians, of point in geodetic coordinates
lon : float
Longitude, in radians, of point in geodetic coordinates
h : float
Height, in kilometers, of point in geodetic coordinates
a : float, default: 6378.137
Major semi-axis, in kilometers. Defaults to Earth's equatorial radius
b : float, default: 6356.752314245
Minor semi-axis, in kilometers. Defaults to Earth's polar radius
Returns
-------
lat_spheric : float
Latitude of point in spherical coordinates.
lon : float
Longitue of point in spherical coordinates. Same as geodetic.
r : float
Radial distance of point in spherical coordinates.
"""
# Estimate Spheroid's Flatness and First Eccentricity
f = (a-b)/a # Flatness
e2 = f*(2.0-f) # First Eccentricity
# Transform geodetic coordinates into spherical geocentric coordinates
Rc = a/np.sqrt(1.0-e2*np.sin(lat)**2) # Radius of curvature of prime vertical
rho = (Rc+h)*np.cos(lat)
z = (Rc*(1-e2)+h)*np.sin(lat)
r = np.linalg.norm([rho, z]) # Radial distance
lat_spheric = np.arcsin(z/r) # Spherical latitude
return lat_spheric, lon, r
class WMM:
"""
World Magnetic Model
It is mainly used to compute all elements of the World Magnetic Model (WMM)
at any given point on Earth.
The main magnetic field :math:`B` is a potential field defined, in
geocentric spherical coordinates (longitude :math:`\\lambda`, latitude
:math:`\\phi '` and radius :math:`r`), as the negative spatial gradient of a
scalar potential at a time :math:`t`. This potential can be expanded in
terms of spherical harmonics:
.. math::
V(\\lambda, \\phi', r, t) = a\\sum_{n=1}^{N}\\Big(\\frac{a}{r}\\Big)^{n+1}\\sum_{m=0}^{n}f(n, m, \\lambda, t)P_n^m(\\phi')
where
.. math::
f(n, m, \\lambda, t) = g_n^m(t) \\cos(m\\lambda) + h_n^m(t) \\sin(m\\lambda)
and the Schmidt semi-normalized associated Legendre functions :math:`P_n^m(\\phi')`
are defined as:
.. math::
P_n^m(\\mu) = \\left\\{
\\begin{array}{ll}
\\sqrt{2\\frac{(n-m)!}{(n+m)!}}P_{n, m}(\\mu) & \\mathrm{if} \; m > 0 \\\\
P_{n, m}(\\mu) & \\mathrm{if} \; m = 0
\\end{array}
\\right.
Any object of this class is initialized with the corresponding epoch,
determined by the given date. If no date is given, it is assumed for the
day of the object's creation.
Once the WMM object is created, the estimation of the geomagnetic elements
is carried out with a call to the method `magnetic_field` giving the
location on Earth at which the magnetic elements will be calculated. This
location is given in decimal geodetic coordinates. See examples.
Every WMM object is created with a set of coefficients read from a COF
file, defined by the desired working date of the model. The latest
model available is WMM2020 corresponding to the lustrum 2020-2024.
This class can create models with dates between 2015 and 2024.
Parameters
----------
date : datetime.date, int or float, default: current day
Date of desired magnetic field estimation.
latitude : float, default: None
Latitude, in decimal degrees, in geodetic coordinates.
longitude : float, default: None
Longitude, in decimal degrees, in geodetic coordinates.
height : float, default: 0.0
Mean Sea Level Height, in kilometers.
Attributes
----------
date : datetime.date, default: datetime.date.today()
Desired date to estimate
date_dec : float
Desired date to estimate as decimal
epoch : float
Initial time of model in decimal years
model : str
WMM Model identificator
modeldate : str
Release date of WMM Model
wmm_filename : str
COF File used to build Model
degree : int
Degree of model
latitude : float
Latitude, in decimal degrees, in geodetic coordinates
longitude : float
Longitude in decimal degrees, in geodetic coordinates
height : float, default: 0.0
Mean Sea Level Height in kilometers
X : float, default: None
Northerly intensity, in nT
Y : float, default: None
Easterly intensity, in nT
Z : float, default: None
Vertical intensity, in nT
H : float, default: None
Horizontal intensity, in nT
F : float, default: None
Total intensity, in nT
I : float, default: None
Inclination angle (dip), in degrees
D : float, default: None
Declination angle, in degrees
GV : float, default: None
Grivation, in degrees
Examples
--------
The magnetic field can be computed at the creation of the WMM object by
passing the main parameters to its constructor:
>>> wmm = ahrs.utils.WMM(datetime.date(2017, 5, 12), latitude=10.0, longitude=-20.0, height=10.5)
>>> wmm.magnetic_elements
{'X': 30499.640469609083, 'Y': -5230.267158472566, 'Z': -1716.633311360368,
'H': 30944.850352270452, 'F': 30992.427998627096, 'I': -3.1751692563622993,
'D': -9.73078560629778, 'GV': -9.73078560629778}
"""
def __init__(self, date: Union[datetime.date, int, float] = None, latitude: float = None, longitude: float = None, height: float = 0.0) -> NoReturn:
self.reset_coefficients(date)
self.__dict__.update(dict.fromkeys(['X', 'Y', 'Z', 'H', 'F', 'I', 'D', 'GV']))
self.latitude = latitude
self.longitude = longitude
self.height = height
if all([self.latitude, self.longitude]):
self.magnetic_field(self.latitude, self.longitude, self.height, date=self.date)
def reset_coefficients(self, date: Union[datetime.date, int, float] = None) -> NoReturn:
"""
Reset Gauss coefficients to given date.
Given the date, the corresponding coefficients are updated. Basic
properties (epoch, release date, and model id) are read and updated in
the current instance.
The two coefficient tables (arrays) are also updated, where the
attribute `c` contains the Gaussian coefficients, while the attribute
`cd` contains the secular Gaussian coefficients.
The lenght of the Gaussian coefficient array determines the degree
:math:`n` of the model. This property updates the value of attribute
``degree``.
Parameters
----------
date : datetime.date, int or float, default: current day
Date of desired magnetic field estimation.
"""
self.reset_date(date)
self.__dict__.update(self.get_properties(self.wmm_filename))
self.load_coefficients(self.wmm_filename)
def load_coefficients(self, cof_file: str) -> NoReturn:
"""
Load model coefficients from COF file.
The model coefficients, also referred to as Gauss coefficients, are
listed in a COF file. These coefficients can be used to compute values
of the fields elements and their annual rates of change at any
location near the surface of the Earth.
The COF file has 6 columns:
* ``n`` is the degree.
* ``m`` is the order.
* ``g`` are time-dependent Gauss coefficients of degree ``n`` and order ``m``.
* ``h`` are time-dependent Gauss coefficients of degree ``n`` and order ``m``.
* ``gd`` are secular variations of coefficient ``g``.
* ``hd`` are secular variations of coefficient ``h``.
which constitute the *model* of the field. The first-order time
derivatives are called *secular terms*. The units are ``nT`` for the
main field, and ``nT/year`` for the secular variation.
The Gauss coefficients are defined for a time :math:`t` as:
.. math::
\\begin{eqnarray}
g_n^m(t) & = & g_n^m(t_0) + (t-t_0) \\dot{g}_n^m(t_0) \\\\
h_n^m(t) & = & h_n^m(t_0) + (t-t_0) \\dot{h}_n^m(t_0)
\\end{eqnarray}
where time is given in decimal years and :math:`t_0` corresponds to the
epoch read from the corresponding COF file.
Parameters
----------
cof_file : str
Path to COF file with the coefficients of the WMM
"""
file_data = pkgutil.get_data(__name__, self.wmm_filename).decode()
data = np.genfromtxt(StringIO(file_data), comments="999999", skip_header=1)
self.degree = int(max(data[:, 0]))
self.c = np.zeros((self.degree+1, self.degree+1))
self.cd = np.zeros((self.degree+1, self.degree+1))
for row in data:
n, m = row[:2].astype(int)
self.c[m, n] = row[2] # g_n^m
self.cd[m, n] = row[4] # g_n^m secular
if m != 0:
self.c[n, m-1] = row[3] # h_n^m
self.cd[n, m-1] = row[5] # h_n^m secular
def get_properties(self, cof_file: str) -> Dict[str, Union[str, float]]:
"""
Return dictionary of WMM properties from COF file.
Three properties are read and returned in a dictionary:
* ``epoch`` is the initial time :math:`t_0` as a `float`.
* ``model`` is a string of model used for the required lustrum.
* ``modeldate`` is the release date of used magnetic model.
Parameters
----------
cof_file : str
Path to COF file with the coefficients of the WMM
Returns
-------
properties : dictionary
Dictionary with the three WMM properties.
Examples
--------
>>> wmm = ahrs.WMM()
>>> wmm.get_properties('my_coefficients.COF')
{'model': 'WMM-2020', 'modeldate': '12/10/2019', 'epoch': 2020.0}
"""
if not cof_file.endswith(".COF"):
raise TypeError("File must have extension 'COF'")
first_line = pkgutil.get_data(__name__, self.wmm_filename).decode().split('\n')[0]
v = first_line.strip().split()
properties = dict(zip(["model", "modeldate"], v[1:]))
properties.update({"epoch": float(v[0])})
return properties
def reset_date(self, date: Union[datetime.date, int, float]) -> NoReturn:
"""
Set date to use with the model.
The WMM requires a date. This date can be given as an instance of
`datetime.date` or as a decimalized date of the format ``YYYY.d``.
If None is given it sets the date to the present day. In addition, the
corresponding COF file is also set.
Please note that only coefficents from year 2015 and later are provided
with this module.
Parameters
----------
date : datetime.date, int or float, default: current day
Date of desired magnetic field estimation.
"""
if date is None:
self.date = datetime.date.today()
self.date_dec = self.date.year + self.date.timetuple().tm_yday/365.0
if isinstance(date, (int, float)):
self.date_dec = float(date)
self.date = datetime.date.fromordinal(round(datetime.date(int(date), 1, 1).toordinal() + (self.date_dec-int(self.date_dec))*365))
if isinstance(date, datetime.date):
self.date = date
self.date_dec = self.date.year + self.date.timetuple().tm_yday/365.0
if self.date.year < 2015:
raise ValueError("No available coefficients for dates before 2015.")
self.wmm_filename = 'WMM2015/WMM.COF' if self.date_dec < 2020.0 else 'WMM2020/WMM.COF'
def denormalize_coefficients(self, latitude: float) -> NoReturn:
"""Recursively estimate associated Legendre polynomials and derivatives
done in a recursive way as described by Michael Plett in [Wertz]_ for
an efficient computation.
Given the Gaussian coefficients, it is possible to estimate the
magnetic field at any latitude on Earth for a certain date.
First, it is assumed that :math:`P_n^m(x)` are the Schmidt
semi-normalized functions. A normalization is made so that the
relative strength of terms of same degree :math:`n` but order :math:`m`
are used by comparing their respective Gauss coefficients.
For :math:`m=0` they are called *Legendre Polynomials* and can be
computed recursively with:
.. math::
P_n(x) = \\frac{2n-1}{n} x P_{n-1}(x) - \\frac{n-1}{n}P_{n-2}(x)
For :math:`m>0` they are known as *associated Legendre functions*
of degree :math:`n` and order :math:`m` and reduced to:
.. math::
P_{nm}(x) = (1-t^2)^{m/2} \\frac{d^m P_n(x)}{dt^m}
expressing the associated Legendre functions in terms of the Legendre
polynomials of same degree :math:`n`.
A more general formula to estimate both polynomial and associated
functions is given by:
.. math::
P_{nm}(x) = 2^{-n}(1-x^2)^{m/2} \\sum_{k=0}^{K}(-1)^k\\frac{(2n-2k)!}{k!(n-k)!(n-m-2k)!}x^{n-m-2k}
where :math:`K` is either :math:`(n-m)/2` or :math:`(n-m-1)/2`,
whichever is an integer. For a computational improvement, the terms are
calculated recursively.
We have to denormalize the coefficients from Schmidt to Gauss. The
Gauss functions :math:`P^{n, m}` are related to Schmidt functions
:math:`P_n^m` as:
.. math::
P_n^m = S_{n, m}P^{n, m}
where the factors :math:`S_{n, m}` are combined with Gaussian
coefficients to accelerate the computation, because they are
independent of the geographic location. Thus, we denormalize the
coefficients with:
.. math::
\\begin{array}{ll}
g^{n,m} & = S_{n,m} g_n^m \\\\
h^{n,m} & = S_{n,m} h_n^m
\\end{array}
The recursion for :math:`S_{n, m}` is:
.. math::
\\begin{array}{rlr}
S_{0,0} & = 1 & \\\\
S_{n,0} & = S_{n-1, 0} \\frac{2n-1}{n} & n\\geq 1 \\\\
S_{n,m} & = S_{n-1, m}\\sqrt{\\frac{(n-m+1)(\\delta _m^1+1)}{n+m}} & m\\geq 1
\\end{array}
where the Kronecker delta :math:`\\delta_j^i` is:
.. math::
\\delta_j^i = \\left\\{
\\begin{array}{ll}
1 & \\: i = j \\\\
0 & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Similarly, :math:`P^{n, m}(x)` can be recursively obtained:
.. math::
\\begin{array}{ll}
P^{0,0} & = 1 \\\\
P^{n,n} & = \\sin (x) P^{n-1, n-1} \\\\
P^{n,m} & = \\cos (x) P^{n-1, m} - K^{n, m} P^{n-2, m}
\\end{array}
where:
.. math::
K^{n, m} = \\left\\{
\\begin{array}{ll}
\\frac{(n-1)^2-m^2}{(2n-1)(2n-3)} & \\: n>1 \\\\
0 & \\: n=1
\\end{array}
\\right.
Likewise, the gradient :math:`\\frac{dP^{n, m}}{dx}` is estimated as:
.. math::
\\begin{array}{llr}
\\frac{dP^{0, 0}}{dx} & = 1 & \\\\
\\frac{dP^{n, n}}{dx} & = \\sin (x) \\frac{dP^{n-1, n-1}}{dx} + \\cos (x) P^{n-1, n-1} & n\\geq 1 \\\\
\\frac{dP^{n, m}}{dx} & = \\cos (x) \\frac{dP^{n-1, m}}{dx} - \\sin (x) P^{n-1, m} - K^{n, m} \\frac{dP^{n-2, m}}{dx} &
\\end{array}
Parameters
----------
latitude : float
Latitude in spherical geocentric coordinates
"""
cos_lat = np.cos(latitude) # cos(phi')
sin_lat = np.sin(latitude) # sin(phi')
S = np.identity(self.degree+1) # Scale factors
self.k = np.zeros((self.degree+1, self.degree+1))
self.P = np.identity(self.degree+2)
self.dP = np.zeros((self.degree+2, self.degree+1))
for n in range(1, self.degree+1):
S[0, n] = S[0, n-1] * (2*n-1)/n
delta = 1 # Kronecker delta
for m in range(n+1):
self.k[m, n] = ((n-1)**2 - m**2) / ((2*n-1)*(2*n-3))
if m>0:
S[m, n] = S[m-1, n] * np.sqrt((n-m+1)*(delta+1)/(n+m))
self.c[n, m-1] *= S[m, n]
self.cd[n, m-1] *= S[m, n]
delta = 0
if n==m:
self.P[m, n] = cos_lat*self.P[m-1, n-1]
self.dP[m, n] = cos_lat*self.dP[m-1, n-1] + sin_lat*self.P[m-1, n-1]
else:
self.P[m, n] = sin_lat*self.P[m, n-1] - self.k[m, n]*self.P[m, n-2]
self.dP[m, n] = sin_lat*self.dP[m, n-1] - cos_lat*self.P[m, n-1] - self.k[m, n]*self.dP[m, n-2]
self.c[m, n] *= S[m, n]
self.cd[m, n] *= S[m, n]
def magnetic_field(self, latitude: float, longitude: float, height: float = 0.0, date: Union[datetime.date, int, float] = datetime.date.today()) -> NoReturn:
"""
Calculate the geomagnetic field elements for a location on Earth.
The code includes comments with references to equation numbers
corresponding to the ones in the official report.
Having the coefficients :math:`g^{n, m}` and :math:`h^{n, m}`, we
extrapolate them for the desired time :math:`t` as:
.. math::
\\begin{array}{ll}
g_n^m(t) & = g_n^m(t_0) + \\Delta_t \\dot{g}_n^m (t_0) \\\\
h_n^m(t) & = h_n^m(t_0) + \\Delta_t \\dot{h}_n^m (t_0)
\\end{array}
where :math:`\\Delta_t = t-t_0` is the difference between the time
:math:`t` and the reference epoch model :math:`t_0` (``2020.0`` for the
newest version.)
The vector components of the main magnetic field :math:`B` are then
calculated with:
.. math::
\\begin{array}{ll}
X' & = -\\sum_{n=1}^N\\Big(\\frac{a}{r}\\Big)^{n+2} \\sum_{m=0}^n\\big(g_n^m(t) \\cos(m\\lambda)+h_n^m(t)\\sin(m\\lambda)\\big) \\frac{dP_n^m(\\sin \\phi ')}{d\\phi '} \\\\
Y' & = \\frac{1}{\\cos\\phi '}\\sum_{n=1}^N\\Big(\\frac{a}{r}\\Big)^{n+2} \\sum_{m=0}^n m\\big(g_n^m(t) \\sin(m\\lambda)-h_n^m(t)\\cos(m\\lambda)\\big)P_n^m(\\sin \\phi ') \\\\
Z' & = -\\sum_{n=1}^N(n+1)\\Big(\\frac{a}{r}\\Big)^{n+2} \\sum_{m=0}^n\\big(g_n^m(t) \\cos(m\\lambda)+h_n^m(t)\\sin(m\\lambda)\\big)P_n^m(\\sin \\phi ')
\\end{array}
Finally, the geomagnetic vector components are rotated into ellipsoidal
reference frame.
.. math::
\\begin{array}{ll}
X & = X'\\cos(\\phi ' - \\phi) - Z' \\sin(\\phi ' - \\phi) \\\\
Y & = Y' \\\\
Z & = X'\\sin(\\phi ' - \\phi) + Z' \\cos(\\phi ' - \\phi)
\\end{array}
These components are used to compute the rest of the magnetic elements:
.. math::
\\begin{array}{ll}
H & = \\sqrt{X^2 + Y^2} \\\\
F & = \\sqrt{H^2 + Z^2} \\\\
I & = \\arctan(\\frac{Z}{H}) \\\\
D & = \\arctan(\\frac{Y}{X})
\\end{array}
.. note::
The use of ``arctan2`` yields a more precise result than ``arctan``,
because it estimates the angle exploring all quadrants.
For polar regions, where the declination changes drastically, the WMM
defines two different grivations (one for each pole) defined as:
.. math::
GV = \\left\\{
\\begin{array}{ll}
D-\\lambda & \\: \\phi > 55 ° \\\\
D+\\lambda & \\: \\phi < -55 °
\\end{array}
\\right.
Parameters
----------
latitude : float
Latitude, in decimal degrees, in geodetic coordinates
longitude : float
Longitude in decimal degrees, in geodetic coordinates
height : float, default: 0.0
Mean Sea Level Height in kilometers
date : datetime.date, int or float, default: datetime.date.today()
Desired date to estimate
"""
if date is not None:
self.reset_coefficients(date)
self.latitude = latitude
self.longitude = longitude
self.height = height
latitude *= DEG2RAD
longitude *= DEG2RAD
# Transform geodetic coordinates into spherical geocentric coordinates
lat_prime, _, r = geodetic2spherical(latitude, longitude, self.height)
# Compute cos(m*phi') and sin(m*phi') for all m values
self.sp = np.zeros(self.degree+1) # sin(m*phi')
self.cp = np.ones(self.degree+2) # cos(m*phi')
self.sp[1] = np.sin(longitude)
self.cp[1] = np.cos(longitude)
for m in range(2, self.degree+1):
self.sp[m] = self.sp[1]*self.cp[m-1] + self.cp[1]*self.sp[m-1]
self.cp[m] = self.cp[1]*self.cp[m-1] - self.sp[1]*self.sp[m-1]
dt = round(self.date_dec, 1) - self.epoch # t - t_0
self.gh = np.zeros((self.degree+2, self.degree+1))
self.denormalize_coefficients(lat_prime)
cos_lat = np.cos(lat_prime) # cos(phi')
sin_lat = np.sin(lat_prime) # sin(phi')
Zp = Xp = Yp = Bp = 0.0
a = EARTH_MEAN_RADIUS/1000.0 # Mean earth radius in km
ar = a/r
# Spherical Harmonics (eq. 4)
for n in range(1, self.degree+1): #### !! According to report it must be equal to defined degree (12 not 13)
arn2 = ar**(n+2)
x_p = y_p = z_p = 0.0
for m in range(n+1): #### !! According to report it must be equal to n
self.gh[m, n] = self.c[m, n] + dt*self.cd[m, n] # g_n^m (eq. 9)
# Terms of spherical harmonic expansions
gchs = self.gh[m, n]*self.cp[m] # g(t)cos(ml)
gshc = self.gh[m, n]*self.sp[m] # g(t)sin(ml)
if m>0:
self.gh[n, m-1] = self.c[n, m-1] + dt*self.cd[n, m-1] # h_n^m (eq. 9)
gchs += self.gh[n, m-1]*self.sp[m] # g(t)cos(ml) + h(t)sin(ml)
gshc -= self.gh[n, m-1]*self.cp[m] # g(t)sin(ml) - h(t)cos(ml)
x_p += gchs * self.dP[m, n]
y_p += m * gshc * self.P[m, n]
z_p += gchs * self.P[m, n]
# SPECIAL CASE: NORTH/SOUTH GEOGRAPHIC POLES
if (cos_lat==0.0 and m==1):
Bp += arn2 * gshc
if n>1:
Bp *= sin_lat - self.k[m, n]
Xp += arn2 * x_p # (eq. 10) #### !! According to report must be a substraction. Must re-check this
Yp += arn2 * y_p # (eq. 11)
Zp -= (n+1) * arn2 * z_p # (eq. 12)
Yp = Bp if cos_lat==0.0 else Yp/cos_lat
# Transform magnetic vector components to geodetic coordinates (eq. 17)
self.X = Xp*np.cos(lat_prime-latitude) - Zp*np.sin(lat_prime-latitude)
self.Y = Yp
self.Z = Xp*np.sin(lat_prime-latitude) + Zp*np.cos(lat_prime-latitude)
# Total Intensity, Inclination and Declination (eq. 19)
self.H = np.linalg.norm([self.X, self.Y]) # sqrt(X^2+Y^2)
self.F = np.linalg.norm([self.H, self.Z]) # sqrt(H^2+Z^2)
self.I = RAD2DEG*np.arctan2(self.Z, self.H)
self.D = RAD2DEG*np.arctan2(self.Y, self.X)
# Grivation (eq. 1)
self.GV = self.D.copy()
if self.latitude>55.0:
self.GV -= self.longitude
if self.latitude<-55.0:
self.GV += self.longitude
@property
def magnetic_elements(self) -> Dict[str, float]:
"""Main geomagnetic elements in a dictionary
+---------+-----------------------------------------------+
| Element | Definition |
+=========+===============================================+
| X | Northerly intensity |
+---------+-----------------------------------------------+
| Y | Easterly intensity |
+---------+-----------------------------------------------+
| Z | Vertical intensity (Positive downwards) |
+---------+-----------------------------------------------+
| H | Horizontal intensity |
+---------+-----------------------------------------------+
| F | Total intensity |
+---------+-----------------------------------------------+
| I | Inclination angle (a.k.a. dip angle) |
+---------+-----------------------------------------------+
| D | Declination angle (a.k.a. magnetic variation) |
+---------+-----------------------------------------------+
| GV | Grivation |
+---------+-----------------------------------------------+
Example
-------
>>> wmm = WMM(datetime.date(2017, 5, 12), latitude=10.0, longitude=-20.0, height=10.5)
>>> wmm.magnetic_elements
{'X': 30499.640469609083, 'Y': -5230.267158472566, 'Z': -1716.633311360368,
'H': 30944.850352270452, 'F': 30992.427998627096, 'I': -3.1751692563622993,
'D': -9.73078560629778, 'GV': -9.73078560629778}
"""
return {k: self.__dict__[k] for k in ['X', 'Y', 'Z', 'H', 'F', 'I', 'D', 'GV']}
class GeoMagTest(unittest.TestCase):
"""Test Magnetic Field estimation with provided values
WMM 2015 uses a CSV test file with values split with semicolons, whereas
the WMM 2020 uses a TXT file with values split with spaces. The position of
their values is different. The following table shows their differences:
+-------+-------------------+-------------------+
| Index | CSV File (WM2015) | TXT File (WM2020) |
+=======+===================+===================+
| 0 | date | date |
+-------+-------------------+-------------------+
| 1 | height (km) | height (km) |
+-------+-------------------+-------------------+
| 2 | latitude (deg) | latitude (deg) |
+-------+-------------------+-------------------+
| 3 | longitude (deg) | longitude (deg) |
+-------+-------------------+-------------------+
| 4 | X (nT) | D (deg) |
+-------+-------------------+-------------------+
| 5 | Y (nT) | I (deg) |
+-------+-------------------+-------------------+
| 6 | Z (nT) | H (nT) |
+-------+-------------------+-------------------+
| 7 | H (nT) | X (nT) |
+-------+-------------------+-------------------+
| 8 | F (nT) | Y (nT) |
+-------+-------------------+-------------------+
| 9 | I (deg) | Z (nT) |
+-------+-------------------+-------------------+
| 10 | D (deg) | F (nT) |
+-------+-------------------+-------------------+
| 11 | GV (deg) | dD/dt (deg/year) |
+-------+-------------------+-------------------+
| 12 | Xdot (nT/yr) | dI/dt (deg/year) |
+-------+-------------------+-------------------+
| 13 | Ydot (nT/yr) | dH/dt (nT/year) |
+-------+-------------------+-------------------+
| 14 | Zdot (nT/yr) | dX/dt (nT/year) |
+-------+-------------------+-------------------+
| 15 | Hdot (nT/yr) | dY/dt (nT/year) |
+-------+-------------------+-------------------+
| 16 | Fdot (nT/yr) | dZ/dt (nT/year) |
+-------+-------------------+-------------------+
| 17 | dI/dt (deg/year) | dF/dt (nT/year) |
+-------+-------------------+-------------------+
| 18 | dD/dt (deg/year) | |
+-------+-------------------+-------------------+
Besides using a different order, the newest format prescinds from grid
variation (GV)
"""
def _load_test_values(self, filename: str) -> np.ndarray:
"""Load test values from file.
Parameters
----------
filename : str
Path to file with test values.
Returns
-------
data : ndarray
NumPy array with the test values.
"""
if filename.endswith('.csv'):
data = np.genfromtxt(filename, delimiter=';', skip_header=1)
if data.shape[1]<19:
raise ValueError("File has incomplete data")
keys = ["date", "height", "latitude", "longitude", "X", "Y", "Z", "H", "F", "I", "D", "GV",
"dX", "dY", "dZ", "dH", "dF", "dI", "dD"]
return dict(zip(keys, data.T))
if filename.endswith('.txt'):
data = np.genfromtxt(filename, skip_header=1, comments='#')
if data.shape[1]<18:
raise ValueError("File has incomplete data")
keys = ["date", "height", "latitude", "longitude", "D", "I", "H", "X", "Y", "Z", "F",
"dD", "dI", "dH", "dX", "dY", "dZ", "dF"]
return dict(zip(keys, data.T))
raise ValueError("File type is not supported. Try a csv or txt File.")
def test_wmm2015(self):
"""Test WMM 2015"""
wmm = WMM()
test_values = self._load_test_values("./WMM2015/WMM2015_test_values.csv")
num_tests = len(test_values['date'])
for i in range(num_tests):
wmm.magnetic_field(test_values['latitude'][i], test_values['longitude'][i], test_values['height'][i], date=test_values['date'][i])
self.assertAlmostEqual(test_values['X'][i], wmm.X, 1, 'Expected {:.1f}, result {:.1f}'.format(test_values['X'][i], wmm.X))
self.assertAlmostEqual(test_values['Y'][i], wmm.Y, 1, 'Expected {:.1f}, result {:.1f}'.format(test_values['Y'][i], wmm.Y))
self.assertAlmostEqual(test_values['Z'][i], wmm.Z, 1, 'Expected {:.1f}, result {:.1f}'.format(test_values['Z'][i], wmm.Z))
self.assertAlmostEqual(test_values['I'][i], wmm.I, 2, 'Expected {:.2f}, result {:.2f}'.format(test_values['I'][i], wmm.I))
self.assertAlmostEqual(test_values['D'][i], wmm.D, 2, 'Expected {:.2f}, result {:.2f}'.format(test_values['D'][i], wmm.D))
self.assertAlmostEqual(test_values['GV'][i], wmm.GV, 2, 'Expected {:.2f}, result {:.2f}'.format(test_values['GV'][i], wmm.GV))
del wmm
def test_wmm2020(self):
"""Test WMM 2020"""
wmm = WMM()
test_values = self._load_test_values("./WMM2020/WMM2020_TEST_VALUES.txt")
num_tests = len(test_values['date'])
for i in range(num_tests):
wmm.magnetic_field(test_values['latitude'][i], test_values['longitude'][i], test_values['height'][i], date=test_values['date'][i])
self.assertAlmostEqual(test_values['X'][i], wmm.X, 1, 'Expected {:.1f}, result {:.1f}'.format(test_values['X'][i], wmm.X))
self.assertAlmostEqual(test_values['Y'][i], wmm.Y, 1, 'Expected {:.1f}, result {:.1f}'.format(test_values['Y'][i], wmm.Y))
self.assertAlmostEqual(test_values['Z'][i], wmm.Z, 1, 'Expected {:.1f}, result {:.1f}'.format(test_values['Z'][i], wmm.Z))
self.assertAlmostEqual(test_values['I'][i], wmm.I, 2, 'Expected {:.2f}, result {:.2f}'.format(test_values['I'][i], wmm.I))
self.assertAlmostEqual(test_values['D'][i], wmm.D, 2, 'Expected {:.2f}, result {:.2f}'.format(test_values['D'][i], wmm.D))
del wmm
if __name__ == '__main__':
unittest.main() | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/utils/wmm.py | wmm.py |
import numpy as np
from ..common.orientation import logR
def euclidean(x: np.ndarray, y: np.ndarray, **kwargs) -> float:
"""
Euclidean distance between two arrays as described in [Huynh]_:
.. math::
d(\\mathbf{x}, \\mathbf{y}) = \\sqrt{(x_0-y_0)^2 + \\dots + (x_n-y_n)^2}
Accepts the same parameters as the function ``numpy.linalg.norm()``.
This metric gives values in the range [0, :math:`\\pi\\sqrt{3}`]
Parameters
----------
x : array
M-by-N array to compare. Usually a reference array.
y : array
M-by-N array to compare.
mode : str
Mode of distance computation.
Return
------
d : float
Distance or difference between arrays.
Examples
--------
>>> import numpy as np
>>> from ahrs.utils.metrics import euclidean
>>> num_samples = 5
>>> angles = np.random.uniform(low=-180.0, high=180.0, size=(num_samples, 3))
>>> noisy = angles + np.random.randn(num_samples, 3)
>>> euclidean(angles, noisy)
2.585672169476804
>>> euclidean(angles, noisy, axis=0)
array([1.36319772, 1.78554071, 1.28032688])
>>> euclidean(angles, noisy, axis=1) # distance per sample
array([0.88956871, 1.19727356, 1.5243858 , 0.68765523, 1.29007067])
"""
return np.linalg.norm(x-y, **kwargs)
def chordal(R1: np.ndarray, R2: np.ndarray) -> float:
"""
Chordal Distance
The chordal distance between two rotations :math:`\\mathbf{R}_1` and
:math:`\\mathbf{R}_2` in SO(3) is the Euclidean distance between them in
the embedding space :math:`\\mathbb{R}^{3\\times 3}=\\mathbb{R}^9`
[Hartley]_:
.. math::
d(\\mathbf{R}_1, \\mathbf{R}_2) = \\|\\mathbf{R}_1-\\mathbf{R}_2\\|_F
where :math:`\\|\\mathbf{X}\\|_F` represents the Frobenius norm of the
matrix :math:`\\mathbf{X}`.
Parameters
----------
R1 : numpy.ndarray
3-by-3 rotation matrix.
R2 : numpy.ndarray
3-by-3 rotation matrix.
Returns
-------
d : float
Chordal distance between matrices.
"""
return np.linalg.norm(R1-R2, 'fro')
def identity_deviation(R1: np.ndarray, R2: np.ndarray) -> float:
"""
Deviation from Identity Matrix as defined in [Huynh]_:
.. math::
d(\\mathbf{R}_1, \\mathbf{R}_2) = \\|\\mathbf{I}-\\mathbf{R}_1\\mathbf{R}_2^T\\|_F
where :math:`\\|\\mathbf{X}\\|_F` represents the Frobenius norm of the
matrix :math:`\\mathbf{X}`.
The error lies within: [0, :math:`2\\sqrt{2}`]
Parameters
----------
R1 : numpy.ndarray
3-by-3 rotation matrix.
R2 : numpy.ndarray
3-by-3 rotation matrix.
Returns
-------
d : float
Deviation from identity matrix.
"""
return np.linalg.norm(np.eye(3)[email protected], 'fro')
def angular_distance(R1: np.ndarray, R2: np.ndarray) -> float:
"""
Angular distance between two rotations :math:`\\mathbf{R}_1` and
:math:`\\mathbf{R}_2` in SO(3), as defined in [Hartley]_:
.. math::
d(\\mathbf{R}_1, \\mathbf{R}_2) = \\|\\log(\\mathbf{R}_1\\mathbf{R}_2^T)\\|
where :math:`\\|\\mathbf{x}\\|` represents the usual euclidean norm of the
vector :math:`\\mathbf{x}`.
Parameters
----------
R1 : numpy.ndarray
3-by-3 rotation matrix.
R2 : numpy.ndarray
3-by-3 rotation matrix.
Returns
-------
d : float
Angular distance between rotation matrices
"""
if R1.shape!=R2.shape:
raise ValueError("Cannot compare R1 of shape {} and R2 of shape {}".format(R1.shape, R2.shape))
return np.linalg.norm(logR([email protected]))
def qdist(q1: np.ndarray, q2: np.ndarray) -> float:
"""
Euclidean distance between two unit quaternions as defined in [Huynh]_ and
[Hartley]_:
.. math::
d(\\mathbf{q}_1, \\mathbf{q}_2) = \\mathrm{min} \\{ \\|\\mathbf{q}_1-\\mathbf{q}_2\\|, \\|\\mathbf{q}_1-\\mathbf{q}_2\\|\\}
The error lies within [0, :math:`\\sqrt{2}`]
Parameters
----------
q1 : numpy.ndarray
First quaternion, or set of quaternions, to compare.
q2 : numpy.ndarray
Second quaternion, or set of quaternions, to compare.
Returns
-------
d : float
Euclidean distance between given unit quaternions
"""
q1 = np.copy(q1)
q2 = np.copy(q2)
if q1.shape!=q2.shape:
raise ValueError("Cannot compare q1 of shape {} and q2 of shape {}".format(q1.shape, q2.shape))
if q1.ndim==1:
q1 /= np.linalg.norm(q1)
q2 /= np.linalg.norm(q2)
if np.allclose(q1, q2) or np.allclose(-q1, q2):
return 0.0
return min(np.linalg.norm(q1-q2), np.linalg.norm(q1+q2))
q1 /= np.linalg.norm(q1, axis=1)[:, None]
q2 /= np.linalg.norm(q2, axis=1)[:, None]
return np.r_[[np.linalg.norm(q1-q2, axis=1)], [np.linalg.norm(q1+q2, axis=1)]].min(axis=0)
def qeip(q1: np.ndarray, q2: np.ndarray) -> float:
"""
Euclidean distance of inner products as defined in [Huynh]_ and [Kuffner]_:
.. math::
d(\\mathbf{q}_1, \\mathbf{q}_2) = 1 - |\\mathbf{q}_1\\cdot\\mathbf{q}_2|
The error lies within: [0, 1]
Parameters
----------
q1 : numpy.ndarray
First quaternion, or set of quaternions, to compare.
q2 : numpy.ndarray
Second quaternion, or set of quaternions, to compare.
Returns
-------
d : float
Euclidean distance of inner products between given unit quaternions.
"""
q1 = np.copy(q1)
q2 = np.copy(q2)
if q1.shape!=q2.shape:
raise ValueError("Cannot compare q1 of shape {} and q2 of shape {}".format(q1.shape, q2.shape))
if q1.ndim==1:
q1 /= np.linalg.norm(q1)
q2 /= np.linalg.norm(q2)
if np.allclose(q1, q2) or np.allclose(-q1, q2):
return 0.0
return 1.0-abs(q1@q2)
q1 /= np.linalg.norm(q1, axis=1)[:, None]
q2 /= np.linalg.norm(q2, axis=1)[:, None]
return 1.0-abs(np.nansum(q1*q2, axis=1))
def qcip(q1: np.ndarray, q2: np.ndarray) -> float:
"""
Cosine of inner products as defined in [Huynh]_:
.. math::
d(\\mathbf{q}_1, \\mathbf{q}_2) = \\arccos(|\\mathbf{q}_1\\cdot\\mathbf{q}_2|)
The error lies within: [0, :math:`\\frac{\\pi}{2}`]
Parameters
----------
q1 : numpy.ndarray
First quaternion, or set of quaternions, to compare.
q2 : numpy.ndarray
Second quaternion, or set of quaternions, to compare.
Returns
-------
d : float
Cosine of inner products of quaternions.
"""
q1 = np.copy(q1)
q2 = np.copy(q2)
if q1.shape!=q2.shape:
raise ValueError("Cannot compare q1 of shape {} and q2 of shape {}".format(q1.shape, q2.shape))
if q1.ndim==1:
q1 /= np.linalg.norm(q1)
q2 /= np.linalg.norm(q2)
if np.allclose(q1, q2) or np.allclose(-q1, q2):
return 0.0
return np.arccos(abs(q1@q2))
q1 /= np.linalg.norm(q1, axis=1)[:, None]
q2 /= np.linalg.norm(q2, axis=1)[:, None]
return np.arccos(abs(np.nansum(q1*q2, axis=1)))
def qad(q1: np.ndarray, q2: np.ndarray) -> float:
"""
Quaternion Angle Difference
Parameters
----------
q1 : numpy.ndarray
First quaternion, or set of quaternions, to compare.
q2 : numpy.ndarray
Second quaternion, or set of quaternions, to compare.
The error lies within: [0, :math:`\\frac{\\pi}{2}`]
Returns
-------
d : float
Angle difference between given unit quaternions.
"""
q1 = np.copy(q1)
q2 = np.copy(q2)
if q1.shape!=q2.shape:
raise ValueError("Cannot compare q1 of shape {} and q2 of shape {}".format(q1.shape, q2.shape))
if q1.ndim==1:
q1 /= np.linalg.norm(q1)
q2 /= np.linalg.norm(q2)
if np.allclose(q1, q2) or np.allclose(-q1, q2):
return 0.0
return np.arccos(2.0*(q1@q2)**2-1.0)
q1 /= np.linalg.norm(q1, axis=1)[:, None]
q2 /= np.linalg.norm(q2, axis=1)[:, None]
return np.arccos(2.0*np.nansum(q1*q2, axis=1)**2-1.0) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/utils/metrics.py | metrics.py |
import numpy as np
import matplotlib.pyplot as plt
def _hex_to_int(color):
"""Convert hex value to tuple of type int with values between 0 and 255
"""
a = color.lstrip('#')
return tuple(int(a[i:i+2], 16) for i in (0, 2, 4, 6))
def _hex_to_float(color):
"""Convert hex value to tuple of type float with values between 0.0 and 1.0
"""
a = color.lstrip('#')
return tuple(int(a[i:i+2], 16)/255.0 for i in (0, 2, 4, 6))
COLORS = [
"#FF0000FF", "#00AA00FF", "#0000FFFF", "#999933FF",
"#FF8888FF", "#88AA88FF", "#8888FFFF", "#999955FF",
"#660000FF", "#005500FF", "#000088FF", "#666600FF"]
COLORS_INTS = [_hex_to_int(c) for c in COLORS]
COLORS_FLOATS = [_hex_to_float(c) for c in COLORS]
def plot(*data, **kw):
"""
Plot data with custom formatting.
Given data is plotted in time domain. It locks any current process until
plotting window is closed.
Parameters
----------
data : array
Arrays with the contents of data to plot.
Extra Parameters
----------------
title : int or str
Window title setting figure number or label.
subtitles : list
List of strings of the titles of each subplot.
labels : list
List of labels that will be displayed in each subplot's legend.
xlabels : list
List of strings of the labels of each subplot's X-axis.
ylabels : list
List of strings of the labels of each subplot's Y-axis.
yscales : str
List of strings of the scales of each subplot's Y-axis. It supports
matlabs defaults values: "linear", "log", "symlog" and "logit"
Examples
--------
>>> from ahrs.utils import plot
>>> data = np.array([2., 3., 4., 5.])
>>> plot(data)
>>> data_2 = np.array([4., 5., 6., 7.])
>>> plot(data, data_2)
>>> plot(data, data_2, subtitles=["data", "data 2"])
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
title = kw.get("title")
subtitles = kw.get("subtitles")
labels = kw.get("labels")
xlabels = kw.get("xlabels")
ylabels = kw.get("ylabels")
yscales = kw.get("yscales")
num_subplots = len(data)
fig, axs = plt.subplots(num_subplots, 1, num=title, squeeze=False)
for i, d in enumerate(data):
d = np.array(d)
if d.ndim < 2:
label = labels[i][0] if labels else None
axs[i, 0].plot(d, color=COLORS[0], lw=0.5, ls='-', label=label) # Plot a single red line in subplot
else:
d_sz = d.shape
if d_sz[0] > d_sz[1]:
d = d.T
for j, row in enumerate(d):
label = None
if labels:
if len(labels[i]) == len(d):
label = labels[i][j]
axs[i, 0].plot(row, color=COLORS[j], lw=0.5, ls='-', label=label)
if subtitles:
axs[i, 0].set_title(subtitles[i])
if xlabels:
axs[i, 0].set_xlabel(xlabels[i])
if ylabels:
axs[i, 0].set_ylabel(ylabels[i])
if yscales:
axs[i, 0].set_yscale(yscales[i])
if labels:
if len(labels[i]) > 0:
axs[i, 0].legend(loc='lower right')
fig.tight_layout()
plt.show()
def plot_sensors(*sensors, **kwargs):
"""
Plot data of sensor arrays.
Opens a window and plots each sensor array in a different row. The window
builds a subplot for each sensor array.
Parameters
----------
sensors : arrays
Arrays of sensors to plot. Each array is of size M-by-N, where M is the
number of samples, and N is the number of axes.
num_axes : int, optional
Number of axes per sensor. Default is 3.
x_axis : array, optional
X-axis data array of the plots. Default is `range(M)`.
title : str, optional
Title of window. Default is 'Sensors'
subtitles : list of strings, optional
List of titles for each subplot.
Examples
--------
>>> data = ahrs.utils.io.load("data.mat")
>>> ahrs.utils.plot_sensors(data.gyrs) # Plot Gyroscopes
Each call will open a new window with the requested plots and pause any
further computation, until the window is closed.
>>> ahrs.utils.plot_sensors(gyrs, accs) # Plot Gyroscopes and Accelerometers in same window
>>> time = data['time']
>>> ahrs.utils.plot_sensors(data.gyr, data.acc, data.mag, x_axis=data.time, title="Sensors")
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
num_axes = kwargs.get('num_axes', 3)
title = kwargs.get('title', "Sensors")
subtitles = kwargs.get('subtitles', None)
fig = plt.figure(title)
for n, s in enumerate(sensors):
fig.add_subplot(len(sensors), 1, n+1)
if subtitles:
plt.subplot(len(sensors), 1, n+1, title=subtitles[n])
x_axis = kwargs.get('x_axis', range(s.shape[0]))
if s.ndim < 2:
plt.plot(s, 'k-', lw=0.3)
else:
for i in range(num_axes):
plt.plot(x_axis, s[:, i], c=COLORS[i+1], ls='-', lw=0.3)
plt.show()
def plot_euler(*angles, **kwargs):
"""
Plot Euler Angles.
Opens a window and plots the three Euler Angles in a centered plot.
Parameters
----------
angles : arrays
Array of Euler Angles to plot. Each array is of size M-by-3.
x_axis : array
Optional. X-axis data array of the plot. Default is `range(M)`.
title : str
Optional. Title of window. Default is 'Euler Angles'.
Examples
--------
>>> data = ahrs.utils.io.load("data.mat")
>>> ahrs.utils.plot_euler(data.euler_angles)
Each call will open a new window with the requested plots and pause any
further computation, until the window is closed.
>>> time = data['time']
>>> ahrs.utils.plot_euler(data.euler_angles, x_axis=data.time, title="My Angles")
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
# x_axis = kwargs.get('x_axis', range(sz[0]))
title = kwargs.get('title', "Euler Angles")
subtitles = kwargs.get('subtitles', None)
fig = plt.figure(title)
for n, a in enumerate(angles):
fig.add_subplot(len(angles), 1, n+1)
if subtitles:
plt.subplot(len(angles), 1, n+1, title=subtitles[n])
x_axis = kwargs.get('x_axis', range(a.shape[0]))
for i in range(3):
plt.plot(x_axis, a[:, i], c=COLORS[i+1], ls='-', lw=0.3)
plt.show()
def plot_quaternions(*quaternions, **kwargs):
"""
Plot Quaternions.
Opens a window and plots the Quaternions in a centered plot.
Parameters
----------
sensors : arrays
Array of Quaternions to plot. Each array is of size M-by-4.
x_axis : array
Optional. X-axis data array of the plot. Default is `range(M)`.
title : str
Optional. Title of window. Default is 'Quaternions'.
Examples
--------
>>> data = ahrs.utils.io.load("data.mat")
>>> ahrs.utils.plot_quaternions(data.qts)
Each call will open a new window with the requested plots and pause any
further computation, until the window is closed.
>>> time = data['time']
>>> ahrs.utils.plot_quaternions(data.qts, x_axis=time, title="My Quaternions")
Two or more quaternions can also be plotted, like in the sensor plotting
function.
>>> ahrs.utils.plot_quaternions(data.qts, ref_quaternions)
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
title = kwargs.get('title', "Quaternions")
subtitles = kwargs.get('subtitles', None)
fig = plt.figure(title)
for n, q in enumerate(quaternions):
fig.add_subplot(len(quaternions), 1, n+1)
if subtitles:
plt.subplot(len(quaternions), 1, n+1, title=subtitles[n])
x_axis = kwargs.get('x_axis', range(q.shape[0]))
for i in range(4):
plt.plot(x_axis, q[:, i], c=COLORS[i], ls='-', lw=0.3)
plt.show() | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/utils/plot.py | plot.py |
import unittest
import numpy as np
from ..common.constants import *
def international_gravity(lat: float, epoch: str = '1980') -> float:
"""
International Gravity Formula
Estimate the normal gravity, :math:`g`, using the International Gravity
Formula [Lambert]_, adapted from Stokes' formula, and adopted by the `International Association
of Geodesy <https://www.iag-aig.org/>`_ at its Stockholm Assembly in 1930.
The expression for gravity on a spheroid, which combines gravitational
attraction and centrifugal acceleration, at a certain latitude, :math:`\\phi`,
can be written in the form of a series:
.. math::
g = g_e\\big(1 + \\beta\\sin^2(\\phi) - \\beta_1\\sin^2(2\\phi) - \\beta_2\\sin^2(\\phi)\\sin^2(2\\phi) - \\beta_3\\sin^4(\\phi)\\sin^2(2\\phi) - \\dots\\big)
where the values of the :math:`\\beta`'s are:
.. math::
\\begin{array}{ll}
\\beta &= \\frac{5}{2}m\\Big(1-\\frac{17}{35}f - \\frac{1}{245}f^2 - \\frac{13}{18865}f^3 - \\dots\\Big) - f \\\\
\\beta_1 &= \\frac{1}{8}f(f+2\\beta) \\\\
\\beta_2 &= \\frac{1}{8}f^2(2f+3\\beta) - \\frac{1}{32}f^3(3f+4\\beta) \\\\
& \\vdots \\\\
& \\mathrm{etc.}
\\end{array}
    and :math:`g_e` is the measured normal gravity at the Equator. For the
case of the International Ellipsoid, the third-order terms are negligible.
So, in practice, the term :math:`\\beta_2` and all following terms are
dropped to yield the form:
.. math::
g = g_e \\big(1 + \\beta \\sin^2\\phi - \\beta_1 \\sin^2(2\\phi)\\big)
    In the original definition the values of :math:`\\beta` and :math:`\\beta_1`
    are rounded off to seven decimal places to obtain the working formula:
.. math::
g = 9.78049 \\big(1 + 0.0052884 \\sin^2\\phi - 0.0000059 \\sin^2(2\\phi)\\big)
    Originally, the elementary properties (:math:`a`, :math:`g_e`, etc.) were
    not known as accurately as they are today. Their values have been revised
    several times to improve the accuracy of the formula. Each revision is
    called an **epoch** and is labeled with the year in which it was adopted:
===== =========== =============== ===========
epoch :math:`g_e` :math:`\\beta` :math:`\\beta_1`
===== =========== =============== ===========
1930 9.78049 5.2884 x 10^-3 5.9 x 10^-6
1948 9.780373 5.2891 x 10^-3 5.9 x 10^-6
1967 9.780318 5.3024 x 10^-3 5.9 x 10^-6
1980 9.780327 5.3024 x 10^-3 5.8 x 10^-6
===== =========== =============== ===========
The latest epoch, 1980, is used here by default.
Parameters
----------
lat : float
Geographical Latitude, in decimal degrees.
epoch : str, default: '1980'
Epoch of the Geodetic Reference System. Options are ``'1930'``, ``'1948'``,
``'1967'`` and ``'1980'``.
Return
------
g : float
Normal gravity, in m/s^2, at given latitude.
Examples
--------
>>> ahrs.utils.international_gravity(10.0)
9.781884110728155
>>> ahrs.utils.international_gravity(10.0, epoch='1930')
9.7820428934191
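    The 1930 value can also be cross-checked by evaluating the rounded working
    formula directly. This is only an illustrative sketch; it relies on
    ``numpy`` and the degree-to-radian constant ``DEG2RAD``, both available in
    this module's namespace:
    >>> phi = 10.0*DEG2RAD
    >>> 9.78049*(1 + 0.0052884*np.sin(phi)**2 - 0.0000059*np.sin(2.0*phi)**2)
    9.7820428934191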
"""
if abs(lat)>90.0:
raise ValueError("Latitude must be between -90.0 and 90.0 degrees.")
lat *= DEG2RAD
if epoch not in ['1930', '1948', '1967', '1980']:
        raise ValueError("Invalid epoch. Try '1930', '1948', '1967' or '1980'.")
g_e, b1, b2 = 9.780327, 5.3024e-3, 5.8e-6
if epoch=='1930':
g_e, b1, b2 = 9.78049, 5.2884e-3, 5.9e-6
if epoch=='1948':
g_e, b1, b2 = 9.780373, 5.2891e-3, 5.9e-6
if epoch=='1967':
g_e, b1, b2 = 9.780318, 5.3024e-3, 5.9e-6
return g_e*(1 + b1*np.sin(lat)**2 - b2*np.sin(2.0*lat)**2)
def welmec_gravity(lat: float, h: float = 0.0) -> float:
"""
Reference normal gravity of WELMEC's gravity zone
Gravity zones are implemented by European States on their territories for
weighing instruments that are sensitive to variations of gravity [WELMEC2009]_.
Manufacturers may adjust their instruments using the reference gravity
formula:
.. math::
g = 9.780318(1 + 0.0053024\\sin^2(\\phi) - 0.0000058\\sin^2(2\\phi)) - 0.000003085h
where :math:`\\phi` is the geographical latitude and :math:`h` is the
height above sea level in meters.
Parameters
----------
lat: float
Geographical Latitude, in decimal degrees.
h : float, default: 0.0
Mean sea level height, in meters.
Return
------
g : float
Normal gravity at given point in space, in m/s^2.
Examples
--------
    >>> g = ahrs.utils.welmec_gravity(52.3, 80.0)   # Braunschweig, Germany: latitude = 52.3°, height = 80 m
    >>> abs(g - 9.812484) < 1e-6
    True
"""
if abs(lat)>90.0:
raise ValueError("Latitude must be between -90.0 and 90.0 degrees.")
lat *= DEG2RAD
return 9.780318*(1 + 0.0053024*np.sin(lat)**2 - 0.0000058*np.sin(2*lat)**2) - 0.000003085*h
class WGS:
"""
World Geodetic System 1984
Parameters
----------
a : float, default: 6378137.0
Ellipsoid's Semi-major axis (Equatorial Radius), in meters. Defaults to
Earth's semi-major axis.
f : float, default: 0.0033528106647474805
Ellipsoid's flattening factor. Defaults to Earth's flattening.
GM : float, default: 3.986004418e14
Ellipsoid's Standard Gravitational Constant in m^3/s^2
w : float, default: 0.00007292115
Ellipsoid's rotation rate in rad/s
Attributes
----------
a : float
Ellipsoid's semi-major axis (Equatorial Radius), in meters.
f : float
Ellipsoid's flattening factor.
gm : float
Ellipsoid's Standard Gravitational Constant in m^3/s^2.
w : float
Ellipsoid's rotation rate in rad/s.
b : float
Ellipsoid's semi-minor axis (Polar Radius), in meters.
is_geodetic : bool
Whether the Ellipsoid describes Earth.
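    Example
    -------
    With no arguments the class models Earth (WGS 84). Other bodies can be
    sketched by passing their own defining parameters; the lunar values below
    are rounded, illustrative figures only:
    >>> wgs = WGS()                                                  # Earth (WGS 84)
    >>> moon = WGS(a=1738100.0, f=0.0012, GM=4.9028e12, w=2.6617e-6) # approximate Moon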
"""
def __init__(self, a: float = EARTH_EQUATOR_RADIUS, f: float = EARTH_FLATTENING, GM: float = EARTH_GM, w: float = EARTH_ROTATION):
self.a = a
self.f = f
self.b = self.a*(1-self.f)
self.gm = GM
self.w = w
self.is_geodetic = np.isclose(self.a, EARTH_EQUATOR_RADIUS)
self.is_geodetic &= np.isclose(self.f, EARTH_FLATTENING)
self.is_geodetic &= np.isclose(self.gm, EARTH_GM)
self.is_geodetic &= np.isclose(self.w, EARTH_ROTATION)
def normal_gravity(self, lat: float, h: float = 0.0) -> float:
"""Normal Gravity on (or above) Ellipsoidal Surface
Estimate the normal gravity on or above the surface of an ellipsoidal
body using Somigliana's formula (on surface) and a series expansion
(above surface).
        Somigliana's closed formula as described by H. Moritz in [Tscherning]_ is:
.. math::
g = \\frac{ag_e \\cos^2\\phi + bg_p\\sin^2\\phi}{\\sqrt{a^2cos^2\\phi + b^2\\sin^2\\phi}}
For numerical computation, a more convenient form is:
.. math::
g = g_e\\frac{1+k\\sin^2\\phi}{\\sqrt{1-e^2\\sin^2\\phi}}
with the helper constant :math:`k`:
.. math::
k = \\frac{bg_p}{ag_e}-1
Parameters
----------
lat: float
Geographical latitude, in decimal degrees.
h : float, default: 0.0
Mean sea level height, in meters.
Return
------
g : float
Normal gravity at given point in space, in m/s^2.
Examples
--------
>>> wgs = ahrs.utils.WGS()
>>> wgs.normal_gravity(50.0)
9.810702135603085
>>> wgs.normal_gravity(50.0, 100.0)
9.810393625316983
"""
ge = self.equatorial_normal_gravity
gp = self.polar_normal_gravity
if ge is None or gp is None:
raise ValueError("No valid normal gravity values.")
lat *= DEG2RAD
e2 = self.first_eccentricity_squared
k = (self.b*gp)/(self.a*ge)-1
sin2 = np.sin(lat)**2
g = ge*(1+k*sin2)/np.sqrt(1-e2*sin2) # Gravity on Ellipsoid Surface (eq. 4-1)
if h==0.0:
return g
# Normal gravity above surface
m = self.w**2*self.a**2*self.b/self.gm # Gravity constant (eq. B-20)
g *= 1-2*h*(1+self.f+m-2*self.f*sin2)/self.a + 3.0*h**2/self.a**2 # Gravity Above Ellipsoid (eq. 4-3)
return g
def vertical_curvature_radius(self, lat: float) -> float:
"""
Radius of the curvature in the prime vertical, estimated at a given
latitude, :math:`\\phi`, as:
.. math::
R_N = \\frac{a}{\\sqrt{1-e^2\\sin^2\\phi}}
Parameters
----------
lat : float
Geographical latitude, in decimal degrees.
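        Example
        -------
        A minimal usage sketch (the returned radius is in meters; its exact
        value is omitted here):
        >>> wgs = ahrs.utils.WGS()
        >>> R_N = wgs.vertical_curvature_radius(50.0)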
"""
        e = np.sqrt(self.first_eccentricity_squared)
        lat *= DEG2RAD      # latitude is documented in degrees; convert before the trigonometric terms
        return self.a/np.sqrt(1-e**2*np.sin(lat)**2)
def meridian_curvature_radius(self, lat: float) -> float:
"""
Radius of the curvature in the prime meridian, estimated at a given
latitude, :math:`\\phi`, as:
.. math::
R_M = \\frac{a(1-e^2)}{\\sqrt[3]{1-e^2\\sin^2\\phi}}
Parameters
----------
lat : float
Geographical latitude, in decimal degrees.
"""
        e = np.sqrt(self.first_eccentricity_squared)
        lat *= DEG2RAD      # latitude is documented in degrees; convert before the trigonometric terms
        return self.a*(1-e**2)/np.cbrt(1-e**2*np.sin(lat)**2)
@property
def first_eccentricity_squared(self):
"""
First Eccentricity Squared :math:`e^2` of the ellipsoid, computed as:
.. math::
e^2 = 2f - f^2
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.first_eccentricity_squared
0.0066943799901413165
"""
return 2*self.f - self.f**2
@property
def second_eccentricity_squared(self):
"""
Second Eccentricity Squared :math:`e'^2`, computed as:
.. math::
e'^2 = \\frac{a^2-b^2}{b^2}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.second_eccentricity_squared
0.006739496742276434
"""
return (self.a**2-self.b**2)/self.b**2
@property
def linear_eccentricity(self):
"""
Linear Eccentricity :math:`E`, computed as:
.. math::
E = \\sqrt{a^2-b^2}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.linear_eccentricity
521854.00842338527
"""
return np.sqrt(self.a**2-self.b**2)
@property
def aspect_ratio(self):
"""
Aspect Ratio :math:`AR`, computed as:
.. math::
AR = \\frac{b}{a}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.aspect_ratio
0.9966471893352525
"""
return self.b/self.a
@property
def curvature_polar_radius(self):
"""
Polar Radius of Curvature :math:`R_P`, computed as:
.. math::
\\begin{array}{ll}
R_P &= \\frac{a^2}{b} = \\frac{a}{\\sqrt{1-e^2}} \\\\
&= \\frac{a}{1-f}
\\end{array}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.curvature_polar_radius
6399593.625758493
"""
return self.a/(1-self.f)
@property
def arithmetic_mean_radius(self):
"""
Mean Radius :math:`R_1` of the Three Semi-Axes, computed as:
.. math::
R_1 = a\\Big(1-\\frac{f}{3}\\Big)
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.arithmetic_mean_radius
6371008.771415059
"""
return self.a*(1-self.f/3)
@property
def authalic_sphere_radius(self):
"""
Radius :math:`R_2` of a Sphere of Equal Area, computed as:
.. math::
R_2 = R_P \\Big(1-\\frac{2}{3}e'^2 + \\frac{26}{45}e'^4 - \\frac{100}{189}e'^6 + \\frac{7034}{14175}e'^8 - \\frac{220652}{467775}e'^{10} \\Big)
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.authalic_sphere_radius
6371007.1809182055
"""
r = self.curvature_polar_radius
es = np.sqrt(self.second_eccentricity_squared)
return r*(1 - 2*es**2/3 + 26*es**4/45 - 100*es**6/189 + 7034*es**8/14175 - 220652*es**10/467775)
@property
def equivolumetric_sphere_radius(self):
"""
Radius :math:`R_3` of a Sphere of Equal Volume, computed as:
.. math::
\\begin{array}{ll}
R_3 &= \\sqrt[3]{a^2b} \\\\
&= a\\sqrt[3]{1-f}
\\end{array}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.equivolumetric_sphere_radius
6371000.790009159
"""
return self.a*np.cbrt(1-self.f)
@property
def normal_gravity_constant(self):
"""
Normal Gravity Formula Constant :math:`m`, computed as:
.. math::
m = \\frac{\\omega^2a^2b}{GM}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.normal_gravity_constant
0.0034497865068408447
"""
return self.w**2*self.a**2*self.b/self.gm
@property
def dynamical_form_factor(self):
"""
WGS 84 Dynamical Form Factor :math:`J_2`, computed as:
.. math::
J_2 = \\frac{e^2}{3} \\Big(1-\\frac{2me'}{15q_0}\\Big)
where:
.. math::
q_0 = \\frac{1}{2}\\Big[\\Big(1+\\frac{3}{e'^2}\\Big)\\arctan e' - \\frac{3}{e'}\\Big]
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.dynamical_form_factor
0.0010826298213129219
"""
m = self.normal_gravity_constant
e2 = self.first_eccentricity_squared
es = np.sqrt(self.second_eccentricity_squared)
q0 = 0.5*((1+3/es**2)*np.arctan(es) - 3/es)
return e2*(1-2*m*es/(15*q0))/3
@property
def second_degree_zonal_harmonic(self):
"""
WGS 84 Second Degree Zonal Harmonic :math:`C_{2,0}`, computed as:
.. math::
C_{2,0} = -\\frac{J_2}{\\sqrt{5}}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.second_degree_zonal_harmonic
-0.00048416677498482876
"""
return -self.dynamical_form_factor/np.sqrt(5.0)
@property
def normal_gravity_potential(self):
"""
Normal Gravity Potential :math:`U_0` of the WGS 84 Ellipsoid, computed
as:
.. math::
U_0 = \\frac{GM}{E} \\arctan e' + \\frac{\\omega^2a^2}{3}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.normal_gravity_potential
62636851.71456948
"""
es = np.sqrt(self.second_eccentricity_squared)
return self.gm*np.arctan(es)/self.linear_eccentricity + self.w**2*self.a**2/3
@property
def equatorial_normal_gravity(self):
"""
Normal Gravity :math:`g_e` at the Equator, in
:math:`\\frac{\\mathrm{m}}{\\mathrm{s}^2}`, computed as:
.. math::
g_e = \\frac{GM}{ab}\\Big(1-m-\\frac{me'q_0'}{6q_0}\\Big)
where:
.. math::
\\begin{array}{ll}
q_0 &= \\frac{1}{2}\\Big[\\Big(1+\\frac{3}{e'^2}\\Big)\\arctan e' - \\frac{3}{e'}\\Big] \\\\
q_0' &= 3\\Big[\\Big(1+\\frac{1}{e'^2}\\Big)\\Big(1-\\frac{1}{e'}\\arctan e'\\Big)\\Big] - 1
\\end{array}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.equatorial_normal_gravity
9.78032533590406
"""
m = self.normal_gravity_constant
es = np.sqrt(self.second_eccentricity_squared)
q0 = 0.5*((1 + 3/es**2)*np.arctan(es) - 3/es)
q0s = 3*((1 + 1/es**2)*(1 - np.arctan(es)/es)) - 1
return self.gm * (1 - m - m*es*q0s/(6*q0))/(self.a*self.b)
@property
def polar_normal_gravity(self):
"""
Normal Gravity :math:`g_p` at the Pole, in
:math:`\\frac{\\mathrm{m}}{\\mathrm{s}^2}`, computed as:
.. math::
g_p = \\frac{GM}{a^2}\\Big(1+\\frac{me'q_0'}{3q_0}\\Big)
where:
.. math::
\\begin{array}{ll}
q_0 &= \\frac{1}{2}\\Big[\\Big(1+\\frac{3}{e'^2}\\Big)\\arctan e' - \\frac{3}{e'}\\Big] \\\\
q_0' &= 3\\Big[\\Big(1+\\frac{1}{e'^2}\\Big)\\Big(1-\\frac{1}{e'}\\arctan e'\\Big)\\Big] - 1
\\end{array}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.polar_normal_gravity
9.832184937863065
"""
m = self.normal_gravity_constant
es = np.sqrt(self.second_eccentricity_squared)
q0 = 0.5*((1 + 3/es**2)*np.arctan(es) - 3/es)
q0s = 3*((1 + 1/es**2)*(1 - np.arctan(es)/es)) - 1
return self.gm * (1 + m*es*q0s/(3*q0))/self.a**2
@property
def mean_normal_gravity(self):
"""
Mean Value :math:`\\bar{g}` of Normal Gravity, in
:math:`\\frac{\\mathrm{m}}{\\mathrm{s}^2}`, computed as:
.. math::
\\bar{g} = g_e\\Big(1 + \\frac{1}{6}e^2 + \\frac{1}{3}k + \\frac{59}{360}e^4 + \\frac{5}{18}e^2k + \\frac{2371}{15120}e^6 + \\frac{259}{1080}e^4k + \\frac{270229}{1814400}e^8 + \\frac{9623}{45360}e^6k \\Big)
where:
.. math::
k = \\frac{bg_p - ag_e}{ag_e} = \\frac{bg_p}{ag_e}-1
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.mean_normal_gravity
9.797643222256516
"""
e = np.sqrt(self.first_eccentricity_squared)
gp = self.polar_normal_gravity
ge = self.equatorial_normal_gravity
k = (self.b*gp)/(self.a*ge)-1
g = ge * (1 + e**2/6 + k/3 + 59*e**4/360 + 5*e**2*k/18 + 2371*e**6/15120 + 259*e**4*k/1080 + 270229*e**8/1814400 + 9623*e**6*k/45360)
return g
@property
def mass(self):
"""
The Mass :math:`M` of the Earth, in kg, computed as:
.. math::
M = \\frac{GM}{G}
where :math:`G` is the universal constant of gravitation equal to
:math:`6.67428\\times 10^{-11} \\frac{\\mathrm{m}^3}{\\mathrm{kg}\ \\mathrm{s}^2}`
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.mass
5.972186390142457e+24
"""
return self.gm/UNIVERSAL_GRAVITATION_WGS84
@property
def geometric_inertial_moment_about_Z(self):
"""
Geometric Moment of Inertia (:math:`C`), with respect to the Z-Axis of
Rotation, computed as:
.. math::
C_{geo} = \\frac{2}{3}Ma^2\\Big(1-\\frac{2}{5}\\sqrt{\\frac{5m}{2f}-1}\\Big)
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.geometric_inertial_moment_about_Z
8.073029370114392e+37
"""
return 2*self.mass*self.a**2*(1-0.4*np.sqrt(2.5*self.normal_gravity_constant/self.f - 1))/3
@property
def geometric_inertial_moment(self):
"""
Geometric Moment of Inertia (:math:`A`), with respect to Any Axis in
the Equatorial Plane, computed as:
.. math::
A_{geo} = C_{geo} + \\sqrt{5}Ma^2 C_{2,0geo}
where :math:`C_{2,0geo} = -4.84166774985\\times 10^{-4}` is Earth's
Geographic Second Degree Zonal Harmonic.
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.geometric_inertial_moment
8.046726628049449e+37
"""
if not self.is_geodetic:
raise NotImplementedError("The model must be Geodetic.")
return self.geometric_inertial_moment_about_Z + np.sqrt(5)*self.mass*self.a**2*EARTH_C20_GEO
@property
def geometric_dynamic_ellipticity(self):
"""
Geometric Solution for Dynamic Ellipticity :math:`H`, computed as:
.. math::
H_{geo} = \\frac{C_{geo}-A_{geo}}{C_{geo}}
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.geometric_dynamic_ellipticity
0.003258100628533992
"""
return (self.geometric_inertial_moment_about_Z - self.geometric_inertial_moment)/self.geometric_inertial_moment_about_Z
# Geodetic Properties
@property
def atmosphere_gravitational_constant(self):
"""
Gravitational Constant of the Atmosphere :math:`GM_A`, computed as:
.. math::
GM_A = G\\ M_A
where :math:`M_A` is the total mean mass of the atmosphere (with water
vapor) equal to :math:`5.148\\times 10^{18} \\mathrm{kg}`.
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.atmosphere_gravitational_constant
343591934.4
"""
if not self.is_geodetic:
raise AttributeError("The model is not Geodetic.")
return UNIVERSAL_GRAVITATION_WGS84*EARTH_ATMOSPHERE_MASS
@property
def gravitational_constant_without_atmosphere(self):
"""
Geocentric Gravitational Constant with Earth's Atmosphere Excluded
:math:`GM'`, computed as:
.. math::
GM' = GM - GM_A
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.gravitational_constant_without_atmosphere
398600098208065.6
"""
if not self.is_geodetic:
raise NotImplementedError("The model must be Geodetic.")
return self.gm - self.atmosphere_gravitational_constant
@property
def dynamic_inertial_moment_about_Z(self):
"""
Dynamic Moment of Inertia (:math:`C`), with respect to the Z-Axis of
Rotation, computed as:
.. math::
C_{dyn} = -\\sqrt{5}Ma^2\\frac{C_{2,0dyn}}{H}
where :math:`H=3.2737949 \\times 10^{-3}` is the Dynamic Ellipticity.
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.dynamic_inertial_moment_about_Z
8.03430094201443e+37
"""
if not self.is_geodetic:
raise NotImplementedError("The model must be Geodetic.")
return -np.sqrt(5)*self.mass*self.a**2*EARTH_C20_DYN/DYNAMIC_ELLIPTICITY
@property
def dynamic_inertial_moment_about_X(self):
"""
Dynamic Moment of Inertia (:math:`A`), with respect to the X-Axis of
Rotation, computed as:
.. math::
A_{dyn} = -\\sqrt{5}Ma^2\\Big[\\Big(1-\\frac{1}{H}\\Big)C_{2,0dyn} - \\frac{C_{2,2dyn}}{\\sqrt{3}}\\Big]
where :math:`H=3.2737949 \\times 10^{-3}` is the Dynamic Ellipticity.
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.dynamic_inertial_moment_about_X
8.007921777277886e+37
"""
if not self.is_geodetic:
raise NotImplementedError("The model must be Geodetic.")
return np.sqrt(5)*self.mass*self.a**2*((1-1/DYNAMIC_ELLIPTICITY)*EARTH_C20_DYN - EARTH_C22_DYN/np.sqrt(3))
@property
def dynamic_inertial_moment_about_Y(self):
"""
        Dynamic Moment of Inertia (:math:`B`), with respect to the Y-Axis of
Rotation, computed as:
.. math::
B_{dyn} = -\\sqrt{5}Ma^2\\Big[\\Big(1-\\frac{1}{H}\\Big)C_{2,0dyn} + \\frac{C_{2,2dyn}}{\\sqrt{3}}\\Big]
where :math:`H=3.2737949 \\times 10^{-3}` is the Dynamic Ellipticity.
Example
-------
>>> wgs = ahrs.utils.WGS()
>>> wgs.dynamic_inertial_moment_about_Y
8.008074799852911e+37
"""
if not self.is_geodetic:
raise NotImplementedError("The model must be Geodetic.")
return np.sqrt(5)*self.mass*self.a**2*((1-1/DYNAMIC_ELLIPTICITY)*EARTH_C20_DYN + EARTH_C22_DYN/np.sqrt(3))
class WGSTest(unittest.TestCase):
"""Test Gravitational Models
All values used to test the WGS model are retrieved from its latest report
as defined in Appendix B.
For the WELMEC reference gravity, only two example cases are provided,
which are also used to test it.
"""
def test_wgs84(self):
"""Test WGS84 with Earth's properties"""
wgs = WGS()
self.assertAlmostEqual(wgs.a, 6_378_137.0, 1)
self.assertAlmostEqual(wgs.f, 1/298.257223563, 7)
self.assertAlmostEqual(wgs.gm, 3.986004418e14, 7)
self.assertAlmostEqual(wgs.w, 7.292115e-5, 7)
self.assertAlmostEqual(wgs.b, 6_356_752.3142, 4)
self.assertAlmostEqual(wgs.first_eccentricity_squared, 8.1819190842622e-2**2, 7)
self.assertAlmostEqual(wgs.second_eccentricity_squared, 8.2094437949696e-2**2, 7)
self.assertAlmostEqual(wgs.linear_eccentricity, 5.2185400842339e5, 7)
self.assertAlmostEqual(wgs.aspect_ratio, 9.966471893352525e-1, 10)
self.assertAlmostEqual(wgs.curvature_polar_radius, 6399593.6258, 4)
self.assertAlmostEqual(wgs.arithmetic_mean_radius, 6371008.7714, 4)
self.assertAlmostEqual(wgs.authalic_sphere_radius, 6371007.181, 3)
self.assertAlmostEqual(wgs.equivolumetric_sphere_radius, 6371000.79, 2)
self.assertAlmostEqual(wgs.normal_gravity_constant, 3.44978650684084e-3, 10)
self.assertAlmostEqual(wgs.dynamical_form_factor, 1.082629821313e-3, 10)
self.assertAlmostEqual(wgs.second_degree_zonal_harmonic, -4.84166774985e-4, 10)
self.assertAlmostEqual(wgs.normal_gravity_potential, 6.26368517146e7, 4)
self.assertAlmostEqual(wgs.equatorial_normal_gravity, 9.7803253359, 10)
self.assertAlmostEqual(wgs.polar_normal_gravity, 9.8321849379, 10)
self.assertAlmostEqual(wgs.mean_normal_gravity, 9.7976432223, 10)
self.assertAlmostEqual(wgs.mass, 5.9721864e24, delta=1e17)
self.assertAlmostEqual(wgs.atmosphere_gravitational_constant, 3.43592e8, delta=1e3)
self.assertAlmostEqual(wgs.gravitational_constant_without_atmosphere, 3.986000982e14, delta=1e4)
self.assertAlmostEqual(wgs.dynamic_inertial_moment_about_Z, 8.0340094e37, delta=1e30) # FAILS with report's reference
self.assertAlmostEqual(wgs.dynamic_inertial_moment_about_X, 8.00792178e37, delta=1e29)
self.assertAlmostEqual(wgs.dynamic_inertial_moment_about_Y, 8.0080748e37, delta=1e30)
self.assertAlmostEqual(wgs.geometric_inertial_moment_about_Z, 8.07302937e37, delta=1e29)
self.assertAlmostEqual(wgs.geometric_inertial_moment, 8.04672663e37, delta=1e29)
self.assertAlmostEqual(wgs.geometric_dynamic_ellipticity, 3.2581004e-3, 9)
del wgs
def test_welmec(self):
"""Test WELMEC reference gravity"""
self.assertAlmostEqual(welmec_gravity(52.3, 80.0), 9.812484, 6) # Braunschweig, Germany
self.assertAlmostEqual(welmec_gravity(60.0, 250.0), 9.818399, 6) # Uppsala, Sweden
if __name__ == '__main__':
unittest.main() | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/utils/wgs84.py | wgs84.py |
import os
import sys
import scipy.io as sio
import numpy as np
def get_freq(times, units='s'):
"""
    Identify and return the sampling frequency of a dataset.
Given an array with timestamps, the step between times is estimated, then
a mean of its values is inverted to obtain the sampling frequency.
Parameters
----------
times : array
1-D array with the timestamps of the file.
units : str
Time units of the array of timestamps. Default is 's' for seconds.
Possible options are: 's', 'ms', 'us' and 'ns'.
Returns
-------
frequency : float
        Estimated sampling frequency in Hertz.
Examples
--------
>>> t = np.arange(500) + np.random.randn(500)
    >>> ahrs.utils.io.get_freq(t)
    0.9984740941199178
    >>> t = [0.1, 0.2, 0.3, 0.4, 0.5]
    >>> ahrs.utils.io.get_freq(t)
    10.0
    >>> ahrs.utils.io.get_freq(t, 'ms')
10000.0
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
diffs = np.diff(times)
mean = np.nanmean(diffs)
if units == 'ms':
mean *= 1e-3
if units == 'us':
mean *= 1e-6
if units == 'ns':
mean *= 1e-9
return 1.0 / mean
def find_index(header, s):
for h in header:
if s in h.lower():
return header.index(h)
return None
def load(file_name, separator=';'):
"""
Load the contents of a file into a dictionary.
Supported formats, so far, are MAT and CSV files. More to come.
To Do:
- Get a better way to find data from keys of dictionary. PLEASE.
Parameters
----------
file_name : str
Name of the file
separator : str, default: ';'
String used to split values. Normally using a single character. Default
is the semicolon.
Returns
-------
data : Data
Read information stored in class Data.
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
if not os.path.isfile(file_name):
sys.exit("[ERROR] The file {} does not exist.".format(file_name))
file_ext = file_name.strip().split('.')[-1]
if file_ext == 'mat':
d = sio.loadmat(file_name)
d.update({'rads':False})
return Data(d)
if file_ext == 'csv':
with open(file_name, 'r') as f:
all_lines = f.readlines()
split_header = all_lines[0].strip().split(separator)
a_idx = find_index(split_header, 'acc')
g_idx = find_index(split_header, 'gyr')
m_idx = find_index(split_header, 'mag')
q_idx = find_index(split_header, 'orient')
data = np.genfromtxt(all_lines[2:], delimiter=separator)
d = {'time' : data[:, 0],
'acc' : data[:, a_idx:a_idx+3],
'gyr' : data[:, g_idx:g_idx+3],
'mag' : data[:, m_idx:m_idx+3],
'qts' : data[:, q_idx:q_idx+4]}
d.update({'in_rads':True})
return Data(d)
return None
def load_ETH_EC(path):
"""
Loads data from a directory containing files of the Event-Camera Dataset
from the ETH Zurich (http://rpg.ifi.uzh.ch/davis_data.html)
The dataset includes 4 basic text files with recorded data, plus a file
listing all images of the recording included in the subfolder 'images.'
**events.txt**: One event per line (timestamp x y polarity)
**images.txt**: One image reference per line (timestamp filename)
**imu.txt**: One measurement per line (timestamp ax ay az gx gy gz)
**groundtruth.txt**: One ground truth measurements per line (timestamp px py pz qx qy qz qw)
**calib.txt**: Camera parameters (fx fy cx cy k1 k2 p1 p2 k3)
Parameters
----------
path : str
Path of the folder containing the TXT files.
Returns
-------
data : Data
class Data with the contents of the dataset.
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
if not os.path.isdir(path):
print("Invalid path")
return None
data = {}
    files = [f for f in os.listdir(path) if f.endswith('.txt')]
missing = list(set(files).symmetric_difference([
'events.txt',
'images.txt',
'imu.txt',
'groundtruth.txt',
'calib.txt']))
if missing:
sys.exit("Incomplete data. Missing files:\n{}".format('\n'.join(missing)))
imu_data = np.loadtxt(os.path.join(path, 'imu.txt'), delimiter=' ')
data.update({"time_sensors": imu_data[:, 0]})
data.update({"accs": imu_data[:, 1:4]})
data.update({"gyros": imu_data[:, 4:7]})
data.update({"in_rads": False})
truth_data = np.loadtxt(os.path.join(path, 'groundtruth.txt'), delimiter=' ')
data.update({"time_truth": truth_data[:, 0]})
data.update({"qts": truth_data[:, 4:]})
return Data(data)
def load_ETH_EuRoC(path):
"""
Load data from the EuRoC MAV dataset of the ETH Zurich
(https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets)
Parameters
----------
path : str
Path to the folder containing the dataset.
References
----------
.. [ETH-EuRoC] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S.
Omari, M. Achtelik and R. Siegwart, The EuRoC micro aerial vehicle
datasets, International Journal of Robotic Research,
DOI: 10.1177/0278364915620033, early 2016.
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
if not os.path.isdir(path):
print("Invalid path")
return None
valid_folders = ["imu", "groundtruth", "vicon"]
# Find data.csv files in each folder
folders = os.listdir(path)
subfolders = {}
for f in valid_folders:
for s in folders:
if f in s:
subfolders.update({f: s})
# Build data dictionary
data = {}
files = []
for sf in subfolders.keys():
full_path = os.path.join(path, subfolders[sf])
contents = os.listdir(full_path)
if "data.csv" not in contents:
print("ERROR: File data.csv was not found in {}".format(subfolders[sf]))
if sf == "imu":
file_path = os.path.join(full_path, "data.csv")
with open(file_path, 'r') as f:
all_lines = f.readlines()
time_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=0)
gyrs_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(1, 2, 3))
accs_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(4, 5, 6))
data.update({"imu_time": time_array})
data.update({"imu_gyr": gyrs_array})
data.update({"imu_acc": accs_array})
if sf == "vicon":
file_path = os.path.join(full_path, "data.csv")
with open(file_path, 'r') as f:
all_lines = f.readlines()
time_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=0)
pos_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(1, 2, 3))
qts_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(4, 5, 6, 7))
data.update({"vicon_time": time_array})
data.update({"vicon_position": pos_array})
data.update({"vicon_quaternion": qts_array})
if sf == "groundtruth":
file_path = os.path.join(full_path, "data.csv")
with open(file_path, 'r') as f:
all_lines = f.readlines()
time_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=0)
pos_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(1, 2, 3))
qts_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(4, 5, 6, 7))
vel_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(8, 9, 10))
ang_vel_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(11, 12, 13))
acc_array = np.genfromtxt(all_lines[1:], dtype=float, comments='#', delimiter=',', usecols=(14, 15, 16))
data.update({"time": time_array})
data.update({"position": pos_array})
data.update({"qts": qts_array})
data.update({"vel": vel_array})
data.update({"ang_vel": ang_vel_array})
data.update({"acc": acc_array})
data.update({'in_rads':True})
return Data(data)
def load_OxIOD(path, sequence=1):
"""
Load data from the Oxford Inertial Odometry Dataset
(http://deepio.cs.ox.ac.uk/)
The OxIOD has several sequences stored in CSV, composed of sensors and
vicon recordings, with the names and formats:
imu:
[0] Time
[1] attitude_roll(radians)
[2] attitude_pitch(radians)
[3] attitude_yaw(radians)
[4] rotation_rate_x(radians/s)
[5] rotation_rate_y(radians/s)
[6] rotation_rate_z(radians/s)
[7] gravity_x(G)
[8] gravity_y(G)
[9] gravity_z(G)
[10] user_acc_x(G)
[11] user_acc_y(G)
[12] user_acc_z(G)
[13] magnetic_field_x(microteslas)
[14] magnetic_field_y(microteslas)
[15] magnetic_field_z(microteslas)
vicon:
[0] Time
[1] Header
[2] translation.x
[3] translation.y
[4] translation.z
[5] rotation.x
[6] rotation.y
[7] rotation.z
[8] rotation.w
Parameters
----------
path : str
Path to the folder containing the dataset.
sequence : int
Sequence to load. Default is 1.
References
----------
.. [OxIOD] Changhao Chen, Peijun Zhao, Chris Xiaoxuan Lu, Wei Wang, Andrew
Markham, Niki Trigoni. OxIOD: The Dataset for Deep Inertial Odometry.
arXiv:1809.07491. September 2018.
(https://arxiv.org/pdf/1809.07491.pdf)
"""
raise DeprecationWarning("This function will be removed. Actually I don't know how you got to use it.")
if not os.path.isdir(path):
print("Invalid path")
return None
imu_file = 'imu{}.csv'.format(sequence)
vicon_file = 'vi{}.csv'.format(sequence)
all_files = os.listdir(path)
    # Assert existence of required files
    if imu_file not in all_files:
        print("IMU Sequence does NOT exist.")
        return None
    if vicon_file not in all_files:
        print("Vicon Sequence does NOT exist.")
return None
# Read files
base_path = os.path.relpath(path)
imu_file = os.path.join(base_path, imu_file)
vicon_file = os.path.join(base_path, vicon_file)
# Read Sensor information
data = {}
    imu_data = np.genfromtxt(imu_file, dtype=float, delimiter=',', filling_values=np.nan)
data.update({"imu_time": imu_data[:, 0]})
data.update({"ang_pos": imu_data[:, 1:4]})
data.update({"gyr": imu_data[:, 4:7]})
data.update({"acc": imu_data[:, 7:10]})
data.update({"usr_acc": imu_data[:, 10:13]})
data.update({"mag": imu_data[:, 13:]})
data.update({'in_rads':True})
    vicon_data = np.genfromtxt(vicon_file, dtype=float, delimiter=',', filling_values=np.nan)
data.update({"vicon_time": vicon_data[:, 0]})
data.update({"pos": vicon_data[:, 2:5]})
data.update({"q_ref": np.roll(vicon_data[:, 5:], 1, axis=1)}) # Roll data to fit standard quaternion notation
return Data(data)
class Data:
"""Data to store the arrays of the most common variables."""
time = None
acc = None
gyr = None
mag = None
q_ref = None
def __init__(self, *initial_data, **kwargs):
# def_attributes = ['time', 'acc', 'gyr', 'mag', 'q_ref', 'pos']
# for a in def_attributes:
# setattr(self, a, None)
for dictionary in initial_data:
for key in dictionary:
setattr(self, key, dictionary[key])
for key in kwargs:
setattr(self, key, kwargs[key])
self.num_samples = len(self.acc) if self.acc is not None else 0
def show_items(self):
for k in self.__dict__.keys():
print("{}".format(k)) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/utils/io.py | io.py |
import numpy as np
class SAAM:
"""
Super-fast Attitude from Accelerometer and Magnetometer
Parameters
----------
acc : numpy.ndarray, default: None
        N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
Attributes
----------
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
Q : numpy.ndarray, default: None
M-by-4 Array with all estimated quaternions, where M is the number of
samples. Equal to None when no estimation is performed.
Raises
------
    ValueError
        When the dimensions of the input arrays ``acc`` and ``mag`` are not equal.
Examples
--------
>>> acc_data.shape, mag_data.shape # NumPy arrays with sensor data
((1000, 3), (1000, 3))
>>> from ahrs.filters import SAAM
>>> orientation = SAAM(acc=acc_data, mag=mag_data)
>>> orientation.Q.shape # Estimated attitudes as Quaternions
(1000, 4)
>>> orientation.Q
array([[-0.09867706, -0.33683592, -0.52706394, -0.77395607],
[-0.10247491, -0.33710813, -0.52117549, -0.77732433],
[-0.10082646, -0.33658091, -0.52082828, -0.77800078],
...,
[-0.78760687, -0.57789515, 0.2131519, -0.01669966],
[-0.78683706, -0.57879487, 0.21313092, -0.02142776],
[-0.77869223, -0.58616905, 0.22344478, -0.01080235]])
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None):
self.acc = acc
self.mag = mag
self.Q = None
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""
Estimate the quaternions given all data.
Attributes ``acc`` and ``mag`` must contain data. It is assumed that
these attributes have the same shape (M, 3), where M is the number of
observations.
The full estimation is vectorized, to avoid the use of a time-wasting
loop.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
if self.acc.shape[-1] != 3:
raise ValueError(f"Sensor data must be of shape (M, 3), but it got {self.acc.shape}")
# Normalize measurements (eq. 1)
ax, ay, az = np.transpose(self.acc/np.linalg.norm(self.acc, axis=1)[:, None])
mx, my, mz = np.transpose(self.mag/np.linalg.norm(self.mag, axis=1)[:, None])
# Dynamic magnetometer reference vector (eq. 12)
mD = ax*mx + ay*my + az*mz
mN = np.sqrt(1-mD**2)
# Quaternion components (eq. 16)
qw = ax*my - ay*(mN+mx)
qx = (az-1)*(mN+mx) + ax*(mD-mz)
qy = (az-1)*my + ay*(mD-mz)
qz = az*mD - ax*mN-mz
# Final quaternion (eq. 18)
Q = np.c_[qw, qx, qy, qz]
return Q/np.linalg.norm(Q, axis=1)[:, None]
def estimate(self, acc: np.ndarray = None, mag: np.ndarray = None) -> np.ndarray:
"""
Attitude Estimation
Parameters
----------
        acc : numpy.ndarray
            Sample of tri-axial Accelerometer.
        mag : numpy.ndarray
            Sample of tri-axial Magnetometer.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
Examples
--------
>>> acc_data = np.array([4.098297, 8.663757, 2.1355896])
>>> mag_data = np.array([-28.71550512, -25.92743566, 4.75683931])
>>> from ahrs.filters import SAAM
>>> saam = SAAM()
>>> saam.estimate(acc=acc_data, mag=mag_data) # Estimate attitude as quaternion
array([-0.09867706, -0.33683592, -0.52706394, -0.77395607])
"""
# Normalize measurements (eq. 1)
a_norm = np.linalg.norm(acc)
m_norm = np.linalg.norm(mag)
if not a_norm>0 or not m_norm>0: # handle NaN
return None
ax, ay, az = acc/a_norm
mx, my, mz = mag/m_norm
# Dynamic magnetometer reference vector (eq. 12)
mD = ax*mx + ay*my + az*mz
mN = np.sqrt(1-mD**2)
# Quaternion components (eq. 16)
qw = ax*my - ay*(mN+mx)
qx = (az-1)*(mN+mx) + ax*(mD-mz)
qy = (az-1)*my + ay*(mD-mz)
qz = az*mD - ax*mN-mz
# Final quaternion (eq. 18)
q = np.array([qw, qx, qy, qz])
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/saam.py | saam.py |
import numpy as np
from ..common.mathfuncs import cosd, sind
class OLEQ:
"""
Optimal Linear Estimator of Quaternion
Parameters
----------
acc : numpy.ndarray, default: None
        N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
magnetic_ref : float or numpy.ndarray
Local magnetic reference.
frame : str, default: 'NED'
Local tangent plane coordinate frame. Valid options are right-handed
``'NED'`` for North-East-Down and ``'ENU'`` for East-North-Up.
Raises
------
    ValueError
        When the dimensions of the input arrays ``acc`` and ``mag`` are not equal.
Examples
--------
>>> acc_data.shape, mag_data.shape # NumPy arrays with sensor data
((1000, 3), (1000, 3))
>>> from ahrs.filters import OLEQ
>>> orientation = OLEQ(acc=acc_data, mag=mag_data)
>>> orientation.Q.shape # Estimated attitude
(1000, 4)
"""
def __init__(self,
acc: np.ndarray = None,
mag: np.ndarray = None,
weights: np.ndarray = None,
magnetic_ref: np.ndarray = None,
frame: str = 'NED'
):
self.acc = acc
self.mag = mag
self.a = weights if weights is not None else np.ones(2)
self.frame = frame
# Reference measurements
self._set_reference_frames(magnetic_ref, self.frame)
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _set_reference_frames(self, mref: float, frame: str = 'NED') -> None:
if frame.upper() not in ['NED', 'ENU']:
raise ValueError(f"Invalid frame '{frame}'. Try 'NED' or 'ENU'")
# Magnetic Reference Vector
if mref is None:
# Local magnetic reference of Munich, Germany
from ..common.mathfuncs import MUNICH_LATITUDE, MUNICH_LONGITUDE, MUNICH_HEIGHT
from ..utils.wmm import WMM
wmm = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT)
self.m_ref = np.array([wmm.X, wmm.Y, wmm.Z]) if frame.upper() == 'NED' else np.array([wmm.Y, wmm.X, -wmm.Z])
elif isinstance(mref, (int, float)):
cd, sd = cosd(mref), sind(mref)
self.m_ref = np.array([cd, 0.0, sd]) if frame.upper() == 'NED' else np.array([0.0, cd, -sd])
else:
self.m_ref = np.copy(mref)
self.m_ref /= np.linalg.norm(self.m_ref)
# Gravitational Reference Vector
self.a_ref = np.array([0.0, 0.0, -1.0]) if frame.upper() == 'NED' else np.array([0.0, 0.0, 1.0])
def _compute_all(self) -> np.ndarray:
"""Estimate the quaternions given all data.
Attributes ``acc`` and ``mag`` must contain data.
Returns
-------
Q : array
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
for t in range(num_samples):
Q[t] = self.estimate(self.acc[t], self.mag[t])
return Q
def WW(self, Db: np.ndarray, Dr: np.ndarray) -> np.ndarray:
"""W Matrix
.. math::
\\mathbf{W} = D_x^r\\mathbf{M}_1 + D_y^r\\mathbf{M}_2 + D_z^r\\mathbf{M}_3
Parameters
----------
Db : numpy.ndarray
Normalized tri-axial observations vector.
Dr : numpy.ndarray
Normalized tri-axial reference vector.
Returns
-------
W_matrix : numpy.ndarray
W Matrix.
"""
bx, by, bz = Db
rx, ry, rz = Dr
M1 = np.array([
[bx, 0.0, bz, -by],
[0.0, bx, by, bz],
[bz, by, -bx, 0.0],
[-by, bz, 0.0, -bx]]) # (eq. 18a)
M2 = np.array([
[by, -bz, 0.0, bx],
[-bz, -by, bx, 0.0],
[0.0, bx, by, bz],
[bx, 0.0, bz, -by]]) # (eq. 18b)
M3 = np.array([
[bz, by, -bx, 0.0],
[by, -bz, 0.0, bx],
[-bx, 0.0, -bz, by],
[0.0, bx, by, bz]]) # (eq. 18c)
return rx*M1 + ry*M2 + rz*M3 # (eq. 20)
def estimate(self, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""Attitude Estimation
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer.
mag : numpy.ndarray
Sample of tri-axial Magnetometer.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
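        Examples
        --------
        A single-sample sketch; the readings below are the same illustrative
        values used for the other estimators in this package:
        >>> acc_data = np.array([4.098297, 8.663757, 2.1355896])
        >>> mag_data = np.array([-28.71550512, -25.92743566, 4.75683931])
        >>> oleq = OLEQ()
        >>> q = oleq.estimate(acc_data, mag_data)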
"""
# Normalize measurements (eq. 1)
a_norm = np.linalg.norm(acc)
m_norm = np.linalg.norm(mag)
if not a_norm > 0 or not m_norm > 0: # handle NaN
return None
acc = np.copy(acc)/a_norm
mag = np.copy(mag)/m_norm
sum_aW = self.a[0]*self.WW(acc, self.a_ref) + self.a[1]*self.WW(mag, self.m_ref) # (eq. 31)
R = 0.5*(np.identity(4) + sum_aW) # (eq. 33)
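        # Fixed-point iteration: repeatedly apply the rotation operator R and
        # re-normalize, until the quaternion stops changing or 20 iterations pass.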
q = np.array([0., 0., 0., 1.]) # "random" quaternion
last_q = np.array([1., 0., 0., 0.])
i = 0
while np.linalg.norm(q-last_q) > 1e-8 and i <= 20:
last_q = q
q = R @ last_q # (eq. 24)
q /= np.linalg.norm(q)
i += 1
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/oleq.py | oleq.py |
import numpy as np
from ..common.constants import *
class Tilt:
"""
Gravity-based estimation of attitude.
Parameters
----------
acc : numpy.ndarray, default: None
        N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
as_angles : bool, default: False
Whether to return the attitude as rpy angles.
Attributes
----------
acc : numpy.ndarray
N-by-3 array with N tri-axial accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N tri-axial magnetometer samples.
Q : numpy.ndarray, default: None
        N-by-4 array with the estimated quaternions, or N-by-3 array with the
        estimated roll-pitch-yaw angles if ``as_angles`` is set to ``True``.
as_angles : bool, default: False
Whether to return the attitude as rpy angles.
Raises
------
ValueError
When shape of input array ``acc`` is not (N, 3)
Examples
--------
Assuming we have 3-axis accelerometer data in N-by-3 arrays, we can simply
give these samples to the constructor. The tilt estimation works solely
with accelerometer samples.
>>> from ahrs.filters import Tilt
>>> tilt = Tilt(acc_data)
The estimated quaternions are saved in the attribute ``Q``.
>>> tilt.Q
array([[0.76901856, 0.60247641, -0.16815772, 0.13174072],
[0.77310283, 0.59724644, -0.16900433, 0.1305612 ],
[0.7735134, 0.59644005, -0.1697294, 0.1308748 ],
...,
[0.7800751, 0.59908629, -0.14315079, 0.10993772],
[0.77916118, 0.59945374, -0.14520157, 0.11171197],
[0.77038613, 0.61061868, -0.14375869, 0.11394512]])
>>> tilt.Q.shape
(1000, 4)
If we desire to estimate each sample independently, we call the
corresponding method with each sample individually.
>>> tilt = Tilt()
>>> num_samples = len(acc_data)
>>> Q = np.zeros((num_samples, 4)) # Allocate quaternions array
>>> for t in range(num_samples):
... Q[t] = tilt.estimate(acc_data[t])
...
>>> tilt.Q[:5]
array([[0.76901856, 0.60247641, -0.16815772, 0.13174072],
[0.77310283, 0.59724644, -0.16900433, 0.1305612 ],
[0.7735134, 0.59644005, -0.1697294, 0.1308748 ],
[0.77294791, 0.59913005, -0.16502363, 0.12791369],
[0.76936935, 0.60323746, -0.16540014, 0.12968487]])
Originally, this estimation computes first the Roll-Pitch-Yaw angles and
then converts them to Quaternions. If we desire the angles instead, we set
it so in the parameters.
>>> tilt = Tilt(acc_data, as_angles=True)
>>> type(tilt.Q), tilt.Q.shape
(<class 'numpy.ndarray'>, (1000, 3))
>>> tilt.Q[:5]
array([[8.27467200e-04, 4.36167791e-06, 0.00000000e+00],
[9.99352822e-04, 8.38015258e-05, 0.00000000e+00],
[1.30423484e-03, 1.72201573e-04, 0.00000000e+00],
[1.60337482e-03, 8.53081042e-05, 0.00000000e+00],
[1.98459171e-03, -8.34729603e-05, 0.00000000e+00]])
.. note::
It will return the angles, in degrees, following the standard order
``Roll->Pitch->Yaw``.
    The yaw angle is, as expected, equal to zero, because the heading cannot
    be estimated from the gravitational acceleration alone.
For this reason, magnetometer data can be used to estimate the yaw. This is
also implemented and the magnetometer will be taken into account when given
as parameter.
>>> tilt = Tilt(acc=acc_data, mag=mag_data, as_angles=True)
>>> tilt.Q[:5]
array([[8.27467200e-04, 4.36167791e-06, -4.54352439e-02],
[9.99352822e-04, 8.38015258e-05, -4.52836926e-02],
[1.30423484e-03, 1.72201573e-04, -4.49355365e-02],
[1.60337482e-03, 8.53081042e-05, -4.44276770e-02],
[1.98459171e-03, -8.34729603e-05, -4.36931634e-02]])
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None, **kwargs):
self.acc = acc
self.mag = mag
self.as_angles = kwargs.get('as_angles', False)
if self.acc is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""
Estimate the orientation given all data.
Attributes ``acc`` and ``mag`` must contain data. It is assumed that
these attributes have the same shape (M, 3), where M is the number of
observations.
The full estimation is vectorized, to avoid the use of a time-wasting
loop.
Returns
-------
Q : numpy.ndarray
M-by-4 array with all estimated quaternions, where M is the number
of samples. It returns an M-by-3 array, if the flag ``as_angles``
is set to ``True``.
"""
if self.acc.shape[-1]!=3:
raise ValueError("Input data must be of shape (N, 3). Got shape {}".format(self.acc.shape))
num_samples = len(self.acc)
Q = np.zeros((num_samples, 3)) if self.as_angles else np.zeros((num_samples, 4))
# Normalization of 2D arrays
a = self.acc/np.linalg.norm(self.acc, axis=1)[:, None]
angles = np.zeros((len(a), 3)) # Allocation of angles array
# Estimate tilt angles
angles[:, 0] = np.arctan2(a[:, 1], a[:, 2])
angles[:, 1] = np.arctan2(-a[:, 0], np.sqrt(a[:, 1]**2 + a[:, 2]**2))
if self.mag is not None:
# Estimate heading angle
m = self.mag/np.linalg.norm(self.mag, axis=1)[:, None]
my2 = m[:, 2]*np.sin(angles[:, 0]) - m[:, 1]*np.cos(angles[:, 0])
mz2 = m[:, 1]*np.sin(angles[:, 0]) + m[:, 2]*np.cos(angles[:, 0])
mx3 = m[:, 0]*np.cos(angles[:, 1]) + mz2*np.sin(angles[:, 1])
angles[:, 2] = np.arctan2(my2, mx3)
# Return angles in degrees
if self.as_angles:
return angles*RAD2DEG
# RPY to Quaternion
cp = np.cos(0.5*angles[:, 1])
sp = np.sin(0.5*angles[:, 1])
cr = np.cos(0.5*angles[:, 0])
sr = np.sin(0.5*angles[:, 0])
cy = np.cos(0.5*angles[:, 2])
sy = np.sin(0.5*angles[:, 2])
Q[:, 0] = cy*cp*cr + sy*sp*sr
Q[:, 1] = cy*cp*sr - sy*sp*cr
Q[:, 2] = sy*cp*sr + cy*sp*cr
Q[:, 3] = sy*cp*cr - cy*sp*sr
return Q/np.linalg.norm(Q, axis=1)[:, None]
def estimate(self, acc: np.ndarray, mag: np.ndarray = None) -> np.ndarray:
"""
Estimate the quaternion from the tilting read by an orthogonal
tri-axial array of accelerometers.
The orientation of the roll and pitch angles is estimated using the
measurements of the accelerometers, and finally converted to a
quaternion representation according to [WikiConversions]_
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2.
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT.
Returns
-------
q : numpy.ndarray
Estimated attitude.
Examples
--------
>>> acc_data = np.array([4.098297, 8.663757, 2.1355896])
>>> mag_data = np.array([-28.71550512, -25.92743566, 4.75683931])
>>> from ahrs.filters import Tilt
>>> tilt = Tilt()
>>> tilt.estimate(acc=acc_data, mag=mag_data) # Estimate attitude as quaternion
array([0.09867706 0.33683592 0.52706394 0.77395607])
Optionally, the attitude can be retrieved as roll-pitch-yaw angles, in
degrees.
>>> tilt = Tilt(as_angles=True)
>>> tilt.estimate(acc=acc_data, mag=mag_data)
array([ 76.15281566 -24.66891862 146.02634429])
"""
a_norm = np.linalg.norm(acc)
if not a_norm>0:
if self.as_angles:
return np.zeros(3)
return np.array([1.0, 0.0, 0.0, 0.0])
ax, ay, az = acc/a_norm
### Tilt from Accelerometer
ex = np.arctan2( ay, az) # Roll
ey = np.arctan2(-ax, np.sqrt(ay**2 + az**2)) # Pitch
ez = 0.0 # Yaw
if mag is not None and np.linalg.norm(mag)>0:
mx, my, mz = mag/np.linalg.norm(mag)
# Get tilted reference frame
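            # bx, by: horizontal components of the magnetic field after the
            # roll and pitch tilt is compensated; their arctangent gives the heading.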
by = my*np.cos(ex) - mz*np.sin(ex)
bx = mx*np.cos(ey) + np.sin(ey)*(my*np.sin(ex) + mz*np.cos(ex))
ez = np.arctan2(-by, bx)
if self.as_angles:
return np.array([ex, ey, ez])*RAD2DEG
#### Euler to Quaternion
cp = np.cos(0.5*ey)
sp = np.sin(0.5*ey)
cr = np.cos(0.5*ex)
sr = np.sin(0.5*ex)
cy = np.cos(0.5*ez)
sy = np.sin(0.5*ez)
q = np.zeros(4)
q[0] = cy*cp*cr + sy*sp*sr
q[1] = cy*cp*sr - sy*sp*cr
q[2] = sy*cp*sr + cy*sp*cr
q[3] = sy*cp*cr - cy*sp*sr
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/tilt.py | tilt.py |
import numpy as np
from ..common.orientation import q2R, ecompass, acc2q
from ..common.mathfuncs import cosd, sind, skew
class EKF:
"""
Extended Kalman Filter to estimate orientation as Quaternion.
Examples
--------
>>> import numpy as np
>>> from ahrs.filters import EKF
>>> from ahrs.common.orientation import acc2q
>>> ekf = EKF()
>>> num_samples = 1000 # Assuming sensors have 1000 samples each
>>> Q = np.zeros((num_samples, 4)) # Allocate array for quaternions
>>> Q[0] = acc2q(acc_data[0]) # First sample of tri-axial accelerometer
>>> for t in range(1, num_samples):
... Q[t] = ekf.update(Q[t-1], gyr_data[t], acc_data[t])
The estimation is simplified by giving the sensor values at the
construction of the EKF object. This will perform all steps above and store
the estimated orientations, as quaternions, in the attribute ``Q``.
>>> ekf = EKF(gyr=gyr_data, acc=acc_data)
>>> ekf.Q.shape
(1000, 4)
In this case, the measurement vector, set in the attribute ``z``, is equal
to the measurements of the accelerometer. If extra information from a
magnetometer is available, it will also be considered to estimate the
attitude.
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data)
>>> ekf.Q.shape
(1000, 4)
For this case, the measurement vector contains the accelerometer and
magnetometer measurements together: ``z = [acc_data, mag_data]`` at each
time :math:`t`.
The most common sampling frequency is 100 Hz, which is used in the filter.
If that is different in the given sensor data, it can be changed too.
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, frequency=200.0)
Normally, when using the magnetic data, a referencial magnetic field must
be given. This filter computes the local magnetic field in Munich, Germany,
but it can also be set to a different reference with the parameter
``mag_ref``.
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, magnetic_ref=[17.06, 0.78, 34.39])
If the full referencial vector is not available, the magnetic dip angle, in
degrees, can be also used.
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, magnetic_ref=60.0)
The initial quaternion is estimated with the first observations of the
tri-axial accelerometers and magnetometers, but it can also be given
directly in the parameter ``q0``.
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, q0=[0.7071, 0.0, -0.7071, 0.0])
Measurement noise variances must be set from each sensor, so that the
Process and Measurement Covariance Matrix can be built. They are set in an
array equal to ``[0.3**2, 0.5**2, 0.8**2]`` for the gyroscope,
accelerometer and magnetometer, respectively. If a different set of noise
variances is used, they can be set with the parameter ``noises``:
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, noises=[0.1**2, 0.3**2, 0.5**2])
or the individual variances can be set separately too:
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, var_acc=0.3**2)
This class can also differentiate between NED and ENU frames. By default it
estimates the orientations using the NED frame, but ENU is used if set in
its parameter:
>>> ekf = EKF(gyr=gyr_data, acc=acc_data, mag=mag_data, frame='ENU')
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s
acc : numpy.ndarray, default: None
        N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
frequency : float, default: 100.0
        Sampling frequency in Hertz.
frame : str, default: 'NED'
Local tangent plane coordinate frame. Valid options are right-handed
``'NED'`` for North-East-Down and ``'ENU'`` for East-North-Up.
q0 : numpy.ndarray, default: None
Initial orientation, as a versor (normalized quaternion).
magnetic_ref : float or numpy.ndarray
Local magnetic reference.
noises : numpy.ndarray
List of noise variances for each type of sensor. Default values:
``[0.3**2, 0.5**2, 0.8**2]``.
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. NOT required
if ``frequency`` value is given.
"""
def __init__(self,
gyr: np.ndarray = None,
acc: np.ndarray = None,
mag: np.ndarray = None,
frequency: float = 100.0,
frame: str = 'NED',
**kwargs):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.frequency = frequency
self.frame = frame # Local tangent plane coordinate frame
self.Dt = kwargs.get('Dt', 1.0/self.frequency)
self.q0 = kwargs.get('q0')
self.P = np.identity(4) # Initial state covariance
self.R = self._set_measurement_noise_covariance(**kwargs)
self._set_reference_frames(kwargs.get('magnetic_ref'), self.frame)
# Process of data is given
if self.gyr is not None and self.acc is not None:
self.Q = self._compute_all(self.frame)
def _set_measurement_noise_covariance(self, **kw) -> np.ndarray:
self.noises = np.array(kw.get('noises', [0.3**2, 0.5**2, 0.8**2]))
if 'var_gyr' in kw:
self.noises[0] = kw.get('var_gyr')
if 'var_acc' in kw:
self.noises[1] = kw.get('var_acc')
if 'var_mag' in kw:
self.noises[2] = kw.get('var_mag')
self.g_noise, self.a_noise, self.m_noise = self.noises
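        # Measurement noise covariance: a diagonal matrix with the accelerometer
        # variance on the first three entries and the magnetometer variance on the last three.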
return np.diag(np.repeat(self.noises[1:], 3))
def _set_reference_frames(self, mref: float, frame: str = 'NED') -> None:
if frame.upper() not in ['NED', 'ENU']:
raise ValueError(f"Invalid frame '{frame}'. Try 'NED' or 'ENU'")
# Magnetic Reference Vector
if mref is None:
# Local magnetic reference of Munich, Germany
from ..common.mathfuncs import MUNICH_LATITUDE, MUNICH_LONGITUDE, MUNICH_HEIGHT
from ..utils.wmm import WMM
wmm = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT)
self.m_ref = np.array([wmm.X, wmm.Y, wmm.Z]) if frame.upper() == 'NED' else np.array([wmm.Y, wmm.X, -wmm.Z])
elif isinstance(mref, (int, float)):
cd, sd = cosd(mref), sind(mref)
self.m_ref = np.array([cd, 0.0, sd]) if frame.upper() == 'NED' else np.array([0.0, cd, -sd])
else:
self.m_ref = np.copy(mref)
self.m_ref /= np.linalg.norm(self.m_ref)
# Gravitational Reference Vector
self.a_ref = np.array([0.0, 0.0, -1.0]) if frame.upper() == 'NED' else np.array([0.0, 0.0, 1.0])
def _compute_all(self, frame: str) -> np.ndarray:
"""
Estimate the quaternions given all sensor data.
Attributes ``gyr``, ``acc`` MUST contain data. Attribute ``mag`` is
optional.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
Q[0] = self.q0
if self.mag is not None:
###### Compute attitude with MARG architecture ######
if self.mag.shape != self.gyr.shape:
raise ValueError("mag and gyr are not the same size")
if self.q0 is None:
Q[0] = ecompass(self.acc[0], self.mag[0], frame=frame, representation='quaternion')
Q[0] /= np.linalg.norm(Q[0])
# EKF Loop over all data
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
###### Compute attitude with IMU architecture ######
if self.q0 is None:
Q[0] = acc2q(self.acc[0])
Q[0] /= np.linalg.norm(Q[0])
# EKF Loop over all data
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], self.acc[t])
return Q
def Omega(self, x: np.ndarray) -> np.ndarray:
"""Omega operator.
Given a vector :math:`\\mathbf{x}\\in\\mathbb{R}^3`, return a
:math:`4\\times 4` matrix of the form:
.. math::
\\boldsymbol\\Omega(\\mathbf{x}) =
\\begin{bmatrix}
0 & -\\mathbf{x}^T \\\\ \\mathbf{x} & \\lfloor\\mathbf{x}\\rfloor_\\times
\\end{bmatrix} =
\\begin{bmatrix}
0 & -x_1 & -x_2 & -x_3 \\\\
x_1 & 0 & x_3 & -x_2 \\\\
x_2 & -x_3 & 0 & x_1 \\\\
x_3 & x_2 & -x_1 & 0
\\end{bmatrix}
This operator is constantly used at different steps of the EKF.
Parameters
----------
x : numpy.ndarray
Three-dimensional vector.
Returns
-------
Omega : numpy.ndarray
Omega matrix.
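        Example
        -------
        A small numeric sketch showing the structure of the operator:
        >>> ekf = EKF()
        >>> ekf.Omega(np.array([1.0, 2.0, 3.0]))
        array([[ 0., -1., -2., -3.],
               [ 1.,  0.,  3., -2.],
               [ 2., -3.,  0.,  1.],
               [ 3.,  2., -1.,  0.]])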
"""
return np.array([
[0.0, -x[0], -x[1], -x[2]],
[x[0], 0.0, x[2], -x[1]],
[x[1], -x[2], 0.0, x[0]],
[x[2], x[1], -x[0], 0.0]])
def f(self, q: np.ndarray, omega: np.ndarray) -> np.ndarray:
"""Linearized function of Process Model (Prediction.)
.. math::
\\mathbf{f}(\\mathbf{q}_{t-1}) = \\Big(\\mathbf{I}_4 + \\frac{\\Delta t}{2}\\boldsymbol\\Omega_t\\Big)\\mathbf{q}_{t-1} =
\\begin{bmatrix}
q_w - \\frac{\\Delta t}{2} \\omega_x q_x - \\frac{\\Delta t}{2} \\omega_y q_y - \\frac{\\Delta t}{2} \\omega_z q_z\\\\
q_x + \\frac{\\Delta t}{2} \\omega_x q_w - \\frac{\\Delta t}{2} \\omega_y q_z + \\frac{\\Delta t}{2} \\omega_z q_y\\\\
q_y + \\frac{\\Delta t}{2} \\omega_x q_z + \\frac{\\Delta t}{2} \\omega_y q_w - \\frac{\\Delta t}{2} \\omega_z q_x\\\\
q_z - \\frac{\\Delta t}{2} \\omega_x q_y + \\frac{\\Delta t}{2} \\omega_y q_x + \\frac{\\Delta t}{2} \\omega_z q_w
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
omega : numpy.ndarray
Angular velocity, in rad/s.
Returns
-------
q : numpy.ndarray
Linearized estimated quaternion in **Prediction** step.
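        Example
        -------
        A propagation sketch using the default sampling step ``Dt = 0.01`` s;
        the gyroscope reading below is illustrative only:
        >>> ekf = EKF()
        >>> q = np.array([1.0, 0.0, 0.0, 0.0])
        >>> omega = np.array([0.1, -0.2, 0.3])     # angular velocity, in rad/s
        >>> q_pred = ekf.f(q, omega)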
"""
Omega_t = self.Omega(omega)
return (np.identity(4) + 0.5*self.Dt*Omega_t) @ q
def dfdq(self, omega: np.ndarray) -> np.ndarray:
"""Jacobian of linearized predicted state.
.. math::
\\mathbf{F} = \\frac{\\partial\\mathbf{f}(\\mathbf{q}_{t-1})}{\\partial\\mathbf{q}} =
\\begin{bmatrix}
1 & - \\frac{\\Delta t}{2} \\omega_x & - \\frac{\\Delta t}{2} \\omega_y & - \\frac{\\Delta t}{2} \\omega_z\\\\
\\frac{\\Delta t}{2} \\omega_x & 1 & \\frac{\\Delta t}{2} \\omega_z & - \\frac{\\Delta t}{2} \\omega_y\\\\
\\frac{\\Delta t}{2} \\omega_y & - \\frac{\\Delta t}{2} \\omega_z & 1 & \\frac{\\Delta t}{2} \\omega_x\\\\
\\frac{\\Delta t}{2} \\omega_z & \\frac{\\Delta t}{2} \\omega_y & - \\frac{\\Delta t}{2} \\omega_x & 1
\\end{bmatrix}
Parameters
----------
omega : numpy.ndarray
Angular velocity in rad/s.
Returns
-------
F : numpy.ndarray
Jacobian of state.
"""
x = 0.5*self.Dt*omega
return np.identity(4) + self.Omega(x)
def h(self, q: np.ndarray) -> np.ndarray:
"""Measurement Model
If only the gravitational acceleration is used to correct the
estimation, a vector with 3 elements is used:
.. math::
\\mathbf{h}(\\hat{\\mathbf{q}}_t) =
2 \\begin{bmatrix}
g_x (\\frac{1}{2} - q_y^2 - q_z^2) + g_y (q_wq_z + q_xq_y) + g_z (q_xq_z - q_wq_y) \\\\
g_x (q_xq_y - q_wq_z) + g_y (\\frac{1}{2} - q_x^2 - q_z^2) + g_z (q_wq_x + q_yq_z) \\\\
g_x (q_wq_y + q_xq_z) + g_y (q_yq_z - q_wq_x) + g_z (\\frac{1}{2} - q_x^2 - q_y^2)
\\end{bmatrix}
If the gravitational acceleration and the geomagnetic field are used,
then a vector with 6 elements is used:
.. math::
\\mathbf{h}(\\hat{\\mathbf{q}}_t) =
2 \\begin{bmatrix}
g_x (\\frac{1}{2} - q_y^2 - q_z^2) + g_y (q_wq_z + q_xq_y) + g_z (q_xq_z - q_wq_y) \\\\
g_x (q_xq_y - q_wq_z) + g_y (\\frac{1}{2} - q_x^2 - q_z^2) + g_z (q_wq_x + q_yq_z) \\\\
g_x (q_wq_y + q_xq_z) + g_y (q_yq_z - q_wq_x) + g_z (\\frac{1}{2} - q_x^2 - q_y^2) \\\\
r_x (\\frac{1}{2} - q_y^2 - q_z^2) + r_y (q_wq_z + q_xq_y) + r_z (q_xq_z - q_wq_y) \\\\
r_x (q_xq_y - q_wq_z) + r_y (\\frac{1}{2} - q_x^2 - q_z^2) + r_z (q_wq_x + q_yq_z) \\\\
r_x (q_wq_y + q_xq_z) + r_y (q_yq_z - q_wq_x) + r_z (\\frac{1}{2} - q_x^2 - q_y^2)
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray
Predicted Quaternion.
Returns
-------
numpy.ndarray
Expected Measurements.
"""
C = q2R(q).T
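        # C is the rotation matrix of the predicted attitude, transposed to map
        # the reference vectors from the global frame into the sensor frame.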
if len(self.z) < 4:
return C @ self.a_ref
return np.r_[C @ self.a_ref, C @ self.m_ref]
def dhdq(self, q: np.ndarray, mode: str = 'normal') -> np.ndarray:
"""Linearization of observations with Jacobian.
If only the gravitational acceleration is used to correct the
estimation, a :math:`3\\times 4` matrix:
.. math::
\\mathbf{H}(\\hat{\\mathbf{q}}_t) =
2 \\begin{bmatrix}
g_yq_z - g_zq_y & g_yq_y + g_zq_z & - 2g_xq_y + g_yq_x - g_zq_w & - 2g_xq_z + g_yq_w + g_zq_x \\\\
-g_xq_z + g_zq_x & g_xq_y - 2g_yq_x + g_zq_w & g_xq_x + g_zq_z & -g_xq_w - 2g_yq_z + g_zq_y \\\\
g_xq_y - g_yq_x & g_xq_z - g_yq_w - 2g_zq_x & g_xq_w + g_yq_z - 2g_zq_y & g_xq_x + g_yq_y
\\end{bmatrix}
If the gravitational acceleration and the geomagnetic field are used,
then a :math:`6\\times 4` matrix is used:
.. math::
\\mathbf{H}(\\hat{\\mathbf{q}}_t) =
2 \\begin{bmatrix}
g_yq_z - g_zq_y & g_yq_y + g_zq_z & - 2g_xq_y + g_yq_x - g_zq_w & - 2g_xq_z + g_yq_w + g_zq_x \\\\
-g_xq_z + g_zq_x & g_xq_y - 2g_yq_x + g_zq_w & g_xq_x + g_zq_z & -g_xq_w - 2g_yq_z + g_zq_y \\\\
g_xq_y - g_yq_x & g_xq_z - g_yq_w - 2g_zq_x & g_xq_w + g_yq_z - 2g_zq_y & g_xq_x + g_yq_y \\\\
r_yq_z - r_zq_y & r_yq_y + r_zq_z & - 2r_xq_y + r_yq_x - r_zq_w & - 2r_xq_z + r_yq_w + r_zq_x \\\\
- r_xq_z + r_zq_x & r_xq_y - 2r_yq_x + r_zq_w & r_xq_x + r_zq_z & - r_xq_w - 2r_yq_z + r_zq_y \\\\
r_xq_y - r_yq_x & r_xq_z - r_yq_w - 2r_zq_x & r_xq_w + r_yq_z - 2r_zq_y & r_xq_x + r_yq_y
\\end{bmatrix}
If ``mode`` is equal to ``'refactored'``, the computation is carried
out as:
.. math::
\\mathbf{H}(\\hat{\\mathbf{q}}_t) = 2
\\begin{bmatrix}
\\mathbf{u}_g & \\lfloor\\mathbf{u}_g+\\hat{q}_w\\mathbf{g}\\rfloor_\\times + (\\hat{\\mathbf{q}}_v\\cdot\\mathbf{g})\\mathbf{I}_3 - \\mathbf{g}\\hat{\\mathbf{q}}_v^T \\\\
\\mathbf{u}_r & \\lfloor\\mathbf{u}_r+\\hat{q}_w\\mathbf{r}\\rfloor_\\times + (\\hat{\\mathbf{q}}_v\\cdot\\mathbf{r})\\mathbf{I}_3 - \\mathbf{r}\\hat{\\mathbf{q}}_v^T
\\end{bmatrix}
.. warning::
The refactored mode might lead to slightly different results as it
employs more (and different) floating-point operations than the normal
mode, and is therefore more sensitive to the numerical precision of the host system.
Parameters
----------
q : numpy.ndarray
Predicted state estimate.
mode : str, default: 'normal'
Computation mode for Observation matrix.
Returns
-------
H : numpy.ndarray
Jacobian of observations.
"""
if mode.lower() not in ['normal', 'refactored']:
raise ValueError(f"Mode '{mode}' is invalid. Try 'normal' or 'refactored'.")
qw, qx, qy, qz = q
if mode.lower() == 'refactored':
t = skew(self.a_ref)@q[1:]
H = np.c_[t, (q[1:]@self.a_ref)*np.identity(3) + skew(t + qw*self.a_ref) - np.outer(self.a_ref, q[1:])]  # (q_v . g) I_3, as in the expression above
if len(self.z) == 6:
t = skew(self.m_ref)@q[1:]
H_2 = np.c_[t, (q[1:]@self.m_ref)*np.identity(3) + skew(t + qw*self.m_ref) - np.outer(self.m_ref, q[1:])]  # (q_v . r) I_3
H = np.vstack((H, H_2))
return 2.0*H
v = np.r_[self.a_ref, self.m_ref]
H = np.array([[-qy*v[2] + qz*v[1], qy*v[1] + qz*v[2], -qw*v[2] + qx*v[1] - 2.0*qy*v[0], qw*v[1] + qx*v[2] - 2.0*qz*v[0]],
[ qx*v[2] - qz*v[0], qw*v[2] - 2.0*qx*v[1] + qy*v[0], qx*v[0] + qz*v[2], -qw*v[0] + qy*v[2] - 2.0*qz*v[1]],
[-qx*v[1] + qy*v[0], -qw*v[1] - 2.0*qx*v[2] + qz*v[0], qw*v[0] - 2.0*qy*v[2] + qz*v[1], qx*v[0] + qy*v[1]]])
if len(self.z) == 6:
H_2 = np.array([[-qy*v[5] + qz*v[4], qy*v[4] + qz*v[5], -qw*v[5] + qx*v[4] - 2.0*qy*v[3], qw*v[4] + qx*v[5] - 2.0*qz*v[3]],
[ qx*v[5] - qz*v[3], qw*v[5] - 2.0*qx*v[4] + qy*v[3], qx*v[3] + qz*v[5], -qw*v[3] + qy*v[5] - 2.0*qz*v[4]],
[-qx*v[4] + qy*v[3], -qw*v[4] - 2.0*qx*v[5] + qz*v[3], qw*v[3] - 2.0*qy*v[5] + qz*v[4], qx*v[3] + qy*v[4]]])
H = np.vstack((H, H_2))
return 2.0*H
def update(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray = None) -> np.ndarray:
"""
Perform an update of the state.
Parameters
----------
q : numpy.ndarray
A-priori orientation as quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s.
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2.
mag : numpy.ndarray
Sample of tri-axial Magnetometer in uT.
Returns
-------
q : numpy.ndarray
Estimated a-posteriori orientation as quaternion.
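Examples
--------
A minimal sketch, assuming ``ekf`` is an already configured instance of
this class, and ``gyr_s``, ``acc_s`` and ``mag_s`` are single tri-axial
samples in the units listed above:
>>> q = np.array([1.0, 0.0, 0.0, 0.0])      # normalized a-priori quaternion
>>> q = ekf.update(q, gyr_s, acc_s)         # correction with accelerometer only
>>> q = ekf.update(q, gyr_s, acc_s, mag_s)  # correction with accelerometer and magnetometer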
"""
if not np.isclose(np.linalg.norm(q), 1.0):
raise ValueError("A-priori quaternion must have a norm equal to 1.")
# Current Measurements
g = np.copy(gyr) # Gyroscope data as control vector
a = np.copy(acc)
a_norm = np.linalg.norm(a)
if a_norm == 0:
return q
a /= a_norm
self.z = np.copy(a)
if mag is not None:
m_norm = np.linalg.norm(mag)
if m_norm == 0:
raise ValueError(f"Invalid geomagnetic field. Its magnitude must be greater than zero.")
self.z = np.r_[a, mag/m_norm]
self.R = np.diag(np.repeat(self.noises[1:] if mag is not None else self.noises[1], 3))
# ----- Prediction -----
q_t = self.f(q, g) # Predicted State
F = self.dfdq(g) # Linearized Fundamental Matrix
W = 0.5*self.Dt * np.r_[[-q[1:]], q[0]*np.identity(3) + skew(q[1:])] # Jacobian W = df/dω
Q_t = 0.5*self.Dt * self.g_noise * [email protected] # Process Noise Covariance
P_t = [email protected]@F.T + Q_t # Predicted Covariance Matrix
# ----- Correction -----
y = self.h(q_t) # Expected Measurement function
v = self.z - y # Innovation (Measurement Residual)
H = self.dhdq(q_t) # Linearized Measurement Matrix
S = H@[email protected] + self.R # Measurement Prediction Covariance
K = [email protected]@np.linalg.inv(S) # Kalman Gain
self.P = (np.identity(4) - K@H)@P_t
self.q = q_t + K@v # Corrected state
self.q /= np.linalg.norm(self.q)
return self.q | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/ekf.py | ekf.py |
import numpy as np
from ..common.orientation import q_prod, q2R
from ..common.constants import MUNICH_LATITUDE, MUNICH_HEIGHT
# Reference Observations in Munich, Germany
from ..utils.wgs84 import WGS
GRAVITY = WGS().normal_gravity(MUNICH_LATITUDE, MUNICH_HEIGHT)
def slerp_I(q: np.ndarray, ratio: float, t: float) -> np.ndarray:
"""
Interpolation with identity quaternion
Interpolate a given quaternion with the identity quaternion
:math:`\\mathbf{q}_I=\\begin{pmatrix}1 & 0 & 0 & 0\\end{pmatrix}` to
scale it to closest versor.
The interpolation can be with either LERP (Linear) or SLERP (Spherical
Linear) methods, decided by a threshold value :math:`t`, which lies
between ``0.0`` and ``1.0``.
.. math::
\\mathrm{method} = \\left\\{
\\begin{array}{ll}
\\mathrm{LERP} & \\: q_w > t \\\\
\\mathrm{SLERP} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
For LERP, a simple equation is implemented:
.. math::
\\hat{\\mathbf{q}} = (1-\\alpha)\\mathbf{q}_I + \\alpha\\Delta \\mathbf{q}
where :math:`\\alpha\\in [0, 1]` is the gain characterizing the cut-off
frequency of the filter. It basically decides how "close" to the given
quaternion or to the identity quaternion the interpolation is.
If the scalar part :math:`q_w` of the given quaternion is below the
threshold :math:`t`, SLERP is used:
.. math::
\\hat{\\mathbf{q}} = \\frac{\\sin([1-\\alpha]\\Omega)}{\\sin\\Omega} \\mathbf{q}_I + \\frac{\\sin(\\alpha\\Omega)}{\\sin\\Omega} \\mathbf{q}
where :math:`\\Omega=\\arccos(q_w)` is the subtended arc between the
quaternions.
Parameters
----------
q : numpy.array
Quaternion to interpolate with.
ratio : float
Gain characterizing the cut-off frequency of the filter.
t : float
Threshold deciding interpolation method. LERP when qw>t, otherwise
SLERP.
Returns
-------
q : numpy.array
Interpolated quaternion
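Examples
--------
A small sketch with illustrative values, where the correction quaternion
is close to the identity and is scaled down by the gain:
>>> from ahrs.filters.aqua import slerp_I
>>> dq = np.array([0.9995, 0.02, 0.015, 0.01])
>>> dq = slerp_I(dq, 0.01, 0.9)     # q_w > 0.9, so LERP is used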
"""
q_I = np.array([1.0, 0.0, 0.0, 0.0])
if q[0]>t: # LERP
q = (1.0-ratio)*q_I + ratio*q # (eq. 50)
else: # SLERP
angle = np.arccos(q[0])
q = q_I*np.sin(abs(1.0-ratio)*angle)/np.sin(angle) + q*np.sin(ratio*angle)/np.sin(angle) # (eq. 52)
q /= np.linalg.norm(q) # (eq. 51)
return q
def adaptive_gain(gain: float, a_local: np.ndarray, t1: float = 0.1, t2: float = 0.2, g: float = GRAVITY) -> float:
"""
Adaptive filter gain factor
The estimated gain :math:`\\alpha` is dependent on the gain factor
:math:`f(e_m)`:
.. math::
\\alpha = a f(e_m)
where the magnitude error is defined by the measured acceleration
:math:`\\mathbf{a}` before normalization and the reference gravity
:math:`g\\approx 9.809196 \\, \\frac{m}{s^2}`:
.. math::
e_m = \\frac{|\\|\\mathbf{a}\\|-g|}{g}
The gain factor is constant and equal to 1 when the magnitude of the
nongravitational acceleration is not high enough to overcome gravity.
If nongravitational acceleration rises and :math:`e_m` exceeds the
first threshold, the gain factor :math:`f` decreases linearly with the
increase of the magnitude until reaching zero at the second threshold
and above it.
.. math::
f(e_m) =
\\left\\{
\\begin{array}{ll}
1 & \\mathrm{if}\\; e_m \\leq t_1 \\\\
\\frac{t_2-e_m}{t_1} & \\mathrm{if}\\; t_1 < e_m < t_2 \\\\
0 & \\mathrm{if}\\; e_m \\geq t_2
\\end{array}
\\right.
Empirically, both thresholds have been defined at ``0.1`` and ``0.2``,
respectively. They can be, however, changed by setting the values of
input parameters ``t1`` and ``t2``.
Parameters
----------
gain : float
Gain yielding best results in static conditions.
a_local : numpy.ndarray
Measured local acceleration vector.
t1 : float, default: 0.1
First threshold
t2 : float, default: 0.2
Second threshold
g : float, default: 9.809196
Reference gravitational acceleration in m/s^2. The estimated gravity in
Munich, Germany (``9.809196``) is used as default reference value.
Returns
-------
alpha : float
Gain factor
Examples
--------
>>> from ahrs.filters.aqua import adaptive_gain
>>> alpha = 0.01 # Best gain in static conditions
>>> acc = np.array([0.0699, 9.7688, -0.2589]) # Measured acceleration. Quasi-static state.
>>> adaptive_gain(alpha, acc)
0.01
>>> acc = np.array([0.8868, 10.8803, -0.4562]) # New measured acceleration. Slightly above first threshold.
>>> adaptive_gain(alpha, acc)
0.008615664547367627
>>> acc = np.array([4.0892, 12.7667, -2.6047]) # New measured acceleration. Above second threshold.
>>> adaptive_gain(alpha, acc)
0.0
>>> adaptive_gain(alpha, acc, t1=0.2, t2=0.5) # Same acceleration. New thresholds.
0.005390131074499384
>>> adaptive_gain(alpha, acc, t1=0.2, t2=0.5, g=9.82) # Same acceleration and thresholds. New reference gravity.
0.005466716107480152
"""
if t1 > t2:
raise ValueError("The second threshold should be greater than the first threshold.")
em = abs(np.linalg.norm(a_local)-g)/g # Magnitude error (eq. 60)
f = 0.0
if t1 < em < t2:
f = (t2-em)/t1
if em <= t1:
f = 1.0
return f*gain # Filtering gain (eq. 61)
class AQUA:
"""
Algebraic Quaternion Algorithm
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in nT
frequency : float, default: 100.0
Sampling frequency in Hertz
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. Not required
if ``frequency`` value is given.
alpha : float, default: 0.01
Gain characterizing cut-off frequency for accelerometer quaternion
beta : float, default: 0.01
Gain characterizing cut-off frequency for magnetometer quaternion
threshold : float, default: 0.9
Threshold to discriminate between LERP and SLERP interpolation
adaptive : bool, default: False
Whether to use an adaptive gain for each sample
q0 : numpy.ndarray, default: None
Initial orientation, as a versor (normalized quaternion).
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N gyroscope samples.
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
frequency : float
Sampling frequency in Hertz
Dt : float
Sampling step in seconds. Inverse of sampling frequency.
alpha : float
Gain characterizing cut-off frequency for accelerometer quaternion.
beta : float
Gain characterizing cut-off frequency for magnetometer quaternion.
threshold : float
Threshold to discern between LERP and SLERP interpolation.
adaptive : bool
Flag indicating use of adaptive gain.
q0 : numpy.ndarray
Initial orientation, as a versor (normalized quaternion).
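Examples
--------
Assuming ``gyr_data``, ``acc_data`` and ``mag_data`` are arrays with
1000 synchronized tri-axial samples each, all attitudes are estimated
at once by handing the data to the constructor:
>>> from ahrs.filters import AQUA
>>> orientation = AQUA(gyr=gyr_data, acc=acc_data, mag=mag_data)
>>> orientation.Q.shape
(1000, 4)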
"""
def __init__(self, gyr: np.ndarray = None, acc: np.ndarray = None, mag: np.ndarray = None, **kw):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.Q = None
self.frequency = kw.get('frequency', 100.0)
self.frame = kw.get('frame', 'NED')
self.Dt = kw.get('Dt', 1.0/self.frequency)
self.alpha = kw.get('alpha', 0.01)
self.beta = kw.get('beta', 0.01)
self.threshold = kw.get('threshold', 0.9)
self.adaptive = kw.get('adaptive', False)
self.q0 = kw.get('q0')
if self.acc is not None and self.gyr is not None:
self.Q = self._compute_all()
def _compute_all(self):
"""Estimate all quaternions with given sensor values"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
num_samples = len(self.gyr)
Q = np.zeros((num_samples, 4))
# Compute with IMU architecture
if self.mag is None:
Q[0] = self.estimate(self.acc[0]) if self.q0 is None else self.q0.copy()
for t in range(1, num_samples):
Q[t] = self.updateIMU(Q[t-1], self.gyr[t], self.acc[t])
return Q
# Compute with MARG architecture
if self.mag.shape != self.gyr.shape:
raise ValueError("mag and gyr are not the same size")
Q[0] = self.estimate(self.acc[0], self.mag[0]) if self.q0 is None else np.copy(self.q0)
for t in range(1, num_samples):
Q[t] = self.updateMARG(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def Omega(self, x: np.ndarray) -> np.ndarray:
"""Omega operator.
Given a vector :math:`\\mathbf{x}\\in\\mathbb{R}^3`, return a
:math:`4\\times 4` matrix of the form:
.. math::
\\boldsymbol\\Omega(\\mathbf{x}) =
\\begin{bmatrix}
0 & \\mathbf{x}^T \\\\ -\\mathbf{x} & -\\lfloor\\mathbf{x}\\rfloor_\\times
\\end{bmatrix} =
\\begin{bmatrix}
0 & x_1 & x_2 & x_3 \\\\
-x_1 & 0 & x_3 & -x_2 \\\\
-x_2 & -x_3 & 0 & x_1 \\\\
-x_3 & x_2 & -x_1 & 0
\\end{bmatrix}
This operator is a simplification to create a 4-by-4 matrix used for
the product between the angular rate and a quaternion, such that:
.. math::
^L_G\\dot{\\mathbf{q}}_{\\omega, t_k} = \\frac{1}{2}\\boldsymbol\\Omega(\\,^L\\mathbf{\\omega}_{q, t_k})\\;^L_G\\mathbf{q}_{t_{k-1}}
.. note::
The original definition in the article (eq. 39) has an errata
missing the multiplication with :math:`\\frac{1}{2}`.
Parameters
----------
x : numpy.ndarray
Three-dimensional vector representing the angular rate around the
three axes of the local frame.
Returns
-------
Omega : numpy.ndarray
Omega matrix.
"""
return np.array([
[0.0, x[0], x[1], x[2]],
[-x[0], 0.0, x[2], -x[1]],
[-x[1], -x[2], 0.0, x[0]],
[-x[2], x[1], -x[0], 0.0]])
def estimate(self, acc: np.ndarray, mag: np.ndarray = None) -> np.ndarray:
"""
Quaternion from Earth-Field Observations
Algebraic estimation of a quaternion as a function of an observation of
the Earth's gravitational and magnetic fields.
It decomposes the quaternion :math:`\\mathbf{q}` into two auxiliary
quaternions :math:`\\mathbf{q}_{\\mathrm{acc}}` and
:math:`\\mathbf{q}_{\\mathrm{mag}}`, such that:
.. math::
\\mathbf{q} = \\mathbf{q}_{\\mathrm{acc}}\\mathbf{q}_{\\mathrm{mag}}
Parameters
----------
acc : numpy.ndarray, default: None
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray, default: None
Sample of tri-axial Magnetometer in mT
Returns
-------
q : numpy.ndarray
Estimated quaternion.
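Examples
--------
A small sketch: with a purely vertical acceleration (gravity only) and
no magnetometer sample, the estimated attitude is the identity quaternion:
>>> from ahrs.filters import AQUA
>>> aqua = AQUA()
>>> q = aqua.estimate(acc=np.array([0.0, 0.0, 9.81]))    # ~[1, 0, 0, 0]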
"""
ax, ay, az = acc/np.linalg.norm(acc)
# Quaternion from Accelerometer Readings (eq. 25)
if az >= 0:
q_acc = np.array([np.sqrt((az+1)/2), -ay/np.sqrt(2*(az+1)), ax/np.sqrt(2*(az+1)), 0.0])
else:
q_acc = np.array([-ay/np.sqrt(2*(1-az)), np.sqrt((1-az)/2.0), 0.0, ax/np.sqrt(2*(1-az))])
q_acc /= np.linalg.norm(q_acc)
if mag is not None:
m_norm = np.linalg.norm(mag)
if m_norm == 0:
raise ValueError(f"Invalid geomagnetic field. Its magnitude must be greater than zero.")
lx, ly, lz = q2R(q_acc).T @ (mag/np.linalg.norm(mag)) # (eq. 26)
Gamma = lx**2 + ly**2 # (eq. 28)
# Quaternion from Magnetometer Readings (eq. 35)
if lx >= 0:
q_mag = np.array([np.sqrt(Gamma+lx*np.sqrt(Gamma))/np.sqrt(2*Gamma), 0.0, 0.0, ly/np.sqrt(2*(Gamma+lx*np.sqrt(Gamma)))])
else:
q_mag = np.array([ly/np.sqrt(2*(Gamma-lx*np.sqrt(Gamma))), 0.0, 0.0, np.sqrt(Gamma-lx*np.sqrt(Gamma))/np.sqrt(2*Gamma)])
# Generalized Quaternion Orientation (eq. 36)
q = q_prod(q_acc, q_mag)
return q/np.linalg.norm(q)
return q_acc
def updateIMU(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray) -> np.ndarray:
"""
Quaternion Estimation with a IMU architecture.
The estimation is made in two steps: a *prediction* is done with the
angular rate (gyroscope) to integrate and estimate the current
orientation; then a *correction* step uses the measured accelerometer
to infer the expected gravity vector and use it to correct the
predicted quaternion.
If the gyroscope data is invalid, it returns the given a-priori
quaternion. Secondly, if the accelerometer data is invalid the
predicted quaternion (using gyroscopes) is returned.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s.
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
Returns
-------
q : numpy.ndarray
Estimated quaternion.
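Examples
--------
A typical loop, assuming ``gyr_data`` and ``acc_data`` are N-by-3 arrays
of synchronized samples and ``num_samples`` is their length:
>>> from ahrs.filters import AQUA
>>> aqua = AQUA()
>>> Q = np.zeros((num_samples, 4))
>>> Q[0] = aqua.estimate(acc_data[0])    # initial attitude from the first accelerometer sample
>>> for t in range(1, num_samples):
...     Q[t] = aqua.updateIMU(Q[t-1], gyr_data[t], acc_data[t])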
"""
if gyr is None or not np.linalg.norm(gyr) > 0:
return q
# PREDICTION
qDot = 0.5*self.Omega(gyr) @ q # Quaternion derivative (eq. 39)
qInt = q + qDot*self.Dt # Quaternion integration (eq. 42)
qInt /= np.linalg.norm(qInt)
# CORRECTION
a_norm = np.linalg.norm(acc)
if not a_norm > 0:
return qInt
a = acc/a_norm
gx, gy, gz = q2R(qInt).T@a # Predicted gravity (eq. 44)
q_acc = np.array([np.sqrt((gz+1)/2.0), -gy/np.sqrt(2.0*(gz+1)), gx/np.sqrt(2.0*(gz+1)), 0.0]) # Delta Quaternion (eq. 47)
if self.adaptive:
self.alpha = adaptive_gain(self.alpha, acc)
q_acc = slerp_I(q_acc, self.alpha, self.threshold)
q_prime = q_prod(qInt, q_acc) # (eq. 53)
return q_prime/np.linalg.norm(q_prime)
def updateMARG(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""
Quaternion Estimation with a MARG architecture.
The estimation is made in two steps: a *prediction* is done with the
angular rate (gyroscope) to integrate and estimate the current
orientation; then a *correction* step uses the measured accelerometer
and magnetic field to infer the expected geodetic values. Its
divergence is used to correct the predicted quaternion.
If the gyroscope data is invalid, it returns the given a-priori
quaternion. Secondly, if the accelerometer data is invalid the
predicted quaternion (using gyroscopes) is returned. Finally, if the
magnetometer measurements are invalid, returns a quaternion corrected
by the accelerometer only.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s.
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in mT
Returns
-------
q : numpy.ndarray
Estimated quaternion.
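Examples
--------
A typical loop, assuming ``gyr_data``, ``acc_data`` and ``mag_data`` are
N-by-3 arrays of synchronized samples and ``num_samples`` is their length:
>>> from ahrs.filters import AQUA
>>> aqua = AQUA()
>>> Q = np.zeros((num_samples, 4))
>>> Q[0] = aqua.estimate(acc_data[0], mag_data[0])
>>> for t in range(1, num_samples):
...     Q[t] = aqua.updateMARG(Q[t-1], gyr_data[t], acc_data[t], mag_data[t])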
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
# PREDICTION
qDot = 0.5*self.Omega(gyr) @ q # Quaternion derivative (eq. 39)
qInt = q + qDot*self.Dt # Quaternion integration (eq. 42)
qInt /= np.linalg.norm(qInt)
# CORRECTION
a_norm = np.linalg.norm(acc)
if not a_norm > 0:
return qInt
a = acc/a_norm
gx, gy, gz = q2R(qInt).T @ a # Predicted gravity (eq. 44)
# Accelerometer-Based Quaternion
q_acc = np.array([np.sqrt((gz+1.0)/2.0), -gy/np.sqrt(2.0*(gz+1.0)), gx/np.sqrt(2.0*(gz+1.0)), 0.0]) # Delta Quaternion (eq. 47)
if self.adaptive:
self.alpha = adaptive_gain(self.alpha, acc)
q_acc = slerp_I(q_acc, self.alpha, self.threshold)
q_prime = q_prod(qInt, q_acc) # (eq. 53)
q_prime /= np.linalg.norm(q_prime)
# Magnetometer-Based Quaternion
m_norm = np.linalg.norm(mag)
if not m_norm > 0:
return q_prime
lx, ly, lz = q2R(q_prime).T @ (mag/m_norm) # World frame magnetic vector (eq. 54)
Gamma = lx**2 + ly**2 # (eq. 28)
q_mag = np.array([np.sqrt(Gamma+lx*np.sqrt(Gamma))/np.sqrt(2*Gamma), 0.0, 0.0, ly/np.sqrt(2*(Gamma+lx*np.sqrt(Gamma)))]) # (eq. 58)
q_mag = slerp_I(q_mag, self.beta, self.threshold)
# Generalized Quaternion
q = q_prod(q_prime, q_mag) # (eq. 59)
return q/np.linalg.norm(q)
def init_q(self, acc: np.ndarray, mag: np.ndarray = None) -> np.ndarray:
return self.estimate(acc, mag) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/aqua.py | aqua.py |
import numpy as np
from ..common.orientation import chiaverini
from ..common.mathfuncs import *
# Reference Observations in Munich, Germany
from ..utils.wmm import WMM
MAG = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT).magnetic_elements
class TRIAD:
"""
Tri-Axial Attitude Determination
TRIAD estimates the attitude as a Direction Cosine Matrix. To return it as
a quaternion, set the parameter ``representation`` to ``'quaternion'``.
Parameters
----------
w1 : numpy.ndarray
First tri-axial observation vector in body frame. Usually a normalized
acceleration vector :math:`\\mathbf{a} = \\begin{bmatrix}a_x & a_y & a_z \\end{bmatrix}`
w2 : numpy.ndarray
Second tri-axial observation vector in body frame. Usually a normalized
magnetic field vector :math:`\\mathbf{m} = \\begin{bmatrix}m_x & m_y & m_z \\end{bmatrix}`
v1 : numpy.ndarray, optional.
First tri-axial reference vector. Defaults to normalized gravity vector
:math:`\\mathbf{g} = \\begin{bmatrix}0 & 0 & 1 \\end{bmatrix}`
v2 : numpy.ndarray, optional.
Second tri-axial reference vector. Defaults to normalized geomagnetic
field :math:`\\mathbf{h} = \\begin{bmatrix}h_x & h_y & h_z \\end{bmatrix}`
in Munich, Germany.
representation : str, default: ``'rotmat'``
Attitude representation. Options are ``'rotmat'`` or ``'quaternion'``.
frame : str, default: 'NED'
Local tangent plane coordinate frame. Valid options are right-handed
``'NED'`` for North-East-Down and ``'ENU'`` for East-North-Up.
Attributes
----------
w1 : numpy.ndarray
First tri-axial observation vector in body frame.
w2 : numpy.ndarray
Second tri-axial observation vector in body frame.
v1 : numpy.ndarray, optional.
First tri-axial reference vector.
v2 : numpy.ndarray, optional.
Second tri-axial reference vector.
A : numpy.ndarray
Estimated attitude.
Examples
--------
>>> from ahrs.filters import TRIAD
>>> triad = TRIAD()
>>> triad.v1 = np.array([0.0, 0.0, 1.0]) # Reference gravity vector (g)
>>> triad.v2 = np.array([21.0097, 1.3335, 43.732]) # Reference geomagnetic field (h)
>>> a = np.array([-2.499e-04, 4.739e-02, 0.9988763]) # Measured acceleration (normalized)
>>> a /= np.linalg.norm(a)
>>> m = np.array([-0.36663061, 0.17598138, -0.91357132]) # Measured magnetic field (normalized)
>>> m /= np.linalg.norm(m)
>>> triad.estimate(w1=a, w2=m)
array([[-8.48320410e-01, -5.29483162e-01, -2.49900033e-04],
[ 5.28878238e-01, -8.47373587e-01, 4.73900062e-02],
[-2.53039690e-02, 4.00697428e-02, 9.98876431e-01]])
It also works by passing each array to its corresponding parameter. They
will be normalized too.
>>> triad = TRIAD(w1=a, w2=m, v1=[0.0, 0.0, 1.0], v2=[-0.36663061, 0.17598138, -0.91357132])
"""
def __init__(self,
w1: np.ndarray = None,
w2: np.ndarray = None,
v1: np.ndarray = None,
v2: np.ndarray = None,
representation: str = 'rotmat',
frame: str = 'NED'):
self.w1: np.ndarray = np.copy(w1) if w1 is not None else None
self.w2: np.ndarray = np.copy(w2) if w2 is not None else None
self.representation: str = representation
if representation.lower() not in ['rotmat', 'quaternion']:
raise ValueError("Wrong representation type. Try 'rotmat', or 'quaternion'")
if frame.upper() not in ['NED', 'ENU']:
raise ValueError(f"Given frame {frame} is NOT valid. Try 'NED' or 'ENU'")
# Reference frames
self.v1 = self._set_first_triad_reference(v1, frame)
self.v2 = self._set_second_triad_reference(v2, frame)
# Compute values if samples given
if self.w1 is not None and self.w2 is not None:
self.A = self._compute_all(self.representation)
def _set_first_triad_reference(self, value, frame):
if value is None:
ref = np.array([0.0, 0.0, 1.0]) if frame.upper() == 'NED' else np.array([0.0, 0.0, -1.0])
else:
ref = np.copy(value)
ref /= np.linalg.norm(ref)
return ref
def _set_second_triad_reference(self, value, frame):
ref = np.array([MAG['X'], MAG['Y'], MAG['Z']])
if isinstance(value, float):
if abs(value)>90:
raise ValueError(f"Dip Angle must be within range [-90, 90]. Got {value}")
ref = np.array([cosd(value), 0.0, sind(value)]) if frame.upper() == 'NED' else np.array([0.0, cosd(value), -sind(value)])
if isinstance(value, (np.ndarray, list)):
ref = np.copy(value)
return ref/np.linalg.norm(ref)
def _compute_all(self, representation) -> np.ndarray:
"""
Estimate the attitude given all data.
Attributes ``w1`` and ``w2`` must contain data.
Parameters
----------
representation : str
Attitude representation. Options are ``'rotmat'`` or ``'quaternion'``.
Returns
-------
A : numpy.ndarray
M-by-3-by-3 with all estimated attitudes as direction cosine
matrices, where M is the number of samples. It is an N-by-4 array
if ``representation`` is set to ``'quaternion'``.
"""
if self.w1.shape!=self.w2.shape:
raise ValueError("w1 and w2 are not the same size")
if self.w1.ndim == 1:
return self.estimate(self.w1, self.w2, representation)
num_samples = len(self.w1)
A = np.zeros((num_samples, 4)) if representation.lower() == 'quaternion' else np.zeros((num_samples, 3, 3))
for t in range(num_samples):
A[t] = self.estimate(self.w1[t], self.w2[t], representation)
return A
def estimate(self, w1: np.ndarray, w2: np.ndarray, representation: str = 'rotmat') -> np.ndarray:
"""
Attitude Estimation.
The equation numbers in the code refer to [Lerner1]_.
Parameters
----------
w1 : numpy.ndarray
Sample of first tri-axial sensor.
w2 : numpy.ndarray
Sample of second tri-axial sensor.
representation : str, default: ``'rotmat'``
Attitude representation. Options are ``'rotmat'`` or ``'quaternion'``.
Returns
-------
A : numpy.ndarray
Estimated attitude as 3-by-3 Direction Cosine Matrix. If
``representation`` is set to ``'quaternion'``, it is returned as a
quaternion.
Examples
--------
>>> triad = ahrs.filters.TRIAD()
>>> triad.v1 = [0.0, 0.0, 1.0] # Normalized reference gravity vector (g)
>>> triad.v2 = [0.4328755, 0.02747412, 0.90103495] # Normalized reference geomagnetic field (h)
>>> a = [4.098297, 8.663757, 2.1355896] # Measured acceleration
>>> m = [-28715.50512, -25927.43566, 4756.83931] # Measured magnetic field
>>> triad.estimate(w1=a, w2=m) # Estimated attitude as DCM
array([[-7.84261e-01 , 4.5905718e-01, 4.1737417e-01],
[ 2.2883429e-01, -4.1126404e-01, 8.8232463e-01],
[ 5.7668844e-01, 7.8748232e-01, 2.1749032e-01]])
>>> triad.estimate(w1=a, w2=m, representation='quaternion') # Estimated attitude as quaternion
array([ 0.07410345, -0.3199659, -0.53747247, -0.77669417])
"""
if representation.lower() not in ['rotmat', 'quaternion']:
raise ValueError("Wrong representation type. Try 'rotmat', or 'quaternion'")
w1, w2 = np.copy(w1), np.copy(w2)
# Normalized Vectors
w1 /= np.linalg.norm(w1) # (eq. 12-39a)
w2 /= np.linalg.norm(w2)
# First Triad
w1xw2 = np.cross(w1, w2)
s2 = w1xw2 / np.linalg.norm(w1xw2) # (eq. 12-39b)
s3 = np.cross(w1, w1xw2) / np.linalg.norm(w1xw2) # (eq. 12-39c)
# Second Triad
v1xv2 = np.cross(self.v1, self.v2)
r2 = v1xv2 / np.linalg.norm(v1xv2)
r3 = np.cross(self.v1, v1xv2) / np.linalg.norm(v1xv2)
# Solve TRIAD
Mb = np.c_[w1, s2, s3] # (eq. 12-41)
Mr = np.c_[self.v1, r2, r3] # (eq. 12-42)
A = [email protected] # (eq. 12-45)
# Return according to desired representation
if representation.lower() == 'quaternion':
return chiaverini(A)
return A | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/triad.py | triad.py |
import numpy as np
from ..common.orientation import q_prod, q_conj, acc2q, am2q
class Madgwick:
"""
Madgwick's Gradient Descent Orientation Filter
If ``acc`` and ``gyr`` are given as parameters, the orientations will be
immediately computed with method ``updateIMU``.
If ``acc``, ``gyr`` and ``mag`` are given as parameters, the orientations
will be immediately computed with method ``updateMARG``.
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
frequency : float, default: 100.0
Sampling frequency in Hertz.
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. Not required
if `frequency` value is given.
gain : float, default: {0.033, 0.041}
Filter gain. Defaults to 0.033 for IMU implementations, or to 0.041 for
MARG implementations.
q0 : numpy.ndarray, default: None
Initial orientation, as a versor (normalized quaternion).
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N tri-axial gyroscope samples.
acc : numpy.ndarray
N-by-3 array with N tri-axial accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N tri-axial magnetometer samples.
frequency : float
Sampling frequency in Hertz
Dt : float
Sampling step in seconds. Inverse of sampling frequency.
gain : float
Filter gain.
q0 : numpy.ndarray
Initial orientation, as a versor (normalized quaternion).
Raises
------
ValueError
When dimension of input array(s) ``acc``, ``gyr``, or ``mag`` are not
equal.
Examples
--------
Assuming we have 3-axis sensor data in N-by-3 arrays, we can simply give
these samples to their corresponding type.
**IMU Array**
The Madgwick algorithm can work solely with gyroscope and accelerometer
samples.
The easiest way is to directly give the full array of samples to their
matching parameters.
>>> from ahrs.filters import Madgwick
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data) # Using IMU
The estimated quaternions are saved in the attribute ``Q``.
>>> type(madgwick.Q), madgwick.Q.shape
(<class 'numpy.ndarray'>, (1000, 4))
If we desire to estimate each sample independently, we call the
corresponding method.
>>> madgwick = Madgwick()
>>> Q = np.tile([1., 0., 0., 0.], (num_samples, 1)) # Allocate for quaternions
>>> for t in range(1, num_samples):
... Q[t] = madgwick.updateIMU(Q[t-1], gyr=gyro_data[t], acc=acc_data[t])
**MARG Array**
Further on, we can also use magnetometer data.
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data, mag=mag_data) # Using MARG
This algorithm integrates the orientation dynamically, instead of estimating
it from static observations alone. Thus, it requires an initial attitude to build
on top of it. This can be set with the parameter ``q0``:
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data, q0=[0.7071, 0.0, 0.7071, 0.0])
If no initial orientation is given, then an attitude using the first
samples is estimated. This attitude is computed assuming the sensors are
strapped to a system in a quasi-static state.
A constant sampling frequency equal to 100 Hz is used by default. To change
this value we set it in its parameter ``frequency``. Here we set it, for
example, to 150 Hz.
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data, frequency=150.0)
Or, alternatively, setting the sampling step (:math:`\\Delta t = \\frac{1}{f}`):
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data, Dt=1/150)
This is especially useful for situations where the sampling rate is variable:
>>> madgwick = Madgwick()
>>> Q = np.zeros((num_samples, 4)) # Allocation of quaternions
>>> Q[0] = [1.0, 0.0, 0.0, 0.0] # Initial attitude as a quaternion
>>> for t in range(1, num_samples):
...     madgwick.Dt = new_sample_rate
... Q[t] = madgwick.updateIMU(Q[t-1], gyr=gyro_data[t], acc=acc_data[t])
Madgwick's algorithm uses a gradient descent method to correct the
estimation of the attitude. The **step size**, a.k.a.
`learning rate <https://en.wikipedia.org/wiki/Learning_rate>`_, is
considered a *gain* of this algorithm and can be set in the parameters too:
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data, gain=0.01)
Following the original article, the gain defaults to ``0.033`` for IMU
arrays, and to ``0.041`` for MARG arrays.
"""
def __init__(self, gyr: np.ndarray = None, acc: np.ndarray = None, mag: np.ndarray = None, **kwargs):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.frequency = kwargs.get('frequency', 100.0)
self.Dt = kwargs.get('Dt', 1.0/self.frequency)
self.q0 = kwargs.get('q0')
self.gain = kwargs.get('beta') # Setting gain with `beta` will be removed in the future.
if self.gain is None:
self.gain = kwargs.get('gain', 0.033 if self.mag is None else 0.041)
if self.acc is not None and self.gyr is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""
Estimate the quaternions given all data.
Attributes ``gyr`` and ``acc`` must contain data. If ``mag`` contains
data, the updateMARG() method is used.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
# Compute with IMU architecture
if self.mag is None:
Q[0] = acc2q(self.acc[0]) if self.q0 is None else self.q0/np.linalg.norm(self.q0)
for t in range(1, num_samples):
Q[t] = self.updateIMU(Q[t-1], self.gyr[t], self.acc[t])
return Q
# Compute with MARG architecture
if self.mag.shape != self.gyr.shape:
raise ValueError("mag and gyr are not the same size")
Q[0] = am2q(self.acc[0], self.mag[0]) if self.q0 is None else self.q0/np.linalg.norm(self.q0)
for t in range(1, num_samples):
Q[t] = self.updateMARG(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def updateIMU(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray) -> np.ndarray:
"""
Quaternion Estimation with IMU architecture.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
Returns
-------
q : numpy.ndarray
Estimated quaternion.
Examples
--------
Assuming we have a tri-axial gyroscope array with 1000 samples, and
1000 samples of a tri-axial accelerometer. We get the attitude with the
Madgwick algorithm as:
>>> from ahrs.filters import Madgwick
>>> madgwick = Madgwick()
>>> Q = np.tile([1., 0., 0., 0.], (len(gyro_data), 1)) # Allocate for quaternions
>>> for t in range(1, num_samples):
... Q[t] = madgwick.updateIMU(Q[t-1], gyr=gyro_data[t], acc=acc_data[t])
...
Or giving the data directly in the class constructor will estimate all
attitudes at once:
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data)
>>> madgwick.Q.shape
(1000, 4)
This builds the array ``Q``, where all attitude estimations will be
stored as quaternions.
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
qDot = 0.5 * q_prod(q, [0, *gyr]) # (eq. 12)
a_norm = np.linalg.norm(acc)
if a_norm>0:
a = acc/a_norm
qw, qx, qy, qz = q/np.linalg.norm(q)
# Gradient objective function (eq. 25) and Jacobian (eq. 26)
f = np.array([2.0*(qx*qz - qw*qy) - a[0],
2.0*(qw*qx + qy*qz) - a[1],
2.0*(0.5-qx**2-qy**2) - a[2]]) # (eq. 25)
J = np.array([[-2.0*qy, 2.0*qz, -2.0*qw, 2.0*qx],
[ 2.0*qx, 2.0*qw, 2.0*qz, 2.0*qy],
[ 0.0, -4.0*qx, -4.0*qy, 0.0 ]]) # (eq. 26)
# Objective Function Gradient
gradient = J.T@f # (eq. 34)
gradient /= np.linalg.norm(gradient)
qDot -= self.gain*gradient # (eq. 33)
q = q + qDot*self.Dt # (eq. 13) new array: keep the a-priori quaternion unmodified
q /= np.linalg.norm(q)
return q
def updateMARG(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""
Quaternion Estimation with a MARG architecture.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in nT
Returns
-------
q : numpy.ndarray
Estimated quaternion.
Examples
--------
Assuming we have a tri-axial gyroscope array with 1000 samples, a
second array with 1000 samples of a tri-axial accelerometer, and a
third array with 1000 samples of a tri-axial magnetometer. We get the
attitude with the Madgwick algorithm as:
>>> from ahrs.filters import Madgwick
>>> madgwick = Madgwick()
>>> Q = np.tile([1., 0., 0., 0.], (len(gyro_data), 1)) # Allocate for quaternions
>>> for t in range(1, num_samples):
... Q[t] = madgwick.updateMARG(Q[t-1], gyr=gyro_data[t], acc=acc_data[t], mag=mag_data[t])
...
Or giving the data directly in the class constructor will estimate all
attitudes at once:
>>> madgwick = Madgwick(gyr=gyro_data, acc=acc_data, mag=mag_data)
>>> madgwick.Q.shape
(1000, 4)
This builds the array ``Q``, where all attitude estimations will be
stored as quaternions.
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
if mag is None or not np.linalg.norm(mag)>0:
return self.updateIMU(q, gyr, acc)
qDot = 0.5 * q_prod(q, [0, *gyr]) # (eq. 12)
a_norm = np.linalg.norm(acc)
if a_norm>0:
a = acc/a_norm
m = mag/np.linalg.norm(mag)
# Rotate normalized magnetometer measurements
h = q_prod(q, q_prod([0, *m], q_conj(q))) # (eq. 45)
bx = np.linalg.norm([h[1], h[2]]) # (eq. 46)
bz = h[3]
qw, qx, qy, qz = q/np.linalg.norm(q)
# Gradient objective function (eq. 31) and Jacobian (eq. 32)
f = np.array([2.0*(qx*qz - qw*qy) - a[0],
2.0*(qw*qx + qy*qz) - a[1],
2.0*(0.5-qx**2-qy**2) - a[2],
2.0*bx*(0.5 - qy**2 - qz**2) + 2.0*bz*(qx*qz - qw*qy) - m[0],
2.0*bx*(qx*qy - qw*qz) + 2.0*bz*(qw*qx + qy*qz) - m[1],
2.0*bx*(qw*qy + qx*qz) + 2.0*bz*(0.5 - qx**2 - qy**2) - m[2]]) # (eq. 31)
J = np.array([[-2.0*qy, 2.0*qz, -2.0*qw, 2.0*qx ],
[ 2.0*qx, 2.0*qw, 2.0*qz, 2.0*qy ],
[ 0.0, -4.0*qx, -4.0*qy, 0.0 ],
[-2.0*bz*qy, 2.0*bz*qz, -4.0*bx*qy-2.0*bz*qw, -4.0*bx*qz+2.0*bz*qx],
[-2.0*bx*qz+2.0*bz*qx, 2.0*bx*qy+2.0*bz*qw, 2.0*bx*qx+2.0*bz*qz, -2.0*bx*qw+2.0*bz*qy],
[ 2.0*bx*qy, 2.0*bx*qz-4.0*bz*qx, 2.0*bx*qw-4.0*bz*qy, 2.0*bx*qx ]]) # (eq. 32)
gradient = J.T@f # (eq. 34)
gradient /= np.linalg.norm(gradient)
qDot -= self.gain*gradient # (eq. 33)
q = q + qDot*self.Dt # (eq. 13) new array: keep the a-priori quaternion unmodified
q /= np.linalg.norm(q)
return q | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/madgwick.py | madgwick.py |
import numpy as np
from ..common.mathfuncs import *
# Reference Observations in Munich, Germany
from ..utils.wmm import WMM
MAG = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT).magnetic_elements
class FLAE:
"""Fast Linear Attitude Estimator
Parameters
----------
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
method : str, default: 'symbolic'
Method used to estimate the attitude. Options are: 'symbolic', 'eig'
and 'newton'.
weights : np.ndarray, default: [0.5, 0.5]
Weights used for each sensor. They must add up to 1.
magnetic_dip : float
Geomagnetic Inclination angle at local position, in degrees. Defaults
to magnetic dip of Munich, Germany.
Raises
------
ValueError
When estimation method is invalid.
Examples
--------
>>> orientation = FLAE()
>>> accelerometer = np.array([-0.2853546, 9.657394, 2.0018768])
>>> magnetometer = np.array([12.32605, -28.825378, -26.586914])
>>> orientation.estimate(acc=accelerometer, mag=magnetometer)
array([-0.45447247, -0.69524546, 0.55014011, -0.08622285])
You can set a different estimation method passing its name to parameter
``method``.
>>> orientation.estimate(acc=accelerometer, mag=magnetometer, method='newton')
array([ 0.42455176, 0.68971918, -0.58315259, -0.06305803])
Or estimate all quaternions at once by giving the data to the constructor.
All estimated quaternions are stored in attribute ``Q``.
>>> orientation = FLAE(acc=acc_data, mag=mag_data, method='eig')
>>> orientation.Q.shape
(1000, 4)
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None, method: str = 'symbolic', **kw):
self.acc = acc
self.mag = mag
self.method = method
if self.method.lower() not in ['eig', 'symbolic', 'newton']:
raise ValueError(f"Given method '{self.method}' is not valid. Try 'symbolic', 'eig' or 'newton'")
# Reference measurements
mdip = kw.get('magnetic_dip') # Magnetic dip, in degrees
mag_ref = np.array([MAG['X'], MAG['Y'], MAG['Z']]) if mdip is None else np.array([cosd(mdip), 0., -sind(mdip)])
mag_ref /= np.linalg.norm(mag_ref)
acc_ref = np.array([0.0, 0.0, 1.0])
self.ref = np.vstack((acc_ref, mag_ref))
# Weights of sensors
self.a = kw.get('weights', np.array([0.5, 0.5]))
self.a /= np.sum(self.a)
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""
Estimate the quaternions given all data in class Data.
Class Data must have, at least, `acc` and `mag` attributes.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
for t in range(num_samples):
Q[t] = self.estimate(self.acc[t], self.mag[t], method=self.method)
return Q
def _row_reduction(self, A: np.ndarray) -> np.ndarray:
"""Gaussian elimination
"""
for i in range(3):
A[i] /= A[i, i]
for j in range(4):
if i!=j:
A[j] -= A[j, i]*A[i]
return A
def P1Hx(self, Hx: np.ndarray) -> np.ndarray:
return np.array([
[ Hx[0], 0.0, -Hx[2], Hx[1]],
[ 0.0, Hx[0], Hx[1], Hx[2]],
[-Hx[2], Hx[1], -Hx[0], 0.0],
[ Hx[1], Hx[2], 0.0, -Hx[0]]])
def P2Hy(self, Hy: np.ndarray) -> np.ndarray:
return np.array([
[ Hy[1], Hy[2], 0.0, -Hy[0]],
[ Hy[2], -Hy[1], Hy[0], 0.0],
[ 0.0, Hy[0], Hy[1], Hy[2]],
[-Hy[0], 0.0, Hy[2], -Hy[1]]])
def P3Hz(self, Hz: np.ndarray) -> np.ndarray:
return np.array([
[ Hz[2], -Hz[1], Hz[0], 0.0],
[-Hz[1], -Hz[2], 0.0, Hz[0]],
[ Hz[0], 0.0, -Hz[2], Hz[1]],
[ 0.0, Hz[0], Hz[1], Hz[2]]])
def estimate(self, acc: np.ndarray, mag: np.ndarray, method: str = 'symbolic') -> np.ndarray:
"""
Estimate a quaternion with the given measurements and weights.
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer.
mag : numpy.ndarray
Sample of tri-axial Magnetometer.
method : str, default: 'symbolic'
Method used to estimate the attitude. Options are: 'symbolic', 'eig'
and 'newton'.
Returns
-------
q : numpy.ndarray
Estimated orientation as quaternion.
Examples
--------
>>> accelerometer = np.array([-0.2853546, 9.657394, 2.0018768])
>>> magnetometer = np.array([12.32605, -28.825378, -26.586914])
>>> orientation = FLAE()
>>> orientation.estimate(acc=accelerometer, mag=magnetometer)
array([-0.45447247, -0.69524546, 0.55014011, -0.08622285])
>>> orientation.estimate(acc=accelerometer, mag=magnetometer, method='eig')
array([ 0.42455176, 0.68971918, -0.58315259, -0.06305803])
>>> orientation.estimate(acc=accelerometer, mag=magnetometer, method='newton')
array([ 0.42455176, 0.68971918, -0.58315259, -0.06305803])
"""
if acc.size!=3:
raise ValueError("Accelerometer sample must be a (3,) array. Got array of shape {}".format(acc.shape))
if mag.size!=3:
raise ValueError("Magnetometer sample must be a (3,) array. Got array of shape {}".format(mag.shape))
if method.lower() not in ['eig', 'symbolic', 'newton']:
raise ValueError(f"Given method '{method}' is not valid. Try 'symbolic', 'eig' or 'newton'")
Db = np.r_[[acc/np.linalg.norm(acc)], [mag/np.linalg.norm(mag)]]
H = self.a * Db.T @ self.ref # (eq. 42)
W = self.P1Hx(H[0]) + self.P2Hy(H[1]) + self.P3Hz(H[2]) # (eq. 44)
if method.lower()=='eig':
V, D = np.linalg.eig(W)
q = D[:, np.argmax(V)]
return q/np.linalg.norm(q)
# Polynomial parameters (eq. 49)
t1 = -2*np.trace([email protected])
t2 = -8*np.linalg.det(H.T)
t3 = np.linalg.det(W)
if method.lower()=='newton':
lam = lam_old = 1.0
i = 0
while abs(lam_old-lam)>1e-8 or i<=30:
lam_old = lam
f = lam**4 + t1*lam**2 + t2*lam + t3 # (eq. 48)
fp = 4*lam**3 + 2*t1*lam + t2 # (eq. 50)
lam -= f/fp # (eq. 51)
i += 1
if method.lower()=='symbolic':
# Parameters (eq. 53)
T0 = 2*t1**3 + 27*t2**2 - 72*t1*t3
T1 = np.cbrt(T0 + np.sqrt(abs(-4*(t1**2 + 12*t3)**3 + T0**2)))
T2 = np.sqrt(abs(-4*t1 + 2**(4/3)*(t1**2 + 12*t3)/T1 + 2**(2/3)*T1))
# Solutions to polynomial (eq. 52)
L = np.zeros(4)
k1 = -T2**2 - 12*t1
k2 = 12*np.sqrt(6)*t2/T2
L[0] = T2 - np.sqrt(abs(k1 - k2))
L[1] = T2 + np.sqrt(abs(k1 - k2))
L[2] = -(T2 + np.sqrt(abs(k1 + k2)))
L[3] = -(T2 - np.sqrt(abs(k1 + k2)))
L /= 2*np.sqrt(6)
lam = L[(np.abs(L-1.0)).argmin()] # Eigenvalue closest to 1
N = W - lam*np.identity(4) # (eq. 54)
N = self._row_reduction(N) # (eq. 55)
q = np.array([N[0, 3], N[1, 3], N[2, 3], -1]) # (eq. 58)
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/flae.py | flae.py |
import numpy as np
from ..common.orientation import q_prod, q_conj, acc2q, am2q, q2R
class Mahony:
"""Mahony's Nonlinear Complementary Filter on SO(3)
If ``acc`` and ``gyr`` are given as parameters, the orientations will be
immediately computed with method ``updateIMU``.
If ``acc``, ``gyr`` and ``mag`` are given as parameters, the orientations
will be immediately computed with method ``updateMARG``.
Parameters
----------
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
frequency : float, default: 100.0
Sampling frequency in Hertz
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. Not required
if `frequency` value is given
k_P : float, default: 1.0
Proportional filter gain
k_I : float, default: 0.3
Integral filter gain
q0 : numpy.ndarray, default: None
Initial orientation, as a versor (normalized quaternion).
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N gyroscope samples.
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
frequency : float
Sampling frequency in Hertz.
Dt : float
Sampling step in seconds. Inverse of sampling frequency.
k_P : float
Proportional filter gain.
k_I : float
Integral filter gain.
q0 : numpy.ndarray
Initial orientation, as a versor (normalized quaternion)
Raises
------
ValueError
When dimension of input array(s) ``acc``, ``gyr``, or ``mag`` are not
equal.
Examples
--------
Assuming we have 3-axis sensor data in N-by-3 arrays, we can simply give
these samples to their corresponding type. The Mahony algorithm can work
solely with gyroscope samples, although the use of accelerometer samples is
much encouraged.
The easiest way is to directly give the full array of samples to their
matching parameters.
>>> from ahrs.filters import Mahony
>>> orientation = Mahony(gyr=gyro_data, acc=acc_data) # Using IMU
The estimated quaternions are saved in the attribute ``Q``.
>>> type(orientation.Q), orientation.Q.shape
(<class 'numpy.ndarray'>, (1000, 4))
If we desire to estimate each sample independently, we call the
corresponding method.
.. code:: python
orientation = Mahony()
Q = np.tile([1., 0., 0., 0.], (num_samples, 1)) # Allocate for quaternions
for t in range(1, num_samples):
Q[t] = orientation.updateIMU(Q[t-1], gyr=gyro_data[t], acc=acc_data[t])
Further on, we can also use magnetometer data.
>>> orientation = Mahony(gyr=gyro_data, acc=acc_data, mag=mag_data) # Using MARG
This algorithm integrates the orientation dynamically, instead of estimating
it from static observations alone. Thus, it requires an initial attitude to build
on top of it. This can be set with the parameter ``q0``:
>>> orientation = Mahony(gyr=gyro_data, acc=acc_data, q0=[0.7071, 0.0, 0.7071, 0.0])
If no initial orientation is given, then an attitude using the first
samples is estimated. This attitude is computed assuming the sensors are
strapped to a system in a quasi-static state.
A constant sampling frequency equal to 100 Hz is used by default. To change
this value we set it in its parameter ``frequency``. Here we set it, for
example, to 150 Hz.
>>> orientation = Mahony(gyr=gyro_data, acc=acc_data, frequency=150.0)
Or, alternatively, setting the sampling step (:math:`\\Delta t = \\frac{1}{f}`):
>>> orientation = Mahony(gyr=gyro_data, acc=acc_data, Dt=1/150)
This is especially useful for situations where the sampling rate is variable:
.. code:: python
orientation = Mahony()
Q = np.tile([1., 0., 0., 0.], (num_samples, 1)) # Allocate for quaternions
for t in range(1, num_samples):
orientation.Dt = new_sample_rate
Q[t] = orientation.updateIMU(Q[t-1], gyr=gyro_data[t], acc=acc_data[t])
Mahony's algorithm uses an explicit complementary filter with two gains
:math:`k_P` and :math:`k_I` to correct the estimation of the attitude.
These gains can be set in the parameters too:
>>> orientation = Mahony(gyr=gyro_data, acc=acc_data, k_P=0.5, k_I=0.1)
Following the experimental settings of the original article, the gains are,
by default, :math:`k_P=1` and :math:`k_I=0.3`.
"""
def __init__(self,
gyr: np.ndarray = None,
acc: np.ndarray = None,
mag: np.ndarray = None,
frequency: float = 100.0,
k_P: float = 1.0,
k_I: float = 0.3,
q0: np.ndarray = None,
**kwargs):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.frequency = frequency
self.q0 = q0
self.k_P = k_P
self.k_I = k_I
# Old parameter names for backward compatibility
self.k_P = kwargs.get('kp', k_P)
self.k_I = kwargs.get('ki', k_I)
self.Dt = kwargs.get('Dt', 1.0/self.frequency)
# Estimate all orientations if sensor data is given
if self.gyr is not None and self.acc is not None:
self.Q = self._compute_all()
def _compute_all(self):
"""
Estimate the quaternions given all data
Attributes ``gyr``, ``acc`` and, optionally, ``mag`` must contain data.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
num_samples = len(self.gyr)
Q = np.zeros((num_samples, 4))
# Compute with IMU Architecture
if self.mag is None:
Q[0] = acc2q(self.acc[0]) if self.q0 is None else self.q0/np.linalg.norm(self.q0)
for t in range(1, num_samples):
Q[t] = self.updateIMU(Q[t-1], self.gyr[t], self.acc[t])
return Q
# Compute with MARG Architecture
if self.mag.shape != self.gyr.shape:
raise ValueError("mag and gyr are not the same size")
Q[0] = am2q(self.acc[0], self.mag[0]) if self.q0 is None else self.q0/np.linalg.norm(self.q0)
for t in range(1, num_samples):
Q[t] = self.updateMARG(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def updateIMU(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray) -> np.ndarray:
"""
Attitude Estimation with a IMU architecture.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s.
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
Examples
--------
>>> orientation = Mahony()
>>> Q = np.tile([1., 0., 0., 0.], (num_samples, 1)) # Allocate for quaternions
>>> for t in range(1, num_samples):
... Q[t] = orientation.updateIMU(Q[t-1], gyr=gyro_data[t], acc=acc_data[t])
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
Omega = np.copy(gyr)
a_norm = np.linalg.norm(acc)
if a_norm>0:
R = q2R(q)
v_a = [email protected]([0.0, 0.0, 1.0]) # Expected Earth's gravity
# ECF
omega_mes = np.cross(acc/a_norm, v_a) # Cost function (eqs. 32c and 48a)
b = -self.k_I*omega_mes # Estimated Gyro bias (eq. 48c)
Omega = Omega - b + self.k_P*omega_mes # Gyro correction
p = np.array([0.0, *Omega])
qDot = 0.5*q_prod(q, p) # Rate of change of quaternion (eqs. 45 and 48b)
q = q + qDot*self.Dt # Update orientation (new array: keep the a-priori quaternion unmodified)
q /= np.linalg.norm(q) # Normalize Quaternion (Versor)
return q
def updateMARG(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""
Attitude Estimation with a MARG architecture.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s.
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2.
mag : numpy.ndarray
Sample of tri-axial Magnetometer in uT.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
Examples
--------
>>> orientation = Mahony()
>>> Q = np.tile([1., 0., 0., 0.], (num_samples, 1)) # Allocate for quaternions
>>> for t in range(1, num_samples):
... Q[t] = orientation.updateMARG(Q[t-1], gyr=gyro_data[t], acc=acc_data[t], mag=mag_data[t])
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
Omega = np.copy(gyr)
a_norm = np.linalg.norm(acc)
if a_norm>0:
m_norm = np.linalg.norm(mag)
if not m_norm>0:
return self.updateIMU(q, gyr, acc)
a = np.copy(acc)/a_norm
m = np.copy(mag)/m_norm
R = q2R(q)
v_a = [email protected]([0.0, 0.0, 1.0]) # Expected Earth's gravity
# Rotate magnetic field to inertial frame
h = R@m
v_m = [email protected]([-np.linalg.norm([h[0], h[1]]), 0.0, h[2]])
v_m /= np.linalg.norm(v_m)
# ECF
omega_mes = np.cross(a, v_a) + np.cross(m, v_m) # Cost function (eqs. 32c and 48a)
b = -self.k_I*omega_mes # Estimated Gyro bias (eq. 48c)
Omega = Omega - b + self.k_P*omega_mes # Gyro correction
p = np.array([0.0, *Omega])
qDot = 0.5*q_prod(q, p) # Rate of change of quaternion (eqs. 45 and 48b)
q = q + qDot*self.Dt # Update orientation (new array: keep the a-priori quaternion unmodified)
q /= np.linalg.norm(q) # Normalize Quaternion (Versor)
return q | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/mahony.py | mahony.py |
import numpy as np
from ..common.orientation import ecompass
from ..common.mathfuncs import cosd, sind
class ROLEQ:
"""
Recursive Optimal Linear Estimator of Quaternion
Uses OLEQ to estimate the initial attitude.
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s.
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2.
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT.
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N gyroscope samples.
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
frequency : float
Sampling frequency in Hertz
Dt : float
Sampling step in seconds. Inverse of sampling frequency.
Q : numpy.array, default: None
M-by-4 Array with all estimated quaternions, where M is the number of
samples. Equal to None when no estimation is performed.
Raises
------
ValueError
When dimension of input arrays ``gyr``, ``acc`` or ``mag`` are not
equal.
Examples
--------
>>> gyr_data.shape, acc_data.shape, mag_data.shape # NumPy arrays with sensor data
((1000, 3), (1000, 3), (1000, 3))
>>> from ahrs.filters import ROLEQ
>>> orientation = ROLEQ(gyr=gyr_data, acc=acc_data, mag=mag_data)
>>> orientation.Q.shape # Estimated attitude
(1000, 4)
"""
def __init__(self,
gyr: np.ndarray = None,
acc: np.ndarray = None,
mag: np.ndarray = None,
weights: np.ndarray = None,
magnetic_ref: np.ndarray = None,
frame: str = 'NED',
**kwargs
):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.a = weights if weights is not None else np.ones(2)
self.Q = None
self.frequency = kwargs.get('frequency', 100.0)
self.Dt = kwargs.get('Dt', 1.0/self.frequency)
self.q0 = kwargs.get('q0')
self.frame = frame
# Reference measurements
self._set_reference_frames(magnetic_ref, self.frame)
# Estimate all quaternions if data is given
if self.acc is not None and self.gyr is not None and self.mag is not None:
self.Q = self._compute_all()
def _set_reference_frames(self, mref: float, frame: str = 'NED'):
if frame.upper() not in ['NED', 'ENU']:
raise ValueError(f"Invalid frame '{frame}'. Try 'NED' or 'ENU'")
# Magnetic Reference Vector
if mref is None:
# Local magnetic reference of Munich, Germany
from ..common.mathfuncs import MUNICH_LATITUDE, MUNICH_LONGITUDE, MUNICH_HEIGHT
from ..utils.wmm import WMM
wmm = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT)
self.m_ref = np.array([wmm.X, wmm.Y, wmm.Z]) if frame.upper() == 'NED' else np.array([wmm.Y, wmm.X, -wmm.Z])
elif isinstance(mref, (int, float)):
cd, sd = cosd(mref), sind(mref)
self.m_ref = np.array([cd, 0.0, sd]) if frame.upper() == 'NED' else np.array([0.0, cd, -sd])
else:
self.m_ref = np.copy(mref)
self.m_ref /= np.linalg.norm(self.m_ref)
# Gravitational Reference Vector
self.a_ref = np.array([0.0, 0.0, -1.0]) if frame.upper() == 'NED' else np.array([0.0, 0.0, 1.0])
def _compute_all(self) -> np.ndarray:
"""Estimate the quaternions given all data.
Attributes ``gyr``, ``acc`` and ``mag`` must contain data.
Returns
-------
Q : array
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
Q[0] = ecompass(self.acc[0], self.mag[0], frame=self.frame, representation='quaternion')
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def attitude_propagation(self, q: np.ndarray, omega: np.ndarray) -> np.ndarray:
"""Attitude estimation from previous quaternion and current angular velocity.
.. math::
\\mathbf{q}_\\omega = \\Big(\\mathbf{I}_4 + \\frac{\\Delta t}{2}\\boldsymbol\\Omega_t\\Big)\\mathbf{q}_{t-1} =
\\begin{bmatrix}
q_w - \\frac{\\Delta t}{2} \\omega_x q_x - \\frac{\\Delta t}{2} \\omega_y q_y - \\frac{\\Delta t}{2} \\omega_z q_z\\\\
q_x + \\frac{\\Delta t}{2} \\omega_x q_w - \\frac{\\Delta t}{2} \\omega_y q_z + \\frac{\\Delta t}{2} \\omega_z q_y\\\\
q_y + \\frac{\\Delta t}{2} \\omega_x q_z + \\frac{\\Delta t}{2} \\omega_y q_w - \\frac{\\Delta t}{2} \\omega_z q_x\\\\
q_z - \\frac{\\Delta t}{2} \\omega_x q_y + \\frac{\\Delta t}{2} \\omega_y q_x + \\frac{\\Delta t}{2} \\omega_z q_w
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
omega : numpy.ndarray
Angular velocity, in rad/s.
Returns
-------
q : numpy.ndarray
Attitude as a quaternion.
"""
Omega_t = np.array([
[0.0, -omega[0], -omega[1], -omega[2]],
[omega[0], 0.0, omega[2], -omega[1]],
[omega[1], -omega[2], 0.0, omega[0]],
[omega[2], omega[1], -omega[0], 0.0]])
q_omega = (np.identity(4) + 0.5*self.Dt*Omega_t) @ q # (eq. 37)
return q_omega/np.linalg.norm(q_omega)
def WW(self, Db, Dr):
"""W Matrix
.. math::
\\mathbf{W} = D_x^r\\mathbf{M}_1 + D_y^r\\mathbf{M}_2 + D_z^r\\mathbf{M}_3
Parameters
----------
Db : numpy.ndarray
Normalized tri-axial observations vector.
Dr : numpy.ndarray
Normalized tri-axial reference vector.
Returns
-------
W_matrix : numpy.ndarray
W Matrix.
"""
bx, by, bz = Db
rx, ry, rz = Dr
M1 = np.array([
[bx, 0.0, bz, -by],
[0.0, bx, by, bz],
[bz, by, -bx, 0.0],
[-by, bz, 0.0, -bx]]) # (eq. 18a)
M2 = np.array([
[by, -bz, 0.0, bx],
[-bz, -by, bx, 0.0],
[0.0, bx, by, bz],
[bx, 0.0, bz, -by]]) # (eq. 18b)
M3 = np.array([
[bz, by, -bx, 0.0],
[by, -bz, 0.0, bx],
[-bx, 0.0, -bz, by],
[0.0, bx, by, bz]]) # (eq. 18c)
return rx*M1 + ry*M2 + rz*M3 # (eq. 20)
def oleq(self, acc: np.ndarray, mag: np.ndarray, q_omega: np.ndarray) -> np.ndarray:
"""OLEQ with a single rotation by R.
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer.
mag : numpy.ndarray
Sample of tri-axial Magnetometer.
q_omega : numpy.ndarray
Preceding quaternion estimated with angular velocity.
Returns
-------
q : np.ndarray
Final quaternion.
"""
a_norm = np.linalg.norm(acc)
m_norm = np.linalg.norm(mag)
if not a_norm > 0 or not m_norm > 0: # handle NaN
return q_omega
acc = np.copy(acc) / np.linalg.norm(acc)
mag = np.copy(mag) / np.linalg.norm(mag)
sum_aW = self.a[0]*self.WW(acc, self.a_ref) + self.a[1]*self.WW(mag, self.m_ref) # (eq. 31)
R = 0.5*(np.identity(4) + sum_aW) # (eq. 33)
q = R @ q_omega # (eq. 25)
return q / np.linalg.norm(q)
def update(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""Update Attitude with a Recursive OLEQ
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of angular velocity in rad/s
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in mT
Returns
-------
q : numpy.ndarray
Estimated quaternion.
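Examples
--------
A sketch of a manual filtering loop. Here ``gyr_data``, ``acc_data`` and
``mag_data`` are assumed to be N-by-3 NumPy arrays with sensor readings, and
the initial attitude is assumed, for simplicity, to be the identity quaternion:
>>> roleq = ROLEQ()
>>> num_samples = len(gyr_data)
>>> Q = np.zeros((num_samples, 4)) # Allocation of quaternions
>>> Q[0] = [1.0, 0.0, 0.0, 0.0] # Initial attitude as a quaternion
>>> for t in range(1, num_samples):
... Q[t] = roleq.update(Q[t-1], gyr_data[t], acc_data[t], mag_data[t])
...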
"""
q_g = self.attitude_propagation(q, gyr) # Quaternion from previous quaternion and angular velocity
q = self.oleq(acc, mag, q_g) # Second stage: Estimate with OLEQ
return q | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/roleq.py | roleq.py |
import numpy as np
from ..common.orientation import q_prod, q_conj, am2q
from ..common.mathfuncs import *
# Reference Observations in Munich, Germany
from ..utils.wmm import WMM
from ..utils.wgs84 import WGS
MAG = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT).magnetic_elements
GRAVITY = WGS().normal_gravity(MUNICH_LATITUDE, MUNICH_HEIGHT)
class Fourati:
"""
Fourati's attitude estimation
Parameters
----------
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
frequency : float, default: 100.0
Sampling frequency in Hertz.
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. Not required
if `frequency` value is given.
gain : float, default: 0.1
Filter gain factor.
q0 : numpy.ndarray, default: None
Initial orientation, as a versor (normalized quaternion).
magnetic_dip : float
Magnetic Inclination angle, in degrees.
gravity : float
Normal gravity, in m/s^2.
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N gyroscope samples.
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
frequency : float
Sampling frequency in Hertz
Dt : float
Sampling step in seconds. Inverse of sampling frequency.
gain : float
Filter gain factor.
q0 : numpy.ndarray
Initial orientation, as a versor (normalized quaternion).
Raises
------
ValueError
When dimensions of input arrays ``acc``, ``gyr``, or ``mag`` are not equal.
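Examples
--------
A minimal usage sketch. Here ``gyr_data``, ``acc_data`` and ``mag_data`` are
assumed to be 1000-by-3 NumPy arrays with synchronized sensor readings:
>>> from ahrs.filters import Fourati
>>> orientation = Fourati(gyr=gyr_data, acc=acc_data, mag=mag_data, frequency=100.0)
>>> orientation.Q.shape # Estimated attitudes as quaternions
(1000, 4)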
"""
def __init__(self, gyr: np.ndarray = None, acc: np.ndarray = None, mag: np.ndarray = None, **kwargs):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.frequency = kwargs.get('frequency', 100.0)
self.Dt = kwargs.get('Dt', 1.0/self.frequency)
self.gain = kwargs.get('gain', 0.1)
self.q0 = kwargs.get('q0')
# Reference measurements
mdip = kwargs.get('magnetic_dip') # Magnetic dip, in degrees
self.m_q = np.array([0.0, MAG['X'], MAG['Y'], MAG['Z']]) if mdip is None else np.array([0.0, cosd(mdip), 0.0, sind(mdip)])
self.m_q /= np.linalg.norm(self.m_q)
self.g_q = np.array([0.0, 0.0, 0.0, 1.0]) # Normalized Gravity vector
# Process of given data
if self.acc is not None and self.gyr is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self):
"""
Estimate the quaternions given all data
Attributes ``gyr``, ``acc`` and ``mag`` must contain data.
Returns
-------
Q : array
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
if self.mag.shape != self.gyr.shape:
raise ValueError("mag and gyr are not the same size")
num_samples = len(self.gyr)
Q = np.zeros((num_samples, 4))
Q[0] = am2q(self.acc[0], self.mag[0]) if self.q0 is None else self.q0.copy()
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def update(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""
Quaternion Estimation with a MARG architecture.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in mT
Returns
-------
q : numpy.ndarray
Estimated quaternion.
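Examples
--------
A sketch of a manual filtering loop. Here ``gyr_data``, ``acc_data`` and
``mag_data`` are assumed to be N-by-3 NumPy arrays with sensor readings, and
the initial attitude is assumed to be the identity quaternion:
>>> fourati = Fourati()
>>> num_samples = len(gyr_data)
>>> Q = np.zeros((num_samples, 4)) # Allocation of quaternions
>>> Q[0] = [1.0, 0.0, 0.0, 0.0] # Initial attitude as a quaternion
>>> for t in range(1, num_samples):
... Q[t] = fourati.update(Q[t-1], gyr_data[t], acc_data[t], mag_data[t])
...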
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
qDot = 0.5 * q_prod(q, [0, *gyr]) # (eq. 5)
a_norm = np.linalg.norm(acc)
m_norm = np.linalg.norm(mag)
if a_norm>0 and m_norm>0 and self.gain>0:
# Levenberg Marquardt Algorithm
fhat = q_prod(q_conj(q), q_prod(self.g_q, q)) # (eq. 21)
hhat = q_prod(q_conj(q), q_prod(self.m_q, q)) # (eq. 22)
y = np.r_[acc/a_norm, mag/m_norm] # Measurements (eq. 6)
yhat = np.r_[fhat[1:], hhat[1:]] # Estimated values (eq. 8)
dq = y - yhat # Modeling Error
X = -2*np.c_[skew(fhat[1:]), skew(hhat[1:])].T # Jacobian Matrix (eq. 23)
lam = 1e-8 # Deviation to guarantee inversion
K = self.gain*np.linalg.inv(X.T@X + lam*np.eye(3))@X.T # Filter gain (eq. 24)
Delta = [1, *K@dq] # Correction term (eq. 25)
qDot = q_prod(qDot, Delta) # Corrected quaternion rate (eq. 7)
q = q + qDot*self.Dt # Avoid in-place modification of the a-priori quaternion
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/fourati.py | fourati.py |
import numpy as np
from ..common.mathfuncs import *
from ..utils.wmm import WMM
from ..utils.wgs84 import WGS
# Reference Observations in Munich, Germany
MAG = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT).magnetic_elements
GRAVITY = WGS().normal_gravity(MUNICH_LATITUDE, MUNICH_HEIGHT)
class QUEST:
"""
QUaternion ESTimator
Parameters
----------
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
weights : array-like
Array with two weights. One per sensor measurement.
magnetic_dip : float
Local magnetic inclination angle, in degrees.
gravity : float
Local normal gravity, in m/s^2.
Attributes
----------
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
w : numpy.ndarray
Weights for each observation.
Raises
------
ValueError
When dimensions of input arrays ``acc`` and ``mag`` are not equal.
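Examples
--------
A minimal usage sketch. Here ``acc_data`` and ``mag_data`` are assumed to be
1000-by-3 NumPy arrays with accelerometer and magnetometer readings:
>>> from ahrs.filters import QUEST
>>> quest = QUEST(acc=acc_data, mag=mag_data)
>>> quest.Q.shape # Estimated attitudes as quaternions
(1000, 4)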
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None, **kw):
self.acc = acc
self.mag = mag
self.w = kw.get('weights', np.ones(2))
# Reference measurements
mdip = kw.get('magnetic_dip') # Magnetic dip, in degrees
self.m_q = np.array([MAG['X'], MAG['Y'], MAG['Z']]) if mdip is None else np.array([cosd(mdip), 0., sind(mdip)])
g = kw.get('gravity', GRAVITY) # Earth's normal gravity in m/s^2
self.g_q = np.array([0.0, 0.0, g]) # Normal Gravity vector
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""Estimate the quaternions given all data.
Attributes ``acc`` and ``mag`` must contain data.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
for t in range(num_samples):
Q[t] = self.estimate(self.acc[t], self.mag[t])
return Q
def estimate(self, acc: np.ndarray = None, mag: np.ndarray = None) -> np.ndarray:
"""Attitude Estimation.
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in mT
Returns
-------
q : numpy.ndarray
Estimated attitude as a quaternion.
"""
B = self.w[0]*np.outer(acc, self.g_q) + self.w[1]*np.outer(mag, self.m_q) # Attitude profile matrix
S = B + B.T
z = np.array([B[1, 2]-B[2, 1], B[2, 0]-B[0, 2], B[0, 1]-B[1, 0]]) # Pseudovector (Axial vector)
# Parameters of characteristic equation (eq. 63)
sigma = B.trace()
Delta = np.linalg.det(S)
kappa = (Delta*np.linalg.inv(S)).trace()
# (eq. 71)
a = sigma**2 - kappa
b = sigma**2 + z@z
c = Delta + z@S@z
d = z@S@S@z # z^T S^2 z, using the matrix square of S
# Newton-Raphson method (eq. 70)
k = a*b + c*sigma - d
lam = self.w.sum() # Initial guess: sum of weights approximates the largest eigenvalue
lam_0 = 0.0 # Different from lam to guarantee at least one Newton-Raphson iteration
while abs(lam-lam_0)>=1e-12:
lam_0 = lam
phi = lam**4 - (a+b)*lam**2 - c*lam + k
phi_prime = 4*lam**3 - 2*(a+b)*lam - c
lam -= phi/phi_prime
# (eq. 66)
alpha = lam**2 - sigma**2 + kappa
beta = lam - sigma
gamma = alpha*(lam+sigma) - Delta
Chi = (alpha*np.eye(3) + beta*S + S@S)@z # (eq. 68) matrix square of S
# Optimal Quaternion (eq. 69)
q = np.array([gamma, *Chi])
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/quest.py | quest.py |
import numpy as np
class AngularRate:
"""
Quaternion update by integrating angular velocity
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s.
q0 : numpy.ndarray
Initial orientation, as a versor (normalized quaternion).
frequency : float, default: 100.0
Sampling frequency in Hertz.
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. Not required
if ``frequency`` value is given.
method : str, default: ``'closed'``
Estimation method to use. Options are: ``'series'`` or ``'closed'``.
order : int
Truncation order, if method ``'series'`` is used.
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N gyroscope samples.
Q : numpy.ndarray
Estimated quaternions.
method : str
Used estimation method.
order : int
Truncation order.
Examples
--------
>>> gyro_data.shape # NumPy arrays with gyroscope data in rad/s
(1000, 3)
>>> from ahrs.filters import AngularRate
>>> angular_rate = AngularRate(gyr=gyro_data)
>>> angular_rate.Q
array([[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 9.99999993e-01, 2.36511228e-06, -1.12991334e-04, 4.28771947e-05],
[ 9.99999967e-01, 1.77775173e-05, -2.43529706e-04, 8.33144162e-05],
...,
[-0.92576208, -0.23633121, 0.19738534, -0.2194337 ],
[-0.92547793, -0.23388968, 0.19889139, -0.22187479],
[-0.92504595, -0.23174096, 0.20086376, -0.22414251]])
>>> angular_rate.Q.shape # Estimated attitudes as Quaternions
(1000, 4)
The estimation of each attitude is built upon the previous attitude. This
estimator sets the initial attitude equal to the unit quaternion
``[1.0, 0.0, 0.0, 0.0]`` by default, because we cannot obtain the first
orientation with gyroscopes only.
We can use the class :class:`Tilt` to estimate the initial attitude with a
simple measurement of a tri-axial accelerometer:
>>> from ahrs.filters import Tilt
>>> tilt = Tilt()
>>> q_initial = tilt.estimate(acc=acc_sample) # One tridimensional sample suffices
>>> angular_rate = AngularRate(gyr=gyro_data, q0=q_initial)
>>> angular_rate.Q
array([[ 0.77547502, 0.6312126 , 0.01121595, -0.00912944],
[ 0.77547518, 0.63121388, 0.01110125, -0.00916754],
[ 0.77546726, 0.63122508, 0.01097435, -0.00921875],
...,
[-0.92576208, -0.23633121, 0.19738534, -0.2194337 ],
[-0.92547793, -0.23388968, 0.19889139, -0.22187479],
[-0.92504595, -0.23174096, 0.20086376, -0.22414251]])
The :class:`Tilt` can also use a magnetometer to improve the estimation
with the heading orientation.
>>> q_initial = tilt.estimate(acc=acc_sample, mag=mag_sample)
>>> angular_rate = AngularRate(gyr=gyro_data, q0=q_initial)
>>> angular_rate.Q
array([[ 0.66475674, 0.55050651, -0.30902706, -0.39942875],
[ 0.66473764, 0.5504497 , -0.30912672, -0.39946172],
[ 0.66470495, 0.55039529, -0.30924191, -0.39950193],
...,
[-0.90988476, -0.10433118, 0.28970402, 0.27802214],
[-0.91087203, -0.1014633 , 0.28977124, 0.2757716 ],
[-0.91164416, -0.09861271, 0.2903888 , 0.27359606]])
"""
def __init__(self, gyr: np.ndarray = None, q0: np.ndarray = None, frequency: float = 100.0, order: int = 1, **kw):
self.gyr = gyr
self.frequency = frequency
self.order = order
self.method = kw.get('method', 'closed')
self.q0 = q0 if q0 is not None else kw.get('q0', np.array([1.0, 0.0, 0.0, 0.0])) # Use the explicit q0 parameter when given
self.Dt = kw.get('Dt', 1.0/self.frequency)
if self.gyr is not None:
self.Q = self._compute_all()
def _compute_all(self):
"""Estimate all quaternions with given sensor values"""
num_samples = len(self.gyr)
Q = np.zeros((num_samples, 4))
Q[0] = self.q0
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], method=self.method, order=self.order)
return Q
def update(self, q: np.ndarray, gyr: np.ndarray, method: str = 'closed', order: int = 1) -> np.ndarray:
"""Update the quaternion estimation
Estimate quaternion :math:`\\mathbf{q}_{t+1}` from given a-priori
quaternion :math:`\\mathbf{q}_t` with a given angular rate measurement
:math:`\\mathbf{\\omega}`.
If ``method='closed'``, the new orientation is computed with the
closed-form solution:
.. math::
\\mathbf{q}_{t+1} =
\\Bigg[
\\cos\\Big(\\frac{\\|\\boldsymbol\\omega\\|\\Delta t}{2}\\Big)\\mathbf{I}_4 +
\\frac{1}{\\|\\boldsymbol\\omega\\|}\\sin\\Big(\\frac{\\|\\boldsymbol\\omega\\|\\Delta t}{2}\\Big)\\boldsymbol\\Omega(\\boldsymbol\\omega)
\\Bigg]\\mathbf{q}_t
Otherwise, if ``method='series'``, it is computed with a series of the
form:
.. math::
\\mathbf{q}_{t+1} =
\\Bigg[\\sum_{k=0}^\\infty \\frac{1}{k!} \\Big(\\frac{\\Delta t}{2}\\boldsymbol\\Omega(\\boldsymbol\\omega)\\Big)^k\\Bigg]\\mathbf{q}_t
where the order :math:`k` in the series has to be set as a non-negative
integer in the parameter ``order``. By default it is set equal to 1.
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Array with triaxial measurements of angular velocity in rad/s.
method : str, default: ``'closed'``
Estimation method to use. Options are: ``'series'`` or ``'closed'``.
order : int
Truncation order, if method ``'series'`` is used.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
Examples
--------
>>> from ahrs.filters import AngularRate
>>> gyro_data.shape
(1000, 3)
>>> num_samples = gyro_data.shape[0]
>>> Q = np.zeros((num_samples, 4)) # Allocation of quaternions
>>> Q[0] = [1.0, 0.0, 0.0, 0.0] # Initial attitude as a quaternion
>>> angular_rate = AngularRate()
>>> for t in range(1, num_samples):
... Q[t] = angular_rate.update(Q[t-1], gyro_data[t])
...
>>> Q
array([[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 9.99999993e-01, 2.36511228e-06, -1.12991334e-04, 4.28771947e-05],
[ 9.99999967e-01, 1.77775173e-05, -2.43529706e-04, 8.33144162e-05],
...,
[-0.92576208, -0.23633121, 0.19738534, -0.2194337 ],
[-0.92547793, -0.23388968, 0.19889139, -0.22187479],
[-0.92504595, -0.23174096, 0.20086376, -0.22414251]])
"""
if method.lower() not in ['series', 'closed']:
raise ValueError(f"Invalid method '{method}'. Try 'series' or 'closed'")
q = np.copy(q)
if gyr is None or not np.linalg.norm(gyr)>0:
return q
Omega = np.array([
[ 0.0, -gyr[0], -gyr[1], -gyr[2]],
[gyr[0], 0.0, gyr[2], -gyr[1]],
[gyr[1], -gyr[2], 0.0, gyr[0]],
[gyr[2], gyr[1], -gyr[0], 0.0]])
if method.lower() == 'closed':
w = np.linalg.norm(gyr)
A = np.cos(w*self.Dt/2.0)*np.eye(4) + np.sin(w*self.Dt/2.0)*Omega/w
else:
if order < 0:
raise ValueError(f"The order must be an int equal or greater than 0. Got {order}")
S = 0.5 * self.Dt * Omega
A = np.identity(4)
for i in range(1, order+1):
A += np.linalg.matrix_power(S, i) / np.math.factorial(i) # i-th matrix power of S, not an element-wise power
q = A @ q
return q / np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/angular.py | angular.py |
import numpy as np
from ..common.mathfuncs import *
# Reference Observations in Munich, Germany
from ..utils.wmm import WMM
from ..utils.wgs84 import WGS
MAG = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT).magnetic_elements
GRAVITY = WGS().normal_gravity(MUNICH_LATITUDE, MUNICH_HEIGHT)
class Davenport:
"""
Davenport's q-Method for attitude estimation
Parameters
----------
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT
weights : array-like
Array with two weights used in each observation.
magnetic_dip : float
Magnetic Inclination angle, in degrees. Defaults to magnetic dip of
Munich, Germany.
gravity : float
Normal gravity, in m/s^2. Defaults to normal gravity of Munich,
Germany.
Attributes
----------
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
w : numpy.ndarray
Weights of each observation.
Raises
------
ValueError
When dimensions of input arrays ``acc`` and ``mag`` are not equal.
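Examples
--------
A minimal usage sketch. Here ``acc_data`` and ``mag_data`` are assumed to be
1000-by-3 NumPy arrays with accelerometer and magnetometer readings:
>>> from ahrs.filters import Davenport
>>> davenport = Davenport(acc=acc_data, mag=mag_data)
>>> davenport.Q.shape # One estimated quaternion per sample
(1000, 4)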
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None, **kw):
self.acc = acc
self.mag = mag
self.w = kw.get('weights', np.ones(2))
# Reference measurements
mdip = kw.get('magnetic_dip') # Magnetic dip, in degrees
self.m_q = np.array([MAG['X'], MAG['Y'], MAG['Z']]) if mdip is None else np.array([cosd(mdip), 0., sind(mdip)])
g = kw.get('gravity', GRAVITY) # Earth's normal gravity, in m/s^2
self.g_q = np.array([0.0, 0.0, g]) # Normal Gravity vector
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""
Estimate all quaternions given all data.
Attributes ``acc`` and ``mag`` must contain data.
Returns
-------
Q : array
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
for t in range(num_samples):
Q[t] = self.estimate(self.acc[t], self.mag[t])
return Q
def estimate(self, acc: np.ndarray = None, mag: np.ndarray = None) -> np.ndarray:
"""
Attitude Estimation
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in mT
Returns
-------
q : numpy.ndarray
Estimated attitude as a quaternion.
"""
B = self.w[0]*np.outer(acc, self.g_q) + self.w[1]*np.outer(mag, self.m_q) # Attitude profile matrix
sigma = B.trace()
z = np.array([B[1, 2]-B[2, 1], B[2, 0]-B[0, 2], B[0, 1]-B[1, 0]])
S = B+B.T
K = np.zeros((4, 4))
K[0, 0] = sigma
K[1:, 1:] = S - sigma*np.eye(3)
K[0, 1:] = K[1:, 0] = z
w, v = np.linalg.eig(K)
return v[:, np.argmax(w)] # Eigenvector associated to largest eigenvalue is optimal quaternion | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/davenport.py | davenport.py |
import numpy as np
from ahrs.common.orientation import *
from ahrs.common import DEG2RAD
class GravityQuaternion:
"""
Class of Gravity-based estimation of quaternion.
Parameters
----------
"""
def __init__(self, *args, **kwargs):
self.input = args[0] if args else None
# Data is given
if self.input:
self.Q = self.estimate_all()
def estimate_all(self):
data = self.input
Q = np.zeros((data.num_samples, 4))
for t in range(data.num_samples):
Q[t] = self.estimate(data.acc[t].copy())
return Q
def estimate(self, a):
"""
Estimate the quaternion from the tilting read by an orthogonal
tri-axial array of accelerometers.
The orientation of the roll and pitch angles is estimated using the
measurements of the accelerometers, and finally converted to a
quaternion representation according to [WKDCM2Q]_
Parameters
----------
a : array
Sample of tri-axial Accelerometer in m/s^2.
Returns
-------
q : array
Estimated quaternion.
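Examples
--------
A minimal sketch with an arbitrary accelerometer reading of a device lying
flat, i.e. sensing gravity along its Z-axis only, which yields the identity
quaternion:
>>> gq = GravityQuaternion()
>>> gq.estimate(np.array([0.0, 0.0, 9.81]))
array([1., 0., 0., 0.])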
"""
a_norm = np.linalg.norm(a)
if a_norm == 0:
return np.array([1.0, 0.0, 0.0, 0.0])
a /= a_norm
ax, ay, az = a
# Euler Angles from Tilt
ex = np.arctan2( ay, az)
ey = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
ez = 0.0
# ex = np.arctan(ax/ay)
# ey = np.arccos(az/np.linalg.norm(a))
# ex = np.arctan(ax/np.sqrt(ay**2+az**2))
# ey = np.arctan(ay/np.sqrt(ax**2+az**2))
# ez = np.arctan(np.sqrt(ax**2+ay**2)/az)
# Euler to Quaternion
q = np.array([1.0, 0.0, 0.0, 0.0])
# Roll
cx = np.cos(ex/2.0)
sx = np.sin(ex/2.0)
# Pitch
cy = np.cos(ey/2.0)
sy = np.sin(ey/2.0)
# Yaw
cz = np.cos(ez/2.0)
sz = np.sin(ez/2.0)
q = np.array([
cz*cy*cx + sz*sy*sx,
cz*cy*sx - sz*sy*cx,
sz*cy*sx + cz*sy*cx,
sz*cy*cx - cz*sy*sx])
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/gravityquaternion.py | gravityquaternion.py |
import numpy as np
from ..common.constants import *
from ..common.orientation import q_prod, q_conj
# Reference Observations in Munich, Germany
from ..utils.wmm import WMM
MAG = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT).magnetic_elements
class FQA:
"""
Factored Quaternion Algorithm
Parameters
----------
acc : numpy.ndarray, default: None
N-by-3 array with N measurements of the gravitational acceleration.
mag : numpy.ndarray, default: None
N-by-3 array with N measurements of the geomagnetic field.
mag_ref : numpy.ndarray, default: None
Reference geomagnetic field. If None is given, defaults to the
geomagnetic field of Munich, Germany.
Attributes
----------
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
m_ref : numpy.ndarray
Normalized reference geomagnetic field.
Q : numpy.ndarray
Estimated attitude as quaternion.
Raises
------
ValueError
When dimensions of input arrays ``acc`` and ``mag`` are not equal.
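Examples
--------
A minimal usage sketch. Here ``acc_data`` and ``mag_data`` are assumed to be
1000-by-3 NumPy arrays with accelerometer and magnetometer readings:
>>> from ahrs.filters import FQA
>>> fqa = FQA(acc=acc_data, mag=mag_data)
>>> fqa.Q.shape # Estimated attitudes as quaternions
(1000, 4)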
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None, mag_ref: np.ndarray = None):
self.acc = acc
self.mag = mag
# Reference measurements
self.m_ref = np.array([MAG['X'], MAG['Y'], MAG['Z']]) if mag_ref is None else mag_ref
self.m_ref = self.m_ref[:2]/np.linalg.norm(self.m_ref[:2])
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""
Estimate the quaternions given all data.
Attributes ``acc`` and ``mag`` must contain data.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
for t in range(num_samples):
Q[t] = self.estimate(self.acc[t], self.mag[t])
return Q
def estimate(self, acc: np.ndarray = None, mag: np.ndarray = None) -> np.ndarray:
"""Attitude Estimation.
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer.
mag : numpy.ndarray
Sample of tri-axial Magnetometer.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
"""
a_norm = np.linalg.norm(acc)
if a_norm == 0: # handle NaN
return np.array([1., 0., 0., 0.])
a = acc/a_norm
# Elevation Quaternion
s_theta = a[0] # (eq. 21)
c_theta = np.sqrt(1.0-s_theta**2) # (eq. 22)
s_theta_2 = np.sign(s_theta)*np.sqrt((1.0-c_theta)/2.0) # (eq. 23)
c_theta_2 = np.sqrt((1.0+c_theta)/2.0) # (eq. 24)
q_e = np.array([c_theta_2, 0.0, s_theta_2, 0.0]) # (eq. 25)
q_e /= np.linalg.norm(q_e)
# Roll Quaternion
is_singular = c_theta==0.0
s_phi = 0.0 if is_singular else -a[1]/c_theta # (eq. 30)
c_phi = 0.0 if is_singular else -a[2]/c_theta # (eq. 31)
s_phi_2 = np.sign(s_phi)*np.sqrt((1.0-c_phi)/2.0)
c_phi_2 = np.sqrt((1.0+c_phi)/2.0)
q_r = np.array([c_phi_2, s_phi_2, 0.0, 0.0]) # (eq. 32)
q_r /= np.linalg.norm(q_r)
q_er = q_prod(q_e, q_r)
q_er /= np.linalg.norm(q_er)
# Azimuth Quaternion
m_norm = np.linalg.norm(mag)
if not m_norm>0:
return q_er
q_a = np.array([1., 0., 0., 0.])
if m_norm>0:
m = mag/m_norm
bm = np.array([0.0, *m])
em = q_prod(q_e, q_prod(q_r, q_prod(bm, q_prod(q_conj(q_r), q_conj(q_e))))) # (eq. 34)
# em = [0.0, *q2R(q_e)@q2R(q_r)@m]
# N = self.m_ref[:2].copy() # (eq. 36)
N = self.m_ref.copy() # (eq. 36)
_, Mx, My, _ = em/np.linalg.norm(em) # (eq. 37)
c_psi, s_psi = np.array([[Mx, My], [-My, Mx]])@N # (eq. 39)
s_psi_2 = np.sign(s_psi)*np.sqrt((1.0-c_psi)/2.0)
c_psi_2 = np.sqrt((1.0+c_psi)/2.0)
q_a = np.array([c_psi_2, 0.0, 0.0, s_psi_2]) # (eq. 40)
q_a /= np.linalg.norm(q_a)
# Final Quaternion
q = q_prod(q_a, q_er) # (eq. 41)
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/fqa.py | fqa.py |
import numpy as np
class FAMC:
"""Fast Accelerometer-Magnetometer Combination
Parameters
----------
acc : numpy.ndarray, default: None
M-by-3 array with measurements of acceleration in m/s^2
mag : numpy.ndarray, default: None
M-by-3 array with measurements of magnetic field in mT
Attributes
----------
acc : numpy.ndarray
M-by-3 array with M accelerometer samples.
mag : numpy.ndarray
M-by-3 array with M magnetometer samples.
Q : numpy.array, default: None
M-by-4 Array with all estimated quaternions, where M is the number of
samples. Equal to None when no estimation is performed.
Raises
------
ValueError
When dimensions of input arrays ``acc`` and ``mag`` are not equal.
Examples
--------
>>> acc_data.shape, mag_data.shape # NumPy arrays with sensor data
((1000, 3), (1000, 3))
>>> from ahrs.filters import FAMC
>>> famc = FAMC(acc=acc_data, mag=mag_data)
>>> famc.Q # Estimated attitudes as Quaternions
array([[-0.82311077, 0.45760535, -0.33408929, -0.0383452 ],
[-0.82522048, 0.4547043 , -0.33277675, -0.03892033],
[-0.82463698, 0.4546915 , -0.33422422, -0.03903417],
...,
[-0.82420642, 0.56217735, 0.02548005, -0.06317571],
[-0.82364606, 0.56311099, 0.0241655 , -0.06268338],
[-0.81844766, 0.57077781, 0.02532182, -0.06095017]])
>>> famc.Q.shape
(1000, 4)
"""
def __init__(self, acc: np.ndarray = None, mag: np.ndarray = None):
self.acc = acc.copy() if acc is not None else None
self.mag = mag.copy() if mag is not None else None
self.Q = None
if self.acc is not None and self.mag is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""Estimate the quaternions given all data.
Attributes ``acc`` and ``mag`` must contain data.
Returns
-------
Q : array
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
for t in range(num_samples):
Q[t] = self.estimate(self.acc[t], self.mag[t])
return Q
def estimate(self, acc: np.ndarray, mag: np.ndarray) -> np.ndarray:
"""Attitude Estimation
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer.
mag : numpy.ndarray
Sample of tri-axial Magnetometer.
Returns
-------
q : numpy.ndarray
Estimated quaternion of the form :math:`\\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`
Examples
--------
>>> acc_data = np.array([4.098297, 8.663757, 2.1355896])
>>> mag_data = np.array([-28.71550512, -25.92743566, 4.75683931])
>>> from ahrs.filters import FAMC
>>> famc = FAMC()
>>> famc.estimate(acc=acc_data, mag=mag_data) # Estimate attitude as quaternion
array([-0.82311077, 0.45760535, -0.33408929, -0.0383452])
"""
# Normalize measurements (eq. 10)
a_norm = np.linalg.norm(acc)
m_norm = np.linalg.norm(mag)
if not a_norm>0 or not m_norm>0: # handle NaN
return None
acc /= a_norm # A = [ax, ay, az] in body frame
mag /= m_norm # M = [mx, my, mz] in body frame
# Dynamic magnetometer reference vector
m_D = acc@mag # (eq. 13)
m_N = np.sqrt(1.0-m_D**2)
# Parameters
B = np.zeros((3, 3)) # (eq. 18)
B[:, 0] = m_N*mag
B[:, 2] = m_D*mag + acc
B *= 0.5
tau = B[0, 2] + B[2, 0]
alpha = np.zeros(3)
Y = np.zeros((3, 3))
alpha[0] = B[0, 0] - B[2, 2] - 1
Y[0] = np.array([-1, B[1, 0], tau])/alpha[0]
alpha[1] = B[1, 0]**2/alpha[0] - B[0, 0] - B[2, 2] - 1
Y[1] = np.array([-B[1, 0]/alpha[0], -1, B[1, 2]+B[1, 0]*Y[0, 2]])/alpha[1]
alpha[2] = alpha[0] - 2 + tau**2/alpha[0] + Y[1, 2]**2*alpha[1]
Y[2] = np.array([(tau+B[1, 0]*Y[1, 2])/alpha[0], Y[1, 2], 1])/alpha[2]
# Quaternion Elements (eq. 21)
a = B[1, 2]*(Y[0, 0] + Y[0, 1]*(Y[1, 2]*Y[2, 0] + Y[1, 0]) + Y[0, 2]*Y[2, 0]) - (B[0, 2]-B[2, 0])*(Y[1, 2]*Y[2, 0] + Y[1, 0]) - Y[2, 0]*B[1, 0]
b = B[1, 2]*( Y[0, 1]*(Y[1, 2]*Y[2, 1] + Y[1, 1]) + Y[0, 2]*Y[2, 1]) - (B[0, 2]-B[2, 0])*(Y[1, 2]*Y[2, 1] + Y[1, 1]) - Y[2, 1]*B[1, 0]
c = B[1, 2]*( Y[0, 1]* Y[1, 2]*Y[2, 2] + Y[0, 2]*Y[2, 2]) - (B[0, 2]-B[2, 0])*(Y[1, 2]*Y[2, 2]) - Y[2, 2]*B[1, 0]
q = np.array([-1, a, b, c]) # (eq. 22)
return q/np.linalg.norm(q) # (eq. 23) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/famc.py | famc.py |
import numpy as np
class Complementary:
"""
Complementary filter for attitude estimation as quaternion.
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity, in rad/s.
acc : numpy.ndarray, default: None
N-by-3 array with measurements of acceleration, in m/s^2.
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field, in mT.
frequency : float, default: 100.0
Sampling frequency in Hertz.
Dt : float, default: 0.01
Sampling step in seconds. Inverse of sampling frequency. Not required
if ``frequency`` value is given.
gain : float, default: 0.1
Filter gain.
q0 : numpy.ndarray, default: None
Initial orientation, as a versor (normalized quaternion).
Raises
------
ValueError
When dimensions of input arrays ``acc``, ``gyr``, or ``mag`` are not equal.
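Examples
--------
A minimal usage sketch. Here ``gyr_data``, ``acc_data`` and ``mag_data`` are
assumed to be 1000-by-3 NumPy arrays with synchronized sensor readings:
>>> from ahrs.filters import Complementary
>>> orientation = Complementary(gyr=gyr_data, acc=acc_data, mag=mag_data, gain=0.1)
>>> orientation.Q.shape # Estimated attitudes as quaternions
(1000, 4)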
"""
def __init__(self,
gyr: np.ndarray = None,
acc: np.ndarray = None,
mag: np.ndarray = None,
frequency: float = 100.0,
gain = 0.1,
**kwargs):
self.gyr: np.ndarray = gyr
self.acc: np.ndarray = acc
self.mag: np.ndarray = mag
self.frequency: float = frequency
self.gain: float = gain
if not(0.0 <= self.gain <= 1.0):
raise ValueError(f"Filter gain must be in the range [0, 1]. Got{self.gain}")
self.Dt: float = kwargs.get('Dt', 1.0/self.frequency)
self.q0: np.ndarray = kwargs.get('q0')
# Process of given data
if self.gyr is not None and self.acc is not None:
self.Q = self._compute_all()
def _compute_all(self) -> np.ndarray:
"""Estimate the quaternions given all data
Attributes ``gyr``, ``acc`` and, optionally, ``mag`` must contain data.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
num_samples = len(self.acc)
Q = np.zeros((num_samples, 4))
if self.mag is None:
self.mag = [None]*num_samples
else:
if self.mag.shape != self.gyr.shape:
raise ValueError("mag and gyr are not the same size")
Q[0] = self.am_estimation(self.acc[0], self.mag[0]) if self.q0 is None else self.q0.copy()
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def attitude_propagation(self, q: np.ndarray, omega: np.ndarray) -> np.ndarray:
"""
Attitude propagation of the orientation.
Estimate the current orientation at time :math:`t`, from a given
orientation at time :math:`t-1` and a given angular velocity,
:math:`\\omega`, in rad/s.
It is computed by numerically integrating the angular velocity and
adding it to the previous orientation.
.. math::
\\mathbf{q}_\\omega =
\\begin{bmatrix}
q_w - \\frac{\\Delta t}{2} \\omega_x q_x - \\frac{\\Delta t}{2} \\omega_y q_y - \\frac{\\Delta t}{2} \\omega_z q_z\\\\
q_x + \\frac{\\Delta t}{2} \\omega_x q_w - \\frac{\\Delta t}{2} \\omega_y q_z + \\frac{\\Delta t}{2} \\omega_z q_y\\\\
q_y + \\frac{\\Delta t}{2} \\omega_x q_z + \\frac{\\Delta t}{2} \\omega_y q_w - \\frac{\\Delta t}{2} \\omega_z q_x\\\\
q_z - \\frac{\\Delta t}{2} \\omega_x q_y + \\frac{\\Delta t}{2} \\omega_y q_x + \\frac{\\Delta t}{2} \\omega_z q_w
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
omega : numpy.ndarray
Tri-axial angular velocity, in rad/s.
Returns
-------
q_omega : numpy.ndarray
Estimated orientation, as quaternion.
"""
w = 0.5*self.Dt*omega
A = np.array([
[1.0, -w[0], -w[1], -w[2]],
[w[0], 1.0, w[2], -w[1]],
[w[1], -w[2], 1.0, w[0]],
[w[2], w[1], -w[0], 1.0]])
q_omega = A @ q
return q_omega / np.linalg.norm(q_omega)
def am_estimation(self, acc: np.ndarray, mag: np.ndarray = None) -> np.ndarray:
"""Attitude estimation from an Accelerometer-Magnetometer architecture.
First estimate the tilt from a given accelerometer sample
:math:`\\mathbf{a}=\\begin{bmatrix}a_x & a_y & a_z\\end{bmatrix}^T` as:
.. math::
\\begin{array}{rcl}
\\theta &=& \\mathrm{arctan2}(a_y, a_z) \\\\
\\phi &=& \\mathrm{arctan2}\\big(-a_x, \\sqrt{a_y^2+a_z^2}\\big)
\\end{array}
Then the yaw angle, :math:`\\psi`, is computed, if a magnetometer
sample :math:`\\mathbf{m}=\\begin{bmatrix}m_x & m_y & m_z\\end{bmatrix}^T`
is available:
.. math::
\\psi = \\mathrm{arctan2}(-b_y, b_x)
where
.. math::
\\begin{array}{rcl}
b_x &=& m_x\\cos\\theta + m_y\\sin\\theta\\sin\\phi + m_z\\sin\\theta\\cos\\phi \\\\
b_y &=& m_y\\cos\\phi - m_z\\sin\\phi
\\end{array}
And the roll-pitch-yaw angles are transformed to a quaternion that is
then returned:
.. math::
\\mathbf{q}_{am} =
\\begin{pmatrix}q_w\\\\q_x\\\\q_y\\\\q_z\\end{pmatrix} =
\\begin{pmatrix}
\\cos\\Big(\\frac{\\phi}{2}\\Big)\\cos\\Big(\\frac{\\theta}{2}\\Big)\\cos\\Big(\\frac{\\psi}{2}\\Big) + \\sin\\Big(\\frac{\\phi}{2}\\Big)\\sin\\Big(\\frac{\\theta}{2}\\Big)\\sin\\Big(\\frac{\\psi}{2}\\Big) \\\\
\\sin\\Big(\\frac{\\phi}{2}\\Big)\\cos\\Big(\\frac{\\theta}{2}\\Big)\\cos\\Big(\\frac{\\psi}{2}\\Big) - \\cos\\Big(\\frac{\\phi}{2}\\Big)\\sin\\Big(\\frac{\\theta}{2}\\Big)\\sin\\Big(\\frac{\\psi}{2}\\Big) \\\\
\\cos\\Big(\\frac{\\phi}{2}\\Big)\\sin\\Big(\\frac{\\theta}{2}\\Big)\\cos\\Big(\\frac{\\psi}{2}\\Big) + \\sin\\Big(\\frac{\\phi}{2}\\Big)\\cos\\Big(\\frac{\\theta}{2}\\Big)\\sin\\Big(\\frac{\\psi}{2}\\Big) \\\\
\\cos\\Big(\\frac{\\phi}{2}\\Big)\\cos\\Big(\\frac{\\theta}{2}\\Big)\\sin\\Big(\\frac{\\psi}{2}\\Big) - \\sin\\Big(\\frac{\\phi}{2}\\Big)\\sin\\Big(\\frac{\\theta}{2}\\Big)\\cos\\Big(\\frac{\\psi}{2}\\Big)
\\end{pmatrix}
Parameters
----------
acc : numpy.ndarray
Tri-axial sample of the accelerometer.
mag : numpy.ndarray, default: None
Tri-axial sample of the magnetometer.
Returns
-------
q_am : numpy.ndarray
Estimated attitude.
"""
if acc.shape[-1] != 3:
raise ValueError(f"Accelerometer sample must have three elements. It has {acc.shape[-1]}")
a_norm = np.linalg.norm(acc)
if a_norm == 0.0:
raise ValueError(f"Accelerometer sample does not describe any direction.")
ax, ay, az = np.copy(acc)/a_norm
# Estimate Tilt from Accelerometer sample
ex = np.arctan2( ay, az) # Roll
ey = np.arctan2(-ax, np.sqrt(ay**2 + az**2)) # Pitch
cx, sx = np.cos(ex/2.0), np.sin(ex/2.0)
cy, sy = np.cos(ey/2.0), np.sin(ey/2.0)
if mag is None:
q_a = np.array([cy*cx, cy*sx, sy*cx, -sy*sx])
return q_a / np.linalg.norm(q_a)
# Estimate Yaw from compensated compass
mx, my, mz = np.copy(mag)/np.linalg.norm(mag)
bx = mz*np.sin(ex) - my*np.cos(ex)
by = mx*np.cos(ey) + np.sin(ey)*(my*np.sin(ex) + mz*np.cos(ex))
ez = np.arctan2(-by, bx)
# Roll-Pitch-Yaw to Quaternion
cz, sz = np.cos(ez/2.0), np.sin(ez/2.0)
q_am = np.array([
cz*cy*cx + sz*sy*sx,
cz*cy*sx - sz*sy*cx,
sz*cy*sx + cz*sy*cx,
sz*cy*cx - cz*sy*sx])
return q_am / np.linalg.norm(q_am)
def update(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray = None) -> np.ndarray:
"""
Attitude Estimation from given measurements and previous orientation.
The new orientation is first estimated with the angular velocity, then
another orientation is computed using the accelerometers and
magnetometers. The magnetometer is optional.
Each orientation is estimated independently and fused with a
complementary filter.
.. math::
\\mathbf{q} = (1 - \\alpha) \\mathbf{q}_\\omega + \\alpha\\mathbf{q}_{am}
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of tri-axial Gyroscope in rad/s.
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2.
mag : numpy.ndarray, default: None
Sample of tri-axial Magnetometer in uT.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
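Examples
--------
A sketch of a manual filtering loop without a magnetometer. Here ``gyr_data``
and ``acc_data`` are assumed to be N-by-3 NumPy arrays with sensor readings,
and the initial attitude is assumed to be the identity quaternion:
>>> comp = Complementary(gain=0.05)
>>> num_samples = len(gyr_data)
>>> Q = np.zeros((num_samples, 4)) # Allocation of quaternions
>>> Q[0] = [1.0, 0.0, 0.0, 0.0] # Initial attitude as a quaternion
>>> for t in range(1, num_samples):
... Q[t] = comp.update(Q[t-1], gyr_data[t], acc_data[t])
...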
"""
if gyr is None or not np.linalg.norm(gyr)>0:
return q
q_omega = self.attitude_propagation(q, gyr)
q_am = self.am_estimation(acc, mag)
# Complementary Estimation
q = (1.0 - self.gain)*q_omega + self.gain*q_am
return q/np.linalg.norm(q) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/filters/complementary.py | complementary.py |
from typing import Tuple
import numpy as np
from .mathfuncs import skew
from .orientation import rotation
from .orientation import rot_seq
# Functions to convert DCM to quaternion representation
from .orientation import shepperd
from .orientation import hughes
from .orientation import chiaverini
from .orientation import itzhack
from .orientation import sarabandi
class DCM(np.ndarray):
"""
Direction Cosine Matrix in SO(3)
Class to represent a Direction Cosine Matrix. It is built from a 3-by-3
array, but it can also be built from 3-dimensional vectors representing the
roll-pitch-yaw angles, a quaternion, or an axis-angle pair representation.
Parameters
----------
array : array-like, default: None
Array to build the DCM with.
q : array-like, default: None
Quaternion to convert to DCM.
rpy : array-like, default: None
Array with roll->pitch->yaw angles.
euler : tuple, default: None
Dictionary with a set of angles as a pair of string and array.
axang : tuple, default: None
Tuple with an array and a float of the axis and the angle
representation.
Attributes
----------
A : numpy.ndarray
Array with the 3-by-3 direction cosine matrix.
Examples
--------
All DCM are created as an identity matrix, which means no rotation.
>>> from ahrs import DCM
>>> DCM()
DCM([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
A rotation around a single axis can be defined by giving the desired axis
and its value, in degrees.
>>> DCM(x=10.0)
DCM([[ 1. , 0. , 0. ],
[ 0. , 0.98480775, -0.17364818],
[ 0. , 0.17364818, 0.98480775]])
>>> DCM(y=20.0)
DCM([[ 0.93969262, 0. , 0.34202014],
[ 0. , 1. , 0. ],
[-0.34202014, 0. , 0.93969262]])
>>> DCM(z=30.0)
DCM([[ 0.8660254, -0.5 , 0. ],
[ 0.5 , 0.8660254, 0. ],
[ 0. , 0. , 1. ]])
If we want a rotation conforming the roll-pitch-yaw sequence, we can give
the corresponding angles.
>>> DCM(rpy=[30.0, 20.0, 10.0])
DCM([[ 0.81379768, -0.44096961, 0.37852231],
[ 0.46984631, 0.88256412, 0.01802831],
[-0.34202014, 0.16317591, 0.92541658]])
.. note::
Notice the angles are given in reverse order, as it is the way the
matrices are multiplied.
>>> DCM(z=30.0) @ DCM(y=20.0) @ DCM(x=10.0)
DCM([[ 0.81379768, -0.44096961, 0.37852231],
[ 0.46984631, 0.88256412, 0.01802831],
[-0.34202014, 0.16317591, 0.92541658]])
But also a different sequence can be defined, if given as a tuple with two
elements: the order of the axis to rotate about, and the value of the
rotation angles (again in reverse order)
>>> DCM(euler=('zyz', [40.0, 50.0, 60.0]))
DCM([[-0.31046846, -0.74782807, 0.58682409],
[ 0.8700019 , 0.02520139, 0.49240388],
[-0.38302222, 0.66341395, 0.64278761]])
>>> DCM(z=40.0) @ DCM(y=50.0) @ DCM(z=60.0)
DCM([[-0.31046846, -0.74782807, 0.58682409],
[ 0.8700019 , 0.02520139, 0.49240388],
[-0.38302222, 0.66341395, 0.64278761]])
Another option is to build the rotation matrix from a quaternion:
>>> DCM(q=[1., 2., 3., 4.])
DCM([[-0.66666667, 0.13333333, 0.73333333],
[ 0.66666667, -0.33333333, 0.66666667],
[ 0.33333333, 0.93333333, 0.13333333]])
The quaternions are automatically normalized to make them versors and be
used as rotation operators.
Finally, we can also build the rotation matrix from an axis-angle
representation:
>>> DCM(axang=([1., 2., 3.], 60.0))
DCM([[-0.81295491, 0.52330834, 0.25544608],
[ 0.03452394, -0.3945807 , 0.91821249],
[ 0.58130234, 0.75528436, 0.30270965]])
The axis of rotation is also normalized to be used as part of the rotation
operator.
"""
def __new__(subtype, array: np.ndarray = None, **kwargs):
if array is None:
array = np.identity(3)
if 'q' in kwargs:
array = DCM.from_q(DCM, np.array(kwargs.pop('q')))
if any(x.lower() in ['x', 'y', 'z'] for x in kwargs):
array = np.identity(3)
array = array@rotation('x', kwargs.pop('x', 0.0))
array = array@rotation('y', kwargs.pop('y', 0.0))
array = array@rotation('z', kwargs.pop('z', 0.0))
if 'rpy' in kwargs:
array = rot_seq('zyx', kwargs.pop('rpy'))
if 'euler' in kwargs:
seq, angs = kwargs.pop('euler')
array = rot_seq(seq, angs)
if 'axang' in kwargs:
ax, ang = kwargs.pop('axang')
array = DCM.from_axisangle(DCM, np.array(ax), ang)
if array.shape[-2:]!=(3, 3):
raise ValueError("Direction Cosine Matrix must have shape (3, 3) or (N, 3, 3), got {}.".format(array.shape))
in_SO3 = np.isclose(np.linalg.det(array), 1.0)
in_SO3 &= np.allclose([email protected], np.identity(3))
if not in_SO3:
raise ValueError("Given attitude is not in SO(3)")
# Create the ndarray instance of type DCM. This will call the standard
# ndarray constructor, but return an object of type DCM.
obj = super(DCM, subtype).__new__(subtype, array.shape, float, array)
obj.A = array
return obj
@property
def I(self) -> np.ndarray:
"""
Synonym of property :meth:`inv`.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.I
array([[ 0.92541658, 0.16317591, 0.34202014],
[-0.31879578, 0.82317294, 0.46984631],
[-0.20487413, -0.54383814, 0.81379768]])
Returns
-------
np.ndarray
Inverse of the DCM.
"""
return self.A.T
@property
def inv(self) -> np.ndarray:
"""
Inverse of the DCM.
The direction cosine matrix belongs to the Special Orthogonal group
`SO(3) <https://en.wikipedia.org/wiki/SO(3)>`_, where its transpose is
equal to its inverse:
.. math::
\\mathbf{R}^T\\mathbf{R} = \\mathbf{R}^{-1}\\mathbf{R} = \\mathbf{I}_3
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.inv
array([[ 0.92541658, 0.16317591, 0.34202014],
[-0.31879578, 0.82317294, 0.46984631],
[-0.20487413, -0.54383814, 0.81379768]])
Returns
-------
np.ndarray
Inverse of the DCM.
"""
return self.A.T
@property
def det(self) -> float:
"""
Synonym of property :meth:`determinant`.
Returns
-------
float
Determinant of the DCM.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.det
1.0000000000000002
"""
return np.linalg.det(self.A)
@property
def determinant(self) -> float:
"""
Determinant of the DCM.
Given a direction cosine matrix :math:`\\mathbf{R}`, its determinant
:math:`|\\mathbf{R}|` is found as:
.. math::
|\\mathbf{R}| =
\\begin{vmatrix}r_{11} & r_{12} & r_{13} \\\\ r_{21} & r_{22} & r_{23} \\\\ r_{31} & r_{32} & r_{33}\\end{vmatrix}=
r_{11}\\begin{vmatrix}r_{22} & r_{23}\\\\r_{32} & r_{33}\\end{vmatrix} -
r_{12}\\begin{vmatrix}r_{21} & r_{23}\\\\r_{31} & r_{33}\\end{vmatrix} +
r_{13}\\begin{vmatrix}r_{21} & r_{22}\\\\r_{31} & r_{32}\\end{vmatrix}
where the determinant of :math:`\\mathbf{B}\\in\\mathbb{R}^{2\\times 2}`
is:
.. math::
|\\mathbf{B}|=\\begin{vmatrix}b_{11}&b_{12}\\\\b_{21}&b_{22}\\end{vmatrix}=b_{11}b_{22}-b_{12}b_{21}
All matrices in SO(3), to which direction cosine matrices belong, have
a determinant equal to :math:`+1`.
Returns
-------
float
Determinant of the DCM.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.determinant
1.0000000000000002
"""
return np.linalg.det(self.A)
@property
def fro(self) -> float:
"""
Synonym of property :meth:`frobenius`.
Returns
-------
float
Frobenius norm of the DCM.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.fro
1.7320508075688774
"""
return np.linalg.norm(self.A, 'fro')
@property
def frobenius(self) -> float:
"""
Frobenius norm of the DCM.
The `Frobenius norm <https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm>`_
of a matrix :math:`\\mathbf{A}` is defined as:
.. math::
\\|\\mathbf{A}\\|_F = \\sqrt{\\sum_{i=1}^m\\sum_{j=1}^n|a_{ij}|^2}
Returns
-------
float
Frobenius norm of the DCM.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.frobenius
1.7320508075688774
"""
return np.linalg.norm(self.A, 'fro')
@property
def log(self) -> np.ndarray:
"""
Logarithm of DCM.
The logarithmic map is defined as the inverse of the exponential map.
It corresponds to the logarithm given by the Rodrigues rotation formula:
.. math::
\\log(\\mathbf{R}) = \\frac{\\theta(\\mathbf{R}-\\mathbf{R}^T)}{2\\sin\\theta}
with :math:`\\theta=\\arccos\\Big(\\frac{\\mathrm{tr}(\\mathbf{R}-1)}{2}\\Big)`.
Returns
-------
log : numpy.ndarray
Logarithm of DCM
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.log
array([[ 0. , -0.26026043, -0.29531805],
[ 0.26026043, 0. , -0.5473806 ],
[ 0.29531805, 0.5473806 , 0. ]])
"""
angle = np.arccos((self.A.trace()-1)/2)
S = self.A-self.A.T # Skew-symmetric matrix
logR = angle*S/(2*np.sin(angle))
return logR
@property
def adjugate(self) -> np.ndarray:
"""
Return the adjugate of the DCM.
The adjugate, a.k.a. *classical adjoint*, of a matrix :math:`\\mathbf{A}`
is the transpose of its *cofactor matrix*. For orthogonal matrices, it
simplifies to:
.. math::
\\mathrm{adj}(\\mathbf{A}) = \\mathrm{det}(\\mathbf{A})\\mathbf{A}^T
Returns
-------
np.ndarray
Adjugate of the DCM.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.adjugate
array([[ 0.92541658, 0.16317591, 0.34202014],
[-0.31879578, 0.82317294, 0.46984631],
[-0.20487413, -0.54383814, 0.81379768]])
"""
return np.linalg.det(self.A)*self.A.T
@property
def adj(self) -> np.ndarray:
"""
Synonym of property :meth:`adjugate`.
Returns
-------
np.ndarray
Adjugate of the DCM.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.adj
array([[ 0.92541658, 0.16317591, 0.34202014],
[-0.31879578, 0.82317294, 0.46984631],
[-0.20487413, -0.54383814, 0.81379768]])
"""
return np.linalg.det(self.A)*self.A.T
def to_axisangle(self) -> Tuple[np.ndarray, float]:
"""
Return axis-angle representation of the DCM.
Defining a *rotation matrix* :math:`\\mathbf{R}`:
.. math::
\\mathbf{R} =
\\begin{bmatrix}
r_{11} & r_{12} & r_{13} \\\\
r_{21} & r_{22} & r_{23} \\\\
r_{31} & r_{32} & r_{33}
\\end{bmatrix}
The axis-angle representation of :math:`\\mathbf{R}` is obtained with:
.. math::
\\theta = \\arccos\\Big(\\frac{\\mathrm{tr}(\\mathbf{R})-1}{2}\\Big)
for the **rotation angle**, and:
.. math::
\\mathbf{k} = \\frac{1}{2\\sin\\theta}
\\begin{bmatrix}r_{32} - r_{23} \\\\ r_{13} - r_{31} \\\\ r_{21} - r_{12}\\end{bmatrix}
for the **rotation vector**.
.. note::
The axis-angle representation is not unique since a rotation of
:math:`−\\theta` about :math:`−\\mathbf{k}` is the same as a
rotation of :math:`\\theta` about :math:`\\mathbf{k}`.
Returns
-------
axis : numpy.ndarray
Axis of rotation.
angle : float
Angle of rotation, in radians.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.to_axisangle()
(array([ 0.81187135, -0.43801381, 0.38601658]), 0.6742208510527136)
"""
angle = np.arccos((self.A.trace()-1)/2)
axis = np.zeros(3)
if angle!=0:
S = np.array([self.A[2, 1]-self.A[1, 2], self.A[0, 2]-self.A[2, 0], self.A[1, 0]-self.A[0, 1]])
axis = S/(2*np.sin(angle))
return axis, angle
def to_axang(self) -> Tuple[np.ndarray, float]:
"""
Synonym of method :meth:`to_axisangle`.
Returns
-------
axis : numpy.ndarray
Axis of rotation.
angle : float
Angle of rotation, in radians.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.to_axang()
(array([ 0.81187135, -0.43801381, 0.38601658]), 0.6742208510527136)
"""
return self.to_axisangle()
def from_axisangle(self, axis: np.ndarray, angle: float) -> np.ndarray:
"""
DCM from axis-angle representation
Use Rodrigues' rotation formula to obtain the DCM from the axis-angle
representation.
.. math::
\\mathbf{R} = \\mathbf{I}_3 + (\\sin\\theta)\\mathbf{K} + (1-\\cos\\theta)\\mathbf{K}^2
where :math:`\\mathbf{R}` is the DCM, which rotates through an **angle**
:math:`\\theta` counterclockwise about the **axis** :math:`\\mathbf{k}`,
:math:`\\mathbf{I}_3` is the :math:`3\\times 3` identity matrix, and
:math:`\\mathbf{K}` is the `skew-symmetric <https://en.wikipedia.org/wiki/Skew-symmetric_matrix>`_
matrix of :math:`\\mathbf{k}`.
Parameters
----------
axis : numpy.ndarray
Axis of rotation.
angle : float
Angle of rotation, in radians.
Returns
-------
R : numpy.ndarray
3-by-3 direction cosine matrix
Examples
--------
>>> R = DCM()
>>> R.view()
DCM([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> R.from_axisangle([0.81187135, -0.43801381, 0.38601658], 0.6742208510527136)
array([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
"""
axis = np.asarray(axis) / np.linalg.norm(axis) # Accept any array-like input for the axis
K = skew(axis)
return np.identity(3) + np.sin(angle)*K + (1-np.cos(angle))*K@K
def from_axang(self, axis: np.ndarray, angle: float) -> np.ndarray:
"""
Synonym of method :meth:`from_axisangle`.
Parameters
----------
axis : numpy.ndarray
Axis of rotation.
angle : float
Angle of rotation, in radians.
Returns
-------
R : numpy.ndarray
3-by-3 direction cosine matrix
Examples
--------
>>> R = DCM()
>>> R.view()
DCM([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> R.from_axang([0.81187135, -0.43801381, 0.38601658], 0.6742208510527136)
array([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
"""
return self.from_axisangle(axis, angle)
def from_quaternion(self, q: np.ndarray) -> np.ndarray:
"""
DCM from given quaternion
The quaternion :math:`\\mathbf{q}` has the form :math:`\\mathbf{q} = (q_w, q_x, q_y, q_z)`,
where :math:`\\mathbf{q}_v = (q_x, q_y, q_z)` is the vector part, and
:math:`q_w` is the scalar part.
The resulting matrix :math:`\\mathbf{R}` has the form:
.. math::
\\mathbf{R}(\\mathbf{q}) =
\\begin{bmatrix}
1 - 2(q_y^2 + q_z^2) & 2(q_xq_y - q_wq_z) & 2(q_xq_z + q_wq_y) \\\\
2(q_xq_y + q_wq_z) & 1 - 2(q_x^2 + q_z^2) & 2(q_yq_z - q_wq_x) \\\\
2(q_xq_z - q_wq_y) & 2(q_wq_x + q_yq_z) & 1 - 2(q_x^2 + q_y^2)
\\end{bmatrix}
The identity Quaternion :math:`\\mathbf{q} = \\begin{pmatrix}1 & 0 & 0 & 0\\end{pmatrix}`,
produces a :math:`3 \\times 3` Identity matrix :math:`\\mathbf{I}_3`.
Parameters
----------
q : numpy.ndarray
Quaternion
Returns
-------
R : numpy.ndarray
3-by-3 direction cosine matrix
Examples
--------
>>> R = DCM()
>>> R.from_quaternion([0.70710678, 0.0, 0.70710678, 0.0])
array([[-2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, -2.22044605e-16]])
Non-normalized quaternions will be normalized and transformed too.
>>> R.from_quaternion([1, 0.0, 1, 0.0])
array([[ 2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, 2.22044605e-16]])
A list (or a Numpy array) with N quaternions will return an N-by-3-by-3
array with the corresponding DCMs.
.. code-block::
>>> R.from_q([[1, 0.0, 1, 0.0], [1.0, -1.0, 0.0, 1.0], [0.0, 0.0, -1.0, 1.0]])
array([[[ 2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, 2.22044605e-16]],
[[ 3.33333333e-01, -6.66666667e-01, -6.66666667e-01],
[ 6.66666667e-01, -3.33333333e-01, 6.66666667e-01],
[-6.66666667e-01, -6.66666667e-01, 3.33333333e-01]],
[[-1.00000000e+00, -0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 2.22044605e-16, -1.00000000e+00],
[ 0.00000000e+00, -1.00000000e+00, 2.22044605e-16]]])
"""
if q is None:
return np.identity(3)
q = np.copy(q)
if q.shape[-1] != 4 or q.ndim > 2:
raise ValueError("Quaternion must be of the form (4,) or (N, 4)")
if q.ndim > 1:
q /= np.linalg.norm(q, axis=1)[:, None] # Normalize
R = np.zeros((q.shape[0], 3, 3))
R[:, 0, 0] = 1.0 - 2.0*(q[:, 2]**2 + q[:, 3]**2)
R[:, 1, 0] = 2.0*(q[:, 1]*q[:, 2]+q[:, 0]*q[:, 3])
R[:, 2, 0] = 2.0*(q[:, 1]*q[:, 3]-q[:, 0]*q[:, 2])
R[:, 0, 1] = 2.0*(q[:, 1]*q[:, 2]-q[:, 0]*q[:, 3])
R[:, 1, 1] = 1.0 - 2.0*(q[:, 1]**2 + q[:, 3]**2)
R[:, 2, 1] = 2.0*(q[:, 0]*q[:, 1]+q[:, 2]*q[:, 3])
R[:, 0, 2] = 2.0*(q[:, 1]*q[:, 3]+q[:, 0]*q[:, 2])
R[:, 1, 2] = 2.0*(q[:, 2]*q[:, 3]-q[:, 0]*q[:, 1])
R[:, 2, 2] = 1.0 - 2.0*(q[:, 1]**2 + q[:, 2]**2)
return R
q /= np.linalg.norm(q)
return np.array([
[1.0-2.0*(q[2]**2+q[3]**2), 2.0*(q[1]*q[2]-q[0]*q[3]), 2.0*(q[1]*q[3]+q[0]*q[2])],
[2.0*(q[1]*q[2]+q[0]*q[3]), 1.0-2.0*(q[1]**2+q[3]**2), 2.0*(q[2]*q[3]-q[0]*q[1])],
[2.0*(q[1]*q[3]-q[0]*q[2]), 2.0*(q[0]*q[1]+q[2]*q[3]), 1.0-2.0*(q[1]**2+q[2]**2)]])
def from_q(self, q: np.ndarray) -> np.ndarray:
"""
Synonym of method :meth:`from_quaternion`.
Parameters
----------
q : numpy.ndarray
Quaternion
Returns
-------
R : numpy.ndarray
3-by-3 direction cosine matrix
Examples
--------
>>> R = DCM()
>>> R.from_q([0.70710678, 0.0, 0.70710678, 0.0])
array([[-2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, -2.22044605e-16]])
Non-normalized quaternions will be normalized and transformed too.
>>> R.from_q([1, 0.0, 1, 0.0])
array([[ 2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, 2.22044605e-16]])
A list (or a Numpy array) with N quaternions will return an N-by-3-by-3
array with the corresponding DCMs.
.. code-block::
>>> R.from_q([[1, 0.0, 1, 0.0], [1.0, -1.0, 0.0, 1.0], [0.0, 0.0, -1.0, 1.0]])
array([[[ 2.22044605e-16, 0.00000000e+00, 1.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, 0.00000000e+00],
[-1.00000000e+00, 0.00000000e+00, 2.22044605e-16]],
[[ 3.33333333e-01, -6.66666667e-01, -6.66666667e-01],
[ 6.66666667e-01, -3.33333333e-01, 6.66666667e-01],
[-6.66666667e-01, -6.66666667e-01, 3.33333333e-01]],
[[-1.00000000e+00, -0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 2.22044605e-16, -1.00000000e+00],
[ 0.00000000e+00, -1.00000000e+00, 2.22044605e-16]]])
"""
return self.from_quaternion(q)
def to_quaternion(self, method: str = 'chiaverini', **kw) -> np.ndarray:
"""
Quaternion from Direction Cosine Matrix.
There are five methods available to obtain a quaternion from a
Direction Cosine Matrix:
* ``'chiaverini'`` as described in [Chiaverini]_.
* ``'hughes'`` as described in [Hughes]_.
* ``'itzhack'`` as described in [Bar-Itzhack]_ using version ``3`` by
default. Possible options are integers ``1``, ``2`` or ``3``.
* ``'sarabandi'`` as described in [Sarabandi]_ with a threshold equal
to ``0.0`` by default. Possible threshold values are floats between
``-3.0`` and ``3.0``.
* ``'shepperd'`` as described in [Shepperd]_.
Parameters
----------
method : str, default: ``'chiaverini'``
Method to use. Options are: ``'chiaverini'``, ``'hughes'``,
``'itzhack'``, ``'sarabandi'``, and ``'shepperd'``.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.to_quaternion() # Uses method 'chiaverini' by default
array([ 0.94371436, 0.26853582, -0.14487813, 0.12767944])
>>> R.to_quaternion('shepperd')
array([ 0.94371436, -0.26853582, 0.14487813, -0.12767944])
>>> R.to_quaternion('hughes')
array([ 0.94371436, -0.26853582, 0.14487813, -0.12767944])
>>> R.to_quaternion('itzhack', version=2)
array([ 0.94371436, -0.26853582, 0.14487813, -0.12767944])
>>> R.to_quaternion('sarabandi', threshold=0.5)
array([0.94371436, 0.26853582, 0.14487813, 0.12767944])
"""
q = np.array([1., 0., 0., 0.])
if method.lower()=='hughes':
q = hughes(self.A)
if method.lower()=='chiaverini':
q = chiaverini(self.A)
if method.lower()=='shepperd':
q = shepperd(self.A)
if method.lower()=='itzhack':
q = itzhack(self.A, version=kw.get('version', 3))
if method.lower()=='sarabandi':
q = sarabandi(self.A, eta=kw.get('threshold', 0.0))
return q/np.linalg.norm(q)
def to_q(self, method: str = 'chiaverini', **kw) -> np.ndarray:
"""
Synonym of method :meth:`to_quaternion`.
Parameters
----------
method : str, default: ``'chiaverini'``
Method to use. Options are: ``'chiaverini'``, ``'hughes'``,
``'itzhack'``, ``'sarabandi'``, and ``'shepperd'``.
Examples
--------
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> R.view()
DCM([[ 0.92541658, -0.31879578, -0.20487413],
[ 0.16317591, 0.82317294, -0.54383814],
[ 0.34202014, 0.46984631, 0.81379768]])
>>> R.to_q() # Uses method 'chiaverini' by default
array([ 0.94371436, 0.26853582, -0.14487813, 0.12767944])
>>> R.to_q('shepperd')
array([ 0.94371436, -0.26853582, 0.14487813, -0.12767944])
>>> R.to_q('hughes')
array([ 0.94371436, -0.26853582, 0.14487813, -0.12767944])
>>> R.to_q('itzhack', version=2)
array([ 0.94371436, -0.26853582, 0.14487813, -0.12767944])
>>> R.to_q('sarabandi', threshold=0.5)
array([0.94371436, 0.26853582, 0.14487813, 0.12767944])
"""
return self.to_quaternion(method=method, **kw)
def to_angles(self) -> np.ndarray:
"""
Synonym of method :meth:`to_rpy`.
Returns
-------
a : numpy.ndarray
roll-pitch-yaw angles
"""
return self.to_rpy()
def to_rpy(self) -> np.ndarray:
"""
Roll-Pitch-Yaw Angles from DCM
A set of Roll-Pitch-Yaw angles may be written according to:
.. math::
\\mathbf{a} =
\\begin{bmatrix}\\phi \\\\ \\theta \\\\ \\psi\\end{bmatrix} =
\\begin{bmatrix}\\mathrm{arctan2}(r_{23}, r_{33}) \\\\ -\\arcsin(r_{13}) \\\\ \\mathrm{arctan2}(r_{12}, r_{11})\\end{bmatrix}
Returns
-------
a : numpy.ndarray
roll-pitch-yaw angles.
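Examples
--------
Illustrative sketch only: the DCM is built from angles given in degrees
(as in the constructor examples above), while the returned angles are in
radians.
>>> R = DCM(rpy=[10.0, -20.0, 30.0])
>>> angles_rad = R.to_rpy()              # array([roll, pitch, yaw]), in radians
>>> angles_deg = np.degrees(angles_rad)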
"""
phi = np.arctan2(self.A[1, 2], self.A[2, 2]) # Roll Angle
theta = -np.arcsin(self.A[0, 2]) # Pitch Angle
psi = np.arctan2(self.A[0, 1], self.A[0, 0]) # Yaw Angle
return np.array([phi, theta, psi])
def ode(self, w: np.ndarray) -> np.ndarray:
"""
Ordinary Differential Equation of the DCM.
Parameters
----------
w : numpy.ndarray
Angular velocity, in rad/s, about X-, Y- and Z-axis.
Returns
-------
dR/dt : numpy.ndarray
Derivative of DCM
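Examples
--------
Illustrative sketch, not a recommended integrator: propagate the attitude
one small step with a forward-Euler update (the angular velocity and the
time step are made-up values).
>>> R = DCM()                              # identity attitude
>>> w = np.array([0.01, -0.02, 0.03])      # rad/s
>>> dt = 0.01                              # s
>>> R_updated = R + R.ode(w)*dt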
"""
return self.A@skew(w) | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/common/dcm.py | dcm.py |
import numpy as np
from .constants import EARTH_FIRST_ECCENTRICITY
from .constants import EARTH_SECOND_ECCENTRICITY_2
from .constants import EARTH_EQUATOR_RADIUS
from .constants import EARTH_POLAR_RADIUS
def geo2rect(lon: float, lat: float, h: float, r: float, ecc: float = EARTH_SECOND_ECCENTRICITY_2) -> np.ndarray:
"""
Geodetic to Rectangular Coordinates conversion in the e-frame.
Parameters
----------
lon : float
Longitude
lat : float
Latitude
h : float
Height above ellipsoidal surface
r : float
Normal radius
ecc : float, default: 6.739496742276486e-3
Ellipsoid's second eccentricity squared. Defaults to Earth's.
Returns
-------
X : numpy.ndarray
ECEF rectangular coordinates
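Examples
--------
Illustrative sketch, assuming angles in radians and Earth's equatorial
radius as the normal radius: a point on the equator and prime meridian
maps to approximately [6378137, 0, 0].
>>> X = geo2rect(lon=0.0, lat=0.0, h=0.0, r=EARTH_EQUATOR_RADIUS)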
"""
X = np.zeros(3)
X[0] = (r+h)*np.cos(lat)*np.cos(lon)
X[1] = (r+h)*np.cos(lat)*np.sin(lon)
X[2] = (r*(1.0-ecc)+h)*np.sin(lat)
return X
def rec2geo(X: np.ndarray, a: float = EARTH_EQUATOR_RADIUS, b: float = EARTH_POLAR_RADIUS, e: float = EARTH_FIRST_ECCENTRICITY, ecc: float = EARTH_SECOND_ECCENTRICITY_2) -> np.ndarray:
"""
Rectangular to Geodetic Coordinates conversion in the e-frame.
Parameters
----------
X : numpy.ndarray
Rectangular coordinates in the e-frame.
a : float, default: 6378137.0
Ellipsoid's equatorial radius, in meters. Defaults to Earth's.
b : float, default: 6356752.3142
Ellipsoid's polar radius, in meters. Defaults to Earth's.
e : float, default: 0.081819190842622
Ellipsoid's first eccentricity. Defaults to Earth's.
ecc : float, default: 6.739496742276486e-3
Ellipsoid's second eccentricity squared. Defaults to Earth's.
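Returns
-------
lla : numpy.ndarray
Geodetic coordinates [longitude, latitude, height], with angles in
radians and height in meters.
Examples
--------
Illustrative sketch, the inverse of the example in ``geo2rect``: the
recovered longitude, latitude and height are all approximately zero.
>>> lla = rec2geo(np.array([6378137.0, 0.0, 0.0]))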
"""
x, y, z = X
p = np.linalg.norm([x, y])
theta = np.arctan(z*a/(p*b))
lon = 2*np.arctan(y / (x + p))
lat = np.arctan((z + ecc*b*np.sin(theta)**3) / (p - e**2*a*np.cos(theta)**3))   # Bowring's formula: e is the first eccentricity, so it must be squared here
N = a**2/np.sqrt(a**2*np.cos(lat)**2 + b**2*np.sin(lat)**2)
h = p/np.cos(lat) - N
return np.array([lon, lat, h])
def llf2ecef(lat, lon):
"""
Transform coordinates from LLF to ECEF
Parameters
----------
lat : float
Latitude.
lon : float
Longitude.
Returns
-------
R : np.ndarray
Rotation Matrix.
"""
return np.array([
[-np.sin(lat), -np.sin(lon)*np.cos(lat), np.cos(lon)*np.cos(lat)],
[ np.cos(lat), -np.sin(lon)*np.sin(lat), np.cos(lon)*np.sin(lat)],
[0.0, np.cos(lon), np.sin(lon)]])
def ecef2llf(lat, lon):
"""
Transform coordinates from ECEF to LLF
Parameters
----------
lat : float
Latitude.
lon : float
Longitude.
Returns
-------
R : np.ndarray
Rotation Matrix.
"""
return np.array([
[-np.sin(lat), np.cos(lat), 0.0],
[-np.sin(lon)*np.cos(lat), -np.sin(lon)*np.sin(lat), np.cos(lon)],
[np.cos(lon)*np.cos(lat), np.cos(lon)*np.sin(lat), np.sin(lon)]])
def eci2ecef(w, t=0):
"""
Transformation between ECI and ECEF
Parameters
----------
w : float
Rotation rate in rad/s
t : float, default: 0.0
Time since reference epoch.
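Returns
-------
R : np.ndarray
Rotation Matrix.
Examples
--------
Illustrative sketch, assuming Earth's mean rotation rate and one hour of
elapsed time (both values are only for the example).
>>> R = eci2ecef(7.292115e-5, t=3600.0)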
"""
return np.array([
[ np.cos(w*t), np.sin(w*t), 0.0],
[-np.sin(w*t), np.cos(w*t), 0.0],
[ 0.0, 0.0, 1.0]])
def ecef2enu(lat, lon):
"""
Transform coordinates from ECEF to ENU
Parameters
----------
lat : float
Latitude.
lon : float
Longitude.
Returns
-------
R : np.ndarray
Rotation Matrix.
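Examples
--------
Illustrative check with arbitrary coordinates (angles in radians): the
returned matrix is orthonormal.
>>> R = ecef2enu(np.radians(52.2), np.radians(21.0))
>>> np.allclose(R @ R.T, np.identity(3))
True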
"""
return np.array([
[-np.sin(lon), np.cos(lon), 0.0],
[-np.cos(lat)*np.cos(lon), -np.cos(lat)*np.sin(lon), np.sin(lat)],
[np.sin(lat)*np.cos(lon), np.sin(lat)*np.sin(lon), np.cos(lat)]])
def enu2ecef(lat, lon):
"""
Transform coordinates from ENU to ECEF
Parameters
----------
lat : float
Latitude.
lon : float
Longitude.
Returns
-------
R : np.ndarray
Rotation Matrix.
"""
return np.array([
[-np.sin(lon), -np.cos(lat)*np.cos(lon), np.sin(lat)*np.cos(lon)],
[np.cos(lon), -np.cos(lat)*np.sin(lon), np.sin(lat)*np.sin(lon)],
[0.0, np.sin(lat), np.cos(lat)]])
def ned2enu(x):
"""
Transform coordinates from NED to ENU.
Parameters
----------
x : np.ndarray
3D coordinates of point(s) to project.
Returns
-------
x' : np.ndarray
Transformed coordinates.
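Examples
--------
Illustrative example: a point 1 m North, 2 m East and 3 m Down becomes
2 m East, 1 m North and 3 m Up.
>>> ned2enu(np.array([1.0, 2.0, 3.0]))
array([ 2.,  1., -3.])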
"""
if x.shape[-1] != 3 or x.ndim > 2:
raise ValueError(f"Given coordinates must have form (3, ) or (N, 3). Got {x.shape}")
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])
if x.ndim > 1:
return (A @ x.T).T
return A @ x
def enu2ned(x):
"""
Transform coordinates from ENU to NED.
Parameters
----------
x : np.ndarray
3D coordinates of point(s) to project.
Returns
-------
x' : np.ndarray
Transformed coordinates.
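Examples
--------
Illustrative example, the round trip of the one in ``ned2enu``.
>>> enu2ned(np.array([2.0, 1.0, -3.0]))
array([1., 2., 3.])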
"""
if x.shape[-1] != 3 or x.ndim > 2:
raise ValueError(f"Given coordinates must have form (3, ) or (N, 3). Got {x.shape}")
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])
if x.ndim > 1:
return (A @ x.T).T
return A @ x | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/common/frames.py | frames.py |
import numpy as np
from typing import Type, Union, Any, Tuple, List
# Functions to convert DCM to quaternion representation
from .orientation import shepperd
from .orientation import hughes
from .orientation import chiaverini
from .orientation import itzhack
from .orientation import sarabandi
def slerp(q0: np.ndarray, q1: np.ndarray, t_array: np.ndarray, threshold: float = 0.9995) -> np.ndarray:
"""Spherical Linear Interpolation between two quaternions.
Return a valid quaternion rotation at a specified distance along the minor
arc of a great circle passing through any two existing quaternion endpoints
lying on the unit radius hypersphere.
Based on the method detailed in [Wiki_SLERP]_
Parameters
----------
q0 : numpy.ndarray
First endpoint quaternion.
q1 : numpy.ndarray
Second endpoint quaternion.
t_array : numpy.ndarray
Array of times to interpolate to.
threshold : float, default: 0.9995
Threshold to closeness of interpolation.
Returns
-------
q : numpy.ndarray
New quaternion representing the interpolated rotation.
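Examples
--------
Illustrative sketch with made-up endpoints: interpolate a quarter, half
and three quarters of the way between the identity and a half-turn about
the Z-axis (the midpoint is roughly [0.7071, 0, 0, 0.7071]).
>>> q0 = np.array([1.0, 0.0, 0.0, 0.0])
>>> q1 = np.array([0.0, 0.0, 0.0, 1.0])
>>> interpolated = slerp(q0, q1, np.array([0.25, 0.5, 0.75]))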
"""
qdot = np.dot(q0, q1)
# Ensure SLERP takes the shortest path
if qdot < 0.0:
q1 *= -1.0
qdot *= -1.0
# Interpolate linearly (LERP)
if qdot > threshold:
result = q0[np.newaxis, :] + t_array[:, np.newaxis]*(q1 - q0)[np.newaxis, :]
return (result.T / np.linalg.norm(result, axis=1)).T
# Angle between vectors
theta_0 = np.arccos(qdot)
sin_theta_0 = np.sin(theta_0)
theta = theta_0*t_array
sin_theta = np.sin(theta)
s0 = np.cos(theta) - qdot*sin_theta/sin_theta_0
s1 = sin_theta/sin_theta_0
return s0[:,np.newaxis]*q0[np.newaxis,:] + s1[:,np.newaxis]*q1[np.newaxis,:]
class Quaternion(np.ndarray):
"""
Representation of a quaternion. It can be built with 3- or 4-dimensional
vectors. The quaternion object is always normalized to represent rotations
in 3D space, also known as a **versor**.
Parameters
----------
q : array-like, default: None
Vector to build the quaternion with. It can be either 3- or
4-dimensional.
versor : bool, default: True
Treat the quaternion as versor. It will normalize it immediately.
dcm : array-like
Create quaternion object from a 3-by-3 rotation matrix. It is built
only if no array was given to build.
rpy : array-like
Create quaternion object from roll-pitch-yaw angles. It is built only
if no array was given to build.
Attributes
----------
A : numpy.ndarray
Array with the 4 elements of quaternion of the form [w, x, y, z]
w : float
Scalar part of the quaternion.
x : float
First element of the vector part of the quaternion.
y : float
Second element of the vector part of the quaternion.
z : float
Third element of the vector part of the quaternion.
v : numpy.ndarray
Vector part of the quaternion.
Raises
------
ValueError
When length of input array is not equal to either 3 or 4.
Examples
--------
>>> from ahrs import Quaternion
>>> q = Quaternion([1., 2., 3., 4.])
>>> q
Quaternion([0.18257419, 0.36514837, 0.54772256, 0.73029674])
>>> x = [1., 2., 3.]
>>> q.rotate(x)
[1.8 2. 2.6]
>>> R = q.to_DCM()
>>> R@x
[1.8 2. 2.6]
A call to method :meth:`product` returns the Hamilton product of two
quaternions as a plain NumPy array.
>>> q1 = Quaternion([1., 2., 3., 4.])
>>> q2 = Quaternion([5., 4., 3., 2.])
>>> q1.product(q2)
array([-0.49690399, 0.1987616 , 0.74535599, 0.3975232 ])
Multiplication operators are overridden to perform the expected Hamilton
product.
>>> q1*q2
array([-0.49690399, 0.1987616 , 0.74535599, 0.3975232 ])
>>> q1@q2
array([-0.49690399, 0.1987616 , 0.74535599, 0.3975232 ])
Basic operators are also overridden.
>>> q1+q2
Quaternion([0.46189977, 0.48678352, 0.51166727, 0.53655102])
>>> q1-q2
Quaternion([-0.69760203, -0.25108126, 0.19543951, 0.64196028])
Pure quaternions are built from arrays with three elements.
>>> q = Quaternion([1., 2., 3.])
>>> q
Quaternion([0. , 0.26726124, 0.53452248, 0.80178373])
>>> q.is_pure()
True
Conversions between representations are also possible.
>>> q.to_axang()
(array([0.26726124, 0.53452248, 0.80178373]), 3.141592653589793)
>>> q.to_angles()
array([ 1.24904577, -0.44291104, 2.8198421 ])
And a nice representation as a string is also implemented to conform to
Hamilton's notation.
>>> str(q)
'(0.0000 +0.2673i +0.5345j +0.8018k)'
"""
def __new__(subtype, q: np.ndarray = None, versor: bool = True, **kwargs):
if q is None:
q = np.array([1.0, 0.0, 0.0, 0.0])
if "dcm" in kwargs:
q = Quaternion.from_DCM(Quaternion, np.array(kwargs.pop("dcm")))
if "rpy" in kwargs:
q = Quaternion.from_rpy(Quaternion, np.array(kwargs.pop("rpy")))
if "angles" in kwargs: # Older call to rpy
q = Quaternion.from_angles(Quaternion, np.array(kwargs.pop("angles")))
q = np.array(q, dtype=float)
if q.ndim!=1 or q.shape[-1] not in [3, 4]:
raise ValueError("Expected `q` to have shape (4,) or (3,), got {}.".format(q.shape))
if q.shape[-1]==3:
q = np.array([0.0, *q])
if versor:
q /= np.linalg.norm(q)
# Create the ndarray instance of type Quaternion. This will call the
# standard ndarray constructor, but return an object of type Quaternion.
obj = super(Quaternion, subtype).__new__(subtype, q.shape, float, q)
obj.A = q
return obj
@property
def w(self) -> float:
"""Scalar part of the Quaternion.
Given a quaternion :math:`\\mathbf{q}=\\begin{pmatrix}q_w & \\mathbf{q}_v\\end{pmatrix} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
the scalar part, a.k.a. *real* part, is :math:`q_w`.
Returns
-------
w : float
Scalar part of the quaternion.
Examples
--------
>>> q = Quaternion([2.0, -3.0, 4.0, -5.0])
>>> q.view()
Quaternion([ 0.27216553, -0.40824829, 0.54433105, -0.68041382])
>>> q.w
0.2721655269759087
It can also be accessed directly, treating the Quaternion as an array:
>>> q[0]
0.2721655269759087
"""
return self.A[0]
@property
def x(self) -> float:
"""First element of the vector part of the Quaternion.
Given a quaternion :math:`\\mathbf{q}=\\begin{pmatrix}q_w & \\mathbf{q}_v\\end{pmatrix} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
the first element of the vector part is :math:`q_x`.
Returns
-------
x : float
First element of vector part of the quaternion.
Examples
--------
>>> q = Quaternion([2.0, -3.0, 4.0, -5.0])
>>> q.view()
Quaternion([ 0.27216553, -0.40824829, 0.54433105, -0.68041382])
>>> q.x
-0.408248290463863
It can also be accessed directly, treating the Quaternion as an array:
>>> q[1]
-0.408248290463863
"""
return self.A[1]
@property
def y(self) -> float:
"""Second element of the vector part of the Quaternion.
Given a quaternion :math:`\\mathbf{q}=\\begin{pmatrix}q_w & \\mathbf{q}_v\\end{pmatrix} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
the third element of the vector part is :math:`q_y`.
Returns
-------
q_y : float
Second element of vector part of the quaternion.
Examples
--------
>>> q = Quaternion([2.0, -3.0, 4.0, -5.0])
>>> q.view()
Quaternion([ 0.27216553, -0.40824829, 0.54433105, -0.68041382])
>>> q.y
0.5443310539518174
It can also be accessed directly, treating the Quaternion as an array:
>>> q[2]
0.5443310539518174
"""
return self.A[2]
@property
def z(self) -> float:
"""Third element of the vector part of the Quaternion.
Given a quaternion :math:`\\mathbf{q}=\\begin{pmatrix}q_w & \\mathbf{q}_v\\end{pmatrix} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
the third element of the vector part is :math:`q_z`.
Returns
-------
q_z : float
Third element of vector part of the quaternion.
Examples
--------
>>> q = Quaternion([2.0, -3.0, 4.0, -5.0])
>>> q.view()
Quaternion([ 0.27216553, -0.40824829, 0.54433105, -0.68041382])
>>> q.z
-0.6804138174397717
It can also be accessed directly, treating the Quaternion as an array:
>>> q[3]
-0.6804138174397717
"""
return self.A[3]
@property
def v(self) -> np.ndarray:
"""Vector part of the Quaternion.
Given a quaternion :math:`\\mathbf{q}=\\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`
the vector part, a.k.a. *imaginary* part, is
:math:`\\mathbf{q}_v=\\begin{bmatrix}q_x & q_y & q_z\\end{bmatrix}`.
Returns
-------
q_v : numpy.ndarray
Vector part of the quaternion.
Examples
--------
>>> q = Quaternion([2.0, -3.0, 4.0, -5.0])
>>> q.view()
Quaternion([ 0.27216553, -0.40824829, 0.54433105, -0.68041382])
>>> q.v
array([-0.40824829, 0.54433105, -0.68041382])
It can also be accessed directly, treating the Quaternion as an array,
but is returned as a Quaternion object.
>>> q[1:]
Quaternion([-0.40824829, 0.54433105, -0.68041382])
"""
return self.A[1:]
@property
def conjugate(self) -> np.ndarray:
"""
Conjugate of quaternion
A quaternion, whose form is :math:`\\mathbf{q} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
has a conjugate of the form :math:`\\mathbf{q}^* = \\begin{pmatrix}q_w & -q_x & -q_y & -q_z\\end{pmatrix}`.
A product of the quaternion with its conjugate yields:
.. math::
\\mathbf{q}\\mathbf{q}^* =
\\begin{bmatrix}q_w^2 + q_x^2 + q_y^2 + q_z^2\\\\ \\mathbf{0}_v \\end{bmatrix}
A versor (normalized quaternion) multiplied with its own conjugate
gives the identity quaternion back.
.. math::
\\mathbf{q}\\mathbf{q}^* =
\\begin{bmatrix}1 & 0 & 0 & 0 \\end{bmatrix}
Returns
-------
q* : numpy.array
Conjugated quaternion.
Examples
--------
>>> q = Quaternion([0.603297, 0.749259, 0.176548, 0.20850])
>>> q.conjugate
array([0.603297, -0.749259, -0.176548, -0.20850 ])
"""
return self.A*np.array([1.0, -1.0, -1.0, -1.0])
@property
def conj(self) -> np.ndarray:
"""Synonym of property :meth:`conjugate`
Returns
-------
q* : numpy.ndarray
Conjugated quaternion.
Examples
--------
>>> q = Quaternion([0.603297, 0.749259, 0.176548, 0.20850])
>>> q.conj
array([0.603297, -0.749259, -0.176548, -0.20850 ])
"""
return self.conjugate
@property
def inverse(self) -> np.ndarray:
"""
Return the inverse Quaternion
The inverse quaternion :math:`\\mathbf{q}^{-1}` is such that the
quaternion times its inverse gives the identity quaternion
:math:`\\mathbf{q}_I=\\begin{pmatrix}1 & 0 & 0 & 0\\end{pmatrix}`
It is obtained as:
.. math::
\\mathbf{q}^{-1} = \\frac{\\mathbf{q}^*}{\\|\\mathbf{q}\\|^2}
If the quaternion is normalized (called *versor*) its inverse is the
conjugate.
.. math::
\\mathbf{q}^{-1} = \\mathbf{q}^*
Returns
-------
out : numpy.ndarray
Inverse of quaternion.
Examples
--------
>>> q = Quaternion([1., -2., 3., -4.])
>>> q
Quaternion([ 0.18257419, -0.36514837, 0.54772256, -0.73029674])
>>> q.inverse
array([ 0.18257419, 0.36514837, -0.54772256, 0.73029674])
>>> [email protected]
array([1.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.77555756e-17])
"""
if self.is_versor():
return self.conjugate
return self.conjugate / np.linalg.norm(self.A)**2
@property
def inv(self) -> np.ndarray:
"""
Synonym of property :meth:`inverse`
Returns
-------
out : numpy.ndarray
Inverse of quaternion.
Examples
--------
>>> q = Quaternion([1., -2., 3., -4.])
>>> q
Quaternion([ 0.18257419, -0.36514837, 0.54772256, -0.73029674])
>>> q.inv
array([ 0.18257419, 0.36514837, -0.54772256, 0.73029674])
>>> [email protected]
array([1.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.77555756e-17])
"""
return self.inverse
@property
def exponential(self) -> np.ndarray:
"""
Exponential of Quaternion
The quaternion exponential works as in the ordinary case, defined with
the absolute convergent power series:
.. math::
e^{\\mathbf{q}} = \\sum_{k=0}^{\\infty}\\frac{\\mathbf{q}^k}{k!}
The exponential of a **pure quaternion** is, with the help of Euler
formula and the series of :math:`\\cos\\theta` and :math:`\\sin\\theta`,
redefined as:
.. math::
\\begin{array}{rcl}
e^{\\mathbf{q}_v} &=& \\sum_{k=0}^{\\infty}\\frac{\\mathbf{q}_v^k}{k!} \\\\
e^{\\mathbf{u}\\theta} &=&
\\Big(1-\\frac{\\theta^2}{2!} + \\frac{\\theta^4}{4!}+\\cdots\\Big)+
\\Big(\\mathbf{u}\\theta-\\frac{\\mathbf{u}\\theta^3}{3!} + \\frac{\\mathbf{u}\\theta^5}{5!}+\\cdots\\Big) \\\\
&=& \\cos\\theta + \\mathbf{u}\\sin\\theta \\\\
&=& \\begin{bmatrix}\\cos\\theta \\\\ \\mathbf{u}\\sin\\theta \\end{bmatrix}
\\end{array}
Letting :math:`\\mathbf{q}_v = \\mathbf{u}\\theta` with :math:`\\theta=\\|\mathbf{v}\\|`
and :math:`\\|\\mathbf{u}\\|=1`.
Since :math:`\\|e^{\\mathbf{q}_v}\\|^2=\\cos^2\\theta+\\sin^2\\theta=1`,
the exponential of a pure quaternion is always unitary. Therefore, if
the quaternion is real, its exponential is the identity.
For **general quaternions** the exponential is defined using
:math:`\\mathbf{u}\\theta=\\mathbf{q}_v` and the exponential of the pure
quaternion:
.. math::
\\begin{array}{rcl}
e^{\\mathbf{q}} &=& e^{q_w+\\mathbf{q}_v} = e^{q_w}e^{\\mathbf{q}_v}\\\\
&=& e^{q_w}
\\begin{bmatrix}
\\cos\\|\\mathbf{q}_v\\| \\\\ \\frac{\\mathbf{q}}{\\|\\mathbf{q}_v\\|}\\sin\\|\\mathbf{q}_v\\|
\\end{bmatrix}
\\end{array}
Returns
-------
exp : numpy.ndarray
Exponential of quaternion.
Examples
--------
>>> q1 = Quaternion([0.0, -2.0, 3.0, -4.0])
>>> str(q1)
'(0.0000 -0.3714i +0.5571j -0.7428k)'
>>> q1.exponential
[ 0.54030231 -0.31251448 0.46877172 -0.62502896]
>>> q2 = Quaternion([1.0, -2.0, 3.0, -4.0])
>>> str(q2)
'(0.1826 -0.3651i +0.5477j -0.7303k)'
>>> q2.exponential
[ 0.66541052 -0.37101103 0.55651655 -0.74202206]
"""
if self.is_real():
return np.array([1.0, 0.0, 0.0, 0.0])
t = np.linalg.norm(self.v)
u = self.v/t
q_exp = np.array([np.cos(t), *u*np.sin(t)])
if self.is_pure():
return q_exp
q_exp *= np.e**self.w
return q_exp
@property
def exp(self) -> np.ndarray:
"""Synonym of property :meth:`exponential`
Returns
-------
exp : numpy.ndarray
Exponential of quaternion.
Examples
--------
>>> q1 = Quaternion([0.0, -2.0, 3.0, -4.0])
>>> str(q1)
'(0.0000 -0.3714i +0.5571j -0.7428k)'
>>> q1.exp
[ 0.54030231 -0.31251448 0.46877172 -0.62502896]
>>> q2 = Quaternion([1.0, -2.0, 3.0, -4.0])
>>> str(q2)
'(0.1826 -0.3651i +0.5477j -0.7303k)'
>>> q2.exp
[ 0.66541052 -0.37101103 0.55651655 -0.74202206]
"""
return self.exponential
@property
def logarithm(self) -> np.ndarray:
"""Logarithm of Quaternion.
The logarithm of a **general quaternion**
:math:`\\mathbf{q}=\\begin{pmatrix}q_w & \\mathbf{q_v}\\end{pmatrix}`
is obtained from:
.. math::
\\log\\mathbf{q} = \\begin{bmatrix} \\log\\|\\mathbf{q}\\| \\\\ \\mathbf{u}\\theta \\end{bmatrix}
with:
.. math::
\\begin{array}{rcl}
\\mathbf{u} &=& \\frac{\\mathbf{q}_v}{\\|\\mathbf{q}_v\\|} \\\\
\\theta &=& \\arccos\\Big(\\frac{q_w}{\\|\\mathbf{q}\\|}\\Big)
\\end{array}
It is easy to see, that for a **pure quaternion**
:math:`\\mathbf{q}=\\begin{pmatrix}0 & \\mathbf{q_v}\\end{pmatrix}`, the
logarithm simplifies the computation through :math:`\\theta=\\arccos(0)=\\frac{\\pi}{2}`:
.. math::
\\log\\mathbf{q} = \\begin{bmatrix}\\log\\|\\mathbf{q}\\| \\\\ \\mathbf{u}\\frac{\\pi}{2}\\end{bmatrix}
Similarly, for **unitary quaternions** (:math:`\\|\\mathbf{q}\\|=1`)
the logarithm is:
.. math::
\\log\\mathbf{q} = \\begin{bmatrix} 0 \\\\ \\mathbf{u}\\arccos(q_w) \\end{bmatrix}
which further reduces for **pure unitary quaternions** (:math:`q_w=0` and :math:`\\|\\mathbf{q}\\|=1`)
to:
.. math::
\\log\\mathbf{q} = \\begin{bmatrix} 0 \\\\ \\mathbf{u}\\frac{\\pi}{2} \\end{bmatrix}
Returns
-------
log : numpy.ndarray
Logarithm of quaternion.
Examples
--------
>>> q = Quaternion([1.0, -2.0, 3.0, -4.0])
>>> q.view()
Quaternion([ 0.18257419, -0.36514837, 0.54772256, -0.73029674])
>>> q.logarithm
array([ 0. , -0.51519029, 0.77278544, -1.03038059])
>>> q = Quaternion([0.0, 1.0, -2.0, 3.0])
>>> q.view()
Quaternion([ 0. , 0.26726124, -0.53452248, 0.80178373])
>>> q.logarithm
array([ 0. , 0.41981298, -0.83962595, 1.25943893])
"""
u = self.v/np.linalg.norm(self.v)
if self.is_versor():
if self.is_pure():
return np.array([0.0, *(0.5*np.pi*u)])
return np.array([0.0, *(u*np.arccos(self.w))])
qn = np.linalg.norm(self.A)
if self.is_pure():
return np.array([np.log(qn), *(0.5*np.pi*u)])
return np.array([np.log(qn), *(u*np.arccos(self.w/qn))])
@property
def log(self) -> np.ndarray:
"""Synonym of property :meth:`logarithm`
Returns
-------
log : numpy.ndarray
Logarithm of quaternion.
Examples
--------
>>> q = Quaternion([1.0, -2.0, 3.0, -4.0])
>>> q.view()
Quaternion([ 0.18257419, -0.36514837, 0.54772256, -0.73029674])
>>> q.log
array([ 0. , -0.51519029, 0.77278544, -1.03038059])
>>> q = Quaternion([0.0, 1.0, -2.0, 3.0])
>>> q.view()
Quaternion([ 0. , 0.26726124, -0.53452248, 0.80178373])
>>> q.log
array([ 0. , 0.41981298, -0.83962595, 1.25943893])
"""
return self.logarithm
def __str__(self) -> str:
"""
Build a *printable* representation of quaternion
Returns
-------
q : str
Quaternion written as string.
Examples
--------
>>> q = Quaternion([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
>>> str(q)
'(0.5575 +0.1296i +0.5737j +0.5859k)'
"""
return "({:-.4f} {:+.4f}i {:+.4f}j {:+.4f}k)".format(self.w, self.x, self.y, self.z)
def __add__(self, p: Any):
"""
Add quaternions
Return the sum of two Quaternions. The given input must be of class
Quaternion.
Parameters
----------
p : Quaternion
Second Quaternion to sum. NOT an array.
Returns
-------
q+p : Quaternion
Normalized sum of quaternions
Examples
--------
>>> q1 = Quaternion([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
>>> q2 = Quaternion([0.49753507, 0.50806522, 0.52711628, 0.4652709])
>>> q3 = q1+q2
>>> str(q3)
'(0.5386 +0.3255i +0.5620j +0.5367k)'
"""
return Quaternion(self.to_array() + p)
def __sub__(self, p: Any):
"""
Difference of quaternions
Return the difference between two Quaternions. The given input must be
of class Quaternion.
Returns
-------
q-p : Quaternion
Normalized difference of quaternions
Examples
--------
>>> q1 = Quaternion([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
>>> q2 = Quaternion([0.49753507, 0.50806522, 0.52711628, 0.4652709])
>>> q3 = q1-q2
>>> str(q3)
'(0.1482 -0.9358i +0.1152j +0.2983k)'
"""
return Quaternion(self.to_array() - p)
def __mul__(self, q: np.ndarray) -> Any:
"""
Product between quaternions
Given two unit quaternions :math:`\\mathbf{p}=(p_w, p_x, p_y, p_z)` and
:math:`\\mathbf{q} = (q_w, q_x, q_y, q_z)`, their product is obtained
[Dantam]_ [MWQW]_ as:
.. math::
\\mathbf{pq} =
\\begin{bmatrix}
p_w q_w - p_x q_x - p_y q_y - p_z q_z \\\\
p_x q_w + p_w q_x - p_z q_y + p_y q_z \\\\
p_y q_w + p_z q_x + p_w q_y - p_x q_z \\\\
p_z q_w - p_y q_x + p_x q_y + p_w q_z
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray, Quaternion
Second quaternion to multiply with.
Returns
-------
out : Quaternion
Product of quaternions.
Examples
--------
>>> q1 = Quaternion([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
>>> q2 = Quaternion([0.49753507, 0.50806522, 0.52711628, 0.4652709])
>>> q1*q2
array([-0.36348726, 0.38962514, 0.34188103, 0.77407146])
>>> q3 = q1*q2
>>> q3
<ahrs.common.quaternion.Quaternion object at 0x000001F379003748>
>>> str(q3)
'(-0.3635 +0.3896i +0.3419j +0.7740k)'
"""
if not hasattr(q, 'A'):
q = Quaternion(q)
return self.product(q.A)
def __matmul__(self, q: np.ndarray) -> np.ndarray:
"""
Product between quaternions using @ operator
Given two unit quaternions :math:`\\mathbf{p}=(p_w, p_x, p_y, p_z)` and
:math:`\\mathbf{q} = (q_w, q_x, q_y, q_z)`, their product is obtained
[Dantam]_ [MWQW]_ as:
.. math::
\\mathbf{pq} =
\\begin{bmatrix}
p_w q_w - p_x q_x - p_y q_y - p_z q_z \\\\
p_x q_w + p_w q_x - p_z q_y + p_y q_z \\\\
p_y q_w + p_z q_x + p_w q_y - p_x q_z \\\\
p_z q_w - p_y q_x + p_x q_y + p_w q_z
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray, Quaternion
Second quaternion to multiply with.
Returns
-------
out : Quaternion
Product of quaternions.
Examples
--------
>>> q1 = Quaternion([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
>>> q2 = Quaternion([0.49753507, 0.50806522, 0.52711628, 0.4652709])
>>> q1@q2
array([-0.36348726, 0.38962514, 0.34188103, 0.77407146])
>>> q3 = q1@q2
>>> q3
<ahrs.common.quaternion.Quaternion object at 0x000001F379003748>
>>> str(q3)
'(-0.3635 +0.3896i +0.3419j +0.7740k)'
"""
if not hasattr(q, 'A'):
q = Quaternion(q)
return self.product(q.A)
def __pow__(self, a: float) -> np.ndarray:
"""
Returns array of quaternion to the power of ``a``
Assuming the quaternion is a versor, its power can be defined using the
exponential:
.. math::
\\begin{eqnarray}
\\mathbf{q}^a & = & e^{\\log(\\mathbf{q}^a)} \\\\
& = & e^{a \\log(\\mathbf{q})} \\\\
& = & e^{a \\mathbf{u}\\theta} \\\\
& = & \\begin{bmatrix}
\\cos(a\\theta) \\\\
\\mathbf{u} \\sin(a\\theta)
\\end{bmatrix}
\\end{eqnarray}
Parameters
----------
a : float
Value to which to calculate quaternion power.
Returns
-------
q**a : numpy.ndarray
Quaternion :math:`\\mathbf{q}` to the power of ``a``
"""
return np.e**(a*self.logarithm)
def is_pure(self) -> bool:
"""
Returns a bool value, where ``True`` if quaternion is pure.
A pure quaternion has a scalar part equal to zero: :math:`\\mathbf{q} = 0 + xi + yj + zk`
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: w = 0 \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : bool
Boolean equal to True if :math:`q_w = 0`.
Examples
--------
>>> q = Quaternion()
>>> q
Quaternion([1., 0., 0., 0.])
>>> q.is_pure()
False
>>> q = Quaternion([0., 1., 2., 3.])
>>> q
Quaternion([0. , 0.26726124, 0.53452248, 0.80178373])
>>> q.is_pure()
True
"""
return self.w==0.0
def is_real(self) -> bool:
"""
Returns a bool value, where ``True`` if quaternion is real.
A real quaternion has all elements of its vector part equal to zero:
:math:`\\mathbf{q} = w + 0i + 0j + 0k = \\begin{pmatrix} q_w & \\mathbf{0}\\end{pmatrix}`
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: \\mathbf{q}_v = \\begin{bmatrix} 0 & 0 & 0 \\end{bmatrix} \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : bool
Boolean equal to True if :math:`q_v = 0`.
Examples
--------
>>> q = Quaternion()
>>> q
Quaternion([1., 0., 0., 0.])
>>> q.is_real()
True
>>> q = Quaternion([0., 1., 2., 3.])
>>> q
Quaternion([0. , 0.26726124, 0.53452248, 0.80178373])
>>> q.is_real()
False
>>> q = Quaternion([1., 2., 3., 4.])
>>> q
Quaternion([0.18257419, 0.36514837, 0.54772256, 0.73029674])
>>> q.is_real()
False
>>> q = Quaternion([5., 0., 0., 0.]) # All quaternions are normalized, by default
>>> q
Quaternion([1., 0., 0., 0.])
>>> q.is_real()
True
"""
return not any(self.v)
def is_versor(self) -> bool:
"""
Returns a bool value, where ``True`` if quaternion is a versor.
A versor is a quaternion of norm equal to one:
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: \\|\\mathbf{q}\\| = 1 \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : bool
Boolean equal to ``True`` if :math:`\\|\\mathbf{q}\\|=1`.
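Examples
--------
Quaternions are normalized at construction by default, so this is
normally ``True``:
>>> Quaternion([1., 2., 3., 4.]).is_versor()
True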
"""
return np.isclose(np.linalg.norm(self.A), 1.0)
def is_identity(self) -> bool:
"""
Returns a bool value, where ``True`` if quaternion is identity quaternion.
An **identity quaternion** has its scalar part equal to 1, and its
vector part equal to 0, such that :math:`\\mathbf{q} = 1 + 0i + 0j + 0k`.
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: \\mathbf{q}\\ = \\begin{pmatrix} 1 & 0 & 0 & 0 \\end{pmatrix} \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : bool
Boolean equal to ``True`` if :math:`\\mathbf{q}=\\begin{pmatrix}1 & 0 & 0 & 0\\end{pmatrix}`.
Examples
--------
>>> q = Quaternion()
>>> q
Quaternion([1., 0., 0., 0.])
>>> q.is_identity()
True
>>> q = Quaternion([0., 1., 0., 0.])
>>> q
Quaternion([0., 1., 0., 0.])
>>> q.is_identity()
False
"""
return np.allclose(self.A, np.array([1.0, 0.0, 0.0, 0.0]))
def normalize(self) -> None:
"""Normalize the quaternion
"""
self.A /= np.linalg.norm(self.A)
def product(self, q: np.ndarray) -> np.ndarray:
"""
Product of two quaternions.
Given two unit quaternions :math:`\\mathbf{p}=(p_w, \\mathbf{p}_v)` and
:math:`\\mathbf{q} = (q_w, \\mathbf{q}_v)`, their product is defined
[Sola]_ [Dantam]_ as:
.. math::
\\begin{eqnarray}
\\mathbf{pq} & = & \\big( (q_w p_w - \\mathbf{q}_v \\cdot \\mathbf{p}_v) \\; ,
\\; \\mathbf{q}_v \\times \\mathbf{p}_v + q_w \\mathbf{p}_v + p_w \\mathbf{q}_v \\big) \\\\
& = &
\\begin{bmatrix}
p_w & -\\mathbf{p}_v^T \\\\ \\mathbf{p}_v & p_w \\mathbf{I}_3 + \\lfloor \\mathbf{p}_v \\rfloor_\\times
\\end{bmatrix}
\\begin{bmatrix} q_w \\\\ \\mathbf{q}_v \\end{bmatrix}
\\\\
& = &
\\begin{bmatrix}
p_w & -p_x & -p_y & -p_z \\\\
p_x & p_w & -p_z & p_y \\\\
p_y & p_z & p_w & -p_x \\\\
p_z & -p_y & p_x & p_w
\\end{bmatrix}
\\begin{bmatrix} q_w \\\\ q_x \\\\ q_y \\\\ q_z \\end{bmatrix}
\\\\
& = &
\\begin{bmatrix}
p_w q_w - p_x q_x - p_y q_y - p_z q_z \\\\
p_w q_x + p_x q_w + p_y q_z - p_z q_y \\\\
p_w q_y - p_x q_z + p_y q_w + p_z q_x \\\\
p_w q_z + p_x q_y - p_y q_x + p_z q_w
\\end{bmatrix}
\\end{eqnarray}
where :math:`\\lfloor \\mathbf{a} \\rfloor_\\times` represents the
`skew-symmetric matrix <https://en.wikipedia.org/wiki/Skew-symmetric_matrix>`_
of :math:`\\mathbf{a}`.
Parameters
----------
r : numpy.ndarray, Quaternion
Quaternion to multiply with.
Returns
-------
qr : numpy.ndarray
Product of quaternions.
Examples
--------
>>> q1 = Quaternion([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
Can multiply with a given quaternion in vector form...
>>> q1.product([0.49753507, 0.50806522, 0.52711628, 0.4652709])
array([-0.36348726, 0.38962514, 0.34188103, 0.77407146])
or with a Quaternion object...
>>> q2 = Quaternion([0.49753507, 0.50806522, 0.52711628, 0.4652709 ])
>>> q1.product(q2)
array([-0.36348726, 0.38962514, 0.34188103, 0.77407146])
It holds with the result after the cross and dot product definition
>>> qw = q1.w*q2.w - np.dot(q1.v, q2.v)
>>> qv = q1.w*q2.v + q2.w*q1.v + np.cross(q1.v, q2.v)
>>> qw, qv
(-0.36348726, array([0.38962514, 0.34188103, 0.77407146]))
"""
if isinstance(q, Quaternion):
q = q.A.copy()
q /= np.linalg.norm(q)
return np.array([
self.w*q[0] - self.x*q[1] - self.y*q[2] - self.z*q[3],
self.w*q[1] + self.x*q[0] + self.y*q[3] - self.z*q[2],
self.w*q[2] - self.x*q[3] + self.y*q[0] + self.z*q[1],
self.w*q[3] + self.x*q[2] - self.y*q[1] + self.z*q[0]])
def mult_L(self) -> np.ndarray:
"""
Matrix form of a left-sided quaternion multiplication Q.
Matrix representation of quaternion product with a left sided
quaternion [Sarkka]_:
.. math::
\\mathbf{qp} = \\mathbf{L}(\\mathbf{q})\\mathbf{p} =
\\begin{bmatrix}
q_w & -q_x & -q_y & -q_z \\\\
q_x & q_w & -q_z & q_y \\\\
q_y & q_z & q_w & -q_x \\\\
q_z & -q_y & q_x & q_w
\\end{bmatrix}
\\begin{bmatrix}p_w \\\\ p_x \\\\ p_y \\\\ p_z\\end{bmatrix}
Returns
-------
Q : numpy.ndarray
Matrix form of the left side quaternion multiplication.
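Examples
--------
Illustrative check with arbitrary quaternions: the matrix form reproduces
the Hamilton product computed by :meth:`product`.
>>> p = Quaternion([1., 2., 3., 4.])
>>> q = Quaternion([5., 4., 3., 2.])
>>> np.allclose(p.mult_L() @ q.A, p.product(q))
True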
"""
return np.array([
[self.w, -self.x, -self.y, -self.z],
[self.x, self.w, -self.z, self.y],
[self.y, self.z, self.w, -self.x],
[self.z, -self.y, self.x, self.w]])
def mult_R(self) -> np.ndarray:
"""
Matrix form of a right-sided quaternion multiplication Q.
Matrix representation of quaternion product with a right sided
quaternion [Sarkka]_:
.. math::
\\mathbf{qp} = \\mathbf{R}(\\mathbf{p})\\mathbf{q} =
\\begin{bmatrix}
p_w & -p_x & -p_y & -p_z \\\\
p_x & p_w & p_z & -p_y \\\\
p_y & -p_z & p_w & p_x \\\\
p_z & p_y & -p_x & p_w
\\end{bmatrix}
\\begin{bmatrix}q_w \\\\ q_x \\\\ q_y \\\\ q_z\\end{bmatrix}
Returns
-------
Q : numpy.ndarray
Matrix form of the right side quaternion multiplication.
"""
return np.array([
[self.w, -self.x, -self.y, -self.z],
[self.x, self.w, self.z, -self.y],
[self.y, -self.z, self.w, self.x],
[self.z, self.y, -self.x, self.w]])
def rotate(self, a: np.ndarray) -> np.ndarray:
"""Rotate array :math:`\\mathbf{a}` through quaternion :math:`\\mathbf{q}`.
Parameters
----------
a : numpy.ndarray
3-by-N array to rotate in 3 dimensions, where N is the number of
vectors to rotate.
Returns
-------
a' : numpy.ndarray
3-by-N rotated array by current quaternion.
Examples
--------
>>> q = Quaternion([-0.00085769, -0.0404217, 0.29184193, -0.47288709])
>>> v = [0.25557699, 0.74814091, 0.71491841]
>>> q.rotate(v)
array([-0.22481078 -0.99218916 -0.31806219])
>>> A = [[0.18029565, 0.14234782], [0.47473686, 0.38233722], [0.90000689, 0.06117298]]
>>> q.rotate(A)
array([[-0.10633285 -0.16347163]
[-1.02790041 -0.23738541]
[-0.00284403 -0.29514739]])
"""
a = np.array(a)
if a.shape[0] != 3:
raise ValueError("Expected `a` to have shape (3, N) or (3,), got {}.".format(a.shape))
return self.to_DCM()@a
def to_array(self) -> np.ndarray:
"""
Return quaternion as a NumPy array
Quaternion values are stored in attribute ``A``, which is a NumPy array.
This method simply returns that attribute.
Returns
-------
out : numpy.ndarray
Quaternion.
Examples
--------
>>> q = Quaternion([0.0, -2.0, 3.0, -4.0])
>>> q.to_array()
array([ 0. -0.37139068 0.55708601 -0.74278135])
>>> type(q.to_array())
<class 'numpy.ndarray'>
"""
return self.A
def to_list(self) -> list:
"""
Return quaternion as list
Quaternion values are stored in attribute ``A``, which is a NumPy array.
This method reads that attribute and returns it as a list.
Returns
-------
out : list
Quaternion values in list
Examples
--------
>>> q = Quaternion([0.0, -2.0, 3.0, -4.0])
>>> q.to_list()
[0.0, -0.3713906763541037, 0.5570860145311556, -0.7427813527082074]
"""
return self.A.tolist()
def to_axang(self) -> Tuple[np.ndarray, float]:
"""
Return equivalent axis-angle representation of the quaternion.
Returns
-------
axis : numpy.ndarray
Three-dimensional axis to rotate about
angle : float
Amount of rotation, in radians, to rotate about.
Examples
--------
>>> q = Quaternion([0.7071, 0.7071, 0.0, 0.0])
>>> q.to_axang()
(array([1., 0., 0.]), 1.5707963267948966)
"""
denom = np.linalg.norm(self.v)
angle = 2.0*np.arctan2(denom, self.w)
axis = np.zeros(3) if angle==0.0 else self.v/denom
return axis, angle
def to_angles(self) -> np.ndarray:
"""
Return corresponding Euler angles of quaternion.
Given a unit quaternion :math:`\\mathbf{q} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
its corresponding Euler angles [WikiConversions]_ are:
.. math::
\\begin{bmatrix}
\\phi \\\\ \\theta \\\\ \\psi
\\end{bmatrix} =
\\begin{bmatrix}
\\mathrm{atan2}\\big(2(q_wq_x + q_yq_z), 1-2(q_x^2+q_y^2)\\big) \\\\
\\arcsin\\big(2(q_wq_y - q_zq_x)\\big) \\\\
\\mathrm{atan2}\\big(2(q_wq_z + q_xq_y), 1-2(q_y^2+q_z^2)\\big)
\\end{bmatrix}
Returns
-------
angles : numpy.ndarray
Euler angles of quaternion.
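Examples
--------
Illustrative example: the identity quaternion corresponds to zero roll,
pitch and yaw.
>>> Quaternion().to_angles()
array([0., 0., 0.])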
"""
phi = np.arctan2(2.0*(self.w*self.x + self.y*self.z), 1.0 - 2.0*(self.x**2 + self.y**2))
theta = np.arcsin(2.0*(self.w*self.y - self.z*self.x))
psi = np.arctan2(2.0*(self.w*self.z + self.x*self.y), 1.0 - 2.0*(self.y**2 + self.z**2))
return np.array([phi, theta, psi])
def to_DCM(self) -> np.ndarray:
"""
Return a Direction Cosine matrix :math:`\\mathbf{R} \\in SO(3)` from a
given unit quaternion :math:`\\mathbf{q}`.
The given unit quaternion must have the form
:math:`\\mathbf{q} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
where :math:`\\mathbf{q}_v = \\begin{bmatrix}q_x & q_y & q_z\\end{bmatrix}`
is the vector part, and :math:`q_w` is the scalar part.
The resulting matrix :math:`\\mathbf{R}` [WikiConversions]_ has the
form:
.. math::
\\mathbf{R}(\\mathbf{q}) =
\\begin{bmatrix}
1 - 2(q_y^2 + q_z^2) & 2(q_xq_y - q_wq_z) & 2(q_xq_z + q_wq_y) \\\\
2(q_xq_y + q_wq_z) & 1 - 2(q_x^2 + q_z^2) & 2(q_yq_z - q_wq_x) \\\\
2(q_xq_z - q_wq_y) & 2(q_wq_x + q_yq_z) & 1 - 2(q_x^2 + q_y^2)
\\end{bmatrix}
The identity Quaternion :math:`\\mathbf{q} = \\begin{pmatrix}1 & 0 & 0 & 0\\end{pmatrix}`,
produces a :math:`3 \\times 3` Identity matrix :math:`\\mathbf{I}_3`.
Returns
-------
DCM : numpy.ndarray
3-by-3 Direction Cosine Matrix
Examples
--------
>>> q = Quaternion()
>>> q.view()
Quaternion([1., 0., 0., 0.])
>>> q.to_DCM()
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> q = Quaternion([1., -2., 3., -4.])
>>> q.view()
Quaternion([ 0.18257419, -0.36514837, 0.54772256, -0.73029674])
>>> q.to_DCM()
array([[-0.66666667, -0.13333333, 0.73333333],
[-0.66666667, -0.33333333, -0.66666667],
[ 0.33333333, -0.93333333, 0.13333333]])
>>> q = Quaternion([0., -4., 3., -2.])
>>> q.view()
Quaternion([ 0. , -0.74278135, 0.55708601, -0.37139068])
>>> q.to_DCM()
array([[ 0.10344828, -0.82758621, 0.55172414],
[-0.82758621, -0.37931034, -0.4137931 ],
[ 0.55172414, -0.4137931 , -0.72413793]])
"""
return np.array([
[1.0-2.0*(self.y**2+self.z**2), 2.0*(self.x*self.y-self.w*self.z), 2.0*(self.x*self.z+self.w*self.y)],
[2.0*(self.x*self.y+self.w*self.z), 1.0-2.0*(self.x**2+self.z**2), 2.0*(self.y*self.z-self.w*self.x)],
[2.0*(self.x*self.z-self.w*self.y), 2.0*(self.w*self.x+self.y*self.z), 1.0-2.0*(self.x**2+self.y**2)]])
def from_DCM(self, dcm: np.ndarray, method: str = 'chiaverini', **kw) -> np.ndarray:
"""
Quaternion from Direction Cosine Matrix.
There are five methods available to obtain a quaternion from a
Direction Cosine Matrix:
* ``'chiaverini'`` as described in [Chiaverini]_
* ``'hughes'`` as described in [Hughes]_
* ``'itzhack'`` as described in [Bar-Itzhack]_
* ``'sarabandi'`` as described in [Sarabandi]_
* ``'shepperd'`` as described in [Shepperd]_
Parameters
----------
dcm : numpy.ndarray
3-by-3 Direction Cosine Matrix.
method : str, default: 'chiaverini'
Method to use. Options are: 'chiaverini', 'hughes', 'itzhack',
'sarabandi', and 'shepperd'.
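Returns
-------
q : numpy.ndarray
Quaternion from the given Direction Cosine Matrix.
Examples
--------
Illustrative check: the identity matrix corresponds to the identity
quaternion.
>>> q = Quaternion()
>>> np.allclose(q.from_DCM(np.identity(3)), [1.0, 0.0, 0.0, 0.0])
True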
"""
if dcm.shape != (3, 3):
raise TypeError("Expected matrix of size (3, 3). Got {}".format(dcm.shape))
q = None
if method.lower()=='hughes':
q = hughes(dcm)
if method.lower()=='chiaverini':
q = chiaverini(dcm)
if method.lower()=='shepperd':
q = shepperd(dcm)
if method.lower()=='itzhack':
q = itzhack(dcm, version=kw.get('version', 3))
if method.lower()=='sarabandi':
q = sarabandi(dcm, eta=kw.get('threshold', 0.0))
if q is None:
raise KeyError("Given method '{}' is not implemented.".format(method))
q /= np.linalg.norm(q)
return q
def from_rpy(self, angles: np.ndarray) -> np.ndarray:
"""
Quaternion from given RPY angles.
The quaternion can be constructed from the Aerospace cardanian angle
sequence that follows the order :math:`\\phi\\to\\theta\\to\\psi`,
where :math:`\\phi` is the **roll** (or *bank*) angle, :math:`\\theta`
is the **pitch** (or *elevation*) angle, and :math:`\\psi` is the
**yaw** (or *heading*) angle.
The composing quaternions are:
.. math::
\\begin{array}{rcl}
\\mathbf{q}_X &=& \\begin{pmatrix}\\cos\\frac{\\phi}{2} & \\sin\\frac{\\phi}{2} & 0 & 0\\end{pmatrix} \\\\ && \\\\
\\mathbf{q}_Y &=& \\begin{pmatrix}\\cos\\frac{\\theta}{2} & 0 & \\sin\\frac{\\theta}{2} & 0\\end{pmatrix} \\\\ && \\\\
\\mathbf{q}_Z &=& \\begin{pmatrix}\\cos\\frac{\\psi}{2} & 0 & 0 & \\sin\\frac{\\psi}{2}\\end{pmatrix}
\\end{array}
The elements of the final quaternion
:math:`\\mathbf{q}=\\mathbf{q}_Z\\mathbf{q}_Y\\mathbf{q}_X = q_w+q_xi+q_yj+q_zk`
are obtained as:
.. math::
\\begin{array}{rcl}
q_w &=& \\cos\\frac{\\psi}{2}\\cos\\frac{\\theta}{2}\\cos\\frac{\\phi}{2} + \\sin\\frac{\\psi}{2}\\sin\\frac{\\theta}{2}\\sin\\frac{\\phi}{2} \\\\ && \\\\
q_x &=& \\cos\\frac{\\psi}{2}\\cos\\frac{\\theta}{2}\\sin\\frac{\\phi}{2} - \\sin\\frac{\\psi}{2}\\sin\\frac{\\theta}{2}\\cos\\frac{\\phi}{2} \\\\ && \\\\
q_y &=& \\cos\\frac{\\psi}{2}\\sin\\frac{\\theta}{2}\\cos\\frac{\\phi}{2} + \\sin\\frac{\\psi}{2}\\cos\\frac{\\theta}{2}\\sin\\frac{\\phi}{2} \\\\ && \\\\
q_z &=& \\sin\\frac{\\psi}{2}\\cos\\frac{\\theta}{2}\\cos\\frac{\\phi}{2} - \\cos\\frac{\\psi}{2}\\sin\\frac{\\theta}{2}\\sin\\frac{\\phi}{2}
\\end{array}
.. warning::
The Aerospace sequence :math:`\\phi\\to\\theta\\to\\psi` is only
one of the `twelve possible rotation sequences
<https://en.wikipedia.org/wiki/Euler_angles#Tait.E2.80.93Bryan_angles>`_
around the main axes. Other sequences might be more suitable for
other applications, but this one is the most common in practice.
Parameters
----------
angles : numpy.ndarray
3 cardanian angles, in radians, following the order: roll -> pitch -> yaw.
Returns
-------
q : numpy.ndarray
Quaternion from roll-pitch-yaw angles.
Examples
--------
>>> from ahrs import DEG2RAD # Helper variable to convert angles to radians
>>> q = Quaternion()
>>> q.from_rpy(np.array([10.0, 20.0, 30.0])*DEG2RAD) # Give roll-pitch-yaw angles as radians.
array([0.95154852, 0.23929834, 0.18930786, 0.03813458])
It can be corroborated with the class `DCM <./dcm.html>`_, which represents a Direction
Cosine Matrix, and can also be built with roll-pitch-yaw angles.
>>> from ahrs import DCM
>>> R = DCM(rpy=[10.0, 20.0, 30.0]) # Here you give the angles as degrees
>>> R
DCM([[ 0.92541658, 0.01802831, 0.37852231],
[ 0.16317591, 0.88256412, -0.44096961],
[-0.34202014, 0.46984631, 0.81379768]])
>>> q.from_DCM(R)
array([0.95154852, 0.23929834, 0.18930786, 0.03813458])
With both approaches the same quaternion is obtained.
"""
angles = np.array(angles)
if angles.ndim != 1 or angles.shape[0] != 3:
raise ValueError("Expected `angles` to have shape (3,), got {}.".format(angles.shape))
yaw, pitch, roll = angles
cy = np.cos(0.5*yaw)
sy = np.sin(0.5*yaw)
cp = np.cos(0.5*pitch)
sp = np.sin(0.5*pitch)
cr = np.cos(0.5*roll)
sr = np.sin(0.5*roll)
q = np.zeros(4)
q[0] = cy*cp*cr + sy*sp*sr
q[1] = cy*cp*sr - sy*sp*cr
q[2] = cy*sp*cr + sy*cp*sr
q[3] = sy*cp*cr - cy*sp*sr
return q
def from_angles(self, angles: np.ndarray) -> np.ndarray:
"""
Synonym to method from_rpy()
Parameters
----------
angles : numpy.ndarray
3 cardanian angles in following order: roll -> pitch -> yaw.
Returns
-------
q : numpy.ndarray
Quaternion from roll-pitch-yaw angles.
Examples
--------
>>> from ahrs import DEG2RAD # Helper variable to convert angles to radians
>>> q = Quaternion()
>>> q.from_angles(np.array([10.0, 20.0, 30.0])*DEG2RAD) # Give roll-pitch-yaw angles as radians.
array([0.95154852, 0.23929834, 0.18930786, 0.03813458])
It can be corroborated with the class `DCM <./dcm.html>`_, which represents a Direction
Cosine Matrix, and can also be built with roll-pitch-yaw angles.
>>> from ahrs import DCM
>>> R = DCM(rpy=[10.0, 20.0, 30.0]) # Here you give the angles as degrees
>>> R
DCM([[ 0.92541658, 0.01802831, 0.37852231],
[ 0.16317591, 0.88256412, -0.44096961],
[-0.34202014, 0.46984631, 0.81379768]])
>>> q.from_DCM(R)
array([0.95154852, 0.23929834, 0.18930786, 0.03813458])
With both approaches the same quaternion is obtained.
"""
return self.from_rpy(angles)
def ode(self, w: np.ndarray) -> np.ndarray:
"""
Ordinary Differential Equation of the quaternion.
Parameters
----------
w : numpy.ndarray
Angular velocity, in rad/s, about X-, Y- and Z-axis.
Returns
-------
dq/dt : numpy.ndarray
Derivative of quaternion
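Examples
--------
Illustrative sketch, not a recommended integrator: one forward-Euler step
with a made-up gyroscope sample and time step.
>>> q = Quaternion([1.0, 0.0, 0.0, 0.0])
>>> w = np.array([0.1, -0.2, 0.3])          # rad/s
>>> dt = 0.01                               # s
>>> q_updated = Quaternion(q.to_array() + q.ode(w)*dt)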
"""
if w.ndim != 1 or w.shape[0] != 3:
raise ValueError("Expected `w` to have shape (3,), got {}.".format(w.shape))
F = np.array([
[0.0, -w[0], -w[1], -w[2]],
[w[0], 0.0, -w[2], w[1]],
[w[1], w[2], 0.0, -w[0]],
[w[2], -w[1], w[0], 0.0]])
return 0.5*[email protected]
def random(self) -> np.ndarray:
"""
Generate a random quaternion
To generate a random quaternion a mapping in SO(3) is first created and
then transformed as explained originally by [Shoemake]_.
Returns
-------
q : numpy.ndarray
Random array corresponding to a valid quaternion
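Examples
--------
The result differs on every call, but it is always a unit quaternion.
>>> q = Quaternion().random()
>>> np.isclose(np.linalg.norm(q), 1.0)
True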
"""
u = np.random.random(3)
q = np.zeros(4)
# Shoemake's subgroup algorithm: u[1] and u[2] pick two angles on the
# circle, while u[0] balances the magnitudes of the two planes.
q[0] = np.sqrt(1.0-u[0])*np.sin(2.0*np.pi*u[1])
q[1] = np.sqrt(1.0-u[0])*np.cos(2.0*np.pi*u[1])
q[2] = np.sqrt(u[0])*np.sin(2.0*np.pi*u[2])
q[3] = np.sqrt(u[0])*np.cos(2.0*np.pi*u[2])
return q / np.linalg.norm(q)
class QuaternionArray(np.ndarray):
"""Array of Quaternions
Class to represent quaternion arrays. It can be built from N-by-3 or N-by-4
arrays. The objects are **always normalized** to represent rotations in 3D
space (versors), unless explicitly specified setting the parameter ``versors``
to ``False``.
If an N-by-3 array is given, it is assumed to represent pure quaternions,
setting their scalar part equal to zero.
Parameters
----------
q : array-like, default: None
N-by-4 or N-by-3 array containing the quaternion values to use.
versors : bool, default: True
Treat quaternions as versors. It will normalize them immediately.
Attributes
----------
array : numpy.ndarray
Array with all N quaternions.
w : numpy.ndarray
Scalar parts of all quaternions.
x : numpy.ndarray
First elements of the vector part of all quaternions.
y : numpy.ndarray
Second elements of the vector part of all quaternions.
z : numpy.ndarray
Third elements of the vector part of all quaternions.
v : numpy.ndarray
Vector part of all quaternions.
Raises
------
ValueError
When length of input array is not equal to either 3 or 4.
Examples
--------
>>> from ahrs import QuaternionArray
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.view()
QuaternionArray([[ 0.39338362, -0.29206111, -0.07445273, 0.86856573],
[ 0.65459935, 0.14192058, -0.69722158, 0.25542183],
[-0.42837174, 0.85451579, -0.02786928, 0.29244439]])
If an N-by-3 array is given, it is used to build an array of pure
quaternions:
>>> Q = QuaternionArray(np.random.random((5, 3))-0.5)
>>> Q.view()
QuaternionArray([[ 0. , -0.73961715, 0.23572589, 0.63039652],
[ 0. , -0.54925142, 0.67303056, 0.49533093],
[ 0. , 0.46936253, 0.39912076, 0.78765566],
[ 0. , 0.52205066, -0.16510523, -0.83678155],
[ 0. , 0.11844943, -0.27839573, -0.95313459]])
Transformations to other representations are possible:
.. code-block:: python
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.to_angles()
array([[-0.41354414, 0.46539024, 2.191703 ],
[-1.6441448 , -1.39912606, 2.21590455],
[-2.12380045, -0.49600967, -0.34589322]])
>>> Q.to_DCM()
array([[[-0.51989927, -0.63986956, -0.56592552],
[ 0.72684856, -0.67941224, 0.10044993],
[-0.44877158, -0.3591183 , 0.81831419]],
[[-0.10271648, -0.53229811, -0.84030235],
[ 0.13649774, 0.82923647, -0.54197346],
[ 0.98530081, -0.17036898, -0.01251876]],
[[ 0.82739916, 0.20292036, 0.52367352],
[-0.2981793 , -0.63144191, 0.71580041],
[ 0.47591988, -0.74840126, -0.46194785]]])
Markley's method to obtain the average quaternion is implemented too:
>>> qts = np.tile([1., -2., 3., -4], (5, 1)) # Five equal arrays
>>> v = np.random.randn(5, 4)*0.1 # Gaussian noise
>>> Q = QuaternionArray(qts + v)
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[ 0.17607515, -0.3839024 , 0.52673809, -0.73767437],
[ 0.16823806, -0.35898889, 0.53664261, -0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
>>> Q.average()
array([-0.17557859, 0.37832975, -0.53884688, 0.73190355])
If, for any reason, the signs of certain quaternions are flipped (they
still represent the same rotation in 3D Euclidean space), we can use the
method ``remove_jumps`` to flip them back.
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[ 0.17607515, -0.3839024 , 0.52673809, -0.73767437],
[ 0.16823806, -0.35898889, 0.53664261, -0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
>>> Q[1:3] *= -1
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[-0.17607515, 0.3839024 , -0.52673809, 0.73767437],
[-0.16823806, 0.35898889, -0.53664261, 0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
>>> Q.remove_jumps()
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[ 0.17607515, -0.3839024 , 0.52673809, -0.73767437],
[ 0.16823806, -0.35898889, 0.53664261, -0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
"""
def __new__(subtype, q: np.ndarray = None, versors: bool = True):
if q is None:
q = np.array([[1.0, 0.0, 0.0, 0.0]])
q = np.array(q, dtype=float)
if q.ndim!=2 or q.shape[-1] not in [3, 4]:
raise ValueError("Expected array to have shape (N, 4) or (N, 3), got {}.".format(q.shape))
if q.shape[-1] == 3:
q = np.c_[np.zeros(q.shape[0]), q]
if versors:
q /= np.linalg.norm(q, axis=1)[:, None]
# Create the ndarray instance of type QuaternionArray. This will call
# the standard ndarray constructor, but return an object of type
# QuaternionArray.
obj = super(QuaternionArray, subtype).__new__(subtype, q.shape, float, q)
obj.array = q
obj.num_qts = q.shape[0]
return obj
@property
def w(self) -> np.ndarray:
"""Scalar parts of all Quaternions.
Having the quaternion elements :math:`\\mathbf{q}_i=\\begin{pmatrix}w_i & \\mathbf{v}_i\\end{pmatrix}=\\begin{pmatrix}w_i & x_i & y_i & z_i\\end{pmatrix}\\in\\mathbb{R}^4`
stacked vertically in an :math:`N\\times 4` matrix :math:`\\mathbf{Q}`:
.. math::
\\mathbf{Q} =
\\begin{bmatrix} \\mathbf{q}_0 \\\\ \\mathbf{q}_1 \\\\ \\vdots \\\\ \\mathbf{q}_{N-1} \\end{bmatrix} =
\\begin{bmatrix} w_0 & x_0 & y_0 & z_0 \\\\ w_1 & x_1 & y_1 & z_1 \\\\
\\vdots & \\vdots & \\vdots & \\vdots \\\\ w_{N-1} & x_{N-1} & y_{N-1} & z_{N-1} \\end{bmatrix}
The scalar elements of all quaternions are:
.. math::
\\mathbf{w} = \\begin{bmatrix}w_0 & w_1 & \\cdots & w_{N-1}\\end{bmatrix}
Returns
-------
w : numpy.ndarray
Scalar parts of all quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.view()
QuaternionArray([[ 0.39338362, -0.29206111, -0.07445273, 0.86856573],
[ 0.65459935, 0.14192058, -0.69722158, 0.25542183],
[-0.42837174, 0.85451579, -0.02786928, 0.29244439]])
>>> Q.w
array([ 0.39338362, 0.65459935, -0.42837174])
They can also be accessed directly, returned as a QuaternionArray:
>>> Q[:, 0]
QuaternionArray([ 0.39338362, 0.65459935, -0.42837174])
"""
return self.array[:, 0]
@property
def x(self) -> np.ndarray:
"""First elements of the vector part of all Quaternions.
Having the quaternion elements :math:`\\mathbf{q}_i=\\begin{pmatrix}w_i & \\mathbf{v}_i\\end{pmatrix}=\\begin{pmatrix}w_i & x_i & y_i & z_i\\end{pmatrix}\\in\\mathbb{R}^4`
stacked vertically in an :math:`N\\times 4` matrix :math:`\\mathbf{Q}`:
.. math::
\\mathbf{Q} =
\\begin{bmatrix} \\mathbf{q}_0 \\\\ \\mathbf{q}_1 \\\\ \\vdots \\\\ \\mathbf{q}_{N-1} \\end{bmatrix} =
\\begin{bmatrix} w_0 & x_0 & y_0 & z_0 \\\\ w_1 & x_1 & y_1 & z_1 \\\\
\\vdots & \\vdots & \\vdots & \\vdots \\\\ w_{N-1} & x_{N-1} & y_{N-1} & z_{N-1} \\end{bmatrix}
The first elements of the vector parts of all quaternions are:
.. math::
\\mathbf{x} = \\begin{bmatrix}x_0 & x_1 & \\cdots & x_{N-1}\\end{bmatrix}
Returns
-------
x : numpy.ndarray
First elements of the vector part of all quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.view()
QuaternionArray([[ 0.39338362, -0.29206111, -0.07445273, 0.86856573],
[ 0.65459935, 0.14192058, -0.69722158, 0.25542183],
[-0.42837174, 0.85451579, -0.02786928, 0.29244439]])
>>> Q.x
array([-0.29206111, 0.14192058, 0.85451579])
They can also be accessed directly, returned as a QuaternionArray:
>>> Q[:, 1]
QuaternionArray([-0.29206111, 0.14192058, 0.85451579])
"""
return self.array[:, 1]
@property
def y(self) -> np.ndarray:
"""Second elements of the vector part of all Quaternions.
Having the quaternion elements :math:`\\mathbf{q}_i=\\begin{pmatrix}w_i & \\mathbf{v}_i\\end{pmatrix}=\\begin{pmatrix}w_i & x_i & y_i & z_i\\end{pmatrix}\\in\\mathbb{R}^4`
stacked vertically in an :math:`N\\times 4` matrix :math:`\\mathbf{Q}`:
.. math::
\\mathbf{Q} =
\\begin{bmatrix} \\mathbf{q}_0 \\\\ \\mathbf{q}_1 \\\\ \\vdots \\\\ \\mathbf{q}_{N-1} \\end{bmatrix} =
\\begin{bmatrix} w_0 & x_0 & y_0 & z_0 \\\\ w_1 & x_1 & y_1 & z_1 \\\\
\\vdots & \\vdots & \\vdots & \\vdots \\\\ w_{N-1} & x_{N-1} & y_{N-1} & z_{N-1} \\end{bmatrix}
The second elements of the vector parts of all quaternions are:
.. math::
\\mathbf{y} = \\begin{bmatrix}y_0 & y_1 & \\cdots & y_{N-1}\\end{bmatrix}
Returns
-------
y : numpy.ndarray
Second elements of the vector part of all quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.view()
QuaternionArray([[ 0.39338362, -0.29206111, -0.07445273, 0.86856573],
[ 0.65459935, 0.14192058, -0.69722158, 0.25542183],
[-0.42837174, 0.85451579, -0.02786928, 0.29244439]])
>>> Q.y
array([-0.07445273, -0.69722158, -0.02786928])
They can also be accessed directly, returned as a QuaternionArray:
>>> Q[:, 2]
QuaternionArray([-0.07445273, -0.69722158, -0.02786928])
"""
return self.array[:, 2]
@property
def z(self) -> np.ndarray:
"""Third elements of the vector part of all Quaternions.
Having the quaternion elements :math:`\\mathbf{q}_i=\\begin{pmatrix}w_i & \\mathbf{v}_i\\end{pmatrix}=\\begin{pmatrix}w_i & x_i & y_i & z_i\\end{pmatrix}\\in\\mathbb{R}^4`
stacked vertically in an :math:`N\\times 4` matrix :math:`\\mathbf{Q}`:
.. math::
\\mathbf{Q} =
\\begin{bmatrix} \\mathbf{q}_0 \\\\ \\mathbf{q}_1 \\\\ \\vdots \\\\ \\mathbf{q}_{N-1} \\end{bmatrix} =
\\begin{bmatrix} w_0 & x_0 & y_0 & z_0 \\\\ w_1 & x_1 & y_1 & z_1 \\\\
\\vdots & \\vdots & \\vdots & \\vdots \\\\ w_{N-1} & x_{N-1} & y_{N-1} & z_{N-1} \\end{bmatrix}
The third elements of the vector parts of all quaternions are:
.. math::
\\mathbf{z} = \\begin{bmatrix}z_0 & z_1 & \\cdots & z_{N-1}\\end{bmatrix}
Returns
-------
z : numpy.ndarray
Third elements of the vector part of all quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.view()
QuaternionArray([[ 0.39338362, -0.29206111, -0.07445273, 0.86856573],
[ 0.65459935, 0.14192058, -0.69722158, 0.25542183],
[-0.42837174, 0.85451579, -0.02786928, 0.29244439]])
>>> Q.z
array([0.86856573, 0.25542183, 0.29244439])
They can also be accessed directly, returned as a QuaternionArray:
>>> Q[:, 3]
QuaternionArray([0.86856573, 0.25542183, 0.29244439])
"""
return self.array[:, 3]
@property
def v(self) -> np.ndarray:
"""Vector part of all Quaternions.
Having the quaternion elements :math:`\\mathbf{q}_i=\\begin{pmatrix}w_i & \\mathbf{v}_i\\end{pmatrix}=\\begin{pmatrix}w_i & x_i & y_i & z_i\\end{pmatrix}\\in\\mathbb{R}^4`
stacked vertically in an :math:`N\\times 4` matrix :math:`\\mathbf{Q}`:
.. math::
\\mathbf{Q} =
\\begin{bmatrix} \\mathbf{q}_0 \\\\ \\mathbf{q}_1 \\\\ \\vdots \\\\ \\mathbf{q}_{N-1} \\end{bmatrix} =
\\begin{bmatrix} w_0 & x_0 & y_0 & z_0 \\\\ w_1 & x_1 & y_1 & z_1 \\\\
\\vdots & \\vdots & \\vdots & \\vdots \\\\ w_{N-1} & x_{N-1} & y_{N-1} & z_{N-1} \\end{bmatrix}
The vector parts of all quaternions are:
.. math::
\\mathbf{V} = \\begin{bmatrix} x_0 & y_0 & z_0 \\\\ x_1 & y_1 & z_1 \\\\
\\vdots & \\vdots & \\vdots \\\\ x_{N-1} & y_{N-1} & z_{N-1} \\end{bmatrix}
Returns
-------
V : numpy.ndarray
N-by-3 array with vector parts of all quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q.view()
QuaternionArray([[ 0.39338362, -0.29206111, -0.07445273, 0.86856573],
[ 0.65459935, 0.14192058, -0.69722158, 0.25542183],
[-0.42837174, 0.85451579, -0.02786928, 0.29244439]])
>>> Q.v
array([[-0.29206111, -0.07445273, 0.86856573],
[ 0.14192058, -0.69722158, 0.25542183],
[ 0.85451579, -0.02786928, 0.29244439]])
They can also be accessed directly, slicing the Quaternion like an
array, but returned as a Quaternion object.
>>> Q[:, 1:]
QuaternionArray([[-0.29206111, -0.07445273, 0.86856573],
[ 0.14192058, -0.69722158, 0.25542183],
[ 0.85451579, -0.02786928, 0.29244439]])
"""
return self.array[:, 1:]
def is_pure(self) -> np.ndarray:
"""Returns an array of boolean values, where a value is ``True`` if its
corresponding quaternion is pure.
A pure quaternion has a scalar part equal to zero: :math:`\\mathbf{q} = 0 + xi + yj + zk`
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: w = 0 \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : np.ndarray
Array of booleans.
Example
-------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q[1, 0] = 0.0
>>> Q.view()
QuaternionArray([[ 0.32014817, 0.47060011, 0.78255824, 0.25227621],
[ 0. , -0.79009137, 0.47021242, -0.26103598],
[-0.65182559, -0.3032904 , 0.16078433, -0.67622979]])
>>> Q.is_pure()
array([False, True, False])
"""
return np.isclose(self.w, 0.0)
def is_real(self) -> np.ndarray:
"""Returns an array of boolean values, where a value is ``True`` if its
corresponding quaternion is real.
A real quaternion has all elements of its vector part equal to zero:
:math:`\\mathbf{q} = w + 0i + 0j + 0k = \\begin{pmatrix} q_w & \\mathbf{0}\\end{pmatrix}`
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: \\mathbf{q}_v = \\begin{bmatrix} 0 & 0 & 0 \\end{bmatrix} \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : np.ndarray
Array of booleans.
Example
-------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q[1, 1:] = 0.0
>>> Q.view()
QuaternionArray([[-0.8061095 , 0.42513151, 0.37790158, -0.16322091],
[ 0.04515362, 0. , 0. , 0. ],
[ 0.29613776, 0.21692562, -0.16253866, -0.91587493]])
>>> Q.is_real()
array([False, True, False])
"""
return np.all(np.isclose(self.v, np.zeros_like(self.v)), axis=1)
def is_versor(self) -> np.ndarray:
"""Returns an array of boolean values, where a value is ``True`` if its
corresponding quaternion has a norm equal to one.
A **versor** is a quaternion, whose `euclidean norm
<https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm>`_ is
equal to one: :math:`\\|\\mathbf{q}\\| = \\sqrt{w^2+x^2+y^2+z^2} = 1`
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: \\sqrt{w^2+x^2+y^2+z^2} = 1 \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : np.ndarray
Array of booleans.
Example
-------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q[1] = [1.0, 2.0, 3.0, 4.0]
>>> Q.view()
QuaternionArray([[-0.8061095 , 0.42513151, 0.37790158, -0.16322091],
[ 1. , 2. , 3. , 4. ],
[ 0.29613776, 0.21692562, -0.16253866, -0.91587493]])
>>> Q.is_versor()
array([ True, False, True])
"""
return np.isclose(np.linalg.norm(self.array, axis=1), 1.0)
def is_identity(self) -> np.ndarray:
"""Returns an array of boolean values, where a value is ``True`` if its
quaternion is equal to the identity quaternion.
An **identity quaternion** has its scalar part equal to 1, and its
vector part equal to 0, such that :math:`\\mathbf{q} = 1 + 0i + 0j + 0k`.
.. math::
\\left\\{
\\begin{array}{ll}
\\mathrm{True} & \\: \\mathbf{q}\\ = \\begin{pmatrix} 1 & 0 & 0 & 0 \\end{pmatrix} \\\\
\\mathrm{False} & \\: \\mathrm{otherwise}
\\end{array}
\\right.
Returns
-------
out : np.ndarray
Array of booleans.
Example
-------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)
>>> Q[1] = [1.0, 0.0, 0.0, 0.0]
>>> Q.view()
QuaternionArray([[-0.8061095 , 0.42513151, 0.37790158, -0.16322091],
[ 1. , 0. , 0. , 0. ],
[ 0.29613776, 0.21692562, -0.16253866, -0.91587493]])
>>> Q.is_identity()
array([False, True, False])
"""
return np.all(np.isclose(self.array, np.tile([1., 0., 0., 0.], (self.array.shape[0], 1))), axis=1)
def conjugate(self) -> np.ndarray:
"""
Return the conjugate of all quaternions.
Returns
-------
q* : numpy.ndarray
Array of conjugated quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((5, 4))-0.5)
>>> Q.view()
QuaternionArray([[-0.68487217, 0.45395092, -0.53551826, -0.19518931],
[ 0.49389483, 0.28781475, -0.7085184 , -0.41380217],
[-0.39583397, 0.46873203, -0.21517704, 0.75980563],
[ 0.57515971, 0.33286283, 0.23442397, 0.70953439],
[-0.34067259, -0.24989624, 0.5950285 , -0.68369229]])
>>> Q.conjugate()
array([[-0.68487217, -0.45395092, 0.53551826, 0.19518931],
[ 0.49389483, -0.28781475, 0.7085184 , 0.41380217],
[-0.39583397, -0.46873203, 0.21517704, -0.75980563],
[ 0.57515971, -0.33286283, -0.23442397, -0.70953439],
[-0.34067259, 0.24989624, -0.5950285 , 0.68369229]])
"""
return self.array*np.array([1.0, -1.0, -1.0, -1.0])
def conj(self) -> np.ndarray:
"""Synonym of :meth:`conjugate`
Returns
-------
q* : numpy.ndarray
Array of conjugated quaternions.
Examples
--------
>>> Q = QuaternionArray(np.random.random((5, 4))-0.5)
>>> Q.view()
QuaternionArray([[-0.68487217, 0.45395092, -0.53551826, -0.19518931],
[ 0.49389483, 0.28781475, -0.7085184 , -0.41380217],
[-0.39583397, 0.46873203, -0.21517704, 0.75980563],
[ 0.57515971, 0.33286283, 0.23442397, 0.70953439],
[-0.34067259, -0.24989624, 0.5950285 , -0.68369229]])
>>> Q.conj()
array([[-0.68487217, -0.45395092, 0.53551826, 0.19518931],
[ 0.49389483, -0.28781475, 0.7085184 , 0.41380217],
[-0.39583397, -0.46873203, 0.21517704, -0.75980563],
[ 0.57515971, -0.33286283, -0.23442397, -0.70953439],
[-0.34067259, 0.24989624, -0.5950285 , 0.68369229]])
"""
return self.conjugate()
def to_angles(self) -> np.ndarray:
"""
Return corresponding roll-pitch-yaw angles of quaternion.
Having a unit quaternion :math:`\\mathbf{q} = \\begin{pmatrix}q_w & q_x & q_y & q_z\\end{pmatrix}`,
its corresponding roll-pitch-yaw angles [WikiConversions]_ are:
.. math::
\\begin{bmatrix}
\\phi \\\\ \\theta \\\\ \\psi
\\end{bmatrix} =
\\begin{bmatrix}
\\mathrm{atan2}\\big(2(q_wq_x + q_yq_z), 1-2(q_x^2+q_y^2)\\big) \\\\
\\arcsin\\big(2(q_wq_y - q_zq_x)\\big) \\\\
\\mathrm{atan2}\\big(2(q_wq_z + q_xq_y), 1-2(q_y^2+q_z^2)\\big)
\\end{bmatrix}
Returns
-------
angles : numpy.ndarray
Euler angles of quaternion.
Examples
--------
>>> Q = QuaternionArray(np.random.random((5, 4))-0.5) # Five random Quaternions
>>> Q.view()
QuaternionArray([[-0.5874517 , -0.2181631 , -0.25175194, 0.73751361],
[ 0.64812786, 0.18534342, 0.73606315, -0.06155591],
[-0.0014204 , 0.8146498 , 0.26040532, 0.51820146],
[ 0.55231315, -0.6287687 , -0.02216051, 0.5469086 ],
[ 0.08694828, -0.96884826, 0.05115712, -0.22617689]])
>>> Q.to_angles()
array([[-0.14676831, 0.66566299, -1.84716657],
[ 2.36496457, 1.35564472, 2.01193563],
[ 2.61751194, -1.00664968, 0.91202161],
[-1.28870906, 0.72519173, 1.00562317],
[-2.92779394, -0.4437908 , -0.15391635]])
"""
phi = np.arctan2(2.0*(self.array[:, 0]*self.array[:, 1] + self.array[:, 2]*self.array[:, 3]), 1.0 - 2.0*(self.array[:, 1]**2 + self.array[:, 2]**2))
theta = np.arcsin(2.0*(self.array[:, 0]*self.array[:, 2] - self.array[:, 3]*self.array[:, 1]))
psi = np.arctan2(2.0*(self.array[:, 0]*self.array[:, 3] + self.array[:, 1]*self.array[:, 2]), 1.0 - 2.0*(self.array[:, 2]**2 + self.array[:, 3]**2))
return np.c_[phi, theta, psi]
def to_DCM(self) -> np.ndarray:
"""
Having *N* quaternions return *N* `direction cosine matrices
<https://en.wikipedia.org/wiki/Euclidean_vector#Conversion_between_multiple_Cartesian_bases>`_
in `SO(3) <https://en.wikipedia.org/wiki/3D_rotation_group>`_.
Any **unit quaternion** has the form
:math:`\\mathbf{q} = \\begin{pmatrix}q_w & \\mathbf{q}_v\\end{pmatrix}`,
where :math:`\\mathbf{q}_v = \\begin{bmatrix}q_x & q_y & q_z\\end{bmatrix}`
is the vector part, :math:`q_w` is the scalar part, and :math:`\\|\\mathbf{q}\\|=1`.
The `rotation matrix <https://en.wikipedia.org/wiki/Rotation_matrix#In_three_dimensions>`_
:math:`\\mathbf{R}` [WikiConversions]_ built from :math:`\\mathbf{q}`
has the form:
.. math::
\\mathbf{R}(\\mathbf{q}) =
\\begin{bmatrix}
1 - 2(q_y^2 + q_z^2) & 2(q_xq_y - q_wq_z) & 2(q_xq_z + q_wq_y) \\\\
2(q_xq_y + q_wq_z) & 1 - 2(q_x^2 + q_z^2) & 2(q_yq_z - q_wq_x) \\\\
2(q_xq_z - q_wq_y) & 2(q_wq_x + q_yq_z) & 1 - 2(q_x^2 + q_y^2)
\\end{bmatrix}
The identity quaternion :math:`\\mathbf{q}_\\mathbf{I} = \\begin{pmatrix}1 & 0 & 0 & 0\\end{pmatrix}`,
produces a :math:`3 \\times 3` Identity matrix :math:`\\mathbf{I}_3`.
Returns
-------
DCM : numpy.ndarray
N-by-3-by-3 Direction Cosine Matrices.
Examples
--------
.. code-block:: python
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5) # Three random quaternions
>>> Q.view()
QuaternionArray([[-0.75641558, 0.42233104, 0.39637415, 0.30390704],
[-0.52953832, -0.7187872 , -0.44551683, 0.06669994],
[ 0.264412 , 0.15784685, -0.80536887, 0.50650928]])
>>> Q.to_DCM()
array([[[ 0.50105608, 0.79456226, -0.34294842],
[-0.12495782, 0.45855401, 0.87983735],
[ 0.85634593, -0.39799377, 0.32904804]],
[[ 0.59413175, 0.71110393, 0.37595034],
[ 0.56982324, -0.04220784, -0.82068263],
[-0.56772259, 0.70181885, -0.43028056]],
[[-0.81034134, -0.52210413, -0.2659966 ],
[ 0.01360439, 0.43706545, -0.89932681],
[ 0.58580016, -0.73238041, -0.3470693 ]]])
"""
if not all(self.is_versor()):
raise AttributeError("All quaternions must be versors to be represented as Direction Cosine Matrices.")
R = np.zeros((self.num_qts, 3, 3))
R[:, 0, 0] = 1.0 - 2.0*(self.array[:, 2]**2 + self.array[:, 3]**2)
R[:, 1, 0] = 2.0*(self.array[:, 1]*self.array[:, 2]+self.array[:, 0]*self.array[:, 3])
R[:, 2, 0] = 2.0*(self.array[:, 1]*self.array[:, 3]-self.array[:, 0]*self.array[:, 2])
R[:, 0, 1] = 2.0*(self.array[:, 1]*self.array[:, 2]-self.array[:, 0]*self.array[:, 3])
R[:, 1, 1] = 1.0 - 2.0*(self.array[:, 1]**2 + self.array[:, 3]**2)
R[:, 2, 1] = 2.0*(self.array[:, 0]*self.array[:, 1]+self.array[:, 2]*self.array[:, 3])
R[:, 0, 2] = 2.0*(self.array[:, 1]*self.array[:, 3]+self.array[:, 0]*self.array[:, 2])
R[:, 1, 2] = 2.0*(self.array[:, 2]*self.array[:, 3]-self.array[:, 0]*self.array[:, 1])
R[:, 2, 2] = 1.0 - 2.0*(self.array[:, 1]**2 + self.array[:, 2]**2)
return R
def average(self, span: Tuple[int, int] = None, weights: np.ndarray = None) -> np.ndarray:
"""Average quaternion using Markley's method [Markley2007]_
It has to be clear that we intend to average **attitudes** rather than
quaternions. It just happens that we represent these attitudes with
unit quaternions, that is :math:`\\|\\mathbf{q}\\|=1`.
The average quaternion :math:`\\bar{\\mathbf{q}}` should minimize a
weighted sum of the squared `Frobenius norms
<https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm>`_ of
attitude matrix differences:
.. math::
\\bar{\\mathbf{q}} = \\mathrm{arg min}\\sum_{i=1}^nw_i\\|\\mathbf{A}(\\mathbf{q}) - \\mathbf{A}(\\mathbf{q}_i)\\|_F^2
Taking advantage of the attitude's orthogonality in SO(3), this can be
rewritten as a maximization problem:
.. math::
\\bar{\\mathbf{q}} = \\mathrm{arg max} \\big\\{\\mathrm{tr}(\\mathbf{A}(\\mathbf{q})\\mathbf{B}^T)\\big\\}
with:
.. math::
\\mathbf{B} = \\sum_{i=1}^nw_i\\mathbf{A}(\\mathbf{q}_i)
We can verify the identity:
.. math::
\\mathrm{tr}(\\mathbf{A}(\\mathbf{q})\\mathbf{B}^T) = \\mathbf{q}^T\\mathbf{Kq}
using Davenport's symmetric traceless :math:`4\\times 4` matrix:
.. math::
\\mathbf{K}=4\\mathbf{M}-w_\\mathrm{tot}\\mathbf{I}_{4\\times 4}
where :math:`w_\\mathrm{tot}=\\sum_{i=1}^nw_i`, and :math:`\\mathbf{M}` is the
:math:`4\\times 4` matrix:
.. math::
\\mathbf{M} = \\sum_{i=1}^nw_i\\mathbf{q}_i\\mathbf{q}_i^T
.. warning::
In this case, the product :math:`\\mathbf{q}_i\\mathbf{q}_i^T` is a
*normal matrix multiplication*, not the Hamilton product, of the
elements of each quaternion.
Finally, the average quaternion :math:`\\bar{\\mathbf{q}}` is the
eigenvector corresponding to the maximum eigenvalue of :math:`\\mathbf{M}`,
which in turn maximizes the quadratic form:
.. math::
\\bar{\\mathbf{q}} = \\mathrm{arg max} \\big\\{\\mathbf{q}^T\\mathbf{Mq}\\big\\}
Changing the sign of any :math:`\\mathbf{q}_i` does not change the
value of :math:`\\mathbf{M}`. Thus, the averaging procedure determines
:math:`\\bar{\\mathbf{q}}` up to a sign, which is consistent with the
nature of the attitude representation using unit quaternions.
Parameters
----------
span : tuple, default: None
Span of data to average. If none given, it averages all.
weights : numpy.ndarray, default: None
Weights of each quaternion. If none given, they are all equal to 1.
Returns
-------
q : numpy.ndarray
Average quaternion.
Example
-------
>>> qts = np.tile([1., -2., 3., -4], (5, 1)) # Five equal quaternions
>>> v = np.random.randn(5, 4)*0.1 # Gaussian noise
>>> Q1 = QuaternionArray(qts + v)
>>> Q1.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[ 0.17607515, -0.3839024 , 0.52673809, -0.73767437],
[ 0.16823806, -0.35898889, 0.53664261, -0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
>>> Q1.average()
array([-0.17557859, 0.37832975, -0.53884688, 0.73190355])
The result is as expected, remembering that a quaternion with opposite
signs on each element represents the same orientation.
"""
if not all(self.is_versor()):
raise AttributeError("All quaternions must be versors to be averaged.")
q = self.array.copy()
if span is not None:
if hasattr(span, '__iter__') and len(span) == 2:
q = q[span[0]:span[1]]
else:
raise ValueError("span must be a pair of integers indicating the indices of the data.")
if weights is not None:
if weights.ndim>1:
raise ValueError("The weights must be in a one-dimensional array.")
if weights.size != q.shape[0]:
raise ValueError("The number of weights do not match the number of quaternions.")
q *= weights[:, None]
eigvals, eigvecs = np.linalg.eig(q.T@q)
return eigvecs[:, eigvals.argmax()]
def remove_jumps(self) -> None:
"""
Flip sign of opposite quaternions.
Some estimations and measurements of quaternions might have "jumps"
produced when their values are multiplied by -1. They still represent
the same rotation, but the continuity of the signal "flips", making it
difficult to evaluate continuously.
To revert this, the flipping instances are identified and the next
samples are multiplied by -1, until it "flips back". This
function does that correction over all values of the attribute ``array``.
Examples
--------
>>> qts = np.tile([1., -2., 3., -4], (5, 1)) # Five equal arrays
>>> v = np.random.randn(5, 4)*0.1 # Gaussian noise
>>> Q = QuaternionArray(qts + v)
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[ 0.17607515, -0.3839024 , 0.52673809, -0.73767437],
[ 0.16823806, -0.35898889, 0.53664261, -0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
>>> Q[1:3] *= -1 # 2nd and 3rd Quaternions "flip"
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[-0.17607515, 0.3839024 , -0.52673809, 0.73767437],
[-0.16823806, 0.35898889, -0.53664261, 0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
>>> Q.remove_jumps()
>>> Q.view()
QuaternionArray([[ 0.17614144, -0.39173347, 0.56303067, -0.70605634],
[ 0.17607515, -0.3839024 , 0.52673809, -0.73767437],
[ 0.16823806, -0.35898889, 0.53664261, -0.74487424],
[ 0.17094453, -0.3723117 , 0.54109885, -0.73442086],
[ 0.1862619 , -0.38421818, 0.5260265 , -0.73551276]])
"""
q_diff = np.diff(self.array, axis=0)
jumps = np.nonzero(np.where(np.linalg.norm(q_diff, axis=1)>1, 1, 0))[0]+1
if len(jumps)%2:
jumps = np.append(jumps, [len(q_diff)+1])
jump_pairs = jumps.reshape((len(jumps)//2, 2))
for j in jump_pairs:
self.array[j[0]:j[1]] *= -1.0
def rotate_by(self, q: np.ndarray) -> np.ndarray:
"""Rotate all Quaternions in the array around quaternion :math:`\\mathbf{q}`.
Parameters
----------
q : numpy.ndarray
4 element array to rotate around.
Returns
-------
Q' : numpy.ndarray
N-by-4 array with all Quaternions rotated around q.
Examples
--------
>>> Q = QuaternionArray(np.random.random((3, 4))-0.5)   # Three random quaternions
>>> q = [0.70710678, 0.0, 0.0, 0.70710678]              # 90-degree rotation about Z-axis
>>> Q_rotated = Q.rotate_by(q)                           # Each quaternion pre-multiplied by q and re-normalized
"""
q = np.copy(q)
if q.size != 4:
raise ValueError("Given quaternion to rotate about must have 4 elements.")
q /= np.linalg.norm(q)
qQ = np.zeros_like(self.array)
qQ[:, 0] = q[0]*self.array[:, 0] - q[1]*self.array[:, 1] - q[2]*self.array[:, 2] - q[3]*self.array[:, 3]
qQ[:, 1] = q[0]*self.array[:, 1] + q[1]*self.array[:, 0] + q[2]*self.array[:, 3] - q[3]*self.array[:, 2]
qQ[:, 2] = q[0]*self.array[:, 2] - q[1]*self.array[:, 3] + q[2]*self.array[:, 0] + q[3]*self.array[:, 1]
qQ[:, 3] = q[0]*self.array[:, 3] + q[1]*self.array[:, 2] - q[2]*self.array[:, 1] + q[3]*self.array[:, 0]
qQ /= np.linalg.norm(qQ, axis=1)[:, None]
return qQ
import cmath
# TRIGONOMETRY
M_PI = cmath.pi
DEG2RAD = M_PI/180
RAD2DEG = 180/M_PI
##### Geodetic constants as defined in WORLD GEODETIC SYSTEM 1984 (rev. 2014)
# Defining parameters
EARTH_EQUATOR_RADIUS = 6_378_137.0 # Semi-major axis of Earth (Equatorial Radius) [m]
EARTH_FLATTENING_INV = 298.257223563 # Flattening Factor of the Earth
EARTH_GM = 3.986004418e14 # Earth's Gravitational Constant (Atmosphere included) [m^3/s^2]
EARTH_ROTATION = 7.292115e-5 # Earth's Rotation rate [rad/s]
# Fundamental constants
LIGHT_SPEED = 2.99792458e8 # Velocity of light in vacuum [m/s]
EARTH_ATMOSPHERE_MASS = 5.148e18 # Total mean mass of the Atmosphere (with water vapor) [kg]
DYNAMIC_ELLIPTICITY = 3.2737949e-3 # Dynamic Ellipticity (H)
# Universal Constant of Gravitation [m^3/(kg*s^2)]
UNIVERSAL_GRAVITATION_WGS84 = 6.67428e-11 # As defined in latest report of WGS 84
UNIVERSAL_GRAVITATION_CODATA2018 = 6.67430e-11 # As recommended by CODATA 2018 in latest report of NIST 2019
UNIVERSAL_GRAVITATION_CODATA2014 = 6.67408e-11 # As recommended by CODATA 2014 and referenced by NASA's Jet Propulsion Laboratory
EARTH_GM_GPSNAV = 3.9860050e14 # Earth's Gravitational Constant for GPS Navigation Message [m^3/s^2]
# Derived geometric constants
EARTH_FLATTENING = 1/EARTH_FLATTENING_INV # Earth's Flattening (reduced)
EARTH_POLAR_RADIUS = 6_356_752.3142 # Semi-minor axis of Earth (Polar Radius) [m]
EARTH_FIRST_ECCENTRICITY = 8.1819190842622e-2
EARTH_FIRST_ECCENTRICITY_2 = EARTH_FIRST_ECCENTRICITY**2
EARTH_SECOND_ECCENTRICITY = 8.2094437949696e-2
EARTH_SECOND_ECCENTRICITY_2 = EARTH_SECOND_ECCENTRICITY**2
EARTH_LINEAR_ECCENTRICITY = 5.2185400842339e5
EARTH_POLAR_CURVATURE_RADIUS = 6_399_593.6258 # Polar radius of Curvature [m]
EARTH_AXIS_RATIO = 9.96647189335e-1 # Axis ratio: EARTH_POLAR_RADIUS / EARTH_EQUATOR_RADIUS
EARTH_MEAN_RADIUS = 6_371_200.0 # Earth's Arithmetic Mean radius [m] ((2*EQUATOR_RADIUS + POLAR_RADIUS) / 3)
EARTH_MEAN_AXIAL_RADIUS = 6_371_008.7714 # Mean Radius of the Three Semi-axes [m]
EARTH_AUTHALIC_RADIUS = 6_371_007.1810 # Radius of equal area sphere [m]
EARTH_EQUIVOLUMETRIC_RADIUS = 6_371_000.79 # Radius of equal volume sphere [m] ((EQUATOR_RADIUS^2 * POLAR_RADIUS)^(1/3))
EARTH_C20_DYN = -4.84165143790815e-4 # Earth's Dynamic Second Degree Zonal Harmonic (C_2,0 dyn)
EARTH_C22_DYN = 2.43938357328313e-6 # Earth's Dynamic Second Degree Sectorial Harmonic (C_2,2 dyn)
EARTH_C20_GEO = -4.84166774985e-4 # Earth's Geographic Second Degree Zonal Harmonic
EARTH_J2 = 1.08263e-3 # Earth's Dynamic Form Factor
# Derived physical constants
NORMAL_GRAVITY_POTENTIAL = 6.26368517146 # Normal Gravity Potential on the Ellipsoid [m^2/s^2]
EQUATORIAL_NORMAL_GRAVITY = 9.7803253359 # Normal Gravity at the Equator (on the ellipsoid) [m/s^2]
POLAR_NORMAL_GRAVITY = 9.8321849379 # Normal Gravity at the Pole (on the ellipsoid) [m/s^2]
MEAN_NORMAL_GRAVITY = 9.7976432223 # Mean Normal Gravity [m/s^2]
SOMIGLIANA_GRAVITY = 1.931852652458e-3 # Somigliana's Formula Normal Gravity constant
NORMAL_GRAVITY_FORMULA = 3.449786506841e-3 # Normal Gravity Formula constant (EARTH_ROTATION^2 * EQUATOR_RADIUS^2 * POLAR_RADIUS / EARTH_GM)
EARTH_MASS = 5.9721864e24 # Earth's Mass (Atmosphere included) [kg]
EARTH_GM_1 = 3.986000982e14 # Geocentric Gravitational Constant (Atmosphere excluded) [m^3/s^2]
EARTH_GM_2 = 3.4359e8 # Gravitational Constant of the Earth’s Atmosphere [m^3/s^2]
EARTH_SIDEREAL_DAY = 86164.09053083288 # Earth's duration of sidereal day [s]
##### Planetary Characteristics (without Earth)
MOON_EQUATOR_RADIUS = 1_738_100.0
MOON_POLAR_RADIUS = 1_736_000.0
MOON_MASS = 7.346e22
MOON_GM = MOON_MASS*UNIVERSAL_GRAVITATION_CODATA2018
MOON_ROTATION = 1.109027709148159e-7 # Inferred from [7]
MOON_J2 = 2.027e-4 # As defined in [7]
MERCURY_EQUATOR_RADIUS = 2_440_530.0 # As defined in [6]
MERCURY_POLAR_RADIUS = 2_438_260.0
MERCURY_ROTATION = 1.2399326882596827e-6 # Inferred from [7]
MERCURY_MASS = 3.30114e23 # As defined in [6]
MERCURY_GM = MERCURY_MASS*UNIVERSAL_GRAVITATION_CODATA2018
MERCURY_J2 = 5.03e-5 # As defined in [7]
VENUS_EQUATOR_RADIUS = 6_051_800.0 # As defined in [6]
VENUS_POLAR_RADIUS = 6_051_800.0
VENUS_ROTATION = -2.9923691869737844e-7 # Inferred from [7]
VENUS_MASS = 4.86747e24
VENUS_GM = VENUS_MASS*UNIVERSAL_GRAVITATION_CODATA2018
VENUS_J2 = 4.458e-6 # As defined in [7]
MARS_EQUATOR_RADIUS = 3_396_190.0 # As defined in [6]
MARS_POLAR_RADIUS = 3_376_200.0
MARS_ROTATION = 7.088235959185674e-5
MARS_MASS = 6.41712e23
MARS_GM = MARS_MASS*UNIVERSAL_GRAVITATION_CODATA2018
MARS_J2 = 1.96045e-3 # As defined in [7]
JUPITER_EQUATOR_RADIUS = 71_492_000.0 # As defined in [6]
JUPITER_POLAR_RADIUS = 66_854_000.0
JUPITER_ROTATION = 1.758518138029551e-4 # Inferred from [7]
JUPITER_MASS = 1.898187e27
JUPITER_GM = JUPITER_MASS*UNIVERSAL_GRAVITATION_CODATA2018
JUPITER_J2 = 1.4736e-2 # As defined in [7]
SATURN_EQUATOR_RADIUS = 60_268_000.0 # As defined in [6]
SATURN_POLAR_RADIUS = 54_364_000.0
SATURN_ROTATION = 1.637884057802486e-4 # Inferred from [7]
SATURN_MASS = 5.683174e26
SATURN_GM = SATURN_MASS*UNIVERSAL_GRAVITATION_CODATA2018
SATURN_J2 = 1.6298e-2 # As defined in [7]
URANUS_EQUATOR_RADIUS = 25_559_000.0 # As defined in [6]
URANUS_POLAR_RADIUS = 24_973_000.0
URANUS_ROTATION = -1.012376653716682e-4 # Inferred from [7]
URANUS_MASS = 8.68127e25
URANUS_GM = URANUS_MASS*UNIVERSAL_GRAVITATION_CODATA2018
URANUS_J2 = 3.343430e-3 # As defined in [7]
NEPTUNE_EQUATOR_RADIUS = 24_764_000.0 # As defined in [6]
NEPTUNE_POLAR_RADIUS = 24_341_000.0
NEPTUNE_ROTATION = 1.083382527619075e-4 # Inferred from [7]
NEPTUNE_MASS = 1.024126e26
NEPTUNE_GM = NEPTUNE_MASS*UNIVERSAL_GRAVITATION_CODATA2018
NEPTUNE_J2 = 3.411e-3 # As defined in [7]
PLUTO_EQUATOR_RADIUS = 1_188_300.0 # As defined in [6]
PLUTO_POLAR_RADIUS = 1_188_300.0
PLUTO_ROTATION = -1.138559183467410e-05 # Inferred from [7]
PLUTO_MASS = 1.303e22
PLUTO_GM = PLUTO_MASS*UNIVERSAL_GRAVITATION_CODATA2018
##### Local information
MUNICH_LATITUDE = 48.137154
MUNICH_LONGITUDE = 11.576124
MUNICH_HEIGHT = 0.519
import numpy as np
from .constants import *
def cosd(x):
"""
Return the cosine of `x`, which is expressed in degrees.
If `x` is a list, it will be converted first to a NumPy array, and then the
cosine operation over each value will be carried out.
Parameters
----------
x : float
Angle in Degrees
Returns
-------
y : float
Cosine of given angle
Examples
--------
>>> from ahrs.common.mathfuncs import cosd
>>> cosd(0.0)
1.0
>>> cosd(90.0)
0.0
>>> cosd(-120.0)
-0.5
"""
if isinstance(x, list):
x = np.asarray(x)
return np.cos(x*DEG2RAD)
def sind(x):
"""
Return the sine of `x`, which is expressed in degrees.
If `x` is a list, it will be converted first to a NumPy array, and then the
sine operation over each value will be carried out.
Parameters
----------
x : float
Angle in Degrees
Returns
-------
y : float
Sine of given angle
Examples
--------
>>> from ahrs.common.mathfuncs import sind
>>> sind(0.0)
0.0
>>> sind(90.0)
1.0
>>> sind(-120.0)
-0.86602540378
"""
if isinstance(x, list):
x = np.asarray(x)
return np.sin(x*DEG2RAD)
def skew(x):
"""
Return the 3-by-3 skew-symmetric matrix [Wiki_skew]_ of a 3-element vector x.
Parameters
----------
x : array
3-element array with values to be ordered in a skew-symmetric matrix.
Returns
-------
X : ndarray
3-by-3 numpy array of the skew-symmetric matrix.
Examples
--------
>>> from ahrs.common.mathfuncs import skew
>>> a = [1, 2, 3]
>>> skew(a)
[[ 0. -3. 2.]
[ 3. 0. -1.]
[-2. 1. 0.]]
>>> a = np.array([[4.0], [5.0], [6.0]])
>>> skew(a)
[[ 0. -6. 5.]
[ 6. 0. -4.]
[-5. 4. 0.]]
References
----------
.. [Wiki_skew] https://en.wikipedia.org/wiki/Skew-symmetric_matrix
"""
if len(x) != 3:
raise ValueError("Input must be an array with three elements")
return np.array([[0, -x[2], x[1]], [x[2], 0, -x[0]], [-x[1], x[0], 0.0]])
from typing import Tuple, Union
import numpy as np
from .mathfuncs import cosd, sind
from .constants import *
def q_conj(q: np.ndarray) -> np.ndarray:
"""
Conjugate of unit quaternion
A unit quaternion, whose form is :math:`\\mathbf{q} = (q_w, q_x, q_y, q_z)`,
has a conjugate of the form :math:`\\mathbf{q}^* = (q_w, -q_x, -q_y, -q_z)`.
Multiple quaternions in an N-by-4 array can also be conjugated.
Parameters
----------
q : numpy.ndarray
Unit quaternion or 2D array of Quaternions.
Returns
-------
q_conj : numpy.ndarray
Conjugated quaternion or 2D array of conjugated Quaternions.
Examples
--------
>>> from ahrs.common.orientation import q_conj
>>> q = np.array([0.603297, 0.749259, 0.176548, 0.20850 ])
>>> q_conj(q)
array([0.603297, -0.749259, -0.176548, -0.20850 ])
>>> Q = np.array([[0.039443, 0.307174, 0.915228, 0.257769],
[0.085959, 0.708518, 0.039693, 0.699311],
[0.555887, 0.489330, 0.590976, 0.319829],
[0.578965, 0.202390, 0.280560, 0.738321],
[0.848611, 0.442224, 0.112601, 0.267611]])
>>> q_conj(Q)
array([[ 0.039443, -0.307174, -0.915228, -0.257769],
[ 0.085959, -0.708518, -0.039693, -0.699311],
[ 0.555887, -0.489330, -0.590976, -0.319829],
[ 0.578965, -0.202390, -0.280560, -0.738321],
[ 0.848611, -0.442224, -0.112601, -0.267611]])
References
----------
.. [1] Dantam, N. (2014) Quaternion Computation. Institute for Robotics
and Intelligent Machines. Georgia Tech.
(http://www.neil.dantam.name/note/dantam-quaternion.pdf)
.. [2] https://en.wikipedia.org/wiki/Quaternion#Conjugation,_the_norm,_and_reciprocal
"""
if q.ndim>2 or q.shape[-1]!=4:
raise ValueError("Quaternion must be of shape (4,) or (N, 4), but has shape {}".format(q.shape))
return np.array([1., -1., -1., -1.])*np.array(q)
def q_random(size: int = 1) -> np.ndarray:
"""
Generate random quaternions
Parameters
----------
size : int
Number of Quaternions to generate. Default is 1 quaternion only.
Returns
-------
q : numpy.ndarray
M-by-4 array of generated random Quaternions, where M is the requested size.
Examples
--------
>>> import ahrs
>>> q = ahrs.common.orientation.q_random()
array([0.65733485, 0.29442787, 0.55337745, 0.41832587])
>>> q = ahrs.common.orientation.q_random(5)
>>> q
array([[-0.81543924, -0.06443342, -0.08727487, -0.56858621],
[ 0.23124879, 0.55068024, -0.59577746, -0.53695855],
[ 0.74998503, -0.38943692, 0.27506719, 0.45847506],
[-0.43213176, -0.55350396, -0.54203589, -0.46161954],
[-0.17662536, 0.55089287, -0.81357401, 0.05846234]])
>>> np.linalg.norm(q, axis=1) # Each quaternion is, naturally, normalized
array([1., 1., 1., 1., 1.])
"""
if size<1 or not isinstance(size, int):
raise ValueError("size must be a positive non-zero integer value.")
q = np.random.random((size, 4))-0.5
q /= np.linalg.norm(q, axis=1)[:, np.newaxis]
if size==1:
return q[0]
return q
def q_norm(q: np.ndarray) -> np.ndarray:
"""
Normalize quaternion [WQ1]_ :math:`\\mathbf{q}_u`, also known as a versor
[WV1]_ :
.. math::
\\mathbf{q}_u = \\frac{1}{\\|\\mathbf{q}\\|} \\mathbf{q}
where:
.. math::
\\|\\mathbf{q}_u\\| = 1.0
Parameters
----------
q : numpy.ndarray
Quaternion to normalize
Returns
-------
q_u : numpy.ndarray
Normalized Quaternion
Examples
--------
>>> from ahrs.common.orientation import q_norm
>>> q = np.random.random(4)
>>> q
array([0.94064704, 0.12645116, 0.80194097, 0.62633894])
>>> q = q_norm(q)
>>> q
array([0.67600473, 0.0908753 , 0.57632232, 0.45012429])
>>> np.linalg.norm(q)
1.0
References
----------
.. [WQ1] https://en.wikipedia.org/wiki/Quaternion#Unit_quaternion
.. [WV1] https://en.wikipedia.org/wiki/Versor
"""
if q.ndim>2 or q.shape[-1]!=4:
raise ValueError("Quaternion must be of shape (4,) or (N, 4), but has shape {}".format(q.shape))
if q.ndim>1:
return q/np.linalg.norm(q, axis=1)[:, np.newaxis]
return q/np.linalg.norm(q)
def q_prod(p: np.ndarray, q: np.ndarray) -> np.ndarray:
"""
Product of two unit quaternions.
Given two unit quaternions :math:`\\mathbf{p}=(p_w, \\mathbf{p}_v)` and
:math:`\\mathbf{q} = (q_w, \\mathbf{q}_v)`, their product is defined [ND]_ [MWQM]_
as:
.. math::
\\begin{eqnarray}
\\mathbf{pq} & = & \\big( (q_w p_w - \\mathbf{q}_v \\cdot \\mathbf{p}_v) \\; ,
\\; \\mathbf{q}_v \\times \\mathbf{p}_v + q_w \\mathbf{p}_v + p_w \\mathbf{q}_v \\big) \\\\
& = &
\\begin{bmatrix}
p_w & -\\mathbf{p}_v^T \\\\ \\mathbf{p}_v & p_w \\mathbf{I}_3 + \\lfloor \\mathbf{p}_v \\rfloor
\\end{bmatrix}
\\begin{bmatrix} q_w \\\\ \\mathbf{q}_v \\end{bmatrix}
\\\\
& = &
\\begin{bmatrix}
p_w & -p_x & -p_y & -p_z \\\\
p_x & p_w & -p_z & p_y \\\\
p_y & p_z & p_w & -p_x \\\\
p_z & -p_y & p_x & p_w
\\end{bmatrix}
\\begin{bmatrix} q_w \\\\ q_x \\\\ q_y \\\\ q_z \\end{bmatrix}
\\\\
& = &
\\begin{bmatrix}
p_w q_w - p_x q_x - p_y q_y - p_z q_z \\\\
p_x q_w + p_w q_x - p_z q_y + p_y q_z \\\\
p_y q_w + p_z q_x + p_w q_y - p_x q_z \\\\
p_z q_w - p_y q_x + p_x q_y + p_w q_z
\\end{bmatrix}
\\end{eqnarray}
Parameters
----------
p : numpy.ndarray
First quaternion to multiply
q : numpy.ndarray
Second quaternion to multiply
Returns
-------
pq : numpy.ndarray
Product of both quaternions
Examples
--------
>>> import numpy as np
>>> from ahrs import quaternion
>>> q = ahrs.common.orientation.q_random(2)
>>> q[0]
array([0.55747131, 0.12956903, 0.5736954 , 0.58592763])
>>> q[1]
array([0.49753507, 0.50806522, 0.52711628, 0.4652709 ])
>>> quaternion.q_prod(q[0], q[1])
array([-0.36348726, 0.38962514, 0.34188103, 0.77407146])
References
----------
.. [ND] Dantam, N. (2014) Quaternion Computation. Institute for Robotics
and Intelligent Machines. Georgia Tech.
(http://www.neil.dantam.name/note/dantam-quaternion.pdf)
.. [MWQM] Mathworks: Quaternion Multiplication.
https://www.mathworks.com/help/aeroblks/quaternionmultiplication.html
"""
pq = np.zeros(4)
pq[0] = p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]
pq[1] = p[0]*q[1] + p[1]*q[0] + p[2]*q[3] - p[3]*q[2]
pq[2] = p[0]*q[2] - p[1]*q[3] + p[2]*q[0] + p[3]*q[1]
pq[3] = p[0]*q[3] + p[1]*q[2] - p[2]*q[1] + p[3]*q[0]
return pq
def q_mult_L(q: np.ndarray) -> np.ndarray:
"""
Matrix form of a left-sided quaternion multiplication Q.
Parameters
----------
q : numpy.ndarray
Quaternion to multiply from the left side.
Returns
-------
Q : numpy.ndarray
Matrix form of the left side quaternion multiplication.
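Examples
--------
As a quick consistency sketch, multiplying a quaternion through this
matrix form is equivalent to calling :func:`q_prod` (both inputs are
assumed to be unit quaternions):
>>> p = np.array([0.5, 0.5, 0.5, 0.5])
>>> q = np.array([1.0, 0.0, 0.0, 0.0])
>>> np.allclose(q_mult_L(p)@q, q_prod(p, q))
True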
"""
q /= np.linalg.norm(q)
Q = np.array([
[q[0], -q[1], -q[2], -q[3]],
[q[1], q[0], -q[3], q[2]],
[q[2], q[3], q[0], -q[1]],
[q[3], -q[2], q[1], q[0]]])
return Q
def q_mult_R(q: np.ndarray) -> np.ndarray:
"""
Matrix form of a right-sided quaternion multiplication Q.
Parameters
----------
q : numpy.ndarray
Quaternion to multiply from the right side.
Returns
-------
Q : numpy.ndarray
Matrix form of the right side quaternion multiplication.
"""
q /= np.linalg.norm(q)
Q = np.array([
[q[0], -q[1], -q[2], -q[3]],
[q[1], q[0], q[3], -q[2]],
[q[2], -q[3], q[0], q[1]],
[q[3], q[2], -q[1], q[0]]])
return Q
def q_rot(q: np.ndarray, v: np.ndarray) -> np.ndarray:
"""
Rotate vector :math:`\\mathbf{v}` through quaternion :math:`\\mathbf{q}`.
It should be equal to calling `q2R(q).T@v`
Parameters
----------
q : numpy.ndarray
Quaternion to rotate through.
v : numpy.ndarray
Vector to rotate in 3 dimensions.
Returns
-------
v' : numpy.ndarray
Rotated vector `v` through quaternion `q`.
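Examples
--------
A minimal sketch using the quaternion of a 90-degree rotation about the
Z-axis; the vector is rotated through the transposed rotation matrix, as
noted above (printed values are rounded):
>>> q = np.array([0.70710678, 0.0, 0.0, 0.70710678])
>>> np.round(q_rot(q, np.array([1.0, 0.0, 0.0])), 3)
array([ 0., -1.,  0.])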
"""
qw, qx, qy, qz = q
return np.array([
-2.0*v[0]*(qy**2 + qz**2 - 0.5) + 2.0*v[1]*(qw*qz + qx*qy) - 2.0*v[2]*(qw*qy - qx*qz),
-2.0*v[0]*(qw*qz - qx*qy) - 2.0*v[1]*(qx**2 + qz**2 - 0.5) + 2.0*v[2]*(qw*qx + qy*qz),
2.0*v[0]*(qw*qy + qx*qz) - 2.0*v[1]*(qw*qx - qy*qz) - 2.0*v[2]*(qx**2 + qy**2 - 0.5)])
def axang2quat(axis: np.ndarray, angle: Union[int, float], rad: bool = True) -> np.ndarray:
"""
Quaternion from given Axis-Angle.
Parameters
----------
axis : numpy.ndarray
Unit vector indicating the direction of an axis of rotation.
angle : int or float
Angle describing the magnitude of rotation about the axis.
Returns
-------
q : numpy.ndarray
Quaternion
Examples
--------
>>> import numpy as np
>>> from ahrs.quaternion import axang2quat
>>> q = axang2quat([1.0, 0.0, 0.0], np.pi/2.0)
array([0.70710678 0.70710678 0. 0. ])
References
----------
.. [1] https://en.wikipedia.org/wiki/Axis%E2%80%93angle_representation
.. [2] https://www.mathworks.com/help/robotics/ref/axang2quat.html
"""
if axis is None:
return [1.0, 0.0, 0.0, 0.0]
if len(axis) != 3:
raise ValueError("Axis must be a 3-element array.")
axis /= np.linalg.norm(axis)
qw = np.cos(angle/2.0) if rad else cosd(angle/2.0)
s = np.sin(angle/2.0) if rad else sind(angle/2.0)
q = np.array([qw] + list(s*axis))
return q/np.linalg.norm(q)
def quat2axang(q: np.ndarray) -> Tuple[np.ndarray, float]:
"""
Axis-Angle representation from Quaternion.
Parameters
----------
q : numpy.ndarray
Unit quaternion
Returns
-------
axis : numpy.ndarray
Unit vector indicating the direction of an axis of rotation.
angle : float
Angle describing the magnitude of rotation about the axis.
References
----------
.. [1] https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation#Recovering_the_axis-angle_representation
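Examples
--------
For instance, a quaternion describing a rotation of 90 degrees about the
X-axis (normalized internally) decomposes into that axis and an angle of
roughly pi/2:
>>> axis, angle = quat2axang(np.array([1.0, 1.0, 0.0, 0.0]))
>>> axis
array([1., 0., 0.])
>>> np.isclose(angle, np.pi/2)
True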
"""
if q is None:
return [0.0, 0.0, 0.0], 1.0
if len(q) != 4:
return None
# Normalize input quaternion
q /= np.linalg.norm(q)
axis = np.copy(q[1:])
denom = np.linalg.norm(axis)
angle = 2.0*np.arctan2(denom, q[0])
axis = np.array([0.0, 0.0, 0.0]) if angle == 0.0 else axis/denom
return axis, angle
def q_correct(q: np.ndarray) -> np.ndarray:
"""
Correct quaternion flipping its sign
If a quaternion flips its sign, it will be corrected and brought back to
its original position.
Parameters
----------
q : numpy.ndarray
N-by-4 array of quaternions, where N is the number of continuous
quaternions.
Returns
-------
new_q : numpy.ndarray
Corrected array of quaternions.
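Examples
--------
A small sketch with five identical unit quaternions, where two samples
in the middle have a flipped sign:
>>> Q = np.tile([1.0, 0.0, 0.0, 0.0], (5, 1))
>>> Q[2:4] *= -1.0
>>> np.allclose(q_correct(Q), np.tile([1.0, 0.0, 0.0, 0.0], (5, 1)))
True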
"""
if q.ndim<2 or q.shape[-1]!=4:
raise ValueError("Input must be of shape (N, 4). Got {}".format(q.shape))
q_diff = np.diff(q, axis=0)
norms = np.linalg.norm(q_diff, axis=1)
binaries = np.where(norms>1, 1, 0)
nonzeros = np.nonzero(binaries)
jumps = nonzeros[0]+1
if len(jumps)%2:
jumps = np.append(jumps, [len(q_diff)+1])
jump_pairs = jumps.reshape((len(jumps)//2, 2))
new_q = q.copy()
for j in jump_pairs:
new_q[j[0]:j[1]] *= -1.0
return new_q
def q2R(q: np.ndarray) -> np.ndarray:
"""
Direction Cosine Matrix from given quaternion.
The given unit quaternion :math:`\\mathbf{q}` must have the form
:math:`\\mathbf{q} = (q_w, q_x, q_y, q_z)`, where :math:`\\mathbf{q}_v = (q_x, q_y, q_z)`
is the vector part, and :math:`q_w` is the scalar part.
The resulting DCM (a.k.a. rotation matrix) :math:`\\mathbf{R}` has the form:
.. math::
\\mathbf{R}(\\mathbf{q}) =
\\begin{bmatrix}
1 - 2(q_y^2 + q_z^2) & 2(q_xq_y - q_wq_z) & 2(q_xq_z + q_wq_y) \\\\
2(q_xq_y + q_wq_z) & 1 - 2(q_x^2 + q_z^2) & 2(q_yq_z - q_wq_x) \\\\
2(q_xq_z - q_wq_y) & 2(q_wq_x + q_yq_z) & 1 - 2(q_x^2 + q_y^2)
\\end{bmatrix}
The default value is the unit Quaternion :math:`\\mathbf{q} = (1, 0, 0, 0)`,
which produces a :math:`3 \\times 3` Identity matrix :math:`\\mathbf{I}_3`.
Parameters
----------
q : numpy.ndarray
Unit quaternion
Returns
-------
R : numpy.ndarray
3-by-3 Direction Cosine Matrix.
References
----------
.. [1] https://en.wikipedia.org/wiki/Rotation_matrix#Quaternion
.. [2] https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation#Quaternion-derived_rotation_matrix
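Examples
--------
As a quick check, the identity quaternion maps to the identity matrix,
and an N-by-4 array maps to a stack of N rotation matrices:
>>> q2R(np.array([1.0, 0.0, 0.0, 0.0]))
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> q2R(np.tile([1.0, 0.0, 0.0, 0.0], (3, 1))).shape
(3, 3, 3)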
"""
if q is None:
return np.identity(3)
if q.shape[-1]!= 4:
raise ValueError("Quaternion Array must be of the form (4,) or (N, 4)")
if q.ndim>1:
q /= np.linalg.norm(q, axis=1)[:, None] # Normalize all quaternions
R = np.zeros((q.shape[0], 3, 3))
R[:, 0, 0] = 1.0 - 2.0*(q[:, 2]**2 + q[:, 3]**2)
R[:, 1, 0] = 2.0*(q[:, 1]*q[:, 2]+q[:, 0]*q[:, 3])
R[:, 2, 0] = 2.0*(q[:, 1]*q[:, 3]-q[:, 0]*q[:, 2])
R[:, 0, 1] = 2.0*(q[:, 1]*q[:, 2]-q[:, 0]*q[:, 3])
R[:, 1, 1] = 1.0 - 2.0*(q[:, 1]**2 + q[:, 3]**2)
R[:, 2, 1] = 2.0*(q[:, 0]*q[:, 1]+q[:, 2]*q[:, 3])
R[:, 0, 2] = 2.0*(q[:, 1]*q[:, 3]+q[:, 0]*q[:, 2])
R[:, 1, 2] = 2.0*(q[:, 2]*q[:, 3]-q[:, 0]*q[:, 1])
R[:, 2, 2] = 1.0 - 2.0*(q[:, 1]**2 + q[:, 2]**2)
return R
q /= np.linalg.norm(q)
return np.array([
[1.0-2.0*(q[2]**2+q[3]**2), 2.0*(q[1]*q[2]-q[0]*q[3]), 2.0*(q[1]*q[3]+q[0]*q[2])],
[2.0*(q[1]*q[2]+q[0]*q[3]), 1.0-2.0*(q[1]**2+q[3]**2), 2.0*(q[2]*q[3]-q[0]*q[1])],
[2.0*(q[1]*q[3]-q[0]*q[2]), 2.0*(q[0]*q[1]+q[2]*q[3]), 1.0-2.0*(q[1]**2+q[2]**2)]])
def q2euler(q: np.ndarray) -> np.ndarray:
"""
Euler Angles from unit Quaternion.
Parameters
----------
q : numpy.ndarray
Quaternion
Returns
-------
angles : numpy.ndarray
Euler Angles around X-, Y- and Z-axis.
References
----------
.. [1] https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles#Quaternion_to_Euler_Angles_Conversion
"""
if np.allclose(q, np.array([1., 0., 0., 0.])):  # Identity quaternion: no rotation
return np.zeros(3)
if len(q) != 4:
return None
R_00 = 2.0*q[0]**2 - 1.0 + 2.0*q[1]**2
R_10 = 2.0*(q[1]*q[2] - q[0]*q[3])
R_20 = 2.0*(q[1]*q[3] + q[0]*q[2])
R_21 = 2.0*(q[2]*q[3] - q[0]*q[1])
R_22 = 2.0*q[0]**2 - 1.0 + 2.0*q[3]**2
phi = np.arctan2( R_21, R_22)
theta = -np.arctan( R_20/np.sqrt(1.0-R_20**2))
psi = np.arctan2( R_10, R_00)
return np.array([phi, theta, psi])
def rotation(ax: Union[str, int] = None, ang: float = 0.0) -> np.ndarray:
"""
Return a Direction Cosine Matrix
The rotation matrix :math:`\\mathbf{R}` [1]_ is created for the given axis
with the given angle :math:`\\theta`. Where the possible rotation axes are:
.. math::
\\mathbf{R}_X(\\theta) =
\\begin{bmatrix}
1 & 0 & 0 \\\\
0 & \\cos \\theta & -\\sin \\theta \\\\
0 & \\sin \\theta & \\cos \\theta
\\end{bmatrix}
\\mathbf{R}_Y(\\theta) =
\\begin{bmatrix}
\\cos \\theta & 0 & \\sin \\theta \\\\
0 & 1 & 0 \\\\
-\\sin \\theta & 0 & \\cos \\theta
\\end{bmatrix}
\\mathbf{R}_Z(\\theta) =
\\begin{bmatrix}
\\cos \\theta & -\\sin \\theta & 0 \\\\
\\sin \\theta & \\cos \\theta & 0 \\\\
0 & 0 & 1
\\end{bmatrix}
where :math:`\\theta` is a float number representing the angle of rotation
in degrees.
Parameters
----------
ax : string or int
Axis to rotate around. Possible are `X`, `Y` or `Z` (upper- or
lowercase) or the corresponding axis index 0, 1 or 2. Defaults to 'z'.
ang : float
Angle, in degrees, to rotate around. Default is 0.
Returns
-------
R : numpy.ndarray
3-by-3 Direction Cosine Matrix.
Examples
--------
>>> from ahrs import rotation
>>> rotation()
array([[1. 0. 0.],
[0. 1. 0.],
[0. 0. 1.]])
>>> rotation('z', 30.0)
array([[ 0.8660254 -0.5 0. ],
[ 0.5 0.8660254 0. ],
[ 0. 0. 1. ]])
>>> # Accepts angle input as string
... rotation('x', '-30')
array([[ 1. 0. 0. ],
[ 0. 0.8660254 0.5 ],
[ 0. -0.5 0.8660254]])
Handles wrong inputs
>>> rotation('false_axis', 'invalid_angle')
array([[1. 0. 0.],
[0. 1. 0.],
[0. 0. 1.]])
>>> rotation(None, None)
array([[1. 0. 0.],
[0. 1. 0.],
[0. 0. 1.]])
References
----------
.. [1] http://mathworld.wolfram.com/RotationMatrix.html
"""
# Default values
valid_axes = list('xyzXYZ')
I_3 = np.identity(3)
# Handle input
if ang==0.0:
return I_3
if ax is None:
ax = "z"
if isinstance(ax, int):
if ax < 0:
ax = 2 # Negative axes default to 2 (Z-axis)
ax = valid_axes[ax] if ax < 3 else "z"
try:
ang = float(ang)
except:
return I_3
# Return 3-by-3 Identity matrix if invalid input
if ax not in valid_axes:
return I_3
# Compute rotation
ca, sa = cosd(ang), sind(ang)
if ax.lower()=="x":
return np.array([[1.0, 0.0, 0.0], [0.0, ca, -sa], [0.0, sa, ca]])
if ax.lower()=="y":
return np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
if ax.lower()=="z":
return np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
def rot_seq(axes: Union[list, str] = None, angles: Union[list, float] = None) -> np.ndarray:
"""
Direction Cosine Matrix from a set of axes and angles.
The rotation matrix :math:`\\mathbf{R}` is created from the given list of
angles rotating around the given axes order.
Parameters
----------
axes : list of str
List of rotation axes.
angles : list of floats
List of rotation angles.
Returns
-------
R : numpy.ndarray
3-by-3 Direction Cosine Matrix.
Examples
--------
>>> import numpy as np
>>> import random
>>> from ahrs import quaternion
>>> num_rotations = 5
>>> axis_order = random.choices("XYZ", k=num_rotations)
>>> axis_order
['Z', 'Z', 'X', 'Z', 'Y']
>>> angles = np.random.uniform(low=-180.0, high=180.0, size=num_rotations)
>>> angles
array([-139.24498146, 99.8691407, -171.30712526, -60.57132043,
17.4475838 ])
>>> R = quaternion.rot_seq(axis_order, angles)
>>> R # R = R_z(-139.24) R_z(99.87) R_x(-171.31) R_z(-60.57) R_y(17.45)
array([[ 0.85465231 0.3651317 0.36911822]
[ 0.3025091 -0.92798938 0.21754072]
[ 0.4219688 -0.07426006 -0.90356393]])
References
----------
.. [1] https://en.wikipedia.org/wiki/Rotation_matrix#General_rotations
.. [2] https://en.wikipedia.org/wiki/Euler_angles
"""
accepted_axes = list('xyzXYZ')
R = np.identity(3)
if axes is None:
axes = np.random.choice(accepted_axes, 3)
if not isinstance(axes, list):
axes = list(axes)
num_rotations = len(axes)
if num_rotations < 1:
return R
if angles is None:
angles = np.random.uniform(low=-180.0, high=180.0, size=num_rotations)
if set(axes).issubset(set(accepted_axes)):
# Perform the matrix multiplications
for i in range(num_rotations-1, -1, -1):
R = rotation(axes[i], angles[i])@R
return R
def dcm2quat(R: np.ndarray) -> np.ndarray:
"""
Quaternion from Direct Cosine Matrix.
Parameters
----------
R : numpy.ndarray
Direction Cosine Matrix.
Returns
-------
q : numpy.ndarray
Unit Quaternion.
References
----------
.. [1] F. Landis Markley. Attitude Determination using two Vector
Measurements.
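Examples
--------
A minimal check: the identity rotation matrix corresponds to the
identity quaternion.
>>> dcm2quat(np.identity(3))
array([1., 0., 0., 0.])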
"""
if(R.shape[0] != R.shape[1]):
raise ValueError('Input is not a square matrix')
if(R.shape[0] != 3):
raise ValueError('Input needs to be a 3x3 array or matrix')
q = np.array([1., 0., 0., 0.])
q[0] = 0.5*np.sqrt(1.0 + R.trace())
q[1] = (R[1, 2] - R[2, 1]) / q[0]
q[2] = (R[2, 0] - R[0, 2]) / q[0]
q[3] = (R[0, 1] - R[1, 0]) / q[0]
q[1:] /= 4.0
return q / np.linalg.norm(q)
def rpy2q(angles: np.ndarray, in_deg: bool = False) -> np.ndarray:
"""
Quaternion from roll-pitch-yaw angles
Roll is the first rotation (about X-axis), pitch is the second rotation
(about Y-axis), and yaw is the last rotation (about Z-axis.)
Parameters
----------
angles : numpy.ndarray
roll-pitch-yaw angles.
in_deg : bool, default: False
Angles are given in degrees.
Returns
-------
q : array
Quaternion.
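Examples
--------
For instance, all angles equal to zero yield the identity quaternion,
and any other set of angles yields a unit quaternion:
>>> rpy2q(np.array([0.0, 0.0, 0.0]))
array([1., 0., 0., 0.])
>>> np.isclose(np.linalg.norm(rpy2q(np.array([0.1, -0.2, 0.3]))), 1.0)
True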
"""
if angles.shape[-1] != 3:
raise ValueError("Input angles must be an array with three elements.")
if in_deg:
angles *= DEG2RAD
if angles.ndim<2:
yaw, pitch, roll = angles
else:
yaw, pitch, roll = angles.T
cr = np.cos(0.5*roll)
sr = np.sin(0.5*roll)
cp = np.cos(0.5*pitch)
sp = np.sin(0.5*pitch)
cy = np.cos(0.5*yaw)
sy = np.sin(0.5*yaw)
# To Quaternion
q = np.array([
cy*cp*cr + sy*sp*sr,
cy*cp*sr - sy*sp*cr,
sy*cp*sr + cy*sp*cr,
sy*cp*cr - cy*sp*sr])
q /= np.linalg.norm(q)
return q
def cardan2q(angles: np.ndarray, in_deg: bool = False) -> np.ndarray:
"""synonym to function :func:`rpy2q`."""
return rpy2q(angles, in_deg=in_deg)
def q2rpy(q: np.ndarray, in_deg: bool = False) -> np.ndarray:
"""
Roll-pitch-yaw angles from quaternion.
Roll is the first rotation (about X-axis), pitch is the second rotation
(about Y-axis), and yaw is the last rotation (about Z-axis.)
Parameters
----------
q : numpy.ndarray
Quaternion.
in_deg : bool, default: False
Return the angles in degrees.
Returns
-------
angles : numpy.ndarray
roll-pitch-yaw angles.
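Examples
--------
The identity quaternion corresponds to zero roll, pitch and yaw:
>>> q2rpy(np.array([1.0, 0.0, 0.0, 0.0]))
array([0., 0., 0.])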
"""
if q.shape[-1] != 4:
return None
roll = np.arctan2(2.0*(q[0]*q[1] + q[2]*q[3]), 1.0 - 2.0*(q[1]**2 + q[2]**2))
pitch = np.arcsin(2.0*(q[0]*q[2] - q[3]*q[1]))
yaw = np.arctan2(2.0*(q[0]*q[3] + q[1]*q[2]), 1.0 - 2.0*(q[2]**2 + q[3]**2))
angles = np.array([roll, pitch, yaw])
if in_deg:
return angles*RAD2DEG
return angles
def q2cardan(q: np.ndarray, in_deg: bool = False) -> np.ndarray:
"""synonym to function :func:`q2rpy`."""
return q2rpy(q, in_deg=in_deg)
def ecompass(a: np.ndarray, m: np.ndarray, frame: str = 'ENU', representation: str = 'rotmat') -> np.ndarray:
"""
Orientation from accelerometer and magnetometer readings
Parameters
----------
a : numpy.ndarray
Sample of tri-axial accelerometer, in m/s^2.
m : numpy.ndarray
Sample of tri-axial magnetometer, in uT.
frame : str, default: ``'ENU'``
Local tangent plane coordinate frame.
representation : str, default: ``'rotmat'``
Orientation representation.
Returns
-------
np.ndarray
Estimated orientation.
Raises
------
ValueError
When wrong local tangent plane coordinates, or invalid representation,
is given.
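Examples
--------
A rough usage sketch with made-up sensor samples (the values are
arbitrary placeholders): a slightly tilted device measuring gravity
mostly along +Z, and a magnetic field sample in microteslas.
>>> acc = np.array([0.29, -0.38, 9.81])     # m/s^2
>>> mag = np.array([16.5, 1.2, -40.5])      # uT
>>> R = ecompass(acc, mag)                  # 3-by-3 rotation matrix (default)
>>> q = ecompass(acc, mag, representation='quaternion')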
"""
if frame.upper() not in ['ENU', 'NED']:
raise ValueError("Wrong local tangent plane coordinate frame. Try 'ENU' or 'NED'")
if representation.lower() not in ['rotmat', 'quaternion', 'rpy', 'axisangle']:
raise ValueError("Wrong representation type. Try 'rotmat', 'quaternion', 'rpy', or 'axisangle'")
a = np.copy(a)
m = np.copy(m)
if a.shape[-1] != 3 or m.shape[-1] != 3:
raise ValueError("Input vectors must have exactly 3 elements.")
m /= np.linalg.norm(m)
Rz = a/np.linalg.norm(a)
if frame.upper() == 'NED':
Ry = np.cross(Rz, m)
Rx = np.cross(Ry, Rz)
else:
Rx = np.cross(m, Rz)
Ry = np.cross(Rz, Rx)
Rx /= np.linalg.norm(Rx)
Ry /= np.linalg.norm(Ry)
R = np.c_[Rx, Ry, Rz].T
if representation.lower() == 'quaternion':
return chiaverini(R)
if representation.lower() == 'rpy':
phi = np.arctan2(R[1, 2], R[2, 2]) # Roll Angle
theta = -np.arcsin(R[0, 2]) # Pitch Angle
psi = np.arctan2(R[0, 1], R[0, 0]) # Yaw Angle
return np.array([phi, theta, psi])
if representation.lower() == 'axisangle':
angle = np.arccos((R.trace()-1)/2)
axis = np.zeros(3)
if angle!=0:
S = np.array([R[2, 1]-R[1, 2], R[0, 2]-R[2, 0], R[1, 0]-R[0, 1]])
axis = S/(2*np.sin(angle))
return (axis, angle)
return R
def am2DCM(a: np.ndarray, m: np.ndarray, frame: str = 'ENU') -> np.ndarray:
"""
Direction Cosine Matrix from acceleration and/or compass using TRIAD.
Parameters
----------
a : numpy.ndarray
Array of single sample of 3 orthogonal accelerometers.
m : numpy.ndarray
Array of single sample of 3 orthogonal magnetometers.
frame : str, default: ``'ENU'``
Local Tangent Plane. Options are ``'ENU'`` or ``'NED'``. Defaults to
``'ENU'`` (East-North-Up) coordinates.
Returns
-------
pose : numpy.ndarray
Direction Cosine Matrix
References
----------
- Michel, T. et al. (2018) Attitude Estimation for Indoor Navigation and
Augmented Reality with Smartphones.
(http://tyrex.inria.fr/mobile/benchmarks-attitude/)
(https://hal.inria.fr/hal-01650142v2/document)
"""
if frame.upper() not in ['ENU', 'NED']:
raise ValueError("Wrong coordinate frame. Try 'ENU' or 'NED'")
a = np.array(a)
m = np.array(m)
H = np.cross(m, a)
H /= np.linalg.norm(H)
a /= np.linalg.norm(a)
M = np.cross(a, H)
if frame.upper()=='ENU':
return np.array([[H[0], M[0], a[0]],
[H[1], M[1], a[1]],
[H[2], M[2], a[2]]])
return np.array([[M[0], H[0], -a[0]],
[M[1], H[1], -a[1]],
[M[2], H[2], -a[2]]])
def am2q(a: np.ndarray, m: np.ndarray, frame: str = 'ENU') -> np.ndarray:
"""
Quaternion from acceleration and/or compass using TRIAD method.
Parameters
----------
a : numpy.ndarray
Array with sample of 3 orthogonal accelerometers.
m : numpy.ndarray
Array with sample of 3 orthogonal magnetometers.
frame : str, default: 'ENU'
Local Tangent Plane. Options are 'ENU' or 'NED'. Defaults to 'ENU'
(East-North-Up) coordinates.
Returns
-------
pose : numpy.ndarray
Quaternion
References
----------
.. [1] Michel, T. et al. (2018) Attitude Estimation for Indoor
Navigation and Augmented Reality with Smartphones.
(http://tyrex.inria.fr/mobile/benchmarks-attitude/)
(https://hal.inria.fr/hal-01650142v2/document)
.. [2] Janota, A. Improving the Precision and Speed of Euler Angles
Computation from Low-Cost Rotation Sensor Data.
(https://www.mdpi.com/1424-8220/15/3/7016/pdf)
"""
R = am2DCM(a, m, frame=frame)
q = dcm2quat(R)
return q
def acc2q(a: np.ndarray, return_euler: bool = False) -> np.ndarray:
"""
Quaternion from given acceleration.
Parameters
----------
a : numpy.ndarray
A sample of 3 orthogonal accelerometers.
return_euler : bool, default: False
Return pose as Euler angles
Returns
-------
pose : numpy.ndarray
Quaternion or Euler Angles.
References
----------
.. [1] Michel, T. et al. (2018) Attitude Estimation for Indoor
Navigation and Augmented Reality with Smartphones.
(http://tyrex.inria.fr/mobile/benchmarks-attitude/)
(https://hal.inria.fr/hal-01650142v2/document)
.. [2] Zhang, H. et al (2015) Axis-Exchanged Compensation and Gait
Parameters Analysis for High Accuracy Indoor Pedestrian Dead Reckoning.
(https://www.researchgate.net/publication/282535868_Axis-Exchanged_Compensation_and_Gait_Parameters_Analysis_for_High_Accuracy_Indoor_Pedestrian_Dead_Reckoning)
.. [3] Yun, X. et al. (2008) A Simplified Quaternion-Based Algorithm for
Orientation Estimation From Earth Gravity and Magnetic Field Measurements.
(https://apps.dtic.mil/dtic/tr/fulltext/u2/a601113.pdf)
.. [4] Jung, D. et al. Inertial Attitude and Position Reference System
Development for a Small UAV.
(https://pdfs.semanticscholar.org/fb62/903d8e6c051c8f4780c79b6b18fbd02a0ff9.pdf)
.. [5] Bleything, T. How to convert Magnetometer data into Compass Heading.
(https://blog.digilentinc.com/how-to-convert-magnetometer-data-into-compass-heading/)
.. [6] RT IMU Library. (https://github.com/RTIMULib/RTIMULib2/blob/master/RTIMULib/RTFusion.cpp)
.. [7] Janota, A. Improving the Precision and Speed of Euler Angles
Computation from Low-Cost Rotation Sensor Data. (https://www.mdpi.com/1424-8220/15/3/7016/pdf)
.. [8] Trimpe, S. Accelerometer-based Tilt Estimation of a Rigid Body
with only Rotational Degrees of Freedom. 2010.
(http://www.idsc.ethz.ch/content/dam/ethz/special-interest/mavt/dynamic-systems-n-control/idsc-dam/Research_DAndrea/Balancing%20Cube/ICRA10_1597_web.pdf)
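Examples
--------
A minimal sketch: with gravity measured along +Z (sensor lying flat) the
estimated tilt is zero, so the identity quaternion is returned:
>>> q = acc2q(np.array([0.0, 0.0, 9.81]))
>>> np.allclose(q, [1.0, 0.0, 0.0, 0.0])
True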
"""
q = np.array([1.0, 0.0, 0.0, 0.0])
ex, ey, ez = 0.0, 0.0, 0.0
if np.linalg.norm(a)>0 and len(a)==3:
ax, ay, az = a
# Normalize accelerometer measurements
a_norm = np.linalg.norm(a)
ax /= a_norm
ay /= a_norm
az /= a_norm
# Euler Angles from Gravity vector
ex = np.arctan2(ay, az)
ey = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
ez = 0.0
if return_euler:
return np.array([ex, ey, ez])*RAD2DEG
# Euler to Quaternion
cx2 = np.cos(ex/2.0)
sx2 = np.sin(ex/2.0)
cy2 = np.cos(ey/2.0)
sy2 = np.sin(ey/2.0)
q = np.array([cx2*cy2, sx2*cy2, cx2*sy2, -sx2*sy2])
q /= np.linalg.norm(q)
return q
def am2angles(a: np.ndarray, m: np.ndarray, in_deg: bool = False) -> np.ndarray:
"""
Roll-pitch-yaw angles from acceleration and compass.
Parameters
----------
a : numpy.ndarray
N-by-3 array with N samples of 3 orthogonal accelerometers.
m : numpy.ndarray
N-by-3 array with N samples of 3 orthogonal magnetometers.
in_deg : bool, default: False
Return the angles in degrees.
Returns
-------
pose : numpy.ndarray
Roll-pitch-yaw angles
References
----------
.. [DT0058] A. Vitali. Computing tilt measurement and tilt-compensated
e-compass. ST Technical Document DT0058. October 2018.
(https://www.st.com/resource/en/design_tip/dm00269987.pdf)
"""
if a.ndim<2:
a = np.atleast_2d(a)
if m.ndim<2:
m = np.atleast_2d(m)
# Normalization of 2D arrays
a /= np.linalg.norm(a, axis=1)[:, None]
m /= np.linalg.norm(m, axis=1)[:, None]
angles = np.zeros((len(a), 3)) # Allocation of angles array
# Estimate tilt angles
angles[:, 0] = np.arctan2(a[:, 1], a[:, 2])
angles[:, 1] = np.arctan2(-a[:, 0], np.sqrt(a[:, 1]**2 + a[:, 2]**2))
# Estimate heading angle
my2 = m[:, 2]*np.sin(angles[:, 0]) - m[:, 1]*np.cos(angles[:, 0])
mz2 = m[:, 1]*np.sin(angles[:, 0]) + m[:, 2]*np.cos(angles[:, 0])
mx3 = m[:, 0]*np.cos(angles[:, 1]) + mz2*np.sin(angles[:, 1])
angles[:, 2] = np.arctan2(my2, mx3)
# Return in degrees or in radians
if in_deg:
return angles*RAD2DEG
return angles
def slerp(q0: np.ndarray, q1: np.ndarray, t_array: np.ndarray, threshold: float = 0.9995) -> np.ndarray:
"""
Spherical Linear Interpolation between quaternions.
Return a valid quaternion rotation at a specified distance along the minor
arc of a great circle passing through any two existing quaternion endpoints
lying on the unit radius hypersphere.
Based on the method detailed in [Wiki_SLERP]_
Parameters
----------
q0 : numpy.ndarray
First endpoint quaternion.
q1 : numpy.ndarray
Second endpoint quaternion.
t_array : numpy.ndarray
Array of times to interpolate to.
Other Parameters
----------------
threshold : float
Threshold to closeness of interpolation.
Returns
-------
q : numpy.ndarray
New quaternion representing the interpolated rotation.
References
----------
.. [Wiki_SLERP] https://en.wikipedia.org/wiki/Slerp
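Examples
--------
A small sketch interpolating between the identity quaternion and a
rotation of 90 degrees about the Z-axis; halfway along, the result is
(approximately) a 45-degree rotation about the same axis:
>>> q0 = np.array([1.0, 0.0, 0.0, 0.0])
>>> q1 = np.array([0.70710678, 0.0, 0.0, 0.70710678])
>>> q_mid = slerp(q0, q1, np.array([0.5]))[0]
>>> np.allclose(q_mid, [0.92387953, 0.0, 0.0, 0.38268343])
True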
"""
qdot = q0@q1
# Ensure SLERP takes the shortest path
if qdot < 0.0:
q1 *= -1.0
qdot *= -1.0
# Interpolate linearly (LERP)
if qdot > threshold:
result = q0[np.newaxis, :] + t_array[:, np.newaxis]*(q1 - q0)[np.newaxis, :]
return (result.T / np.linalg.norm(result, axis=1)).T
# Angle between vectors
theta_0 = np.arccos(qdot)
sin_theta_0 = np.sin(theta_0)
theta = theta_0*t_array
sin_theta = np.sin(theta)
s0 = np.cos(theta) - qdot*sin_theta/sin_theta_0
s1 = sin_theta/sin_theta_0
return s0[:,np.newaxis]*q0[np.newaxis,:] + s1[:,np.newaxis]*q1[np.newaxis,:]
def logR(R: np.ndarray) -> np.ndarray:
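"""
Rotation vector from the logarithm map of a rotation matrix.
The skew-symmetric part of ``R`` encodes the rotation axis scaled by the
sine of the rotation angle; the arcsine recovers the angle, giving the
axis-angle (rotation) vector. The zero vector is returned when ``R`` has
no skew-symmetric part (no rotation). Angles are assumed to lie within
the range of the arcsine.
Parameters
----------
R : numpy.ndarray
3-by-3 rotation matrix.
Returns
-------
numpy.ndarray
3-element rotation vector.
"""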
S = 0.5*(R-R.T)
y = np.array([S[2, 1], -S[2, 0], S[1, 0]])
if np.allclose(np.zeros(3), y):
return np.zeros(3)
y_norm = np.linalg.norm(y)
return np.arcsin(y_norm)*y/y_norm
def chiaverini(dcm: np.ndarray) -> np.ndarray:
"""
Quaternion from a Direction Cosine Matrix with Chiaverini's algebraic method [Chiaverini]_.
Parameters
----------
dcm : numpy.ndarray
3-by-3 Direction Cosine Matrix.
Returns
-------
q : numpy.ndarray
Quaternion.
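Examples
--------
As a quick sanity check, the identity matrix yields the identity
quaternion:
>>> chiaverini(np.identity(3))
array([1., 0., 0., 0.])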
"""
n = 0.5*np.sqrt(dcm.trace() + 1.0)
e = np.array([0.5*np.sign(dcm[2, 1]-dcm[1, 2])*np.sqrt(dcm[0, 0]-dcm[1, 1]-dcm[2, 2]+1.0),
0.5*np.sign(dcm[0, 2]-dcm[2, 0])*np.sqrt(dcm[1, 1]-dcm[2, 2]-dcm[0, 0]+1.0),
0.5*np.sign(dcm[1, 0]-dcm[0, 1])*np.sqrt(dcm[2, 2]-dcm[0, 0]-dcm[1, 1]+1.0)])
return np.array([n, *e])
def hughes(dcm: np.ndarray) -> np.ndarray:
"""
Quaternion from a Direction Cosine Matrix with Hughe's method [Hughes]_.
Parameters
----------
dcm : numpy.ndarray
3-by-3 Direction Cosine Matrix.
Returns
-------
q : numpy.ndarray
Quaternion.
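Examples
--------
For instance, the identity matrix (no rotation) yields the identity
quaternion:
>>> hughes(np.identity(3))
array([1., 0., 0., 0.])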
"""
tr = dcm.trace()
if np.isclose(tr, 3.0):
return np.array([1., 0., 0., 0.]) # No rotation. DCM is identity.
n = 0.5*np.sqrt(1.0 + tr)
if np.isclose(n, 0): # trace = -1: q_w = 0 (Pure Quaternion)
e = np.sqrt((1.0+np.diag(dcm))/2.0)
else:
e = 0.25*np.array([dcm[1, 2]-dcm[2, 1], dcm[2, 0]-dcm[0, 2], dcm[0, 1]-dcm[1, 0]])/n
return np.array([n, *e])
def sarabandi(dcm: np.ndarray, eta: float = 0.0) -> np.ndarray:
"""
Quaternion from a Direction Cosine Matrix with Sarabandi's method [Sarabandi]_.
Parameters
----------
dcm : numpy.ndarray
3-by-3 Direction Cosine Matrix.
eta : float, default: 0.0
Threshold.
Returns
-------
q : numpy.ndarray
Quaternion.
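Examples
--------
A minimal check with the identity matrix, which corresponds to the
identity quaternion:
>>> sarabandi(np.identity(3))
array([1., 0., 0., 0.])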
"""
# Get elements of R
r11, r12, r13 = dcm[0, 0], dcm[0, 1], dcm[0, 2]
r21, r22, r23 = dcm[1, 0], dcm[1, 1], dcm[1, 2]
r31, r32, r33 = dcm[2, 0], dcm[2, 1], dcm[2, 2]
# Compute qw
dw = r11+r22+r33
if dw > eta:
qw = 0.5*np.sqrt(1.0+dw)
else:
nom = (r32-r23)**2+(r13-r31)**2+(r21-r12)**2
denom = 3.0-dw
qw = 0.5*np.sqrt(nom/denom)
# Compute qx
dx = r11-r22-r33
if dx > eta:
qx = 0.5*np.sqrt(1.0+dx)
else:
nom = (r32-r23)**2+(r12+r21)**2+(r31+r13)**2
denom = 3.0-dx
qx = 0.5*np.sqrt(nom/denom)
# Compute qy
dy = -r11+r22-r33
if dy > eta:
qy = 0.5*np.sqrt(1.0+dy)
else:
nom = (r13-r31)**2+(r12+r21)**2+(r23+r32)**2
denom = 3.0-dy
qy = 0.5*np.sqrt(nom/denom)
# Compute qz
dz = -r11-r22+r33
if dz > eta:
qz = 0.5*np.sqrt(1.0+dz)
else:
nom = (r21-r12)**2+(r31+r13)**2+(r23+r32)**2
denom = 3.0-dz
qz = 0.5*np.sqrt(nom/denom)
return np.array([qw, qx, qy, qz])
def itzhack(dcm: np.ndarray, version: int = 3) -> np.ndarray:
"""
Quaternion from a Direction Cosine Matrix with Bar-Itzhack's method [Itzhack]_.
    Versions 1 and 2 are used with orthogonal matrices (which all rotation
    matrices should be).
Parameters
----------
dcm : numpy.ndarray
3-by-3 Direction Cosine Matrix.
version : int, default: 3
Version used to compute the Quaternion. Options are 1, 2 or 3.
Returns
-------
q : numpy.ndarray
Quaternion.
"""
    is_orthogonal = np.isclose(np.linalg.det(dcm), 1.0) and np.allclose(dcm@dcm.T, np.eye(3))
if is_orthogonal:
if version == 1:
K2 = np.array([
[dcm[0, 0]-dcm[1, 1], dcm[1, 0]+dcm[0, 1], dcm[2, 0], -dcm[2, 1]],
[dcm[1, 0]+dcm[0, 1], dcm[1, 1]-dcm[0, 0], dcm[2, 1], dcm[2, 0]],
[dcm[2, 0], dcm[2, 1], -dcm[0, 0]-dcm[1, 1], dcm[0, 1]-dcm[1, 0]],
[-dcm[2, 1], dcm[2, 0], dcm[0, 1]-dcm[1, 0], dcm[0, 0]+dcm[1, 1]]])/2.0
eigval, eigvec = np.linalg.eig(K2)
q = eigvec[:, np.where(np.isclose(eigval, 1.0))[0]].flatten().real
return np.roll(q, 1)
if version == 2:
K3 = np.array([
[dcm[0, 0]-dcm[1, 1]-dcm[2, 2], dcm[1, 0]+dcm[0, 1], dcm[2, 0]+dcm[0, 2], dcm[1, 2]-dcm[2, 1]],
[dcm[1, 0]+dcm[0, 1], dcm[1, 1]-dcm[0, 0]-dcm[2, 2], dcm[2, 1]+dcm[1, 2], dcm[2, 0]-dcm[0, 2]],
[dcm[2, 0]+dcm[0, 2], dcm[2, 1]+dcm[1, 2], dcm[2, 2]-dcm[0, 0]-dcm[1, 1], dcm[0, 1]-dcm[1, 0]],
[dcm[1, 2]-dcm[2, 1], dcm[2, 0]-dcm[0, 2], dcm[0, 1]-dcm[1, 0], dcm[0, 0]+dcm[1, 1]+dcm[2, 2]]])/3.0
eigval, eigvec = np.linalg.eig(K3)
q = eigvec[:, np.where(np.isclose(eigval, 1.0))[0]].flatten().real
return np.roll(q, 1)
# Non-orthogonal DCM. Use version 3
K3 = np.array([
[dcm[0, 0]-dcm[1, 1]-dcm[2, 2], dcm[1, 0]+dcm[0, 1], dcm[2, 0]+dcm[0, 2], dcm[1, 2]-dcm[2, 1]],
[dcm[1, 0]+dcm[0, 1], dcm[1, 1]-dcm[0, 0]-dcm[2, 2], dcm[2, 1]+dcm[1, 2], dcm[2, 0]-dcm[0, 2]],
[dcm[2, 0]+dcm[0, 2], dcm[2, 1]+dcm[1, 2], dcm[2, 2]-dcm[0, 0]-dcm[1, 1], dcm[0, 1]-dcm[1, 0]],
[dcm[1, 2]-dcm[2, 1], dcm[2, 0]-dcm[0, 2], dcm[0, 1]-dcm[1, 0], dcm[0, 0]+dcm[1, 1]+dcm[2, 2]]])/3.0
eigval, eigvec = np.linalg.eig(K3)
q = eigvec[:, eigval.argmax()]
return np.roll(q, 1)
def shepperd(dcm: np.ndarray) -> np.ndarray:
"""
Quaternion from a Direction Cosine Matrix with Shepperd's method [Shepperd]_.
Parameters
----------
dcm : numpy.ndarray
3-by-3 Direction Cosine Matrix.
Returns
-------
q : numpy.ndarray
Quaternion.
"""
d = np.diag(dcm)
b = np.array([dcm.trace(), *d])
i = b.argmax()
if i==0:
q = np.array([1.0+sum(d), dcm[1, 2]-dcm[2, 1], dcm[2, 0]-dcm[0, 2], dcm[0, 1]-dcm[1, 0]])
elif i==1:
q = np.array([dcm[1, 2]-dcm[2, 1], 1.0+d[0]-d[1]-d[2], dcm[1, 0]+dcm[0, 1], dcm[2, 0]+dcm[0, 2]])
elif i==2:
q = np.array([dcm[2, 0]-dcm[0, 2], dcm[1, 0]+dcm[0, 1], 1.0-d[0]+d[1]-d[2], dcm[2, 1]+dcm[1, 2]])
else:
q = np.array([dcm[0, 1]-dcm[1, 0], dcm[2, 0]+dcm[0, 2], dcm[2, 1]+dcm[1, 2], 1.0-d[0]-d[1]+d[2]])
q /= 2.0*np.sqrt(q[i])
return q | AHRS | /AHRS-0.3.1-py3-none-any.whl/ahrs/common/orientation.py | orientation.py |
# AI_Cdvst
AI_Cdvst ist eine Python-Bibliothek für einfaches AI-Erstellen und -Aufnahme. Die Bibliothek bietet Funktionen zum Abspielen von WAV- und MP3-Dateien sowie zur Aufzeichnung von AI in einer WAV-Datei. Die Bibliothek enthält auch eine automatische Erkennung von Sprache. Die Dokumentation für AI_Cdvst finden Sie [hier](https://now4free.de/python/module/AI_Cdvst/documentation).
## Installation
Um AI_Cdvst zu installieren, führen Sie den folgenden Befehl aus:
```
pip install AI_Cdvst
```
## Klassen
AI_Cdvst enthält die folgenden Klassen:
| Klasse | Beschreibung |
| :----------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SpeechRecognizer` | Diese Klasse stellt Funktionen zur Erkennung von Sprache zur Verfügung. Mit dieser Klasse können AI-Dateien, AI-Streams oder Live-AI erkannt werden. Es ist auch möglich, eine Liste von Schlüsselwörtern einzurichten, um die Erkennung zu verbessern und zu verfeinern. Hier ist ein Beispiel, wie eine Liste von Schlüsselwörtern definiert werden kann: `recognizer.set_phrases(['hello', 'world', 'foo', 'bar'])` |
| `MicrophoneRecorder` | Diese Klasse stellt Funktionen zur Aufnahme von AI zur Verfügung. Diese Klasse ermöglicht es Ihnen, AI aufzuzeichnen und anschließend als WAV-, MP3- oder M4A-Datei zu speichern. Es ist auch möglich, das aufgezeichnete AI direkt in die Cloud hochzuladen oder in einer Datenbank zu speichern. Hier ist ein Beispiel, wie AI aufgezeichnet und als WAV-Datei gespeichert werden kann: `recorder.record_AI(record_time=5)` und `recorder.save_AI(format='wav')`. |
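Ein kurzes, rein illustratives Beispiel (Skizze), wie die beiden Klassen zusammen verwendet werden könnten. Importpfad und argumentlose Konstruktoraufrufe sind hier Annahmen; die Methodennamen stammen aus der Tabelle oben:

```
from AI_Cdvst import MicrophoneRecorder, SpeechRecognizer

# 5 Sekunden Audio aufnehmen und als WAV-Datei speichern
recorder = MicrophoneRecorder()
recorder.record_AI(record_time=5)
recorder.save_AI(format='wav')

# Erkennung mit einer Liste von Schlüsselwörtern verfeinern
recognizer = SpeechRecognizer()
recognizer.set_phrases(['hello', 'world', 'foo', 'bar'])
print(recognizer.recognize_speech())
```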
## Funktionen
AI_Cdvst enthält folgende Funktionen:
| Funktion | Beschreibung |
| :---------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------- |
| `SpeechRecognizer.recognize_speech()` | Erkennt AI-Dateien, AI-Streams oder Live-AI. |
| `SpeechRecognizer.set_phrases()` | Setzt eine Liste von Schlüsselwörtern für die Erkennung. |
| `SpeechRecognizer.add_phrase()` | Fügt ein Schlüsselwort zur Liste hinzu. |
| `SpeechRecognizer.remove_phrase()` | Entfernt ein Schlüsselwort aus der Liste. |
| `MicrophoneRecorder.record_AI()` | Startet die Aufnahme des AI-Streams. |
| `MicrophoneRecorder.save_AI()` | Speichert das aufgezeichnete AI als WAV-, MP3- oder M4A-Datei. |
| `MicrophoneRecorder.upload_to_cloud()` | Lädt das aufgezeichnete AI direkt in die Cloud hoch. |
| `MicrophoneRecorder.save_to_database()` | Speichert das aufgezeichnete AI in einer Datenbank. |
## Tests
Um die Tests für AI_Cdvst auszuführen, führen Sie den folgenden Befehl aus:
```
python -m unittest discover -s tests
```
## Support
Für Fragen und Hilfe bei der Verwendung von AI_Cdvst wenden Sie sich bitte an [email protected].
## Lizenz
AI_Cdvst ist lizenziert unter der MIT-Lizenz. | AI-Cdvst | /AI_Cdvst-0.4.tar.gz/AI_Cdvst-0.4/README.md | README.md |
import os
import openai
import tkinter as tk
import tkinter.simpledialog  # wird für tk.simpledialog.askstring() weiter unten benötigt
from tkinter import messagebox
import logging
###----------------------###
###-- OpenAICodeEditor --###
###----------------------###
class OpenAICodeEditor:
"""
Diese Klasse ermöglicht es, Code oder Texte mit Hilfe von OpenAI-Modellen zu bearbeiten.
"""
def __init__(self):
"""
Initialisiert die Klasse OpenAICodeEditor und setzt die API-Schlüssel von OpenAI.
Wenn kein gültiger Schlüssel gefunden wird, wird der Benutzer aufgefordert, einen API-Schlüssel einzugeben.
Args:
None
Returns:
None
"""
self.logger = logging.getLogger(__name__)
api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
api_key = self.get_api_key()
openai.api_key = api_key
def edit_code(self, input_text, instruction_text, return_json=False):
"""
Bearbeitet den gegebenen Code-Text unter Verwendung des code-davinci-edit-001-Modells von OpenAI.
Args:
input_text (str): Der Code-Text, der bearbeitet werden soll.
instruction_text (str): Die Bearbeitungsanweisungen, die für den Code-Text verwendet werden sollen.
return_json (bool): Gibt an, ob das Ergebnis als JSON-Objekt oder nur als Text zurückgegeben werden soll.
Returns:
Das bearbeitete Ergebnis als Text oder JSON-Objekt der Klasse Edit von OpenAI, oder None, wenn ein Fehler auftritt.
"""
try:
result = openai.Edit.create(
model="code-davinci-edit-001",
input=input_text,
instruction=instruction_text
)
if return_json:
return result
else:
return result["choices"][0]["text"]
except Exception as e:
self.logger.error(f"Fehler beim Bearbeiten des Codes: {e}")
return None
def edit_text(self, input_text, instruction_text, return_json=False):
"""
        Bearbeitet den gegebenen Text unter Verwendung des text-davinci-edit-001-Modells von OpenAI.
Args:
input_text (str): Der Text, der bearbeitet werden soll.
instruction_text (str): Die Bearbeitungsanweisungen, die für den Text verwendet werden sollen.
return_json (bool): Gibt an, ob das Ergebnis als JSON-Objekt oder nur als Text zurückgegeben werden soll.
Returns:
Das bearbeitete Ergebnis als Text oder JSON-Objekt der Klasse Edit von OpenAI, oder None, wenn ein Fehler auftritt.
"""
try:
result = openai.Edit.create(
model="text-davinci-edit-001",
input=input_text,
instruction=instruction_text
)
if return_json:
return result
else:
return result["choices"][0]["text"]
except Exception as e:
self.logger.error(f"Fehler beim Bearbeiten des Texts: {e}")
return None
def get_api_key(self):
"""
Fordert den Benutzer auf, einen gültigen OpenAI-API-Schlüssel einzugeben. Der Schlüssel wird überprüft und gespeichert,
wenn er gültig ist.
Args:
None
Returns:
Der gültige API-Schlüssel als String.
"""
root = tk.Tk()
root.withdraw()
api_key = tk.simpledialog.askstring(title="API-Schlüssel", prompt="Bitte geben Sie Ihren OpenAI-API-Schlüssel ein:")
while not self.check_api_key(api_key):
api_key = tk.simpledialog.askstring(title="API-Schlüssel", prompt="Ungültiger API-Schlüssel. Bitte geben Sie einen gültigen OpenAI-API-Schlüssel ein:")
os.environ["OPENAI_API_KEY"] = api_key
return api_key
def check_api_key(self, api_key):
"""
Überprüft, ob der gegebene API-Schlüssel gültig ist.
Args:
api_key (str): Der API-Schlüssel, der überprüft werden soll.
Returns:
True, wenn der Schlüssel gültig ist, andernfalls False.
"""
openai.api_key = api_key
try:
openai.Usage.retrieve()
return True
except Exception as e:
self.logger.error(f"Fehler beim Überprüfen des API-Schlüssels: {e}")
return False
# AI_Cdvst.py
class DALLEImageGenerator:
ENDPOINT = "https://api.openai.com/v1/images/generations"
@classmethod
def generate_image(cls, prompt):
import requests
from requests.structures import CaseInsensitiveDict
import json
import base64
from io import BytesIO
from PIL import Image
from Data_Cdvst import ConfigManager
api_key = ConfigManager.load_config()["api_key"]
headers = CaseInsensitiveDict()
headers["Content-Type"] = "application/json"
headers["Authorization"] = f"Bearer {api_key}"
        # JSON-Payload über json.dumps erzeugen (robuster als String-Konkatenation,
        # z. B. bei Anführungszeichen im Prompt); der Inhalt bleibt unverändert.
        payload = {
            "model": "image-alpha-001",
            "prompt": prompt,
            "num_images": 1,
            "size": "1024x1024",
            "response_format": "url",
        }
        resp = requests.post(cls.ENDPOINT, headers=headers, data=json.dumps(payload))
if resp.status_code != 200:
raise ValueError("Failed to generate image")
response_data = json.loads(resp.text)
image_url = response_data['data'][0]['url']
# Download and display the image
image_data = requests.get(image_url).content
img = Image.open(BytesIO(image_data))
img.show()
return image_data
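###--------------------------------------###
###-- Beispiel (nur zur Illustration) ---###
###--------------------------------------###
# Rein illustrative Skizze (wird beim Import nicht ausgeführt): mögliche
# Verwendung der beiden oben definierten Klassen. Prompt- und Anweisungstexte
# sind frei erfundene Beispiele.
#
#   editor = OpenAICodeEditor()
#   neuer_code = editor.edit_code('print "Hallo"', 'Portiere den Code nach Python 3')
#   bild_bytes = DALLEImageGenerator.generate_image('Ein Aquarell von Shanghai')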
###-------------------###
###------ Tests ------###
###-------------------###
import unittest
import os
import openai
import tkinter as tk
from tkinter import messagebox
import logging
class TestOpenAICodeEditor(unittest.TestCase):
def setUp(self):
self.editor = OpenAICodeEditor()
def test_get_api_key(self):
api_key = self.editor.get_api_key()
self.assertTrue(self.editor.check_api_key(api_key))
def test_edit_code(self):
input_text = 'print("Hello World!")'
instruction_text = 'Replace "World" with "Universe"'
result = self.editor.edit_code(input_text, instruction_text)
expected_result = 'print("Hello Universe!")'
self.assertEqual(result, expected_result)
def test_edit_text(self):
input_text = 'This is a test.'
instruction_text = 'Add "It worked!" to the end of the sentence.'
result = self.editor.edit_text(input_text, instruction_text)
expected_result = 'This is a test. It worked!'
self.assertEqual(result, expected_result)
def test_edit_code_return_json(self):
input_text = 'print("Hello World!")'
instruction_text = 'Replace "World" with "Universe"'
result = self.editor.edit_code(input_text, instruction_text, return_json=True)
        self.assertTrue(isinstance(result, openai.Edit))
def test_edit_text_return_json(self):
input_text = 'This is a test.'
instruction_text = 'Add "It worked!" to the end of the sentence.'
result = self.editor.edit_text(input_text, instruction_text, return_json=True)
        self.assertTrue(isinstance(result, openai.Edit))
###-------------------###
###------ Main -------###
###-------------------###
if __name__ == '__main__':
unittest.main() | AI-Cdvst | /AI_Cdvst-0.4.tar.gz/AI_Cdvst-0.4/AI_Cdvst/__init__.py | __init__.py |
# AI-Chess
Basic chess features that includes an AI for decision making in Python
# Install Guide
To install packages run in terminal (Python 3.9+):
pip install AI-Chess
# Quick Tutorial
To start the board and the AI, do the following:
>>> from AIchess import *
>>> aic = AIChess()
If you want to take a look at the board do this:
>>> print(aic.board)
If you want to get the board as a 2D list for easy use and print each row do this:
>>> [print(sub_list) for sub_list in aic.get_boardAs2DList()]
If you want to get the board as a flipped 2D list for easy use with white on top and black on bottom but with chess notation like 'a1' still being on white then print each row do this (Extra functions to support this board exist in documentation like automatically flipping the row and col to support this):
>>> [print(sub_list) for sub_list in aic.get_boardAs2DListFlipped()]
If you want change the AI algorithmic minimax depth to a higher number for better accuracy at the cost of computational requirement do this (Default: 3; Needs to be greater than 0):
>>> aic.minimaxDepth = 2
If you want to use the minimax AI to get one of the best possible moves for whoevers turn it is and use that move on the board then print the board do this (This assumes that the game isn't over for any reason):
>>> aic.makeChessMove(aic.chessAIMove()[0])
>>> [print(sub_list) for sub_list in aic.get_boardAs2DList()]
# Documentation
__init__()
    Initialize the library. It creates board: chess.Board, the starting chess board from the chess library (it is public for you to use, but be careful, as you can break some functions), and minimaxDepth: int, the depth of the search algorithm. Higher is better but requires more computational power. Single process; needs to be > 0. Default 3.
chessAIMove() -> List[str]
Returns a list of the best possible legal_moves for whoevers turn it is in UCI however, it is possible that one or more of the entries can be the string 'claim_draw' instead of a UCI which is to indicate the desire to claim a draw like FIFTY_MOVES or THREEFOLD_REPETITION
get_whiteBlackPointsDifference(game: chess.Board) -> int
Returns the point difference between white and black where Pawn: 1, Bishop: 3, Knight: 3, Rook: 5, Queen 9
makeChessMove(uci: chess.Move | str) -> None
Needs to be at least pseudo_legal
listAllPossibleMoves() -> List[Move]
Lists all possible legal_moves for whoevers turn it is for each piece
listUCIPosPossibleMoves(uciPos: str) -> List[Move]
Lists all possible legal_moves for whoevers turn it is for the uciPos like 'a2', or 'b1'
Returns an empty List if no possible moves
reset() -> None
Resets board in chess.Board
willMoveNeedPawnPromotion(uci: chess.Move | str) -> bool
Return True if move will result in a pawn promotion, False otherwise
pieceToPieceType(result: chess.Piece | str) -> int
Return chess.PieceType as a int
rowColToUCI(rowColFrom: list[int], rowColTo: list[int]) -> str
Accepts two List[int] which is the row and col from and to locations
Returns a str which is a uci representing the inputs
uciToRowCol(uci: chess.Move | str) -> List[int], List[int]
Accepts a str which is a uci
Returns two List[int] which are the row and col from and to representing the inputs
rowColToUCIPos(row: int col: int) -> str
Accepts a row and col of a 2D list that is the chess board
Returns a uciPos which is a single position like a2 or b1 representing the inputs
uciToRowColPos(uciPos: str) -> int, int
Accepts a uciPos which is a single position like a2 or b1
Returns a row and col of a 2D list that is the chess board representing the inputs
uciToFlippedRowCol(uci: chess.Move | str) -> List[int], List[int]
Accepts a uci
Returns two List[int] which are the row and col from and to in a 2D list that is the chess board representing
the inputs but flipped so that white is on top and black is bottom
flippedRowColToUCIPos(row: int, col: int) -> str
Accepts a row and col position that has been flipped so that white is on top and black is on bottom
Returns a uciPos which is a single position like a2 or b1 representing the inputs
uciToFlippedRowColPos(uciPos: str) -> int, int
Accepts a uciPos which is a single position like a2 or b1
Returns a row and col position that has been flipped so that white is on top and black is on bottom representing the inputs
flipRowCol(rowColFrom: list[int], rowColTo: list[int]) -> List[int], List[int]
Accepts two List[int] which is the row and col from and to locations
    Returns two List[int] which are the row and col from and to locations but flipped so that white is on top and black is on the bottom
flipRowColPos(row: int, col: int) -> int, int
Accepts two int which is the row and col for a location
    Returns two int which are the row and col for a location but flipped so that white is on top and black is on the bottom
get_boardAs2DList() -> List[int][int]
Returns a 2D list that represents the chess board with white on bottom and black on top for easy use
get_boardAs2DListFlipped() -> List[int][int]
Returns a 2D list that represents the chess board but flipped so white is on top and black is on bottom for easy use
get_isWhiteTurn() -> bool
Returns a bool where True is if it is white's turn and False if it is black's turn
get_isStartOfGame() -> bool
Returns a bool where True is if it is the start of the game and False if it isn't
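Putting a few of these together (an illustrative sketch, assuming a fresh starting position; the shown return values follow from the functions documented above):

    >>> aic = AIChess()
    >>> aic.get_isStartOfGame()
    True
    >>> moves = aic.listUCIPosPossibleMoves('e2')   # legal moves of the e2 pawn
    >>> aic.makeChessMove(moves[0])
    >>> aic.rowColToUCI([6, 4], [4, 4])
    'e2e4'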
# Extra
For more information on how to use this project visit https://python-chess.readthedocs.io/en/latest/ where some of this project uses. | AI-Chess | /AI-Chess-2.0.8.tar.gz/AI-Chess-2.0.8/README.md | README.md |
import chess
class AIChess:
def __init__(self):
"""
Initialize the library by creating board: chess.Board which is the starting chess board in the chess library,
it is public for you to use however be careful as you can break some functions and
minimaxDepth: int which is the depth of the search algorithm.
        Higher is better but requires more computational power. Single process and needs to be > 1. Default 3.
"""
self.board = chess.Board()
self.minimaxDepth = 3
def chessAIMove(self):
"""
chessAIMove() -> List[str]
Returns a list of the best possible legal_moves for whoevers turn it is in UCI however,
it is possible that one or more of the entries can be the string 'claim_draw' instead of a UCI
which is to indicate the desire to claim a draw like FIFTY_MOVES or THREEFOLD_REPETITION
"""
bestMovesStr = []
alpha = -50000
beta = 50000
if self.get_isWhiteTurn():
bestMoveEval = -50000
            draw_outcome = self.board.copy().outcome(claim_draw=True)
            if draw_outcome is not None and str(draw_outcome.termination) not in ('Termination.CHECKMATE', 'Termination.VARIANT_WIN', 'Termination.VARIANT_LOSS'):
bestMoveEval = 200
alpha = 200
bestMovesStr.append('claim_draw')
for child in reversed(list(self.board.legal_moves)):
newGameBoard = self.board.copy()
newGameBoard.push(child)
eval = self.__minimax(self.minimaxDepth - 1, newGameBoard, alpha, beta, False)
if eval == bestMoveEval:
bestMovesStr.append(str(newGameBoard.peek()))
elif eval > bestMoveEval:
bestMovesStr.clear()
bestMovesStr.append(str(newGameBoard.peek()))
bestMoveEval = eval
alpha = eval
else:
bestMoveEval = 50000
            draw_outcome = self.board.copy().outcome(claim_draw=True)
            if draw_outcome is not None and str(draw_outcome.termination) not in ('Termination.CHECKMATE', 'Termination.VARIANT_WIN', 'Termination.VARIANT_LOSS'):
bestMoveEval = -200
beta = -200
bestMovesStr.append('claim_draw')
for child in reversed(list(self.board.legal_moves)):
newGameBoard = self.board.copy()
newGameBoard.push(child)
eval = self.__minimax(self.minimaxDepth - 1, newGameBoard, alpha, beta, True)
if eval == bestMoveEval:
bestMovesStr.append(str(newGameBoard.peek()))
elif eval < bestMoveEval:
bestMovesStr.clear()
bestMovesStr.append(str(newGameBoard.peek()))
bestMoveEval = eval
beta = eval
return bestMovesStr
def __minimax(self, depth: int, game: chess.Board, alpha: int, beta: int, isMaximisingPlayer: bool):
if game.outcome() != None:
if str(game.outcome().termination) == 'Termination.CHECKMATE' or str(game.outcome().termination) == 'Termination.VARIANT_WIN' or str(game.outcome().termination) == 'Termination.VARIANT_LOSS':
if game.outcome().winner == chess.WHITE:
return 1000
else:
return -1000
else:
if self.minimaxDepth - depth % 2 == 0:
return 200
else:
return -200
elif depth == 0:
return self.get_whiteBlackPointsDifference(game)
if isMaximisingPlayer:
maxEval = -50000
            draw_outcome = game.copy().outcome(claim_draw=True)
            if draw_outcome is not None and str(draw_outcome.termination) not in ('Termination.CHECKMATE', 'Termination.VARIANT_WIN', 'Termination.VARIANT_LOSS'):
if self.minimaxDepth - depth % 2 == 0:
maxEval = 200
else:
maxEval = -200
alpha = max(alpha, maxEval)
if beta <= alpha:
return maxEval
for child in reversed(list(game.legal_moves)):
newGameBoard = game.copy()
newGameBoard.push(child)
eval = self.__minimax(depth - 1, newGameBoard, alpha, beta, False)
maxEval = max(maxEval, eval)
alpha = max(alpha, eval)
if beta <= alpha:
return maxEval
return maxEval
else:
minEval = 50000
            draw_outcome = game.copy().outcome(claim_draw=True)
            if draw_outcome is not None and str(draw_outcome.termination) not in ('Termination.CHECKMATE', 'Termination.VARIANT_WIN', 'Termination.VARIANT_LOSS'):
                if self.minimaxDepth - depth % 2 == 0:
                    minEval = -200
                else:
                    minEval = 200
beta = min(beta, minEval)
if beta <= alpha:
return minEval
for child in reversed(list(game.legal_moves)):
newGameBoard = game.copy()
newGameBoard.push(child)
eval = self.__minimax(depth - 1, newGameBoard, alpha, beta, True)
minEval = min(minEval, eval)
beta = min(beta, eval)
if beta <= alpha:
return minEval
return minEval
def get_whiteBlackPointsDifference(self, game: chess.Board):
"""
get_whiteBlackPointsDifference(game: chess.Board) -> int
Returns the point difference between white and black
where Pawn: 1, Bishop: 3, Knight: 3, Rook: 5, Queen 9
"""
whitePoints = 0
blackPoints = 0
boardAs2DList = [['.' for i in range(8)] for j in range(8)]
for square in chess.SQUARES:
if game.piece_at(square) != None:
boardAs2DList[int((63 - square) / 8)][square % 8] = game.piece_at(square).symbol()
for boardAs2DListRow in boardAs2DList:
for boardSquare in boardAs2DListRow:
if boardSquare == 'P':
whitePoints += 1
elif boardSquare == 'p':
blackPoints += 1
elif boardSquare == 'B':
whitePoints += 3
elif boardSquare == 'N':
whitePoints += 3
elif boardSquare == 'R':
whitePoints += 5
elif boardSquare == 'b':
blackPoints += 3
elif boardSquare == 'n':
blackPoints += 3
elif boardSquare == 'r':
blackPoints += 5
elif boardSquare == 'Q':
whitePoints += 9
elif boardSquare == 'q':
blackPoints += 9
return whitePoints - blackPoints
def makeChessMove(self, uci):
"""
makeChessMove(uci: chess.Move | str) -> None
Needs to be at least pseudo_legal
"""
self.board.push_uci(str(uci))
def listAllPossibleMoves(self):
"""
listAllPossibleMoves() -> List[Move]
Lists all possible legal_moves for whoevers turn it is for each piece
"""
return list(self.board.legal_moves)
def listUCIPosPossibleMoves(self, uciPos: str):
"""
listUCIPosPossibleMoves(uciPos: str) -> List[Move]
Lists all possible legal_moves for whoevers turn it is for the uciPos like 'a2', or 'b1'
Returns an empty List if no possible moves
"""
allPossibleMoves = self.listAllPossibleMoves()
uciPossibleMoves = []
for Move in allPossibleMoves:
if Move.uci()[0:2] == uciPos:
uciPossibleMoves.append(Move)
return uciPossibleMoves
def reset(self):
"""
reset() -> None
Resets board in chess.Board
"""
self.board.reset()
def willMoveNeedPawnPromotion(self, uci):
"""
willMoveNeedPawnPromotion(uci: chess.Move | str) -> bool
Return True if move will result in a pawn promotion, False otherwise
"""
if (str(uci)[1] == '2' and str(uci)[3] == '1' and self.board.piece_at(chess.parse_square(str(uci)[0:2])).symbol().upper() == 'P') or (str(uci)[1] == '7' and str(uci)[3] == '8' and self.board.piece_at(chess.parse_square(str(uci)[0:2])).symbol().upper() == 'P'):
return True
else:
return False
def pieceToPieceType(self, result):
"""
pieceToPieceType(result: chess.Piece | str) -> int
Return chess.PieceType as a int
"""
if str(result).lower() == 'p':
return 1
elif str(result).lower() == 'n':
return 2
elif str(result).lower() == 'b':
return 3
elif str(result).lower() == 'r':
return 4
elif str(result).lower() == 'q':
return 5
elif str(result).lower() == 'k':
return 6
else:
return 0
def rowColToUCI(self, rowColFrom: list[int], rowColTo: list[int]):
"""
rowColToUCI(rowColFrom: list[int], rowColTo: list[int]) -> str
Accepts two List[int] which is the row and col from and to locations
Returns a str which is a uci representing the inputs
"""
uciCol = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
return uciCol[rowColFrom[1]] + str(8 - rowColFrom[0]) + uciCol[rowColTo[1]] + str(8 - rowColTo[0])
def uciToRowCol(self, uci):
"""
uciToRowCol(uci: chess.Move | str) -> List[int], List[int]
Accepts a str which is a uci
Returns two List[int] which are the row and col from and to representing the inputs
"""
colUCI = dict(a = 0, b = 1, c = 2, d = 3, e = 4, f = 5, g = 6, h = 7)
return [8 - int(str(uci)[1]), colUCI[str(uci)[0]]], [8 - int(str(uci)[3]), colUCI[str(uci)[2]]]
def rowColToUCIPos(self, row: int, col: int):
"""
rowColToUCIPos(row: int col: int) -> str
Accepts a row and col of a 2D list that is the chess board
Returns a uciPos which is a single position like a2 or b1 representing the inputs
"""
uciCol = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
return uciCol[col] + str(8 - row)
def uciToRowColPos(self, uciPos: str):
"""
uciToRowColPos(uciPos: str) -> int, int
Accepts a uciPos which is a single position like a2 or b1
Returns a row and col of a 2D list that is the chess board representing the inputs
"""
colUCI = dict(a = 0, b = 1, c = 2, d = 3, e = 4, f = 5, g = 6, h = 7)
row = 8 - int(uciPos[1])
col = colUCI[uciPos[0]]
return row, col
def uciToFlippedRowCol(self, uci):
"""
uciToFlippedRowCol(uci: chess.Move | str) -> List[int], List[int]
Accepts a uci
        Returns two List[int] which are the row and col from and to in a 2D list that is the chess board representing the inputs but flipped so that white is on top and black is on the bottom
"""
flippedRowColFrom, flippedRowColTo = self.uciToRowCol(uci)
return self.flipRowCol(flippedRowColFrom, flippedRowColTo)
def flippedRowColToUCIPos(self, row: int, col: int):
"""
flippedRowColToUCIPos(row: int, col: int) -> str
Accepts a row and col position that has been flipped so that white is on top and black is on bottom
Returns a uciPos which is a single position like a2 or b1 representing the inputs
"""
flippedRow, flippedCol = self.flipRowColPos(row, col)
return self.rowColToUCIPos(flippedRow, flippedCol)
def uciToFlippedRowColPos(self, uciPos: str):
"""
uciToFlippedRowColPos(uciPos: str) -> int, int
Accepts a uciPos which is a single position like a2 or b1
Returns a row and col position that has been flipped so that white is on top and black is on bottom representing the inputs
"""
row, col = self.uciToRowColPos(uciPos)
return self.flipRowColPos(row, col)
def flipRowCol(self, rowColFrom: list[int], rowColTo: list[int]):
"""
flipRowCol(rowColFrom: list[int], rowColTo: list[int]) -> List[int], List[int]
Accepts two List[int] which is the row and col from and to locations
        Returns two List[int] which are the row and col from and to locations but flipped so that white is on top and black is on the bottom
"""
rowColFrom[0] = 7 - rowColFrom[0]
rowColFrom[1] = 7 - rowColFrom[1]
rowColTo[0] = 7 - rowColTo[0]
rowColTo[1] = 7 - rowColTo[1]
return rowColFrom, rowColTo
def flipRowColPos(self, row: int, col: int):
"""
flipRowColPos(row: int, col: int) -> int, int
Accepts two int which is the row and col for a location
        Returns two int which are the row and col for a location but flipped so that white is on top and black is on the bottom
"""
return 7 - row, 7 - col
def get_boardAs2DList(self):
"""
get_boardAs2DList() -> List[int][int]
Returns a 2D list that represents the chess board with white on bottom and black on top for easy use
"""
boardAs2DList = [['.' for i in range(8)] for j in range(8)]
for square in chess.SQUARES:
if self.board.piece_at(square) != None:
boardAs2DList[int((63 - square) / 8)][square % 8] = self.board.piece_at(square).symbol()
return boardAs2DList
def get_boardAs2DListFlipped(self):
"""
get_boardAs2DListFlipped() -> List[int][int]
Returns a 2D list that represents the chess board but flipped so white is on top and black is on bottom for easy use
"""
boardAs2DList = [['.' for i in range(8)] for j in range(8)]
for square in chess.SQUARES:
if self.board.piece_at(square) != None:
boardAs2DList[int((square) / 8)][(63 - square) % 8] = self.board.piece_at(square).symbol()
return boardAs2DList
def get_isWhiteTurn(self):
"""
get_isWhiteTurn() -> bool
Returns a bool where True is if it is white's turn and False if it is black's turn
"""
return self.board.turn
def get_isStartOfGame(self):
"""
get_isStartOfGame() -> bool
Returns a bool where True is if it is the start of the game and False if it isn't
"""
try:
self.board.peek()
except IndexError:
return True
return False | AI-Chess | /AI-Chess-2.0.8.tar.gz/AI-Chess-2.0.8/AIchess/AIchess.py | AIchess.py |

# Prof.Li's Python Education Tools
## Author: 道法自然
## **模块安装:pip install -U python-education-tools**
* Python Education Tools模块为教材配套工具。模块由作者Prof.Luqun Li团队自主开发 。
* 模块简称“pet工具”。取Python Education Tools 首字母的缩写pet(英文意思:宠物),希望该工具成为大家学习的宠物!。
* 所有工具包的根目录是pet。模块安装后对应的安装包为 pet.data.* 、pet.textbook1.*等。
* 今后会根据大家的需求,不断扩充相关教学相关工具和数据集。如果您有好的建议,请发邮件到[email protected]联系我。

`````
** Pet相关模块的使用:**
1.教材配套的案例下载(运行以下1行Python代码):
import pet.textbook1.codes
稍后,即可将教学案例下载到桌面教学案例目录。
2.趣谈编程之道(运行以下1行Python代码):
import pet.this
与“晦涩难懂的”Python编程之禅import this对应,本模块从中国传统文化,心,术,道三个层次,引用古文阐述编程之道。
内容大家自己体会,道法自然,道不简则理不明!!Just for Fun!!!(不笑不足以为道!!)
3.教材相关的样本数据加载:
from pet.data import load_data
df1 = load_data.load_data(数据集名称)
'''
数据集名称如下:
'ip_address.xlsx' -ip地址分类。返回:dataframe
'st.xls' -某高校研究生初试成绩。返回:dataframe
'subway.xlsx' -上海地铁线路数据。返回:dataframe
'ddj.txt' -道德经文本。返回:字符串。
'tyjhzz.txt'-《太乙金华宗旨》。返回:字符串。
'2022pst.xlsx'-2022年优秀毕业论文。返回:dataframe。
'2022tsk.xlsx'-2022年通识课。返回:dataframe。
'beijing_bus.xlsx'-北京公交车信息。返回:dataframe。
今后将陆续增加数据集。
'''
其它数据:
load_data.votes #投票文本数据
load_data.cookies #某cookies数据
4.伪数据集的随机生成函数:
以下2个函数可以分别随机生成Series和DataFrame对象数据。
dffs = load_data.generate_sr(rows=100)
'''
生成一个Series对象,对象内数据条数默认40条,可任意设置。
'''
dfff = load_data.generate_df(rows=200)
'''
生成一个Dataframe对象,对象内数据条数默认40条,可任意设置。shape是(n,13),n为数据条数
'''
`````
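下面是一个简短的使用示例(示意代码,数据集名称取自上面的列表):

```
from pet.data import load_data
df = load_data.load_data('subway.xlsx')   # 加载上海地铁线路数据,返回 DataFrame
print(df.head())
```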
 | AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/README.md | README.md |
* 第1章 绪论 *
***安装本教材配套模块***
```
!pip install -U python-education-tools
```
**Python 官方网站 https://python.org**
试一试:在Python提示符下输入:>>>license() ,可查看Python的历史与版权信息介绍。
```
license()
```
查看一下Python 的排名:[Python 排名](https://www.tiobe.com/tiobe-index/)

【例1.1】使用一行代码,实现九九乘法表的输出。
```
[f'{i}*{j}={i*j}' for i in range(1,10) for j in range(1,10)]
```
[Python之禅(Zen)](https://www.python.org/dev/peps/pep-0020)
```
import pet.that
import this
```
交换两个变量的值的Python代码实现。
用Pythonic风格的写法:
```
a,b=5,6
a,b = b,a
print(a,b)
```
Python第三方库PYPI(https://pypi.org/) [本书配套模块](https://pypi.org/project/python-education-tools/)
MicroPython官方网站 [MicroPython](https://micropython.org/)
[Python官方网站](https://www.python.org)
Python技术标准以PEP文档发布,[PEP](https://www.python.org/dev/peps/)
Python就业 [51job](https://www.51job.com/)
python 技术顶会! [pycon](https://us.pycon.org/2022/)

```
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.01.绪论.ipynb | chapter.01.绪论.ipynb |
```
import sys
sys.stdin.read(10)
sys.stdout.write('hello')
sys.stderr.write('hello')
import sys
old_out = sys.stdout          # 先保存原来的标准输出
f = open('help.txt', 'w')
sys.stdout = f                # 将标准输出重定向到文件
help(sys)                     # help() 的输出被写入 help.txt
sys.stdout = old_out          # 恢复标准输出
f.close()
import contextlib, io
f = open('out/record.txt','w+')
with contextlib.redirect_stdout(f):
import this
f.close()
print(open('out/record.txt').read())
print(contextlib)
```

```
f=open('out/help-math.txt')
print(f.read(100))
f = open("out/my_file.txt", 'w')
f.write("hello world")
#或可以直接将print()函数的输出定向为文件。
print('hello world',file=open('out/ok.txt','w'))
f.close()
try:
f = open('out/help-math.txt')
print(f.read())
except:
print("An exception occurred")
finally:
if f:
f.close()
with open('out/my_file.txt', 'a+', encoding='utf-8') as f:
f.write('test\n\hello world\n')
#文件读操作
with open('out/my_file.txt', 'r', encoding='utf-8') as f:
print(f.readlines())
with open('out/demo.txt', 'w', encoding='utf-8') as f:
f.write("这里演示文件的创建!以及相关函数的使用和功能!!!\n" * 10)
with open('out/demo.txt', 'r', encoding='utf-8') as f:
print(f'文件读位置:{f.tell()=}')
print(f.read(10))
f.seek(6)
print(f'文件读位置::{f.tell()=}')
print(f.read())
with open('out/demo.txt', 'a+', encoding='utf-8') as f:
f.write("hello world!!!\n")
with open('out/demo.txt', encoding='utf-8') as f:
print(f.read())
import shelve
#(1) 保存数据。
with shelve.open('test_shelf') as w: #
w['abc'] = {'age': 10, 'float': 9.5, 'String': 'china'}
w['efg'] = [1, 2, 3]
#(2) 查找数据。
with shelve.open('test_shelf') as r: #
print(r['abc'])
print(r['efg'])
#(3) 删除、插入、更新数据。
with shelve.open('test_shelf', flag='w', writeback=True) as dm:
del dm['abc']
dm['gre'] = [99879, 2, 3]
dm['efg'] = "thi is a test".split()
#(4) 遍历数据。
with shelve.open('test_shelf') as s:
for key, value in s.items():
print(key, value)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dumps(tup1)
def hi(name):
print('hello'+name)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dumps(hi)
print(p1)
print(p2)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dumps(tup1)
def hi(name):
print('hello'+name)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dumps(hi)
print(p1)
print(p2)
t1=pickle.loads(p1)
t2 = pickle.loads(p2)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
def hi(name):
print('hello'+name)
with open ("out/a.txt", 'wb') as f: #打开文件
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dump(tup1,f)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dump(hi,f)
pickle.dump(tup1, f)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
def hi(name):
print('hello'+name)
with open ("out/a.txt", 'wb') as f: #打开文件
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dump(tup1,f)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dump(hi,f)
with open ("out/a.txt", 'rb') as f: #打开文件
t3 = pickle.load(f) #将二进制文件对象转换成 Python 对象
t4=pickle.load(f)
print(t3)
t4(' from China!!')
print(t3)
import json
# Python 对象(字典):
x = {
"name": "zhang",
"age": 33,
"city": "shanghai"
}
# 转换为 JSON:
y = json.dumps(x)
print(type(y))
print(y)
import json
# 一些 JSON:
x = '{ "name":"zhang", "age":33, "city":"shanghai"}'
# 解析 x:
y = json.loads(x)
print(type(y))
print(y)
import json
# Python 对象(字典):
x = {
"name": "zhang",
"age": 33,
"city": "shanghai"
}
with open("record.json","w") as dump_f:
json.dump(x,dump_f)
import json
with open("record.json",'r') as load_f:
load_dict = json.load(load_f)
print(type(load_dict))
print(load_dict)
import os
for dirpath, dirname, files in os.walk('.'):
print(f'Found directory: {dirpath}')
for file_name in files:
print(file_name)
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.07.文件与输入输出.ipynb | chapter.07.文件与输入输出.ipynb |
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
%matplotlib inline
#rc={'figure.dpi':300,'font.sans-serif':'SimHei','axes.unicode_minus':False}
#sns.set(context='notebook', style='whitegrid', rc=rc)
rc={'figure.dpi':800,'font.sans-serif':'SimHei','axes.unicode_minus':False}
sns.set(context='poster', style='whitegrid', rc=rc)
import pandas as pd
students = pd.read_excel('data/st.xlsx')
#查看数据的shape
students.dropna(inplace=True)
students.shape
students.head(2)
students.dtypes
students.describe()
students['id']=students['id'].astype(str)
students['性别']=students['性别'].astype("category")
students['毕业年月'] = pd.to_datetime(students['毕业年月'],format='%Y%m')
students['英语分类']=students['英语分类'].astype("category")
students['数学分类']=students['数学分类'].astype("category")
students['专业课分类']=students['专业课分类'].astype("category")
import datetime
students['复习时间']=datetime.datetime(2019,7,1)-students['毕业年月']
students['复习时间']=students['复习时间'].dt.days
students.to_excel('st1.xlsx')
students.head(2)
students.info()
students.columns
order = ['id', '性别', '本科院校', '本科专业', '毕业年月','复习时间', '报考代码', '英语分类', '英语成绩', '数学分类', '数学成绩', '政治成绩', '专业课分类', '专业课成绩',
'报考院校', '报考专业', '总分' ]
students = students[order]
students.describe()
sns.distplot(students['总分'])
sns.distplot(students['英语成绩']);
sns.distplot(students['数学成绩'],hist=False)
sns.distplot(students['英语成绩'],kde=False,bins=30)
sns.distplot(students['专业课成绩'], hist=False, rug=True)
sns.distplot(students['专业课成绩'], kde=False, fit=stats.gamma)
sns.kdeplot(students['专业课成绩'], shade=True)
sns.kdeplot(students['专业课成绩'])
sns.kdeplot(students['专业课成绩'], bw=.2, label="bw: 0.2")
sns.kdeplot(students['专业课成绩'], bw=1, label="bw: 2")
plt.legend()
sns.kdeplot(students['专业课成绩'], shade=True, cut=5)
sns.kdeplot(students['英语成绩'], students['数学成绩'] )
sns.rugplot(students['专业课成绩'],height=0.3)
sns.jointplot(x="数学成绩", y="总分", data=students)
sns.jointplot(x="数学成绩", y="总分",data=students, kind="hex", color="b")
sns.jointplot(x="数学成绩", y="总分", data=students,kind="reg")
sns.jointplot(x="英语成绩", y="数学成绩", data=students, kind="kde")
sns.relplot(x="复习时间", y="总分", data=students, kind="line")
sns.pairplot(students)
sns.pairplot(students, hue="性别")
sns.pairplot(students, x_vars=["数学成绩", "英语成绩",'政治成绩'], y_vars=["总分"], height=5, aspect=.8)
sns.relplot(x="数学成绩", y="总分", hue="数学分类",sizes=(35, 100),size='复习时间',col='性别',data=students)
sns.relplot(x="数学成绩", y="总分", hue="数学分类",col='性别',data=students, kind="line");#默认对数据排序
sns.relplot(x="数学成绩", y="总分", hue="数学分类",col='性别',data=students,sort=False, kind="line")
sns.relplot(x="数学成绩", y="总分", kind="line", data=students)
sns.relplot(x="英语成绩", y="总分", kind="line", ci=False,data=students)
sns.relplot(x="英语成绩", y="总分", kind="line", ci='sd',data=students)
sns.relplot(x="英语成绩", y="总分", kind="line", estimator=None,data=students)
sns.relplot(x="数学成绩", y="总分", kind="line",hue='性别',data=students)
sns.relplot(x="数学成绩", y="总分", kind="line",hue='性别', style='英语分类', markers=True,data=students)
sns.relplot(x="毕业年月", y="数学成绩", kind="line", data=students)
sns.relplot(x="复习时间", y="英语成绩", kind="line", data=students)
sns.catplot(x="性别", y="总分", data=students)
sns.catplot(x="英语分类", y="总分",jitter=False, data=students)
sns.catplot(y="数学分类", x="总分",kind="swarm", data=students)
sns.catplot(x="性别", y="英语成绩", kind="box", data=students)
sns.catplot(x="性别", y="英语成绩", kind="box", hue='英语分类',data=students,order=['男','女'])
sns.catplot(x="性别", y="英语成绩", kind="boxen", hue='英语分类',data=students,order=['男','女'])
sns.catplot(x="数学分类", y="总分", hue="性别",kind="violin", data=students)
sns.catplot(x="性别", y="总分", hue="数学分类", kind="bar", data=students)
sns.catplot(x="报考专业", y="总分", hue="性别", kind="point", data=students);
plt.tick_params(axis='x',labelsize=8,rotation=90)
sns.catplot(x="数学分类", hue="性别", kind="count", data=students)
sns.pointplot(x="数学分类", y="总分", data=students)
sns.regplot(x="数学成绩", y="总分", data=students)
sns.lmplot(x="数学成绩", y="总分", data=students)
sns.pairplot(students, hue="性别");
sns.pairplot(students)
sns.pairplot(students, x_vars=["数学成绩", "英语成绩",'政治成绩'], y_vars=["总分"], height=5, aspect=.8, kind="reg")
sns.lmplot(x="数学成绩", y="总分", data=students,x_estimator=np.mean)
sns.regplot(x="数学成绩", y="总分", data=students)
sns.lmplot(x="数学成绩", y="总分",robust=True, data=students)
!pip install statsmodels
sns.regplot(x="数学成绩", y="总分", data=students,ci=None,order = 4)
students["英语成绩"]
students["总分"]
sns.lmplot(x="英语成绩", y="总分", data=students,order = 3)
sns.residplot(x="数学成绩", y="总分", data=students)
sns.lmplot(x="数学成绩", y="总分", hue="性别", data=students)
sns.lmplot(x="数学成绩", y="总分", hue="性别", markers=["o", "x"], palette="Set1",data=students)
sns.lmplot(x="数学成绩", y="总分", hue="性别", markers=["o", "x"],col="英语分类", palette="Set1",data=students)
sns.lmplot(x="数学成绩", y="总分", hue="性别", markers=["o", "x"],col="英语分类", palette="Set2",height=4,data=students)
sns.jointplot(x="数学成绩", y="总分", data=students, kind="reg")
flights_long = sns.load_dataset("flights")
flights_long
flights = flights_long.pivot("month", "year", "passengers")
flights
sns.heatmap(flights, annot=True, fmt="d", linewidths=.5)
students
st = students.pivot("本科院校", "报考院校", "总分")
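# 仿照上面 flights 的热图画法(示意:假设该透视成功、没有重复的行列组合)
sns.heatmap(st, annot=True, fmt=".1f", linewidths=.5)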
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/Chapter12.数据可视化ok.ipynb | Chapter12.数据可视化ok.ipynb |
```
s='(A) gre (B) Toefl (C) PETS (D) HK [B]'
import re
result=re.findall('\s*\([A-E]\)\s*(\w*)\s?',s)
result
```
输入0~100任意成绩数据,通过if语句判断成绩的等级。
```
num = int(input('请输入分数:'))
if 0<= num < 60: # 判断值是否在0~60之间
print ('挂科!')
elif 60<=num<70: # 判断值是否在60~70之间
print ('及格')
elif 70<=num<80: # 判断值是否在70~80之间
print ('中')
elif 80<=num<90: # 判断值是否在80~90之间
print ('良好')
elif 80<=num<=100: # 判断值是否在90~100之间
print ('优秀')
else:
print ('输入有误!')
x=float(input('input your mark:'))
result='及格了' if x>=60 else '不及格'
print(result)
m=90
grade=['不及格','及格','中','良','优秀']
line=[m>60,m>=70,m>=80,m>=90]
result=grade[sum(line)]
result
# 用while循环,必须有一控制逻辑条件
count=0
while count<3:
# 循环就是重复执行循环体里面的代码
print(f'Round.{count} test!')
count+=1
cities = ["北京", "上海", "广州", "天津"]
for city in cities:
print(f"我来逛{city}")
else:
print("没有循环数据!")
print("完成循环!")
x = int(input("输入性别(0,1): "))
match x:
case 0:
print('female')
case 1:
print('male')
case _:
print("输入错误")
t = [1.0, 2, (3, 3, 3),(3,9,9),[4, 4, 4], {'5': 'ok', '55': 'good'}, '6', {7, 8, 9}]
for x in t:
match x:
case 1.0:
print(f'匹配类型{type(x)},得到:星期一')
case 2:
print(f'匹配类型{type(x)},得到:星期二')
case (3, _, _):
print(f'匹配类型{type(x)},得到:星期三')
case [4, 4, _]:
print(f'匹配类型{type(x)},得到:星期四')
case {'5': 'ok', '55': 'good'}:
print(f'匹配类型{type(x)},得到:星期五')
case '6':
print(f'匹配类型{type(x)},得到:星期六')
case _:
print("输入错误")
x = int(input("输入(1~7)数值: "))
match x:
case 1 | 2 | 3 | 4 | 5:
print('工作日')
case 6 | 7:
print('周末')
case _:
print("输入错误")
x=[1,2,3,4,5]
match x:
case [1,*other]:
print(f'{other=}')
case _:
print("未匹配")
x={1: "星期一", 2: "星期二",3:'星期三'}
match x:
case {2: "星期二", **other}:
print(f'{other=}')
case _:
print('未匹配')
game = '12345'
for round in game:
if round == '3':
continue #放弃第3局
print(round,end=" ")
#运行结果:1,2,4,5,
#break,跳出循环
for round in game:
if round == '3':
break #从3局就不玩了
print(round,end=" ")
#运行结果:1, 2,
for round in game:
pass
for round in game:
...
for round in game:
"可以使用任意字符串作为占位符"
for round in game:
Ellipsis
num=input('please input the number?')
for i in range(int(num)):
print(i)
for i in range(3):
try:
mark=input(f'{i}.请输入成绩(0~100)?')
print('及格了') if float(mark)>=60 else print('不及格')
except ValueError as arg:
print(f'{arg.args}:输入成绩有误,重来!!')
finally:
print('无论有无异常,我都执行!!')
try:
mark=input('input x:')
if float(mark) < 0 or float(mark) > 100:
raise ValueError('成绩输入异常')
except ValueError as arg:
print(arg.args)
assert 1==1
assert 1==0
mark = input('请输入成绩(0~100)?')
assert 0 <= float(mark) <= 100
print('及格了') if float(mark) >= 60 else print('不及格')
```
Python中可变对象与不可变对象代码测试(通过代码自动判读对象是否可以改变)。
```
a=[1,2,'hello']
b=set('hello')
c=dict(one=11, two=22, three=33)
d=bytearray('hello',encoding='utf-8')
e=(1,2,'hello')
f=range(1,6,2)
g='hello'
h=bytes('hello',encoding='utf-8')
i=memoryview(b'hello')
j=frozenset('hello')
test=[a,b,c,d,e,f,g,h,i,j]
for i in test:
try:
hash(i)
print(f'{type(i)} 是不可变对象')
except:
print(f'{type(i)} 是可变对象')
# 复用上面定义的 test 列表(变量 i 已被前面的循环覆盖,不再重新构造列表)
results={}
for i in test:
try:
hash(i)
results[type(i)]='不可变对象'
print(f'{type(i)} 是不可变对象')
except:
results[type(i)]='可变对象'
print(f'{type(i)} 是可变对象')
print(results)
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.05.程序流控制与异常处理.ipynb | chapter.05.程序流控制与异常处理.ipynb |
chapter.04.Python内置数据类型
```
a=8
print(f'{id(a)=}, {type(a)},{a.bit_length()=}')
var1 = 1688 # 十进制
var2 = 0b11001 # 二进制,0b是二进制的标志符号
var3 = 0o1577 # 八进制,0o是八进制的标志符号
var4 = 0x98ff # 十六进制,0x是十六进制的标志符号
print(f'{var1=},{var2=},{var3=},{var4=}')
```
用内置函数 hex()、oct()、bin(),可以把十进制数字转化为其他进制。如:
```
a=1688
print(f'{a=},{hex(a)=},{oct(a)=},{bin(a)=}')
```
【例4.2】查看int和float数字类型的相关信息。Python内置的sys模块,该模块提供了一些接口,用于访问 所安装Python系统相关信息
```
import sys
print(f'{sys.int_info=}')
print(f'{sys.float_info=}')
```
赋值运算符

```
a=12
print(f'a=12,赋值后-->a={a}')
a+=8
print(f'a+=8,赋值后-->a={a}')
a-=2
print(f'a-=2,赋值后-->a={a}')
a*=2
print(f'a*=2,赋值后-->a={a}')
a/=8
print(f'a/=8,赋值后-->a={a}')
a%=8
print(f'a%=8,赋值后-->a={a}')
a**=2
print(f'a**=2,赋值后-->a={a}')
a//=8
print(f'a//=8,赋值后-->a={a}')
```

```
a,b=' hello ','world'
print(f'{a+b=}')
print(f'{(a+b)*3=}')
import math
dir(math)
math.__doc__
f=[i for i in dir(math) if '__' not in i]
for i in f:
help(eval('math.'+i))
from contextlib import redirect_stdout
with open('out/help-math.txt', 'w') as ft:
with redirect_stdout(ft):
for i in f:
help(eval('math.'+i))
print(*open('out/help-math.txt'))
x,y=10,10
print(f'{x,y=}')
print(f'id(x)==id(y) 测试结果:{id(x)==id(y)}')
print(f'x is y 测试结果:{x is y}')
xx,yy=[1,2,3], [1,2,3]
print(f'{xx,yy=}')
print(f'id(xx)==id(yy) 测试结果:{id(xx)==id(yy)}')
print(f'xx==yy 测试结果:{xx==yy}')
print(f'xx is yy 测试结果:{xx is yy}')
```
异或实现数字加密
```
a,b = 54321, 99888888
# c为加密后的数据:
print(f'加密后数据 a^b {a^b=}')
print(f'解密后数据 (a^b)^b {a^b^b=}')
print(f'验证 {a==a^b^b=}')
isinstance(True,int)
issubclass(bool,int)
x=y=True
(x+y)*(x+y)
bool([' '])
bool([])
bool(' ')
bool(None)
```

```
False or bool('this is a test')
```
随机数生成
```
import random
random.seed(2025)
random.random()
random.randint(0,100)
random.uniform(0,100)
random.uniform(100,0)
random.sample(range(1,33),k=6)
```

```
s=['C','H','I','N','A']
for i in range(len(s)):
print(f's[{i}]={s[i]}')
```
元素切片演示代码。有十张扑克牌,三个玩家依次摸牌,求各位玩家摸到的牌。
```
poke=['A', 'J', '2', 'king', '4', 'queen', '6', 'K', '8', 'Q']
player1= poke[0:10:3]
player2= poke[1:10:3]
player3= poke[2:10:3]
print(f'{player1=}')
print(f'{player2 =}')
print(f'{player3=}')
poke=['A', 'J', '2', 'king', '4', 'queen', '6', 'K', '8', 'Q']
print(f'{poke[::]=}')
print(f'{poke[::2]=}')
print(f'{poke[:8:]=}')
print(f'{poke[::-1]=}')
a=[print,len]
a[0]('hello world')
a[1]('hello world')
a=list(range(6))
print(f'{a=}')
a[::2]=[99,88,77]
print(f'{a=}')
print(f'{a[::2]=}')
a.sort()
print(f'排序后 {a=}')
a.reverse()
print(f'逆排序 {a=}')
list(range(10))
list(range(1, 11))
list(range(0, 10, 3))
list(range(0, -10, -1))
range(33)[22]
demo=([1,2],3,4,5)
demo[0].append(99)
demo
```
>>> ord('A')
65
>>> ord('中')
20013
>>> chr(66)
'B'
>>> chr(25991)
'文'
>>> '\u4e2d\u6587'
'中文'
```
s,t='hello','hello'
print(f'id(s)={id(s)}')
print(f'id(t)={id(t)}')
print(f's is t: {s is t}')
s=s+ ' world'
print(f'id(s)={id(s)}')
s='道可道,非常道。名可名,非常名。'
s.count('道')
type(s.encode())
s.startswith(('道','可','常'))
t=str.maketrans('0123456789元', '零一二三四伍陆柒捌玖圆','上海师范大学')
t
'上海师范大学桂林路100号,邮编:200234'.translate(t)
table = str.maketrans( {'大':'宏伟壮观'})
'大山'.translate(table)
book_info = '#2022年虎年吉祥: James Gosling#'
book_info.strip('#').upper().replace(':',' by ')
import random,string
t=string.printable
''.join(random.choices(t,k=8))
bytes('上海师范大学','utf-8')
'上海师范大学'.encode()
bytearray('hello',encoding='utf-8')
bytearray('中国',encoding='utf-8')
bytearray(b'\xe4\xb8\xad\xe5\x9b\xbd')
bytearray('中国',encoding='utf-8').decode('utf-8')
memoryview('上海师范大学'.encode('utf-8')).hex()
import random
red_balls,blue_balls=list(range(1,34)),list(range(1,17))
random.shuffle(red_balls)
random.shuffle(blue_balls)
print(f'红球:{red_balls=} ')
print(f'篮球:{blue_balls=}')
#投注红色球6个数字
myRedBallChoice=random.sample(red_balls,k=6)
#投注篮色球1个数字
myBlueBallChoice=random.sample(blue_balls,k=1)
print(f'投注红球:{myRedBallChoice=} 投注篮球:{myBlueBallChoice=}')
#开奖6个红球数字
kjRedBallChoice=random.sample(red_balls,k=6)
#开奖1个蓝色球数字
kjBlueBallChoice=random.sample(blue_balls,k=1)
print(f'开奖红球:{kjRedBallChoice=} 开奖篮球:{kjBlueBallChoice=}')
#检查 红球、篮球中奖情况(利用集合求交集)
judgeRedBall=set(myRedBallChoice)&set(kjRedBallChoice)
judgeBlueBall=set(myBlueBallChoice)&set(kjBlueBallChoice)
print(f'中奖红球:{judgeRedBall=} 中{len(judgeBlueBall)}红球,中奖篮球:{judgeBlueBall},中{len(judgeBlueBall)}个篮球')
s1,s2='hello','world'
d1=dict(zip(range(len(s1)),list(s1)))
d2=dict(zip(range(len(s2)),list(s2)))
print(d1)
print(d2)
d1.update(d2)
print(d1)
currentEmployee = {1: 'a', 2: "b", 3:"c"}
formerEmployee = {2: 'd', 4: "e"}
{**currentEmployee,**formerEmployee}
d1=dict(a=1,b=3,c=4)
d2=dict(a=3,b=9,e=33,f=100)
keySet=d1.keys()|d2.keys()
dic=dict()
for key in keySet:
dic[key]=d1.get(key,0)+d2.get(key,0)
print(dic)
d = {'a': 1, 'b': 2}     # 不要用 dict 作变量名,以免覆盖内置类型
d["c"] = d.pop("a")      # 将键 'a' 改名为 'c'
d
from itertools import chain
a=list('hello world')
b=range(10)
[i for i in chain(a,b)]
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.04.Python内置数据类型.ipynb | chapter.04.Python内置数据类型.ipynb |

# Prof.Li's Python Education Tools
## Author: 道法自然
## **模块安装:pip install -U python-education-tools**
* Python Education Tools模块为教材配套工具。模块由作者Prof.Luqun Li团队自主开发 。
* 模块简称“pet工具”。取Python Education Tools 首字母的缩写pet(英文意思:宠物),希望该工具成为大家学习的宠物!。
* 所有工具包的根目录是pet。模块安装后对应的安装包为 pet.data.* 、pet.textbook1.*等。
* 今后会根据大家的需求,不断扩充相关教学相关工具和数据集。如果您有好的建议,请发邮件到[email protected]联系我。

`````
** Pet相关模块的使用:**
1.教材配套的案例下载(运行以下1行Python代码):
import pet.textbook1.codes
稍后,即可将教学案例下载到桌面教学案例目录。
2.趣谈编程之道(运行以下1行Python代码):
import pet.this
与“晦涩难懂的”Python编程之禅import this对应,本模块从中国传统文化,心,术,道三个层次,引用古文阐述编程之道。
内容大家自己体会,道法自然,道不简则理不明!!Just for Fun!!!(不笑不足以为道!!)
3.教材相关的样本数据加载:
from pet.data import load_data
df1 = load_data.load_data(数据集名称)
'''
数据集名称如下:
'ip_address.xlsx' -ip地址分类。返回:dataframe
'st.xls' -某高校研究生初试成绩。返回:dataframe
'subway.xlsx' -上海地铁线路数据。返回:dataframe
'ddj.txt' -道德经文本。返回:字符串。
'tyjhzz.txt'-《太乙金华宗旨》。返回:字符串。
'2022pst.xlsx'-2022年优秀毕业论文。返回:dataframe。
'2022tsk.xlsx'-2022年通识课。返回:dataframe。
'beijing_bus.xlsx'-北京公交车信息。返回:dataframe。
今后将陆续增加数据集。
'''
其它数据:
load_data.votes #投票文本数据
load_data.cookies #某cookies数据
4.伪数据集的随机生成函数:
以下2个函数可以分别随机生成Series和DataFrame对象数据。
dffs = load_data.generate_sr(rows=100)
'''
生成一个Series对象,对象内数据条数默认40条,可任意设置。
'''
dfff = load_data.generate_df(rows=200)
'''
生成一个Dataframe对象,对象内数据条数默认40条,可任意设置。shape是(n,13),n为数据条数
'''
`````
 | AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/README.md | README.md |
```
def func_demo(r):
'''
此函数输入参数为圆的半径r,返回值为圆的周长和面积
'''
pi = 3.14
p,a = 2 * pi * r,pi * r * r
return p, a
r = 9
print(help(func_demo))
print(func_demo.__doc__)
def print_info(name, a, b):
print(name+'-'*30)
print(f'{id(a)=},{a=};{id(b)=},{b=}' )
def demo(a, b):
a = a + 20
b.append(99)
print_info('函数体内',a, b)
if __name__ == '__main__':
a, *b = list(range(1, 6))
print_info('初始状态:',a, b)
demo(a, b)
print_info('结束状态:', a, b)
def repeat(msg, times=1):
print(msg * times)
repeat('Hello')
repeat('World ', 5)
def fun(x, y, z=1):
print(x, y, z)
fun( 1,y=2,z=3)
fun(z=3,y=2,x=1)
def fun(x,*xx,y=21,z=2, **yy):
print(locals())
fun(1,2,4, z=88, y=99, k1=0,g=33)
def func(x):
print(f'{x=}')
x = 2
print(f'函数体内:{id(x)=}')
print(f'局部变量x: {x=}')
if __name__=='__main__':
x = 50
print(f'调用前:{id(x)=}')
func(x)
print(f'调用{func.__name__}后:{id(x)=}')
print(f'x的值: {x=}')
def func(x):
global gx
print(f'{x=}')
gx = 2
print(f'函数体内:{id(gx)=}, {id(x)=}')
print(f'局部变量x: {x=}')
if __name__=='__main__':
gx = 50
print(f'调用前:{id(gx)=}')
func(gx)
print(f'调用{func.__name__}后:{id(gx)=}')
print(f'x的值: {gx=}')
def inspect():
lx = 300
ly = 400
locals()['l'] = 'l-自设局部变量'
print(f'函数体内:{locals()=},总共有{len(locals())}个局部变量')
print(f'函数体内:{globals()=},总共有:{len(globals())}全局变量')
print(f'函数体内:{vars()=},总共有:{len(vars())}个变量')
if __name__ == '__main__':
x, y = 100, 200
inspect()
globals()['g'] = 'g-自设全局变量'
vars()['v'] = 'v-自设变量'
locals()['gl'] = 'gl-自设局部变量'
print(f'{locals()=},总共有{len(locals())}个局部变量')
print(f'{globals()=},总共有:{len(globals())}全局变量')
print(f'{vars()=},总共有:{len(vars())}个变量')
print(f'测试globals()与vars()是否相同:{globals() == vars()}')
print(f'测试globals()与locals()是否相同:{globals() == locals()}')
def outer():
x = 119
def inter():
x = 114
inter()
print(x)
outer()
def outer():
x = 119
def inter():
nonlocal x
x = 114
inter()
print(x)
outer()
x = 120
def outer():
# x = 119
def inter():
nonlocal x
x = 114
inter()
print(x)
outer()
x,y=2,3
f=lambda x,y:x*x+y*y
g=lambda : 39
print(f(x,y))
print(g())
a= (x*x for x in range(10) if 3<x<6)
b=[x*x for x in range(10) if x%2==0]
c={x*x for x in range(10) if x%3==0 }
print(*a,b,c)
dic = {'a':'b','c':'d'}
dic1 = {'hello '+dic[i]:i+' world' for i in dic}
print(dic1)
print([k+v for (k,v) in dic.items()])
names = [["张三","李四","王五","赵六"],
["张无忌","李世民","李鸿章","赵匡胤"]]
la = lambda x: [n for f in names for n in f if x in n]
print(la('李'), la('赵'))
x=filter(lambda x: x>'c','bcdef')
z=filter(lambda x: 5< x<10,range(1,100))
show=lambda x,t: print(f'{type(x)=},{x=},{t(x)=}')
show(x,list)
show(z,tuple)
ps,qm=[30,89,90],[69,59,89]
zp=map(lambda x,y:0.3*x+0.7*y,ps,qm)
print(f'{type(zp)=},{zp=},{list(zp)}')
from functools import reduce
def add(x,y):
return x+y
print (reduce(add,[1,2,3,4,5]))
print (reduce(add,(1,2,3,4,5),100))
print (reduce(add,{1,2,3,4,5},-10))
print(reduce(lambda x,y:x+y,range(1,100)))
print(reduce(add,[[2,3],[5,9],[3,4,5]],[]))
import time
#计算时间函数
def show_run_time(func):
def wrapper(*args, **kw):
start_time = time.time()
print(locals())
func(*args, **kw)
print(f'函数名: {func.__name__} 函数参数:{args=},{kw=} 运行时间: {time.time() - start_time}')
return wrapper
@show_run_time
def demo_func(n,name):
for i in range(n):
pass
if __name__=='__main__':
demo_func(900000,name='装饰器demo')
import random
def walk():
start = 0
def step():
end=random.randint(1,100)
nonlocal start
print(f'I walk from {start=} to {end=}')
start = end
return step
t=walk()
t()
t()
t()
def outfunc(x):
y = 10
def infunc(z):
c = x + y + z
return c
return infunc
a = 90
print(f'{a=},{outfunc(a)=}')
print(f'{outfunc(a).__name__=}')
print(f'{outfunc(a)(5)=}')
print(f'{outfunc(a)(6)=}')
def func():
name = '计数器' # 常驻内存 防止其他程序改变这个变量
counter = 0
def inner():
nonlocal counter
counter += 1
print(f'{name} :No.{counter}')
# 内层函数调用外层函数的变量,叫闭包,可以让一个局部变量常驻内存
return inner
ret = func()
ret()
ret()
ret()
print(f'{ret.__closure__=}')
class MyIterator:
def __init__(self, start, end):
self.value = start
self.end = end
def __iter__(self):
return self
def __next__(self):
if self.value >= self.end:
raise StopIteration
current = self.value
self.value += 1
return current
it1=MyIterator(1,100)
for num in it1:
print(num)
from collections.abc import Iterable, Iterator
lst = list(range(5))
a,b=isinstance(lst, Iterable), isinstance(lst, Iterator)
#是可迭代对象, 但不是迭代器
print(f'{lst=} Iterable:{a=}, Itertor:{b=}')
iterator = iter(lst) # 返回可迭代对象 lst 的迭代器
aa,bb=isinstance(iterator, Iterable), isinstance(iterator, Iterator)
# 既是可迭代对象, 也是迭代器
print(f'{iterator= } --> Iterable:{aa=}, Itertor:{bb=}')
def gen(content:str):
for i in content.split():
yield i
if __name__=='__main__':
for i in gen('this is only a test for generator!!'):
print(i)
import math
data = list(range(1, 40))
def func(data,epoch=5):
batchSize = math.ceil(len(data) / epoch)
j = 0
for i in range(1, epoch + 1):
content = data[j:i * batchSize]
j = i * batchSize
yield content
g = func(data) # 获取生成器
for i in g:
print(i)
import math
def func(data,epoch=5):
batchSize = math.ceil(len(data) / epoch)
j = 0
for i in range(1, epoch + 1):
content = data[j:i * batchSize]
j = i * batchSize
call_id=yield content
print(f'{call_id=}, got {content=} ')
g = func(list(range(1,30)),8)
print(f'{g.__next__()=}')
ret2 = g.send("欧阳锋")
ret3 = g.send("洪七公")
ret4 = g.send("黄药师")
print(f'{g.__next__()=}')
print(f'{g.__next__()=}')
x = 1
eval('x+1')
eval('[2,3,4] [1]')
x=100
exec('x=x+12')
print(x)
exec('print(dir())')
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.06.函数及其高级应用.ipynb | chapter.06.函数及其高级应用.ipynb |
使用正则表达式做字符串模式验证。如:银行只允许对信用卡使用6位数字做密码,若使用含有非数字或超过或不足6位数字做密码,则显示密码无效。
```
import re
pwd=input('请输入密码(6位数字):')
result = re.match("^\d{6}$",pwd)
print('密码无效!') if result==None else print('密码合法!')
import re
s='''
Select the exam:
(A)GRE (B)TOEFL (C) IELTS (D)PETS
'''
re.findall('\s?\([A-D]\)\s*(\w*)\s?',s)
import re
s='''
Select the exam:
(A)GRE (B)TOEFL (C) IELTS (D)PETS
'''
re.split('\s?\([A-D]\)\s*',s)[1:]
```
Extract specific information from a string, e.g. find the precipitation figures in a regional weather-forecast text.
Approach: first define the regular expression for precipitation, '\d+~\d+毫米'; then apply it with the re module to locate the matching substrings.
```
import re
s='预计,7月22日08时至23日08时,陕西南部、河南中东部、山东中南部、安徽北部、江苏北部、湖北西部、重庆北部等地部分地区有大到暴雨(60~90毫米),' \
'其中,山东中部等地局地有大暴雨(100~140毫米)。'
p=re.compile(r'\d+~\d+毫米')
print(f'{p.search(s)=}')
print(f'{p.findall(s)}')
for i in p.finditer(s):
print('\t',i)
```
Demonstrate splitting a string with a regular expression, e.g. split on whitespace, colons, commas and the Chinese enumeration comma at the same time with the pattern '[\s:,、]'.
```
content='''原始数据 来源:单位,公交集团,数据描述:线路名称、方向、站点序号、站点名称'''
import re
p=re.compile('[\s:,、]')
d=p.split(content)
print(d,len(d))
```
A string holds Xiao Ming's exam marks, with the total still to be filled in.
Given the string '小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}',
use regular expressions to compute the total and write it into the string.
Two patterns are needed: one extracts the individual marks, the other finds the placeholder "{}" for the total, which is then replaced with the computed value.
```
content='小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}'
import re
p=re.compile(r'\d+\.?\d')
marks=map(float,p.findall(content))
total=sum(marks)
pp=re.compile(r'\{\}')
pp.sub(str(total),content)
```
The following example shows how to use flags: when searching for a character you can, for instance, ignore case.
```
import re
result = re.findall("c","ICCC")
print(result)
import re
result = re.findall("c","ICCC",re.I)
print(result)
```
Examples with and without Unicode-aware matching:
```
import re
#匹配任意字符含Unicode字符
target_str = "中国 China 世界 are friends"
# 不使用re.A or re.ASCII
result = re.findall(r"\b\w{2,}\b", target_str)
print(result)
# 只匹配 re.A or re.ASCII
result = re.findall(r"\b\w{2,}\b", target_str, re.A)
print(result)
```
Demonstrate how a Match object is used to work with a search result.
```
import re
string = "Shnu is guilin road.100, and postcode is 200234"
p=re.compile(r"(\d+).+ (\d+)")
match = p.search( string)
print(match.expand(r"shnu 在桂林路: \1 and 邮编 : \2"))
```
Appending "?" to a quantifier makes the match non-greedy.
```
import re
re.findall('a{2,3}','a aa aaa aaaa')
import re
re.findall('a{2,3}?','a aa aaa aaaa')
```
Filter the punctuation out of a string.
```
import re
s = "str中333_国。人们、\ing,、. With. Pun~ctuation?"
# 如果空白符也需要过滤,使用 r'[^\w]'
s = re.sub(r'[^\w\s]','',s)
print(s)
```
To use a Pattern object, first compile the regular expression, then use the compiled object to match, search, split or substitute strings. For example:
```
import re
string = "Shnu is guilin road.100, and postcode is 200234"
p=re.compile(r"(\d+).+ (\d+)")
match = p.search( string)
print(match.expand(r"shnu 在桂林路: \1 and 邮编 : \2"))
```
With the module-level functions of re there is no separate compilation step: the two steps are merged, and the pattern together with its arguments is passed directly to the function. The code above can be rewritten as:
```
import re
string = "Shnu is guilin road.100, and postcode is 200234"
match = re.search(r"(\d+).+ (\d+)", string)
print(match.expand(r"shnu 在桂林路: \1 and 邮编 : \2"))
```
Use the re module functions for the following task. Given the string '小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}', where the total is still to be filled in, compute Xiao Ming's total with regular expressions and write it into the string.
Two patterns are needed: '\d+\.?\d' extracts the individual marks, and '\{\}' locates the placeholder {} where the total is written.
```
content='小明的考试成绩,语文:65.5 数学:72 英语:83.5 总分:{}'
import re
marks=map(float,re.findall(r'\d+\.?\d',content))
total=sum(marks)
re.sub(r'\{\}',str(total),content)
```
Extract specific information from messy text. Below is a chain-style voting record from a residential-complex WeChat group; use regular expressions to extract the vote number, the owner's house number and the vote itself. The data are as follows (only part of the record is shown because it is long):
```
votes= '''
#接龙
投票接龙
1. 房号+姓名+反对/赞成/弃权
2. 100 神仆 赞成
3. 184朱良 赞成
4. 118号 反对
5. 97号 弃权
6. 62号(不能退钱就赞成,可以退钱就算了,不想烦)
7. 174号 赞成
8. 86-海鱼 反对(1来历尴尬;2过于破旧,维修维护成本未知,建议及时止损。如果无法退款,已花的费用众筹算我一份)
9. 223 九凤 赞同
10. 126一郑桂华 赞同
11. 247 大卫林 赞同
12. 128号孙伟 弃权(照顾个别业主,可以放到不显眼处)
13. 禾亮188 赞同
14. 168茅 赞同
15. 229 亚梅 赞同
16. 109-21赞同
17. 233林 赞同 (为了照顾少数人位置重新协商)
18. 129号 赞同
19. 136号 赞成
20. Xing 31号 赞同 希望小区越来越好,支持所有正能量的行为!
21. 120号 赞成(位置为照顾个别人想法,可以协商)
22. 42号ringing 反对,和小区建筑风格不符
23. 245号 赞成
24. 83小宝 反对
25. 3号 反对
26. 242 赞成、英雄不问出处,正能压邪!
27. 瑞华1号 赞成
28. 108-301 赞同
29. 227赞成
30. 224严,赞同!墓区边的房子都买了,还怕这个!就算从风水讲,墓区的东西面还是好风水。原先比当今小区还要乱的时候,就有热心的业主捐了五六块镜子,放在转角处,改善小区道路行车安全,经过几届业委会和全体正常交物业管理费业主的共同努力,小区面貌已有较大的改善,愿意为小区建设奉献的行为理应得到鼓励和支持!
31. 青青翠竹 赞同
32. 青青翠竹 赞同88号 南赞同
33. 南88 赞同
34. 78-安妮 弃权(既然已经来了后续协商更新外观或者位置就行)
35. 139-常 赞同
36. 143徐 赞同
37. 157号 赞同
38. 19-rongying 反对,和小区风格不搭
39. 106- 赞同 喜欢马车 无论来自哪里都喜欢
40. 62号叶师傅 赞同
41. 241~赵永 弃权(出发点是好的,但随意性强,没有遵循小区基本的议事规则,没有事先征询大多数业主意见。)
42. 127-凌耀初 赞同!(由于马儿和马车锈烂严重,希望好好修补。另,来历也确实是有点尴尬,建议修复时颜色重新考虑)。通过这件事情如能形成小区的议事规则,如能形成网络投票的新机制,那将大大提高业主大会和业委会的决策效率,那是一件大好事!我们小区急需做的大事还有不少~
43. 108-402陈 弃权(不论结果怎么样,至少体现了办事透明度和业主参与度,是好事。)
44. 110-401可可 赞成(本来就是业委会牵头做的事情,也是为了改善小区环境,如果每样小事都需要全体业主投票,业主们就太累了)
45. 72号 赞同
46. 76号 赞同
47. 华爷140 弃权
48. 74号陆 赞同
49. 185-麻辣面 弃权
50. 202号王焱 赞成
51. 61-芊茉 赞同
52. 151田 赞同
53. 21-夏 赞同
54. 117 赞同
55. 9号 弃权 虽然参加了众筹,但是的确不知道还有那么多邻居没有进新群,不知道众筹这个事;虽然初心是为了美丽家园做出贡献,但的确不知道青博馆大门开在海湾园内;虽然放在海湾园里的东西肯定不会全是祭品(比如园区办公室的办公用品、摆设等等),但他的确是海湾园里出来的;虽然我不信邪,但的确有人会觉得这个晦气。
56. 115-402 赞同 心中为阳处处阳,心中为阴处处阴,心灵纯洁一点就不会有那么多的事情了
57. 静80 反对放在大门口,可以改个地方放吗?听说是海湾园里出来的的确会让人觉得晦气。
58. 艺嘉 赞同
59. 114-402 赞同
60. 219号戴 赞同。
61. 8-陈 赞同(既来之则安之)
62. 172杰 赞同(是饰品非祭品)
63. 148号艺嘉 赞成
64. 152CQ 赞成
65. 211号 赞成
66. 10-嘟嘟爸 赞成
67. 135 反对。这种材质注定了保养翻新不会只有一次,这一次大家众筹了那么下次呢?如果不翻新,那么一到小区门口就会感到这个小区的破败,如果翻新,那么钱从哪里出?因为不赞同,所以后续费用也不愿意承担。桃花岛上的亭子想要翻新我看大家都想选一劳永逸的材质,为什么在小区门口要放一个需要反复翻新的?
68. 178-冰姐 赞成,小区要做成一件事太难了
69. 217 赞同
70. 15洪虹 弃权
71. 55号 赞成
认知的差异性产生了多样性的思想碰撞现象,我思故我在
72. 105号301 赞成
73. 84-wang 弃权
'''
import re
import pandas as pd
from pet.data import generator
votes=generator.votes
votes=re.sub('赞同', '赞成', votes)
results=re.findall('(\d+)\.\s\D{,6}\s*(\d+[--号]?\d*).*(反对|赞成|弃权|赞同)+', votes, re.MULTILINE)
print(results)
print(len(results))
df = pd.DataFrame(results, columns=['序号','门牌号', '投票'])
with pd.ExcelWriter('小区投票与统计.xlsx') as writer:
df.to_excel(writer, sheet_name='投票结果')
df['投票'].value_counts().to_excel(writer,sheet_name='统计结果')
```
Demonstration of jieba's three Chinese word-segmentation modes.
```
import jieba
content="老王在阳光海岸小区写信用卡消费记录。"
seg_list = jieba.cut(content, cut_all=False)
print(f"精准模式(默认): " + "/".join(seg_list))
seg_list = jieba.cut(content, cut_all=True)
print("全模式: " + "/ ".join(seg_list))
seg_list = jieba.cut_for_search(content)
print("搜索引擎模式: " + "/ ".join(seg_list))
```
To use a user dictionary, load it before segmentation with jieba.load_userdict("userdict1.txt"), or add individual words on the fly with jieba.add_word('阳光海岸'). A user-defined word can be removed dynamically with jieba.del_word(word).
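A minimal sketch of the two approaches that are only mentioned above (loading a dictionary file and deleting a word); the file name userdict1.txt is an assumed example, so that call is left commented out:
```
import jieba
# jieba.load_userdict("userdict1.txt")   # assumed file, one word per line
jieba.add_word('阳光海岸')                 # register the word dynamically
print("/".join(jieba.cut('老王在阳光海岸小区写信用卡消费记录。')))
jieba.del_word('阳光海岸')                 # remove it again
print("/".join(jieba.cut('老王在阳光海岸小区写信用卡消费记录。')))
```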
```
import jieba
content="老王在阳光海岸小区写信用卡消费记录。"
jieba.add_word('阳光海岸')
seg_list = jieba.cut(content, cut_all=False)
print(f"精准模式(默认): " + "/".join(seg_list))
seg_list = jieba.cut(content, cut_all=True)
print("全模式: " + "/ ".join(seg_list))
seg_list = jieba.cut_for_search(content)
print("搜索引擎模式: " + "/ ".join(seg_list))
t='''
教育部关于2020年春季学期延期开学的通知
经研究决定,2020年春季学期延期开学,具体通知如下。
一、部属各高等学校适当推迟2020年春季学期开学时间,具体开学时间与当地高校开学时间保持一致,并报教育部备案。春节返乡学生未经学校批准不要提前返校。其他中央部门所属高校可参照执行。
二、地方所属院校、中小学校、幼儿园等学校春季学期开学时间,由当地教育行政部门按照地方党委和政府统一部署确定。
三、各类学校要加强寒假期间对学生学习、生活的指导,要求在家不外出、不聚会、不举办和参加集中性活动。对寒假在校和自行返校的学生,要切实做好疫情防控工作。要做好开学后疫情防控工作预案,建立师生流动台账,明确防控工作要求,加大环境卫生整治力度,全面做好疫情防控工作。'''
import jieba.analyse
jieba.analyse.extract_tags(t, topK=5, withWeight=True, allowPOS=())
```
Generate a word-cloud image directly from the raw string.
```
import wordcloud
w=wordcloud.WordCloud(width=800,
height=600,
background_color='white',
font_path='msyh.ttc')
content="""
新冠肺炎疫情正在美国各州蔓延,美国总统特朗普期望美国经济能够在复活节到来前得到“重启”。这一言论受到了民主党总统候选人、前副总统拜登的批评,拜登在接受采访时对特朗普的言论评价道:“如果你想长期破坏经济,那就让这(疫情)再度暴发吧。我们现在甚至还没有减缓疫情增长的趋势,听到总统这样说真是令人失望。他还是不要再说话了,多听专家的意见吧。”拜登还调侃道:“如果可能的话,我还想明天就进政府当上总统呢。”
拜登指出,目前美国疫情形势加重是因为“在应该响应的时候没有做出行动”,并呼吁特朗普把民众的健康作为工作重心。同时,拜登还建议特朗普政府多遵循国家过敏症与传染病研究所主任福西等医疗专家的建议,让民众保持社交距离,并且为控制疫情做好充分工作。
据美国约翰斯·霍普金斯大学数据显示,截至北京时间2020年3月25日12时30分左右,美国累计确诊新冠肺炎病例55222例,累计死亡797例。。
"""
w.generate(content)
w.to_file('result.jpg')
```

Segment the Chinese string with jieba first, then generate the word cloud.
```
import jieba
txtlist = jieba.lcut(content)
string = " ".join(txtlist)
w.generate(string)
# 将词云图片导出到当前文件夹
w.to_file('result-jieba.jpg')
```

```
import imageio
mk = imageio.imread("usa.jpg")
w= wordcloud.WordCloud(width=600,height=300,
background_color='white',font_path='msyh.ttc',
contour_width=1,contour_color='steelblue',mask=mk)
w.generate(string)
w.to_file('result-jieba-usa.jpg')
```

Use SnowNLP to extract keywords and a summary from a Chinese text.
```
t='''
教育部关于2020年春季学期延期开学的通知
经研究决定,2020年春季学期延期开学,具体通知如下。
一、部属各高等学校适当推迟2020年春季学期开学时间,具体开学时间与当地高校开学时间保持一致,并报教育部备案。春节返乡学生未经学校批准不要提前返校。其他中央部门所属高校可参照执行。
二、地方所属院校、中小学校、幼儿园等学校春季学期开学时间,由当地教育行政部门按照地方党委和政府统一部署确定。
三、各类学校要加强寒假期间对学生学习、生活的指导,要求在家不外出、不聚会、不举办和参加集中性活动。对寒假在校和自行返校的学生,要切实做好疫情防控工作。要做好开学后疫情防控工作预案,建立师生流动台账,明确防控工作要求,加大环境卫生整治力度,全面做好疫情防控工作。'''
import snownlp
s = snownlp.SnowNLP(t)
print('关键词',s.keywords(limit=5))
print('摘要',s.summary(limit=2))
```
Use SnowNLP to run sentiment analysis on a Chinese string.
```
import snownlp
word = snownlp.SnowNLP('为中华崛起而读书!!')
print(word.tf)
print(word.pinyin)
print(word.keywords())
feeling = word.sentiments
print(feeling)
```
Retrieve the Wi-Fi passwords saved on the local machine (Windows, via netsh).
```
import subprocess
import re
import locale
lcode = locale.getpreferredencoding()
def get_wifi_password_by_profile():
s = subprocess.check_output(['netsh', 'wlan', 'show', 'profile']).decode(lcode)
wifi_ssid=re.findall(':\s(.+)\r',s)
print(wifi_ssid)
info = {}
for i in wifi_ssid:
profile_info = subprocess.check_output(
['netsh', 'wlan', 'show', 'profile', i, 'key=clear']).decode(lcode)
        pwd = re.findall(r'(?:关键内容|Content)\s+:\s(\w+)', profile_info)
info[i] = pwd
return info
if __name__ == '__main__':
d = get_wifi_password_by_profile()
print(d)
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.09.文本数据处理与分析.ipynb | chapter.09.文本数据处理与分析.ipynb |
***Install the companion module for this textbook***
```
!pip install -U python-education-tools
```
A sample Python program
```
# -*- coding: utf-8 -*-
"""
Created on Tue Feb 25 09:07:47 2020
@author: Administrator
"""
import datetime
# 获取当前时间
t=datetime.datetime.now()
def hi(name):
"""
Parameters
----------
f : list
files and directories list
Returns
-------
flpy : list
list only contains python files.
"""
return 'hello:'+ str(name)
if __name__ == "__main__":
print(t)
print(hi('James'))
# 第一个注释
print ("Hello world!, Python!")
!pip install -U pyttsx3
import pyttsx3
engine = pyttsx3.init()
engine.say('欢迎进入Python世界!!欢迎使用 Python Education Tools')
engine.runAndWait()
```
Inspect the __doc__ attribute
```
help(hi)
print(hi.__doc__)
if True:
print ("True")
else:
print ("False")
if True:
print ("Answer")
print ("True")
else:
print ("Answer")
print ("False") # 缩进不一致,会导致运行错误
```
How to split one long line of Python code across several lines.
```
total ="A33"\
"B44"\
"C55"
print(total)
#在 [], {}, 或 () 中的多行语句,不需要使用反斜杠(\),例如:
total = ['item_one', 'item_two', 'item_three',
'item_four', 'item_five']
```

```
import keyword
print(keyword.kwlist) #关键字列表
print(len(keyword.kwlist)) #统计关键字的个数
const=[False,True,None,NotImplemented,Ellipsis,__debug__]
for i in const:
print(f'value is {i}, type is {type(i)}')
```

[Built-in functions](https://docs.python.org/3.9/library/functions.html)
```
dir()
x=100
dir()
```
Namespaces after a function has been defined
```
def s(length):
    x=300
    y=400
    print(dir())       # names in the local scope of s
    print(locals())    # the local namespace as a dict
s(5)
print(vars() is globals())   # at module level, vars() returns the global namespace
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.03.Python的基本概念.ipynb | chapter.03.Python的基本概念.ipynb |
```
!pip install -U python-education-tools
from pet.data import generator
df=generator.load('st')
df
import pandas as pd
dt=pd.read_excel('dtypes.xlsx')
dt
dt.info(verbose=True)
```
The constructor of the Series class is:
Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)
The copy and fastpath parameters rarely need to be set.
A Series is usually built with the following four parameters:
Series(data, index, dtype, name)
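A minimal sketch that passes all four of these parameters explicitly (the values are made up for illustration):
```
import pandas as pd
s = pd.Series(data=[90, 85, 77],
              index=['语文', '数学', '英语'],
              dtype='float64',
              name='成绩')
print(s)
```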
Creating Series objects
```
import pandas as pd
data=[5, 3.3, ['o','k'], ('o','k'), {"name": "nick", "age": 12}, print]
for i in data:
print(f'pd.Series({i})得到:\n',pd.Series(i))
pd.Series(data)
import pandas as pd
s=pd.Series([1,3,4],['a','b','a'],name="Demo")
print(s)
print(f'{s.name=},{s.dtypes=}')
print(f'{s.shape=},{s.ndim=}')
print(f'{s.size=},{s.index=}')
print(f'{s.values=},{s.hasnans=}')
import pandas as pd
s=pd.Series(list('hello'))
print(s)
s.to_excel('s.xlsx',index=None)
# 读入s.xlsx 文件,得到s1对象
s1=pd.read_excel('s.xlsx')
print(s1)
print(s.to_list())
print(s.to_json())
print(s.to_dict())
from pet.data import generator
df=generator.generate_df()
df.head(3)
print(f'{df.shape=},{df.ndim=}')
print(f'{df.size=},{df.index=}')
print(f'{df.columns=}')
print(f'{df.dtypes=}')
print(f'{df.values=}')
df.iloc[1:3]
df.iloc[:,1:6].head(3)
df.iloc[1:,1:].head(3)
df.loc[1:,'姓名':'专业课'].head(3)
[i for i in df]
[i for i in df.keys()]
[i for i in df]==[i for i in df.keys()]
for col, ser in df.head(3).items():
    print(type(col),type(ser))
{col:ser for col, ser in df.head(3).items()}
for i,j in df.iterrows():
print(type(i),type(j))
print('*'*33)
for i in df.head(3).itertuples():
print(i)
df.loc[1,:]=[220151099,'赵六','男','华东理工','化学工程',84,92,63,66,88,99,39,123]
df.head(3)
df
df.loc[len(df),:]=[220151099,'赵六','男','华东理工','化学工程',84,92,63,66,88,99,39,123]
df
df.loc[len(df)+3,:]=[220151099,'赵六','男','华东理工','化学工程',84,92,63,66,88,99,39,123]
df
df.drop(df.index[38:])
df
df.drop([1,3])
df.drop(df.index[1:4])
df
df.loc[[1,2],:]=df.loc[[2,1],:].values
df
import pandas as pd
df['新增列']=pd.Series(['道','法','自','然'])
df
df
from pet.data import generator
df=generator.generate_df()
df['新增列']=pd.Series(['道','法','自','然'])
df
del df['英语']
df
df=df.drop('新增列',axis=1)
df
df.pop('线代')
df
from pet.data import generator
df=generator.generate_df()
df
df.loc[:,['政治', '英语']] = df.loc[:,[ '英语','政治']].values
df
df.replace({'姓名':{'苏贵慧':'牛人'},'高数':{38: 111}})
df.sort_values(['高数', '英语'])
df.sort_values(['高数', '英语'],ascending=False)
df
df.rename(index={2:999},columns={'面试':'复试'})
df.columns.tolist()
import numpy as np
df['总分']=df[[ '英语', '政治', '线代', '概率', '高数', '专业课', '表达能力', '面试']].apply(np.sum,axis=1)
df.head(3)
df
df.reindex([0,2])
df
dff=df.set_index('姓名')
dff.head(3)
df
df.reindex([0,2])
df.reindex(columns=['姓名', '线代','概率','高数','专业课']).head(3)
df.set_index('姓名').sort_index(ascending=False)
df
df.set_index(['学校','班级','性别']).head(5)
df
df['排名']=df['总分'].rank(ascending=False)
df
df.corr()
df.corrwith(df['英语'])
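# Grouped statistics: groupby() splits the rows by the key columns, then an aggregation is applied to every group.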
df.groupby(['学校','班级']).agg([min,max])[['英语','高数','总分']]
df.groupby('学校').median()
df.groupby(['学校','性别']).sum()
df.groupby(['学校','性别']).idxmax()
df.groupby(['学校','性别']).sem()
df.pivot_table(index=['学校','班级','性别'],values=['高数','英语','线代'],aggfunc='median')
df
df.pivot_table(index=['学校','班级'],values=['高数','英语','线代'],columns='性别',aggfunc='median',fill_value=0)
import numpy as np
df['总分']=df[[ '英语', '政治', '线代', '概率', '高数', '专业课', '表达能力', '面试']].apply(np.sum,axis=1)
df
df['总分'].sort_values(ascending=False)
df['总评']=pd.cut(df['总分'], [0,500,560,650,750,1000], labels=["不及格","及格","中","良","优秀"])
df
pd.melt(df,id_vars=['考号','姓名','性别'])
pd.get_dummies(df,columns=['性别'],prefix='sex')
df
df.sample(n=3)
df.sample(frac=.1,ignore_index=True)
import pandas as pd
left=pd.DataFrame({'name':['foo','bar','success'],
'age':[12,23,33],
'mark':[23,78,99]})
right=pd.DataFrame({'name':['foo','bar'],
'age':[4,5]})
pd.concat([left,right])
pd.concat([left,right],axis=1)
import pandas as pd
df1=pd.DataFrame({'key':['a','b','b','g'],'data1':range(4)})
df2=pd.DataFrame({'key':['a','b','c','k'],'data2':range(3,7)})
df1
df2
pd.merge(df1,df2)
pd.merge(df1,df2,left_on='data1',right_on='data2')
df1.join(df2,how='left',lsuffix='left',rsuffix='right',sort=True)
from pet.data import generator
df=generator.generate_df()
df=df[['高数','英语','面试']].head(5)
import matplotlib.pyplot as plt
font = {'family': 'SimHei','weight': 'bold','size': 20}
plt.rc('font', **font)
parameters = {'axes.labelsize': 15,
'axes.titlesize': 15}
plt.rcParams.update(parameters)
fig = plt.figure(figsize=(100,100),dpi=200)
df.plot()
df.plot.bar()
df.plot.bar(stacked=True)
df.plot.scatter(x='高数',y='英语')
df.plot.hist()
df.plot.hexbin(x='高数',y='英语',gridsize=15)
import pandas as pd
url='https://baike.baidu.com/item/IP%E5%9C%B0%E5%9D%80/150859?fr=aladdin'
dfs = pd.read_html(url)
dfs[0]
dfs[0].to_excel('ip_address.xlsx',index=None,header=None)
df=pd.read_excel('ip_address.xlsx')
df
import pandas as pd
df1=pd.DataFrame({'key':['a','b','b','g'],'data1':range(4)})
df2=pd.DataFrame({'key':['a','b','c','k'],'data2':range(3,7)})
df1
df2
pd.merge(df1,df2)
pd.merge(df1,df2,how='outer')
pd.merge(df1,df2,how='left')
pd.merge(df1,df2,how='right')
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.11.Pandas数据分析.ipynb | chapter.11.Pandas数据分析.ipynb |
```
import numpy as np
a=np.array(range(3))
print(f'{a=},{type(a)=}')
b=np.asarray(range(6))
print(f'{b=},{type(b)=}')
c=np.array([range(3),range(5,8)])
print(f'{c=},{type(c)=}')
d=np.ones((2,3))
print(f'{d=},{type(d)=}')
e=np.zeros((1,3))
print(f'{e=},{type(e)=}')
import numpy as np
print(f'{np.arange(3)=}')
print(f'{np.arange(2,9,2)=}')
print(f'{np.linspace(3, 6,3, endpoint =True)=}')
print(f'{np.logspace(1.2,3,3,base=2)=}')
print(f'{np.random.uniform(49.5, 99.5)=}')
print(f'{np.random.uniform(75.5, 125.5, size=(2, 2))=}')
print(f'{np.random.rand(2,3)=}')
import array as arr
# creating an array with integer type
a = arr.array('i', [1, 2, 3])
# printing original array
print ("The new created array is : ",a, end =" ")
print(f"{arr.array('i', [1, 2, 3])=}")
print(f"{np.array(range(3))=}")
print(f"{np.random.rand(2,2)=}")
np.save('ok',a)
b=np.load('ok.npy')
a==b
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
b = np.arange(0, 1.0, 0.1)
c =a*a
np.savez("ok3.npz", a, b, ok = c)
r = np.load("ok3.npz")
print(r.files) # 查看各个数组名称
print(r["arr_0"]) # 数组 a
print(r["arr_1"]) # 数组 b
print(r["ok"]) # 数组 c
a=np.random.uniform(2,6,(2,2))
print(a)
np.savetxt('ok.txt',a,fmt='%f',delimiter=',')
b=np.loadtxt('ok.txt',dtype=float,delimiter=',')
print(b)
a=np.arange(1,10)
print(f'{np.arange(1,10)=}')
print(f'{a.reshape(3,3)=}')
print(f'{a.flatten()=}')
print(f'{a.ravel()=}')
a.resize((4,4),refcheck=False)
a
a=np.arange(6)
b=np.expand_dims(a,axis=0)
c=np.expand_dims(a,axis=1)
print(a)
print(b)
print(c)
np.squeeze(b)
np.squeeze(c)
import numpy as np
a = np.array([[1,2],[3,4]])
b = np.array([[5,6],[7,8]])
c=np.concatenate((a,b),axis=0)
d=np.concatenate((a,b),axis=1)
print(a,b,c,d)
np.stack((a,b),0)
np.stack((a,b),1)
a = np.array([[1,2,3],[4,5,6]])
np.append(a, [7,8,9])
np.append(a, [[7,8,9]],axis = 0)
np.append(a, [[5,5,5],[7,8,9]],axis = 1)
np.insert(a,3,[11,12])
np.insert(a,1,[11],axis = 0)
np.insert(a,1,11,axis = 1)
np.broadcast_to(3,(3,3))
np.unique(b,return_index = True)
import numpy as np
a = np.arange(6).reshape(2,3)
for x in np.nditer(a):
print (x, end=", " )
for x in np.nditer(a,order="F"):
print(x,end=',')
for row in a:
print(row)
for i in a.flat:
print(i,end=',')
a=np.arange(6)
np.amin(a)
np.amax(a)
np.ptp(a)
np.percentile(a,60)
np.mean(a)
np.std(a)
np.var(a)
import numpy as np
aa=np.random.uniform(1,5,(2,3))
np.sort(aa)
np.argmax(aa)
np.argmin(aa)
np.where(aa>2.5)
np.extract(aa>2.5,aa)
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.10.NumPy与数学运算.ipynb | chapter.10.NumPy与数学运算.ipynb |
***Install the companion module for this textbook***
```
!pip install -U python-education-tools
```
* Chapter 2  Configuring and maintaining the Python development environment *

See the current [Python implementations](https://wiki.python.org/moin/PythonImplementations)

Go to the official Python website (https://www.python.org/downloads/) and download the release that matches your operating system.

```
import sys
sys.version_info
sys.copyright
print('hello world')
```
The online documentation is at https://docs.python.org/3/ (English) and https://docs.python.org/zh-cn/3/ (Chinese). [Online docs](https://docs.python.org/zh-cn/3/)

```
help(print)
```
** Python code-example search engine ** [website](https://www.programcreek.com/python/)
Offline local installation
[Download site for offline installation packages](https://www.lfd.uci.edu/~gohlke/pythonlibs/)
[ 10 BEST Python IDE & Code Editors for Windows, Linux & Mac](https://www.guru99.com/python-ide-code-editor.html)

```
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/chapter.02.Python开发环境配置与维护.ipynb | chapter.02.Python开发环境配置与维护.ipynb |
```
import sys
sys.stdin.read(10)
sys.stdout.write('hello')
sys.stderr.write('hello')
import sys
old_out=sys.stdout
f=open('help.txt','w')
sys.stdout=f        # redirect standard output into the file
help(sys)           # the help text is now written to help.txt
sys.stdout=old_out  # restore standard output
f.close()
import contextlib, io
f = open('out/record.txt','w+')
with contextlib.redirect_stdout(f):
import this
f.close()
print(open('out/record.txt').read())
print(open(__file__).read())
print(contextlib)
```

```
f=open('out/help-math.txt')
print(f.read(100))
f = open("out/my_file.txt", 'w')
f.write("hello world")
#或可以直接将print()函数的输出定向为文件。
print('hello world',file=open('out/ok.txt','w'))
f.close()
try:
f = open('out/help-math.txt')
print(f.read())
except:
print("An exception occurred")
finally:
if f:
f.close()
with open('out/my_file.txt', 'a+', encoding='utf-8') as f:
    f.write('test\nhello world\n')
#文件读操作
with open('out/my_file.txt', 'r', encoding='utf-8') as f:
print(f.readlines())
with open('out/demo.txt', 'w', encoding='utf-8') as f:
f.write("这里演示文件的创建!以及相关函数的使用和功能!!!\n" * 10)
with open('out/demo.txt', 'r', encoding='utf-8') as f:
print(f'文件读位置:{f.tell()=}')
print(f.read(10))
f.seek(6)
print(f'文件读位置::{f.tell()=}')
print(f.read())
with open('out/demo.txt', 'a+', encoding='utf-8') as f:
f.write("hello world!!!\n")
with open('out/demo.txt', encoding='utf-8') as f:
print(f.read())
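# shelve provides a persistent, dict-like store: keys are strings and values are pickled Python objects.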
import shelve
#(1) 保存数据。
with shelve.open('test_shelf') as w: #
w['abc'] = {'age': 10, 'float': 9.5, 'String': 'china'}
w['efg'] = [1, 2, 3]
#(2) 查找数据。
with shelve.open('test_shelf') as r: #
print(r['abc'])
print(r['efg'])
#(3) 删除、插入、更新数据。
with shelve.open('test_shelf', flag='w', writeback=True) as dm:
del dm['abc']
dm['gre'] = [99879, 2, 3]
dm['efg'] = "thi is a test".split()
#(4) 遍历数据。
with shelve.open('test_shelf') as s:
for key, value in s.items():
print(key, value)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dumps(tup1)
def hi(name):
print('hello'+name)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dumps(hi)
print(p1)
print(p2)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dumps(tup1)
def hi(name):
print('hello'+name)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dumps(hi)
print(p1)
print(p2)
t1=pickle.loads(p1)
t2 = pickle.loads(p2)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
def hi(name):
print('hello'+name)
with open ("out/a.txt", 'wb') as f: #打开文件
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dump(tup1,f)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dump(hi,f)
pickle.dump(tup1, f)
import pickle
tup1 = ('[email protected]!!', {1,2,3}, True)
def hi(name):
print('hello'+name)
with open ("out/a.txt", 'wb') as f: #打开文件
#使用 dumps() 函数将 tup1 转成 p1
p1 = pickle.dump(tup1,f)
#使用 dumps() 函数将 函数hi 转成 p2
p2 = pickle.dump(hi,f)
with open ("out/a.txt", 'rb') as f: #打开文件
t3 = pickle.load(f) #将二进制文件对象转换成 Python 对象
t4=pickle.load(f)
print(t3)
t4(' from China!!')
print(t3)
import json
# Python 对象(字典):
x = {
"name": "zhang",
"age": 33,
"city": "shanghai"
}
# 转换为 JSON:
y = json.dumps(x)
print(type(y))
print(y)
import json
# 一些 JSON:
x = '{ "name":"zhang", "age":33, "city":"shanghai"}'
# 解析 x:
y = json.loads(x)
print(type(y))
print(y)
import json
# Python 对象(字典):
x = {
"name": "zhang",
"age": 33,
"city": "shanghai"
}
with open("record.json","w") as dump_f:
json.dump(x,dump_f)
import json
with open("record.json",'r') as load_f:
load_dict = json.load(load_f)
print(type(load_dict))
print(load_dict)
import os
for dirpath, dirname, files in os.walk('.'):
print(f'Found directory: {dirpath}')
for file_name in files:
print(file_name)
```
| AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/textbook1/.ipynb_checkpoints/chapter.07.文件与输入输出-checkpoint.ipynb | chapter.07.文件与输入输出-checkpoint.ipynb |
from importlib.resources import files
import numpy as np
import pandas as pd
cookies = """
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: zh-CN;q=0.8
Cache-Control: max-age=0
Connection: keep-alive
Cookie: cookiesession1=678B287ESTV0234567898901234AFCF1; SL_G_WPT_TO=zh; SL_GWPT_Show_Hide_tmp=1; SL_wptGlobTipTmp=1
Host: www.edu.cn
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Sec-GPC: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36
"""
votes = '''
#接龙
投票接龙
1. 房号+姓名+反对/赞成/弃权
2. 100 神仆 赞成
3. 184朱良 赞成
4. 118号 反对
5. 97号 弃权
6. 62号(不能退钱就赞成,可以退钱就算了,不想烦)
7. 174号 赞成
8. 86-海鱼 反对(1来历尴尬;2过于破旧,维修维护成本未知,建议及时止损。如果无法退款,已花的费用众筹算我一份)
9. 223 九凤 赞同
10. 126一郑桂华 赞同
11. 247 大卫林 赞同
12. 128号孙伟 弃权(照顾个别业主,可以放到不显眼处)
13. 禾亮188 赞同
14. 168茅 赞同
15. 229 亚梅 赞同
16. 109-21赞同
17. 233林 赞同 (为了照顾少数人位置重新协商)
18. 129号 赞同
19. 136号 赞成
20. Xing 31号 赞同 希望小区越来越好,支持所有正能量的行为!
21. 120号 赞成(位置为照顾个别人想法,可以协商)
22. 42号ringing 反对,和小区建筑风格不符
23. 245号 赞成
24. 83小宝 反对
25. 3号 反对
26. 242 赞成、英雄不问出处,正能压邪!
27. 瑞华1号 赞成
28. 108-301 赞同
29. 227赞成
30. 224严,赞同!墓区边的房子都买了,还怕这个!就算从风水讲,墓区的东西面还是好风水。原先比当今小区还要乱的时候,就有热心的业主捐了五六块镜子,放在转角处,改善小区道路行车安全,经过几届业委会和全体正常交物业管理费业主的共同努力,小区面貌已有较大的改善,愿意为小区建设奉献的行为理应得到鼓励和支持!
31. 青青翠竹 赞同
32. 青青翠竹 赞同88号 南赞同
33. 南88 赞同
34. 78-安妮 弃权(既然已经来了后续协商更新外观或者位置就行)
35. 139-常 赞同
36. 143徐 赞同
37. 157号 赞同
38. 19-rongying 反对,和小区风格不搭
39. 106- 赞同 喜欢马车 无论来自哪里都喜欢
40. 62号叶师傅 赞同
41. 241~赵永 弃权(出发点是好的,但随意性强,没有遵循小区基本的议事规则,没有事先征询大多数业主意见。)
42. 127-凌耀初 赞同!(由于马儿和马车锈烂严重,希望好好修补。另,来历也确实是有点尴尬,建议修复时颜色重新考虑)。通过这件事情如能形成小区的议事规则,如能形成网络投票的新机制,那将大大提高业主大会和业委会的决策效率,那是一件大好事!我们小区急需做的大事还有不少~
43. 108-402陈 弃权(不论结果怎么样,至少体现了办事透明度和业主参与度,是好事。)
44. 110-401可可 赞成(本来就是业委会牵头做的事情,也是为了改善小区环境,如果每样小事都需要全体业主投票,业主们就太累了)
45. 72号 赞同
46. 76号 赞同
47. 华爷140 弃权
48. 74号陆 赞同
49. 185-麻辣面 弃权
50. 202号王焱 赞成
51. 61-芊茉 赞同
52. 151田 赞同
53. 21-夏 赞同
54. 117 赞同
55. 9号 弃权 虽然参加了众筹,但是的确不知道还有那么多邻居没有进新群,不知道众筹这个事;虽然初心是为了美丽家园做出贡献,但的确不知道青博馆大门开在海湾园内;虽然放在海湾园里的东西肯定不会全是祭品(比如园区办公室的办公用品、摆设等等),但他的确是海湾园里出来的;虽然我不信邪,但的确有人会觉得这个晦气。
56. 115-402 赞同 心中为阳处处阳,心中为阴处处阴,心灵纯洁一点就不会有那么多的事情了
57. 静80 反对放在大门口,可以改个地方放吗?听说是海湾园里出来的的确会让人觉得晦气。
58. 艺嘉 赞同
59. 114-402 赞同
60. 219号戴 赞同。
61. 8-陈 赞同(既来之则安之)
62. 172杰 赞同(是饰品非祭品)
63. 148号艺嘉 赞成
64. 152CQ 赞成
65. 211号 赞成
66. 10-嘟嘟爸 赞成
67. 135 反对。这种材质注定了保养翻新不会只有一次,这一次大家众筹了那么下次呢?如果不翻新,那么一到小区门口就会感到这个小区的破败,如果翻新,那么钱从哪里出?因为不赞同,所以后续费用也不愿意承担。桃花岛上的亭子想要翻新我看大家都想选一劳永逸的材质,为什么在小区门口要放一个需要反复翻新的?
68. 178-冰姐 赞成,小区要做成一件事太难了
69. 217 赞同
70. 15洪虹 弃权
71. 55号 赞成
认知的差异性产生了多样性的思想碰撞现象,我思故我在
72. 105号301 赞成
73. 84-wang 弃权
'''
def create_name(name='姓名', rows=40):
xm = [
'赵钱孙李周吴郑王冯陈褚蒋沈韩杨朱秦尤许何吕施张孔曹严华金魏陶姜戚谢邹喻柏窦章苏潘葛奚范彭郎鲁韦昌马苗方俞任袁柳',
"群平风华正茂仁义礼智媛强天霸红和丽平世莉界中华正义伟岸茂盛繁圆一懿贵妃彭习嬴政韦近荣群智慧睿兴平风清扬自成世民嬴旺品网红丽文天学与翔斌霸学花文教学忠谋书"
]
x = np.random.choice(list(xm[0]), (rows, 1))
m = np.random.choice(list(xm[1]), (rows, 2))
nm = np.hstack((x, m))
df = pd.DataFrame(nm)
df[2] = df[2].apply(lambda x: ('', x)[np.random.randint(0, 2)])
dff = pd.DataFrame()
dff[name] = df[0] + df[1] + df[2]
return dff[name]
def create_columns(column_list, value_list, rows=40):
size = (rows, len(column_list))
if type(value_list[0]) == int and len(value_list) == 2:
return pd.DataFrame(np.random.randint(*value_list, size=size), columns=column_list)
else:
return pd.DataFrame(np.random.choice(value_list, size=size), columns=column_list)
def generate_df(rows=40):
return pd.concat([
pd.DataFrame(data=range(220151000, 220151000 + rows), columns=['考号']),
create_name('姓名', rows),
create_columns(['性别'], ['男', '女'], rows),
# create_columns(['邮编'], [171019, 200234], rows),
create_columns(['学校'], ['清华大学', '北京大学', '复旦大学', '上海师大', '上海交大'], rows),
create_columns(['班级'], ['计算机科学与技术', '人工智能', '数据科学'], rows),
create_columns(['英语', '政治', '线代', '概率'], [20, 100], rows),
create_columns(['高数', '专业课', '表达能力', '面试'], [30, 150], rows)],
axis=1)
def generate_sr(v='英语', i='姓名', rows=40):
dd = generate_df(rows)
return pd.Series(data=dd[v].values.tolist(), index=dd[i], name="学生成绩")
def load(name):
from importlib.resources import files
data_file = files('pet.data').joinpath(f'{name}.xlsx')
return pd.read_excel(data_file)
def load_data(file_name):
data_file = files('pet.data').joinpath(f'{file_name}')
if file_name.split('.')[-1] == 'xlsx':
return pd.read_excel(data_file)
elif file_name.split('.')[-1] == 'txt':
return open(data_file, encoding="UTF-8").read()
elif file_name.split('.')[-1] == 'csv':
return pd.read_csv(data_file)
else:
return eval(file_name)
def download_textbook1():
import os, shutil
from importlib.resources import files
dst = os.path.join(os.path.expanduser("~"), 'Desktop') + '\\Python与数据分析及可视化教学案例'
src = files('pet.textbook1')
print('Please wait....')
shutil.copytree(src, dst, dirs_exist_ok=True)
print('done!!')
os.system(f"start explorer {dst}")
if __name__ == '__main__':
# dd=generate_df(10)
# dff=dd.set_index('姓名')
# s=dff['英语']
print(generate_df())
# ss=pd.Series(data=dd['英语'].values.tolist(),index=dd['姓名'],name="学生成绩")
# print(ss)
print(generate_sr(rows=10))
print(load_data('tyjhzz.txt')) | AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/data/load_data.py | load_data.py |
import numpy as np
import pandas as pd
votes = '''
#接龙
投票接龙
1. 房号+姓名+反对/赞成/弃权
2. 100 神仆 赞成
3. 184朱良 赞成
4. 118号 反对
5. 97号 弃权
6. 62号(不能退钱就赞成,可以退钱就算了,不想烦)
7. 174号 赞成
8. 86-海鱼 反对(1来历尴尬;2过于破旧,维修维护成本未知,建议及时止损。如果无法退款,已花的费用众筹算我一份)
9. 223 九凤 赞同
10. 126一郑桂华 赞同
11. 247 大卫林 赞同
12. 128号孙伟 弃权(照顾个别业主,可以放到不显眼处)
13. 禾亮188 赞同
14. 168茅 赞同
15. 229 亚梅 赞同
16. 109-21赞同
17. 233林 赞同 (为了照顾少数人位置重新协商)
18. 129号 赞同
19. 136号 赞成
20. Xing 31号 赞同 希望小区越来越好,支持所有正能量的行为!
21. 120号 赞成(位置为照顾个别人想法,可以协商)
22. 42号ringing 反对,和小区建筑风格不符
23. 245号 赞成
24. 83小宝 反对
25. 3号 反对
26. 242 赞成、英雄不问出处,正能压邪!
27. 瑞华1号 赞成
28. 108-301 赞同
29. 227赞成
30. 224严,赞同!墓区边的房子都买了,还怕这个!就算从风水讲,墓区的东西面还是好风水。原先比当今小区还要乱的时候,就有热心的业主捐了五六块镜子,放在转角处,改善小区道路行车安全,经过几届业委会和全体正常交物业管理费业主的共同努力,小区面貌已有较大的改善,愿意为小区建设奉献的行为理应得到鼓励和支持!
31. 青青翠竹 赞同
32. 青青翠竹 赞同88号 南赞同
33. 南88 赞同
34. 78-安妮 弃权(既然已经来了后续协商更新外观或者位置就行)
35. 139-常 赞同
36. 143徐 赞同
37. 157号 赞同
38. 19-rongying 反对,和小区风格不搭
39. 106- 赞同 喜欢马车 无论来自哪里都喜欢
40. 62号叶师傅 赞同
41. 241~赵永 弃权(出发点是好的,但随意性强,没有遵循小区基本的议事规则,没有事先征询大多数业主意见。)
42. 127-凌耀初 赞同!(由于马儿和马车锈烂严重,希望好好修补。另,来历也确实是有点尴尬,建议修复时颜色重新考虑)。通过这件事情如能形成小区的议事规则,如能形成网络投票的新机制,那将大大提高业主大会和业委会的决策效率,那是一件大好事!我们小区急需做的大事还有不少~
43. 108-402陈 弃权(不论结果怎么样,至少体现了办事透明度和业主参与度,是好事。)
44. 110-401可可 赞成(本来就是业委会牵头做的事情,也是为了改善小区环境,如果每样小事都需要全体业主投票,业主们就太累了)
45. 72号 赞同
46. 76号 赞同
47. 华爷140 弃权
48. 74号陆 赞同
49. 185-麻辣面 弃权
50. 202号王焱 赞成
51. 61-芊茉 赞同
52. 151田 赞同
53. 21-夏 赞同
54. 117 赞同
55. 9号 弃权 虽然参加了众筹,但是的确不知道还有那么多邻居没有进新群,不知道众筹这个事;虽然初心是为了美丽家园做出贡献,但的确不知道青博馆大门开在海湾园内;虽然放在海湾园里的东西肯定不会全是祭品(比如园区办公室的办公用品、摆设等等),但他的确是海湾园里出来的;虽然我不信邪,但的确有人会觉得这个晦气。
56. 115-402 赞同 心中为阳处处阳,心中为阴处处阴,心灵纯洁一点就不会有那么多的事情了
57. 静80 反对放在大门口,可以改个地方放吗?听说是海湾园里出来的的确会让人觉得晦气。
58. 艺嘉 赞同
59. 114-402 赞同
60. 219号戴 赞同。
61. 8-陈 赞同(既来之则安之)
62. 172杰 赞同(是饰品非祭品)
63. 148号艺嘉 赞成
64. 152CQ 赞成
65. 211号 赞成
66. 10-嘟嘟爸 赞成
67. 135 反对。这种材质注定了保养翻新不会只有一次,这一次大家众筹了那么下次呢?如果不翻新,那么一到小区门口就会感到这个小区的破败,如果翻新,那么钱从哪里出?因为不赞同,所以后续费用也不愿意承担。桃花岛上的亭子想要翻新我看大家都想选一劳永逸的材质,为什么在小区门口要放一个需要反复翻新的?
68. 178-冰姐 赞成,小区要做成一件事太难了
69. 217 赞同
70. 15洪虹 弃权
71. 55号 赞成
认知的差异性产生了多样性的思想碰撞现象,我思故我在
72. 105号301 赞成
73. 84-wang 弃权
'''
def create_name(name='姓名', rows=40):
xm = [
'赵钱孙李周吴郑王冯陈褚蒋沈韩杨朱秦尤许何吕施张孔曹严华金魏陶姜戚谢邹喻柏窦章苏潘葛奚范彭郎鲁韦昌马苗方俞任袁柳',
"群平风华正茂仁义礼智媛强天霸红和丽平世莉界中华正义伟岸茂盛繁圆一懿贵妃彭习嬴政韦近荣群智慧睿兴平风清扬自成世民嬴旺品网红丽文天学与翔斌霸学花文教学忠谋书"
]
x = np.random.choice(list(xm[0]), (rows, 1))
m = np.random.choice(list(xm[1]), (rows, 2))
nm = np.hstack((x, m))
df = pd.DataFrame(nm)
df[2] = df[2].apply(lambda x: ('', x)[np.random.randint(0, 2)])
dff = pd.DataFrame()
dff[name] = df[0] + df[1] + df[2]
return dff[name]
def create_columns(column_list, value_list, rows=40):
size = (rows, len(column_list))
if type(value_list[0]) == int and len(value_list) == 2:
return pd.DataFrame(np.random.randint(*value_list, size=size), columns=column_list)
else:
return pd.DataFrame(np.random.choice(value_list, size=size), columns=column_list)
def generate_df(rows=40):
return pd.concat([
pd.DataFrame(data=range(220151000, 220151000 + rows), columns=['考号']),
create_name('姓名', rows),
create_columns(['性别'], ['男', '女'], rows),
# create_columns(['邮编'], [171019, 200234], rows),
create_columns(['学校'], ['清华大学', '北京大学', '复旦大学', '上海师大', '上海交大'], rows),
create_columns(['班级'], ['计算机科学与技术', '人工智能', '数据科学'], rows),
create_columns(['英语', '政治', '线代', '概率'], [20, 100], rows),
create_columns(['高数', '专业课', '表达能力', '面试'], [30, 150], rows)],
axis=1)
def generate_sr(v='英语', i='姓名', rows=40):
dd = generate_df(rows)
return pd.Series(data=dd[v].values.tolist(), index=dd[i], name="学生成绩")
def load(name):
from importlib.resources import files
# Reads contents with UTF-8 encoding and returns str.
data_file = files('pet.data').joinpath(f'{name}.xlsx')
return pd.read_excel(data_file)
if __name__ == '__main__':
# dd=generate_df(10)
# dff=dd.set_index('姓名')
# s=dff['英语']
print(generate_df())
# ss=pd.Series(data=dd['英语'].values.tolist(),index=dd['姓名'],name="学生成绩")
# print(ss)
print(generate_sr(rows=10)) | AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/data/generator.py | generator.py |
import datetime, os
import numpy as np
import pandas as pd
from importlib.resources import files
from random import choices, randrange, random
import os
pet_home = os.path.expanduser("~") + '\\pet_home'
os.makedirs(pet_home, exist_ok=True)
def gen_iid(init=220151000, number=40):
""" 生成从init起始的一批学号
init:起始学号
number:元素个数
"""
init = 220151000 if not isinstance(init, int) else init
return pd.Series(data=range(init, init + number))
def gen_name(xm, number=40):
""" 生成姓名,
xm=['姓字符串','名字字符串],若传入的是空字符串"",则生成默认姓名
根据姓,名,生成n个假名字
number:元素个数
"""
xm = [
'赵钱孙李周吴郑王冯陈褚蒋沈韩杨朱秦尤许何吕施张孔曹严华金魏陶姜戚谢邹喻柏窦章苏潘葛奚范彭郎鲁韦昌马苗方俞任袁柳',
"群平风华正茂仁义礼智媛强天霸红和丽平世莉界中华正义伟岸茂盛繁圆一懿贵妃彭习嬴政韦荣群智慧睿兴平风清扬自成世民嬴旺品网红丽文天学与翔斌霸学花文教学忠谋书"
] if not isinstance(xm, (list, tuple)) else xm
names = ["".join(choices(xm[0], k=1) + choices(xm[1], k=randrange(1, 3))) for _ in range(number)]
return pd.Series(names)
def gen_int_series(int_range_lst=[0, 100], name='mark', number=40):
''' 生成整数随机series
int_range_lst:[start,end]
记录条数:number
'''
int_range_lst = [0, 100] if not isinstance(int_range_lst, (list, tuple)) else int_range_lst
low, high = int_range_lst
return pd.Series(np.random.randint(low, high, number), name=name)
def gen_float_series(float_range_lst=[0, 100, 2], name='mark', number=40):
''' 生成浮点数 series
float_range_lst:[start,end,length] ,length:小数点的位数
记录条数:number
'''
float_range_lst = [0, 100, 2] if not isinstance(float_range_lst, (list, tuple)) else float_range_lst
low, high, length = float_range_lst
out = map(lambda x: round(x, length), (np.random.rand(number) * (high - low) + low))
return pd.Series(out, name=name)
def gen_date_time_series(period=['2020-2-24 00:00:00', '2022-12-31 00:00:00'], number=40, frmt="%Y-%m-%d %H:%M:%S"):
'''
print(gen_date_time_series('2022-1-01 07:00:00', '2020-11-01 09:00:00', 10))
随机生成某一时间段内的日期,时刻:
:param start: 起始时间
:param end: 结束时间
:param number: 记录数
:param frmt: 格式
:return: series
'''
period = ['2020-2-24 00:00:00', '2022-12-31 00:00:00'] if not isinstance(period, (list, tuple)) else period
start, end = period
stime = datetime.datetime.strptime(start, frmt)
etime = datetime.datetime.strptime(end, frmt)
time_datetime = [random() * (etime - stime) + stime for _ in range(number)]
time_str = [t.strftime(frmt) for t in time_datetime]
return pd.Series(time_str)
def gen_date_series(date_period=['2020-2-24', '2024-12-31'], number=40, frmt="%Y-%m-%d"):
'''
随机生成某一时间段内的日期:
print(gen_date_time_series('2022-1-01', '2020-11-01', 10))
:param start: 起始时间
:param end: 结束时间
:param number: 记录数
:param frmt: 格式
:return: series
'''
date_period = ['2020-2-24', '2024-12-31'] if not isinstance(date_period, (list, tuple)) else date_period
return gen_date_time_series(date_period, number, frmt="%Y-%m-%d")
def gen_time_series(time_period=['00:00:00', '23:59:59'], number=40, frmt="%H:%M:%S"):
'''
随机生成某一时间段内的时刻:
print(gen_time_series('07:00:00', '12:00:00', 10))
:param start: 起始时间
:param end: 结束时间
:param number: 记录数
:param frmt: 格式
:return: series
'''
time_period = ['00:00:00', '23:59:59'] if not isinstance(time_period, (list, tuple)) else time_period
return gen_date_time_series(time_period, number, frmt="%H:%M:%S")
def gen_category_series(lst, number=40):
''' 生成category数据 series
lst:可选数据列表
记录条数:number
'''
return pd.Series(np.random.choice(lst, size=number))
'''
对上述函数做简化名称,目的为了选择解析模板数据后调用函数名称。自动实现一一对应。
'''
iid = gen_iid
n = gen_name
i = gen_int_series
f = gen_float_series
d = gen_date_series
t = gen_time_series
dt = gen_date_time_series
c = gen_category_series
sample_order = {
'学号.iid': 220151000,
'考号.i': [151000, 789000],
'姓名.n': '', # ""生成默认的随机名字,也可以设置姓名字符串,['赵钱孙李','微甜地平天下'],
'性别.c': ['男', '女'],
'毕业日期.d': ['2018-1-1', '2022-12-31'],
'录入时间.t': ['00:00:00', '23:59:59'],
'年龄.i': [18, 24],
'政治面貌.c': ['中共党员', '团员', '群众', '民革', '九三学社'],
'专业.c': ['计算机科学与技术', '人工智能', '软件工程', '自动控制', '机械制造', '自动控制'],
'学校.c': ['清华大学', '北京大学', '复旦大学', '上海交通大学', '上海师范大学', '中国科技大学', '上海大学'],
'政治.i': [19, 100],
'英语.i': [29, 100],
'英语类别.c': ['英语一', '英语二'],
'高等数学.i': (30, 140),
'数学类别.c': ['数学一', '数学二', '数学三'],
'专业课.i': [30, 150],
'净收入.f': (30.3, 150.55, 3)
}
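# add_noise: repeat and reshuffle the rows (so some appear twice and some are dropped), then blank roughly a `noise` share of the cells to simulate a dirty dataset.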
def add_noise(df, noise=0.1, repeat=2) -> pd.DataFrame:
'''
对 DataFrame加入噪声,非法数据
:param df:
:return:
'''
scope_n = int((len(df) + len(df.columns)) * .8)
noise_n = int(scope_n * noise)
df = pd.concat([df] * repeat)
df = df.sample(frac=1 / repeat).reset_index(drop=True)
for i in df.columns:
df[i] = df[i].apply(lambda x: None if np.random.randint(1, scope_n) in range(noise_n) else x)
return df
def generator(order: dict = sample_order,
number: int = 40,
dst: str = f'{pet_home}/generated_dataset_{datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")}.xlsx',
noise: float = 0,
repeat: int = 1):
'''
根据订单生成数据
:param order: 订单字典
:param number: 数据元素个数
:return:
订单字典格式:
sample_order = {
'学号.iid': 220151000,
'考号.i': [151000, 789000],
'姓名.n': '', # ""生成默认的随机名字,也可以设置姓名字符串,['赵钱孙李','微甜地平天下'],
'性别.c': ['男', '女'],
'日期.d': ['2020-2-24', '2024-12-31'],
'时间.t': ['00:00:00', '23:59:59'],
'年龄.i': [18, 24],
'政治面貌.c': ['党员', '团员', '群众'],
'专业.c': ['计算机科学与技术', '人工智能', '软件工程', '自动控制', '机械制造', '自动控制'],
'学校.c': ['清华大学', '北京大学', '复旦大学', '上海交通大学', '上海师范大学', '中国科技大学', '上海大学'],
'政治.i': [19, 100],
'英语.i': [29, 100],
'英语类别.c': ['英语一', '英语二'],
'高等数学.i': [30, 140],
'数学类别.c': ['数学一', '数学二', '数学三'],
'专业课.i': [30, 150],
'净收入.f': [30.3, 150.55, 3],
}
iid = gen_iid
n = gen_name
i = gen_int_series
f = gen_float_series
d = gen_date_series
t = gen_time_series
dt = gen_date_time_series
c = gen_category_series
'''
df = pd.DataFrame()
for k, v in order.items():
na, func = k.split('.')
df[na] = eval(func)(v, number=number)
if noise > 0.0:
df = add_noise(df, noise=noise, repeat=repeat)
df.to_excel(dst, index=None)
print(f'Dataset is generated and saved in {dst} !!!')
return df
def gen_sample_series(number: int = 40,
dst=f'{pet_home}/generated_sample_series_{datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")}.xlsx',
noise=0,
repeat=1):
order = {
'姓名.n': '', # ""生成默认的随机名字,也可以设置姓名字符串,['赵钱孙李','微甜地平天下'],
'成绩.i': ''
}
df = generator(order, number, dst)
df = pd.concat([df] * repeat)
df = df.sample(frac=1 / repeat).reset_index(drop=True)
df.set_index(df['姓名'], inplace=True)
df['成绩'] = df['成绩'].apply(lambda x: None if np.random.randint(1, len(df)) in range(int(noise * len(df))) else x)
return df['成绩']
def gen_sample_dataframe(sample_order=sample_order,
number: int = 40,
dst=f'{pet_home}/generated_sample_dataframe_{datetime.datetime.now().strftime("%Y-%m-%d %H-%M-%S")}.xlsx',
noise=0,
repeat=1):
print('*' * number)
from pprint import pprint
print('订单格式:')
pprint(sample_order)
print("*" * number)
return generator(order=sample_order, number=number, dst=dst, noise=noise, repeat=repeat)
def show_order_sample():
from pprint import pprint
pprint(sample_order)
'''
def add_noise(df) -> pd.DataFrame:
for i in df.columns:
df[i] = df[i].apply(lambda x: None if np.random.randint(1, 12) == 2 else x)
return df
'''
datafile_dict = {
'ip地址分类': 'ip_address.xlsx',
'研究生初试成绩': 'st.xlsx',
'上海地铁线路': 'subway.xlsx',
'北京公交车': 'beijing_bus.xlsx',
'通识课': '2022tsk.xlsx',
'优秀毕业论文': '2022pst.xlsx',
'太乙金华宗旨': 'tyjhzz.txt',
'微信接龙投票': 'votes.txt',
'cookie样例': 'sample_cookies.txt',
'道德经': 'ddj.txt',
'双色球': 'ssq_22134.xlsx'
}
def get_datasets_list():
return datafile_dict.keys()
def load_data(key='道德经'):
print(f'可选数据集: {get_datasets_list()}')
file_name = datafile_dict.get(key, "ddj.txt")
data_file = files('pet.datasets.database').joinpath(file_name)
if file_name.split('.')[-1] == 'xlsx':
return pd.read_excel(data_file)
elif file_name.split('.')[-1] == 'txt':
return open(data_file, encoding="UTF-8").read().replace('\n', '')
elif file_name.split('.')[-1] == 'csv':
return pd.read_csv(data_file)
else:
'''
如果文件名非法,则默认线上《道德经》
'''
print('目前仅支持 txt,xlsx,csv 文件类型')
        data_file = files('pet.datasets.database').joinpath('ddj.txt')
        return open(data_file, encoding="UTF-8").read()
def download_textbook1():
import os, shutil
from importlib.resources import files
dst = os.path.join(os.path.expanduser("~"), 'Desktop') + '\\Python与数据分析及可视化教学案例'
src = files('pet.textbook1')
print('Please wait....')
shutil.copytree(src, dst, dirs_exist_ok=True)
print('done!!')
os.system(f"start explorer {dst}")
if __name__ == '__main__':
df = gen_sample_dataframe(number=40,noise=.1, repeat=2)
print(df)
print(gen_sample_series(30, noise=0.1, repeat=2))
d = load_data('ip地址分类')
print(f'{d=}') | AI-Education-Tools | /AI%20%20Education%20Tools-1.9.11.tar.gz/AI Education Tools-1.9.11/src/pet/datasets/factory.py | factory.py |
Intro = '''
Application Name:- Jarvis
Developer Name:- RISHABH-SAHIL
'''
# ----------------------Modules Name----------------------
# Windows Based pip install pyttsx3
# Chrome Based pip install selenium==4.1.3
# ----------------------Windows Based----------------------
import pyttsx3
def Speak(Text,speed=170,speakvoices=0):
engine = pyttsx3.init("sapi5")
voices = engine.getProperty('voices')
    engine.setProperty('voice', voices[speakvoices].id)
engine.setProperty('rate', speed)
print("")
print(f"You:- {Text}")
print("")
engine.say(Text)
engine.runAndWait()
# ----------------------Chrome Based----------------------
# from selenium import webdriver
# from selenium.webdriver.support.ui import Select
# from selenium.webdriver.chrome.options import Options
# from selenium.webdriver.common.by import By
# from time import sleep
# chrome_options = Options()
# chrome_options.add_argument('--log-level=3')
# chrome_options.headless = True
# Path = "DataBase\chromedriver.exe"
# driver = webdriver.Chrome(Path,options=chrome_options)
# driver.maximize_window()
# website = r"https://ttsmp3.com/text-to-speech/British%20English/"
# driver.get(website)
# ButtonSelaction = Select(driver.find_element(by=By.XPATH,value='/html/body/div[4]/div[2]/form/select'))
# ButtonSelaction.select_by_visible_text('British English / Brian')
# # ButtonSelaction.select_by_visible_text('Indian English / Raveena')
# def AI_Speak(Text):
# lengthtext = len(str(Text))
# if lengthtext==0:
# pass
# else:
# print("")
# print(f"Jarvis:- {Text}")
# print("")
# Data = str(Text)
# Xpathodsec = '/html/body/div[4]/div[2]/form/textarea'
# driver.find_element(By.XPATH,value=Xpathodsec).send_keys(Data)
# driver.find_element(By.XPATH,value='//*[@id="vorlesenbutton"]').click()
# driver.find_element(By.XPATH,value='/html/body/div[4]/div[2]/form/textarea').clear()
# if lengthtext>=30:
# sleep(4)
# elif lengthtext>=40:
# sleep(6)
# elif lengthtext>=55:
# sleep(8)
# elif lengthtext>=70:
# sleep(10)
# elif lengthtext>=100:
# sleep(13)
# elif lengthtext>=120:
# sleep(14)
# else:
# sleep(2)
# while True:
# aa=input(">> ")
# Speak(aa) | AI-Jarvis-Tools | /AI_Jarvis_Tools-0.0.3.tar.gz/AI_Jarvis_Tools-0.0.3/AI_Jarvis_Tools/Speak.py | Speak.py |
Intro = '''
Application Name:- Jarvis
Developer Name:- RISHABH-SAHIL
'''
try:
import os
# Create a folder
os.makedirs("DataBase")
os.makedirs("Data")
# Open a file
f = open("DataBase\\qna_log.txt", "w")
first='''Answer: Jarvis: My Name is Jarvis'''
# Write to the file
f.write(str(first))
# Close the file
f.close()
f = open("Data\\Api.txt", "w")
# Write to the file
f.write('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')
# Close the file
f.close()
except:
try:
# Open a file
f = open("DataBase\\qna_log.txt", "w")
first='''Answer: Jarvis: My Name is Jarvis'''
# Write to the file
f.write(str(first))
# Close the file
f.close()
f = open("Data\\Api.txt", "w")
# Write to the file
f.write('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')
# Close the file
f.close()
except:
pass
# ----------------------Modules Name----------------------
import openai
from dotenv import load_dotenv
def Questions_Answers(question,chat_log=None):
fileopen = open("Data\\Api.txt","r")
API = fileopen.read()
fileopen.close()
openai.api_key = API
load_dotenv()
completion = openai.Completion()
FileLog = open("DataBase\\qna_log.txt","r")
chat_log_template = FileLog.read()
FileLog.close()
if chat_log is None:
chat_log = chat_log_template
prompt = f'{chat_log} Question : {question}\nAnswer : '
response = completion.create(
model = "text-davinci-002",
prompt=prompt,
temperature = 0,
max_tokens = 100,
top_p = 1,
frequency_penalty = 0,
presence_penalty = 0)
answer = response.choices[0].text.strip()
chat_log_template_update = chat_log_template + f"\nQuestion : {question} \nAnswer : {answer}"
FileLog = open("DataBase\\qna_log.txt","w")
FileLog.write(chat_log_template_update)
FileLog.close()
return answer
# while True:
# query = input(">> ")
# replay = Questions_Answers(query)
# print(replay) | AI-Jarvis-Tools | /AI_Jarvis_Tools-0.0.3.tar.gz/AI_Jarvis_Tools-0.0.3/AI_Jarvis_Tools/QNA.py | QNA.py |
import pyaudio
import struct
import math
INITIAL_TAP_THRESHOLD = 0.1 # 0.01 to 1.5
FORMAT = pyaudio.paInt16
SHORT_NORMALIZE = (1.0/32768.0)
CHANNELS = 2
RATE = 44100
INPUT_BLOCK_TIME = 0.05
INPUT_FRAMES_PER_BLOCK = int(RATE*INPUT_BLOCK_TIME)
OVERSENSITIVE = 15.0/INPUT_BLOCK_TIME
UNDERSENSITIVE = 120.0/INPUT_BLOCK_TIME
MAX_TAP_BLOCKS = 0.15/INPUT_BLOCK_TIME
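# get_rms: unpack the raw 16-bit samples and return their root-mean-square amplitude, normalised to the 0..1 range.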
def get_rms( block ):
count = len(block)/2
format = "%dh"%(count)
shorts = struct.unpack( format, block )
sum_squares = 0.0
for sample in shorts:
n = sample * SHORT_NORMALIZE
sum_squares += n*n
return math.sqrt( sum_squares / count )
class TapTester(object):
def __init__(self):
self.pa = pyaudio.PyAudio()
self.stream = self.open_mic_stream()
self.tap_threshold = INITIAL_TAP_THRESHOLD
self.noisycount = MAX_TAP_BLOCKS+1
self.quietcount = 0
self.errorcount = 0
def stop(self):
self.stream.close()
def find_input_device(self):
device_index = None
for i in range( self.pa.get_device_count() ):
devinfo = self.pa.get_device_info_by_index(i)
# print( "Device %d: %s"%(i,devinfo["name"]) )
for keyword in ["mic","input"]:
if keyword in devinfo["name"].lower():
# print( "Found an input: device %d - %s"%(i,devinfo["name"]) )
device_index = i
return device_index
if device_index == None:
print( "No preferred input found; using default input device." )
return device_index
def open_mic_stream( self ):
device_index = self.find_input_device()
stream = self.pa.open( format = FORMAT,
channels = CHANNELS,
rate = RATE,
input = True,
input_device_index = device_index,
frames_per_buffer = INPUT_FRAMES_PER_BLOCK)
return stream
def listen(self):
try:
block = self.stream.read(INPUT_FRAMES_PER_BLOCK)
except IOError as e:
self.errorcount += 1
print( "(%d) Error recording: %s"%(self.errorcount,e) )
self.noisycount = 1
return
amplitude = get_rms( block )
if amplitude > self.tap_threshold:
self.quietcount = 0
self.noisycount += 1
if self.noisycount > OVERSENSITIVE:
self.tap_threshold *= 1.1
else:
if 1 <= self.noisycount <= MAX_TAP_BLOCKS:
return "True-Mic"
self.noisycount = 0
self.quietcount += 1
if self.quietcount > UNDERSENSITIVE:
self.tap_threshold *= 2
def Tester():
tt = TapTester()
while True:
kk = tt.listen()
if "True-Mic" == kk:
print("")
print("> Clap Detected: Starting The Jarvis.")
print("")
return "True-Mic" | AI-Jarvis-Tools | /AI_Jarvis_Tools-0.0.3.tar.gz/AI_Jarvis_Tools-0.0.3/AI_Jarvis_Tools/Clap.py | Clap.py |
Intro = '''
Application Name:- Jarvis
Developer Name:- RISHABH-SAHIL
'''
try:
import os
# Create a folder
os.makedirs("DataBase")
os.makedirs("Data")
# Open a file
f = open("DataBase\\chat_log.txt", "w")
first='''Jarvis: My Name is Jarvis'''
# Write to the file
f.write(str(first))
# Close the file
f.close()
f = open("Data\\Api.txt", "w")
# Write to the file
f.write('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')
# Close the file
f.close()
except:
try:
# Open a file
f = open("DataBase\\chat_log.txt", "w")
first='''Jarvis: My Name is Jarvis'''
# Write to the file
f.write(str(first))
# Close the file
f.close()
f = open("Data\\Api.txt", "w")
# Write to the file
f.write('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')
# Close the file
f.close()
except:
pass
# ----------------------Modules Name----------------------
import openai
from dotenv import load_dotenv
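# ChatBot_Brain keeps a running transcript in DataBase\chat_log.txt, prepends it to every prompt
# so the model sees the conversation history, and appends each new exchange back to the file.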
def ChatBot_Brain(question,chat_log=None):
fileopen = open("Data\Api.txt","r") # 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
API = fileopen.read()
fileopen.close()
openai.api_key = API
load_dotenv()
completion = openai.Completion()
FileLog = open("DataBase\\chat_log.txt","r")
chat_log_template = FileLog.read()
FileLog.close()
if chat_log is None:
chat_log = chat_log_template
prompt = f'{chat_log} You : {question}\nJarvis : '
response = completion.create(
model = "text-davinci-002",
prompt=prompt,
temperature = 0.5,
max_tokens = 60,
top_p = 0.3,
frequency_penalty = 0.5,
presence_penalty = 0)
answer = response.choices[0].text.strip()
chat_log_template_update = chat_log_template + f"\nYou : {question} \nJarvis : {answer}"
FileLog = open("DataBase\\chat_log.txt","w")
FileLog.write(chat_log_template_update)
FileLog.close()
return answer
# while True:
# query = input(">> ")
# replay = ChatBot_Brain(query)
# print(replay) | AI-Jarvis-Tools | /AI_Jarvis_Tools-0.0.3.tar.gz/AI_Jarvis_Tools-0.0.3/AI_Jarvis_Tools/Brain.py | Brain.py |
Intro = '''
Application Name:- Jarvis
Developer Name:- RISHABH-SAHIL
'''
# ----------------------Modules Name----------------------
# Windows Based pip install pyttsx3
# Chrome Based pip install selenium==4.1.3
# ----------------------Windows Based----------------------
import pyttsx3
def Speak(Text,speed=170,speakvoices=0):
engine = pyttsx3.init("sapi5")
voices = engine.getProperty('voices')
    engine.setProperty('voice', voices[speakvoices].id)
engine.setProperty('rate', speed)
print("")
print(f"You:- {Text}")
print("")
engine.say(Text)
engine.runAndWait()
# ----------------------Chrome Based----------------------
# from selenium import webdriver
# from selenium.webdriver.support.ui import Select
# from selenium.webdriver.chrome.options import Options
# from selenium.webdriver.common.by import By
# from time import sleep
# chrome_options = Options()
# chrome_options.add_argument('--log-level=3')
# chrome_options.headless = True
# Path = "DataBase\chromedriver.exe"
# driver = webdriver.Chrome(Path,options=chrome_options)
# driver.maximize_window()
# website = r"https://ttsmp3.com/text-to-speech/British%20English/"
# driver.get(website)
# ButtonSelaction = Select(driver.find_element(by=By.XPATH,value='/html/body/div[4]/div[2]/form/select'))
# ButtonSelaction.select_by_visible_text('British English / Brian')
# # ButtonSelaction.select_by_visible_text('Indian English / Raveena')
# def AI_Speak(Text):
# lengthtext = len(str(Text))
# if lengthtext==0:
# pass
# else:
# print("")
# print(f"Jarvis:- {Text}")
# print("")
# Data = str(Text)
# Xpathodsec = '/html/body/div[4]/div[2]/form/textarea'
# driver.find_element(By.XPATH,value=Xpathodsec).send_keys(Data)
# driver.find_element(By.XPATH,value='//*[@id="vorlesenbutton"]').click()
# driver.find_element(By.XPATH,value='/html/body/div[4]/div[2]/form/textarea').clear()
# if lengthtext>=30:
# sleep(4)
# elif lengthtext>=40:
# sleep(6)
# elif lengthtext>=55:
# sleep(8)
# elif lengthtext>=70:
# sleep(10)
# elif lengthtext>=100:
# sleep(13)
# elif lengthtext>=120:
# sleep(14)
# else:
# sleep(2)
# while True:
# aa=input(">> ")
# Speak(aa) | AI-Jarvis | /AI_Jarvis-0.0.3-py3-none-any.whl/AI_Jarvis/Speak.py | Speak.py |