code | package | path | filename
---|---|---|---|
usage(){
echo "
Written by Brian Bushnell
Last modified April 4, 2018
Description: Fuses sequences together, padding gaps with Ns.
Usage: fuse.sh in=<input file> out=<output file> pad=<number of Ns>
Parameters:
in=<file> The 'in=' flag is needed if the input file is not the
first parameter. 'in=stdin' will pipe from standard in.
out=<file> The 'out=' flag is needed if the output file is not the
second parameter. 'out=stdout' will pipe to standard out.
pad=300 Pad this many Ns between sequences.
maxlen=2g If positive, don't make fused sequences longer than this.
quality=30 Fake quality scores, if generating fastq from fasta.
overwrite=t (ow) Set to false to force the program to abort rather
than overwrite an existing file.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change
compression level; lower compression is faster.
fusepairs=f Default mode fuses all sequences into one long sequence.
Setting fusepairs=t will instead fuse each pair together.
name= Set name of output sequence. Default is the name of
the first input sequence.
Java Parameters:
-Xmx This will set Java's memory usage, overriding
autodetection. -Xmx20g will specify 20 gigs of RAM, and -Xmx200m will
specify 200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
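#Example invocation (a sketch; filenames are hypothetical):
#  fuse.sh in=scaffolds.fa out=fused.fa pad=300
#This fuses every sequence in scaffolds.fa into a single sequence, inserting 300 Ns between inputs.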
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx2g"
z2="-Xms2g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 2000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
fuse() {
local CMD="java $EA $EOOM $z -cp $CP jgi.FuseSequence $@"
echo $CMD >&2
eval $CMD
}
fuse "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/fuse.sh | fuse.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified February 20, 2020
Description: Estimates cardinality of unique kmers in sequence data.
Processes multiple kmer lengths simultaneously to produce a histogram.
Usage: kmercountmulti.sh in=<file> sweep=<20,100,8> out=<histogram output>
Parameters:
in=<file> (in1) Input file, or comma-delimited list of files.
in2=<file> Optional second file for paired reads.
out=<file> Histogram output. Default is stdout.
k= Comma-delimited list of kmer lengths to use.
sweep=min,max,incr Use incremented kmer values from min to max. For example,
sweep=20,26,2 is equivalent to k=20,22,24,26.
buckets=2048 Use this many buckets for counting; higher decreases
variance for large datasets. Must be a power of 2.
seed=-1 Use this seed for hash functions.
A negative number forces a random seed.
minprob=0 Set to a value between 0 and 1 to exclude kmers with a
lower probability of being correct.
hashes=1 Use this many hash functions. More hashes yield greater
accuracy, but H hashes take H times as long.
stdev=f Print standard deviations.
Shortcuts:
The # symbol will be substituted for 1 and 2.
For example:
kmercountmulti.sh in=read#.fq
...is equivalent to:
kmercountmulti.sh in1=read1.fq in2=read2.fq
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
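#Example invocation (filenames are hypothetical): estimate cardinality for k=20,28,...,100 and write a histogram.
#  kmercountmulti.sh in=reads.fq sweep=20,100,8 out=hist.txt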
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx500m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
function kmercountmulti() {
local CMD="java $EA $EOOM $z -cp $CP jgi.KmerCountMulti $@"
echo $CMD >&2
eval $CMD
}
kmercountmulti "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/kmercountmulti.sh | kmercountmulti.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified December 19, 2019
Description: Counts the number of unique kmers in a file.
Generates a kmer frequency histogram and genome size estimate (in peaks output),
and prints a file containing all kmers and their counts.
Supports K=1 to infinity, though not all values are allowed.
SEE ALSO: bbnorm.sh/khist.sh, loglog.sh, and kmercountmulti.sh.
Usage: kmercountexact.sh in=<file> khist=<file> peaks=<file>
Input may be fasta or fastq, compressed or uncompressed.
Output may be stdout or a file. out, khist, and peaks are optional.
Input parameters:
in=<file> Primary input file.
in2=<file> Second input file for paired reads.
amino=f Run in amino acid mode.
Output parameters:
out=<file> Print kmers and their counts. This produces a
huge file, so skip it if you only need the histogram.
fastadump=t Print kmers and counts as fasta versus 2-column tsv.
mincount=1 Only print kmers with at least this depth.
reads=-1 Only process this number of reads, then quit (-1 means all).
dumpthreads=-1 Use this number of threads for dumping kmers (-1 means auto).
Hashing parameters:
k=31 Kmer length (1-31 is fastest).
prealloc=t Pre-allocate memory rather than dynamically growing; faster and more memory-efficient. A float fraction (0-1) may be specified, default 1.
prefilter=0 If set to a positive integer, use a countmin sketch to ignore kmers with depth of that value or lower.
prehashes=2 Number of hashes for prefilter.
prefiltersize=0.2 Fraction of memory to use for prefilter.
minq=6 Ignore kmers containing bases with quality below this. (TODO)
minprob=0.0 Ignore kmers with overall probability of correctness below this.
threads=X Spawn X hashing threads (default is number of logical processors).
onepass=f If true, prefilter will be generated in same pass as kmer counts. Much faster but counts will be lower, by up to prefilter's depth limit.
rcomp=t Store and count each kmer together with its reverse-complement.
Histogram parameters:
khist=<file> Print kmer frequency histogram.
histcolumns=2 2 columns: (depth, count). 3 columns: (depth, rawCount, count).
histmax=100000 Maximum depth to print in histogram output.
histheader=t Set true to print a header line.
nzo=t (nonzeroonly) Only print lines for depths with a nonzero kmer count.
gchist=f Add an extra histogram column with the average GC%.
Smoothing parameters:
smoothkhist=f Smooth the output kmer histogram.
smoothpeaks=t Smooth the kmer histogram for peak-calling; this does not affect the output histogram.
smoothradius=1 Initial radius of progressive smoothing function.
maxradius=10 Maximum radius of progressive smoothing function.
progressivemult=2 Increment radius each time depth increases by this factor.
logscale=t Transform to log-scale prior to peak-calling.
logwidth=0.1 The larger the number, the smoother.
Peak calling parameters:
peaks=<file> Write the peaks to this file. Default is stdout.
Also contains the genome size estimate in bp.
minHeight=2 (h) Ignore peaks shorter than this.
minVolume=5 (v) Ignore peaks with less area than this.
minWidth=3 (w) Ignore peaks narrower than this.
minPeak=2 (minp) Ignore peaks with an X-value below this.
maxPeak=BIG (maxp) Ignore peaks with an X-value above this.
maxPeakCount=12 (maxpc) Print up to this many peaks (prioritizing height).
ploidy=-1 Specify ploidy; otherwise it will be autodetected.
Sketch parameters (for making a MinHashSketch):
sketch=<file> Write a minhash sketch to this file.
sketchlen=10000 Output the top 10000 kmers. Only kmers with at least mincount are included.
sketchname= Name of output sketch.
sketchid= taxID of output sketch.
Quality parameters:
qtrim=f Trim read ends to remove bases with quality below minq.
Values: t (trim both ends), f (neither end), r (right end only), l (left end only).
trimq=4 Trim quality threshold.
minavgquality=0 (maq) Reads with average quality (before trimming) below this will be discarded.
Overlap parameters (for overlapping paired-end reads only):
merge=f Attempt to merge reads before counting kmers.
ecco=f Error correct via overlap, but do not merge reads.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
"
}
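#Example invocation (filenames are hypothetical): generate a kmer-frequency histogram plus a peaks/genome-size estimate.
#  kmercountexact.sh in=reads.fq khist=khist.txt peaks=peaks.txt k=31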
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx1g"
z2="-Xms1g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
kmercountexact() {
local CMD="java $EA $EOOM $z $z2 -cp $CP jgi.KmerCountExact $@"
echo $CMD >&2
eval $CMD
}
kmercountexact "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/kmercountexact.sh | kmercountexact.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified November 12, 2019
Description: Creates a blacklist sketch from common kmers,
which occur in at least X different sketches or taxa.
BlacklistMaker2 makes blacklists from sketches rather than sequences.
It is advisable to make the input sketches larger than normal,
e.g. sizemult=2, because new kmers will be introduced in the final
sketches to replace the blacklisted kmers.
Usage: sketchblacklist.sh ref=<sketch files> out=<sketch file>
or sketchblacklist.sh *.sketch out=<sketch file>
or sketchblacklist.sh ref=taxa#.sketch out=<sketch file>
Standard parameters:
ref=<file> Sketch files.
out=<file> Output filename.
mintaxcount=20 Retain keys occurring in at least this many taxa.
length=300000 Retain at most this many keys (prioritizing high count).
k=32,24 Kmer lengths, 1-32.
mode=taxa Possible modes:
sequence: Count kmers once per sketch.
taxa: Count kmers once per taxonomic unit.
name= Set the blacklist sketch name.
delta=t Delta-compress sketches.
a48=t Encode sketches as ASCII-48 rather than hex.
amino=f Amino-acid mode.
Taxonomy-specific flags:
tree= Specify a taxtree file. On Genepool, use 'auto'.
gi= Specify a gitable file. On Genepool, use 'auto'.
accession= Specify one or more comma-delimited NCBI accession to
taxid files. On Genepool, use 'auto'.
taxlevel=subspecies Taxa hits below this rank will be promoted and merged
with others.
tossjunk=t For taxa mode, discard taxonomically uninformative
sequences. This includes sequences with no taxid,
with a tax level of NO_RANK, or a parent taxid of LIFE.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
For more detailed information, please read /bbmap/docs/guides/BBSketchGuide.txt.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
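#Example invocation (filenames are hypothetical): build a blacklist from numbered per-taxa sketch files.
#  sketchblacklist2.sh ref=taxa#.sketch out=blacklist.sketch mintaxcount=20 mode=taxa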
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx31g"
z2="-Xms31g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 4000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
sketchblacklist() {
local CMD="java $EA $EOOM $z $z2 -cp $CP sketch.BlacklistMaker2 $@"
echo $CMD >&2
eval $CMD
}
sketchblacklist "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/sketchblacklist2.sh | sketchblacklist2.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified December 19, 2019
Description: Compares query sketches to reference sketches hosted on a
remote server via the Internet. The input can be sketches made by sketch.sh,
or fasta/fastq files from which SendSketch will generate sketches.
Only sketches will be sent, not sequences.
Please read bbmap/docs/guides/BBSketchGuide.txt for more information.
Usage:
sendsketch.sh in=file
To change nucleotide servers, add the server name, e.g.:
sendsketch.sh in=file nt
For the protein server with nucleotide input:
sendsketch.sh in=file protein
For the protein server with amino acid input:
sendsketch.sh in=file amino protein
Standard parameters:
in=<file> Sketch or fasta file to compare.
out=stdout Comparison output. Can be set to a file instead.
outsketch= Optional, to write the sketch to a file.
local=f For local files, have the server load the sketches.
Allows use of whitelists; recommended for Silva.
Local can only be used when the client and server access
the same filesystem - e.g., Genepool and Cori.
address= Address of remote server. Default address:
https://refseq-sketch.jgi-psf.org/sketch
You can also specify these abbreviations:
nt: nt server
refseq: Refseq server
silva: Silva server
protein: RefSeq prokaryotic amino acid sketches
img: IMG server (Not Yet Available)
mito: RefSeq mitochondrial server (NYA)
fungi: RefSeq fungi sketches (NYA)
Using an abbreviation automatically sets the address,
the blacklist, and k.
aws=f Set aws=t to use the AWS servers instead of NERSC,
for example when NERSC (or the whole SF Bay area) is down.
Sketch-making parameters:
mode=single Possible modes, for fasta input:
single: Generate one sketch per file.
sequence: Generate one sketch per sequence.
k=31 Kmer length, 1-32. This is automatic and does not need to
be set for JGI servers, only for locally-hosted servers.
samplerate=1 Set to a lower value to sample a fraction of input reads.
For raw reads (rather than an assembly), 1-3x coverage
gives best results, by reducing error kmers. Somewhat
higher is better for high-error-rate data like PacBio.
minkeycount=1 Ignore kmers that occur fewer times than this. Values
over 1 can be used with raw reads to avoid error kmers.
minprob=0.0001 Ignore kmers below this probability of correctness.
minqual=0 Ignore kmers spanning bases below this quality.
entropy=0.66 Ignore sequence with entropy below this value.
merge=f Merge paired reads prior to sketching.
amino=f Use amino acid mode. Input should be amino acids.
translate=f Call genes and translate to proteins. Input should be
nucleotides. Designed for prokaryotes.
sixframes=f Translate all 6 frames instead of predicting genes.
ssu=t Scan for and retain full-length SSU sequence.
printssusequence=f Print the query SSU sequence (JSON mode only).
refid= Instead of a query file, specify a reference sketch by name
or taxid; e.g. refid=h.sapiens or refid=9606.
Size parameters:
size=10000 Desired size of sketches (if not using autosize).
mgf=0.01 (maxfraction) Max fraction of genomic kmers to use.
minsize=100 Do not generate sketches for genomes smaller than this.
autosize=t Use flexible sizing instead of fixed-length. This is
nonlinear; a human sketch is only ~6x a bacterial sketch.
sizemult=1 Multiply the autosized size of sketches by this factor.
Normally a bacterial-size genome will get a sketch size
of around 10000; if autosizefactor=2, it would be ~20000.
density= If this flag is set (to a number between 0 and 1),
autosize and sizemult are ignored, and this fraction of
genomic kmers are used. For example, at density=0.001,
a 4.5Mbp bacteria will get a 4500-kmer sketch.
sketchheapfactor=4 If minkeycount>1, temporarily track this many kmers until
counts are known and low-count kmers are discarded.
Taxonomy and filtering parameters:
level=2 Only report the best record per taxa at this level.
Either level names or numbers may be used.
0: disabled
1: subspecies
2: species
3: genus
...etc
include= Restrict output to organisms in these clades.
May be a comma-delimited list of names or NCBI TaxIDs.
includelevel=0 Promote the include list to this taxonomic level.
For example, include=h.sapiens includelevel=phylum
would only include organisms in the same phylum as human.
includestring= Only report records whose name contains this string.
exclude= Ignore organisms in these clades.
May be a comma-delimited list of names or NCBI TaxIDs.
excludelevel=0 Promote the exclude list to this taxonomic level.
For example, exclude=h.sapiens excludelevel=phylum
would exclude all organisms in the same phylum as human.
excludestring= Do not report records whose name contains this string.
banunclassified=f Ignore organisms descending from nodes like
'unclassified Bacteria'
banvirus=f Ignore viruses.
requiressu=f Ignore records without SSUs.
minrefsize=0 Ignore ref sketches smaller than this (unique kmers).
minrefsizebases=0 Ignore ref sketches smaller than this (total base pairs).
Output format:
format=2 2: Default format with, per query, one query header line;
one column header line; and one reference line per hit.
3: One line per hit, with columns query, reference, ANI,
and sizeRatio.
4: JSON (format=json also works).
5: Constellation (format=constellation also works).
usetaxidname=f For format 3, print the taxID in the name column.
usetaxname For format 3, print the taxonomic name in the name column.
useimgname For format 3, print the img ID in the name column.
d3=f Output in JSON format, with a tree for visualization.
Output columns (for format=2):
printall=f Enable all output columns.
printani=t (ani) Print average nucleotide identity estimate.
completeness=t Genome completeness estimate.
score=f Score (used for sorting the output).
printmatches=t Number of kmer matches to reference.
printlength=f Number of kmers compared.
printtaxid=t NCBI taxID.
printimg=f IMG identifier (only for IMG data).
printgbases=f Number of genomic bases.
printgkmers=f Number of genomic kmers.
printgsize=t Estimated number of unique genomic kmers.
printgseqs=t Number of sequences (scaffolds/reads).
printtaxname=t Name associated with this taxID.
printname0=f (pn0) Original sequence name.
printqfname=t Query filename.
printrfname=f Reference filename.
printtaxa=f Full taxonomy of each record.
printcontam=t Print contamination estimate, and factor contaminant kmers
into calculations. Kmers are considered contaminant if
present in some ref sketch but not the current one.
printunique=t Number of matches unique to this reference.
printunique2=f Number of matches unique to this reference's taxa.
printunique3=f Number of query kmers unique to this reference's taxa,
regardless of whether they are in this reference sketch.
printnohit=f Number of kmers that don't hit anything.
printrefhits=f Average number of ref sketches hit by shared kmers.
printgc=f GC content.
printucontam=f Contam hits that hit exactly one reference sketch.
printcontam2=f Print contamination estimate using only kmer hits
to unrelated taxa.
contamlevel=species Taxonomic level to use for contam2/unique2/unique3.
NOTE: unique2/unique3/contam2/refhits require an index.
printdepth=f (depth) Print average depth of sketch kmers; intended
for shotgun read input.
printdepth2=f (depth2) Print depth compensating for genomic repeats.
Requires reference sketches to be generated with depth.
actualdepth=t If this is false, the raw average count is printed.
If true, the raw average (observed depth) is converted
to estimated actual depth (including uncovered areas).
printvolume=f (volume) Product of average depth and matches.
printca=f Print common ancestor, if query taxID is known.
printcal=f Print common ancestor tax level, if query taxID is known.
recordsperlevel=0 If query TaxID is known, and this is positive, print at
most this many records per common ancestor level.
Sorting:
sortbyscore=t Default sort order is by score.
sortbydepth=f Include depth as a factor in sort order.
sortbydepth2=f Include depth2 as a factor in sort order.
sortbyvolume=f Include volume as a factor in sort order.
sortbykid=f Sort strictly by KID.
sortbyani=f Sort strictly by ANI/AAI/WKID.
sortbyhits=f Sort strictly by the number of kmer hits.
Other output parameters:
minhits=3 (hits) Only report records with at least this many hits.
minani=0 (ani) Only report records with at least this ANI (0-1).
minwkid=0.0001 (wkid) Only report records with at least this WKID (0-1).
anifromwkid=t Calculate ani from wkid. If false, use kid.
minbases=0 Ignore ref sketches of sequences shorter than this.
minsizeratio=0 Don't compare sketches if the smaller genome is less than
this fraction of the size of the larger.
records=20 Report at most this many best-matching records.
color=family Color records at the family level. color=f will disable.
Colors work in most terminals but may cause odd characters
to appear in text editors. So, color defaults to f if
writing to a file.
intersect=f Print sketch intersections. delta=f is suggested.
Metadata parameters (optional, for the query sketch header):
taxid=-1 Set the NCBI taxid.
imgid=-1 Set the IMG id.
spid=-1 Set the sequencing project id (JGI-specific).
name= Set the name (taxname).
name0= Set name0 (normally the first sequence header).
fname= Set fname (normally the file name).
meta_= Set an arbitrary metadata field.
For example, meta_Month=March.
Other parameters:
requiredmeta= (rmeta) Required optional metadata values. For example:
rmeta=subunit:ssu,source:silva
bannedmeta= (bmeta) Forbidden optional metadata values.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
For more detailed information, please read /bbmap/docs/guides/BBSketchGuide.txt.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
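#Example invocation (filename is hypothetical): sketch local contigs and query the default RefSeq server.
#  sendsketch.sh in=contigs.fa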
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
sendsketch() {
local CMD="java $EA $EOOM $z -cp $CP sketch.SendSketch $@"
# echo $CMD >&2
eval $CMD
}
sendsketch "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/sendsketch.sh | sendsketch.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified September 30, 2019
Description: Uses kmer counts to assemble contigs, extend sequences,
or error-correct reads. Tadpole has no upper bound for kmer length,
but some values are not supported. Specifically, it allows 1-31,
multiples of 2 from 32-62, multiples of 3 from 63-93, etc.
Please read bbmap/docs/guides/TadpoleGuide.txt for more information.
Usage:
Assembly: tadpole.sh in=<reads> out=<contigs>
Extension: tadpole.sh in=<reads> out=<extended> mode=extend
Correction: tadpole.sh in=<reads> out=<corrected> mode=correct
Recommended parameters for optimal assembly:
tadpole.sh in=<reads> out=<contigs> shave rinse pop k=<50-70% of read length>
Extension and correction may be done simultaneously. Error correction on
multiple files may be done like this:
tadpole.sh in=libA_r1.fq,libA_merged.fq in2=libA_r2.fq,null extra=libB_r1.fq out=ecc_libA_r1.fq,ecc_libA_merged.fq out2=ecc_libA_r2.fq,null mode=correct
Extending contigs with reads could be done like this:
tadpole.sh in=contigs.fa out=extended.fa el=100 er=100 mode=extend extra=reads.fq k=62
Input parameters:
in=<file> Primary input file for reads to use as kmer data.
in2=<file> Second input file for paired data.
extra=<file> Extra files for use as kmer data, but not for error-
correction or extension.
reads=-1 Only process this number of reads, then quit (-1 means all).
NOTE: in, in2, and extra may also be comma-delimited lists of files.
Output parameters:
out=<file> Write contigs (in contig mode) or corrected/extended
reads (in other modes).
out2=<file> Second output file for paired output.
outd=<file> Write discarded reads, if using junk-removal flags.
dot=<file> Write a contigs connectivity graph (partially implemented)
dump=<file> Write kmers and their counts.
fastadump=t Write kmers and counts as fasta versus 2-column tsv.
mincounttodump=1 Only dump kmers with at least this depth.
showstats=t Print assembly statistics after writing contigs.
Prefiltering parameters:
prefilter=0 If set to a positive integer, use a countmin sketch
to ignore kmers with depth of that value or lower.
prehashes=2 Number of hashes for prefilter.
prefiltersize=0.2 (pff) Fraction of memory to use for prefilter.
minprobprefilter=t (mpp) Use minprob for the prefilter.
prepasses=1 Use this many prefiltering passes; higher is more thorough
if the filter is very full. Set to 'auto' to iteratively
prefilter until the remaining kmers will fit in memory.
onepass=f If true, prefilter will be generated in same pass as kmer
counts. Much faster but counts will be lower, by up to
prefilter's depth limit.
filtermem=0 Allows manually specifying prefilter memory in bytes, for
deterministic runs. 0 will set it automatically.
Hashing parameters:
k=31 Kmer length (1 to infinity). Memory use increases with K.
prealloc=t Pre-allocate memory rather than dynamically growing;
faster and more memory-efficient. A float fraction (0-1)
may be specified; default is 1.
minprob=0.5 Ignore kmers with overall probability of correctness below this.
minprobmain=t (mpm) Use minprob for the primary kmer counts.
threads=X Spawn X worker threads; default is number of logical processors.
buildthreads=X Spawn X contig-building threads. If not set, defaults to the same
as threads. Setting this to 1 will make contigs deterministic.
rcomp=t Store and count each kmer together with its reverse-complement.
coremask=t All kmer extensions share the same hashcode.
fillfast=t Speed up kmer extension lookups.
Assembly parameters:
mincountseed=3 (mcs) Minimum kmer count to seed a new contig or begin extension.
mincountextend=2 (mce) Minimum kmer count to continue extension of a read or contig.
It is recommended that mce=1 for low-depth metagenomes.
mincountretain=0 (mincr) Discard kmers with count below this.
maxcountretain=INF (maxcr) Discard kmers with count above this.
branchmult1=20 (bm1) Min ratio of 1st to 2nd-greatest path depth at high depth.
branchmult2=3 (bm2) Min ratio of 1st to 2nd-greatest path depth at low depth.
branchlower=3 (blc) Max value of 2nd-greatest path depth to be considered low.
minextension=2 (mine) Do not keep contigs that did not extend at least this much.
mincontig=auto (minc) Do not write contigs shorter than this.
mincoverage=1 (mincov) Do not write contigs with average coverage below this.
maxcoverage=inf (maxcov) Do not write contigs with average coverage above this.
trimends=0 (trim) Trim contig ends by this much. Trimming by K/2
may yield more accurate genome size estimation.
trimcircular=t Trim one end of contigs ending in LOOP/LOOP by K-1,
to eliminate the overlapping portion.
contigpasses=16 Build contigs with decreasing seed depth for this many iterations.
contigpassmult=1.7 Ratio between seed depth of two iterations.
ownership=auto For concurrency; do not touch.
processcontigs=f Explore the contig connectivity graph.
popbubbles=t (pop) Pop bubbles; increases contiguity. Requires
additional time and memory and forces processcontigs=t.
Processing modes:
mode=contig contig: Make contigs from kmers.
extend: Extend sequences to be longer, and optionally
perform error correction.
correct: Error correct only.
insert: Measure insert sizes.
discard: Discard low-depth reads, without error correction.
Extension parameters:
extendleft=100 (el) Extend to the left by at most this many bases.
extendright=100 (er) Extend to the right by at most this many bases.
ibb=t (ignorebackbranches) Do not stop at backward branches.
extendrollback=3 Trim a random number of bases, up to this many, on reads
that extend only partially. This prevents the creation
of sharp coverage discontinuities at branches.
Error-correction parameters:
ecc=f Error correct via kmer counts.
reassemble=t If ecc is enabled, use the reassemble algorithm.
pincer=f If ecc is enabled, use the pincer algorithm.
tail=f If ecc is enabled, use the tail algorithm.
eccfull=f If ecc is enabled, use tail over the entire read.
aggressive=f (aecc) Use aggressive error correction settings.
Overrides some other flags like errormult1 and deadzone.
conservative=f (cecc) Use conservative error correction settings.
Overrides some other flags like errormult1 and deadzone.
rollback=t Undo changes to reads that have lower coverage for
any kmer after correction.
markbadbases=0 (mbb) Any base fully covered by kmers with count below
this will have its quality reduced.
markdeltaonly=t (mdo) Only mark bad bases adjacent to good bases.
meo=t (markerrorreadsonly) Only mark bad bases in reads
containing errors.
markquality=0 (mq) Set quality scores for marked bases to this.
A level of 0 will also convert the base to an N.
errormult1=16 (em1) Min ratio between kmer depths to call an error.
errormult2=2.6 (em2) Alternate ratio between low-depth kmers.
errorlowerconst=3 (elc) Use mult2 when the lower kmer is at most this deep.
mincountcorrect=3 (mcc) Don't correct to kmers with count under this.
pathsimilarityfraction=0.45 (psf) Max difference ratio considered similar.
Controls whether a path appears to be continuous.
pathsimilarityconstant=3 (psc) Absolute differences below this are ignored.
errorextensionreassemble=5 (eer) Verify this many kmers before the error as
having similar depth, for reassemble.
errorextensionpincer=5 (eep) Verify this many additional bases after the
error as matching current bases, for pincer.
errorextensiontail=9 (eet) Verify additional bases before and after
the error as matching current bases, for tail.
deadzone=0 (dz) Do not try to correct bases within this distance of
read ends.
window=12 (w) Length of window to use in reassemble mode.
windowcount=6 (wc) If more than this many errors are found within a
window, halt correction in that direction.
qualsum=80 (qs) If the sum of the qualities of corrected bases within
a window exceeds this, halt correction in that direction.
rbi=t (requirebidirectional) Require agreement from both
directions when correcting errors in the middle part of
the read using the reassemble algorithm.
errorpath=1 (ep) For debugging purposes.
Junk-removal parameters (to only remove junk, set mode=discard):
tossjunk=f Remove reads that cannot be used for assembly.
This means they have no kmers above depth 1 (2 for paired
reads) and the outermost kmers cannot be extended.
Pairs are removed only if both reads fail.
tossdepth=-1 Remove reads containing kmers at or below this depth.
Pairs are removed if either read fails.
lowdepthfraction=0 (ldf) Require at least this fraction of kmers to be
low-depth to discard a read; range 0-1. 0 still
requires at least 1 low-depth kmer.
requirebothbad=f (rbb) Only discard pairs if both reads are low-depth.
tossuncorrectable (tu) Discard reads containing uncorrectable errors.
Requires error-correction to be enabled.
Shaving parameters:
shave=t Remove dead ends (aka hair).
rinse=t Remove bubbles.
wash= Set shave and rinse at the same time.
maxshavedepth=1 (msd) Shave or rinse kmers at most this deep.
exploredist=300 (sed) Quit after exploring this far.
discardlength=150 (sdl) Discard shavings up to this long.
Note: Shave and rinse can produce substantially better assemblies
for low-depth data, but they are very slow for large metagenomes.
Overlap parameters (for overlapping paired-end reads only):
merge=f Attempt to merge overlapping reads prior to
kmer-counting, and again prior to correction. Output
will still be unmerged pairs.
ecco=f Error correct via overlap, but do not merge reads.
testmerge=t Test kmer counts around the read merge junctions. If
it appears that the merge created new errors, undo it.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
"
}
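#Example assembly invocation using the recommended flags above (filenames are hypothetical;
#k=93 is an allowed kmer length and roughly 60% of a 150bp read):
#  tadpole.sh in=reads.fq out=contigs.fa shave rinse pop k=93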
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx14g"
z2="-Xms14g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 15000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
tadpole() {
local CMD="java $EA $EOOM $z $z2 -cp $CP assemble.Tadpole $@"
echo $CMD >&2
eval $CMD
}
tadpole "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/tadpole.sh | tadpole.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified October 19, 2017
Description: Normalizes read depth based on kmer counts.
Can also error-correct, bin reads by kmer depth, and generate a kmer depth histogram.
However, Tadpole has superior error-correction to BBNorm.
Please read bbmap/docs/guides/BBNormGuide.txt for more information.
Usage: bbnorm.sh in=<input> out=<reads to keep> outt=<reads to toss> hist=<histogram output>
Input parameters:
in=null Primary input. Use in2 for paired reads in a second file
in2=null Second input file for paired reads in two files
extra=null Additional files to use for input (generating hash table) but not for output
fastareadlen=2^31 Break up FASTA reads longer than this. Can be useful when processing scaffolded genomes
tablereads=-1 Use at most this many reads when building the hashtable (-1 means all)
kmersample=1 Process every nth kmer, and skip the rest
readsample=1 Process every nth read, and skip the rest
interleaved=auto May be set to true or false to force the input to be treated as paired and interleaved (or not), overriding autodetection.
qin=auto ASCII offset for input quality. May be 33 (Sanger), 64 (Illumina), or auto.
Output parameters:
out=<file> File for normalized or corrected reads. Use out2 for paired reads in a second file
outt=<file> (outtoss) File for reads that were excluded from primary output
reads=-1 Only process this number of reads, then quit (-1 means all)
sampleoutput=t Use sampling on output as well as input (not used if sample rates are 1)
keepall=f Set to true to keep all reads (e.g. if you just want error correction).
zerobin=f Set to true if you want kmers with a count of 0 to go in the 0 bin instead of the 1 bin in histograms.
Default is false, to prevent confusion about how there can be 0-count kmers.
The reason is that based on the 'minq' and 'minprob' settings, some kmers may be excluded from the bloom filter.
tmpdir=$TMPDIR This will specify a directory for temp files (only needed for multipass runs). If null, they will be written to the output directory.
usetempdir=t Allows enabling/disabling of temporary directory; if disabled, temp files will be written to the output directory.
qout=auto ASCII offset for output quality. May be 33 (Sanger), 64 (Illumina), or auto (same as input).
rename=f Rename reads based on their kmer depth.
Hashing parameters:
k=31 Kmer length (values under 32 are most efficient, but arbitrarily high values are supported)
bits=32 Bits per cell in bloom filter; must be 2, 4, 8, 16, or 32. Maximum kmer depth recorded is 2^cbits. Automatically reduced to 16 in 2-pass.
Large values decrease accuracy for a fixed amount of memory, so use the lowest number you can that will still capture highest-depth kmers.
hashes=3 Number of times each kmer is hashed and stored. Higher is slower.
Higher is MORE accurate if there is enough memory, and LESS accurate if there is not enough memory.
prefilter=f True is slower, but generally more accurate; filters out low-depth kmers from the main hashtable. The prefilter is more memory-efficient because it uses 2-bit cells.
prehashes=2 Number of hashes for prefilter.
prefilterbits=2 (pbits) Bits per cell in prefilter.
prefiltersize=0.35 Fraction of memory to allocate to prefilter.
buildpasses=1 More passes can sometimes increase accuracy by iteratively removing low-depth kmers
minq=6 Ignore kmers containing bases with quality below this
minprob=0.5 Ignore kmers with overall probability of correctness below this
threads=auto (t) Spawn exactly X hashing threads (default is number of logical processors). Total active threads may exceed X due to I/O threads.
rdk=t (removeduplicatekmers) When true, a kmer's count will only be incremented once per read pair, even if that kmer occurs more than once.
Normalization parameters:
fixspikes=f (fs) Do a slower, high-precision bloom filter lookup of kmers that appear to have an abnormally high depth due to collisions.
target=100 (tgt) Target normalization depth. NOTE: All depth parameters control kmer depth, not read depth.
For kmer depth Dk, read depth Dr, read length R, and kmer size K: Dr=Dk*(R/(R-K+1))
maxdepth=-1 (max) Reads will not be downsampled when below this depth, even if they are above the target depth.
mindepth=5 (min) Kmers with depth below this number will not be included when calculating the depth of a read.
minkmers=15 (mgkpr) Reads must have at least this many kmers over min depth to be retained. Aka 'mingoodkmersperread'.
percentile=54.0 (dp) Read depth is by default inferred from the 54th percentile of kmer depth, but this may be changed to any number 1-100.
uselowerdepth=t (uld) For pairs, use the depth of the lower read as the depth proxy.
deterministic=t (dr) Generate random numbers deterministically to ensure identical output between multiple runs. May decrease speed with a huge number of threads.
passes=2 (p) 1 pass is the basic mode. 2 passes (default) allow greater accuracy, error detection, and better control of output depth.
Error detection parameters:
hdp=90.0 (highdepthpercentile) Position in sorted kmer depth array used as proxy of a read's high kmer depth.
ldp=25.0 (lowdepthpercentile) Position in sorted kmer depth array used as proxy of a read's low kmer depth.
tossbadreads=f (tbr) Throw away reads detected as containing errors.
requirebothbad=f (rbb) Only toss bad pairs if both reads are bad.
errordetectratio=125 (edr) Reads with a ratio of at least this much between their high and low depth kmers will be classified as error reads.
highthresh=12 (ht) Threshold for high kmers. Kmers at this depth or above are considered non-error.
lowthresh=3 (lt) Threshold for low kmer. Kmers at this and below are always considered errors.
Error correction parameters:
ecc=f Set to true to correct errors. NOTE: Tadpole is now preferred for ecc as it does a better job.
ecclimit=3 Correct up to this many errors per read. If more are detected, the read will remain unchanged.
errorcorrectratio=140 (ecr) Adjacent kmers with a depth ratio of at least this much between will be classified as an error.
echighthresh=22 (echt) Threshold for high kmer. A kmer at this or above may be considered non-error.
eclowthresh=2 (eclt) Threshold for low kmer. Kmers at this and below are considered errors.
eccmaxqual=127 Do not correct bases with quality above this value.
aec=f (aggressiveErrorCorrection) Sets more aggressive values of ecr=100, ecclimit=7, echt=16, eclt=3.
cec=f (conservativeErrorCorrection) Sets more conservative values of ecr=180, ecclimit=2, echt=30, eclt=1, sl=4, pl=4.
meo=f (markErrorsOnly) Marks errors by reducing quality value of suspected errors; does not correct anything.
mue=t (markUncorrectableErrors) Marks errors only on uncorrectable reads; requires 'ecc=t'.
overlap=f (ecco) Error correct by read overlap.
Depth binning parameters:
lowbindepth=10 (lbd) Cutoff for low depth bin.
highbindepth=80 (hbd) Cutoff for high depth bin.
outlow=<file> Pairs in which both reads have a median below lbd go into this file.
outhigh=<file> Pairs in which both reads have a median above hbd go into this file.
outmid=<file> All other pairs go into this file.
Histogram parameters:
hist=<file> Specify a file to write the input kmer depth histogram.
histout=<file> Specify a file to write the output kmer depth histogram.
histcol=3 (histogramcolumns) Number of histogram columns, 2 or 3.
pzc=f (printzerocoverage) Print lines in the histogram with zero coverage.
histlen=1048576 Max kmer depth displayed in histogram. Also affects statistics displayed, but does not affect normalization.
Peak calling parameters:
peaks=<file> Write the peaks to this file. Default is stdout.
minHeight=2 (h) Ignore peaks shorter than this.
minVolume=5 (v) Ignore peaks with less area than this.
minWidth=3 (w) Ignore peaks narrower than this.
minPeak=2 (minp) Ignore peaks with an X-value below this.
maxPeak=BIG (maxp) Ignore peaks with an X-value above this.
maxPeakCount=8 (maxpc) Print up to this many peaks (prioritizing height).
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
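#Example invocation (filenames are hypothetical): normalize to a target kmer depth of 100.
#  bbnorm.sh in=reads.fq out=normalized.fq target=100 mindepth=5
#Worked example of the depth formula from the usage text, Dr=Dk*(R/(R-K+1)):
#  for R=150, K=31, Dk=100: Dr = 100*(150/(150-31+1)) = 100*(150/120) = 125.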
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx31g"
z2="-Xms31g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 31000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
normalize() {
local CMD="java $EA $EOOM $z $z2 -cp $CP jgi.KmerNormalize bits=32 $@"
echo $CMD >&2
eval $CMD
}
normalize "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/bbnorm.sh | bbnorm.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified December 19, 2018
Description: Calls peaks from a 2-column (x, y) tab-delimited histogram.
Usage: callpeaks.sh in=<histogram file> out=<output file>
Peak-calling parameters:
in=<file> 'in=stdin.fq' will pipe from standard in.
out=<file> Write the peaks to this file. Default is stdout.
minHeight=2 (h) Ignore peaks shorter than this.
minVolume=5 (v) Ignore peaks with less area than this.
minWidth=3 (w) Ignore peaks narrower than this.
minPeak=2 (minp) Ignore peaks with an X-value below this.
Useful when low-count kmers are filtered.
maxPeak=BIG (maxp) Ignore peaks with an X-value above this.
maxPeakCount=10 (maxpc) Print up to this many peaks (prioritizing height).
countColumn=1 (col) For multi-column input, this column, zero-based,
contains the counts.
ploidy=-1 Specify ploidy; otherwise it will be autodetected.
logscale=f Transform to log-scale prior to peak-calling. Useful
for kmer-frequency histograms.
Smoothing parameters:
smoothradius=0 Integer radius of triangle filter. Set above zero to
smooth data prior to peak-calling. Higher values are
smoother.
smoothprogressive=f Set to true to widen the filter as the x-coordinate
increases. Useful for kmer-frequency histograms.
maxradius=10 Maximum radius of progressive smoothing function.
progressivemult=2 Increment radius each time depth increases by this factor.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
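#Example invocation (filenames are hypothetical): call peaks from a kmer-frequency histogram,
#using the log-scale and progressive-smoothing options suggested above.
#  callpeaks.sh in=khist.txt out=peaks.txt logscale=t smoothprogressive=t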
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx200m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
stats() {
local CMD="java $EA $EOOM -Xmx120m -cp $CP jgi.CallPeaks $@"
# echo $CMD >&2
eval $CMD
}
stats "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/callpeaks.sh | callpeaks.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified September 17, 2018
This script requires at least 10GB RAM.
It is designed for NERSC and uses hard-coded paths.
Description: Removes all reads that map to selected common microbial contaminant genomes.
Removes approximately 98.5% of common contaminant reads, with zero false-positives to non-bacteria.
NOTE! This program uses hard-coded paths and will only run on Nersc systems.
Usage: removemicrobes.sh in=<input file> outu=<clean output file>
Input may be fasta or fastq, compressed or uncompressed.
Parameters:
in=<file> Input reads. Should already be adapter-trimmed.
outu=<file> Destination for clean reads.
outm=<file> Optional destination for contaminant reads.
threads=auto (t) Set number of threads to use; default is number of logical processors.
overwrite=t (ow) Set to false to force the program to abort rather than overwrite an existing file.
interleaved=auto (int) If true, forces fastq input to be paired and interleaved.
trim=t Trim read ends to remove bases with quality below minq.
Values: t (trim both ends), f (neither end), r (right end only), l (left end only).
untrim=t Undo the trimming after mapping.
minq=4 Trim quality threshold.
ziplevel=6 (zl) Set to 1 (lowest) through 9 (max) to change compression level; lower compression is faster.
build=1 Chooses which masking mode was used:
1 is most stringent and should be used for bacteria.
2 uses fewer bacteria for masking (only RefSeq references).
3 is only masked for plastids and entropy, for use on anything except bacteria.
4 is unmasked.
***** All BBMap parameters can be used; run bbmap.sh for more details. *****
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
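#Example invocation (filenames are hypothetical; requires the hard-coded NERSC reference path):
#  removemicrobes.sh in=trimmed.fq outu=clean.fq outm=contam.fq build=1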
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
JNI="-Djava.library.path=""$DIR""jni/"
JNI=""
z="-Xmx6000m"
z2="-Xms6000m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
function removemicrobes() {
local CMD="java $EA $EOOM $z $z2 $JNI -cp $CP align2.BBMap strictmaxindel=4 bwr=0.16 bw=12 ef=0.001 minhits=2 path=/global/projectb/sandbox/gaag/bbtools/commonMicrobes pigz unpigz zl=6 qtrim=r trimq=10 untrim idtag printunmappedcount ztd=2 kfilter=25 maxsites=1 k=13 minid=0.95 idfilter=0.95 minhits=2 build=1 bloomfilter $@"
echo $CMD >&2
eval $CMD
}
removemicrobes "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/removemicrobes.sh | removemicrobes.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified May 23, 2016
Description: Replaces read names with names from another file.
The other file can either be sequences or simply names, with
one name per line (and no > or @ symbols). If you use one name
per line, please give the file a .header extension.
Usage: replaceheaders.sh in=<file> hin=<headers file> out=<out file>
Parameters:
in= Input sequences. Use in2 for a second paired file.
hin= Header input sequences. Use hin2 for a second paired file.
out= Output sequences. Use out2 for a second paired file.
ow=f (overwrite) Overwrites files that already exist.
zl=4 (ziplevel) Set compression level, 1 (low) to 9 (max).
int=f (interleaved) Determines whether INPUT file is considered interleaved.
fastawrap=70 Length of lines in fasta output.
qin=auto ASCII offset for input quality. May be 33 (Sanger), 64 (Illumina), or auto.
qout=auto ASCII offset for output quality. May be 33 (Sanger), 64 (Illumina), or auto (same as input).
Renaming modes (if not default):
addprefix=f Rename the read by prepending the new name to the existing name.
Sampling parameters:
reads=-1 Set to a positive number to only process this many INPUT reads (or pairs), then quit.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
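#Example invocation (filenames are hypothetical): rename reads using one name per line from names.header.
#  replaceheaders.sh in=reads.fq hin=names.header out=renamed.fq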
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx1g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
function reheader() {
local CMD="java $EA $EOOM $z -cp $CP jgi.ReplaceHeaders $@"
echo $CMD >&2
eval $CMD
}
reheader "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/replaceheaders.sh | replaceheaders.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified Jan 7, 2020
Description: Compares query sketches to others, and prints their kmer identity.
The input can be sketches made by sketch.sh, or fasta/fastq files.
It's recommended to first sketch references with sketch.sh for large files,
or when taxonomic information is desired.
Please read bbmap/docs/guides/BBSketchGuide.txt for more information.
Usage: comparesketch.sh in=<file,file,file...> ref=<file,file,file...>
Alternative: comparesketch.sh in=<file,file,file...> file file file
Alternative: comparesketch.sh in=<file,file,file...> *.sketch
Alternative: comparesketch.sh alltoall *.sketch.gz
File parameters:
in=<file,file...> Sketches or fasta files to compare.
out=stdout Comparison output. Can be set to a file instead.
outsketch=<file> Optionally write sketch files generated from the input.
ref=<file,file...> List of sketches to compare against. Files given without
a prefix (ref=) will be treated as references,
so you can use *.sketch without ref=.
You can also do ref=nt#.sketch to load all numbered files
fitting that pattern.
On NERSC, you can use these abbreviations (e.g. ref=nt):
nt: nt sketches
refseq: Refseq sketches
silva: Silva sketches
img: IMG sketches
mito: RefSeq mitochondrial sketches
fungi: RefSeq fungi sketches
protein: RefSeq prokaryotic amino acid sketches
Using an abbreviation automatically sets the blacklist,
and k. If the reference is in amino space, the query must
also be in amino acid space, with the flag amino added.
If the query is in nucleotide space, use the flag
'translate', but this will only work for prokaryotes.
Blacklist and Whitelist parameters:
blacklist=<file> Ignore keys in this sketch file. Additionally, there are
built-in blacklists that can be specified:
nt: Blacklist for nt
refseq: Blacklist for Refseq
silva: Blacklist for Silva
img: Blacklist for IMG
whitelist=f Ignore keys that are not in the index. Requires index=t.
Sketch-making parameters:
mode=perfile Possible modes, for sequence input:
single: Generate one sketch.
sequence: Generate one sketch per sequence.
perfile: Generate one sketch per file.
sketchonly=f Don't run comparisons, just write the output sketch file.
k=31 Kmer length, 1-32. To maximize sensitivity and
specificity, dual kmer lengths may be used: k=31,24
Dual kmers are fastest if the shorter is a multiple
of 4. Query and reference k must match.
samplerate=1 Set to a lower value to sample a fraction of input reads.
For raw reads (rather than an assembly), 1-3x coverage
gives best results, by reducing error kmers. Somewhat
higher is better for high-error-rate data like PacBio.
minkeycount=1 Ignore kmers that occur fewer times than this. Values
over 1 can be used with raw reads to avoid error kmers.
minprob=0.0001 Ignore kmers below this probability of correctness.
minqual=0 Ignore kmers spanning bases below this quality.
entropy=0.66 Ignore sequence with entropy below this value.
merge=f Merge paired reads prior to sketching.
amino=f Use amino acid mode. Input should be amino acids.
translate=f Call genes and translate to proteins. Input should be
nucleotides. Designed for prokaryotes.
sixframes=f Translate all 6 frames instead of predicting genes.
ssu=t Scan for and retain full-length SSU sequence.
printssusequence=f Print the query SSU sequence (JSON mode only).
Size parameters:
size=10000 Desired size of sketches (if not using autosize).
mgf=0.01 (maxfraction) Max fraction of genomic kmers to use.
minsize=100 Do not generate sketches for genomes smaller than this.
autosize=t Use flexible sizing instead of fixed-length. This is
nonlinear; a human sketch is only ~6x a bacterial sketch.
sizemult=1 Multiply the autosized size of sketches by this factor.
Normally a bacterial-size genome will get a sketch size
of around 10000; if autosizefactor=2, it would be ~20000.
density= If this flag is set (to a number between 0 and 1),
autosize and sizemult are ignored, and this fraction of
genomic kmers are used. For example, at density=0.001,
a 4.5Mbp bacteria will get a 4500-kmer sketch.
sketchheapfactor=4 If minkeycount>1, temporarily track this many kmers until
counts are known and low-count kmers are discarded.
Sketch comparing parameters:
threads=auto Use this many threads for comparison.
index=auto Index the sketches for much faster searching.
Requires more memory and adds startup time.
Recommended true for many query sketches, false for few.
prealloc=f Preallocate the index for greater efficiency.
Can be set to a number between 0 and 1 to determine how
much of total memory should be used.
alltoall (ata) Compare all refs to all. Must be sketches.
compareself=f In all-to-all mode, compare a sketch to itself.
Taxonomy-related parameters:
tree=<file> Specify a TaxTree file. On Genepool, use tree=auto.
Only necessary for use with printtaxa and level.
Assumes comparisons are done against reference sketches
with known taxonomy information.
level=2 Only report the best record per taxa at this level.
Either level names or numbers may be used.
0: disabled
1: subspecies
2: species
3: genus
...etc
include= Restrict output to organisms in these clades.
May be a comma-delimited list of names or NCBI TaxIDs.
includelevel=0 Promote the include list to this taxonomic level.
For example, include=h.sapiens includelevel=phylum
would only include organisms in the same phylum as human.
includestring= Only report records whose name contains this string.
exclude= Ignore organisms in these clades.
May be a comma-delimited list of names or NCBI TaxIDs.
excludelevel=0 Promote the exclude list to this taxonomic level.
For example, exclude=h.sapiens excludelevel=phylum
would exclude all organisms in the same phylum as human.
excludestring= Do not report records whose name contains this string.
minlevel= Use this to restrict comparisons to distantly-related
organisms. Intended for finding misclassified organisms
using all-to-all comparisons. minlevel=order would only
report hits between organisms related at the order level
or higher, not between same species or genus.
banunclassified=f Ignore organisms descending from nodes like
'unclassified Bacteria'
banvirus=f Ignore viruses.
requiressu=f Ignore records without SSUs.
minrefsize=0 Ignore ref sketches smaller than this (unique kmers).
minrefsizebases=0 Ignore ref sketches smaller than this (total base pairs).
Output format:
format=2 2: Default format with, per query, one query header line;
one column header line; and one reference line per hit.
3: One line per hit, with columns query, reference, ANI,
and sizeRatio. Useful for all-to-all comparisons.
4: JSON (format=json also works).
5: Constellation (format=constellation also works).
usetaxidname=f For format 3, print the taxID in the name column.
usetaxname For format 3, print the taxonomic name in the name column.
useimgname For format 3, print the img ID in the name column.
Output columns (for format=2):
printall=f Enable all output columns.
printani=t (ani) Print average nucleotide identity estimate.
completeness=t Genome completeness estimate.
score=f Score (used for sorting the output).
printmatches=t Number of kmer matches to reference.
printlength=f Number of kmers compared.
printtaxid=t NCBI taxID.
printimg=f IMG identifier (only for IMG data).
printgbases=f Number of genomic bases.
printgkmers=f Number of genomic kmers.
printgsize=t Estimated number of unique genomic kmers.
printgseqs=t Number of sequences (scaffolds/reads).
printtaxname=t Name associated with this taxID.
printname0=f (pn0) Original sequence name.
printfname=t Query filename.
printtaxa=f Full taxonomy of each record.
printcontam=t Print contamination estimate, and factor contaminant kmers
into calculations. Kmers are considered contaminant if
present in some ref sketch but not the current one.
printunique=t Number of matches unique to this reference.
printunique2=f Number of matches unique to this reference's taxa.
printunique3=f Number of query kmers unique to this reference's taxa,
regardless of whether they are in this reference sketch.
printnohit=f Number of kmers that don't hit anything.
printrefhits=f Average number of ref sketches hit by shared kmers.
printgc=f GC content.
printucontam=f Contam hits that hit exactly one reference sketch.
printcontam2=f Print contamination estimate using only kmer hits
to unrelated taxa.
contamlevel=species Taxonomic level to use for contam2/unique2/unique3.
NOTE: unique2/unique3/contam2/refhits require an index.
printdepth=f (depth) Print average depth of sketch kmers; intended
for shotgun read input.
printdepth2=f (depth2) Print depth compensating for genomic repeats.
Requires reference sketches to be generated with depth.
actualdepth=t If this is false, the raw average count is printed.
If true, the raw average (observed depth) is converted
to estimated actual depth (including uncovered areas).
printvolume=f (volume) Product of average depth and matches.
printca=f Print common ancestor, if query taxID is known.
printcal=f Print common ancestor tax level, if query taxID is known.
recordsperlevel=0 If query TaxID is known, and this is positive, print this
many records per common ancestor level.
Sorting:
sortbyscore=t Default sort order is by score, a composite metric.
sortbydepth=f Include depth as a factor in sort order.
sortbydepth2=f Include depth2 as a factor in sort order.
sortbyvolume=f Include volume as a factor in sort order.
sortbykid=f Sort strictly by KID.
sortbyani=f Sort strictly by ANI/AAI/WKID.
sortbyhits=f Sort strictly by the number of kmer hits.
Other output parameters:
minhits=3 (hits) Only report records with at least this many hits.
minani=0 (ani) Only report records with at least this ANI (0-1).
minwkid=0.0001 (wkid) Only report records with at least this WKID (0-1).
anifromwkid=t Calculate ani from wkid. If false, use kid.
minbases=0 Ignore ref sketches of sequences shorter than this.
minsizeratio=0 Don't compare sketches if the smaller genome is less than
this fraction of the size of the larger.
records=20 Report at most this many best-matching records.
color=family Color records at the family level. color=f will disable.
Colors work in most terminals but may cause odd characters
to appear in text editors. So, color defaults to f if
writing to a file. Requires the taxtree to be loaded.
intersect=f Print sketch intersections. delta=f is suggested.
Metadata flags (optional, for the query sketch header):
taxid=-1 Set the NCBI taxid.
imgid=-1 Set the IMG id.
spid=-1 Set the JGI sequencing project id.
name= Set the name (taxname).
name0= Set name0 (normally the first sequence header).
fname= Set fname (normally the file name).
meta_= Set an arbitrary metadata field.
For example, meta_Month=March.
Other parameters:
requiredmeta= (rmeta) Required optional metadata values. For example:
rmeta=subunit:ssu,source:silva
bannedmeta= (bmeta) Forbidden optional metadata values.
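Example (illustrative; the filenames are placeholders, and in=/ref= are assumed to be the standard query/reference flags from the input section of this help; other flags are described above):
comparesketch.sh in=query.sketch ref=refs.sketch format=3 records=20 minani=0.8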
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
For more detailed information, please read /bbmap/docs/guides/BBSketchGuide.txt.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
comparesketch() {
local CMD="java $EA $EOOM $z $z2 -cp $CP sketch.CompareSketch $@"
# echo $CMD >&2
eval $CMD
}
comparesketch "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/comparesketch.sh | comparesketch.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified December 19, 2019
Description: Merges multiple sketches into a single sketch.
Please read bbmap/docs/guides/BBSketchGuide.txt for more information.
Usage: mergesketch.sh in=a.sketch,b.sketch out=c.sketch
With wildcards: mergesketch.sh *.sketch out=c.sketch
Standard parameters:
in=<file> Input sketches or fasta files; may be a comma-delimited
list. in= is optional so wildcards may be used.
out=<file> Output sketch.
amino=f Use amino acid mode.
Sketch-making parameters:
mode=single Possible modes, for fasta input:
single: Generate one sketch per file.
sequence: Generate one sketch per sequence.
autosize=t Produce an output sketch of whatever size the union
happens to be.
size= Restrict output sketch to this upper bound of size.
k=32,24 Kmer length, 1-32.
keyfraction=0.2 Only consider this upper fraction of keyspace.
minkeycount=1 Ignore kmers that occur fewer times than this. Values
over 1 can be used with raw reads to avoid error kmers.
depth=f Retain kmer counts if available.
Metadata parameters: (if blank the values of the first sketch will be used)
taxid=-1 Set the NCBI taxid.
imgid=-1 Set the IMG id.
spid=-1 Set the JGI sequencing project id.
name= Set the name (taxname).
name0= Set name0 (normally the first sequence header).
fname= Set fname (normally the file name).
meta_= Set an arbitrary metadata field.
For example, meta_Month=March.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
For more detailed information, please read /bbmap/docs/guides/BBSketchGuide.txt.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
mergesketch() {
local CMD="java $EA $EOOM $z -cp $CP sketch.MergeSketch $@"
# echo $CMD >&2
eval $CMD
}
mergesketch "$@"
usage(){
echo "
Written by Brian Bushnell
Last modified July 6, 2015
Description: Filters lines by exact match or substring.
Usage: filterlines.sh in=<file> out=<file> names=<file> include=<t/f>
Parameters:
include=f Set to 'true' to include the filtered names rather than excluding them.
prefix=f Allow matching of only the line's prefix (all characters up to first whitespace).
substring=f Allow one name to be a substring of the other, rather than a full match.
f: No substring matching.
t: Bidirectional substring matching.
line: Allow input lines to be substrings of names in list.
name: Allow names in list to be substrings of input lines.
case=t (casesensitive) Match case also.
ow=t (overwrite) Overwrites files that already exist.
app=f (append) Append to files that already exist.
zl=4 (ziplevel) Set compression level, 1 (low) to 9 (max).
names= A list of strings or files, comma-delimited. Files must have one name per line.
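Example (illustrative; the filenames are placeholders, using only the flags described above):
filterlines.sh in=data.txt out=filtered.txt names=badlines.txt include=f substring=t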
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
To read from stdin, set 'in=stdin'. The format should be specified with an extension, like 'in=stdin.fq.gz'
To write to stdout, set 'out=stdout'. The format should be specified with an extension, like 'out=stdout.fasta'
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx800m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 800m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
function filterlines() {
local CMD="java $EA $EOOM $z -cp $CP driver.FilterLines $@"
echo $CMD >&2
eval $CMD
}
filterlines "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/filterlines.sh | filterlines.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified September 1, 2016
Description: Filters reads by name.
Usage: filterbyname.sh in=<file> in2=<file2> out=<outfile> out2=<outfile2> names=<string,string,string> include=<t/f>
in2 and out2 are for paired reads and are optional.
If input is paired and there is only one output file, it will be written interleaved.
Important! Leading > and @ symbols are NOT part of sequence names; they are part of
the fasta, fastq, and sam specifications. Therefore, this is correct:
names=e.coli_K12
And these are incorrect:
names=>e.coli_K12
[email protected]_K12
Parameters:
include=f Set to 'true' to include the filtered names rather than excluding them.
substring=f Allow one name to be a substring of the other, rather than a full match.
f: No substring matching.
t: Bidirectional substring matching.
header: Allow input read headers to be substrings of names in list.
name: Allow names in list to be substrings of input read headers.
prefix=f Allow names to match read header prefixes.
case=t (casesensitive) Match case also.
ow=t (overwrite) Overwrites files that already exist.
app=f (append) Append to files that already exist.
zl=4 (ziplevel) Set compression level, 1 (low) to 9 (max).
int=f (interleaved) Determines whether INPUT file is considered interleaved.
names= A list of strings or files. The files can have one name per line, or
be a standard read file (fasta, fastq, or sam).
minlen=0 Do not output reads shorter than this.
ths=f (truncateheadersymbol) Ignore a leading @ or > symbol in the names file.
tws=f (truncatewhitespace) Ignore leading or trailing whitespace in the names file.
truncate=f Set both ths and tws at the same time.
Positional parameters:
These optionally allow you to output only a portion of a sequence. Zero-based, inclusive.
Intended for a single sequence and include=t mode.
from=-1 Only print bases starting at this position.
to=-1 Only print bases up to this position.
range= Set from and to with a single flag.
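Example (illustrative; the filenames are placeholders, and e.coli_K12 is the sequence name from the note above):
filterbyname.sh in=reads.fq out=filtered.fq names=e.coli_K12 include=t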
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
To read from stdin, set 'in=stdin'. The format should be specified with an extension, like 'in=stdin.fq.gz'
To write to stdout, set 'out=stdout'. The format should be specified with an extension, like 'out=stdout.fasta'
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx800m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 800m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
function filterbyname() {
local CMD="java $EA $EOOM $z -cp $CP driver.FilterReadsByName $@"
echo $CMD >&2
eval $CMD
}
filterbyname "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/filterbyname.sh | filterbyname.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified Jan 7, 2020
Description: Shrinks sketches to a smaller fixed length.
Please read bbmap/docs/guides/BBSketchGuide.txt for more information.
Usage: subsketch.sh in=file.sketch out=sub.sketch size=1000 autosize=f
Bulk usage: subsketch.sh in=big#.sketch out=small#.sketch sizemult=0.5
Standard parameters:
in=<file> Input sketch file containing one or more sketches.
out=<file> Output sketch file.
size=10000 Size of sketches to generate, if autosize=f.
autosize=t Autosize sketches based on genome size.
sizemult=1 Adjust default sketch autosize by this factor.
blacklist= Apply a blacklist to the sketch before resizing.
files=31 If the output filename contains a # symbol,
spread the output across this many files, replacing
the # with a number.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
For more detailed information, please read /bbmap/docs/guides/BBSketchGuide.txt.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
subsketch() {
local CMD="java $EA $EOOM $z -cp $CP sketch.SubSketch $@"
# echo $CMD >&2
eval $CMD
}
subsketch "$@"
usage(){
echo "
Written by Brian Bushnell
Last modified Jan 7, 2020
Description: Splits sequences according to their taxonomy,
as determined by the sequence name. Sequences should
be labeled with a gi number, NCBI taxID, or species name.
Usage: splitbytaxa.sh in=<input file> out=<output pattern> tree=<tree file> table=<table file> level=<name or number>
Input may be fasta or fastq, compressed or uncompressed.
Standard parameters:
in=<file> Primary input.
out=<file> Output pattern; must contain % symbol.
overwrite=f (ow) Set to false to force the program to abort rather than
overwrite an existing file.
showspeed=t (ss) Set to 'f' to suppress display of processing speed.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression
level; lower compression is faster.
Processing parameters:
level=phylum Taxonomic level, such as phylum. Filtering will operate on
sequences within the same taxonomic level as specified ids.
tree= A taxonomic tree made by TaxTree, such as tree.taxtree.gz.
table= A table translating gi numbers to NCBI taxIDs.
Only needed if gi numbers will be used.
On Genepool, use 'tree=auto table=auto'.
* Note *
Tree and table files are in /global/projectb/sandbox/gaag/bbtools/tax
For non-Genepool users, or to make new ones, use taxtree.sh and gitable.sh
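Example (illustrative; the filenames are placeholders, and the output pattern must contain a % symbol):
splitbytaxa.sh in=seqs.fa out=split_%.fa tree=tree.taxtree.gz level=genus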
Java Parameters:
-Xmx This will set Java's memory usage, overriding automatic
memory detection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify
200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 1000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
splitbytaxa() {
local CMD="java $EA $EOOM $z -cp $CP tax.SplitByTaxa $@"
echo $CMD >&2
eval $CMD
}
splitbytaxa "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/splitbytaxa.sh | splitbytaxa.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified January 22, 2020
Description: Splits a file of various rRNAs into one file per type
(16S, 18S, 5S, 23S).
Usage: splitribo.sh in=<file,file> out=<pattern>
Standard parameters:
in=<file> Input file.
out=<pattern> Output file pattern, such as out_#.fa. The # symbol is
required and will be substituted by the type name, such as
16S, to make out_16S.fa, for example.
overwrite=f (ow) Set to false to force the program to abort rather than
overwrite an existing file.
ziplevel=9 (zl) Set to 1 (lowest) through 9 (max) to change compression
level; lower compression is faster.
types=16S,18S,5S,23S,m16S,m18S,p16S
Align to these sequences. Fewer types is faster. m16S
and m18S are mitochondrial; p16S is plastid (chloroplast).
Processing parameters:
minid=0.59 Ignore alignments with identity lower than this to a
consensus sequence.
refineid=0.70 Refine score by aligning to clade-specific consensus if
the best alignment to a universal consensus is below this.
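Example (illustrative; the filenames are placeholders, and the # symbol in the output pattern is required):
splitribo.sh in=ribo.fa out=split_#.fa types=16S,18S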
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will
specify 200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 4000m 42
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
splitribo() {
local CMD="java $EA $EOOM $z -cp $CP prok.SplitRibo $@"
echo $CMD >&2
eval $CMD
}
splitribo "$@"
usage(){
echo "
Written by Brian Bushnell
Last modified October 17, 2017
Description: Masks sequences of low-complexity, or containing repeat kmers, or covered by mapped reads.
By default this program will mask using entropy with a window=80 and entropy=0.75
Please read bbmap/docs/guides/BBMaskGuide.txt for more information.
Usage: bbmask.sh in=<file> out=<file> sam=<file,file,...file>
Input may be stdin or a fasta or fastq file, raw or gzipped.
sam is optional, but may be a comma-delimited list of sam files to mask.
Sam files may also be used as arguments without sam=, so you can use *.sam for example.
If you pipe via stdin/stdout, please include the file type; e.g. for gzipped fasta input, set in=stdin.fa.gz
Input parameters:
in=<file> Input sequences to mask. 'in=stdin.fa' will pipe from standard in.
sam=<file,file> Comma-delimited list of sam files. Optional. Their mapped coordinates will be masked.
touppercase=f (tuc) Change all letters to upper-case.
interleaved=auto (int) If true, forces fastq input to be paired and interleaved.
qin=auto ASCII offset for input quality. May be 33 (Sanger), 64 (Illumina), or auto.
Output parameters:
out=<file> Write masked sequences here. 'out=stdout.fa' will pipe to standard out.
overwrite=t (ow) Set to false to force the program to abort rather than overwrite an existing file.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression level; lower compression is faster.
fastawrap=70 Length of lines in fasta output.
qout=auto ASCII offset for output quality. May be 33 (Sanger), 64 (Illumina), or auto (same as input).
Processing parameters:
threads=auto (t) Set number of threads to use; default is number of logical processors.
maskrepeats=f (mr) Mask areas covered by exact repeat kmers.
kr=5 Kmer size to use for repeat detection (1-15). Use minkr and maxkr to sweep a range of kmers.
minlen=40 Minimum length of repeat area to mask.
mincount=4 Minimum number of repeats to mask.
masklowentropy=t (mle) Mask areas with low complexity by calculating entropy over a window for a fixed kmer size.
ke=5 Kmer size to use for entropy calculation (1-15). Use minke and maxke to sweep a range. Large ke uses more memory.
window=80 (w) Window size for entropy calculation.
entropy=0.70 (e) Mask windows with entropy under this value (0-1). 0.0001 will mask only homopolymers and 1 will mask everything.
lowercase=f (lc) Convert masked bases to lower case. Default is to convert them to N.
split=f Split into unmasked pieces and discard masked pieces.
Coverage parameters (only relevant if sam files are specified):
mincov=-1 If nonnegative, mask bases with coverage outside this range.
maxcov=-1 If nonnegative, mask bases with coverage outside this range.
delcov=t Include deletions when calculating coverage.
NOTE: If neither mincov nor maxcov are set, all covered bases will be masked.
Other parameters:
pigz=t Use pigz to compress. If argument is a number, that will set the number of pigz threads.
unpigz=t Use pigz to decompress.
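Example (illustrative; the filenames are placeholders, using only the flags described above):
bbmask.sh in=ref.fa out=masked.fa maskrepeats=t entropy=0.7 window=80 lowercase=t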
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx1g"
z2="-Xms1g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
bbmask() {
local CMD="java $EA $EOOM $z $z2 -cp $CP jgi.BBMask $@"
echo $CMD >&2
eval $CMD
}
bbmask "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/bbmask.sh | bbmask.sh |
usage(){
echo "
BBSplit
Written by Brian Bushnell, from Dec. 2010 - present
Last modified June 11, 2018
Description: Maps reads to multiple references simultaneously.
Outputs reads to a file for the reference they best match, with multiple options for dealing with ambiguous mappings.
To index: bbsplit.sh build=<1> ref_x=<reference fasta> ref_y=<another reference fasta>
To map: bbsplit.sh build=<1> in=<reads> out_x=<output file> out_y=<another output file>
To be concise, and do everything in one command:
bbsplit.sh ref=x.fa,y.fa in=reads.fq basename=o%.fq
that is equivalent to
bbsplit.sh build=1 in=reads.fq ref_x=x.fa ref_y=y.fa out_x=ox.fq out_y=oy.fq
By default paired reads will yield interleaved output, but you can use the # symbol to produce twin output files.
For example, basename=o%_#.fq will produce ox_1.fq, ox_2.fq, oy_1.fq, and oy_2.fq.
Indexing Parameters (required when building the index):
ref=<file,file> A list of references, or directories containing fasta files.
ref_<name>=<ref.fa> Alternate, longer way to specify references. e.g., ref_ecoli=ecoli.fa
These can also be comma-delimited lists of files; e.g., ref_a=a1.fa,a2.fa,a3.fa
build=<1> If multiple references are indexed in the same directory, each needs a unique build ID.
path=<.> Specify the location to write the index, if you don't want it in the current working directory.
Input Parameters:
build=<1> Designate index to use. Corresponds to the number specified when building the index.
in=<reads.fq> Primary reads input; required parameter.
in2=<reads2.fq> For paired reads in two files.
qin=<auto> Set to 33 or 64 to specify input quality value ASCII offset.
interleaved=<auto> True forces paired/interleaved input; false forces single-ended mapping.
If not specified, interleaved status will be autodetected from read names.
Mapping Parameters:
maxindel=<20> Don't look for indels longer than this. Lower is faster. Set to >=100k for RNA-seq.
minratio=<0.56> Fraction of max alignment score required to keep a site. Higher is faster.
minhits=<1> Minimum number of seed hits required for candidate sites. Higher is faster.
ambiguous=<best> Set behavior on ambiguously-mapped reads (with multiple top-scoring mapping locations).
best (use the first best site)
toss (consider unmapped)
random (select one top-scoring site randomly)
all (retain all top-scoring sites. Does not work yet with SAM output)
ambiguous2=<best> Set behavior only for reads that map ambiguously to multiple different references.
Normal 'ambiguous=' controls behavior on all ambiguous reads;
Ambiguous2 excludes reads that map ambiguously within a single reference.
best (use the first best site)
toss (consider unmapped)
all (write a copy to the output for each reference to which it maps)
split (write a copy to the AMBIGUOUS_ output for each reference to which it maps)
qtrim=<true> Quality-trim ends to Q5 before mapping. Options are 'l' (left), 'r' (right), and 'lr' (both).
untrim=<true> Undo trimming after mapping. Untrimmed bases will be soft-clipped in cigar strings.
Output Parameters:
out_<name>=<file> Output reads that map to the reference <name> to <file>.
basename=prefix%suffix Equivalent to multiple out_%=prefix%suffix expressions, in which each % is replaced by the name of a reference file.
bs=<file> Write a shell script to 'file' that will turn the sam output into a sorted, indexed bam file.
scafstats=<file> Write statistics on how many reads mapped to which scaffold to this file.
refstats=<file> Write statistics on how many reads were assigned to which reference to this file.
Unmapped reads whose mate mapped to a reference are considered assigned and will be counted.
nzo=t Only print lines with nonzero coverage.
***** Notes *****
Almost all BBMap parameters can be used; run bbmap.sh for more details.
Exceptions include the 'nodisk' flag, which BBSplit does not support.
BBSplit is recommended for fastq and fasta output, not for sam/bam output.
When the reference sequences are shorter than read length, use Seal instead of BBSplit.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
This list is not complete. For more information, please consult $DIRdocs/readme.txt
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
JNI="-Djava.library.path=""$DIR""jni/"
JNI=""
z="-Xmx1g"
z2="-Xms1g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
function bbsplit() {
local CMD="java $EA $EOOM $z $z2 $JNI -cp $CP align2.BBSplitter ow=t fastareadlen=500 minhits=1 minratio=0.56 maxindel=20 qtrim=rl untrim=t trimq=6 $@"
echo $CMD >&2
eval $CMD
}
bbsplit "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/bbsplit.sh | bbsplit.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified Jan 7, 2020
Description: Demultiplexes sequences into multiple files based on their names,
substrings of their names, or prefixes or suffixes of their names.
Allows unlimited output files while maintaining only a small number of open file handles.
Usage:
demuxbyname.sh in=<file> in2=<file2> out=<outfile> out2=<outfile2> names=<string,string,string...>
Alternately:
demuxbyname.sh in=<file> out=<outfile> delimiter=whitespace prefixmode=f
This will demultiplex by the substring after the last whitespace.
demuxbyname.sh in=<file> out=<outfile> length=8 prefixmode=t
This will demultiplex by the first 8 characters of read names.
demuxbyname.sh in=<file> out=<outfile> delimiter=: prefixmode=f
This will split on colons, and use the last substring as the name; useful for
demuxing by barcode for Illumina headers in this format:
@A00178:73:HH7H3DSXX:4:1101:13666:1047 1:N:0:ACGTTGGT+TGACGCAT
in2 and out2 are for paired reads in twin files and are optional.
If input is paired and there is only one output file, it will be written interleaved.
File Parameters:
in=<file> Input file.
in2=<file> If input reads are paired in twin files, use in2 for the second file.
out=<file> Output files for reads with matched headers (must contain % symbol).
For example, out=out_%.fq with names XX and YY would create out_XX.fq and out_YY.fq.
If twin files for paired reads are desired, use the # symbol. For example,
out=out_%_#.fq in this case would create out_XX_1.fq, out_XX_2.fq, out_YY_1.fq, etc.
outu=<file> Output file for reads with unmatched headers.
stats=<file> Print statistics about how many reads went to each file.
Processing Modes (determines how to convert a read into a name):
prefixmode=t (pm) Match prefix of read header. If false, match suffix of read header.
prefixmode=f is equivalent to suffixmode=t.
barcode=f Parse barcodes from Illumina headers.
chrom=f For mapped sam files, make one file per chromosome (scaffold) using the rname.
header=f Use the entire sequence header.
delimiter= For prefix or suffix mode, specifying a delimiter will allow exact matches even if the length is variable.
This allows demultiplexing based on names that are found without specifying a list of names.
In suffix mode, for example, everything after the last delimiter will be used.
Normally the delimiter will be used as a literal string (a Java regular expression); for example, ':' or 'HISEQ'.
But there are some special delimiters which will be replaced by the symbol they name,
because they are reserved in some operating systems or cause other problems.
These are provided for convenience due to possible OS conflicts:
space, tab, whitespace, pound, greaterthan, lessthan, equals,
colon, semicolon, bang, and, quote, singlequote
These are provided because they interfere with Java regular expression syntax:
backslash, hat, dollar, dot, pipe, questionmark, star,
plus, openparen, closeparen, opensquare, opencurly
In other words, to match '.', you should set 'delimiter=dot'.
substring=f Names can be substrings of read headers. Substring mode is
slow if the list of names is large. Requires a list of names.
Other Processing Parameters:
column=-1 If positive, split the header on a delimiter and match that column (1-based).
For example, using this header:
NB501886:61:HL3GMAFXX:1:11101:10717:1140 1:N:0:ACTGAGC+ATTAGAC
You could demux by tile (11101) using 'delimiter=: column=5'
Column is 1-based (first column is 1).
If column is omitted when a delimiter is present, prefixmode
will use the first substring, and suffixmode will use the last substring.
names= List of strings (or files containing strings) to parse from read names.
If the names are in text files, there should be one name per line.
This is optional. If a list of names is provided, files will only be created for those names.
For example, 'prefixmode=t length=5' would create a file for every unique first 5 characters in read names,
and every read would be written to one of those files. But if there were additionally 'names=ABCDE,FGHIJ'
then at most 2 files would be created, and anything not matching those names would go to outu.
length=0 If positive, use a suffix or prefix of this length from read name instead of or in addition to the list of names.
For example, you could create files based on the first 8 characters of read names.
hdist=0 Allow a hamming distance for demultiplexing barcodes. This requires a list of names (barcodes).
replace= Replace some characters in the output filenames. For example, replace=+-
would replace the + symbol in headers with the - symbol in filenames. So you could
match the name ACTGAGC+ATTAGAC in the header, but write to a file named ACTGAGC-ATTAGAC.
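Example (illustrative; the filenames are placeholders, and barcodes.txt is assumed to be a text file with one barcode per line, as described under names=):
demuxbyname.sh in=reads.fq out=out_%.fq outu=unknown.fq barcode=t names=barcodes.txt hdist=1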
Buffering Parameters
streams=4 Allow at most this many active streams. The actual number of open files
will be 1 greater than this if outu is set, and doubled if output
is paired and written in twin files instead of interleaved.
minreads=0 Don't create a file for fewer than this many reads; instead, send them to unknown.
This option will incur additional memory usage.
Common parameters:
ow=t (overwrite) Overwrites files that already exist.
zl=4 (ziplevel) Set compression level, 1 (low) to 9 (max).
int=auto (interleaved) Determines whether INPUT file is considered interleaved.
qin=auto ASCII offset for input quality. May be 33 (Sanger), 64 (Illumina), or auto.
qout=auto ASCII offset for output quality. May be 33 (Sanger), 64 (Illumina), or auto (same as input).
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx2g"
z2="-Xms2g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
function demuxbyname() {
local CMD="java $EA $EOOM $z $z2 -cp $CP jgi.DemuxByName2 $@"
echo $CMD >&2
eval $CMD
}
demuxbyname "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/demuxbyname.sh | demuxbyname.sh |
usage(){
echo "
Written by Brian Bushnell and Shijie Yao
Last modified Jan 7, 2020
Description: Starts a server that translates NCBI taxonomy.
Usage: taxserver.sh tree=<taxtree file> table=<gitable file> port=<number>
Usage examples:
taxserver.sh tree=tree.taxtree.gz table=gitable.int1d.gz port=1234
On Genepool:
taxserver.sh tree=auto table=auto port=1234
For accession number support, add accession=<something> E.g.:
External:
taxserver.sh -Xmx45g tree=tree.taxtree.gz table=gitable.int1d.gz accession=prot.accession2taxid.gz,nucl_wgs.accession2taxid.gz port=1234
On Genepool:
taxserver.sh tree=auto table=auto accession=auto port=1234
If all expected files are in some specific location, you can also do this:
taxserver.sh -Xmx45g tree=auto table=auto accession=auto port=1234 taxpath=/path/to/files
To kill remotely, launch with the flag kill=password, then access /kill/password
Parameters:
tree=auto taxtree path. Always necessary.
table=auto gitable path. Necessary for gi number support.
accession=null Comma-delimited paths of accession files.
Necessary for accession support.
img=null IMG dump file.
pattern=null Pattern file, for storing accessions more efficiently.
port=3068 Port number.
domain= Domain to be displayed in the help message.
Default is taxonomy.jgi-psf.org.
dbname= Set the name of the database in the help message.
sketchcomparethreads=16 Limit compare threads per connection.
sketchloadthreads=4 Limit load threads (for local queries of fastq).
sketchonly=f Don't hash taxa names.
k=31 Kmer length, 1-32. To maximize sensitivity and
specificity, dual kmer lengths may be used: k=31,24
prealloc=f Preallocate some data structures for faster loading.
Security parameters:
killcode= Set a password to allow remote killing.
oldcode= Set the password of a prior instance.
oldaddress= Attempt to kill a prior instance after initialization,
by sending the old code to this address. For example,
taxonomy.jgi-psf.org/kill/
allowremotefileaccess=f Allow non-internal queries to use internal files
for sketching in local mode.
allowlocalhost=f Consider a query internal if it originates from localhost
without being proxied.
addressprefix=128. Queries originating from this IP address prefix will be
considered internal.
Unrecognized parameters with no = symbol will be treated as sketch files.
Other sketch parameters such as index and k are also allowed.
Please consult bbmap/docs/guides/TaxonomyGuide.txt and BBSketchGuide.txt for more information.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx45g"
z2="-Xms45g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
taxserver() {
local CMD="java $EA $EOOM $z $z2 -cp $CP tax.TaxServer $@"
echo $CMD >&2
eval $CMD
}
taxserver "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/taxserver.sh | taxserver.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified Jan 29, 2020
Description: Adds, removes, or replaces SSU sequence of existing sketches.
Sketches and SSU fasta files must be annotated with TaxIDs.
Usage: addssu.sh in=a.sketch out=b.sketch 16S=16S.fa 18S=18S.fa
Standard parameters:
in=<file> Input sketch file.
out=<file> Output sketch file.
Additional files (optional):
16S=<file> A fasta file of 16S sequences. These should be renamed
so that they start with tid|# where # is the taxID.
Should not contain organelle rRNA.
18S=<file> A fasta file of 18S sequences. These should be renamed
so that they start with tid|# where # is the taxID.
Should not contain organelle rRNA.
tree=auto Path to TaxTree, if performing prok/euk-specific operations.
Processing parameters:
preferSSUMap=f
preferSSUMapEuks=f
preferSSUMapProks=f
SSUMapOnly=f
SSUMapOnlyEuks=f
SSUMapOnlyProks=f
clear16S=f
clear18S=f
clear16SEuks=f
clear18SEuks=f
clear16SProks=f
clear18SProks=f
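Example (illustrative; the filenames are placeholders, and clear16S=t is assumed here to discard existing 16S annotations before the new sequences are added):
addssu.sh in=old.sketch out=new.sketch 16S=16S.fa 18S=18S.fa clear16S=t tree=auto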
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-da Disable assertions.
For more detailed information, please read /bbmap/docs/guides/BBSketchGuide.txt.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
#freeRam 3200m 84
#z="-Xmx${RAM}m"
#z2="-Xms${RAM}m"
}
calcXmx "$@"
addssu() {
local CMD="java $EA $EOOM $z -cp $CP sketch.AddSSU $@"
echo $CMD >&2
eval $CMD
}
addssu "$@"
usage(){
echo "
Written by Brian Bushnell
Last modified July 31, 2018
Description: Subsamples reads to reach a target unique kmer limit.
Differences between versions:
kmerlimit.sh uses 1 pass and outputs all reads until a limit is hit,
meaning the input reads should be in random order with respect to sequence.
kmerlimit2.sh uses 2 passes and randomly subsamples from the file, so
it works with reads in any order.
Usage: kmerlimit2.sh in=<input file> out=<output file> limit=<number>
Standard parameters:
in=<file> Primary input, or read 1 input.
in2=<file> Read 2 input if reads are in two files.
out=<file> Primary output, or read 1 output.
out2=<file> Read 2 output if reads are in two files.
overwrite=t (ow) Set to false to force the program to abort rather than
overwrite an existing file.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression
level; lower compression is faster.
Processing parameters:
k=31 Kmer length, 1-32.
limit= The number of unique kmers to produce.
mincount=1 Ignore kmers seen fewer than this many times.
minqual=0 Ignore bases with quality below this.
minprob=0.2 Ignore kmers with correctness probability below this.
trials=25 Simulation trials.
seed=-1 Set to a positive number for deterministic output.
maxlen=50m Max length of a temp array used in simulation.
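Example (illustrative; the filenames are placeholders):
kmerlimit2.sh in=reads.fq out=subsampled.fq limit=100000000 k=31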
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will
specify 200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx1000m"
z2="-Xms1000m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
kmerlimit2() {
local CMD="java $EA $EOOM $z -cp $CP sketch.KmerLimit2 $@"
echo $CMD >&2
eval $CMD
}
kmerlimit2 "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/kmerlimit2.sh | kmerlimit2.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified November 9, 2016
Description: Reorders reads randomly, keeping pairs together.
Usage: shuffle.sh in=<file> out=<file>
Standard parameters:
in=<file> The 'in=' flag is needed if the input file is not the first parameter. 'in=stdin' will pipe from standard in.
in2=<file> Use this if 2nd read of pairs are in a different file.
out=<file> The 'out=' flag is needed if the output file is not the second parameter. 'out=stdout' will pipe to standard out.
out2=<file> Use this to write 2nd read of pairs to a different file.
overwrite=t (ow) Set to false to force the program to abort rather than overwrite an existing file.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression level; lower compression is faster.
int=auto (interleaved) Set to t or f to override interleaving autodetection.
Processing parameters:
shuffle Randomly reorders reads (default).
name Sort reads by name.
coordinate Sort reads by mapping location.
sequence Sort reads by sequence.
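Examples (illustrative; the filenames are placeholders; the bare 'name' flag selects the name-sorting mode listed above):
shuffle.sh in=reads.fq out=shuffled.fq
shuffle.sh in=reads.fq out=namesorted.fq name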
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx2g"
z2="-Xms2g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 2000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
shuffle() {
local CMD="java $EA $EOOM $z $z2 -cp $CP sort.Shuffle $@"
echo $CMD >&2
eval $CMD
}
shuffle "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/shuffle.sh | shuffle.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified September 15, 2015
Description: Counts the number of lines shared between sets of files.
One output file will be printed for each input file. For example,
an output file for a file in the 'in1' set will contain one line per
file in the 'in2' set, indicating how many lines are shared.
Usage: countsharedlines.sh in1=<file,file...> in2=<file,file...>
Parameters:
include=f Set to 'true' to include the filtered names rather than excluding them.
prefix=f Allow matching of only the line's prefix (all characters up to first whitespace).
case=t (casesensitive) Match case also.
ow=t (overwrite) Overwrites files that already exist.
app=f (append) Append to files that already exist.
zl=4 (ziplevel) Set compression level, 1 (low) to 9 (max).
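Example (illustrative; the filenames are placeholders):
countsharedlines.sh in1=setA_1.txt,setA_2.txt in2=setB.txt prefix=t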
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx800m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 800m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
function countsharedlines() {
local CMD="java $EA $EOOM $z -cp $CP driver.CountSharedLines $@"
echo $CMD >&2
eval $CMD
}
countsharedlines "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/countsharedlines.sh | countsharedlines.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified October 30, 2019
Description: Filters reads based on positional quality over a flowcell.
Quality is estimated based on quality scores and kmer uniqueness.
All reads within a small unit of area called a micro-tile are averaged,
then the micro-tile is either retained or discarded as a unit.
Please read bbmap/docs/guides/FilterByTileGuide.txt for more information.
Usage: filterbytile.sh in=<input> out=<output>
Input parameters:
in=<file> Primary input file.
in2=<file> Second input file for paired reads in two files.
indump=<file> Specify an already-made dump file to use instead of
analyzing the input reads.
reads=-1 Process this number of reads, then quit (-1 means all).
interleaved=auto Set true/false to override autodetection of the
input file as paired interleaved.
seed=-1 Set to a positive number for deterministic output.
Output parameters:
out=<file> Output file for filtered reads.
dump=<file> Write a summary of quality information by coordinates.
Tile parameters:
xsize=500 Initial width of micro-tiles.
ysize=500 Initial height of micro-tiles.
size= Allows setting xsize and ysize to the same value.
target=800 Iteratively increase the size of micro-tiles until they
contain an average of at least this number of reads.
A micro-tile is discarded if any of 3 metrics indicates a problem.
The metrics are kmer uniqueness (u), average quality (q), and probability
of being error-free (e). Each has 3 parameters: deviations (d),
fraction (f), and absolute (a). After calculating the difference (delta)
between a micro-tile and average, it is discarded only if all three of these
conditions are true for at least one metric (using quality as the example):
1) delta is greater than (qd) standard deviations.
2) delta is greater than average times the fraction (qf).
3) delta is greater than the absolute value (qa).
Filtering parameters:
udeviations=1.5 (ud) Standard deviations for uniqueness discarding.
qdeviations=2 (qd) Standard deviations for quality discarding.
edeviations=2 (ed) Standard deviations for error-free probability
discarding.
ufraction=0.01 (uf) Min fraction for uniqueness discarding.
qfraction=0.01 (qf) Min fraction for quality discarding.
efraction=0.01 (ef) Min fraction for error-free probability discarding.
uabsolute=1 (ua) Min absolute value for uniqueness discarding.
qabsolute=1 (qa) Min absolute value for quality discarding.
eabsolute=1 (ea) Min absolute value for error-free probability discarding.
Other parameters:
lowqualityonly=t (lqo) Only filter low quality reads within low quality
micro-tiles, rather than the whole micro-tile.
trimq=-1 If set to a positive number, trim reads to that quality
level instead of filtering them.
qtrim=r If trimq is positive, do quality trimming on this end
of the reads. Values are r, l, and rl for right,
left, and both ends.
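Example (illustrative; the filenames are placeholders):
filterbytile.sh in=reads.fq out=filtered.fq dump=dump.txt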
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 GB of RAM; -Xmx200m will specify
200 MB. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx8g"
z2="-Xms8g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
filterbytile() {
local CMD="java $EA $EOOM $z $z2 -cp $CP hiseq.AnalyzeFlowCell $@"
echo $CMD >&2
eval $CMD
}
filterbytile "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/filterbytile.sh | filterbytile.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified September 4, 2018
Description: Error corrects reads and/or filters by depth, storing
kmer counts in a count-min sketch (a Bloom filter variant).
This uses a fixed amount of memory. The error-correction algorithm is taken
from Tadpole; with plenty of memory, the behavior is almost identical to
Tadpole. As the number of unique kmers in a dataset increases, the accuracy
decreases such that it will make fewer corrections. It is still capable
of making useful corrections far past the point where Tadpole would crash
by running out of memory, even with the prefilter flag. But if there is
sufficient memory to use Tadpole, then Tadpole is more desirable.
Because accuracy declines with an increasing number of unique kmers, it can
be useful with very large datasets to run this in 2 passes, with the first
pass for filtering only using a 2-bit filter with the flags tossjunk=t and
ecc=f (and possibly mincount=2 and hcf=0.4), and the second pass using a
4-bit filter for the actual error correction.
Usage: bbcms.sh in=<input file> out=<output> outb=<reads failing filters>
Example of use in error correction:
bbcms.sh in=reads.fq out=ecc.fq bits=4 hashes=3 k=31 merge
Example of use in depth filtering:
bbcms.sh in=reads.fq out=high.fq outb=low.fq k=31 mincount=2 ecc=f hcf=0.4
Error correction and depth filtering can be done simultaneously.
File parameters:
in=<file> Primary input, or read 1 input.
in2=<file> Read 2 input if reads are in two files.
out=<file> Primary read output.
out2=<file> Read 2 output if reads are in two files.
outb=<file> (outbad/outlow) Output for reads failing mincount.
outb2=<file> (outbad2/outlow2) Read 2 output if reads are in two files.
extra=<file> Additional comma-delimited files for generating kmer counts.
ref=<file> If ref is set, then only files in the ref list will be used
for kmer counts, and the input files will NOT be used for
counts; they will just be filtered or corrected.
overwrite=t (ow) Set to false to force the program to abort rather than
overwrite an existing file.
Hashing parameters:
k=31 Kmer length, currently 1-31.
hashes=3 Number of hashes per kmer. Higher generally reduces
false positives at the expense of speed; rapidly
diminishing returns above 4.
ksmall= Optional sub-kmer length; setting to slightly lower than k
can improve memory efficiency by reducing the number of hashes
needed. e.g. 'k=31 ksmall=29 hashes=2' has better speed and
accuracy than 'k=31 hashes=3' when the filter is very full.
minprob=0.5 Ignore kmers with probability of being correct below this.
memmult=1.0 Fraction of free memory to use for Bloom filter. 1.0 should
generally work; if the program crashes with an out of memory
error, set this lower. You may be able to increase accuracy
by setting it slightly higher.
cells= Option to set the number of cells manually. By default this
will be autoset to use all available memory. The only reason
to set this is to ensure deterministic output.
seed=0 This will change the hash function used. Useful if running
iteratively with a very full table. -1 uses a random seed.
Depth filtering parameters:
mincount=0 If positive, reads with kmer counts below mincount will
be discarded (sent to outb).
hcf=1.0 (highcountfraction) Fraction of kmers that must be at least
mincount to pass.
requireboth=t Require both reads in a pair to pass in order to go to out.
When true, if either read has a count below mincount, both
reads in the pair will go to outb. When false, reads will
only go to outb if both fail.
tossjunk=f Remove reads or pairs with outermost kmer depth below 2.
(Suggested params for huge metagenomes: mincount=2 hcf=0.4 tossjunk=t)
Error correction parameters:
ecc=t Perform error correction.
bits= Bits used to store kmer counts; max count is 2^bits-1.
Supports 2, 4, 8, 16, or 32. 16 is best for high-depth data;
2 or 4 are for huge, low-depth metagenomes that saturate the
bloom filter otherwise. Generally 4 bits is recommended for
error-correction and 2 bits is recommended for filtering only.
ecco=f Error-correct paired reads by overlap prior to kmer-counting.
merge=t Merge paired reads by overlap prior to kmer-counting, and
again prior to correction. Output will still be unmerged.
smooth=3 Remove spikes from kmer counts due to hash collisions.
The number is the max width of peaks to be smoothed; range is
0-3 (3 is most aggressive; 0 disables smoothing).
This also affects tossjunk.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will
specify 200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 4000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
bloomfilter() {
local CMD="java $EA $EOOM $z $z2 -cp $CP bloom.BloomFilterCorrectorWrapper $@"
echo $CMD >&2
eval $CMD
}
bloomfilter "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/bbcms.sh | bbcms.sh |
usage(){
echo "
Written by Brian Bushnell and Jonathan Rood
Last modified February 19, 2020
Description: Accepts one or more files containing sets of sequences (reads or scaffolds).
Removes duplicate sequences, which may be specified to be exact matches, subsequences, or sequences within some percent identity.
Can also find overlapping sequences and group them into clusters.
Please read bbmap/docs/guides/DedupeGuide.txt for more information.
Usage: dedupe.sh in=<file or stdin> out=<file or stdout>
An example of running Dedupe for clustering short reads:
dedupe.sh in=x.fq am=f ac=f fo c pc rnc=f mcs=4 mo=100 s=1 pto cc qin=33 csf=stats.txt pattern=cluster_%.fq dot=graph.dot
Input may be fasta or fastq, compressed or uncompressed.
Output may be stdout or a file. With no output parameter, data will be written to stdout.
If 'out=null', there will be no output, but statistics will still be printed.
You can also use 'dedupe <infile> <outfile>' without the 'in=' and 'out='.
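Another example, absorbing exact duplicates and contained sequences at 99% identity
(filenames are placeholders; flags are documented below):
dedupe.sh in=assembly.fa out=deduped.fa minidentity=99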
I/O parameters:
in=<file,file> A single file or a comma-delimited list of files.
out=<file> Destination for all output contigs.
pattern=<file> Clusters will be written to individual files, where the '%' symbol in the pattern is replaced by cluster number.
outd=<file> Optional; removed duplicates will go here.
csf=<file> (clusterstatsfile) Write a list of cluster names and sizes.
dot=<file> (graph) Write a graph in dot format. Requires 'fo' and 'pc' flags.
threads=auto (t) Set number of threads to use; default is number of logical processors.
overwrite=t (ow) Set to false to force the program to abort rather than overwrite an existing file.
showspeed=t (ss) Set to 'f' to suppress display of processing speed.
minscaf=0 (ms) Ignore contigs/scaffolds shorter than this.
interleaved=auto If true, forces fastq input to be paired and interleaved.
ziplevel=2 Set to 1 (lowest) through 9 (max) to change compression level; lower compression is faster.
Output format parameters:
storename=t (sn) Store scaffold names (set false to save memory).
#addpairnum=f Add .1 and .2 to numeric id of read1 and read2.
storequality=t (sq) Store quality values for fastq assemblies (set false to save memory).
uniquenames=t (un) Ensure all output scaffolds have unique names. Uses more memory.
mergenames=f When a sequence absorbs another, concatenate their headers.
mergedelimiter=> Delimiter between merged headers. Can be a symbol name like greaterthan.
numbergraphnodes=t (ngn) Label dot graph nodes with read numbers rather than read names.
sort=f Sort output (otherwise it will be random). Options:
length: Sort by length
quality: Sort by quality
name: Sort by name
id: Sort by input order
ascending=f Sort in ascending order.
ordered=f Output sequences in input order. Equivalent to sort=id ascending.
renameclusters=f (rnc) Rename contigs to indicate which cluster they are in.
printlengthinedges=f (ple) Print the length of contigs in edges.
Processing parameters:
absorbrc=t (arc) Absorb reverse-complements as well as normal orientation.
absorbmatch=t (am) Absorb exact matches of contigs.
absorbcontainment=t (ac) Absorb full containments of contigs.
#absorboverlap=f (ao) Absorb (merge) non-contained overlaps of contigs (TODO).
findoverlap=f (fo) Find overlaps between contigs (containments and non-containments). Necessary for clustering.
uniqueonly=f (uo) If true, all copies of duplicate reads will be discarded, rather than keeping 1.
rmn=f (requirematchingnames) If true, both names and sequence must match.
usejni=f (jni) Do alignments in C code, which is faster, if an edit distance is allowed.
This will require compiling the C code; details are in /jni/README.txt.
Subset parameters:
subsetcount=1 (sstc) Number of subsets used to process the data; higher uses less memory.
subset=0 (sst) Only process reads whose ((ID%subsetcount)==subset).
Clustering parameters:
cluster=f (c) Group overlapping contigs into clusters.
pto=f (preventtransitiveoverlaps) Do not look for new edges between nodes in the same cluster.
minclustersize=1 (mcs) Do not output clusters smaller than this.
pbr=f (pickbestrepresentative) Only output the single highest-quality read per cluster.
Cluster postprocessing parameters:
processclusters=f (pc) Run the cluster processing phase, which performs the selected operations in this category.
For example, pc AND cc must be enabled to perform cc.
fixmultijoins=t (fmj) Remove redundant overlaps between the same two contigs.
removecycles=t (rc) Remove all cycles so clusters form trees.
cc=t (canonicizeclusters) Flip contigs so clusters have a single orientation.
fcc=f (fixcanoncontradictions) Truncate graph at nodes with canonization disputes.
foc=f (fixoffsetcontradictions) Truncate graph at nodes with offset disputes.
mst=f (maxspanningtree) Remove cyclic edges, leaving only the longest edges that form a tree.
Overlap Detection Parameters
exact=t (ex) Only allow exact symbol matches. When false, an 'N' will match any symbol.
touppercase=t (tuc) Convert input bases to upper-case; otherwise, lower-case will not match.
maxsubs=0 (s) Allow up to this many mismatches (substitutions only, no indels). May be set higher than maxedits.
maxedits=0 (e) Allow up to this many edits (subs or indels). Higher is slower.
minidentity=100 (mid) Absorb contained sequences with percent identity of at least this (includes indels).
minlengthpercent=0 (mlp) Smaller contig must be at least this percent of larger contig's length to be absorbed.
minoverlappercent=0 (mop) Overlap must be at least this percent of smaller contig's length to cluster and merge.
minoverlap=200 (mo) Overlap must be at least this long to cluster and merge.
depthratio=0 (dr) When non-zero, overlaps will only be formed between reads with a depth ratio of at most this.
Should be above 1. Depth is determined by parsing the read names; this information can be added
by running KmerNormalize (khist.sh, bbnorm.sh, or ecc.sh) with the flag 'rename'.
k=31 Seed length used for finding containments and overlaps. Anything shorter than k will not be found.
numaffixmaps=1 (nam) Number of prefixes/suffixes to index per contig. Higher is more sensitive, if edits are allowed.
hashns=f Set to true to search for matches using kmers containing Ns. Can lead to extreme slowdown in some cases.
#ignoreaffix1=f (ia1) Ignore first affix (for testing).
#storesuffix=f (ss) Store suffix as well as prefix. Automatically set to true when doing inexact matches.
Other Parameters
qtrim=f Set to qtrim=rl to trim leading and trailing Ns.
trimq=6 Quality trim level.
forcetrimleft=-1 (ftl) If positive, trim bases to the left of this position (exclusive, 0-based).
forcetrimright=-1 (ftr) If positive, trim bases to the right of this position (exclusive, 0-based).
Note on Proteins / Amino Acids
Dedupe supports amino acid space via the 'amino' flag. This also changes the default kmer length to 10.
In amino acid mode, all flags related to canonicity and reverse-complementation are disabled,
and nam (numaffixmaps) is currently limited to 2 per tip.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
JNI="-Djava.library.path=""$DIR""jni/"
JNI=""
z="-Xmx1g"
z2="-Xms1g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 3200m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
dedupe() {
local CMD="java $JNI $EA $EOOM $z $z2 -cp $CP jgi.Dedupe $@"
echo $CMD >&2
eval $CMD
}
dedupe "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/dedupe.sh | dedupe.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified November 9, 2016
Description: Re-pairs reads that became disordered or had some mates eliminated.
Please read bbmap/docs/guides/RepairGuide.txt for more information.
Usage: repair.sh in=<input file> out=<pair output> outs=<singleton output>
Input may be fasta, fastq, or sam, compressed or uncompressed.
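For example (filenames are placeholders):
repair.sh in=broken.fq out=fixed.fq outs=singletons.fq
repair.sh in=broken_1.fq in2=broken_2.fq out=fixed_1.fq out2=fixed_2.fq outs=singletons.fq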
Parameters:
in=<file> The 'in=' flag is needed if the input file is not the first
parameter. 'in=stdin' will pipe from standard in.
in2=<file> Use this if 2nd read of pairs are in a different file.
out=<file> The 'out=' flag is needed if the output file is not the second
parameter. 'out=stdout' will pipe to standard out.
out2=<file> Use this to write 2nd read of pairs to a different file.
outs=<file> (outsingle) Write singleton reads here.
overwrite=t (ow) Set to false to force the program to abort rather than
overwrite an existing file.
showspeed=t (ss) Set to 'f' to suppress display of processing speed.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression
level; lower compression is faster.
fint=f (fixinterleaving) Fixes corrupted interleaved files using read
names. Only use on files with broken interleaving - correctly
interleaved files from which some reads were removed.
repair=t (rp) Fixes arbitrarily corrupted paired reads by using read
names. Uses much more memory than 'fint' mode.
ain=f (allowidenticalnames) When detecting pair names, allows
identical names, instead of requiring /1 and /2 or 1: and 2:
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will
specify 200 megs. The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx4g"
z2="-Xms4g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 4000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
repair() {
local CMD="java $EA $EOOM $z -cp $CP jgi.SplitPairsAndSingles rp $@"
echo $CMD >&2
eval $CMD
}
repair "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/repair.sh | repair.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified March 28, 2018
Description: Performs high-speed alignment-free sequence quantification,
by counting the number of long kmers that match between a read and
a set of reference sequences. Designed for RNA-seq with alternative splicing.
Please read bbmap/docs/guides/SealGuide.txt for more information.
Usage: seal.sh in=<input file> ref=<file,file,file...> rpkm=<file>
Input may be fasta or fastq, compressed or uncompressed.
If you pipe via stdin/stdout, please include the file type; e.g. for gzipped
fasta input, set in=stdin.fa.gz
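For example, a simple alignment-free quantification run might look like this
(filenames are placeholders; flags are documented below):
seal.sh in=reads.fq ref=transcripts.fa rpkm=rpkm.txt stats=sealstats.txt ambig=random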
Input parameters:
in=<file> Main input. in=stdin.fq will pipe from stdin.
in2=<file> Input for 2nd read of pairs in a different file.
ref=<file,file> Comma-delimited list of reference files or directories.
Filenames may also be used without ref=, e.g. *.fa.
In addition to filenames, you may also use the keywords:
adapters, artifacts, phix, lambda, pjet, mtst, kapa.
literal=<seq,seq> Comma-delimited list of literal reference sequences.
touppercase=f (tuc) Change all bases upper-case.
interleaved=auto (int) t/f overrides interleaved autodetection.
qin=auto Input quality offset: 33 (Sanger), 64, or auto.
reads=-1 If positive, quit after processing X reads or pairs.
copyundefined=f (cu) Process non-AGCT IUPAC reference bases by making all
possible unambiguous copies. Intended for short motifs
or adapter barcodes, as time/memory use is exponential.
Output parameters:
out=<file> (outmatch) Write reads here that contain kmers matching
the reference. 'out=stdout.fq' will pipe to standard out.
out2=<file> (outmatch2) Use this to write 2nd read of pairs to a
different file.
outu=<file> (outunmatched) Write reads here that do not contain kmers
matching the database.
outu2=<file> (outunmatched2) Use this to write 2nd read of pairs to a
different file.
pattern=<file> Use this to write reads to one stream per ref sequence
match, replacing the % character with the sequence name.
For example, pattern=%.fq for ref sequences named dog and
cat would create dog.fq and cat.fq.
stats=<file> Write statistics about which contaminants were detected.
refstats=<file> Write statistics on a per-reference-file basis.
rpkm=<file> Write RPKM for each reference sequence (for RNA-seq).
dump=<file> Dump kmer tables to a file, in fasta format.
nzo=t Only write statistics about ref sequences with nonzero hits.
overwrite=t (ow) Grant permission to overwrite files.
showspeed=t (ss) 'f' suppresses display of processing speed.
ziplevel=2 (zl) Compression level; 1 (min) through 9 (max).
fastawrap=80 Length of lines in fasta output.
qout=auto Output quality offset: 33 (Sanger), 64, or auto.
statscolumns=5 (cols) Number of columns for stats output, 3 or 5.
5 includes base counts.
rename=f Rename reads to indicate which sequences they matched.
refnames=f Use names of reference files rather than scaffold IDs.
With multiple reference files, this is more efficient
than tracking statistics on a per-sequence basis.
trd=f Truncate read and ref names at the first whitespace.
ordered=f Set to true to output reads in same order as input.
kpt=t (keepPairsTogether) Paired reads will always be assigned
to the same ref sequence.
Processing parameters:
k=31 Kmer length used for finding contaminants. Contaminants
shorter than k will not be found. k must be at least 1.
rcomp=t Look for reverse-complements of kmers in addition to
forward kmers.
maskmiddle=t (mm) Treat the middle base of a kmer as a wildcard, to
increase sensitivity in the presence of errors.
minkmerhits=1 (mkh) A read needs at least this many kmer hits to be
considered a match.
minkmerfraction=0.0 (mkf) A read needs at least this fraction of its total
kmers to hit a ref, in order to be considered a match.
hammingdistance=0 (hdist) Maximum Hamming distance for ref kmers (subs only).
Memory use is proportional to (3*K)^hdist.
qhdist=0 Hamming distance for query kmers; impacts speed, not memory.
editdistance=0 (edist) Maximum edit distance from ref kmers (subs and
indels). Memory use is proportional to (8*K)^edist.
forbidn=f (fn) Forbids matching of read kmers containing N.
By default, these will match a reference 'A' if hdist>0
or edist>0, to increase sensitivity.
match=all Determines when to quit looking for kmer matches. Values:
all: Attempt to match all kmers in each read.
first: Quit after the first matching kmer.
unique: Quit after the first uniquely matching kmer.
ambiguous=random (ambig) Set behavior on ambiguously-mapped reads (with an
equal number of kmer matches to multiple sequences).
first: Use the first best-matching sequence.
toss: Consider unmapped.
random: Select one best-matching sequence randomly.
all: Use all best-matching sequences.
clearzone=0 (cz) Threshold for ambiguity. If the best match shares X
kmers with the read, the read will also be considered
ambiguously mapped to any sequence sharing at least
[X minus clearzone] kmers.
ecco=f For overlapping paired reads only. Performs error-
correction with BBMerge prior to kmer operations.
Containment parameters:
processcontainedref=f Require a reference sequence to be fully contained by
an input sequence.
storerefbases=f Store reference bases so that ref containments can be
validated. If this is set to false and processcontainedref
is true, then it will only require that the read share the
same number of bases as are present in the ref sequence.
Taxonomy parameters (only use when doing taxonomy):
tax=<file> Output destination for taxonomy information.
taxtree=<file> (tree) A serialized TaxTree (tree.taxtree.gz).
gi=<file> A serialized GiTable (gitable.int1d.gz). Only needed if
reference sequence names start with 'gi|'.
mincount=1 Only display taxa with at least this many hits.
maxnodes=-1 If positive, display at most this many top hits.
minlevel=subspecies Do not display nodes below this taxonomic level.
maxlevel=life Do not display nodes above this taxonomic level.
Valid levels are subspecies, species, genus, family, order, class,
phylum, kingdom, domain, life
Speed and Memory parameters:
threads=auto (t) Set number of threads to use; default is number of
logical processors.
prealloc=f Preallocate memory in table. Allows faster table loading
and more efficient memory usage, for a large reference.
monitor=f Kill this process if CPU usage drops to zero for a long
time. monitor=600,0.01 would kill after 600 seconds
under 1% usage.
rskip=1 Skip reference kmers to reduce memory usage.
1 means use all, 2 means use every other kmer, etc.
qskip=1 Skip query kmers to increase speed. 1 means use all.
speed=0 Ignore this fraction of kmer space (0-15 out of 16) in both
reads and reference. Increases speed and reduces memory.
Note: Do not use more than one of 'speed', 'qskip', and 'rskip'.
Trimming/Masking parameters:
qtrim=f Trim read ends to remove bases with quality below trimq.
Performed AFTER looking for kmers. Values:
t (trim both ends),
f (neither end),
r (right end only),
l (left end only).
trimq=6 Regions with average quality BELOW this will be trimmed.
minlength=1 (ml) Reads shorter than this after trimming will be
discarded. Pairs will be discarded only if both are shorter.
maxlength= Reads longer than this after trimming will be discarded.
Pairs will be discarded only if both are longer.
minavgquality=0 (maq) Reads with average quality (after trimming) below
this will be discarded.
maqb=0 If positive, calculate maq from this many initial bases.
maxns=-1 If non-negative, reads with more Ns than this
(after trimming) will be discarded.
forcetrimleft=0 (ftl) If positive, trim bases to the left of this position
(exclusive, 0-based).
forcetrimright=0 (ftr) If positive, trim bases to the right of this position
(exclusive, 0-based).
forcetrimright2=0 (ftr2) If positive, trim this many bases on the right end.
forcetrimmod=0 (ftm) If positive, right-trim length to be equal to zero,
modulo this number.
restrictleft=0 If positive, only look for kmer matches in the
leftmost X bases.
restrictright=0 If positive, only look for kmer matches in the
rightmost X bases.
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an
out-of-memory exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx1g"
z2="-Xms1g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 2000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
seal() {
local CMD="java $EA $EOOM $z $z2 -cp $CP jgi.Seal $@"
echo $CMD >&2
eval $CMD
}
seal "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/seal.sh | seal.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified September 17, 2018
This script requires at least 52GB RAM.
It is designed for NERSC and uses hard-coded paths.
Description: Removes all reads that map to the cat, dog, mouse, or human genome with at least 95% identity after quality trimming.
Removes approximately 98.6% of human 2x150bp reads, with zero false-positives to non-animals.
NOTE! This program uses hard-coded paths and will only run on Nersc systems.
Usage: removecatdogmousehuman.sh in=<input file> outu=<clean output file>
Input may be fasta or fastq, compressed or uncompressed.
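Example (filenames are placeholders):
removecatdogmousehuman.sh in=reads.fq outu=clean.fq outm=animal.fq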
Parameters:
threads=auto (t) Set number of threads to use; default is number of logical processors.
overwrite=t (ow) Set to false to force the program to abort rather than overwrite an existing file.
interleaved=auto (int) If true, forces fastq input to be paired and interleaved.
trim=t Trim read ends to remove bases with quality below minq.
Values: t (trim both ends), f (neither end), r (right end only), l (left end only).
untrim=t Undo the trimming after mapping.
minq=4 Trim quality threshold.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression level; lower compression is faster.
outm=<file> File to output the reads that mapped to human.
***** All BBMap parameters can be used; run bbmap.sh for more details. *****
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
JNI="-Djava.library.path=""$DIR""jni/"
JNI=""
z="-Xmx50g"
z2="-Xms50g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
}
calcXmx "$@"
function removecatdogmousehuman() {
local CMD="java $EA $EOOM $z $z2 $JNI -cp $CP align2.BBMap minratio=0.9 maxindel=3 bwr=0.16 bw=12 quickmatch fast minhits=2 path=/global/projectb/sandbox/gaag/bbtools/mousecatdoghuman/ pigz unpigz zl=6 qtrim=r trimq=10 untrim idtag usemodulo printunmappedcount ztd=2 kfilter=25 maxsites=1 k=14 bloomfilter $@"
echo $CMD >&2
eval $CMD
}
removecatdogmousehuman "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/removecatdogmousehuman.sh | removecatdogmousehuman.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified February 17, 2015
***DEPRECATED***
Description: Randomly adds adapters to a file, or grades a trimmed file.
The input is a set of reads, paired or unpaired.
The output is those same reads with adapter sequence replacing some of the bases in some reads.
For paired reads, adapters are located in the same position in read1 and read2.
This is designed for benchmarking adapter-trimming software, and evaluating methodology.
randomreads.sh is better for paired reads, though, as it actually adds adapters at the correct location,
so that overlap may be used for adapter detection.
Usage: addadapters.sh in=<file> in2=<file2> out=<outfile> out2=<outfile2> adapters=<file>
in2 and out2 are for paired reads and are optional.
If input is paired and there is only one output file, it will be written interleaved.
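For example, to add adapters to reads and later grade a trimmed copy of that output
(filenames are placeholders; flags are documented below):
addadapters.sh in=reads.fq out=dirty.fq adapters=adapters.fa rate=0.5
addadapters.sh in=trimmed.fq grade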
Parameters:
ow=f (overwrite) Overwrites files that already exist.
int=f (interleaved) Determines whether INPUT file is considered interleaved.
qin=auto ASCII offset for input quality. May be 33 (Sanger), 64 (Illumina), or auto.
qout=auto ASCII offset for output quality. May be 33 (Sanger), 64 (Illumina), or auto (same as input).
add Add adapters to input files. Default mode.
grade Evaluate trimmed input files.
adapters=<file> Fasta file of adapter sequences.
literal=<sequence> Comma-delimited list of adapter sequences.
left Adapters are on the left (5') end of the read.
right Adapters are on the right (3') end of the read. Default mode.
adderrors=t Add errors to adapters based on the quality scores.
addpaired=t Add adapters to the same location for read 1 and read 2.
arc=f Add reverse-complemented adapters as well as forward.
rate=0.5 Add adapters to this fraction of reads.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx200m"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
}
calcXmx "$@"
function addadapters() {
local CMD="java $EA $EOOM $z -cp $CP jgi.AddAdapters $@"
echo $CMD >&2
eval $CMD
}
addadapters "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/addadapters.sh | addadapters.sh |
usage(){
echo "
Written by Brian Bushnell
Last modified July 25, 2018
Description: Reorders reads randomly, keeping pairs together.
Unlike Shuffle, Shuffle2 can write temp files to handle large datasets.
Usage: shuffle2.sh in=<file> out=<file>
Standard parameters:
in=<file> The 'in=' flag is needed if the input file is not the first parameter. 'in=stdin' will pipe from standard in.
in2=<file> Use this if 2nd read of pairs are in a different file.
out=<file> The 'out=' flag is needed if the output file is not the second parameter. 'out=stdout' will pipe to standard out.
out2=<file> Use this to write 2nd read of pairs to a different file.
overwrite=t (ow) Set to false to force the program to abort rather than overwrite an existing file.
ziplevel=2 (zl) Set to 1 (lowest) through 9 (max) to change compression level; lower compression is faster.
int=auto (interleaved) Set to t or f to override interleaving autodetection.
Processing parameters:
shuffle Randomly reorders reads (default).
Java Parameters:
-Xmx This will set Java's memory usage, overriding autodetection.
-Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs.
The max is typically 85% of physical memory.
-eoom This flag will cause the process to exit if an out-of-memory
exception occurs. Requires Java 8u92+.
-da Disable assertions.
Please contact Brian Bushnell at [email protected] if you encounter any problems.
"
}
#This block allows symlinked shellscripts to correctly set classpath.
pushd . > /dev/null
DIR="${BASH_SOURCE[0]}"
while [ -h "$DIR" ]; do
cd "$(dirname "$DIR")"
DIR="$(readlink "$(basename "$DIR")")"
done
cd "$(dirname "$DIR")"
DIR="$(pwd)/"
popd > /dev/null
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/"
CP="$DIR""current/"
z="-Xmx2g"
z2="-Xms2g"
set=0
if [ -z "$1" ] || [[ $1 == -h ]] || [[ $1 == --help ]]; then
usage
exit
fi
calcXmx () {
source "$DIR""/calcmem.sh"
setEnvironment
parseXmx "$@"
if [[ $set == 1 ]]; then
return
fi
freeRam 2000m 84
z="-Xmx${RAM}m"
z2="-Xms${RAM}m"
}
calcXmx "$@"
shuffle() {
local CMD="java $EA $EOOM $z -cp $CP sort.Shuffle2 $@"
echo $CMD >&2
eval $CMD
}
shuffle "$@" | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/shuffle2.sh | shuffle2.sh |
BBTools Config File Readme
Written by Brian Bushnell
Last updated May 12, 2015
A config file is a text file with a set of parameters that will be added to the command line.
The format is one parameter per line, with the # symbol indicating comments.
To use a config file, use the config=file flag. For example, take BBDuk:
bbduk.sh in=reads.fq out=trimmed.fq ref=ref.fa k=23 mink=11 hdist=1 tbo tpe
That is equivalent to:
bbduk.sh in=reads.fq out=trimmed.fq ref=ref.fa config=trimadapters.txt
...if trimadapters.txt contained these lines:
k=23
mink=11
hdist=1
tbo
tpe
Any parameter placed AFTER the config file will override the same parameter if it is in the config file.
For example, in this case k=20 will be used:
bbduk.sh in=reads.fq out=trimmed.fq ref=ref.fa config=trimadapters.txt k=20
But in this case, k=23 will be used, from the config file:
bbduk.sh in=reads.fq out=trimmed.fq ref=ref.fa k=20 config=trimadapters.txt
What are config files for? Well, mainly, to overcome difficulties like whitespace in file paths, or command lines that are too long.
There are some example config files in bbmap/config/. They are not used unless you specifically tell a program to use them.
| ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/docs/readme_config.txt | readme_config.txt |
BBTools are sensitive to filename extensions. For example, this command:
reformat.sh in=reads.fq out=reads.fa.gz
...will convert reads from fastq format to gzipped fasta. The recognized sequence file extensions are as follows:
fastq (fq)
fasta (fa, fna, fas, ffn, frn, seq, fsa, faa)
sam
bam [requires samtools]
qual
scarf [input only]
phylip [input only; only supported by phylip2fasta.sh]
header [output only]
oneline [tab delimited 2-column: name and bases]
embl [input only]
gbk [input only]
The recognized compression extensions:
gzip (gz) [can be accelerated by pigz]
zip
bz2 [requires bzip2 or pbzip2 or lbzip2]
fqz [requires fqz_comp]
In order to stream using standard in or standard out, it is recommended to include the format. For example:
cat data.fq.gz | reformat.sh in=stdin.fq.gz out=stdout.fa > file.fa
This allows the tool to determine the format. Otherwise it will revert to the default.
BBTools can usually determine the type of sequence data by examining the contents. To test this, run:
fileformat.sh in=file
...which will print the way the data is detected, e.g. Sanger (ASCII-33) quality, interleaved, etc. These can normally be overridden with the "qin" and "interleaved" flags.
When BBTools are processing gzipped files, they may, if possible, attempt to spawn a pigz process to accelerate compression or decompression. This behavior can be forced with the "pigz=t unpigz=t" flags, or prevented with "pigz=f unpigz=f"; otherwise, the default behavior depends on the tool. In some cluster configurations, and on some Amazon nodes, spawning a process may cause the program to be killed with an indication that it used too much virtual memory. I recommend enabling pigz unless that scenario occurs.
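For example, to force pigz acceleration for both input and output (assuming pigz is installed; filenames are placeholders):
reformat.sh in=reads.fq.gz out=copy.fq.gz unpigz=t pigz=t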
The most recent extension added is "header". You can use it like this:
reformat.sh in=reads.fq out=reads.header minlen=100
That will create a file containing headers of reads that pass the "minlen" filter.
| ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/docs/readme_filetypes.txt | readme_filetypes.txt |
BBMap/BBTools readme
Written by Brian Bushnell
Last updated January 2, 2018
The BBTools package is primarily developed by Brian Bushnell, with some optional JNI and MPI components written by Jonathan Rood.
Some parts have also been written or modified by Shijie Yao, Alex Copeland, and Bryce Foster.
Citation:
Please see citation.txt
License:
The BBTools package is open source and free to use with no restrictions. For more information, please read Legal.txt and license.txt.
Documentation:
Documentation is in the /bbmap/docs/ directory, and in each tool's shellscript in /bbmap/.
readme.txt: This file.
UsageGuide.txt: Contains basic installation and usage information. Please read this first!
ToolDescriptions.txt: Contains a list of many BBTools, a description of what they do, and their hardware requirements.
compiling.txt: Information on compiling JNI code.
readme_config.txt: Usage information about config files.
readme_filetypes.txt: More detailed information on file formats supported by BBTools.
changelog.txt: List of changes by version, and current known issues.
Tool-specific Guides:
Some tools have specific guides, like BBDukGuide.txt. They are in /bbmap/docs/guides/. For complete documentation of a tool, I recommend that you read UsageGuide.txt first (which covers the shared functionality of all tools), then the tool's specific guide if it has one (such as ReformatGuide.txt), then the tool's shellscript (such as reformat.sh) which lists all of the flags.
Pipelines:
/bbmap/pipelines/ contains shellscripts. These are different than the ones in /bbmap/, which are wrappers for specific tools. The pipelines do not print a help message and do not accept any arguments. They are given to provide examples of the command lines and order of tools used to accomplish specific tasks.
Resources:
/bbmap/resources/ contains various data files. Most are fasta contaminant sequences. For more information see /bbmap/resources/contents.txt.
If you have any questions not answered in the documentation, please look at the relevant SeqAnswers thread (linked from here: http://seqanswers.com/forums/showthread.php?t=41057) and post a question there if it is not already answered. You can also contact JGI's BBTools team at [email protected], or me at [email protected]. But please read the documentation first.
Special thanks for help with shellscripts goes to:
Alex Copeland (JGI), Douglas Jacobsen (JGI/NERSC), Bill Andreopoulos (JGI), sdriscoll (SeqAnswers), Jon Rood (JGI/NERSC), and Elmar Pruesse (UC Denver).
Special thanks for helping to support BBTools goes to Genomax (SeqAnswers).
| ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/docs/readme.txt | readme.txt |
This is a readme for BBMap. However, it has not been maintained and is superseded by the information in the shellscript, bbmap.sh.
Basic Syntax:
(Using shellscript, under Unix, which autodetects RAM to set -Xmx parameter. You can also include a flag like '-Xmx31g' in the shellscript arguments to set RAM usage.)
To index:
bbmap.sh ref=<reference.fa>
To map:
bbmap.sh in=<reads.fq> out=<mapped.sam>
(without shellscript)
To index:
java -ea -Xmx31g -cp <PATH> align2.BBMap ref=<reference.fa>
To map:
java -ea -Xmx31g -cp <PATH> align2.BBMap in=<reads.fq> out=<mapped.sam>
...where "<PATH>" should indicate the path to the directory containing all the source code directories; e.g. "/usr/bin/bbmap/current"
Please note, the reference is only needed for building the index the first time; subsequently, just specify the build number which corresponds to that reference.
So for example the first time you map to e.coli you might specify "ref=ecoli_reference.fa build=3"; after that, just specify "build=3".
The index files would then be stored in ./ref/genome/3/ and ./ref/index/3/
Also, the -Xmx parameter should specify approximately 85% of the physical memory of the target machine; so, 21G for a 24GB node. The process needs approximately 8 bytes per reference base (plus a several hundred MB overhead).
Advanced Syntax:
Indexing Parameters (required when building the index):
path=<.> Base directory to store index files. Default is the local directory. The index will always be placed in a subdirectory "ref".
ref=<ref.fasta> Use this file to build the index. Needs to be specified only once; subsequently, the build number should be used.
build=<1> Write the index to this location (build=1 would be stored in /ref/genome/1/ and /ref/index/1/). Can be any integer. This parameter defaults to 1, but using additional numbers allows multiple references to be indexed in the same directory.
k=<13> Use length 13 kmers for indexing. Suggested values are 9-15, with lower typically being slower and more accurate. 13 is usually optimal. 14 is better for RNA-SEQ and very large references >4GB; 12 is better for PacBio and cross-species mapping.
midpad=<300> Put this many "N" in between scaffolds when making the index. 300 is fine for metagenomes with millions of contigs; for a finished genome like human with 25 scaffolds, this should be set to 100000+ to prevent cross-scaffold mapping.
startpad=<8000> Put this many "N" at the beginning of a "chrom" file when making index. It's best if this is longer than your longest expected read.
stoppad=<8000> Put this many "N" at the end of a "chrom" file when making index. It's best if this is longer than your longest expected read.
minscaf=<1> Do not include scaffolds shorter than this when generating index. Useful for assemblies with millions of fairly worthless unscaffolded contigs under 100bp. There's no reason to make this shorter than the kmer length.
usemodulo=<f> Throw away ~80% of kmers based on their remainder modulo a number. Reduces memory usage by around 50%, and reduces sensitivity slightly. Must be specified when indexing and when mapping.
Input Parameters:
path=<.> Base directory to read index files.
build=<1> Use the index at this location (same as when indexing).
in=<reads.fq> Use this as the input file for reads. Also accepts fasta. "in=sequential length=200" will break a genome into 200bp pieces and map them to itself. "in=stdin" will accept piped input. The format of piped input can be specified with e.g. "in=stdin.fq.gz" or "in=stdin.fa"; default is uncompressed fastq.
in2=<reads2.fq> Run mapping paired, with reads2 in the file "reads2.fq"
NOTE: As a shorthand, "in=reads#.fq" is equivalent to "in=reads1.fq in2=reads2.fq"
interleaved=<auto> Or "int". Set to "true" to run mapping paired, forcing the reads to be considered interleaved from a single input file. By default the reader will try to determine whether a file is interleaved based on the read names; so if you don't want this, set interleaved=false.
qin=<auto> Set to 33 or 64 to specify input quality value ASCII offset.
fastareadlen=<500> If fasta is used for input, breaks the fasta file up into reads of about this length. Useful if you want to map one reference against another, since BBMap currently has internal buffers limited to 500bp. I can change this easily if desired.
fastaminread=<1> Ignore fasta reads shorter than this. Useful if, say, you set fastareadlen=500, and get a length 518 read; this will be broken into a 500bp read and an 18bp read. But it's not usually worth mapping the 18bp read, which will often be ambiguous.
maxlen=<0> Break long fastq reads into pieces of this length.
minlen=<0> Throw away remainder of read that is shorter than this.
fakequality=<-1> Set to a positive number 1-50 to generate fake quality strings for fasta input reads. Less than one turns this function off.
blacklist=<a.fa,b.fa> Set a list of comma-delimited fasta files. Any read mapped to a scaffold name in these files will be considered "blacklisted" and can be handled differently by using the "outm", "outb", and "outputblacklisted" flags. The blacklist fasta files should also be merged with other fasta files to make a single combined fasta file; this combined file should be specified with the "ref=" flag when indexing.
touppercase=<f> Set true to convert lowercase read bases to upper case. This is required if any reads have lowercase letters (which real reads should never have).
Sampling Parameters:
reads=<-1> Process at most N reads, then stop. Useful for benchmarking. A negative number will use all reads.
samplerate=<1.0> Set to a fraction of 1 if you want to randomly sample reads. For example, samplerate=0.25 would randomly use a quarter of the reads and ignore the rest. Useful for huge datasets where all you want to know is the % mapped.
sampleseed=<1> Set to the RNG seed for random sampling. If this is set to a negative number, a random seed is used; for positive numbers, the number itself is the seed. Since the default is 1, this is deterministic unless you explicitly change it to a negative number.
idmodulo=<1> Set to a higher number if you want to map only every Nth read (for sampling huge datasets).
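For example, to map a random 10% sample of the reads with a fixed seed (filenames are placeholders; flags are documented above):
bbmap.sh in=reads.fq out=mapped.sam samplerate=0.1 sampleseed=5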
Mapping Parameters:
fast=<f> The fast flag is a macro. It will set many other paramters so that BBMap will run much faster, at slightly reduced sensitivity for most applications. Not recommended for RNAseq, cross-species alignment, or other situations where long deletions or low identity matches are expected.
minratio=<0.56> Alignment sensitivity as a fraction of a read's max possible mapping score. Lower is slower and more sensitive but gives more false positives. Ranges from 0 (very bad alignment) to 1 (perfect alignment only). Default varies between BBMap versions.
minidentity=<> Or "minid". Use this flag to set minratio more easily. If you set minid=0.9, for example, minratio will be set to a value that will be APPROXIMATELY equivalent to 90% identity alignments.
minapproxhits=<1> Controls minimum number of seed hits to examine a site. Higher is less accurate but faster (on large genomes). 2 is maybe 2.5x as fast and 3 is maybe 5x as fast on a genome with several gigabases. Does not speed up genomes under 100MB or so very much.
padding=<4> Sets extra padding for slow-aligning. Higher numbers are more accurate for indels near the tips of reads, but slower.
tipsearch=<100> Controls how far to look for possible deletions near tips of reads by brute force. tipsearch=0 disables this function. Higher is more accurate.
maxindel=<16000> Sets the maximum size of indels allowed during the quick mapping phase. Set higher (~100,000) for RNA-SEQ and lower (~20) for large assemblies with mostly very short contigs. Lower is faster.
strictmaxindel=<f> Set to true to disallow mappings with indels longer than maxindel. Alternately, for an integer X, 'strictmaxindel=X' is equivalent to the pair of flags 'strictmaxindel=t maxindel=X'.
pairlen=<32000> Maximum distance between mates allowed for pairing.
requirecorrectstrand=<t> Or "rcs". Requires correct strand orientation when pairing reads. Please set this to false for long mate pair libraries!
samestrandpairs=<f> Or "ssp". Defines correct strand orientation when pairing reads. Default is false, meaning opposite strands, as in Illumina fragment libraries. "ssp=true" mode is not fully tested.
killbadpairs=<f> Or "kbp". When true, if a read pair is mapped with an inappropriate insert size or orientation, the read with the lower mapping quality is marked unmapped.
rcompmate=<f> ***TODO*** Set to true if you wish the mate of paired reads to be reverse-complemented prior to mapping (to allow better pairing of same-strand pair libraries).
kfilter=<-1>	If set to a positive number X, all potential mapping locations that do not have X contiguous perfect matches with the read will be ignored. So, reads that map with "kfilter=51" are assured to have at least 51 contiguous bases that match the reference. Useful for mapping to assemblies generated by a De Bruijn graph assembler that used a kmer length of X, so that you know which reads were actually used in the assembly.
threads=<?>	Or "t". Set number of threads. Default is # of logical cores. The total number of active threads will be higher than this, because input and output are in separate threads.
perfectmode=<f> Only accept perfect mappings. Everything goes much faster.
semiperfectmode=<f> Only accept perfect or "semiperfect" mappings. Semiperfect means there are no mismatches of defined bases, but up to half of the reference is 'N' (to allow mapping to the edge of a contig).
rescue=<t> Controls whether paired may be rescued by searching near the mapping location of a mate. Increases accuracy, with usually a minor speed penalty.
expectedsites=<1> For BBMapPacBioSkimmer only, sets the expected number of correct mapping sites in the target reference. Useful if you are mapping reads to other reads with some known coverage depth.
msa=<> Advanced option, not recommended. Set classname of MSA to use.
bandwidth=0 Or "bw". When above zero, restricts alignment band to this width. Runs faster, but with reduced accuracy for reads with many or long indels.
bandwidthratio=0 Or "bwr". When above zero, restricts alignment band to this fraction of a read's length. Runs faster, but with reduced accuracy for reads with many or long indels.
usequality=<t> Or "uq". Set to false to ignore quality values when mapping. This will allow very low quality reads to be attempted to be mapped rather than discarded.
keepbadkeys=<f> Or "kbk". With kbk=false (default), read keys (kmers) have their probability of being incorrect evaluated from quality scores, and keys with a 94%+ chance of being wrong are discarded. This increases both speed and accuracy.
usejni=<f> Or "jni". Do alignments in C code, which is faster. Requires first compiling the C code; details are in /jni/README.txt. This will produce identical output.
maxsites2=<800> Don't analyze (or print) more than this many alignments per read.
minaveragequality=<0> (maq) Discard reads with average quality below this.
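For example, a fast, fairly strict mapping run might look like this (filenames are placeholders; flags are documented above):
bbmap.sh in=reads.fq out=mapped.sam fast minid=0.95 maxindel=20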
Post-Filtering Parameters:
idfilter=0 Different than "minid". No alignments will be allowed with an identity score lower than this value. This filter occurs at the very end and is unrelated to minratio, and has no effect on speed unless set to 1. Range is 0-1.
subfilter=-1 Ban alignments with more than this many substitutions.
insfilter=-1 Ban alignments with more than this many insertions.
delfilter=-1 Ban alignments with more than this many deletions.
indelfilter=-1 Ban alignments with more than this many indels.
editfilter=-1 Ban alignments with more than this many edits.
inslenfilter=-1 Ban alignments with an insertion longer than this.
dellenfilter=-1 Ban alignments with a deletion longer than this.
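For example, to report only alignments with at least 95% identity, at most 3 substitutions, and no indels (filenames are placeholders; flags are documented above):
bbmap.sh in=reads.fq out=mapped.sam idfilter=0.95 subfilter=3 indelfilter=0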
Output Parameters:
out=<outfile.sam>	Write output to this file. If out=null, output is suppressed. If you want to output paired reads to paired files, use a "#" symbol, like out=mapped#.sam. Then reads1 will go to mapped1.sam and reads2 will go to mapped2.sam. (NOTE: split output currently disabled for .sam format, but allowed for native .txt format). To print to standard out, use "out=stdout"
outm=<> Write only mapped reads to this file (excluding blacklisted reads, if any).
outu=<> Write only unmapped reads to this file.
outb=<> Write only blacklisted reads to this file. If a pair has one end mapped to a non-blacklisted scaffold, it will NOT go to this file. (see: blacklist)
out2=<> If you set out2, outu2, outm2, or outb2, the second read in each pair will go to this file. Not currently allowed for SAM format, but OK for others (such as fasta, fastq, bread).
overwrite=<f> Or "ow". Overwrite output file if it exists, instead of aborting.
append=<f> Or "app". Append to output file if it exists, instead of aborting.
ambiguous=<best> Or "ambig". Sets how to handle ambiguous reads. "first" or "best" uses the first encountered best site (fastest). "all" returns all best sites. "random" selects a random site from all of the best sites (does not yet work with paired-ends). "toss" discards all sites and considers the read unmapped (same as discardambiguous=true). Note that for all options (aside from toss) ambiguous reads in SAM format will have the extra field "XT:A:R" while unambiguous reads will have "XT:A:U".
ambiguous2=<best> (for BBSplit only) Or "ambig2". Only for splitter mode. Ambiguous2 strictly refers to any read that maps to more than one reference set, regardless of whether it has multiple mappings within a reference set. This may be set to "best" (aka "first"), in which case the read will be written only to the first reference to which it has a best mapping; "all", in which case a read will be written to outputs for all references to which it maps; "toss", in which case it will be considered unmapped; or "split", in which case it will be written to a special output file with the prefix "AMBIGUOUS_" (one per reference).
outputunmapped=<t> Outputs unmapped reads to primary output stream (otherwise they are dropped).
outputblacklisted=<t> Outputs blacklisted reads to primary output stream (otherwise they are dropped).
ordered=<f> Set to true if you want reads to be output in the same order they were input. This takes more memory, and can be slower, due to buffering in multithreaded execution. Not needed for singlethreaded execution.
ziplevel=<2> Sets output compression level, from 1 (fast) to 9 (slow). I/O is multithreaded, and thus faster when writing paired reads to two files rather than one interleaved file.
nodisk=<f> "true" will not write the index to disk, and may load slightly faster. Prevents collisions between multiple bbmap instances writing indexes to the same location at the same time.
usegzip=<f> If gzip is installed, output file compression is done with a gzip subprocess instead of with Java's native deflate method. Can be faster when set to true. The output file must end in a compressed file extension for this to have effect.
usegunzip=<f> If gzip is installed, input file decompression is done with a gzip subprocess instead of with Java's native inflate method. Can be faster when set to true.
pigz=<f> Spawn a pigz (parallel gzip) process for faster compression than Java or gzip. Requires pigz to be installed.
unpigz=<f> Spawn a pigz process for faster decompression than Java or gzip. Requires pigz to be installed.
bamscript=<filename> (bs for short) Writes a shell script to <filename> with the command line to translate the sam output of BBMap into a sorted bam file, assuming you have samtools in your path.
maxsites=<5> Sets maximum alignments to print per read, if secondary alignments are allowed. Currently secondary alignments may lack cigar strings.
secondary=<f> Print secondary alignments.
sssr=<0.95> (secondarysitescoreratio) Print only secondary alignments with score of at least this fraction of primary.
ssao=<f> (secondarysiteasambiguousonly) Only print secondary alignments for ambiguously-mapped reads.
quickmatch=<f> Generate cigar strings during the initial alignment (before the best site is known). Currently, this must be enabled to generate cigar strings for secondary alignments. It increases overall speed but may in some very rare cases yield inferior alignments due to less padding.
local=<f> Output local alignments instead of global alignments. The mapping will still be based on the best global alignment, but the mapping score, cigar string, and mapping coordinate will reflect a local alignment (using the same affine matrix as the global alignment).
sortscaffolds=<f>	Sort scaffolds alphabetically in SAM headers to allow easier comparisons with Tophat (in Cuffdiff, etc). Default is in same order as source fasta.
trimreaddescriptions=<f>	(trd) Truncate read names at the first whitespace, assuming that the remainder is a comment or description.
machineout=<f> Set to true to output statistics in machine-friendly 'key=value' format.
forcesectionname=<f> All fasta reads get an _# at the end of their name. The number is 1 for the first shred and continues ascending.
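For example, to write a sam file plus a separate file of unmapped reads, and generate a bam-conversion script (filenames are placeholders; the generated script assumes samtools is in your path):
bbmap.sh in=reads.fq out=mapped.sam outu=unmapped.fq bamscript=bs.sh
sh bs.sh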
Sam settings and flags:
samversion=<1.4> SAM specification version. Set to 1.3 for cigar strings with 'M' or 1.4 for cigar strings with '=' and 'X'. Samtools 0.1.18 and earlier are incompatible with sam format version 1.4 and greater.
saa=<t> (secondaryalignmentasterisks) Use asterisks instead of bases for sam secondary alignments.
cigar=<t> Generate cigar strings (for bread format, this means match strings). cigar=false is faster. "cigar=" is synonymous with "match=". This must be enabled if match/insertion/deletion/substitution statistics are desired, but the program will run faster with cigar strings disabled.
keepnames=<f> Retain original names of paired reads, rather than ensuring both reads have the same name when written in sam format by renaming read2 to the same as read1. If this is set to true then the output may not be sam compliant.
mdtag=<f> Generate MD tags for SAM files. Requires that cigar=true. I do not recommend generating MD tags for RNASEQ or other data where long deletions are expected because they will be incredibly long.
xstag=<f> Generate XS (strand) tags for Cufflinks. This should be used with a stranded RNA-seq protocol.
xmtag=<t> Generate XM tag. Indicates number of best alignments. May only work correctly with ambig=all.
nhtag=<f> Write NH tags.
intronlen=<999999999> Set to a lower number like 10 to change 'D' to 'N' in cigar strings for deletions of at least that length. This is used by Cufflinks; 'N' implies an intron while 'D' implies a deletion, but they are otherwise identical.
stoptag=<f> Allows generation of custom SAM tag YS:i:<read stop location>
idtag=<f> Allows generation of custom SAM tag YI:f:<percent identity>
scoretag=<f> Allows generation of custom SAM tag YR:i:<raw mapping score>
inserttag=<f> Write a tag indicating insert size, prefixed by X8:Z:
rgid=<> Set readgroup ID. All other readgroup fields can be set similarly, with the flag rgXX=value.
noheader=<f> Suppress generation of output header lines.
Statistics and Histogram Parameters:
showprogress=<f> Set to true to print out a '.' once per million reads processed. You can also change the interval with e.g. showprogress=20000.
qhist=<file> Output a per-base average quality histogram to <file>.
aqhist=<file> Write histogram of average read quality to <file>.
bqhist=<file> Write a quality histogram designed for box plots to <file>.
obqhist=<file> Write histogram of overall base counts per quality score to <file>.
qahist=<file> Quality accuracy histogram; correlates claimed phred quality score with observed quality based on substitution, insertion, and deletion rates.
mhist=<file> Output a per-base match histogram to <file>. Requires cigar strings to be enabled. The columns give fraction of bases at each position having each match string operation: match, substitution, deletion, insertion, N, or other.
ihist=<file> Output a per-read-pair insert size histogram to <file>.
bhist=<file> Output a per-base composition histogram to <file>.
indelhist=<file> Output an indel length histogram.
lhist=<file> Output a read length histogram.
ehist=<file> Output an errors-per-read histogram.
gchist=<file> Output a gc content histogram.
gchistbins=<100> (gcbins) Set the number of bins in the gc content histogram.
idhist=<file> Write a percent identity histogram.
idhistbins=<100> (idbins) Set the number of bins in the identity histogram.
scafstats=<file> Track mapping statistics per scaffold, and output to <file>.
refstats=<file> For BBSplitter, enable or disable tracking of read mapping statistics on a per-reference-set basis, and output to <file>.
verbosestats=<0> From 0-3; higher numbers will print more information about internal program counters.
printunmappedcount=<f> Set true to print the count of reads that were unmapped. For paired reads this only includes reads whose mate was also unmapped.
Coverage output parameters (these may reduce speed and use more RAM):
covstats=<file> Per-scaffold coverage info.
covhist=<file> Histogram of # occurrences of each depth level.
basecov=<file> Coverage per base location.
bincov=<file> Print binned coverage per location (one line per X bases).
covbinsize=1000 Set the binsize for binned coverage output.
nzo=f Only print scaffolds with nonzero coverage.
twocolumn=f Change to true to print only ID and Avg_fold instead of all 6 columns to the 'out=' file.
32bit=f Set to true if you need per-base coverage over 64k.
bitset=f Store coverage data in BitSets.
arrays=t Store coverage data in Arrays.
ksb=t Keep residual bins shorter than binsize.
strandedcov=f Track coverage for plus and minus strand independently. Requires a # symbol in coverage output filenames which will be replaced by 1 for plus strand and 2 for minus strand.
startcov=f Only track start positions of reads.
concisecov=f Write basecov in a more concise format.
Trimming Parameters:
qtrim=<f> Options are false, left, right, or both. Allows quality-trimming of read ends before mapping.
false: Disable trimming.
left (l): Trim left (leading) end only.
right (r): Trim right (trailing) end only. This is the end with lower quality on many platforms.
both (lr): Trim both ends.
trimq=<5> Set the quality cutoff. Bases will be trimmed until there are 2 consecutive bases with quality GREATER than this value; default is 5. If the read is from fasta and has no quality scores, Ns will be trimmed instead, as long as this is set to at least 1.
untrim=<f> Untrim the read after mapping, restoring the trimmed bases. The mapping position will be corrected (if necessary) and the restored bases will be classified as soft-clipped in the cigar string.
Java Parameters:
-Xmx If running from the shellscript, include it with the rest of the arguments and it will be passed to Java to set memory usage, overriding the shellscript's automatic memory detection. -Xmx20g will specify 20 gigs of RAM, and -Xmx200m will specify 200 megs. The max allowed is typically 85% of physical memory.
-da Disable assertions. Alternative is -ea which is the default.
Splitting Parameters:
The splitter is invoked by calling bbsplit.sh (or align2.BBSplitter) instead of bbmap.sh, for the indexing phase. It allows combining multiple references and outputting reads to different files depending on which one they mapped to best. The order in which references are specified is important in cases of ambiguous mappings; when a read has 2 identically-scoring mapping locations from different references, it will be mapped to the first reference.
All parameters are the same as BBMap with the exception of the ones listed below. You can still use "outu=" to capture unmapped reads.
ref_<name>=<fasta files> Defines a named set of organisms with a single fasta file or list. For example, ref_a=foo.fa,bar.fa defines the references for named set "a"; any read that maps to foo.fasta or bar.fasta will be considered a member of set a.
out_<name>=<output file> Sets the output file name for reads mapping to set <name>. out_a=stuff.sam would capture all the reads mapping to ref_a.
basename=<example%.sam> This is shorthand for mass-specifying all output files, where the % symbol is a wildcard for the set name. For example, "ref_a=a.fa ref_b=b.fa basename=mapped_%.sam" would expand to "ref_a=a.fa ref_b=b.fa out_a=mapped_a.sam out_b=mapped_b.sam"
ref=<fasta files> When run through the splitter, this is shorthand for writing a bunch of ref_<name> entries. "ref=a.fa,b.fa" would expand to "ref_a=a.fa ref_b=b.fa".
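For example, with illustrative file names, a complete run might look like:
bbsplit.sh ref_a=a.fa ref_b=b.fa in=reads.fq basename=mapped_%.sam outu=unmapped.fq
This builds the combined index for sets "a" and "b", maps reads.fq, writes mapped_a.sam and mapped_b.sam, and sends unmapped reads to unmapped.fq.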
Formats and Extensions
.gz,.gzip,.zip,.bz2 These file extensions are allowed on input and output files and will force reading/writing compressed data.
.fa,.fasta,.txt,.fq,.fastq These file extensions are allowed on input and output files. Having one is REQUIRED. So, reads.fq and reads.fq.zip are valid, but reads.zip is NOT valid. Note that outputting in fasta or fastq will not retain mapping locations.
.sam This is only allowed on output files.
.bam This is allowed on output files if samtools is installed. Beware of memory usage; samtools will run in a subprocess, and it can consume over 1kb per scaffold of the reference genome.
Different versions:
BBMap (bbmap.sh) Fastest version. Finds single best mapping location.
BBMapPacBio (mapPacBio.sh) Optimized for PacBio's error profile (more indels, fewer substitutions). Finds single best mapping location. PacBio reads should be in fasta format.
BBMapPacBioSkimmer (bbmapskimmer.sh) Designed to find ALL mapping locations with alignment score above a certain threshold; also optimized for PacBio reads.
BBSplitter (bbsplit.sh) Uses BBMap or BBMapPacBio to map to multiple references simultaneously, and output the reads to the file corresponding to the best-matching reference. Designed to split metagenomes or contaminated datasets prior to assembly.
BBWrap (bbwrap.sh) Maps multiple read files to the same reference, producing one sam file per input file. The advantage is that the reference/index only needs to be read once.
Notes.
File types are autodetected by parsing the filename. So you can name files, say, out.fq.gz or out.fastq.gz or reads1.fasta.bz2 or data.sam and it will work as long as the extensions are correct.
[end of docs/guides/BBMap_old_readme.txt]
import os
import sys
import argparse
import datetime
import shutil
import glob
import re
# import numpy as np
# import matplotlib
# matplotlib.use("Agg") ## This needs to skip the DISPLAY env var checking
# import matplotlib.pyplot as plt
# import mpld3
from pprint import pprint
## append the pipeline lib and tools relative path:
SRC_ROOT = os.path.abspath(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(SRC_ROOT + "/lib") # common
import rqc_fastq as fastqUtil
from common import get_logger, get_status, checkpoint_step, set_colors, get_subsample_rate, append_rqc_file, append_rqc_stats
from os_utility import run_sh_command, make_dir_p
from rqc_utility import get_dict_obj, pipeline_val
from readqc import do_html_body as do_readqc_html_body
from html_utility import html_tag
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## global vars
VERSION = "1.0.0"
SCRIPT_NAME = __file__
logLevel = "DEBUG"
DEBUG = False
color = {}
color = set_colors(color, True)
READQC_DIR = 'filtered-readqc'
PYDIR = os.path.abspath(os.path.dirname(__file__))
BBDIR = os.path.join(PYDIR, os.path.pardir)
FILTER_READ_COUNT = "read count"
FILTER_READ_BASE_COUNT = "read base count"  ## written when filtering leaves zero reads (see run_rqcfilter)
FILTER_READ_SAMPLED_COUNT = "read sampled count"
STATUS_LOG_FNAME = 'status.log'
BB_STATS_LIST_FILE_NAME = 'filterStats.txt'
STATS_LIST_FILE_NAME = 'filter-stats.txt'
PIPE_START = 'start'
RQCFILTER_START = 'start rqcfilter'
RQCFILTER_END = 'end rqcfilter'
POST_START = 'start post process'
POST_END = 'end post process'
QC_START = 'start qc'
QC_END = 'end qc'
HTML_START = 'start html generation'
HTML_END = 'end html generation'
PIPE_COMPLETE = 'complete'
STEP_ORDER = {
PIPE_START : 0,
RQCFILTER_START : 10,
# rqcfilter.sh recorded steps:
'clumpify start' : 11,
'clumpify finish' : 12,
'ktrim start' : 20,
'ktrim finish' : 21,
'delete temp files start' : 30,
'delete temp files finish' : 31,
'filter start' : 40,
'filter finish' : 41,
'delete temp files start' : 50,
'delete temp files finish' : 51,
'short filter start' : 60,
'short filter finish' : 61,
'delete temp files start' : 70,
'delete temp files finish' : 71,
'removeCommonMicrobes start' : 80,
'removeCommonMicrobes finish' : 81,
'delete temp files start' : 90,
'delete temp files finish' : 91,
'dehumanize start' : 100,
'dehumanize finish' : 101,
'delete temp files start' : 110,
'delete temp files finish' : 111,
'merge start' : 120,
'merge finish' : 121,
'khist start' : 130,
'khist finish' : 131,
'rqcfilter complete' : 200,
RQCFILTER_END : 300,
POST_START : 381,
POST_END : 382,
QC_START : 400,
QC_END : 410,
HTML_START : 420,
HTML_END : 430,
PIPE_COMPLETE : 999
}
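## NB: in a Python dict literal, duplicate keys keep only the last value, so the
## repeated 'delete temp files start'/'delete temp files finish' entries above
## collapse to 110/111.  The values only need to increase monotonically; the resume
## logic compares them, e.g. STEP_ORDER[status] < STEP_ORDER[RQCFILTER_END], to
## decide whether a step still has to run.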
FILTER_METHODS_TXT = {"DNA": "dna.txt",
"FUNGAL": "fungal.txt",
"METAGENOME": "metagenome.txt",
"VIRAL-METAGENOME": "viral-metagenome.txt",
"ISO": "iso.txt",
"SAG": "sag.txt",
"CELL-ENRICHMENT": "cell-enrichment.txt",
"PLANT-2X150": "plant-2x150.txt",
"PLANT-2X250": "plant-2x250.txt",
"RNA": "rna.txt",
"RNAWOHUMAN": "rnawohuman.txt",
"3PRIMERNA": "3primerna.txt",
"SMRNA": "smrna.txt",
"METATRANSCRIPTOME": "mtaa.txt",
"MTF": "mtaa.txt",
"LFPE": "lfpe.txt",
"CLRS": "clrs.txt",
"CLIP-PE": "clip-pe.txt",
"NEXTERA-LMP": "nextera-lmp.txt",
"NEXTERA": "nextera-lmp.txt",
"NEXTSEQ": "nextseq.txt",
"ITAG": "itag.txt",
"MICROTRANS": "microtrans.txt",
"BISULPHITE": "bisulphite.txt",
"CHIPSEQ": "chip-seq.txt"}
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## pipeline function definitions
def log_and_print(msg):
print(msg)
log.info(msg)
"""
Run rqcfilter.sh
@param infastq: the raw fastq input file [input of filtering]
@param outDir: the output directory
@param prodType: product type
@param status: current status
@param enableRmoveMicrobes, enableAggressive, disableRmoveMicrobes, disableClumpify, taxList, rdb: filtering options (see the command-line flags)
@return [outFastqFile, status]
"""
def run_rqcfilter(infastq, outDir, prodType, status, enableRmoveMicrobes, enableAggressive, disableRmoveMicrobes, disableClumpify, taxList, rdb, log):
log_and_print("\n\n%s - RUN RQCFILTER <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n" % (color['pink'], color['']))
make_dir_p(outDir)
opt = None
extraOptions = ""
optionFile = ""
if prodType.endswith("-OLD"):
extraOptions = " barcodefilter=f chastityfilter=f"
prodType = prodType.replace("-OLD", "")
if prodType in FILTER_METHODS_TXT:
optionFile = os.path.join(SRC_ROOT, "filter_param/", FILTER_METHODS_TXT[prodType].replace(".txt", ".config"))
log_and_print("Filter option file: %s" % optionFile)
opt = open(optionFile, 'r').readline().rstrip()
else:
log_and_print("The product type, %s, is not supported yet." % prodType)
sys.exit(2)
assert (opt), "Null filter options."
if prodType in ("METATRANSCRIPTOME", "MTF"):
if infastq.endswith(".gz"):
opt += " outribo=%s " % (os.path.basename(infastq).replace(".fastq", ".rRNA.fastq"))
else:
opt += " outribo=%s " % (os.path.basename(infastq).replace(".fastq", ".rRNA.fastq.gz"))
if enableRmoveMicrobes:
if opt.find("removemicrobes=f") != -1:
opt = opt.replace("removemicrobes=f", "removemicrobes=t")
opt += " removemicrobes=t "
if disableRmoveMicrobes:
if opt.find("removemicrobes=t") != -1:
opt = opt.replace("removemicrobes=t", "removemicrobes=f")
else: opt += " removemicrobes=f "
if enableAggressive:
opt += " aggressive=t microbebuild=3 "
if taxList:
opt += " taxlist=%s " % (taxList)
## Temp set clumpify=t for all prod types (RQC-890)
if not disableClumpify:
opt += " clumpify=t "
else:
opt += " clumpify=f "
opt += " tmpdir=null "
opt += extraOptions
cmd = os.path.join(BBDIR, "rqcfilter.sh")
filterLogFile = os.path.join(outDir, "filter.log")
cmdStr = "%s in=%s path=%s %s usejni=f rqcfilterdata=%s > %s 2>&1" % (cmd, infastq, outDir, opt, rdb, filterLogFile)
rtn = [None, status]
outFastqFile = None
shFileName = "%s/filter.sh" % outDir
def find_filtered_fastq(adir):
outFastqFile = None
searchPatt = os.path.join(adir, "*.fastq.gz")
outFastqFileList = glob.glob(searchPatt)
assert len(outFastqFileList) >= 1, "ERROR: cannot find *.fastq.gz output file."
for f in outFastqFileList:
f = os.path.basename(f)
t = f.split(".")
if t[-3] not in ("frag", "singleton", "unknown", "rRNA", "lmp"):
filterCode = t[-3]
elif t[-3] == "lmp": ## nextera
filterCode = t[-4]
if len(t) == 7: ## ex) 12345.1.1234.ACCCC.anqdpht.fastq.gz
fileNamePrefix = '.'.join(t[:4])
elif len(t) == 6: ## ex) 6176.5.39297.anqrpht.fastq.gz
fileNamePrefix = '.'.join(t[:3])
else:
log.warning("Unexpected filtered file name, %s", outFastqFileList)
fileNamePrefix = '.'.join(t[:-3])
log_and_print("Use %s as file prefix." %fileNamePrefix)
assert filterCode and fileNamePrefix, "ERROR: unexpected filter file name: %s" % (outFastqFileList)
of = os.path.join(adir, '.'.join([fileNamePrefix, filterCode, "fastq.gz"]))
lof = os.path.join(adir, '.'.join([fileNamePrefix, filterCode, "lmp.fastq.gz"]))
if os.path.isfile(of):
outFastqFile = of
elif os.path.isfile(lof):
outFastqFile = lof
else:
log.error("Cannot find fastq.gz file.")
# rename output file to *.filtered.fastq.gz
f = os.path.basename(outFastqFile)
t = f.split(".")
if t[-3] != 'filtered':
fto = os.path.join(adir, '.'.join(['.'.join(t[:-3]), 'filtered', "fastq.gz"]))
shutil.move(outFastqFile, fto)
outFastqFile = fto
return outFastqFile
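    ## e.g. a file named 12345.1.1234.ACCCC.anqdpht.fastq.gz parses as prefix
    ## "12345.1.1234.ACCCC" plus filter code "anqdpht" and is renamed to
    ## 12345.1.1234.ACCCC.filtered.fastq.gz before being returned.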
def find_filter_number(outFastqFile):
filteredReadNum = fastqUtil.check_fastq_format(outFastqFile) / 4
if filteredReadNum < 0:
log_and_print("RUN RQCFILTER - filtered fastq format error: %s." % outFastqFile)
return filteredReadNum
if STEP_ORDER[status] < STEP_ORDER[RQCFILTER_END]:
create_shell(shFileName, (cmdStr,))
log_and_print("rqcfilter cmd=[%s]" % cmdStr)
log_and_print("sh file name=[%s]" % shFileName)
stdOut, stdErr, exitCode = run_sh_command(shFileName, True, log, True) ## stdOut of 0 is success
if exitCode != 0:
log.error("Failed to run : %s, stdout : %s, stderr: %s", shFileName, stdOut, stdErr)
return rtn
outFastqFile = find_filtered_fastq(outDir)
filteredReadNum = find_filter_number(outFastqFile)
log_and_print("Read counts after RQCFILTER step = %d" % filteredReadNum)
log_and_print("RUN RQCFILTER - completed")
checkpoint(RQCFILTER_END, status)
status = RQCFILTER_END
if filteredReadNum == 0:
log.warning("No reads left after filtering")
checkpoint(PIPE_COMPLETE, status)
with open(BB_STATS_LIST_FILE_NAME, 'a') as fh:
write_stats(fh, FILTER_READ_COUNT, 0, log)
write_stats(fh, FILTER_READ_BASE_COUNT, 0, log)
else:
log_and_print("No need to rerun RQCFILTER step, get filtered files and stats ... ")
outFastqFile = find_filtered_fastq(outDir)
rtn = [outFastqFile, status]
return rtn
"""
Parse the rqcfilter.sh outputs (filter.log, refStats.txt, filterStats.txt) and write the
consolidated STATS_LIST_FILE_NAME stats file.
@param fastq: the raw fastq file
@param outDir: output dir
@param filteredFastq: the filtered fastq file
@param status: where the pipeline was at by last run
@param log
@return (filteredFastq, status)
"""
def post_process(fastq, outDir, filteredFastq, status, log):
## obtain read counts from input and filtered fastq files and save the values to STATS_LIST_FILE_NAME file;
## compress the filtered fastq file
log_and_print("\n\n%s - RUN POST PROCESS <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n" % (color['pink'], color['']))
if STEP_ORDER[status] < STEP_ORDER[POST_END]:
checkpoint(POST_START, status)
rawCnt = 0
rawBaseCnt = 0
newCnt = 0
newBaseCnt = 0
stats = get_dict_obj(BB_STATS_LIST_FILE_NAME)
rawCnt = pipeline_val('inputReads', {'type': 'int', 'vtype': 'numeric'}, stats)
rawBaseCnt = pipeline_val('inputBases', {'type': 'int', 'vtype': 'numeric'}, stats)
newCnt = pipeline_val('outputReads', {'type': 'int', 'vtype': 'numeric'}, stats)
newBaseCnt = pipeline_val('outputBases', {'type': 'int', 'vtype': 'numeric'}, stats)
readCounts = {}
readRmPct = 100.0 * ((rawCnt - newCnt) / float(rawCnt))
baseRmPct = 100.0 * ((rawBaseCnt - newBaseCnt) / float(rawBaseCnt))
readCounts['readRmPct'] = '%.3f' % readRmPct
readCounts['baseRmPct'] = '%.3f' % baseRmPct
refStats = {}
filterLogStat = {}
cardinality = None
bbdukVersion = None
bbmapVersion = None
if os.path.isfile("filter.log"):
with open(os.path.join(outDir, "filter.log"), "r") as FLFH:
isContamNumChecked = False ## Contamination will be done twice for removeribo or for MTF
isKtrimmedTotalRemovedNumChecked = False ## for parsing "Total Removed" after ktrimming
for l in FLFH:
if l.startswith("Input:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 2
if 'adaptertriminput' not in filterLogStat:
filterLogStat["adaptertriminput"] = {"numreads": toks[0], "numbases": toks[1]}
elif 'contamtriminput' not in filterLogStat:
filterLogStat["contamtriminput"] = {"numreads": toks[0], "numbases": toks[1]}
elif l.startswith("FTrimmed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["ftrimmed"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("KTrimmed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["ktrimmed"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
isKtrimmedTotalRemovedNumChecked = True
## RQCSUPPORT-1987
elif l.startswith("Total Removed:") and isKtrimmedTotalRemovedNumChecked:
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["ktrimmed_total_removed"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
isKtrimmedTotalRemovedNumChecked = False
elif l.startswith("Trimmed by overlap:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["trimmedbyoverlap"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("Result:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
if 'adaptertrimresult' not in filterLogStat:
filterLogStat["adaptertrimresult"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif 'contamtrimresult' not in filterLogStat:
filterLogStat["contamtrimresult"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("Unique 31-mers:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 2 or len(toks) == 1
if 'adaptertrimunique31mers' not in filterLogStat:
if len(toks) == 2:
filterLogStat["adaptertrimunique31mers"] = {"num": toks[1]}
else:
filterLogStat["adaptertrimunique31mers"] = {"num":"0"}
else:
if len(toks) == 2:
filterLogStat["contamtrimunique31mers"] = {"num": toks[1]}
else:
filterLogStat["contamtrimunique31mers"] = {"num":"0"}
elif not isContamNumChecked and l.startswith("Contaminants:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["contaminants"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
isContamNumChecked = True
elif l.startswith("QTrimmed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["qtrimmed"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("Short Read Discards:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["shortreaddiscards"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("Low quality discards:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["lowqualitydiscards"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("BBDuk version"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 1
bbdukVersion = toks[0]
elif l.startswith("BBMap version"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 1
bbmapVersion = toks[0]
## BBDuk 36.12 06272016
elif l.startswith("Adapter Sequence Removed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["adaptersequenceremoved"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("Synthetic Contam Sequence Removed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["syntheticcontamsequenceremoved"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
## 08112016
elif l.startswith("Short Synthetic Contam Sequence Removed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["shortsyntheticcontamsequenceremoved"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
elif l.startswith("Ribosomal Sequence Removed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["ribosomalsequenceremoved"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
## BBMap 36.12 06272016
elif l.startswith("Human Sequence Removed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["humansequenceremoved"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
## RQC-862, RQC-880
elif l.startswith("Microbial Sequence Removed:"):
toks = re.findall("(\d+.\d*)", l.rstrip())
assert len(toks) == 4
filterLogStat["microbialremoved"] = {"numreads": toks[0], "percreads": toks[1], "numbases": toks[2], "percbases": toks[3]}
##
## refStats.txt format
##
## name %unambiguousReads unambiguousMB %ambiguousReads ambiguousMB unambiguousReads ambiguousReads
## human_masked 85.24693 498.92052 0.09378 0.55290 3350692 3686
## mouse_masked 0.03765 0.21670 0.10802 0.63690 1480 4246
## cat_masked 0.01862 0.09568 0.02514 0.14820 732 988
## dog_masked 0.00697 0.03815 0.01384 0.08160 274 544
##
if os.path.isfile("refStats.txt"):
refStatsFile = os.path.join(outDir, "refStats.txt")
with open(refStatsFile) as RFH:
## Need to report 0 if nothing matched
refStats['human'] = {"unambiguousReadsPerc":"0", "unambiguousMB":"0", "ambiguousReadsPerc":"0", "ambiguousMB":"0", "unambiguousReads":"0", "ambiguousReads":"0", "totalPerc":"0"}
refStats['cat'] = {"unambiguousReadsPerc":"0", "unambiguousMB":"0", "ambiguousReadsPerc":"0", "ambiguousMB":"0", "unambiguousReads":"0", "ambiguousReads":"0", "totalPerc":"0"}
refStats['dog'] = {"unambiguousReadsPerc":"0", "unambiguousMB":"0", "ambiguousReadsPerc":"0", "ambiguousMB":"0", "unambiguousReads":"0", "ambiguousReads":"0", "totalPerc":"0"}
refStats['mouse'] = {"unambiguousReadsPerc":"0", "unambiguousMB":"0", "ambiguousReadsPerc":"0", "ambiguousMB":"0", "unambiguousReads":"0", "ambiguousReads":"0", "totalPerc":"0"}
for l in RFH:
if l:
if l.startswith("#"):
continue
toks = l.rstrip().split()
assert len(toks) >= 7
## the number and percent of reads that map unambiguously or ambiguously to human, cat, dog.
## take the sum of the two numbers (ambiguous plus unambiguous) to use as the final percentage.
if l.startswith("human"):
refStats['human'] = {"unambiguousReadsPerc": toks[1], "unambiguousMB": toks[2], "ambiguousReadsPerc": toks[3], "ambiguousMB": toks[4], "unambiguousReads": toks[5], "ambiguousReads": toks[6], "totalPerc":float(toks[3])+float(toks[1])}
if l.startswith("cat"):
refStats['cat'] = {"unambiguousReadsPerc": toks[1], "unambiguousMB": toks[2], "ambiguousReadsPerc": toks[3], "ambiguousMB": toks[4], "unambiguousReads": toks[5], "ambiguousReads": toks[6], "totalPerc":float(toks[3])+float(toks[1])}
if l.startswith("dog"):
refStats['dog'] = {"unambiguousReadsPerc": toks[1], "unambiguousMB": toks[2], "ambiguousReadsPerc": toks[3], "ambiguousMB": toks[4], "unambiguousReads": toks[5], "ambiguousReads": toks[6], "totalPerc":float(toks[3])+float(toks[1])}
if l.startswith("mouse"):
refStats['mouse'] = {"unambiguousReadsPerc": toks[1], "unambiguousMB": toks[2], "ambiguousReadsPerc": toks[3], "ambiguousMB": toks[4], "unambiguousReads": toks[5], "ambiguousReads": toks[6], "totalPerc":float(toks[3])+float(toks[1])}
log.debug("refStats.txt: %s", str(refStats))
###########################################################
log_and_print("Write to stats file %s" % STATS_LIST_FILE_NAME)
###########################################################
if os.path.isfile(STATS_LIST_FILE_NAME):
os.remove(STATS_LIST_FILE_NAME)
with open(BB_STATS_LIST_FILE_NAME) as bbfh:
with open(STATS_LIST_FILE_NAME, 'a') as fh:
for line in bbfh:
if not line.startswith("#") and line.strip():
fh.write(line)
bbtoolsVersion = None
stats = get_dict_obj(STATS_LIST_FILE_NAME)
with open(STATS_LIST_FILE_NAME, 'a') as fh:
for key in readCounts:
if key not in stats:
write_stats(fh, key, readCounts[key], log)
for key in refStats:
for k in refStats[key]:
write_stats(fh, key+'_'+k, refStats[key][k], log)
write_stats(fh, "cardinality", cardinality, log)
## Write refStats to filterStats.txt file
for key in filterLogStat:
for k in filterLogStat[key]:
write_stats(fh, key + '_' + k, filterLogStat[key][k], log)
bbversionCmd = os.path.join(BBDIR, 'bbversion.sh')
cmd = "%s" % (bbversionCmd)
stdOut, _, exitCode = run_sh_command(cmd, True, log)
assert stdOut is not None
bbtoolsVersion = stdOut.strip()
## 05112017 Now bbtools version = bbmap version
# bbtoolsVersion = bbmapVersion if bbmapVersion else "37.xx"
assert bbtoolsVersion is not None
write_stats(fh, "filter_tool", "bbtools " + bbtoolsVersion, log)
write_stats(fh, "filter", VERSION, log)
## Version recording
if bbdukVersion is None: bbdukVersion = bbtoolsVersion
if bbmapVersion is None: bbmapVersion = bbtoolsVersion
write_stats(fh, "bbduk_version", bbdukVersion, log)
write_stats(fh, "bbmap_version", bbmapVersion, log)
checkpoint(POST_END, status)
status = POST_END
else:
log_and_print('No need to do post processing.')
return filteredFastq, status
##==============================================================================
## Helper functions
def clean_up(fList, log):
log_and_print("\n\n%s - CLEAN UP <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n" % ( color['pink'], color['']))
for f in fList:
if os.path.isfile(f):
log_and_print("Removing %s ... ", f)
os.remove(f)
log_and_print("CLEAN UP - completed")
def create_shell(shName, cmdArray):
with open(shName, 'w') as fh:
fh.write("#!/bin/bash\n")
fh.write("set -e\n")
fh.write("set -o pipefail\n")
#fh.write("module unload blast+; module load blast+\n")
for item in cmdArray:
fh.write("%s\n" % item)
os.chmod(shName, 0755) #-rwxr-xr-x
## checkpoint logging
def checkpoint(status, fromStatus=PIPE_START):
if status == PIPE_START or STEP_ORDER[status] > STEP_ORDER[fromStatus]:
checkpoint_step(STATUS_LOG_FNAME, status)
def write_stats(fh, k, v, log):
line = "%s=%s" % (k, v)
fh.write("%s\n" % line)
def read_qc(odir, fastq, status):
log_and_print("\n\n%s - RUN QC <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n" % (color['pink'], color['']))
if not os.path.isfile(fastq):
        return False, None, status
qcdir = os.path.join(odir, READQC_DIR)
if STEP_ORDER[status] < STEP_ORDER[QC_END]:
checkpoint(QC_START, status)
qcexe = os.path.join(os.path.dirname(__file__), 'readqc.py')
cmd = '%s -o %s -f %s --skip-blast' % (qcexe, qcdir, fastq)
# print('DEBUG : %s' % cmd)
stdOut, stdErr, exitCode = run_sh_command(cmd, True)
if exitCode != 0:
print('ERROR : %s' % stdErr)
return False, qcdir, status
checkpoint(QC_END, status)
status = QC_END
else:
log_and_print("No need to do qc step.")
return True, qcdir, status
def do_html_body(odir, rawFastq, filteredFastq, qcdir=None):
stats = get_dict_obj(os.path.join(odir, STATS_LIST_FILE_NAME))
tok_map = {
'inputReads' : {'token' : '[_RAW-READ-CNT_]', 'type': 'bigint'},
'inputBases' : {'token' : '[_RAW-BASE-CNT_]', 'type': 'bigint'},
'outputReads' : {'token' : '[_FILTERED-READ-CNT_]', 'type': 'bigint'},
'outputBases' : {'token' : '[_FILTERED-BASE-CNT_]', 'type': 'bigint'},
'readRmPct' : {'token' : '[_REMOVED-READ-PCT_]', 'type': 'raw'},
'baseRmPct' : {'token' : '[_REMOVED-BASE-PCT_]', 'type': 'raw'},
'lowqualitydiscards_numreads' : {'token' : '[_LOW-QUAL-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'lowqualitydiscards_percreads' : {'token' : '[_LOW-QUAL-REMOVED-PCT_]', 'type': 'raw', 'filter': 0},
'contaminants_numreads' : {'token' : '[_ARTI-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'contaminants_percreads' : {'token' : '[_ARTI-REMOVED-PCT_]', 'type': 'raw', 'filter': 0},
'ribosomalsequenceremoved_numreads' : {'token' : '[_RRNA-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'ribosomalsequenceremoved_percreads' : {'token' : '[_RRNA-REMOVED-READ-PCT_]', 'type': 'raw', 'filter': 0},
'microbialremoved_numreads' : {'token' : '[_MICROBE-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'microbialremoved_percreads' : {'token' : '[_MICROBE-REMOVED-READ-PCT_]', 'type': 'raw', 'filter': 0},
'human_unambiguousreads' : {'token' : '[_HUMAN-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'human_unambiguousreadsperc' : {'token' : '[_HUMAN-REMOVED-READ-PCT_]', 'type': 'raw', 'filter': 0},
'dog_unambiguousreads' : {'token' : '[_DOG-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'dog_unambiguousreadsperc' : {'token' : '[_DOG-REMOVED-READ-PCT_]', 'type': 'raw', 'filter': 0},
'cat_unambiguousreads' : {'token' : '[_CAT-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'cat_unambiguousreadsperc' : {'token' : '[_CAT-REMOVED-READ-PCT_]', 'type': 'raw', 'filter': 0},
'mouse_unambiguousreads' : {'token' : '[_MOUSE-REMOVED-READ-CNT_]', 'type': 'bigint', 'filter': 0},
'mouse_unambiguousreadsperc' : {'token' : '[_MOUSE-REMOVED-READ-PCT_]', 'type': 'raw', 'filter': 0},
}
temp = os.path.join(PYDIR, 'template/filter_body_template.html')
html = ''
with open(temp, 'r') as fh:
html = fh.read()
## do the place-holder replacement !!
html = html.replace('[_RAW-FILE-LOCATION_]', rawFastq)
html = html.replace('[_FILTERED-FILE-LOCATION_]', filteredFastq)
fsize = format(os.stat(rawFastq).st_size / (1024*1024), ',')
html = html.replace('[_RAW-FILE-SIZE_]', fsize)
fsize = format(os.stat(filteredFastq).st_size / (1024*1024), ',')
html = html.replace('[_FILTERED-FILE-SIZE_]', fsize)
for key in tok_map:
dat = tok_map[key]
html = html.replace(dat['token'], pipeline_val(key, dat, stats))
# readqc on the filter file
if qcdir:
hbody = do_readqc_html_body(qcdir, odir)
else:
hbody = ''
html = html.replace('[_FILTERED-READ-QC_]', hbody)
return html
def do_html(odir, qcdir, rawFastq, filteredFastq, status):
log_and_print("\n\n%s - Create HTML file <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n" % (color['pink'], color['']))
fname = os.path.basename(rawFastq)
stats = get_dict_obj(os.path.join(odir, STATS_LIST_FILE_NAME))
temp = os.path.join(PYDIR, 'template/template.html')
with open(temp, 'r') as fh:
html = fh.read()
html = html.replace('[_PAGE-TITLE_]', 'Filter Report')
html = html.replace('[_REPORT-TITLE_]', 'BBTools Filtering Report')
html = html.replace('[_INPUT-FILE-NAME_]', fname)
html = html.replace('[_REPORT-DATE_]', '{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now()))
    hbody = do_html_body(odir, rawFastq, filteredFastq, qcdir)
html = html.replace('[_REPORT-BODY_]', hbody)
fbasename = 'filter.log'
fname = os.path.join(outputPath, fbasename)
html = html.replace('[_FILTER-LOG_]', html_tag('a', fbasename, {'href': fbasename}))
    fsize = '%.1f' % (float(os.stat(fname).st_size) / 1024.0)
html = html.replace('[_FILTER-LOG-SIZE_]', fsize)
# fbasename = 'filter.txt'
# fname = os.path.join(outputPath, fbasename)
# html = html.replace('[_FILTER-REPORT_]', html_tag('a', fbasename, {'href': fbasename}))
# fsize = '%.1f' % (float(os.stat(fname).st_size) / 2014.0)
# html = html.replace('[_FILTER-REPORT-SIZE_]', fsize)
## write the html to file
idxfile = os.path.join(odir, 'index.html')
with open(idxfile, 'w') as fh2:
fh2.write(html)
print('HTML index file written to %s' % idxfile)
# copy the css file
cssdir = os.path.join(PYDIR, 'css')
todir = os.path.join(odir, 'css')
if os.path.isdir(todir):
shutil.rmtree(todir)
shutil.copytree(cssdir, todir, False, None)
# copy the image file
imgdir = os.path.join(PYDIR, 'images')
todir = os.path.join(odir, 'images')
if os.path.isdir(todir):
shutil.rmtree(todir)
shutil.copytree(imgdir, todir, False, None)
return status
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## main program
if __name__ == "__main__":
## Parse options
usage = "* Filter Pipeline, version %s\n" % (VERSION)
origCmd = ' '.join(sys.argv)
## command line options
parser = argparse.ArgumentParser(description=usage, formatter_class=argparse.ArgumentDefaultsHelpFormatter)
PROD_TYPE = [
'DNA',
'FUNGAL',
'METAGENOME',
'VIRAL-METAGENOME',
'SAG',
'ISO',
'CELL-ENRICHMENT',
'PLANT-2x150',
'PLANT-2x250',
'RNA',
'SMRNA',
'METATRANSCRIPTOME',
'LFPE',
'CLRS',
'CLIP-PE',
'NEXTERA',
'ITAG',
'MICROTRANS',
'BISULPHITE',
'3PRIMERNA',
'CHIPSEQ',
'RNAWOHUMAN'
]
parser.add_argument("-f", "--fastq", dest="fastq", help="Set input fastq file (full path to fastq)", required=True)
parser.add_argument("-o", "--output-path", dest="outputPath", help="Set output path to write to", required=True)
parser.add_argument("-p", "--prod-type", dest="prodType", help="Set product type: %s" % PROD_TYPE, required=True)
parser.add_argument("-rdb", "--ref-databases", dest="rqcfilterdata", help="Path to RQCFilterData dir", required=True)
parser.add_argument("-t", "--taxlist", dest="taxList", help="A list of taxid(s) to exclude in CSV format", required=False)
parser.add_argument("-ap", "--ap", dest="apNum", help="Set AP (Analysis Project) ID. Ex) -ap 123 or -ap 123,456,789", required=False)
parser.add_argument("-at", "--at", dest="atNum", help="Set AT (Analysis Task) ID. Ex) -at 123 or -at 123,456,789", required=False)
parser.add_argument("-v", "--version", action="version", version=VERSION)
## switches
parser.add_argument("-qc", "--qc", dest="doqc", action="store_true", help="also perform readqc analysis on the filtered output", default=False)
parser.add_argument("-a", "--aggressive", dest="enableAggressive", action="store_true", help="Enable aggressive=t and microbebuild=3", default=False)
parser.add_argument("-b", "--skip-blast", dest="skipBlastFlag", action="store_true", help="Skip Blast search", default=False)
parser.add_argument("-c", "--skip-cleanup", dest="skipCleanUp", action="store_true", help="Skip file clean up after pipeline complete", default=False)
parser.add_argument("-d", "--debug", dest="doDebug", action="store_true", help="Enable debug mode")
parser.add_argument("-l", "--disable-clumpify", dest="disableClumpify", action="store_true", help="Disable clumpify", default=False)
parser.add_argument("-m", "--skip-microbes-removal", dest="disableRmoveMicrobes", action="store_true", help="Skip microbes removal", default=False)
parser.add_argument("-r", "--contam", dest="enableRmoveMicrobes", action="store_true", help="Enable removemicrobes=t", default=False)
parser.add_argument("-s", "--skip-subsampleqc", dest="skipSubsampleQc", action="store_true", help="Skip the subsample and qc step", default=False)
parser.add_argument("-pl", "--print-log", dest="print_log", default = False, action = "store_true", help = "print log to screen")
## produce html when processing is done
parser.add_argument("-html", "--html", action="store_true", help="Create html file", dest="html", default=False, required=False)
skipCleanUp = False
skipSubsampleQc = False
outputPath = None ## output path, defaults to current working directory
fastq = None ## full path to input fastq
# logLevel = "DEBUG"
enableRmoveMicrobes = False
disableRmoveMicrobes = False
disableClumpify = False
enableAggressive = False
skipBlastFlag = False
apNum = None
atNum = None
taxList = ""
options = parser.parse_args()
print_log = options.print_log
if options.outputPath:
outputPath = options.outputPath
if not outputPath:
outputPath = os.getcwd()
## create output_directory if it doesn't exist
if not os.path.isdir(outputPath):
os.makedirs(outputPath)
outputPath = os.path.realpath(outputPath)
outputPath = outputPath.replace("/chos", "") if outputPath.startswith("/chos") else outputPath
## initialize my logger
logFile = os.path.join(outputPath, "rqc_filter_pipeline.log")
print "Started filtering pipeline with %s, writing log to: %s" % (SCRIPT_NAME, logFile)
log = get_logger("filter", logFile, logLevel, print_log, True)
if options.doDebug:
DEBUG = True
if options.skipCleanUp:
skipCleanUp = True
if options.skipBlastFlag:
skipBlastFlag = True
if options.skipSubsampleQc:
skipSubsampleQc = True
if options.enableRmoveMicrobes:
if options.disableRmoveMicrobes:
log.error("Conflict in option parameters: cannot set skip-contam with skip-microbes-removal.")
sys.exit(0)
enableRmoveMicrobes = True
if options.disableRmoveMicrobes:
if options.enableRmoveMicrobes:
log.error("Conflict in option parameters: cannot set skip-contam with skip-microbes-removal.")
sys.exit(0)
disableRmoveMicrobes = True
if options.enableAggressive:
enableAggressive = True
if options.disableClumpify:
disableClumpify = True
if options.fastq:
fastq = options.fastq
if options.taxList:
taxList = options.taxList
if options.apNum:
apNum = options.apNum
if options.atNum:
atNum = options.atNum
log_and_print("%s" % '#' * 80)
log_and_print(" Filtering pipeline (version %s)" % VERSION)
log_and_print("%s" % '#' * 80)
prodType = options.prodType.upper()
skipSubsampleQcFlag = options.skipSubsampleQc ## run subsample and qc or not
if not os.path.isdir(outputPath):
log.error("Cannot work with directory: %s", outputPath)
## check for fastq file
if fastq:
if not os.path.isfile(fastq):
log.error("Input fastq file, %s not found, abort!", fastq)
else:
log.error("No fastq defined, abort!")
fastq = os.path.realpath(fastq)
fastq = fastq.replace("/chos", "") if fastq.startswith("/chos") else fastq
##--------------------------------
## init log
log_and_print("Started pipeline, writing log to: %s" % logFile)
log_and_print("CMD: %s" % origCmd)
log_and_print("Fastq file: %s" % fastq)
log_and_print("Output path: %s" % outputPath)
os.chdir(outputPath)
cycle = 0
cycleMax = 1
bIsPaired = False
status = get_status(STATUS_LOG_FNAME, log)
log_and_print("Starting pipeline at [%s]" % status)
if status == PIPE_START:
checkpoint(PIPE_START)
## main loop: retry upto cycleMax times
while cycle < cycleMax:
cycle += 1
log_and_print("ATTEMPT [%d]" % cycle)
filesToRemove = [] # list of intermediate files for clean up
lastFastq = fastq # lastFastq : fastq produced by each step, init to input
rRnaFilterFile = None # MTF generates this 2nd filtered output file
subsampledFastq = None
bIsPaired = None
filteredReadNum = -1
filteredReadNumRrna = -1
FragFile = SingletonFile = UnknownFile = None
if cycle > 1:
status = get_status(STATUS_LOG_FNAME, log)
if status != PIPE_COMPLETE:
##
## Run rqcfilter.sh
##
lastFastq, status = run_rqcfilter(lastFastq, outputPath, prodType, status, enableRmoveMicrobes, enableAggressive, disableRmoveMicrobes, disableClumpify, taxList, options.rqcfilterdata, log) ## Only MTF type generates rRnaFilterFile
if filteredReadNum == 0:
break
##--------------------------------
## Run post processing
if lastFastq is not None and lastFastq != -1:
lastFastq, status = post_process(fastq, outputPath, lastFastq, status, log)
else:
print "Failed @ rqcfilter"
##--------------------------------
## Clean up
if lastFastq is not None and lastFastq != -1:
## run readQC on the filtered fastq
if options.doqc:
rtn, qcdir, status = read_qc(outputPath, lastFastq, status)
else:
qcdir = None
rtn = True
## create html file on the readQC results
if rtn:
status = do_html(outputPath, qcdir, fastq, lastFastq, status)
checkpoint(PIPE_COMPLETE)
if not skipCleanUp:
clean_up(filesToRemove, log)
else:
log_and_print("SKIP CLEANUP")
log_and_print("Pipeline Completed")
cycle = cycleMax + 1
else:
print "Failed @ postprocess"
else:
cycle = cycleMax + 1
log_and_print("Pipeline already completed")
print "Done."
sys.exit(0)
## EOF (end of pytools/filter.py)
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## libraries to use
import os
import sys
import argparse
import datetime
import shutil
SRC_ROOT = os.path.abspath(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(SRC_ROOT + "/lib") # common
from readqc_constants import RQCReadQcConfig, RQCReadQc, ReadqcStats
from common import get_logger, get_status, append_rqc_stats, append_rqc_file, set_colors, get_subsample_rate, run_command
from readqc_utils import *
#from readqc_utils import checkpoint_step_wrapper, fast_subsample_fastq_sequences, write_unique_20_mers, illumina_read_gc
from rqc_fastq import get_working_read_length, read_count
from readqc_report import *
from os_utility import make_dir
from html_utility import html_tag, html_th, html_tr, html_link
from rqc_utility import get_dict_obj, pipeline_val
VERSION = "1.0.0"
LOG_LEVEL = "DEBUG"
SCRIPT_NAME = __file__
PYDIR = os.path.abspath(os.path.dirname(__file__))
BBDIR = os.path.join(PYDIR, os.path.pardir)
color = {}
color = set_colors(color, True)
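## Each STEP function below analyzes the (optionally subsampled) fastq, records its
## progress with checkpoint_step_wrapper(), and registers output files and metrics via
## append_rqc_file()/append_rqc_stats() for the downstream report.  Step 7 was removed
## on 20140903 (see below), so the numbering jumps from STEP6 to STEP8.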
"""
STEP1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_fast_subsample_fastq_sequences(fastq, skipSubsampling, log):
log.info("\n\n%sSTEP1 - Subsampling reads <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "1_illumina_readqc_subsampling in progress"
checkpoint_step_wrapper(status)
log.info("1_illumina_readqc_subsampling in progress.")
inputReadNum = 0
sampleRate = 0.0
if not skipSubsampling:
# sampleRate = RQCReadQc.ILLUMINA_SAMPLE_PCTENTAGE ## 0.01
inputReadNum = read_count(fastq) ## read count from the original fastq
assert inputReadNum > 0, "ERROR: invalid input fastq"
sampleRate = get_subsample_rate(inputReadNum)
log.info("Subsampling rate = %s", sampleRate)
else:
log.info("1_illumina_readqc_subsampling: skip subsampling. Use all the reads.")
sampleRate = 1.0
retCode = None
totalReadNum = 0
firstSubsampledFastqFileName = ""
sequnitFileName = os.path.basename(fastq)
sequnitFileName = sequnitFileName.replace(".fastq", "").replace(".gz", "")
firstSubsampledFastqFileName = sequnitFileName + ".s" + str(sampleRate) + ".fastq"
retCode, firstSubsampledFastqFileName, totalBaseCount, totalReadNum, subsampledReadNum, bIsPaired, readLength = fast_subsample_fastq_sequences(fastq, firstSubsampledFastqFileName, sampleRate, True, log)
append_rqc_stats(statsFile, "SUBSAMPLE_RATE", sampleRate, log)
if retCode in (RQCExitCodes.JGI_FAILURE, -2):
log.info("1_illumina_readqc_subsampling failed.")
status = "1_illumina_readqc_subsampling failed"
checkpoint_step_wrapper(status)
else:
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT, totalBaseCount, log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_COUNT, totalReadNum, log)
log.info("1_illumina_readqc_subsampling complete.")
status = "1_illumina_readqc_subsampling complete"
checkpoint_step_wrapper(status)
return status, firstSubsampledFastqFileName, totalReadNum, subsampledReadNum, bIsPaired, readLength
"""
STEP2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_write_unique_20_mers(fastq, totalReadCount, log):
log.info("\n\n%sSTEP2 - Sampling unique 25 mers <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
if totalReadCount >= RQCReadQc.ILLUMINA_MER_SAMPLE_REPORT_FRQ * 2: ## 25000
log.debug("read count total in step2 = %s", totalReadCount)
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "2_unique_mers_sampling in progress"
checkpoint_step_wrapper(status)
log.info(status)
retCode, newDataFile, newPngPlotFile, newHtmlPlotFile = write_unique_20_mers(fastq, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
status = "2_unique_mers_sampling failed"
log.error(status)
checkpoint_step_wrapper(status)
else:
## if no output files, skip this step.
if newDataFile is not None:
statsDict = {}
## in readqc_report.py
## 2014.07.23 read_level_mer_sampling is updated to process new output file format from bbcountunique
log.info("2_unique_mers_sampling: post-processing the bbcountunique output file.")
read_level_mer_sampling(statsDict, newDataFile, log)
for k, v in statsDict.items():
append_rqc_stats(statsFile, k, str(v), log)
## outputs from bbcountunique
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_20MER_UNIQUENESS_TEXT, newDataFile, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_20MER_UNIQUENESS_PLOT, newPngPlotFile, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_20MER_UNIQUENESS_D3_HTML_PLOT, newHtmlPlotFile, log)
log.info("2_unique_mers_sampling complete.")
status = "2_unique_mers_sampling complete"
checkpoint_step_wrapper(status)
else:
## if num reads < RQCReadQc.ILLUMINA_MER_SAMPLE_REPORT_FRQ = 25000
## just proceed to the next step
log.warning("2_unique_mers_sampling can't run it because the number of reads < %s.", RQCReadQc.ILLUMINA_MER_SAMPLE_REPORT_FRQ * 2)
status = "2_unique_mers_sampling complete"
checkpoint_step_wrapper(status)
return status
"""
STEP3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_read_gc(fastq, log):
log.info("\n\n%sSTEP3 - Making read GC histograms <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "3_illumina_read_gc in progress"
checkpoint_step_wrapper(status)
log.info("3_illumina_read_gc in progress.")
reformat_gchist_file, png_file, htmlFile, mean_val, stdev_val, med_val, mode_val = illumina_read_gc(fastq, log)
if not reformat_gchist_file:
log.error("3_illumina_read_gc failed.")
status = "3_illumina_read_gc failed"
checkpoint_step_wrapper(status)
else:
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_GC_MEAN, mean_val, log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_GC_STD, stdev_val, log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_GC_MED, med_val, log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_GC_MODE, mode_val, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_GC_TEXT, reformat_gchist_file, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_GC_PLOT, png_file, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_GC_D3_HTML_PLOT, htmlFile, log)
log.info("3_illumina_read_gc complete.")
status = "3_illumina_read_gc complete"
checkpoint_step_wrapper(status)
return status
"""
STEP4 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_read_quality_stats(fastq, log):
log.info("\n\n%sSTEP4 - Analyzing read quality <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "4_illumina_read_quality_stats in progress"
checkpoint_step_wrapper(status)
log.info("4_illumina_read_quality_stats in progress.")
readLength = 0
readLenR1 = 0
readLenR2 = 0
isPairedEnd = None
if not os.path.isfile(fastq):
log.error("4_illumina_read_quality_stats failed. Cannot find the input fastq file")
status = "4_illumina_read_quality_stats failed"
checkpoint_step_wrapper(status)
return status
## First figure out if it's pair-ended or not
## NOTE: ssize=10000 is recommended!
readLength, readLenR1, readLenR2, isPairedEnd = get_working_read_length(fastq, log)
log.info("Read length = %s, and is_pair_ended = %s", readLength, isPairedEnd)
if readLength == 0:
log.error("Failed to run get_working_read_length.")
status = "4_illumina_read_quality_stats failed"
checkpoint_step_wrapper(status)
else:
## Pair-ended
r1_r2_baseposqual_png = None ## Average Base Position Quality Plot (*.qrpt.png)
r1_r2_baseposqual_html = None ## Average Base Position Quality D3 Plot (*.qrpt.html)
r1_r2_baseposqual_txt = None ## Read 1/2 Average Base Position Quality Text (*.qhist.txt)
r1_baseposqual_box_png = None ## Read 1 Average Base Position Quality Boxplot (*.r1.png)
r2_baseposqual_box_png = None ## Read 2 Average Base Position Quality Boxplot (*.r2.png)
r1_baseposqual_box_html = None ## Read 1 Average Base Position Quality D3 Boxplot (*.r1.html)
r2_baseposqual_box_html = None ## Read 2 Average Base Position Quality D3 Boxplot (*.r2.html)
r1_r2_baseposqual_box_txt = None ## Average Base Position Quality text
r1_cyclenbase_png = None ## Read 1 Percent N by Read Position (*.r1.fastq.base.stats.Npercent.png) --> Read 1 Cycle N Base Percent plot
r2_cyclenbase_png = None ## Read 2 Percent N by Read Position (*.r2.fastq.base.stats.Npercent.png) --> Read 2 Cycle N Base Percent plot
r1_cyclenbase_txt = None ## Read 1 Percent N by Read Position Text (*.r1.fastq.base.stats) --> Read 1 Cycle N Base Percent text
#r2_cyclenbase_txt = None ## Read 2 Percent N by Read Position Text (*.r2.fastq.base.stats) --> Read 2 Cycle N Base Percent text
r1_r2_cyclenbase_txt = None ## Merged Percent N by Read Position Text (*.r2.fastq.base.stats) --> Merged Cycle N Base Percent text
r1_cyclenbase_html = None
r2_cyclenbase_html = None
r1_nuclcompfreq_png = None ## Read 1 Nucleotide Composition Frequency Plot (*.r1.stats.png)
r2_nuclcompfreq_png = None ## Read 2 Nucleotide Composition Frequency Plot (*.r2.stats.png)
r1_nuclcompfreq_html = None
r2_nuclcompfreq_html = None
## Single-ended
#se_baseposqual_txt = None ## Average Base Position Quality Text (*.qrpt)
#se_nuclcompfreq_png = None ## Cycle Nucleotide Composition (*.stats.png)
if isPairedEnd:
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_LENGTH_1, readLenR1, log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_LENGTH_2, readLenR2, log)
else:
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_LENGTH_1, readLength, log)
## Average Base Position Quality Plot/Text using qhist.txt
r1_r2_baseposqual_txt, r1_r2_baseposqual_png, r1_r2_baseposqual_html = gen_average_base_position_quality_plot(fastq, isPairedEnd, log) ## .reformat.qhist.txt
log.debug("Outputs: %s %s %s", r1_r2_baseposqual_png, r1_r2_baseposqual_html, r1_r2_baseposqual_txt)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_PLOT_MERGED, r1_r2_baseposqual_png, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_PLOT_MERGED_D3_HTML_PLOT, r1_r2_baseposqual_html, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_1, r1_r2_baseposqual_txt, log) ## for backward compatibility
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_2, r1_r2_baseposqual_txt, log) ## for backward compatibility
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_MERGED, r1_r2_baseposqual_txt, log)
## Average Base Position Quality Plot/Text using bqhist.txt
r1_r2_baseposqual_box_txt, r1_baseposqual_box_png, r2_baseposqual_box_png, r1_baseposqual_box_html, r2_baseposqual_box_html = gen_average_base_position_quality_boxplot(fastq, log)
log.debug("Read qual outputs: %s %s %s %s %s", r1_r2_baseposqual_box_txt, r1_baseposqual_box_png, r1_baseposqual_box_html, r2_baseposqual_box_png, r2_baseposqual_box_html)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_1, r1_baseposqual_box_png, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_D3_HTML_BOXPLOT_1, r1_baseposqual_box_html, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_2, r2_baseposqual_box_png, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_D3_HTML_BOXPLOT_2, r2_baseposqual_box_html, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_TEXT, r1_r2_baseposqual_box_txt, log)
## ----------------------------------------------------------------------------------------------------
## compute Q20 of the two reads
q20Read1 = None
q20Read2 = None
## using bqhist.txt
if r1_r2_baseposqual_box_txt:
q20Read1 = q20_score_new(r1_r2_baseposqual_box_txt, 1, log)
if isPairedEnd:
q20Read2 = q20_score_new(r1_r2_baseposqual_box_txt, 2, log)
log.debug("q20 for read 1 = %s", q20Read1)
log.debug("q20 for read 2 = %s", q20Read2)
if q20Read1 is not None:
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_Q20_READ1, q20Read1, log)
else:
log.error("Failed to get q20 read 1 from %s", r1_r2_baseposqual_box_txt)
status = "4_illumina_read_quality_stats failed"
checkpoint_step_wrapper(status)
return status
if isPairedEnd:
if q20Read2 is not None:
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_Q20_READ2, q20Read2, log)
else:
log.error("Failed to get q20 read 2 from %s", r1_r2_baseposqual_box_txt)
status = "4_illumina_read_quality_stats failed"
checkpoint_step_wrapper(status)
return status
r1_r2_cyclenbase_txt, r1_nuclcompfreq_png, r1_nuclcompfreq_html, r2_nuclcompfreq_png, r2_nuclcompfreq_html = gen_cycle_nucleotide_composition_plot(fastq, readLength, isPairedEnd, log)
log.debug("gen_cycle_nucleotide_composition_plot() ==> %s %s %s %s %s", r1_cyclenbase_txt, r1_nuclcompfreq_png, r1_nuclcompfreq_html, r2_nuclcompfreq_png, r2_nuclcompfreq_html)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT_TEXT_1, r1_r2_cyclenbase_txt, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT_TEXT_2, r1_r2_cyclenbase_txt, log) # reformat.sh generates a single merged output file.
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT_PLOT_1, r1_nuclcompfreq_png, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_1, r1_nuclcompfreq_html, log)
if r2_nuclcompfreq_png:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT_PLOT_2, r2_nuclcompfreq_png, log)
if r2_nuclcompfreq_html:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_2, r2_nuclcompfreq_html, log)
### ---------------------------------------------------------------------------------------------------
## using bhist.txt
r1_cyclenbase_txt, r1_cyclenbase_png, r1_cyclenbase_html, r2_cyclenbase_png, r2_cyclenbase_html = gen_cycle_n_base_percent_plot(fastq, readLength, isPairedEnd, log)
log.debug("Outputs: %s %s %s", r1_cyclenbase_txt, r1_cyclenbase_png, r1_cyclenbase_html)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_PERCENTAGE_TEXT_1, r1_cyclenbase_txt, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_PERCENTAGE_TEXT_2, r1_cyclenbase_txt, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_PERCENTAGE_PLOT_1, r1_cyclenbase_png, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_PERCENTAGE_D3_HTML_PLOT_1, r1_cyclenbase_html, log)
if r2_cyclenbase_png:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_PERCENTAGE_PLOT_2, r2_cyclenbase_png, log)
if r2_cyclenbase_html:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_PERCENTAGE_D3_HTML_PLOT_2, r2_cyclenbase_html, log)
log.info("4_illumina_read_quality_stats complete.")
status = "4_illumina_read_quality_stats complete"
checkpoint_step_wrapper(status)
return status
"""
STEP5 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_write_base_quality_stats(fastq, log):
log.info("\n\n%sSTEP5 - Calculating base quality statistics for reads <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "5_illumina_read_quality_stats in progress"
checkpoint_step_wrapper(status)
log.info("5_illumina_read_quality_stats in progress.")
reformatObqhistFile = write_avg_base_quality_stats(fastq, log) ## *.reformat.obqhist.txt
if not reformatObqhistFile:
log.error("5_illumina_read_quality_stats failed.")
status = "5_illumina_read_quality_stats failed"
checkpoint_step_wrapper(status)
else:
## Generate qual scores and plots of read level QC
statsDict = {}
retCode = base_level_qual_stats(statsDict, reformatObqhistFile, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("5_illumina_read_quality_stats failed.")
status = "5_illumina_read_quality_stats failed"
checkpoint_step_wrapper(status)
else:
for k, v in statsDict.items():
append_rqc_stats(statsFile, k, str(v), log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_BASE_QUALITY_STATS, reformatObqhistFile, log)
log.info("5_illumina_read_quality_stats complete.")
status = "5_illumina_read_quality_stats complete"
checkpoint_step_wrapper(status)
return status
"""
STEP6 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_count_q_score(fastq, log):
log.info("\n\n%sSTEP6 - Generating quality score histogram <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "6_illumina_count_q_score in progress"
checkpoint_step_wrapper(status)
log.info("6_illumina_count_q_score in progress.")
qhistTxtFile, qhistPngFile, qhistHtmlPlotFile = illumina_count_q_score(fastq, log) ## *.obqhist.txt
if not qhistTxtFile:
log.error("6_illumina_count_q_score failed.")
status = "6_illumina_count_q_score failed"
checkpoint_step_wrapper(status)
else:
## save qscores in statsFile
qscore = {}
read_level_qual_stats(qscore, qhistTxtFile, log)
for k, v in qscore.items():
append_rqc_stats(statsFile, k, str(v), log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QHIST_TEXT, qhistTxtFile, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QHIST_PLOT, qhistPngFile, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_QHIST_D3_HTML_PLOT, qhistHtmlPlotFile, log)
log.info("6_illumina_count_q_score complete.")
status = "6_illumina_count_q_score complete"
checkpoint_step_wrapper(status)
return status
"""
STEP7 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
## 20140903 removed
##def do_illumina_calculate_average_quality(fastq, log):
"""
STEP8 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_find_common_motifs(fastq, log):
log.info("\n\n%sSTEP8 - Locating N stutter motifs in sequence reads <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "8_illumina_find_common_motifs in progress"
checkpoint_step_wrapper(status)
log.info(status)
retCode, statDataFile = illumina_find_common_motifs(fastq, log)
log.info("nstutter statDataFile name = %s", statDataFile)
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("8_illumina_find_common_motifs failed.")
status = "8_illumina_find_common_motifs failed"
checkpoint_step_wrapper(status)
else:
## read_level_stutter
## ex)
##688 N----------------------------------------------------------------------------------------------------------------------------
##-------------------------
##346 NNNNNNNNNNNNN----------------------------------------------------------------------------------------------------------------
##-------------------------
##53924 ------------N----------------------------------------------------------------------------------------------------------------
##-------------------------
##sum pct patterNs past 0.1 == 15.9245930330268 ( 54958 / 345114 * 100 )
with open(statDataFile, "r") as stutFH:
lines = stutFH.readlines()
## if no motifs are detected the file is empty
if not lines:
log.warning("The *.nstutter.stat file is not available in function read_level_stutter(). The function still returns JGI_SUCCESS.")
else:
assert lines[-1].find("patterNs") != -1
t = lines[-1].split()
## ["sum", "pct", "patterNs", "past", "0.1", "==", "15.9245930330268", "(", "54958", "/", "345114", "*", "100", ")"]
percent = "%.2f" % float(t[6])
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_N_FREQUENCE, percent, log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_N_PATTERN, str("".join(lines[:-1])), log)
## NOTE: ???
append_rqc_file(filesFile, "find_common_motifs.dataFile", statDataFile, log)
log.info("8_illumina_find_common_motifs complete.")
status = "8_illumina_find_common_motifs complete"
checkpoint_step_wrapper(status)
return status
"""
STEP11 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_detect_read_contam(fastq, bpToCut, log):
log.info("\n\n%sSTEP11 - Detect read contam <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
status = "11_illumina_detect_read_contam in progress"
checkpoint_step_wrapper(status)
log.info("11_illumina_detect_read_contam in progress.")
#########
## seal
#########
retCode2, outFileDict2, ratioResultDict2, contamStatDict = illumina_detect_read_contam3(fastq, bpToCut, log) ## seal version
if retCode2 != RQCExitCodes.JGI_SUCCESS:
log.error("11_illumina_detect_read_contam seal version failed.")
status = "11_illumina_detect_read_contam failed"
checkpoint_step_wrapper(status)
else:
for k, v in outFileDict2.items():
append_rqc_file(filesFile, k + " seal", str(v), log)
append_rqc_file(filesFile, k, str(v), log)
for k, v in ratioResultDict2.items():
append_rqc_stats(statsFile, k + " seal", str(v), log)
append_rqc_stats(statsFile, k, str(v), log)
## contamination stat
for k, v in contamStatDict.items():
append_rqc_stats(statsFile, k, str(v), log)
log.info("11_illumina_detect_read_contam seal version complete.")
status = "11_illumina_detect_read_contam complete"
checkpoint_step_wrapper(status)
return status
"""
STEP13 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
## Removed!!
##def do_illumina_read_megablast(firstSubsampledFastqFileName, skipSubsampling, subsampledReadNum, log, blastDbPath=None):
"""
New STEP13 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_subsampling_read_blastn(firstSubsampledFastqFileName, skipSubsampling, subsampledReadNum, log):
log.info("\n\n%sSTEP13 - Run subsampling for Blast search <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
subsampeldFastqFile = None
totalReadNum = 0
readNumToReturn = 0
status = "13_illumina_subsampling_read_megablast in progress"
checkpoint_step_wrapper(status)
log.info("13_illumina_subsampling_read_megablast in progress.")
if subsampledReadNum == 0:
cmd = " ".join(["grep", "-c", "'^+'", firstSubsampledFastqFileName])
stdOut, _, exitCode = run_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to run grep cmd")
return RQCExitCodes.JGI_FAILURE, None, None, None
else:
readNum = int(stdOut)
else:
readNum = subsampledReadNum
log.info("Subsampled read number = %d", readNum)
sampl_per = RQCReadQc.ILLUMINA_SAMPLE_PCTENTAGE ## 0.01
max_count = RQCReadQc.ILLUMINA_SAMPLE_COUNT ## 50000
## TODO
## Use "samplereadstarget" option in reformat.sh
if skipSubsampling:
log.debug("No subsampling for megablast. Use fastq file, %s (readnum = %s) as query for megablast.", firstSubsampledFastqFileName, readNum)
subsampeldFastqFile = firstSubsampledFastqFileName
readNumToReturn = readNum
else:
if readNum > max_count:
log.debug("Run the 2nd subsampling for running megablast.")
secondSubsamplingRate = float(max_count) / readNum
log.debug("SecondSubsamplingRate=%s, max_count=%s, readNum=%s", secondSubsamplingRate, max_count, readNum)
log.info("Second subsampling of Reads after Percent Subsampling reads = %s with new percent subsampling %f.", readNum, secondSubsamplingRate)
secondSubsampledFastqFile = ""
# dataFile = ""
sequnitFileName = os.path.basename(firstSubsampledFastqFileName)
sequnitFileName = sequnitFileName.replace(".fastq", "").replace(".gz", "")
secondSubsampledFastqFile = sequnitFileName + ".s" + str(sampl_per) + ".s" + str(secondSubsamplingRate) + ".n" + str(max_count) + ".fastq"
## ex) fq_sub_sample.pl -f .../7601.1.77813.CTTGTA.s0.01.fastq -o .../7601.1.77813.CTTGTA.s0.01.stats -r 0.0588142575171 > .../7601.1.77813.CTTGTA.s0.01.s0.01.s0.0588142575171.n50000.fastq
## ex) reformat.sh
retCode, secondSubsampledFastqFile, totalBaseCount, totalReadNum, subsampledReadNum, _, _ = fast_subsample_fastq_sequences(firstSubsampledFastqFileName, secondSubsampledFastqFile, secondSubsamplingRate, False, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("Second subsampling failed.")
return RQCExitCodes.JGI_FAILURE, None, None, None
else:
log.info("Second subsampling complete.")
log.info("Second Subsampling Total Base Count = %s.", totalBaseCount)
log.info("Second Subsampling Total Reads = %s.", totalReadNum)
log.info("Second Subsampling Sampled Reads = %s.", subsampledReadNum)
if subsampledReadNum == 0:
log.warning("Too small first subsampled fastq file. Skip the 2nd sampling.")
secondSubsampledFastqFile = firstSubsampledFastqFileName
subsampeldFastqFile = secondSubsampledFastqFile
readNumToReturn = subsampledReadNum
else:
log.debug("The readNum is smaller than max_count=%s. The 2nd sampling skipped.", str(max_count))
subsampeldFastqFile = firstSubsampledFastqFileName
totalReadNum = readNum
readNumToReturn = readNum
log.info("13_illumina_subsampling_read_megablast complete.")
status = "13_illumina_subsampling_read_megablast complete"
checkpoint_step_wrapper(status)
return status, subsampeldFastqFile, totalReadNum, readNumToReturn
##"""
##STEP14 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
##
##"""
##def do_illumina_read_blastn_refseq_microbial(subsampeldFastqFile, subsampledReadNum, log, blastDbPath=None):
## 12212015 sulsj REMOVED!
"""
New STEP14 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_read_blastn_refseq_archaea(subsampeldFastqFile, subsampledReadNum, log):
log.info("\n\n%sSTEP14 - Run read blastn against refseq archaea <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
statsFile = RQCReadQcConfig.CFG["stats_file"]
retCode = None
log.info("14_illumina_read_blastn_refseq_archaea in progress.")
status = "14_illumina_read_blastn_refseq_archaea in progress"
checkpoint_step_wrapper(status)
retCode = illumina_read_blastn_refseq_archaea(subsampeldFastqFile, log)
if retCode == RQCExitCodes.JGI_FAILURE:
log.error("14_illumina_read_blastn_refseq_archaea failed.")
status = "14_illumina_read_blastn_refseq_archaea failed"
elif retCode == -143: ## timeout
log.warning("14_illumina_read_blastn_refseq_archaea timeout.")
status = "14_illumina_read_blastn_refseq_archaea complete"
else:
## read number used in blast search?
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READS_NUMBER, subsampledReadNum, log)
ret2 = read_megablast_hits("refseq.archaea", log)
if ret2 != RQCExitCodes.JGI_SUCCESS:
log.error("Errors in read_megablast_hits() of refseq.microbial")
log.error("14_illumina_read_blastn_refseq_archaea reporting failed.")
status = "14_illumina_read_blastn_refseq_archaea failed"
else:
log.info("14_illumina_read_blastn_refseq_archaea complete.")
status = "14_illumina_read_blastn_refseq_archaea complete"
return status
"""
STEP15 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_read_blastn_refseq_bacteria(subsampeldFastqFile, log):
log.info("\n\n%sSTEP15 - Run read blastn against refseq bacteria <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
#statsFile = RQCReadQcConfig.CFG["stats_file"]
#retCode = None
log.info("15_illumina_read_blastn_refseq_bacteria in progress.")
status = "15_illumina_read_blastn_refseq_bacteria in progress"
checkpoint_step_wrapper(status)
retCode = illumina_read_blastn_refseq_bacteria(subsampeldFastqFile, log)
if retCode == RQCExitCodes.JGI_FAILURE:
log.error("15_illumina_read_blastn_refseq_bacteria failed.")
status = "15_illumina_read_blastn_refseq_bacteria failed"
elif retCode == -143: ## timeout
log.warning("15_illumina_read_blastn_refseq_bacteria timeout.")
status = "15_illumina_read_blastn_refseq_bacteria complete"
else:
ret2 = read_megablast_hits("refseq.bacteria", log)
if ret2 != RQCExitCodes.JGI_SUCCESS:
log.error("Errors in read_megablast_hits() of refseq.microbial")
log.error("15_illumina_read_blastn_refseq_bacteria reporting failed.")
status = "15_illumina_read_blastn_refseq_bacteria failed"
else:
log.info("15_illumina_read_blastn_refseq_bacteria complete.")
status = "15_illumina_read_blastn_refseq_bacteria complete"
return status
"""
STEP16 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def do_illumina_read_blastn_nt(subsampeldFastqFile, log):
log.info("\n\n%sSTEP16 - Run read blastn against nt <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
retCode = None
log.info("16_illumina_read_blastn_nt in progress.")
status = "16_illumina_read_blastn_nt in progress"
checkpoint_step_wrapper(status)
retCode = illumina_read_blastn_nt(subsampeldFastqFile, log)
if retCode == RQCExitCodes.JGI_FAILURE:
log.error("16_illumina_read_blastn_nt failed.")
status = "16_illumina_read_blastn_nt failed"
elif retCode == -143: ## timeout
log.warning("16_illumina_read_blastn_nt timeout.")
status = "16_illumina_read_blastn_nt complete"
else:
ret2 = read_megablast_hits("nt", log)
if ret2 != RQCExitCodes.JGI_SUCCESS:
log.error("Errors in read_megablast_hits() of nt")
log.error("16_illumina_read_blastn_nt reporting failed.")
status = "16_illumina_read_blastn_nt failed"
else:
log.info("16_illumina_read_blastn_nt complete.")
status = "16_illumina_read_blastn_nt complete"
return status
"""
STEP17 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
do_illumina_multiplex_statistics
Demultiplexing analysis for pooled lib
"""
def do_illumina_multiplex_statistics(fastq, log, isMultiplexed=None):
log.info("\n\n%sSTEP17 - Run Multiplex statistics analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
log.info("17_multiplex_statistics in progress.")
status = "17_multiplex_statistics in progress"
checkpoint_step_wrapper(status)
log.debug("fastq file: %s", fastq)
retCode, demultiplexStatsFile, detectionPngPlotFile, detectionHtmlPlotFile = illumina_generate_index_sequence_detection_plot(fastq, log, isMultiplexed=isMultiplexed)
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("17_multiplex_statistics failed.")
status = "17_multiplex_statistics failed"
checkpoint_step_wrapper(status)
else:
log.info("17_multiplex_statistics complete.")
status = "17_multiplex_statistics complete"
checkpoint_step_wrapper(status)
if detectionPngPlotFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_DEMULTIPLEX_STATS_PLOT, detectionPngPlotFile, log)
if detectionHtmlPlotFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_DEMULTIPLEX_STATS_D3_HTML_PLOT, detectionHtmlPlotFile, log)
if demultiplexStatsFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_DEMULTIPLEX_STATS, demultiplexStatsFile, log)
return status
"""
STEP18 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
do_end_of_read_illumina_adapter_check
"""
def do_end_of_read_illumina_adapter_check(firstSubsampledFastqFileName, log):
log.info("\n\n%sSTEP18 - Run end_of_read_illumina_adapter_check analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
plotFile = None
dataFile = None
log.info("18_end_of_read_illumina_adapter_check in progress.")
status = "18_end_of_read_illumina_adapter_check in progress"
checkpoint_step_wrapper(status)
log.debug("sampled fastq file: %s", firstSubsampledFastqFileName)
retCode, dataFile, plotFile, htmlFile = end_of_read_illumina_adapter_check(firstSubsampledFastqFileName, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("18_end_of_read_illumina_adapter_check failed.")
status = "18_end_of_read_illumina_adapter_check failed"
checkpoint_step_wrapper(status)
else:
log.info("18_end_of_read_illumina_adapter_check complete.")
status = "18_end_of_read_illumina_adapter_check complete"
checkpoint_step_wrapper(status)
if plotFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_PLOT, plotFile, log)
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_D3_HTML_PLOT, htmlFile, log)
if dataFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_DATA, dataFile, log)
return status
"""
STEP19 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
do_insert_size_analysis
"""
def do_insert_size_analysis(fastq, log):
log.info("\n\n%sSTEP19 - Run insert size analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
filesFile = RQCReadQcConfig.CFG["files_file"]
statsFile = RQCReadQcConfig.CFG["stats_file"]
plotFile = None
dataFile = None
log.info("19_insert_size_analysis in progress.")
status = "19_insert_size_analysis in progress"
checkpoint_step_wrapper(status)
log.debug("fastq file used: %s", fastq)
retCode, dataFile, plotFile, htmlFile, statsDict = insert_size_analysis(fastq, log) ## by bbmerge.sh
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("19_insert_size_analysis failed.")
status = "19_insert_size_analysis failed"
checkpoint_step_wrapper(status)
else:
if plotFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_HISTO_PLOT, plotFile, log)
if dataFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_HISTO_DATA, dataFile, log)
if htmlFile is not None:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_HISTO_D3_HTML_PLOT, htmlFile, log)
if statsDict:
try:
## --------------------------------------------------------------------------------------------------------
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_AVG_INSERT, statsDict["avg_insert"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_STD_INSERT, statsDict["std_insert"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_MODE_INSERT, statsDict["mode_insert"], log)
## --------------------------------------------------------------------------------------------------------
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_TOTAL_TIME, statsDict["total_time"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_NUM_READS, statsDict["num_reads"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_JOINED_NUM, statsDict["joined_num"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_JOINED_PERC, statsDict["joined_perc"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_AMBIGUOUS_NUM, statsDict["ambiguous_num"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_AMBIGUOUS_PERC, statsDict["ambiguous_perc"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_NO_SOLUTION_NUM, statsDict["no_solution_num"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_NO_SOLUTION_PERC, statsDict["no_solution_perc"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_TOO_SHORT_NUM, statsDict["too_short_num"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_TOO_SHORT_PERC, statsDict["too_short_perc"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_INSERT_RANGE_START, statsDict["insert_range_start"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_INSERT_RANGE_END, statsDict["insert_range_end"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_90TH_PERC, statsDict["perc_90th"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_50TH_PERC, statsDict["perc_50th"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_10TH_PERC, statsDict["perc_10th"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_75TH_PERC, statsDict["perc_75th"], log)
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_INSERT_SIZE_25TH_PERC, statsDict["perc_25th"], log)
except KeyError:
log.error("19_insert_size_analysis failed (KeyError).")
status = "19_insert_size_analysis failed"
checkpoint_step_wrapper(status)
return status
log.info("19_insert_size_analysis complete.")
status = "19_insert_size_analysis complete"
checkpoint_step_wrapper(status)
return status
"""
STEP21 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
do_sketch_vs_nt_refseq_silva
"""
def do_sketch_vs_nt_refseq_silva(fasta, log):
log.info("\n\n%sSTEP21 - Run sketch vs nt, refseq, silva <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
status = "21_sketch_vs_nt_refseq_silva in progress"
checkpoint_step_wrapper(status)
sketchOutDir = "sketch"
sketchOutPath = os.path.join(outputPath, sketchOutDir)
make_dir(sketchOutPath)
change_mod(sketchOutPath, "0755")
seqUnitName = os.path.basename(fasta)
seqUnitName = file_name_trim(seqUnitName)
filesFile = RQCReadQcConfig.CFG["files_file"]
comOptions = "ow=t colors=f printtaxa=t depth depth2 unique2 merge"
## NT ##########################
sketchOutFile = os.path.join(sketchOutPath, seqUnitName + ".sketch_vs_nt.txt")
sendSketchShCmd = os.path.join(BBDIR, 'sendsketch.sh')
cmd = "%s in=%s out=%s %s nt" % (sendSketchShCmd, fasta, sketchOutFile, comOptions)
stdOut, stdErr, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run : %s, stdout : %s, stderr: %s", cmd, stdOut, stdErr)
status = "21_sketch_vs_nt_refseq_silva failed"
checkpoint_step_wrapper(status)
return status
append_rqc_file(filesFile, "sketch_vs_nt_output", sketchOutFile, log)
## Refseq ##########################
sketchOutFile = os.path.join(sketchOutPath, seqUnitName + ".sketch_vs_refseq.txt")
cmd = "%s in=%s out=%s %s refseq" % (sendSketchShCmd, fasta, sketchOutFile, comOptions)
stdOut, stdErr, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run : %s, stdout : %s, stderr: %s", cmd, stdOut, stdErr)
status = "21_sketch_vs_refseq failed"
checkpoint_step_wrapper(status)
return status
append_rqc_file(filesFile, "sketch_vs_refseq_output", sketchOutFile, log)
## Silva ##########################
sketchOutFile = os.path.join(sketchOutPath, seqUnitName + ".sketch_vs_silva.txt")
cmd = "%s in=%s out=%s %s silva" % (sendSketchShCmd, fasta, sketchOutFile, comOptions)
stdOut, stdErr, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run : %s, stdout : %s, stderr: %s", cmd, stdOut, stdErr)
status = "21_sketch_vs_silva failed"
checkpoint_step_wrapper(status)
return status
append_rqc_file(filesFile, "sketch_vs_silva_output", sketchOutFile, log)
log.info("21_sketch_vs_nt_refseq_silva complete.")
status = "21_sketch_vs_nt_refseq_silva complete"
checkpoint_step_wrapper(status)
return status
"""
STEP22 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
do_illumina_read_level_report_postprocessing
"""
## Removed!!
#def do_illumina_read_level_report(fastq, firstSubsampledFastqFileName, log):
"""
STEP23 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
do_cleanup_readqc
"""
def do_cleanup_readqc(log):
log.info("\n\n%sSTEP23 - Cleanup <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
status = "23_cleanup_readqc in progress"
checkpoint_step_wrapper(status)
## Get option
skipCleanup = RQCReadQcConfig.CFG["skip_cleanup"]
if not skipCleanup:
retCode = cleanup_readqc(log)
else:
log.warning("File cleaning is skipped.")
retCode = RQCExitCodes.JGI_SUCCESS
if retCode != RQCExitCodes.JGI_SUCCESS:
log.error("23_cleanup_readqc failed.")
status = "23_cleanup_readqc failed"
checkpoint_step_wrapper(status)
else:
log.info("23_cleanup_readqc complete.")
status = "23_cleanup_readqc complete"
checkpoint_step_wrapper(status)
return status
def sketch_section_note(table_header):
overview = [
'BBTools sketch.sh uses a technique called MinHash to rapidly compare large sequences. ',
'The result is similar to BLAST output: a list of hits from a query sequence to various reference sequences, ',
'sorted by similarity, but the underlying mechanisms are very different. ',
'For more information, see <a href="http://bbtools.jgi.doe.gov/" target="_blank">http://bbtools.jgi.doe.gov</a>'
]
legend = {
'WKID' : 'Weighted Kmer IDentity, which is the kmer identity compensating for differences in size. So, comparing human chr1 to the full human genome would yield 100% WKID but approximately 10% KID.',
'KID' : 'Kmer IDentity, equal to matches/length; this is the fraction of shared kmers.',
'ANI' : 'Average Nucleotide Identity, derived from WKID and kmer length.',
'Complt' : 'Genome completeness (percent of the reference represented in the query). Derived from WKID and KID.',
'Contam' : 'Contamination (percent of the query that does not match this reference, but matches some other reference).',
'Depth' : 'Per-kmer depth in sketches, indicating how many times that kmer was seen in the sequence.',
'Depth2' : 'Repeat-compensated depth',
'Matches' : 'The number of shared kmers between query and ref.',
'Unique' : 'The number of shared kmers between query and ref, and no other ref.',
'Unique2' : '??',
'noHit' : 'Number of kmers that did not hit any reference sequence. Though constant for a query, it will be reported differently for different references based on the relative size of the reference and query (if the reference is bigger than the query, it will report all of them).',
'TaxID' : 'NCBI taxonomic id, when available.',
'gSize' : 'Estimate of genomic size (number of unique kmers in the genome). This is based on the smallest hash value in the list. This is affected by blacklists or whitelists, and by using an assembly versus raw reads.',
'gSeqs' : 'Number of sequences used in the sketch.',
'taxName' : 'NCBI\'s name for that taxID. If there is no taxID, the sequence name will be used.',
}
html = ''
for name in table_header:
if name != 'taxonomy':
html += html_tag('li', '%s: %s' % (html_tag('span', name, {'class': 'notice'}), html_tag('span', legend.get(name), {'class': 'small-italic'})))
html = html_tag('span', ''.join(overview), {'class': 'notice'}) + '<br /><br />Column Legend<br />' + html_tag('ul', html)
return html
def sketch_table(fname):
# print('DEBUG : %s' % fname)
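## Parse a sendsketch.sh output file: a 'Query:' line, a tab-delimited header line beginning with 'WKID',
## then one tab-delimited row per reference hit. The hits are rendered as an HTML table and returned
## together with the parsed header (used later to build the column legend).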
html = ''
if os.path.isfile(fname):
title = None
header = None
data = []
with open(fname, 'r') as fh:
for line in fh:
line = line.strip()
if line == '':
continue
if line.startswith('Query:'):
title = line
elif line.startswith('WKID'):
header = line.split('\t')
else:
data.append(line)
if title and header and data:
html += html_th(header)
for line in data:
row = line.split('\t')
html += html_tr(row)
# html = html_tag('p', title) + html_tag('table', html, attrs={'class': 'data'})
html = html_tag('table', html, attrs={'class': 'data'})
return html, header
def do_html_contam_art_first_n_pb_tr(stats, files, odir, filepath_prefix):
temp = os.path.join(PYDIR, 'template/readqc_artifacts.html')
html = ''
with open(temp, 'r') as fh:
html = fh.read()
## the optional artifact contam
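## Prefer the 50 bp artifact seal stats; if that file is missing, fall back to the 20 bp version and
## report its percentage instead.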
artifact_tr = ''
artifact_type = '50'
artifact_val = pipeline_val('illumina read percent contamination artifact 50bp seal', {'type': 'raw'}, stats, files, filepath_prefix)
artifact_file = pipeline_val('artifact_50bp.seal.stats seal', {'type': 'file'}, stats, files, filepath_prefix)
if not artifact_file:
artifact_file = pipeline_val('artifact_20bp.seal.stats seal', {'type': 'file'}, stats, files, filepath_prefix)
artifact_type = '20'
artifact_val = pipeline_val('illumina read percent contamination artifact 20bp seal', {'type': 'raw'}, stats, files, filepath_prefix)
if artifact_file:
html = html.replace('[_CONTAM-ART-FIRST-BP_]', artifact_type)
html = html.replace('[_CONTAM-ART-FIRST-BP-SEAL_]', artifact_file)
html = html.replace('[_CONTAM-ART-FIRST-BP-SEAL-PCT_]', artifact_val)
return html
def do_html_body(odir, filepath_prefix):
print('do_html_body - %s' % filepath_prefix)
temp = os.path.join(PYDIR, 'template/readqc_body_template.html')
statsf = os.path.join(odir, 'readqc_stats.txt')
if not os.path.isfile(statsf):
print('ERROR : file not found: %s' % statsf)
stats = get_dict_obj(statsf)
filesf = os.path.join(odir, 'readqc_files.txt')
if not os.path.isfile(filesf):
print('ERROR : file not found: %s' % filesf)
files = get_dict_obj(filesf)
## key (key name in readqc_stats or readqc_files) : {token (space holder in html template), type (value format)}
tok_map = {
## Average Base Quality section
'overall bases Q score mean' : {'token' : '[_BASE-QUALITY-SCORE_]', 'type': 'float', 'filter': 1},
'overall bases Q score std' : {'token' : '[_BASE-QUALITY-SCORE-STD_]', 'type': 'float', 'filter': 1},
'Q30 bases Q score mean' : {'token' : '[_Q30-BASE-QUALITY-SCORE_]', 'type': 'float', 'filter': 1},
'Q30 bases Q score std' : {'token' : '[_Q30-BASE-QUALITY-SCORE-STD_]', 'type': 'float', 'filter': 1},
'base C30' : {'token' : '[_COUNT-OF-BAESE-Q30_]', 'type': 'bigint'},
'base C25' : {'token' : '[_COUNT-OF-BAESE-Q25_]', 'type': 'bigint'},
'base C20' : {'token' : '[_COUNT-OF-BAESE-Q20_]', 'type': 'bigint'},
'base C15' : {'token' : '[_COUNT-OF-BAESE-Q15_]', 'type': 'bigint'},
'base C10' : {'token' : '[_COUNT-OF-BAESE-Q10_]', 'type': 'bigint'},
'base C5' : {'token' : '[_COUNT-OF-BAESE-Q5_]', 'type': 'bigint'},
'base Q30' : {'token' : '[_PCT-OF-BAESE-Q30_]', 'type': 'raw'},
'base Q25' : {'token' : '[_PCT-OF-BAESE-Q25_]', 'type': 'raw'},
'base Q20' : {'token' : '[_PCT-OF-BAESE-Q20_]', 'type': 'raw'},
'base Q15' : {'token' : '[_PCT-OF-BAESE-Q15_]', 'type': 'raw'},
'base Q10' : {'token' : '[_PCT-OF-BAESE-Q10_]', 'type': 'raw'},
'base Q5' : {'token' : '[_PCT-OF-BAESE-Q5_]', 'type': 'raw'},
## Average Read Quality section
'read Q30' : {'token' : '[_PCT-OF-READS-Q30_]', 'type': 'raw'},
'read Q25' : {'token' : '[_PCT-OF-READS-Q25_]', 'type': 'raw'},
'read Q20' : {'token' : '[_PCT-OF-READS-Q20_]', 'type': 'raw'},
'read Q15' : {'token' : '[_PCT-OF-READS-Q15_]', 'type': 'raw'},
'read Q10' : {'token' : '[_PCT-OF-READS-Q10_]', 'type': 'raw'},
'read Q5' : {'token' : '[_PCT-OF-READS-Q5_]', 'type': 'raw'},
'SUBSAMPLE_RATE' : {'token' : '[_SUBSAMPLE-RATE_]', 'type': 'pct', 'filter': 1},
'read base quality stats' : {'token' : '[_AVG-READ-QUAL-HISTO-DATA_]', 'type': 'file', 'filter': 'link', 'label':'data file'},
'ILLUMINA_READ_QHIST_D3_HTML_PLOT' : {'token' : '[_AVG-READ-QUAL-HOSTO-D3_]', 'type': 'file', 'filter': 'link', 'label':'interactive plot'},
'read qhist plot' : {'token' : '[_AVG-READ-QUALITY-HISTOGRAM_]', 'type': 'file'},
## Average Base Position Quality section
'read q20 read1' : {'token' : '[_READ_Q20_READ1_]', 'type': 'raw'},
'read q20 read2' : {'token' : '[_READ_Q20_READ2_]', 'type': 'raw'},
'read qual pos qrpt 1' : {'token' : '[_AVG-BASE-POS-QUAL-HISTO-DATA_]', 'type': 'file', 'filter': 'link', 'label':'data file'},
'ILLUMINA_READ_QUAL_POS_PLOT_MERGED_D3_HTML_PLOT' : {'token' : '[_AVG-BASE-POS-QUAL-HISTO-D3_]', 'type': 'file', 'filter': 'link', 'label':'interactive plot'},
'read qual pos plot merged' : {'token' : '[_AVG-BASE-POSITION-QUALITY_]', 'type': 'file'},
## Insert Size
'ILLUMINA_READ_INSERT_SIZE_JOINED_PERC' : {'token' : '[_PCT-READS-JOINED_]', 'type': 'float', 'filter': 1},
'ILLUMINA_READ_INSERT_SIZE_AVG_INSERT' : {'token' : '[_PCT-READS-JOINED-AVG_]', 'type': 'raw'},
'ILLUMINA_READ_INSERT_SIZE_STD_INSERT' : {'token' : '[_PCT-READS-JOINED-STDDEV_]', 'type': 'raw'},
'ILLUMINA_READ_INSERT_SIZE_MODE_INSERT' : {'token' : '[_PCT-READS-JOINED-MODE_]', 'type': 'raw'},
'ILLUMINA_READ_INSERT_SIZE_HISTO_DATA' : {'token' : '[_INSERT-SIZE-HISTO-DATA_]', 'type': 'file', 'filter': 'link', 'label':'data file'},
'ILLUMINA_READ_INSERT_SIZE_HISTO_D3_HTML_PLOT' : {'token' : '[_INSERT-SIZE-HISTO-D3_]', 'type': 'file', 'filter': 'link', 'label':'interactive plot'},
'ILLUMINA_READ_INSERT_SIZE_HISTO_PLOT' : {'token' : '[_INSERT-SIZE-HISTOGRAM_]', 'type': 'file'},
## Read GC
'read GC mean' : {'token' : '[_READ-GC-AVG_]', 'type': 'float', 'filter': 1},
'read GC std' : {'token' : '[_READ-GC-STDDEV_]', 'type': 'float', 'filter': 1},
'read GC text hist' : {'token' : '[_READ-QC-HISTO-DATA_]', 'type': 'file', 'filter': 'link', 'label':'data file'},
'ILLUMINA_READ_GC_D3_HTML_PLOT' : {'token' : '[_READ-QC-HISTO-D3_]', 'type': 'file', 'filter': 'link', 'label':'interactive plot'},
'read GC plot' : {'token' : '[_READ-GC-HIST_]', 'type': 'file'},
## Cycle Nucleotide Composition
'read base count text 1' : {'token' : '[_NUC-COMP-FREQ-R1-DATA_]', 'type': 'file', 'filter': 'link', 'label':'data file'},
'read base count text 2' : {'token' : '[_NUC-COMP-FREQ-R2-DATA_]', 'type': 'file', 'filter': 'link', 'label':'data file'},
'ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_1' : {'token' : '[_NUC-COMP-FREQ-R1-D3_]', 'type': 'file', 'filter': 'link', 'label':'interactive plot'},
'ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_2' : {'token' : '[_NUC-COMP-FREQ-R2-D3_]', 'type': 'file', 'filter': 'link', 'label':'interactive plot'},
'read base count plot 1' : {'token' : '[_CYCLE-NUCL-COMPOSITION-READ1_]', 'type': 'file'},
'read base count plot 2' : {'token' : '[_CYCLE-NUCL-COMPOSITION-READ2_]', 'type': 'file'},
## Percentage of Common Contaminants
'illumina read percent contamination artifact seal' : {'token' : '[_CONTAM-ART-SEA-PCT_]', 'type': 'floatstr'},
'artifact.seal.stats seal' : {'token' : '[_CONTAM-ART-SEAL_]', 'type': 'file'},
'illumina read percent contamination DNA spikein seal' : {'token' : '[_DNA-SPIKEIN-SEAL_PCT_]', 'type': 'floatstr'},
'DNA_spikein.seal.stats seal' : {'token' : '[_DNA-SPIKEIN-SEAL_]', 'type': 'file'},
'illumina read percent contamination RNA spikein seal' : {'token' : '[_RNA-SPIKEIN-SEAL_PCT_]', 'type': 'floatstr'},
'RNA_spikein.seal.stats seal' : {'token' : '[_RNA-SPIKEIN-SEAL_]', 'type': 'file'},
'illumina read percent contamination fosmid seal' : {'token' : '[_CONTAM-FOSMID-SEAL-PCT_]', 'type': 'floatstr'},
'fosmid.seal.stats seal' : {'token' : '[_CONTAM-FOSMID-SEAL_]', 'type': 'file'},
'illumina read percent contamination mitochondrion seal' : {'token' : '[_CONTAM-MITO-SEAL-PCT_]', 'type': 'floatstr'},
'mitochondrion.seal.stats seal' : {'token' : '[_CONTAM-MITO-SEAL_]', 'type': 'file'},
'illumina read percent contamination plastid seal' : {'token' : '[_CONTAM-CHLO-SEAL-PCT_]', 'type': 'floatstr'},
'plastid.seal.stats seal' : {'token' : '[_CONTAM-CHLO-SEAL_]', 'type': 'file'},
'illumina read percent contamination phix seal' : {'token' : '[_CONTAM-PHIX-SEAL-PCT_]', 'type': 'floatstr'},
'phix.seal.stats seal' : {'token' : '[_CONTAM-PHIX-SEAL_]', 'type': 'file'},
'illumina read percent contamination rrna seal' : {'token' : '[_CONTAM-RRNA-SEAL-PCT_]', 'type': 'floatstr'},
'rrna.seal.stats seal' : {'token' : '[_CONTAM-RRNA-SEAL_]', 'type': 'file'},
'illumina read percent contamination microbes seal' : {'token' : '[_CONTAM-NON-SYN-SEAL-PCT_]', 'type': 'floatstr'},
'microbes.seal.stats seal' : {'token' : '[_CONTAM-NON-SYN-SEAL_]', 'type': 'file'},
# 'illumina read percent contamination synthetic seal' : {'token' : '[_CONTAM-SYN-SEAL-PCT_]', 'type': 'floatstr'},
'synthetic.seal.stats seal' : {'token' : '[_CONTAM-SYN-SEAL_]', 'type': 'file'},
'illumina read percent contamination adapters seal' : {'token' : '[_CONTAM-ADAPTER-PCT_]', 'type': 'floatstr'},
'adapters.seal.stats seal' : {'token' : '[_CONTAM-ADAPTER-SEAL_]', 'type': 'file'},
## Sketch vs NT
'sketch_vs_nt_output' : {'token' : '[_SKETCH-VS-NT_]', 'type': 'file'},
# 'sketch_vs_nt_output' : {'token' : '[_SKETCH-VS-NT-BASE_]', 'type': 'file', 'filter': 'base'},
}
html = ''
with open(temp, 'r') as fh:
html = fh.read()
for key in tok_map:
dat = tok_map[key]
val = pipeline_val(key, dat, stats, files, filepath_prefix)
# print('key=%s; %s; ====type=%s' % (key, val, type(val)))
html = html.replace(dat['token'], val)
artifact_tr = do_html_contam_art_first_n_pb_tr(stats, files, odir, filepath_prefix)
html = html.replace('[_CONTAN-ART-SEAL-FIRST-BP_]', artifact_tr)
## synthetic contam
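## The synthetic-contamination percentage is recovered from the seal stats file itself: grep the
## '#Matched' line (expected to hold three whitespace-separated tokens, e.g. '#Matched <count> <pct>%')
## and strip the trailing character (presumably the '%') before substituting it into the template.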
contam_syn_file = pipeline_val('synthetic.seal.stats seal', {'type': 'file', 'filter': 'full'}, stats, files, filepath_prefix)
if contam_syn_file and os.path.isfile(contam_syn_file):
contam_syn_pct_key = 'illumina read percent contamination synthetic seal'
cmd = 'grep "#Matched" %s' % contam_syn_file
stdOut, stdErr, exitCode = run_sh_command(cmd, True)
if exitCode == 0:
toks = stdOut.strip().split()
if len(toks) == 3:
html = html.replace('[_CONTAM-SYN-SEAL-PCT_]', toks[2][:-1])
###--- Sketch
sketch_html = ''
## add Sketch vs NT if file exists:
sketch_nt_file = pipeline_val('sketch_vs_nt_output', {'type': 'file', 'filter': 'full'}, stats, files, filepath_prefix)
if os.path.isfile(sketch_nt_file):
basename = os.path.basename(sketch_nt_file)
# href = pipeline_val('sketch_vs_nt_output', {'type': 'file'}, stats, files, filepath_prefix)
sketch_html = html_tag('h4', 'Sketch vs. NT') #+ '<br />' + 'Raw file:%s' % html_link(href, basename)
sketch_tab, sketch_table_header = sketch_table(sketch_nt_file)
sketch_html += sketch_tab
## add Sketch vs refseq if file exists?
if sketch_html:
sketch_html = html_tag('h2', 'Read Sketch', attrs={'class': 'section-title'}) \
+ sketch_section_note(sketch_table_header) \
+ html_tag('div', sketch_html, {'class': 'section'})
html = html.replace('[_SKETCH-TABLE_]', sketch_html)
# copy the image file
imgdir = os.path.join(PYDIR, 'images')
todir = os.path.join(odir, 'images')
if os.path.isdir(todir):
shutil.rmtree(todir)
shutil.copytree(imgdir, todir, False, None)
return html
def do_html(odir, infile):
# odir = os.path.abspath(odir) ## DO NOT convert!! The relative file paths in the html need this original odir string for matching
screen('Output dir - %s' % odir)
fname = os.path.basename(infile)
screen('Create HTML page in %s for %s ..' % (odir, fname))
temp = os.path.join(PYDIR, 'template/template.html')
with open(temp, 'r') as fh:
html = fh.read()
html = html.replace('[_PAGE-TITLE_]', 'Read QC Report')
html = html.replace('[_REPORT-TITLE_]', 'BBTools Read QC Report')
html = html.replace('[_INPUT-FILE-NAME_]', fname)
html = html.replace('[_REPORT-DATE_]', '{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now()))
hbody = do_html_body(odir, odir)
html = html.replace('[_REPORT-BODY_]', hbody)
## write the html to file
idxfile = os.path.join(odir, 'index.html')
with open(idxfile, 'w') as fh2:
fh2.write(html)
screen('HTML index file written to %s' % idxfile)
# copy the css file
cssdir = os.path.join(PYDIR, 'css')
todir = os.path.join(odir, 'css')
if os.path.isdir(todir):
shutil.rmtree(todir)
shutil.copytree(cssdir, todir, False, None)
def screen(txt):
print('.. %s' % txt)
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Main Program
if __name__ == "__main__":
desc = "RQC ReadQC Pipeline"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("-f", "--fastq", dest="fastq", help="Set input fastq file (full path to fastq)", required=True)
parser.add_argument("-o", "--output-path", dest="outputPath", help="Set output path to write to")
parser.add_argument("-x", "--cut", dest="cutLenOfFirstBases", help="Set Read cut length (bp) for read contamination detection")
parser.add_argument("-v", "--version", action="version", version=VERSION)
## For running on Crays
parser.add_argument("-l", "--lib-name", dest="libName", help="Set library name", required=False)
parser.add_argument("-m", "--is-multiplexed", dest="isMultiplexed", help="Set multiplexed data", required=False)
parser.add_argument("-r", "--is-rna", dest="isRna", help="Set RNA data", required=False)
## produce html when processing is done
parser.add_argument("-html", "--html", action="store_true", help="Create html file", dest="html", default=False, required=False)
## Toggle options
parser.add_argument("-b", "--skip-blast", action="store_true", help="Skip the blast run", dest="skipBlast", default=False, required=False)
parser.add_argument("-bn", "--skip-blast-nt", action="store_true", help="Skip Blast run against nt", dest="skipBlastNt", default=False, required=False)
parser.add_argument("-br", "--skip-blast-refseq", action="store_true", help="Skip Blast run against refseq.archaea and refseq.bateria", dest="skipBlastRefseq", default=False, required=False)
parser.add_argument("-c", "--skip-cleanup", action="store_true", help="Skip temporary file cleanup", dest="skipCleanup", default=False, required=False)
parser.add_argument("-p", "--pooled-analysis", action="store_true", help="Enable pooled analysis (demultiplexing)", dest="pooledAnalysis", default=False, required=False)
parser.add_argument("-s", "--skip-subsample", action="store_true", help="Skip subsampling.", dest="skipSubsample", default=False, required=False)
parser.add_argument("-z", "--skip-localization", action="store_true", help="Skip database localization", dest="skipDbLocalization", default=False, required=False)
options = parser.parse_args()
outputPath = None # output path, defaults to current working directory
fastq = None # full path to input fastq.gz
status = "start"
nextStepToDo = 0
if options.outputPath:
outputPath = options.outputPath
else:
outputPath = os.getcwd()
if options.fastq:
fastq = options.fastq
## create output_directory if it doesn't exist
if not os.path.isdir(outputPath):
os.makedirs(outputPath)
libName = None ## for illumina_sciclone_analysis()
isRna = None ## for illumina_sciclone_analysis()
isMultiplexed = None ## for illumina_generate_index_sequence_detection_plot()
bSkipBlast = False
bSkipBlastRefseq = False ## refseq.archaea and refseq.bacteria
bSkipBlastNt = False
bPooledAnalysis = False
## RQC-743 Need to specify the first cutting bp for read contam detection (esp. for smRNA)
firstBptoCut = 50
if options.cutLenOfFirstBases:
firstBptoCut = options.cutLenOfFirstBases
if options.libName:
libName = options.libName
## switches
if options.isRna:
isRna = options.isRna
if options.isMultiplexed:
isMultiplexed = options.isMultiplexed
if options.skipBlast:
bSkipBlast = options.skipBlast
if options.skipBlastRefseq:
bSkipBlastRefseq = options.skipBlastRefseq
if options.skipBlastNt:
bSkipBlastNt = options.skipBlastNt
if options.pooledAnalysis:
bPooledAnalysis = options.pooledAnalysis
skipSubsampling = options.skipSubsample
## Set readqc config
RQCReadQcConfig.CFG["status_file"] = os.path.join(outputPath, "readqc_status.log")
RQCReadQcConfig.CFG["files_file"] = os.path.join(outputPath, "readqc_files.tmp")
RQCReadQcConfig.CFG["stats_file"] = os.path.join(outputPath, "readqc_stats.tmp")
RQCReadQcConfig.CFG["output_path"] = outputPath
RQCReadQcConfig.CFG["skip_cleanup"] = options.skipCleanup
RQCReadQcConfig.CFG["skip_localization"] = options.skipDbLocalization
RQCReadQcConfig.CFG["log_file"] = os.path.join(outputPath, "readqc.log")
screen("Started readqc pipeline, writing log to: %s" % (RQCReadQcConfig.CFG["log_file"]))
log = get_logger("readqc", RQCReadQcConfig.CFG["log_file"], LOG_LEVEL, False, True)
log.info("=================================================================")
log.info(" Read Qc Analysis (version %s)", VERSION)
log.info("=================================================================")
log.info("Starting %s with %s", SCRIPT_NAME, fastq)
if os.path.isfile(RQCReadQcConfig.CFG["status_file"]):
status = get_status(RQCReadQcConfig.CFG["status_file"], log)
else:
checkpoint_step_wrapper(status)
if not os.path.isfile(fastq):
log.error("%s not found, aborting!", fastq)
exit(2)
elif status != "complete":
## check for fastq file
# if os.path.isfile(fastq):
log.info("Found %s, starting processing.", fastq)
log.info("Latest status = %s", status)
if status != 'start':
nextStepToDo = int(status.split("_")[0])
if status.find("complete") != -1:
nextStepToDo += 1
log.info("Next step to do = %s", nextStepToDo)
if status != 'complete':
bDone = False
cycle = 0
totalReadNum = 0
firstSubsampledFastqFileName = ""
secondSubsampledFastqFile = ""
totalReadCount = 0
subsampledReadNum = 0
bIsPaired = False
readLength = 0
firstSubsampledLogFile = os.path.join(outputPath, "subsample", "first_subsampled.txt")
secondSubsampledLogFile = os.path.join(outputPath, "subsample", "second_subsampled.txt")
while not bDone:
cycle += 1
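## Pooled (multiplexed) libraries skip steps 1-16 and jump straight to the demultiplexing statistics (step 17).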
if bPooledAnalysis:
nextStepToDo = 17
status = "16_illumina_read_blastn_nt complete"
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 1. fast_subsample_fastq_sequences
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 1 or status == "start":
status, firstSubsampledFastqFileName, totalReadCount, subsampledReadNum, bIsPaired, readLength = do_fast_subsample_fastq_sequences(fastq, skipSubsampling, log)
if status.endswith("failed"):
log.error("Subsampling failed.")
sys.exit(-1)
## if not skipSubsampling and subsampledReadNum == 0: ## too small input file
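## If subsampling returned fewer reads than the minimum, rerun the step with subsampling disabled so the
## entire (small) input is used, and treat the full read count as the sampled count.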
if not skipSubsampling and subsampledReadNum < RQCReadQc.ILLUMINA_MIN_NUM_READS: ## min=1000
log.info("Too small input fastq file. Skip subsampling: total number of reads = %s, sampled number of reads = %s", totalReadCount, subsampledReadNum)
skipSubsampling = True
status, firstSubsampledFastqFileName, totalReadCount, subsampledReadNum, bIsPaired, readLength = do_fast_subsample_fastq_sequences(fastq, skipSubsampling, log)
if status.endswith("failed"):
log.error("Subsampling failed.")
sys.exit(-1)
subsampledReadNum = totalReadCount
if subsampledReadNum >= RQCReadQc.ILLUMINA_MIN_NUM_READS:
status = "1_illumina_readqc_subsampling complete"
## Still too low number of reads -> record ILLUMINA_TOO_SMALL_NUM_READS and quit
##
if subsampledReadNum == 0 or subsampledReadNum < RQCReadQc.ILLUMINA_MIN_NUM_READS: ## min=1000
log.info("Too small number of reads (< %s). Stop processing.", RQCReadQc.ILLUMINA_MIN_NUM_READS)
print("WARNING : Too small number of reads (< %s). Stop processing." % RQCReadQc.ILLUMINA_MIN_NUM_READS)
log.info("Completed %s: %s", SCRIPT_NAME, fastq)
## Add ILLUMINA_TOO_SMALL_NUM_READS=1 to stats to be used in reporting
statsFile = RQCReadQcConfig.CFG["stats_file"]
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_TOO_SMALL_NUM_READS, "1", log)
## move rqc-files.tmp to rqc-files.txt
newFilesFile = os.path.join(outputPath, "readqc_files.txt")
newStatsFile = os.path.join(outputPath, "readqc_stats.txt")
cmd = "mv %s %s " % (RQCReadQcConfig.CFG["files_file"], newFilesFile)
log.info("mv cmd: %s", cmd)
run_sh_command(cmd, True, log)
cmd = "mv %s %s " % (RQCReadQcConfig.CFG["stats_file"], newStatsFile)
log.info("mv cmd: %s", cmd)
run_sh_command(cmd, True, log)
exit(0)
if status == "1_illumina_readqc_subsampling failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 2. write_unique_20_mers
## NOTE: fastq = orig input fastq,
## totalReadCount = total reads in the orig input fastq
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 2 or status == "1_illumina_readqc_subsampling complete":
## Cope with restarting
## save subsamples file in "first_subsampled.txt" so that the file
## name can be read when started
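## Each line of first_subsampled.txt holds: <subsampled_fastq_path> <total_read_count> <subsampled_read_count>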
if not os.path.isfile(firstSubsampledLogFile):
make_dir(os.path.join(outputPath, "subsample"))
with open(firstSubsampledLogFile, "w") as SAMP_FH:
SAMP_FH.write(firstSubsampledFastqFileName + " " + str(totalReadCount) + " " + str(subsampledReadNum) + "\n")
if firstSubsampledFastqFileName == "" and options.skipSubsample:
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
l = SAMP_FH.readline().strip()
t = l.split()
assert len(t) == 2 or len(t) == 3
firstSubsampledFastqFileName = t[0]
totalReadCount = t[1]
subsampledReadNum = t[2]
log.debug("firstSubsampledFastqFileName=%s, totalReadCount=%s, subsampledReadNum=%s", firstSubsampledFastqFileName, totalReadCount, subsampledReadNum)
else:
nextStepToDo = 1
continue
## TODO: move getting totalReadNum in the func ???
##
log.debug("firstSubsampledFastqFileName=%s, totalReadCount=%s, subsampledReadNum=%s", firstSubsampledFastqFileName, totalReadCount, subsampledReadNum)
if totalReadCount is None or totalReadCount == 0:
nextStepToDo = 1
continue
else:
status = do_write_unique_20_mers(fastq, totalReadCount, log)
if status == "2_unique_mers_sampling failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 3. generate read GC histograms: Illumina_read_gc
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 3 or status == "2_unique_mers_sampling complete":
## Read the subsampled file name
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_illumina_read_gc(firstSubsampledFastqFileName, log)
if status == "3_illumina_read_gc failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 4. read_quality_stats
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 4 or status == "3_illumina_read_gc complete":
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_read_quality_stats(firstSubsampledFastqFileName, log)
if status == "4_illumina_read_quality_stats failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 5. write_base_quality_stats
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 5 or status == "4_illumina_read_quality_stats complete":
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_write_base_quality_stats(firstSubsampledFastqFileName, log)
if status == "5_illumina_read_quality_stats failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 6. illumina_count_q_score
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 6 or status == "5_illumina_read_quality_stats complete":
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_illumina_count_q_score(firstSubsampledFastqFileName, log)
if status == "6_illumina_count_q_score failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 7. illumina_calculate_average_quality
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 7 or status == "6_illumina_count_q_score complete":
## Let's skip this step. (20140902)
log.info("\n\n%sSTEP7 - Skipping 21mer analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
status = "7_illumina_calculate_average_quality in progress"
checkpoint_step_wrapper(status)
status = "7_illumina_calculate_average_quality complete"
checkpoint_step_wrapper(status)
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 8. illumina_find_common_motifs
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 8 or status == "7_illumina_calculate_average_quality complete":
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_illumina_find_common_motifs(firstSubsampledFastqFileName, log)
if status == "8_illumina_find_common_motifs failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 10. illumina_run_tagdust
##
## NOTE: This step will be skipped. No need to run.
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# if nextStepToDo == 10 or status == "9_illumina_run_dedupe complete":
if nextStepToDo == 9 or status == "8_illumina_find_common_motifs complete":
## 20131023 skip this step
log.info("\n\n%sSTEP10 - Skipping tag dust <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
status = "10_illumina_run_tagdust in progress"
checkpoint_step_wrapper(status)
status = "10_illumina_run_tagdust complete"
checkpoint_step_wrapper(status)
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 11. illumina_detect_read_contam
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 11 or status == "10_illumina_run_tagdust complete":
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_illumina_detect_read_contam(firstSubsampledFastqFileName, firstBptoCut, log)
if status == "11_illumina_detect_read_contam failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 13. illumina_subsampling_read_megablast
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 12 or status == "11_illumina_detect_read_contam complete":
if not bSkipBlast:
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
l = SAMP_FH.readline().strip()
firstSubsampledFastqFileName = l.split()[0]
totalReadCount = int(l.split()[1])
subsampledReadNum = int(l.split()[2])
status, secondSubsampledFastqFile, second_read_cnt_total, second_read_cnt_sampled = do_illumina_subsampling_read_blastn(firstSubsampledFastqFileName, skipSubsampling, subsampledReadNum, log)
log.debug("status=%s, secondSubsampledFastqFile=%s, second_read_cnt_total=%s, second_read_cnt_sampled=%s", status, secondSubsampledFastqFile, second_read_cnt_total, second_read_cnt_sampled)
else:
statsFile = RQCReadQcConfig.CFG["stats_file"]
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_SKIP_BLAST, "1", log)
log.info("\n\n%sSTEP13 - Run subsampling for Blast search <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
log.info("13_illumina_subsampling_read_megablast skipped.\n")
status = "13_illumina_subsampling_read_megablast complete"
checkpoint_step_wrapper(status)
if status == "13_illumina_subsampling_read_megablast failed":
bDone = True
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 14. illumina_read_blastn_refseq_archaea
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 14 or status == "13_illumina_subsampling_read_megablast complete":
if not bSkipBlast and not bSkipBlastRefseq:
if secondSubsampledFastqFile == "" and not os.path.isfile(secondSubsampledLogFile):
nextStepToDo = 13
continue
if secondSubsampledFastqFile != "" and not os.path.isfile(secondSubsampledLogFile):
make_dir(os.path.join(outputPath, "subsample"))
with open(secondSubsampledLogFile, "w") as SAMP_FH:
SAMP_FH.write(secondSubsampledFastqFile + " " + str(second_read_cnt_total) + " " + str(second_read_cnt_sampled) + "\n")
else:
with open(secondSubsampledLogFile, "r") as SAMP_FH:
l = SAMP_FH.readline().strip()
secondSubsampledFastqFile = l.split()[0]
second_read_cnt_total = int(l.split()[1])
second_read_cnt_sampled = int(l.split()[2])
status = do_illumina_read_blastn_refseq_archaea(secondSubsampledFastqFile, second_read_cnt_sampled, log)
checkpoint_step_wrapper(status)
else:
log.info("\n\n%sSTEP14 - Run illumina_read_blastn_refseq_archaea analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
log.info("14_illumina_read_blastn_refseq_archaea skipped.\n")
status = "14_illumina_read_blastn_refseq_archaea in progress"
checkpoint_step_wrapper(status)
status = "14_illumina_read_blastn_refseq_archaea complete"
checkpoint_step_wrapper(status)
if status == "14_illumina_read_blastn_refseq_archaea failed":
bDone = True
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# 15. illumina_read_blastn_refseq_bacteria
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 15 or status == "14_illumina_read_blastn_refseq_archaea complete":
if not bSkipBlast and not bSkipBlastRefseq:
if secondSubsampledFastqFile == "" and not os.path.isfile(secondSubsampledLogFile):
nextStepToDo = 13
continue
if secondSubsampledFastqFile != "" and not os.path.isfile(secondSubsampledLogFile):
make_dir(os.path.join(outputPath, "subsample"))
with open(secondSubsampledLogFile, "w") as SAMP_FH:
SAMP_FH.write(secondSubsampledFastqFile + " " + str(second_read_cnt_total) + " " + str(second_read_cnt_sampled) + "\n")
else:
with open(secondSubsampledLogFile, "r") as SAMP_FH:
l = SAMP_FH.readline().strip()
secondSubsampledFastqFile = l.split()[0]
second_read_cnt_total = int(l.split()[1])
second_read_cnt_sampled = int(l.split()[2])
status = do_illumina_read_blastn_refseq_bacteria(secondSubsampledFastqFile, log)
checkpoint_step_wrapper(status)
else:
log.info("\n\n%sSTEP15 - Run illumina_read_blastn_refseq_bacteria analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
log.info("15_illumina_read_blastn_refseq_bacteria skipped.\n")
status = "15_illumina_read_blastn_refseq_bacteria in progress"
checkpoint_step_wrapper(status)
status = "15_illumina_read_blastn_refseq_bacteria complete"
checkpoint_step_wrapper(status)
if status == "15_illumina_read_blastn_refseq_bacteria failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 16. illumina_read_blastn_nt
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 16 or status == "15_illumina_read_blastn_refseq_bacteria complete":
if not bSkipBlast and not bSkipBlastNt:
if secondSubsampledFastqFile == "" and not os.path.isfile(secondSubsampledLogFile):
nextStepToDo = 13
continue
if secondSubsampledFastqFile == "" and os.path.isfile(secondSubsampledLogFile):
with open(secondSubsampledLogFile, "r") as SAMP_FH:
l = SAMP_FH.readline().strip()
secondSubsampledFastqFile = l.split()[0]
second_read_cnt_total = int(l.split()[1])
second_read_cnt_sampled = int(l.split()[2])
status = do_illumina_read_blastn_nt(secondSubsampledFastqFile, log)
checkpoint_step_wrapper(status)
else:
log.info("\n\n%sSTEP16 - Run illumina_read_blastn_nt analysis <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
log.info("16_illumina_read_blastn_nt skipped.\n")
status = "16_illumina_read_blastn_nt in progress"
checkpoint_step_wrapper(status)
status = "16_illumina_read_blastn_nt complete"
checkpoint_step_wrapper(status)
if status == "16_illumina_read_blastn_nt failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 17. multiplex_statistics
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 17 or status == "16_illumina_read_blastn_nt complete":
status = do_illumina_multiplex_statistics(fastq, log, isMultiplexed=isMultiplexed)
if status == "17_multiplex_statistics failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 18. end_of_read_illumina_adapter_check
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 18 or status == "17_multiplex_statistics complete":
if firstSubsampledFastqFileName == "" and os.path.isfile(firstSubsampledLogFile):
with open(firstSubsampledLogFile, "r") as SAMP_FH:
firstSubsampledFastqFileName = SAMP_FH.readline().strip().split()[0]
status = do_end_of_read_illumina_adapter_check(firstSubsampledFastqFileName, log)
if status == "18_end_of_read_illumina_adapter_check failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 19. insert size analysis (bbmerge.sh)
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 19 or status == "18_end_of_read_illumina_adapter_check complete":
status = do_insert_size_analysis(fastq, log)
if status == "19_insert_size_analysis failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 21. sketch vs nt, refseq, silva
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# if nextStepToDo == 21 or status == "20_gc_divergence_analysis complete":
if nextStepToDo == 20 or status == "19_insert_size_analysis complete":
status = do_sketch_vs_nt_refseq_silva(fastq, log)
if status == "21_sketch_vs_nt_refseq_silva failed":
bDone = True
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 22. postprocessing & reporting
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 22 or status == "21_sketch_vs_nt_refseq_silva complete":
## 20131023 skip this step
log.info("\n\n%sSTEP22 - Run illumina_readqc_report_postprocess: mv rqc-*.tmp to rqc-*.txt <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<%s\n", color['pink'], color[''])
status = "22_illumina_readqc_report_postprocess in progress"
checkpoint_step_wrapper(status)
## move rqc-files.tmp to rqc-files.txt
newFilesFile = os.path.join(outputPath, "readqc_files.txt")
newStatsFile = os.path.join(outputPath, "readqc_stats.txt")
cmd = "mv %s %s " % (RQCReadQcConfig.CFG["files_file"], newFilesFile)
log.info("mv cmd: %s", cmd)
run_sh_command(cmd, True, log)
cmd = "mv %s %s " % (RQCReadQcConfig.CFG["stats_file"], newStatsFile)
log.info("mv cmd: %s", cmd)
run_sh_command(cmd, True, log)
log.info("22_illumina_readqc_report_postprocess complete.")
status = "22_illumina_readqc_report_postprocess complete"
checkpoint_step_wrapper(status)
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## 23. Cleanup
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if nextStepToDo == 23 or status == "22_illumina_readqc_report_postprocess complete":
status = do_cleanup_readqc(log)
if status == "23_cleanup_readqc failed":
bDone = True
if status == "23_cleanup_readqc complete":
status = "complete" ## FINAL COMPLETE!
bDone = True
## don't cycle more than 10 times ...
if cycle > 10:
bDone = True
if status != "complete":
log.info("Status %s", status)
else:
log.info("\n\nCompleted %s: %s", SCRIPT_NAME, fastq)
checkpoint_step_wrapper("complete")
log.info("Pipeline processing Done.")
print("Pipeline processing Done.")
else:
log.info("Pipeline processing is already complete, skip.")
print("Pipeline processing is already complete, skip.")
if status == 'complete' and options.html:
do_html(outputPath, fastq)
exit(0)
## EOF | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/readqc.py | readqc.py |
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## libraries to use
#import re
import os
import time
import sys
#import getpass
import logging
#from colorlog import ColoredFormatter
# import EnvironmentModules # get_read_count_fastq
from subprocess import Popen, PIPE
from email.mime.text import MIMEText
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## function definitions
'''
creates a logging instance
https://docs.python.org/2/howto/logging.html
https://pypi.python.org/pypi/colorlog
'''
def get_logger(log_name, log_file, log_level = "INFO", stdout = False, color = False):
log = logging.getLogger(log_name)
handler = None
if stdout:
handler = logging.StreamHandler(sys.stdout)
else:
handler = logging.FileHandler(log_file)
formatter = logging.Formatter('%(filename)-15s:%(process)d %(asctime)s %(levelname)s: %(message)s')
if color and 1==2:
"""
formatter = ColoredFormatter("%(filename)-15s:%(process)d %(asctime)s %(log_color)s%(levelname)s: %(message)s", datefmt=None, reset=True,
log_colors={
'DEBUG': 'blue',
'INFO': 'green',
'WARNING': 'yellow',
'ERROR': 'red',
'CRITICAL': 'red, bg_white',
},
secondary_log_colors={},
style='%')
Not working in conda - 2017-04-29
"""
handler.setFormatter(formatter)
log.addHandler(handler)
log.setLevel(log_level)
return log
'''
Checkpoint the status plus a timestamp
- appends the status
@param status_log: /path/to/status.log (or whatever you name it)
@param status: status to append to status.log
'''
def checkpoint_step(status_log, status):
status_line = "%s,%s\n" % (status, time.strftime("%Y-%m-%d %H:%M:%S"))
with open(status_log, "a") as myfile:
myfile.write(status_line)
'''
returns the last step (status) from the pipeline
@param status_log: /path/to/status.log (or whatever you name it)
@param log: logger object
@return last status in the status log, "start" if nothing there
'''
def get_status(status_log, log = None):
#status_log = "%s/%s" % (output_path, "test_status.log")
status = "start"
timestamp = str(time.strftime("%Y-%m-%d %H:%M:%S"))
if os.path.isfile(status_log):
fh = open(status_log, 'r')
lines = fh.readlines()
fh.close()
for line in lines:
if line.startswith('#'): continue
line_list = line.split(",")
assert len(line_list) == 2
status = str(line_list[0]).strip()
timestamp = str(line_list[1]).strip()
if not status:
status = "start"
if log:
log.info("Last checkpointed step: %s (%s)", status, timestamp)
else:
if log:
log.info("Cannot find status.log (%s), assuming new run", status_log)
status = status.strip().lower()
return status
'''
run a command from python
@param cmd: command to run
@param live: False = run in dry mode (print command), True = run normally
@param log: logger object
@return std_out, std_err, exit_code
'''
def run_command(cmd, live=False, log=None):
stdOut = None
stdErr = None
exitCode = None
#end = 0
#elapsedSec = 0
if cmd:
if not live:
stdOut = "Not live: cmd = '%s'" % (cmd)
exitCode = 0
else:
if log: log.info("cmd: %s" % (cmd))
p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
stdOut, stdErr = p.communicate()
exitCode = p.returncode
if log:
log.info("Return values: exitCode=" + str(exitCode) + ", stdOut=" + str(stdOut) + ", stdErr=" + str(stdErr))
if exitCode != 0:
log.warn("- The exit code has non-zero value.")
else:
if log:
log.error("- No command to run.")
return None, None, -1
return stdOut, stdErr, exitCode
'''
replacement for run_command
- includes logging, convert_cmd & post_mortem
'''
def run_cmd(cmd, log=None):
std_out = None
std_err = None
exit_code = 0
if cmd:
# convert to work on genepool/denovo
cmd = convert_cmd(cmd)
if log:
log.info("- cmd: %s", cmd)
p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
std_out, std_err = p.communicate()
exit_code = p.returncode
post_mortem_cmd(cmd, exit_code, std_out, std_err, log)
return std_out, std_err, exit_code
'''
Simple function to output to the log what happened only if exit code > 0
Typical usage:
std_out, std_err, exit_code = run_command(cmd, True)
post_mortem_cmd(cmd, exit_code, std_out, std_err)
'''
def post_mortem_cmd(cmd, exit_code, std_out, std_err, log = None):
if exit_code > 0:
if log:
log.error("- cmd failed: %s", cmd)
log.error("- exit code: %s", exit_code)
else:
print "- cmd failed: %s" % (cmd)
print "- exit code: %s" % (exit_code)
if std_out:
if log:
log.error("- std_out: %s", std_out)
else:
print "- std_out: %s" % (std_out)
if std_err:
if log:
log.error("- std_err: %s", std_err)
else:
print "- std_err: %s" % (std_err)
'''
Convert command to use genepool or denovo (shifter) to run
replace #placeholder; with shifter or module load command
#placeholder.v; should specify the version to use
This should be the only place in the pipelines that specifies the images/modules translation
'''
def convert_cmd(cmd):
new_cmd = cmd
shifter_img = {
"#bbtools" : "shifter --image=bryce911/bbtools ",
"#pigz" : "module load pigz;",
"#jamo" : "shifter --image=registry.services.nersc.gov/htandra/jamo_dev:1.0 ", # works, but would like simple module to use - have one on Denovo but not Cori
"#gnuplot" : "shifter --image=bryce911/bbtools ", # (1)
"#spades/3.9.0" : "shifter --image=bryce911/spades3.9.0 ",
"#spades/3.10.1" : "shifter --image=bryce911/spades3.10.1 ",
"#spades/3.11.0" : "shifter --image=bryce911/spades-3.11.0 ", # GAA-3383
"#spades/3.11.1-check" : "shifter --image=bryce911/spades3.11.1-check ", # development
"#prodigal/2.6.3" : "shifter --image=registry.services.nersc.gov/jgi/prodigal ", # RQCSUPPORT-1318
"#prodigal/2.5.0" : "shifter --image=registry.services.nersc.gov/jgi/prodigal ",
"#prodigal/2.50" : "shifter --image=registry.services.nersc.gov/jgi/prodigal ",
"#lastal/869" : "shifter --image=bryce911/lastal:869 ",
"#lastal/828" : "shifter --image=bryce911/lastal:869 ",
#"#lastal" : "shifter --image=bryce911/lastal:869 ",
"#R/3.3.2" : "module load R/3.3.2;",
"#texlive" : "shifter --image=bryce911/bbtools ", # (1)
"#java" : "shifter --image=bryce911/bbtools ", # (1)
"#blast+/2.6.0" : "shifter --image=sulsj/ncbi-blastplus:2.6.0 ",
"#blast" : "shifter --image=sulsj/ncbi-blastplus:2.7.0 ",
"#megahit-1.1.1" : "shifter --image=foster505/megahit:v1.1.1-2-g02102e1 ",
"#smrtanalysis/2.3.0_p5" : "shifter --image=registry.services.nersc.gov/jgi/smrtanalysis:2.3.0_p5 ", # meth - need more memory
"#mummer/3.23" : "shifter --image=bryce911/mummer3.23 ", # 3.23
"#hmmer" : "shifter --image=registry.services.nersc.gov/jgi/hmmer:latest ", # 3.1b2
"#samtools/1.4" : "shifter --image=rmonti/samtools ",
"#mothur/1.39.5" : "shifter --image=bryce911/mothur1.39.5 ",
"#vsearch/2.4.3" : "shifter --image=bryce911/vsearch2.4.3 ",
"#graphviz" : "shifter --image=bryce911/bbtools ",
"#ssu-align/0.1.1" : "shifter --image=bryce911/ssu-align0.1.1 ", # openmpi/1.10 included in docker container
"#smrtlink/4.0.0.190159" : "shifter --image=registry.services.nersc.gov/jgi/smrtlink:4.0.0.190159 /smrtlink/smrtcmds/bin/", # progs not in path
"#smrtlink/5.0.1.9585" : "shifter --image=registry.services.nersc.gov/jgi/smrtlink:5.0.1.9585 /smrtlink/smrtcmds/bin/", # progs not in path, Tony created 2017-10-16
"#smrtlink" : "shifter --image=registry.services.nersc.gov/jgi/smrtlink:5.0.1.9585 /smrtlink/smrtcmds/bin/", # progs not in path
"#prodege" : "shifter --image=bryce911/prodege ", # 2.2.1
#"#hmmer" : "shifter --image=registry.services.nersc.gov/jgi/hmmer ", # 3.1b2 - Feb 2015, latest as of Oct 2017
"#checkm" : "shifter --image=registry.services.nersc.gov/jgi/checkm ",
}
# (1) - added as part of the bryce911 bbtools package
#cmd = "#bbtools-shijie;bbmap...."
# this dict will be deprecated as of March 2018 when genepool passes into legend
genepool_mod = {
"#bbtools" : "module load bbtools",
"#pigz" : "module load pigz",
"#jamo" : "module load jamo",
"#gnuplot" : "module load gnuplot/4.6.2", # sag,iso,sps,ce:gc_cov, gc_histogram, contig_gc
"#spades/3.9.0" : "module load spades/3.9.0",
"#spades/3.10.1" : "module load spades/3.10.1",
"#spades/3.11.1" : "module load spades/3.11.1-check",
"#prodigal/2.6.3" : "module load prodigal/2.50", # aka 2.50, also 2.60 is available
"#prodigal/2.5.0" : "module load prodigal/2.50",
"#prodigal/2.50" : "module load prodigal/2.50",
#"#lastal" : "module load last/828",
"#lastal/828" : "module load last/828",
"#R/3.3.2" : "module unload R;module load R/3.3.1", # 3.3.2 not on genepool - RQCSUPPORT-1516 unload R for Kecia
"#texlive" : "module load texlive",
"#blast+/2.6.0" : "module load blast+/2.6.0",
#"#blast+/2.7.0" : "module load blast+/2.7.0", # not created
"#blast" : "module load blast+/2.6.0",
"#java" : "", # java runs natively on genepool
"#megahit-1.1.1" : "module load megahit/1.1.1",
"#smrtanalysis/2.3.0_p5" : "module load smrtanalysis/2.3.0_p5",
"#smrtanalysis/2.3.0_p5_xmx32g" : "module load smrtanalysis/2.3.0_p5;export _JAVA_OPTIONS='-Xmx32g'",
"#mummer/3.23" : "module load mummer/3.23",
"#hmmer" : "module load hmmer/3.1b2",
"#samtools/1.4" : "module load samtools/1.4",
"#mothur/1.39.5" : "module load mothur/1.32.1", # 1.26.0 default, 1.32.1
"#vsearch/2.4.3" : "module load vsearch/2.3.0", # 2.3.0
"#graphviz" : "module load graphviz",
"#ssu-align/0.1.1" : "module load ssu-align",
"#smrtlink/4.0.0.190159" : "module load smrtlink/4.0.0.190159",
"#smrtlink" : "module load smrtlink/5.0.1.9585",
"#smrtlink/5.0.1.9585" : "module load smrtlink/5.0.1.9585",
"#prodege" : "module load R;/projectb/sandbox/rqc/prod/pipelines/external_tools/sag_decontam/prodege-2.2/bin/",
"#checkm" : "module load hmmer prodigal pplacer", # checkm installed in python by default on genepool
}
#bbtools;stats.sh
if cmd.startswith("#"):
cluster = "genepool"
# any other env ids to use?
# cori, denovo, genepool
cluster = os.environ.get('NERSC_HOST', 'unknown')
f = cmd.find(";")
mod = "" # command to replace
if f > -1:
mod = cmd[0:f]
if mod:
# use module load jamo on denovo
if mod == "#jamo" and cluster == "denovo":
shifter_img[mod] = "module load jamo;"
if cluster in ("denovo", "cori"):
if mod in shifter_img:
new_cmd = new_cmd.replace(mod + ";", shifter_img[mod])
else:
if mod in genepool_mod:
if genepool_mod[mod] == "":
new_cmd = new_cmd.replace(mod + ";", "")
else:
new_cmd = new_cmd.replace(mod, genepool_mod[mod])
if new_cmd.startswith("#"):
print "Command not found! %s" % new_cmd
sys.exit(18)
#print new_cmd
return new_cmd
'''
returns human readable file size
@param num = file size (e.g. 1000)
@return: readable float e.g. 1.5 KB
'''
def human_size(num):
if not num:
num = 0.0
for x in ['bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'XB']:
if num < 1024.0:
return "%3.1f %s" % (num, x)
num /= 1024.0
return "%3.1f %s" % (num, 'ZB')
'''
send out email
@param emailTo: email receipient (e.g. [email protected])
@param emailSubject: subject line for the email
@param emailBody: content of the email
@param emailFrom: optional email from
'''
def send_email(email_to, email_subject, email_body, email_from = '[email protected]', log = None):
msg = ""
err_flag = 0
if not email_to:
msg = "- send_email: email_to parameter missing!"
if not email_subject:
msg = "- send_email: email_subject parameter missing!"
if not email_body:
msg = "- send_email: email_body parameter missing!"
if err_flag == 0:
msg = "- sending email to: %s" % (email_to)
if log:
log.info(msg)
else:
print msg
if err_flag == 1:
return 0
# assume html
email_msg = MIMEText(email_body, "html") # vs "plain"
email_msg['Subject'] = email_subject
email_msg['From'] = email_from
email_msg['To'] = email_to
p = Popen(["/usr/sbin/sendmail", "-t"], stdin = PIPE)
p.communicate(email_msg.as_string())
return err_flag
'''
Write to rqc_file (e.g. rqc-files.tmp) the file_key and file_value
@param rqc_file_log: full path to file containing key=file
@param file_key: key for the entry
@param file_value: value for the entry
'''
def append_rqc_file(rqc_file_log, file_key, file_value, log=None):
if file_key:
buffer = "%s = %s\n" % (file_key, file_value)
with open(rqc_file_log, "a") as myfile:
myfile.write(buffer)
if log: log.info("append_rqc_file: %s:%s" % (file_key, file_value))
else:
if log: log.warning("key or value error: %s:%s" % (file_key, file_value))
'''
Write to rqc_stats (e.g. rqc-stats.tmp) the stats_key and stats_value
@param rqc_file_log: full path to file containing key=file
@param file_key: key for the entry
@param file_value: value for the entry
'''
def append_rqc_stats(rqc_stats_log, stats_key, stats_value, log=None):
if stats_key:
buffer = "%s = %s\n" % (stats_key, stats_value)
with open(rqc_stats_log, "a") as myfile:
myfile.write(buffer)
if log: log.info("append_rqc_stats: %s:%s" % (stats_key, stats_value))
else:
if log: log.warning("key or value error: %s:%s" % (stats_key, stats_value))
'''
Return the file system path to jgi-rqc-pipeline so we can use */tools and */lib
@return /path/to/jgi-rqc-pipelines
'''
def get_run_path():
current_path = os.path.dirname(os.path.abspath(__file__))
run_path = os.path.abspath(os.path.join(current_path, os.pardir))
return run_path
'''
Simple read count using bbtools n_contigs field
- slightly different than in rqc_utility
n_scaffolds n_contigs scaf_bp contig_bp gap_pct scaf_N50 scaf_L50 ctg_N50 ctg_L50 scaf_N90 scaf_L90 ctg_N90 ctg_L90 scaf_max ctg_max scaf_n_gt50K scaf_pct_gt50K gc_avg gc_std
1346616 1346616 405331416 405331415 0.000 1346616 301 1346615 301 1346616 301 1346615 301 301 301 0 0.000 0.44824 0.02675
'''
def get_read_count_fastq(fastq, log = None):
read_cnt = 0
if os.path.isfile(fastq):
# EnvironmentModules.module(["load", "bbtools"])
# bbtools faster than zcat | wc because bbtools uses pigz
# cmd = "stats.sh format=3 in=%s" % fastq
cmd = "#bbtools;stats.sh format=3 in=%s" % fastq
cmd = convert_cmd(cmd)
if log:
log.info("- cmd: %s", cmd)
std_out, std_err, exit_code = run_command(cmd, True)
# EnvironmentModules.module(["unload", "bbtools"])
if exit_code == 0 and std_out:
line_list = std_out.split("\n")
#print line_list
val_list = str(line_list[1]).split() #.split('\t')
#print "v = %s" % val_list
read_cnt = int(val_list[1])
if log:
log.info("- read count: %s", read_cnt)
else:
if log:
post_mortem_cmd(cmd, exit_code, std_out, std_err, log)
else:
log.error("- fastq: %s does not exist!", fastq)
return read_cnt
'''
Subsampling calculation
0 .. 250k reads = 100%
250k .. 25m = 100% to 1%
25m .. 600m = 1%
600m+ .. oo < 1%
July 2014 - 15 runs > 600m (HiSeq-2500 Rapid) - 4 actual libraries / 85325 seq units
- returns new subsampling rate
'''
def get_subsample_rate(read_count):
subsample = 0
subsample_rate = 0.01
max_subsample = 6000000 # 4 hours of blast time
new_subsample_rate = 250000.0/read_count
subsample_rate = max(new_subsample_rate, subsample_rate)
subsample_rate = min(1, subsample_rate) # if subsample_rate > 1, then set to 1
subsample = int(read_count * subsample_rate)
if subsample > max_subsample:
subsample = max_subsample
subsample_rate = subsample / float(read_count)
return subsample_rate
'''
Set color hash
- need to update to remove "c" parameter - used in too many places
'''
def set_colors(c, use_color = False):
if use_color == False:
color = {
'black' : "",
'red' : "",
'green' : "",
'yellow' : "",
'blue' : "",
'pink' : "",
'cyan' : "",
'white' : "",
'' : ""
}
else:
color = {
'black' : "\033[1;30m",
'red' : "\033[1;31m",
'green' : "\033[1;32m",
'yellow' : "\033[1;33m",
'blue' : "\033[1;34m",
'pink' : "\033[1;35m",
'cyan' : "\033[1;36m",
'white' : "\033[1;37m",
'' : "\033[m"
}
return color
'''
New function that just returns colors
'''
def get_colors():
color = {
'black' : "\033[1;30m",
'red' : "\033[1;31m",
'green' : "\033[1;32m",
'yellow' : "\033[1;33m",
'blue' : "\033[1;34m",
'pink' : "\033[1;35m",
'cyan' : "\033[1;36m",
'white' : "\033[1;37m",
'' : "\033[m"
}
return color
'''
Returns msg_ok, msg_fail, msg_warn colored or not colored
'''
def get_msg_settings(color):
msg_ok = "[ "+color['green']+"OK"+color['']+" ]"
msg_fail = "[ "+color['red']+"FAIL"+color['']+" ]"
msg_warn = "[ "+color['yellow']+"WARN"+color['']+" ]"
return msg_ok, msg_fail, msg_warn
'''
Use RQC's ap_tool to get the status
set mode = "-sa" to show all, even completed
'''
def get_analysis_project_id(seq_proj_id, target_analysis_project_id, target_analysis_task_id, output_path, log = None, mode = ""):
if log:
log.info("get_analysis_project_id: spid = %s, tapid = %s, tatid = %s", seq_proj_id, target_analysis_project_id, target_analysis_task_id)
analysis_project_id = 0
analysis_task_id = 0
project_type = None
task_type = None
ap_list = os.path.join(output_path, "ap-info.txt")
AP_TOOL = "/global/dna/projectdirs/PI/rqc/prod/jgi-rqc-pipeline/tools/ap_tool.py"
#AP_TOOL = "/global/homes/b/brycef/git/jgi-rqc-pipeline/tools/ap_tool.py"
cmd = "%s -spid %s -m psv -tapid %s -tatid %s %s > %s 2>&1" % (AP_TOOL, seq_proj_id, target_analysis_project_id, target_analysis_task_id, mode, ap_list)
if log:
log.info("- cmd: %s", cmd)
else:
print "- cmd: %s" % cmd
std_out, std_err, exit_code = run_command(cmd, True)
post_mortem_cmd(cmd, exit_code, std_out, std_err, log)
if os.path.isfile(ap_list):
ap_dict = {} # header = value
cnt = 0
fh = open(ap_list, "r")
for line in fh:
arr = line.strip().split("|")
if cnt == 0:
c2 = 0 # position of title in header
for a in arr:
ap_dict[a.lower()] = c2
c2 += 1
else:
for a in ap_dict:
if ap_dict[a] + 1 > len(arr):
pass
else:
ap_dict[a] = arr[ap_dict[a]]
cnt += 1
fh.close()
analysis_project_id = ap_dict.get("analysis project id")
analysis_task_id = ap_dict.get("analysis task id")
project_type = ap_dict.get("analysis product name")
task_type = ap_dict.get("analysis task name")
# nno such project
if cnt == 1:
analysis_project_id = 0
analysis_task_id = 0
if log:
log.info("- project type: %s, task type: %s", project_type, task_type)
log.info("- analysis_project_id: %s, analysis_task_id: %s", analysis_project_id, analysis_task_id)
try:
analysis_project_id = int(analysis_project_id)
analysis_task_id = int(analysis_task_id)
except:
analysis_project_id = 0
analysis_task_id = 0
# ap = 4, at = 8 means its using the column names but didn't find anything
if analysis_project_id < 100:
analysis_project_id = 0
if analysis_task_id < 100:
analysis_task_id = 0
return analysis_project_id, analysis_task_id
'''
For creating a dot file from the pipeline flow
'''
def append_flow(flow_file, orig_node, orig_label, next_node, next_label, link_label):
fh = open(flow_file, "a")
fh.write("%s|%s|%s|%s|%s\n" % (orig_node, orig_label, next_node, next_label, link_label))
fh.close()
'''
Flow file format:
# comment
*label|PBMID Pipeline run for BTXXY<br><font point-size="10">Run Date: 2017-09-28 14:22:50</font>
# origin node, origin label, next node, next label, link label
input_h5|BTXXY H5<br><font point-size="10">3 smrtcells</font>|assembly|HGAP Assembly<FONT POINT-SIZE="10"><br>3 contigs, 13,283,382bp</FONT>|HGAP v4.0.1
nodes should be the output of the transformation between the nodes
e.g. input fastq (25m reads) --[ bbtools subsampling ]--> subsampled fastq (10m reads)
creates a dot file, to convert to png use:
$ module load graphviz
$ dot -T png (dot file) > (png file)
More info on formatting the labels
http://www.graphviz.org/content/node-shapes#html
'''
def dot_flow(flow_file, dot_file, log = None):
if not os.path.isfile(flow_file):
if log:
log.info("- cannot find flow file: %s", flow_file)
else:
print "Cannot find flow file: %s" % flow_file
return
fhw = open(dot_file, "w")
fhw.write("// dot file\n")
fhw.write("digraph rqc {\n") # directed graph
fhw.write(" node [shape=box];\n")
fhw.write(" rankdir=LR;\n")
fh = open(flow_file, "r")
for line in fh:
line = line.strip()
if not line:
continue
if line.startswith("#"):
continue
# graph label
if line.startswith("*label"):
arr = line.split("|")
label = flow_replace(str(arr[1]))
fhw.write(" label=<%s>;\n" % label)
fhw.write(" labelloc=top;\n")
else:
arr = line.split("|")
#print arr
if len(arr) == 5:
org_node = arr[0]
org_label = str(arr[1])
next_node = arr[2]
next_label = str(arr[3])
link_label = str(arr[4])
# must be <br/> in the dot file, I have a habit of using <br>
org_label = flow_replace(org_label)
next_label = flow_replace(next_label)
link_label = flow_replace(link_label)
# label are enclosed by < > instead of " " to handle html-ish markups
if next_node:
link = " %s -> %s;\n" % (org_node, next_node)
if link_label:
link = " %s -> %s [label=<%s>];\n" % (org_node, next_node, link_label)
fhw.write(link)
if org_label:
label = " %s [label=<%s>];\n" % (org_node, org_label)
fhw.write(label)
if next_label:
label = " %s [label=<%s>];\n" % (next_node, next_label)
fhw.write(label)
fh.close()
fhw.write("}\n")
fhw.close()
if log:
log.info("- created dot file: %s", dot_file)
return dot_file
'''
simple replacements
'''
def flow_replace(my_str):
new_str = my_str.replace("<br>", "<br/>").replace("<smf>", "<font point-size=\"10\">").replace("</f>", "</font>")
return new_str
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## main program
if __name__ == "__main__":
# unit tests
print human_size(102192203)
print human_size(250000000000)
#print get_read_count_fastq("/global/projectb/scratch/brycef/sag/phix/11185.1.195330.UNKNOWN_matched.fastq.gz")
cmd = "#bbtools;bbduk.sh in=/global/dna/dm_archive/sdm/illumina//01/14/88/11488.1.208132.UNKNOWN.fastq.gz ref=/global/dna/shared/rqc/ref_databases/qaqc/databases/phix174_ill.ref.fa outm=/global/projectb/scratch/brycef/phix/11488/11488.1.208132.UNKNOWN_matched.fastq.gz outu=/global/projectb/scratch/brycef/phix/11488/11488.1.208132.UNKNOWN_unmatched.fastq.gz"
print convert_cmd(cmd)
cmd = "#pigz;pigz /global/projectb/scratch/brycef/align/BTOYH/genome/11463.6.208000.CAAGGTC-AGACCTT.filter-RNA.fastq.gz-genome.sam"
print convert_cmd(cmd)
cmd = "#java;java -version"
print convert_cmd(cmd)
dot_flow("/global/projectb/scratch/brycef/pbmid/BWOAU/f2.flow", "/global/projectb/scratch/brycef/pbmid/BWOAU/BWOUAx.dot")
sys.exit(0) | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/common.py | common.py |
import os
import sys
import time # sleep
import re # pod_path
srcDir = os.path.dirname(__file__)
sys.path.append(os.path.join(srcDir, '../assemblyqc/lib'))
# from assemblyqc_constants import *
from common import *
from rqc_constants import *
from os_utility import run_sh_command, make_dir_p
from html_utility import html_link
'''
Run Stephan Trong's blast plus wrapper instead of megablast.
It specifies default parameters for megablast and megablast is the default aligner.
The dbs for which run_blastplus is called do not have masking and are not really indexed with makembindex.
TODO: Run all databases in parallel.
Use the same blastn parameters as used in run_blastn, which are set as follows
defaults = " -num_threads " + str(num_threads) + " " + RQCBlast.BLASTN_DEFAULTS;
'''
# def run_blastplus(queryFastaFile, db, outputPath, log, num_threads=16):
# REMOVED! 09092016
def get_cat_cmd(seqUnitName, log):
zcatCmd = ""
if seqUnitName.endswith(".bz2"):
zcatCmd = RQCCommands.BZCAT_CMD
elif seqUnitName.endswith(".gz"):
zcatCmd = RQCCommands.ZCAT_CMD
elif seqUnitName.endswith(".fastq") or seqUnitName.endswith(".fa"):
zcatCmd = RQCCommands.CAT_CMD
else:
log.error("source_fastq should be either bzipped or gzipped fastq. " + str(seqUnitName))
return zcatCmd, RQCExitCodes.JGI_FAILURE
return zcatCmd, RQCExitCodes.JGI_SUCCESS
"""
Localize specified file to /scratch/rqc
- Bryce added the file check and wait
@param: fileName: full path to file
@param: log
sulsj ([email protected])
"""
def localize_file(fileNameFullPath, log):
done = 0
loopCount = 0
while done == 0:
loopCount += 1
if os.path.isfile(fileNameFullPath):
done = 1 # congratulations! The file system seems to be working for the moment
else:
sleepTime = 60 * loopCount
log.error("Filename doesn't exist on the system: %s, sleeping for %s seconds.", fileNameFullPath, sleepTime)
time.sleep(sleepTime)
if loopCount > 3:
done = 1 # probably failed
if not os.path.isfile(fileNameFullPath):
return None
destinationPath = "/scratch/rqc/localized-file"
if not os.path.isdir(destinationPath):
# make_dir_p(destinationPath)
_, _, exitCode = run_sh_command("mkdir -p /scratch/rqc/localized-file && chmod 777 /scratch/rqc/localized-file", True, log, True)
assert exitCode == 0
fileName = safe_basename(fileNameFullPath, log)[0]
localizeCmd = "rsync -av --omit-dir-times --no-perms " + fileNameFullPath + " " + os.path.join(destinationPath, fileName)
_, _, exitCode = run_sh_command(localizeCmd, True, log, True)
localizedFileNameFullPath = None
if exitCode == RQCExitCodes.JGI_SUCCESS:
localizedFileNameFullPath = os.path.join(destinationPath, fileName)
log.info("File localization completed for %s" % (fileName))
else:
localizedFileNameFullPath = None
log.error("File localization failed for %s" % (fileName))
return localizedFileNameFullPath
# def localize_dir(dbFileOrDirName, log):
# REMOVED! 09092016
"""
New localize_dir
Revisions
08022017 Enabled burst buffer on cori
08072017 Added support for persistent bbuffer on cori for localized NCBI databases
"""
def localize_dir2(db, log, bbuffer=None):
safeBaseName, exitCode = safe_basename(db, log)
safeDirName, exitCode = safe_dirname(db, log)
targetScratchDir = None
nerscDb = "/scratch/blastdb/global/dna/shared/rqc/ref_databases/ncbi/CURRENT"
# nerscDb = "/" ## temporarily do not use NERSC db
## nerscDbPersistentBBuffer = "/var/opt/cray/dws/mounts/batch/NCBI_DB_striped_scratch/ncbi"
## check if persistent burst buffer is ready
if bbuffer is None and "DW_PERSISTENT_STRIPED_NCBI_DB" in os.environ and os.environ['DW_PERSISTENT_STRIPED_NCBI_DB'] is not None:
nerscDb = os.path.join(os.environ['DW_PERSISTENT_STRIPED_NCBI_DB'], "ncbi")
else:
targetScratchDir = "/scratch/rqc"
if bbuffer is not None:
if not os.path.isdir(bbuffer):
log.error("Burst Buffer does not initiated: %s", bbuffer)
return None, RQCExitCodes.JGI_FAILURE
else:
targetScratchDir = bbuffer
log.info("Localization will use Burst Buffer location at %s", targetScratchDir)
elif not os.path.isdir(targetScratchDir):
_, _, exitCode = run_sh_command("mkdir %s && chmod 777 %s" % (targetScratchDir, targetScratchDir), True, log, True)
assert exitCode == 0
rsyncOption = ""
src = db
dest = ""
## For
# GREEN_GENES = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/green_genes16s.insa_gg16S.fasta"
# LSU_REF = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/LSURef_115_tax_silva.fasta"
# SSU_REF = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/SSURef_NR99_115_tax_silva.fasta"
# LSSU_REF = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/LSSURef_115_tax_silva.fasta"
# CONTAMINANTS = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/JGIContaminants.fa"
# COLLAB16S = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/collab16s.fa"
if db.endswith(".fa") or db.endswith(".fasta"):
rsyncOption = "--include '*.n??' --exclude '*.fa' --exclude '*.fasta' --exclude '%s' --exclude '*.log'" % (safeBaseName)
src = db + ".n??"
dest = os.path.join(targetScratchDir, safeBaseName)
blastDb = dest
if os.path.isfile(dest): ## just in case for the file name already exists
rmCmd = "rm -rf %s" % (dest)
run_sh_command(rmCmd, True, log, True)
## For
# NT_maskedYindexedN_BB = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/nt/bbtools_dedupe_mask/nt_bbdedupe_bbmasked_formatted"
elif db.endswith("nt_bbdedupe_bbmasked_formatted"):
if os.path.isdir(os.path.join(nerscDb, "nt/bbtools_dedupe_mask")):
blastDb = os.path.join(nerscDb, "nt/bbtools_dedupe_mask")
log.info("NERSC NCBI Database found: %s" % (blastDb))
return blastDb, RQCExitCodes.JGI_SUCCESS
rsyncOption = "--include '*.n??' --exclude '*.fna' --exclude 'nt_bbdedupe_bbmasked_formatted' --exclude '*.log'"
src = safeDirName + '/'
dest = os.path.join(targetScratchDir, safeBaseName)
blastDb = dest
## For
# NR = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/nr/nr"
# REFSEQ_ARCHAEA = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.archaea/refseq.archaea"
# REFSEQ_BACTERIA = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.bacteria/refseq.bacteria"
# REFSEQ_FUNGI = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.fungi/refseq.fungi"
# REFSEQ_MITOCHONDRION = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.mitochondrion/refseq.mitochondrion"
# REFSEQ_PLANT = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plant/refseq.plant"
# REFSEQ_PLASMID = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plasmid/refseq.plasmid"
# REFSEQ_PLASTID = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plastid/refseq.plastid"
# REFSEQ_VIRAL = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.viral/refseq.viral"
else:
if os.path.isdir(os.path.join(nerscDb, safeBaseName)):
blastDb = os.path.join(nerscDb, safeBaseName)
log.info("NERSC NCBI Database found: %s" % (blastDb))
return blastDb, RQCExitCodes.JGI_SUCCESS
rsyncOption = "--include '*.n??' --exclude '*.fna' --exclude '%s' --exclude '*.log'" % (safeBaseName)
src = safeDirName + '/'
dest = os.path.join(targetScratchDir, safeBaseName)
blastDb = os.path.join(dest, dest)
rsyncCmd = "rsync -av --omit-dir-times --no-perms %s %s %s" % (rsyncOption, src, dest)
dbDirNameLocalized, _, exitCode = run_sh_command(rsyncCmd, True, log, True)
if exitCode != 0:
log.error("rsync failed. Cannot localize " + str(db))
return dbDirNameLocalized, RQCExitCodes.JGI_FAILURE
else:
cmd = "chmod -f -R 777 %s" % (dest)
_, _, exitCode = run_sh_command(cmd, True, log)
return blastDb, RQCExitCodes.JGI_SUCCESS
def safe_dirname(pathName, log):
dirName = ""
if pathName is None:
return dirName, RQCExitCodes.JGI_FAILURE
dirName = os.path.dirname(pathName)
if dirName is None:
log.error("Could not get basename for " + str(pathName))
return dirName, RQCExitCodes.JGI_FAILURE
return dirName, RQCExitCodes.JGI_SUCCESS
def safe_basename(pathName, log):
baseName = ""
if pathName is None:
return baseName, RQCExitCodes.JGI_FAILURE
baseName = os.path.basename(pathName)
if baseName is None:
log.error("Could not get basename for " + str(pathName))
return baseName, RQCExitCodes.JGI_FAILURE
return baseName, RQCExitCodes.JGI_SUCCESS
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## qsub functions
## module load uge
def submit_qsub(qsub_opt, qsub_cmd, qsub_name, output_path, log):
job_id = 0
job_share = 150
cluster_project = "gentech-rqc.p"
qsub_path = os.path.join(output_path, "qsub")
if not os.path.isdir(qsub_path):
os.makedirs(qsub_path)
output_log = os.path.join(qsub_path, "qsub-%s.log" % qsub_name)
my_qsub = "qsub -b y -j y -m n -w e -terse -N %s -P %s -o %s -js %s %s '%s'" % (qsub_name, cluster_project, output_log, job_share, qsub_opt, qsub_cmd)
# append to qsub.txt
cmd = "module load uge;%s" % my_qsub
stdOut, stdErr, exitCode = run_sh_command(cmd, True, log)
post_mortem_cmd(cmd, exitCode, stdOut, stdErr, log)
if exitCode == 0:
job_id = int(stdOut.strip())
log.info("- cluster job id: %s", job_id)
if job_id > 0:
qsub_log = os.path.join(output_path, "qsub_list.txt")
fh = open(qsub_log, "a")
fh.write("%s,%s\n" % (job_id, "submitted"))
fh.close()
return job_id
## watch jobs running on the cluster
def watch_cluster(output_path, log, sleep_time=300):
log.info("watch_cluster")
is_job_complete = "/usr/common/usg/bin/isjobcomplete.new" # + job_id = nice way to ask if job is done
qsub_log = os.path.join(output_path, "qsub_list.txt")
#sleep_time = 300 # check isjobrunning every xx seconds
hb_max = (180 * 3600) / sleep_time # total number of heartbeats before we give up, 180 hours worth of run time
#hb_max = 5 # temp
done = 0
hb_cnt = 0 # heartbeat count
if not os.path.isfile(qsub_log):
done = 1
while done == 0:
hb_cnt += 1
log.info("- heartbeat: %s", hb_cnt)
qsub_list = []
qsub_cnt = 0
qsub_complete = 0
qsub_err = 0
fh = open(qsub_log, "r")
for line in fh:
qsub_list.append(line.strip())
qsub_cnt += 1
fh.close()
fh = open(qsub_log, "w")
for qsub in qsub_list:
job_id, status = qsub.split(",")
new_status = status
if status in ("complete", "fail"):
continue
else:
cmd = "%s %s" % (is_job_complete, job_id)
stdOut, stdErr, exitCode = run_sh_command(cmd, True, log)
#post_mortem_cmd(cmd, exitCode, stdOut, stdErr, log)
running = "%s queued/running" % job_id
not_running = "%s not queued/running" % job_id
error_qw = "%s queued/running/error" % job_id
if stdOut.strip == running:
new_status = "running"
elif stdOut.strip() == not_running:
new_status = "complete" # might have failed
qsub_complete += 1
elif stdOut.strip() == error_qw:
new_status = "error"
qsub_err += 1
if stdOut.strip() == "not queued/running":
new_status = "complete"
fh.write("%s,%s\n" % (job_id, new_status))
log.info("- job_id: %s, status: %s", job_id, new_status)
fh.close()
qsub_running = qsub_cnt - (qsub_complete + qsub_err)
log.info("- job count: %s, running: %s, err: %s", qsub_cnt, qsub_running, qsub_err)
if qsub_cnt == (qsub_complete + qsub_err):
done = 1
if hb_cnt >= hb_max:
done = 1
if done == 0:
time.sleep(sleep_time)
'''
Open the file, count number of lines/hits
- skip header (#)
Used in sag.py and sag_decontam.py, ce.py and iso.py too
* out of memory when reading big files ...
'''
def get_blast_hit_count(blast_file):
hit_count = 0
if os.path.isfile(blast_file):
# if more than a gig we get a memory error doing this, even though its not supposed to cause a problem ...
# - maybe the hit_count var grows too big? no, not counting the hits still causes issues
if os.path.getsize(blast_file) < 100000000:
fh_blast = open(blast_file, "r")
for line in fh_blast:
if line.startswith("#"):
continue
hit_count += 1
fh_blast.close()
else:
hit_count = 9999
return hit_count
'''
Pads a string with 0's and splits into groups of 2
e.g. padStringPath(44) returns "00/00/00/44"
@param myString: string to pad
@return: padded string
'''
def pad_string_path(myString, padLength=8, depth=None):
myString = str(myString)
padLength = int(padLength)
if padLength > 8 or padLength <= 0:
padLength = 8
# left-pad with 0's
myString = myString.zfill(padLength)
# use re.findall function to split into pairs of strings
stringList = re.findall("..", myString)
# create ss/ss/ss/ss
if not depth:
padString = "/".join(stringList)
else:
padString = "/".join(stringList[:depth])
padString = padString + "/"
return padString
def get_dict_obj(sfile):
kvmap = {}
if os.path.isfile(sfile):
with open(sfile, 'r') as fh:
for line in fh:
line = line.strip()
if not line or line.startswith('#'):
continue
tks = line.strip().split('=')
if len(tks) == 2:
kvmap[tks[0].strip().lower()] = tks[1].strip()
return kvmap
# helper function
def pipeline_val(keyname, kwargs, stats, files=None, file_path_prefix=None):
keyname = keyname.lower()
form = kwargs.get('type')
if form == 'file' and files:
val = files.get(keyname)
if val:
if kwargs.get('filter') == 'base':
val = os.path.basename(val)
elif kwargs.get('filter') != 'full' and file_path_prefix and val.startswith(file_path_prefix): # make the file path relative to run dir
val = val[len(file_path_prefix)+1:]
if kwargs.get('filter') == 'link':
val = html_link(val, kwargs.get('label'))
elif stats:
val = stats.get(keyname, 'n.a.')
if val != 'n.a.':
if val.endswith('%'): # remove last "%"
val = val[:-1]
if form == 'bigint' or form == 'int':
try:
tmp = int(val)
if form == 'bigint':
val = format(tmp, ',') # e.g. 123,456,789
else:
val = tmp # int
except:
pass
elif form == 'pct':
try:
tmp = 100.0 * float(val)
if 'filter' in kwargs and type(kwargs['filter']) == int:
fformat = '%.' + str(kwargs['filter']) + 'f'
else:
fformat = '%.2f'
val = fformat % tmp
except Exception as e:
print('ERROR pipeline_val - form=pct: %s' % e)
elif form == 'floatstr' or form == 'float':
try:
tmp = float(val)
if form == 'floatstr':
val = '%.4f' % tmp
elif form == 'float':
val = tmp # float
if 'filter' in kwargs and type(kwargs['filter']) == int:
fformat = '%.' + str(kwargs['filter']) + 'f'
val = fformat % val
except Exception as e:
print('ERROR pipeline_val - form=floatstr|float: %s' % e)
if val == 'n.a.' and kwargs.get('filter') == 0:
if form == 'int':
val = 0
else:
val = '0'
else:
val = None
if 'vtype' not in kwargs or kwargs['vtype'] != 'numeric':
val = str(val)
return val # this is for string replacement, so return string
## EOF
if __name__ == '__main__':
print('unit test ...')
tok_map = {
'overall bases Q score mean' : {'token' : '[_BASE-QUALITY-SCORE_]', 'type': 'float', 'filter': 1},
'overall bases Q score std' : {'token' : '[_BASE-QUALITY-SCORE-STD_]', 'type': 'float'},
'Q30 bases Q score mean' : {'token' : '[_Q30-BASE-QUALITY-SCORE_]', 'type': 'float'},
'Q30 bases Q score std' : {'token' : '[_Q30-BASE-QUALITY-SCORE-STD_]', 'type': 'float'},
}
odir = '/global/projectb/scratch/syao/standalone/readqc_bb'
stats = get_dict_obj(os.path.join(odir, 'readqc_stats.txt'))
# rawCnt = pipeline_val('inputReads', {'type': 'int'}, stats)
# print(rawCnt)
# print(type(rawCnt))
# from pprint import pprint
# pprint(stats)
for key in tok_map:
dat = tok_map[key]
key = key.lower()
print(key)
print(pipeline_val(key, dat, stats))
print(stats[key])
print('\n') | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/rqc_utility.py | rqc_utility.py |
from os import path as ospath
from common import get_run_path
class RQCConstants:
UMASK = 0o022 # 755
# Exit codes
class RQCExitCodes:
# used by run_blastplus_taxserver, run_diamond, readqc, pacbio, dapseq, assemblyqc
JGI_SUCCESS = 0
JGI_FAILURE = 1
# The locations of different directories are hard-coded
class RQCPaths:
RQC_SW_BIN_DIR = ospath.join(get_run_path(), "tools")
# General commands
class RQCCommands:
GNUPLOT_CMD = 'gnuplot'
CAT_CMD = "cat"
BZCAT_CMD = "bzcat"
RM_CMD = "rm -f" # readqc utils
SETE_CMD = "set -e" # readqc_utils
# Read QC
BBTOOLS_REFORMAT_CMD = "module load bbtools; reformat.sh"
# Blast specific constants
class RQCBlast:
MEGAN_CMD = RQCPaths.RQC_SW_BIN_DIR + "/megan.pl" # used in assemblyqc.py and pacbioqc2_utils.py
# Reference databases
class RQCReferenceDatabases:
## NCBI databases
## NERSC's DB location: /scratch/blastdb/global/dna/shared/rqc/ref_databases/ncbi/CURRENT"
NR = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/nr/nr"
REFSEQ_ARCHAEA = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.archaea/refseq.archaea"
REFSEQ_BACTERIA = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.bacteria/refseq.bacteria"
REFSEQ_FUNGI = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.fungi/refseq.fungi"
REFSEQ_MITOCHONDRION = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.mitochondrion/refseq.mitochondrion"
REFSEQ_PLANT = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plant/refseq.plant"
REFSEQ_PLASMID = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plasmid/refseq.plasmid"
REFSEQ_PLASTID = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plastid/refseq.plastid"
REFSEQ_VIRAL = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.viral/refseq.viral"
NT = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/nt/nt"
# should use this by default for NT
NT_maskedYindexedN_BB = "/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/nt/bbtools_dedupe_mask/nt_bbdedupe_bbmasked_formatted"
SAGTAMINANTS = "/global/projectb/sandbox/rqc/qcdb/blast+/sagtaminants/sagtaminants.fa"
GREEN_GENES = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/green_genes16s.insa_gg16S.fasta" # same file as above but has db index
CONTAMINANTS = "/global/dna/shared/rqc/ref_databases/misc/CURRENT/JGIContaminants.fa" # april 2015
COLLAB16S = "/global/projectb/sandbox/rqc/qcdb/collab16s/collab16s.fa" # recent
LSU_REF = "/global/dna/shared/rqc/ref_databases/silva/CURRENT/LSURef_tax_silva"
SSU_REF = "/global/dna/shared/rqc/ref_databases/silva/CURRENT/SSURef_tax_silva"
LSSU_REF = "/global/dna/shared/rqc/ref_databases/silva/CURRENT/LSSURef_tax_silva"
## Alignment
ARTIFACT_REF = "/global/dna/shared/rqc/ref_databases/qaqc/databases/Artifacts.adapters_primers_only.fa"
## PHiX Pipeline
PHIX_REF = "/global/dna/shared/rqc/ref_databases/qaqc/databases/phix174_ill.ref.fa"
PHIX_CIRCLE_REF = "/global/dna/shared/rqc/ref_databases/qaqc/databases/phix174.ilmn_ref_concat.fa"
FRAG_ADAPTERS = "/global/projectb/sandbox/gaag/bbtools/data/adapters.fa"
## EOF | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/rqc_constants.py | rqc_constants.py |
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## libraries to use
from subprocess import Popen, call, PIPE
import os, glob, sys
import shlex
import unittest
import time
import grp
import errno
from threading import Timer ## for timer
from common import get_run_path
g_scale_inv = ((1024.*1024., "MB"), (1024., "KB"))
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## function definitions
defineCalledProcessError = False
try:
from subprocess import CalledProcessError
except ImportError:
defineCalledProcessError = True
if defineCalledProcessError:
class CalledProcessError(OSError):
def __init__(self, returncode, cmd, *l, **kw):
OSError.__init__(self, *l, **kw)
self.cmd = cmd
self.returncode = returncode
"""
Run user command using subprocess.call
@param: popenargs: command and options to run
@param: kwargs: additional parameters
"""
def run(*popenargs, **kwargs):
kw = {}
kw.update(kwargs)
dryRun = kw.pop('dryRun', False)
if dryRun:
print popenargs
else:
## convert something like run("ls -l") into run("ls -l", shell=True)
if isinstance(popenargs[0], str) and len(shlex.split(popenargs[0])) > 1:
kw.setdefault("shell", True)
## > /dev/null 2>&1
if kw.pop("supressAllOutput", False):
stdnull = open(os.devnull, "w") ## incompat with close_fds on Windows
kw.setdefault("stdout", stdnull)
kw.setdefault("stderr", stdnull)
else:
stdnull = None
returncode = call(*popenargs, **kw)
if stdnull:
stdnull.close()
if returncode != 0:
raise CalledProcessError(returncode=returncode, cmd=str(popenargs))
"""
Similar to shell backticks, e.g. a = `ls -1` <=> a = backticks(['ls','-1']).
If 'dryRun=True' is given as keyword argument, then 'dryRet' keyword must
provide a value to return from this function.
@param: popenargs: command and options to run
@param: kwargs: additional parameters
@return: command result (stdout)
"""
def back_ticks(*popenargs, **kwargs):
kw = {}
kw.update(kwargs)
dryRun = kw.pop('dryRun', False)
dryRet = kw.pop('dryRet', None)
if dryRun:
print popenargs
return dryRet
else:
kw['stdout'] = PIPE
p = Popen(*popenargs, **kw)
retOut = p.communicate()[0]
if p.returncode != 0:
raise CalledProcessError(returncode=p.returncode, cmd=str(popenargs))
return retOut
"""
Run a command, catch stdout and stderr and exitCode
@param: cmd
@param: live (boolean, default false - don't run the command but pretend we did)
@return: stdout, stderr, exit code
"""
# def run_command(cmd, live=False):
# return run_sh_command(cmd, live=False)
def run_sh_command(cmd, live=False, log=None, runTime=False, stdoutPrint=True, timeoutSec=0):
stdOut = None
stdErr = None
exitCode = None
start = 0
end = 0
elapsedSec = 0
if cmd:
if not live:
stdOut = "Not live: cmd = '%s'" % (cmd)
exitCode = 0
else:
if log and stdoutPrint:
log.info("cmd: %s" % (cmd))
##---------
## OLD
if runTime:
start = time.time()
p = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
## ref) http://stackoverflow.com/questions/1191374/using-module-subprocess-with-timeout
if timeoutSec > 0:
kill_proc = lambda proc: proc.kill()
timer = Timer(timeoutSec, kill_proc, [p])
#p.wait()
try:
stdOut, stdErr = p.communicate()
exitCode = p.returncode
finally:
if timeoutSec > 0:
timer.cancel()
exitCode = 143
else:
pass
if runTime:
end = time.time()
elapsedSec = end - start
if log:
log.info("*************************************")
if cmd.split(" ")[0].split("/")[-1]:
log.info(cmd.split(" ")[0].split("/")[-1])
log.info("Command took " + str(elapsedSec) + " sec.")
log.info("*************************************")
if log and stdoutPrint:
log.info("Return values: exitCode=" + str(exitCode) + ", stdOut=" + str(stdOut) + ", stdErr=" + str(stdErr))
if exitCode != 0:
if log:
log.warn("- The exit code has non-zero value.")
else:
if log:
log.error("- No command to run.")
return None, None, -1
return stdOut, stdErr, exitCode
"""
Create one dir with pathname path or do nothing if it already exists.
Same as Linux 'mkdir -p'.
@param: path: path
@param: dryRun: dryrun directive
"""
def make_dir(path, perm=None, dryRun=False):
if not dryRun:
if not os.path.exists(path):
if not perm:
os.makedirs(path)
else:
os.makedirs(path, perm)
else:
print "make_dir %s" % (path, )
"""
The method make_dir_p() is recursive directory creation function.
Like mkdir(), but makes all intermediate-level directories needed to contain the leaf directory.
"""
def make_dir_p(path):
try:
os.makedirs(path)
except OSError as exc: # Python >2.5
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else: raise
"""
Create muiltiple dirs with the same semantics as make_dir
@param: path: path
@param: dryRun: dryrun directive
"""
def make_dirs(paths, dryRun=False):
for path in paths:
make_dir(path=path, dryRun=dryRun)
"""
Assume that the argument is a file name and make all directories that are
part of it
@param: fileName: create dir to the file
"""
def make_file_path(fileName):
dirName = os.path.dirname(fileName)
if dirName not in ("", "."):
make_dir(dirName)
"""
Remove dir
@param: path: path to delete
@param: dryRun: dryrun directive
"""
## To do: perhaps use shutil.rmtree instead?
def rm_dir(path, dryRun=False):
run(["rm", "-rf", path], dryRun=dryRun)
## make alias
rmrf = rm_dir
"""
Remove file.
@param: path: path to delete
@param: dryRun: dryrun directive
"""
def remove_file(path, dryRun=False):
for f in glob.iglob(path):
try:
if os.path.exists(f):
os.remove(f)
except OSError:
pass
"""
Remove multiple files.
@param: path: path to delete
@param: dryRun: dryrun directive
"""
def remove_files(paths, dryRun=False):
for f in paths:
try:
os.remove(f)
except OSError:
pass
"""
Create an empty dir with a given path.
If path already exists, it will be removed first.
@param: path: path to delete
@param: dryRun: dryrun directive
"""
def remake_dir(path, dryRun=False):
rmrf(path, dryRun=dryRun)
make_dir(path, dryRun=dryRun)
"""
Change mode.
@param: path: path to chmod
@param: mode: the form `[ugoa]*([-+=]([rwxXst]*|[ugo]))+' OR change_mod(path, "0755")
@param: opts: additional chmod options
@param: dryRun: dryrun directive
"""
def change_mod(path, mode, opts='', dryRun=False):
if isinstance(path, basestring):
path = [path]
else:
path = list(path)
run(["chmod"]+opts.split()+[mode]+path, dryRun=dryRun)
"""
Change grp.
"""
def change_grp(filepath, grpName):
uid = os.stat(filepath).st_uid
#uid = pwd.getpwnam("qc_user").pw_uid
gid = grp.getgrnam(grpName).gr_gid
os.chown(filepath, uid, gid)
"""
find files
"""
def find_files(patt):
#return [os.path.join(d, f) if f.find(patt) != -1 for f in os.listdir(d)]
return [f for f in glob.glob(patt)]
"""
Move file
"""
def move_file(source, dest, dryRun=False):
try:
if os.path.exists(source) and not os.path.exists(dest):
run(["mv", source, dest], dryRun=dryRun)
except OSError:
pass
"""
get various mem usage properties of process with id pid in MB
@param VmKey
@param pid
"""
#-------------------------------------------------------------------------------
def _VmB(VmKey, pid):
#-------------------------------------------------------------------------------
procStatus = '/proc/%d/status' % pid
unitScale = {'kB': 1.0/1024.0, 'mB': 1.0,
'KB': 1.0/1024.0, 'MB': 1.0}
## get pseudo file /proc/<pid>/status
try:
if os.path.exists(procStatus):
t = open(procStatus)
v = t.read()
t.close()
else:
return 0.0
except OSError:
#logger.exception("Failed to open /proc files.")
print "Failed to open /proc files."
return 0.0 # non-Linux?
## get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
i = v.index(VmKey)
v = v[i:].split(None, 3) # by whitespace
if len(v) < 3:
return 0.0 # invalid format?
## convert Vm value to bytes
return float(v[1]) * unitScale[v[2]]
"""
convert scale
"""
#-------------------------------------------------------------------------------
def to_scale(x):
#-------------------------------------------------------------------------------
for sc in g_scale_inv:
y = x/sc[0]
if y >= 1:
return "%.3f%s" % (y, sc[1])
return "%.3f%s" % (y, "B")
"""
Return memory usage in bytes or as formatted string.
@param pid
@param since
@param asStr
"""
#-------------------------------------------------------------------------------
def get_virtual_memory_usage(pid, since=0.0, asStr=True):
#-------------------------------------------------------------------------------
b = _VmB('VmSize:', pid) - since
if asStr:
return "VirtMem: " + to_scale(b)
else:
return b
"""
Return resident memory usage in bytes.
@param pid
@param since
@param asStr
"""
#-------------------------------------------------------------------------------
def get_resident_memory_usage(pid, since=0.0, asStr=True):
#-------------------------------------------------------------------------------
b = _VmB('VmRSS:', pid) - since
if asStr:
return "ResMem: " + to_scale(b)
else:
return b
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## unit test
class TestOsUtility(unittest.TestCase):
def testRun(self):
try:
run(["rm", "-rf", "./unittest"], dryRun=False)
except CalledProcessError, msg:
self.assertNotEqual(msg.returncode, 0)
try:
make_dir("./unittest", dryRun=False)
except CalledProcessError, msg:
self.assertEqual(msg.returncode, 0)
try:
rm_dir("./unittest", dryRun=False)
except CalledProcessError, msg:
self.assertEqual(msg.returncode, 0)
def testBackTicks(self):
cmd = "free"
try:
freeOut = back_ticks(cmd, shell=True)
except CalledProcessError, msg:
print >>sys.stderr, "Failed to call %s. Exit code=%s" % (msg.cmd, msg.returncode)
sys.exit(1)
ret = -1
ret = float(freeOut.split('\n')[1].split()[2]) / \
float(freeOut.split('\n')[1].split()[1]) * 100.0
assert ret > -1
def sh(cmd):
""" simple function to call a sh command """
proc = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
out, err = proc.communicate()
if err: print >>sys.stderr, err
return out
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## main program
## EOF | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/os_utility.py | os_utility.py |
import os
import sys
## custom libs in "../lib/"
srcDir = os.path.dirname(__file__)
sys.path.append(os.path.join(srcDir, 'tools')) ## ./tools
sys.path.append(os.path.join(srcDir, '../lib')) ## rqc-pipeline/lib
sys.path.append(os.path.join(srcDir, '../tools')) ## rqc-pipeline/tools
from readqc_constants import RQCReadQcConfig, ReadqcStats
from rqc_constants import RQCExitCodes
from os_utility import run_sh_command
from common import append_rqc_stats, append_rqc_file
statsFile = RQCReadQcConfig.CFG["stats_file"]
filesFile = RQCReadQcConfig.CFG["files_file"]
"""
Title : read_megablast_hits
Function : This function generates the tophit list from megablast searches against different databases.
Usage : read_megablast_hits(db_name, log)
Args : blast db name or full path
Returns : SUCCESS
FAILURE
Comments :
"""
def read_megablast_hits(db, log):
currentDir = RQCReadQcConfig.CFG["output_path"]
megablastDir = "megablast"
megablastPath = os.path.join(currentDir, megablastDir)
statsFile = RQCReadQcConfig.CFG["stats_file"]
filesFile = RQCReadQcConfig.CFG["files_file"]
##
## Process blast output files
##
matchings = 0
hitCount = 0
parsedFile = os.path.join(megablastPath, "megablast.*.%s*.parsed" % (db))
matchings, _, exitCode = run_sh_command("grep -v '^#' %s 2>/dev/null | wc -l " % (parsedFile), True, log)
if exitCode == 0: ## if parsed file found.
t = matchings.split()
if len(t) == 1 and t[0].isdigit():
hitCount = int(t[0])
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_MATCHING_HITS + " " + db, hitCount, log)
##
## add .parsed file
##
parsedFileFound, _, exitCode = run_sh_command("ls %s" % (parsedFile), True, log)
if parsedFileFound:
parsedFileFound = parsedFileFound.strip()
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_PARSED_FILE + " " + db, os.path.join(megablastPath, parsedFileFound), log)
else:
log.error("- Failed to add megablast parsed file of %s." % (db))
return RQCExitCodes.JGI_FAILURE
##
## wc the top hits
##
topHit = 0
tophitFile = os.path.join(megablastPath, "megablast.*.%s*.parsed.tophit" % (db))
tophits, _, exitCode = run_sh_command("grep -v '^#' %s 2>/dev/null | wc -l " % (tophitFile), True, log)
t = tophits.split()
if len(t) == 1 and t[0].isdigit():
topHit = int(t[0])
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_TOP_HITS + " " + db, topHit, log)
##
## wc the taxonomic species
##
spe = 0
taxlistFile = os.path.join(megablastPath, "megablast.*.%s*.parsed.taxlist" % (db))
species, _, exitCode = run_sh_command("grep -v '^#' %s 2>/dev/null | wc -l " % (taxlistFile), True, log)
t = species.split()
if len(t) == 1 and t[0].isdigit():
spe = int(t[0])
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_TAX_SPECIES + " " + db, spe, log)
##
## wc the top 100 hit
##
top100hits = 0
top100hitFile = os.path.join(megablastPath, "megablast.*.%s*.parsed.top100hit" % (db))
species, _, exitCode = run_sh_command("grep -v '^#' %s 2>/dev/null | wc -l " % (top100hitFile), True, log)
t = species.split()
if len(t) == 1 and t[0].isdigit():
top100hits = int(t[0])
append_rqc_stats(statsFile, ReadqcStats.ILLUMINA_READ_TOP_100HITS + " " + db, top100hits, log)
##
## Find and add taxlist file
##
taxListFound, _, exitCode = run_sh_command("ls %s" % (taxlistFile), True, log)
taxListFound = taxListFound.strip()
if taxListFound:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_TAXLIST_FILE + " " + db, os.path.join(megablastPath, taxListFound), log)
else:
log.error("- Failed to add megablast taxlist file of %s." % (db))
return RQCExitCodes.JGI_FAILURE
##
## Find and add tophit file
##
tophitFound, _, exitCode = run_sh_command("ls %s" % (tophitFile), True, log)
tophitFound = tophitFound.strip()
if tophitFound:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_TOPHIT_FILE + " " + db, os.path.join(megablastPath, tophitFound), log)
else:
log.error("- Failed to add megablast tophit file of %s." % (db))
return RQCExitCodes.JGI_FAILURE
##
## Find and add top100hit file
##
top100hitFound, _, exitCode = run_sh_command("ls %s" % (top100hitFile), True, log)
top100hitFound = top100hitFound.strip()
if top100hitFound:
append_rqc_file(filesFile, ReadqcStats.ILLUMINA_READ_TOP100HIT_FILE + " " + db, os.path.join(megablastPath, top100hitFound), log)
else:
log.error("- Failed to add megablast top100hit file of %s." % (db))
return RQCExitCodes.JGI_FAILURE
else:
log.info("- No blast hits for %s." % (db))
return RQCExitCodes.JGI_SUCCESS
"""
Title : read_level_mer_sampling
Function : Collect read level 20mer uniqueness sampling stats
Usage : read_level_mer_sampling($analysis, $summary_file_dir)
Args : 1) A reference to an JGI_Analysis object
2) current working folder wkdir/uniqueness
Returns : JGI_SUCCESS: Illumina read level report could be successfully generated.
JGI_FAILURE: Illumina read level report could not be generated.
Comments : This function is intended to be called at the very end of the illumina read level data processing script.
"""
def read_level_mer_sampling(dataToRecordDict, dataFile, log):
retCode = RQCExitCodes.JGI_FAILURE
## Old data
#nSeq nStartUniMer fracStartUniMer nRandUniMer fracRandUniMer
## 0 1 2 3 4
##25000 2500 0.1 9704 0.3882
## New data
#count first rand first_cnt rand_cnt
# 0 1 2 3 4
#25000 66.400 76.088 16600 19022
#50000 52.148 59.480 13037 14870
#75000 46.592 53.444 11648 13361
#100000 43.072 49.184 10768 12296 ...
if os.path.isfile(dataFile):
with open(dataFile, "r") as merFH:
lines = merFH.readlines()
## last line
t = lines[-1].split('\t')
# breaks 2016-09-07
#assert len(t) == 5
totalMers = int(t[0])
## new by bbcountunique
uniqStartMerPer = float("%.2f" % (float(t[1])))
uniqRandtMerPer = float("%.2f" % (float(t[2])))
dataToRecordDict[ReadqcStats.ILLUMINA_READ_20MER_SAMPLE_SIZE] = totalMers
dataToRecordDict[ReadqcStats.ILLUMINA_READ_20MER_PERCENTAGE_STARTING_MERS] = uniqStartMerPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_20MER_PERCENTAGE_RANDOM_MERS] = uniqRandtMerPer
retCode = RQCExitCodes.JGI_SUCCESS
else:
log.error("- qhist file not found: %s" % (dataFile))
return retCode
"""
Title : base_level_qual_stats
Function : Generate qual scores and plots of base level QC
Usage : base_level_qual_stats($analysis, $)
Args : 1) A reference to an JGI_Analysis object
2) current working folder wkdir/qual
Returns : JGI_SUCCESS: Illumina read level report could be successfully generated.
JGI_FAILURE: Illumina read level report could not be generated.
Comments : This function is intended to be called at the very end of the illumina base level data processing script.
"""
def base_level_qual_stats(dataToRecordDict, reformatObqhistFile, log):
cummlatPer = 0
cummlatBase = 0
statsPerc = {30:0, 25:0, 20:0, 15:0, 10:0, 5:0}
statsBase = {30:0, 25:0, 20:0, 15:0, 10:0, 5:0}
Q30_seen = 0
Q25_seen = 0
Q20_seen = 0
Q15_seen = 0
Q10_seen = 0
Q5_seen = 0
## New format
##Median 38
##Mean 37.061
##STDev 4.631
##Mean_30 37.823
##STDev_30 1.699
##Quality bases fraction
#0 159 0.00008
#1 0 0.00000
#2 12175 0.00593
#3 0 0.00000
#4 0 0.00000
#5 0 0.00000
#6 0 0.00000
allLines = open(reformatObqhistFile).readlines()
for l in allLines[::-1]:
l = l.strip()
##
## obqhist file format example
##
# #Median 36
# #Mean 33.298
# #STDev 5.890
# #Mean_30 35.303
# #STDev_30 1.517
# #Quality bases fraction
# 0 77098 0.00043
# 1 0 0.00000
# 2 0 0.00000
# 3 0 0.00000
# 4 0 0.00000
# 5 0 0.00000
# 6 0 0.00000
if len(l) > 0:
if l.startswith("#"):
if l.startswith("#Mean_30"):
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q30_SCORE_MEAN] = l.split('\t')[1]
elif l.startswith("#STDev_30"):
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q30_SCORE_STD] = l.split('\t')[1]
elif l.startswith("#Mean"):
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_OVERALL_BASES_Q_SCORE_MEAN] = l.split('\t')[1]
elif l.startswith("#STDev"):
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_OVERALL_BASES_Q_SCORE_STD] = l.split('\t')[1]
continue
qavg = None
nbase = None
percent = None
t = l.split()
try:
qavg = int(t[0])
nbase = int(t[1])
percent = float(t[2])
            except (IndexError, ValueError):
log.warn("parse error in base_level_qual_stats: %s %s %s %s" % (l, qavg, nbase, percent))
continue
log.debug("base_level_qual_stats(): qavg and nbase and percent: %s %s %s" % (qavg, nbase, percent))
cummlatPer += percent * 100.0
cummlatPer = float("%.f" % (cummlatPer))
if cummlatPer > 100:
cummlatPer = 100.0 ## RQC-621
cummlatBase += nbase
if qavg == 30:
Q30_seen = 1
statsPerc[30] = cummlatPer
statsBase[30] = cummlatBase
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q30] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C30] = cummlatBase
elif qavg == 25:
Q25_seen = 1
statsPerc[25] = cummlatPer
statsBase[25] = cummlatBase
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q25] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C25] = cummlatBase
elif qavg == 20:
Q20_seen = 1
statsPerc[20] = cummlatPer
statsBase[20] = cummlatBase
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q20] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C20] = cummlatBase
elif qavg == 15:
Q15_seen = 1
statsPerc[15] = cummlatPer
statsBase[15] = cummlatBase
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q15] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C15] = cummlatBase
elif qavg == 10:
Q10_seen = 1
statsPerc[10] = cummlatPer
statsBase[10] = cummlatBase
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q10] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C10] = cummlatBase
elif qavg == 5:
Q5_seen = 1
statsPerc[5] = cummlatPer
statsBase[5] = cummlatBase
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q5] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C5] = cummlatBase
## Double check that no value is missing.
    if Q25_seen == 0 and Q30_seen != 0:
        Q25_seen = 1
        statsPerc[25] = statsPerc[30]
        statsBase[25] = statsBase[30]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q25] = statsPerc[25]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C25] = statsBase[25]
    if Q20_seen == 0 and Q25_seen != 0:
        Q20_seen = 1
        statsPerc[20] = statsPerc[25]
        statsBase[20] = statsBase[25]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q20] = statsPerc[20]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C20] = statsBase[20]
    if Q15_seen == 0 and Q20_seen != 0:
        Q15_seen = 1
        statsPerc[15] = statsPerc[20]
        statsBase[15] = statsBase[20]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q15] = statsPerc[15]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C15] = statsBase[15]
    if Q10_seen == 0 and Q15_seen != 0:
        Q10_seen = 1
        statsPerc[10] = statsPerc[15]
        statsBase[10] = statsBase[15]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q10] = statsPerc[10]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C10] = statsBase[10]
    if Q5_seen == 0 and Q10_seen != 0:
        Q5_seen = 1
        statsPerc[5] = statsPerc[10]
        statsBase[5] = statsBase[10]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_Q5] = statsPerc[5]
        dataToRecordDict[ReadqcStats.ILLUMINA_BASE_C5] = statsBase[5]
if Q30_seen == 0:
log.error("Q30 is 0. Base quality values are ZERO.")
log.debug("Q and C values: %s" % (dataToRecordDict))
return RQCExitCodes.JGI_SUCCESS
"""
Title : q20_score
Function : this method returns Q20 using a qrpt file as input
Usage : JGI_QC_Utility::qc20_score($qrpt)
Args : $_[0] : qrpt file.
Returns : a number of Q20 score
Comments :
"""
# def q20_score(qrpt, log):
# log.debug("qrpt file %s" % (qrpt))
#
# q20 = None
# num = 0
#
# if os.path.isfile(qrpt):
# with open(qrpt, "r") as qrptFH:
# for l in qrptFH:
# num += 1
#
# if num == 1:
# continue
#
# ##############
# ## Old format
# ## READ1.qrpt
# ## column count min max sum mean Q1 med Q3 IQR lW rW A_Count C_Count G_Count T_Count N_Count Max_count
# ## 1 378701 2 34 12447306 32.87 31 34 34 3 27 34 108573 83917 81999 104127 85 378701
# ## 2 378701 2 34 12515957 33.05 33 34 34 1 32 34 112178 83555 84449 98519 0 378701
# ## 3 378701 2 34 12519460 33.06 33 34 34 1 32 34 104668 72341 80992 120700 0 378701
# ## 4 378701 2 37 13807944 36.46 37 37 37 0 37 37 96935 95322 83958 102440 46 378701
# ## 5 378701 2 37 13790443 36.42 37 37 37 0 37 37 114586 68297 78020 117740 58 378701
# ##
# ## or
# ##
# ## READ2.qrpt
# ## column count min max sum mean Q1 med Q3 IQR lW rW A_Count C_Count G_Count T_Count N_Count Max_count
# ## 1 378701 2 34 8875097 23.44 25 26 28 3 21 32 106904 84046 81795 105956 0 378701
# ## 2 378701 2 34 6543224 17.28 15 16 26 11 2 34 107573 77148 97953 88998 7029 378701
# ## 3 378701 2 34 7131741 18.83 16 16 26 10 2 34 96452 83003 107891 91355 0 378701
# ## 4 378701 2 37 9686653 25.58 19 32 33 14 2 37 97835 78304 87944 114618 0 378701
# ## 5 378701 2 37 10208226 26.96 25 33 35 10 10 37 98021 90611 89040 101029 0 378701
#
# pos = None
# mean = None
# t = l.split("\t")
# assert len(t) > 6
# pos = int(t[0])
# mean = float(t[5])
#
# if mean and pos:
# if mean < 20:
# return pos - 1
# else:
# q20 = pos
#
# else:
# log.error("- qhist file not found: %s" % (qrpt))
# return None
#
#
# return q20
def q20_score_new(bqHist, readNum, log):
log.debug("q20_score_new(): bqHist file = %s" % (bqHist))
q20 = None
if os.path.isfile(bqHist):
with open(bqHist, "r") as qrptFH:
for l in qrptFH:
if l.startswith('#'):
continue
## New data
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
##BaseNum count_1 min_1 max_1 mean_1 Q1_1 med_1 Q3_1 LW_1 RW_1 count_2 min_2 max_2 mean_2 Q1_2 med_2 Q3_2 LW_2 RW_2
# 0 6900 0 36 33.48 33 34 34 29 36 6900 0 36 33.48 33 34 34 29 36
pos = None
mean = None
t = l.split("\t")
pos = int(t[0]) + 1
if readNum == 1:
mean = float(t[4])
else:
mean = float(t[13])
if mean and pos:
if mean < 20:
return pos - 1
else:
q20 = pos
else:
log.error("- bqHist file not found: %s" % (bqHist))
return None
return q20
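## Illustrative call (hypothetical path; not executed by this module):
##   q20 = q20_score_new("subsample/mylib.reformat.bqhist.txt", 1, log)
## q20 is the 1-based cycle just before the first cycle whose mean quality for read 1
## drops below 20 (or the last cycle if it never drops), or None if the bqhist file is missing.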
"""
Title : read_level_qual_stats
Function : Generate qual scores and plots of read level QC
Usage : read_level_qual_stats($analysis, $)
Args : 1) A reference to an JGI_Analysis object
2) current working folder wkdir/qual
Returns : JGI_SUCCESS: Illumina read level report could be successfully generated.
JGI_FAILURE: Illumina read level report could not be generated.
Comments : This function is intended to be called at the very end of the illumina read level data processing script.
"""
def read_level_qual_stats(dataToRecordDict, qhistTxtFullPath, log):
retCode = RQCExitCodes.JGI_FAILURE
cummlatPer = 0.0
Q30_seen = 0
Q25_seen = 0
Q20_seen = 0
Q15_seen = 0
Q10_seen = 0
Q5_seen = 0
if os.path.isfile(qhistTxtFullPath):
stats = {30:0, 25:0, 20:0, 15:0, 10:0, 5:0}
allLines = open(qhistTxtFullPath).readlines()
for l in allLines[::-1]:
if not l:
break
if l.startswith('#'):
continue
t = l.split()
assert len(t) == 3
qavg = int(t[0])
percent = float(t[2]) * 100.0 ## 20140826 Changed for bbtools
cummlatPer = cummlatPer + percent
cummlatPer = float("%.2f" % cummlatPer)
if qavg <= 30 and qavg > 25 and Q30_seen == 0:
Q30_seen = 1
stats[30] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q30] = cummlatPer
elif qavg <= 25 and qavg > 20 and Q25_seen == 0:
Q25_seen = 1
stats[25] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q25] = cummlatPer
elif qavg <= 20 and qavg > 15 and Q20_seen == 0:
Q20_seen = 1
stats[20] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q20] = cummlatPer
elif qavg <= 15 and qavg > 10 and Q15_seen == 0:
Q15_seen = 1
stats[15] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q15] = cummlatPer
elif qavg <= 10 and qavg > 5 and Q10_seen == 0:
Q10_seen = 1
stats[10] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q10] = cummlatPer
elif qavg <= 5 and Q5_seen == 0:
Q5_seen = 1
stats[5] = cummlatPer
dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q5] = cummlatPer
### Double check that no value is missing.
        if Q25_seen == 0 and Q30_seen != 0:
            Q25_seen = 1
            stats[25] = stats[30]
            dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q25] = stats[25]
        if Q20_seen == 0 and Q25_seen != 0:
            Q20_seen = 1
            stats[20] = stats[25]
            dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q20] = stats[20]
        if Q15_seen == 0 and Q20_seen != 0:
            Q15_seen = 1
            stats[15] = stats[20]
            dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q15] = stats[15]
        if Q10_seen == 0 and Q15_seen != 0:
            Q10_seen = 1
            stats[10] = stats[15]
            dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q10] = stats[10]
        if Q5_seen == 0 and Q10_seen != 0:
            Q5_seen = 1
            stats[5] = stats[10]
            dataToRecordDict[ReadqcStats.ILLUMINA_READ_Q5] = stats[5]
if Q30_seen == 0:
log.error("Q30 is 0 . Read quality values are ZERO.")
log.debug("Q30 %s, Q25 %s, Q20 %s, Q15 %s, Q10 %s, Q5 %s" % \
(stats[30], stats[25], stats[20], stats[15], stats[10], stats[5]))
retCode = RQCExitCodes.JGI_SUCCESS
else:
log.error("- qhist file not found: %s" % (qhistTxtFullPath))
return retCode
"""
Title : read_gc_mean
Function : This function generates the average GC content % and its standard deviation and puts them into the database.
Usage : read_gc_mean($analysis)
Args : 1) A reference to an JGI_Analysis object
Returns : JGI_SUCCESS:
JGI_FAILURE:
Comments :
"""
def read_gc_mean(histFile, log):
mean = 0.0
stdev = 0.0
retCode = RQCExitCodes.JGI_FAILURE
if os.path.isfile(histFile):
with open(histFile, "r") as histFH:
line = histFH.readline() ## we only need the first line
# Ex) #Found 1086 total values totalling 420.3971. <0.387106 +/- 0.112691>
if len(line) == 0 or not line.startswith("#Found"):
log.error("- GC content hist text file does not contains right results: %s, %s" % (histFile, line))
retCode = RQCExitCodes.JGI_FAILURE
else:
toks = line.split()
assert len(toks) == 9
mean = float(toks[6][1:]) * 100.0
stdev = float(toks[8][:-1]) * 100.0
log.debug("mean, stdev = %.2f, %.2f" % (mean, stdev))
retCode = RQCExitCodes.JGI_SUCCESS
else:
log.error("- gc hist file not found: %s" % (histFile))
return retCode, mean, stdev
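## Worked example of the parsing above (numbers are illustrative only):
##   line = "#Found 1086 total values totalling 420.3971. <0.387106 +/- 0.112691>"
##   toks[6][1:]  -> "0.387106" -> mean  = 38.71 (%)
##   toks[8][:-1] -> "0.112691" -> stdev = 11.27 (%)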
if __name__ == "__main__":
exit(0)
## EOF | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/readqc_report.py | readqc_report.py |
import sys
import os
import re
import gzip
import argparse
pipe_root = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + '/' + os.pardir)
sys.path.append(pipe_root + '/lib') # common
from common import run_command
# from db_access import jgi_connect_db
"""
These are functions for fastq file analysis
"""
def _err_return(msg, log=None):
if log:
log.error(msg)
print msg
return None
'''
For a given fastq file (zipped or unzipped), check the format by comparing the string lengths of seq and quality
'''
def check_fastq_format(fastq, log=None):
if not fastq:
return _err_return('Function read_length_from_file() requires a fastq file.', log)
if not os.path.isfile(fastq):
return _err_return('The given fastq file [%s] does not exist' % fastq, log)
if fastq.endswith(".gz"):
fh = gzip.open(fastq, 'r')
else:
fh = open(fastq, 'r')
lCnt = 0 # line counter
seq = None
lstr = None
missing = False
while 1:
lstr = fh.readline()
if not lstr:
break
lCnt += 1
seqLen = 0
qualLen = 0
idx = lCnt % 4 # line index in the 4-line group
if idx == 1:
continue
elif idx == 2:
seq = lstr.strip()
elif idx == 3:
continue
elif idx == 0:
seqLen = len(seq)
qualLen = len(lstr.strip())
if seqLen != qualLen:
missing = True
break
fh.close()
if missing:
log.error("Incorrect fastq file: missing quality score character or base character.\n seq=%s\n qual=%s" % (seq, lstr))
return -1
if lCnt % 4 != 0:
log.error("Incorrect fastq file: missing fastq record item. Number of lines in the output fastq file = %d" % (lCnt))
return -2
return lCnt
'''
Ref : JGI_Utility::read_length_from_file
Scanning stops as soon as either read 1 or read 2 reaches the sample size (ssize).
'''
def read_length_from_file(fastq, log=None, ssize=10000):
''' For a given fastq file (zipped or unzipped), reads the first *ssize* reads to compute average length
and return (avgLen1, avgLen2, isPE)
The file scanning will stop when the ssize is met by either read (1 or 2).
'''
is_pe = False # def to none paired end
readi = 0 # counter for pe read 1
readj = 0 # counter for pe read 2
leni = 0 # total length of read 1
lenj = 0 # total length of read 2
if not fastq:
return _err_return('Function read_length_from_file() requires a fastq file.', log)
if not os.path.isfile(fastq):
return _err_return('The given fastq file [%s] does not exist' % fastq, log)
if fastq.endswith(".gz"):
fh = gzip.open(fastq, 'r')
else:
fh = open(fastq, 'r')
lCnt = 0 # line counter
done = False
while not done:
lstr = fh.readline()
lstr = lstr.rstrip()
lCnt += 1
idx = lCnt % 4 # line index in the 4-line group
if idx == 1:
header = lstr
elif idx == 2:
seq = lstr
elif idx == 3:
plus = lstr
        else:  ## idx == 0: the quality line, which completes the 4-line record
            quality = lstr
if not header or not seq: # end of file
done = True
if header.find('enter') != -1 and header.find('exit') != -1: # copied from perl's logic
continue
match = re.match(r'^@([^/]+)([ |/]1)', header) # for pe read 1
aLen = len(seq.strip())
if match:
# to let read2 match up
readi += 1
if readi > ssize: #read 1 meet the max count; done regardless situations of read2 count.
readi -= 1
done = True
else:
leni += aLen
else:
match = re.match(r'^@([^/]+)([ |/]2)', header) # for pe read 2
if match:
readj += 1
if readj > ssize: #read2 meet the max count; done regardless situation of read1 count
readj -= 1
done = True
else:
lenj += aLen
if leni > 0: ### Only set is_pe to true if leni > 0, which means the 1st read in the pair was found
is_pe = True
fh.close()
# debug to be sure the max are met properly for both reads
#print('read1len=%d; read1cnt=%d; read2len=%d; read2cnt=%d' %(leni, readi, lenj, readj))
if leni > 0 and readi > 0:
leni = leni / readi
if lenj > 0 and readj > 0:
lenj = lenj / readj
return (leni, lenj, is_pe)
def get_working_read_length(fastq, log):
read_length = 0
read_length_1 = 0
read_length_2 = 0
(read_length_1, read_length_2, is_pe) = read_length_from_file(fastq, log)
if not is_pe:
log.info("It is NOT pair-ended. read_length_1 %s" % (read_length_1))
read_length = read_length_1
else:
log.info("It is pair-ended. read_length_1 %s read_length_2 %s" % (read_length_1, read_length_2))
if read_length_1 != read_length_2:
log.warning("The lengths of read 1 (" + str(read_length_1) + ") and read 2 (" + str(read_length_2) + ") are not equal")
if read_length_1 < read_length_2:
read_length = read_length_1
else:
read_length = read_length_2
#if read_length < 10:
# log.error("File name: " + fastq + ". Read length is less than 10 bps. Is this paired end? " + str(is_pe) + ". Read one length: " + str(read_length_1) + "; Read two length: " + str(read_length_2) )
# return (0, read_length_1, read_length_2, is_pe)
return (read_length, read_length_1, read_length_2, is_pe)
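## ------------------------------------------------------------------------------
## Illustrative sketch only (not called by the pipeline): shows how the helpers
## above are expected to be combined. The fastq path is a placeholder, and the
## log object is assumed to expose info()/warning() like the pipeline logger.
## ------------------------------------------------------------------------------
def _example_read_length_check(fastq, log):
    readLen, len1, len2, isPe = get_working_read_length(fastq, log)
    if readLen == 0:
        log.warning("Could not determine a usable read length for %s" % fastq)
    elif isPe:
        log.info("Paired-end data; working read length = %d (R1 = %d, R2 = %d)" % (readLen, len1, len2))
    else:
        log.info("Single-end data; working read length = %d" % readLen)
    return readLen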
def read_count(fpath):
    'Return the raw read count in a fastq file, assuming each record occupies 4 lines.'
if os.path.isfile(fpath):
cdir = os.path.dirname(__file__)
cmd = os.path.join(cdir, '../../testformat2.sh')
cmd = '%s in=%s | grep "^Reads"' % (cmd, fpath)
stdOut, stdErr, exitCode = run_command(cmd, True)
if exitCode == 0:
toks = stdOut.strip().split()
if len(toks) == 2:
return int(toks[1])
else:
print('error in %s:read_count (%s): wrong run output [%s]' % (__file__, cmd, stdOut.strip()))
else:
print('error in %s:read_count (%s): %s' % (__file__, cmd, stdErr.strip()))
return None | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/rqc_fastq.py | rqc_fastq.py |
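## Illustrative call (hypothetical path; requires the bundled testformat2.sh to be present
## two directories above this file, as assumed by read_count above):
##   n = read_count("/path/to/reads.fastq.gz")   ## -> int on success, None on failure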
from __future__ import division
import sys
import os
import argparse
import string
from collections import Counter
import random
sys.path.append(os.path.dirname(__file__))
import readSeq
import multiprocessing
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
KMER = 16
SUBLEN = 1000000
def getArgs():
    parser = argparse.ArgumentParser(description="Count occurrence of database kmers in reads", formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-k', default=KMER, dest='kmer', metavar='<int>', type=int, help="kmer length")
parser.add_argument('-l', dest='sublen', metavar='<int>', type=int, help="perform analysis on first <int> bases [RDLEN - K + 1]")
parser.add_argument('-c', default=2, dest='cutoff', metavar='<int>', type=int, help="minimum allowed coverage")
parser.add_argument('-t', default=30, dest='targetCov', metavar='<int>', type=int, help="target coverage")
parser.add_argument('-p', '--plot', dest='plot', type=str, default=None, metavar="<file>", help='plot data and save as png to <file>')
parser.add_argument('fastaFile', type=str, help='Input FASTA file(s). Text or gzip')
parser.add_argument('fastqFile', nargs='+', help='Input FASTQ file(s). Text or gzip')
args = parser.parse_args()
return args
def getMers(seq, merLen):
for i in xrange(min(len(seq) - merLen + 1,SUBLEN)):
yield seq[i:i+merLen]
complements = string.maketrans('acgtACGT', 'tgcaTGCA')
def revComp(seq):
revCompSeq = seq.translate(complements)[::-1]
return revCompSeq
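## Example of the two helpers above (values are illustrative):
##   list(getMers("ACGTAC", 4)) -> ["ACGT", "CGTA", "GTAC"]
##   revComp("AACG")            -> "CGTT"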
def tallyPositions(seq,counts,sublen=None):
readPos = 0
for mer in getMers(seq, KMER):
if mer in merCnts:
            if readPos >= len(counts):  ## grow the list if the read is longer than expected
counts.append(1)
else:
counts[readPos] += 1
readPos += 1
if sublen:
if readPos == sublen:
break
#end tallyPositions
def main():
"""
kmer based normization
call rand once
004
if avecov1|2 < mincov:
trash
elif avecov1|2 < target:
keep
elif random < target/avecov1|2:
keep
else:
trash
"""
args = getArgs()
global KMER
KMER = args.kmer
out=sys.stdout
#check to make sure input files exist
for fq in args.fastqFile:
if not os.path.exists(fq):
sys.stderr.write("ERROR: Input file '%s' does not exist. Exiting.\n" % fq)
sys.exit()
#count mers / create database
sys.stderr.write("Making k-mer database\n")
global merCnts
merCnts = Counter()
for record in readSeq.readSeq(args.fastaFile,fileType='fasta'):
for mer in getMers(record.seq, args.kmer):
sortMers = sorted([mer, revComp(mer)])
merCnts[mer] += 1
merCnts[revComp(mer)] += 1
#normalize reads
sys.stderr.write("Tallying occurrences of database kmers in reads\n")
    seqIt = readSeq.readSeq(args.fastqFile, paired=True)  ## iterate over all input fastq files in order
record1,record2 = seqIt.next()
readLen = len(record1.seq)
sys.stderr.write("Read length = %d\n" % readLen)
tallyLen = readLen - args.kmer + 1
if args.sublen:
if args.sublen > tallyLen:
sys.stderr.write("sublen (-l) must be less than readlen - k + 1 : (found reads of length %d\n" % readLen)
sys.exit(-1)
tallyLen = args.sublen
counts1 = [0 for x in range(tallyLen)]
counts2 = [0 for x in range(tallyLen)]
tallyPositions(record1.seq,counts1,tallyLen)
tallyPositions(record2.seq,counts2,tallyLen)
total_reads = 1
fqName = ""
for fq in args.fastqFile:
for record1, record2 in seqIt:
tallyPositions(record1.seq,counts1,tallyLen)
tallyPositions(record2.seq,counts2,tallyLen)
total_reads += 1
fqName += fq+" "
counts1_perc = [ 100 * float(x)/total_reads for x in counts1 ]
counts2_perc = [ 100 * float(x)/total_reads for x in counts2 ]
out.write("#pos\tread1_count\tread1_perc\tread2_count\tread2_perc\n")
for i in range(tallyLen):
out.write("%i\t%i\t%0.2f\t%i\t%0.2f\n" % (i+1,counts1[i],counts1_perc[i],counts2[i],counts2_perc[i]))
if args.plot:
sys.stderr.write("Plotting data. Saving to %s\n" % args.plot)
plt.ioff()
xcoord = range(1,tallyLen+1)
plt.plot(xcoord,counts1_perc,color="red",linewidth=1.0,linestyle="-",label="Read 1")
plt.plot(xcoord,counts2_perc,color="green",linewidth=1.0,linestyle="-",label="Read 2")
leg_loc="upper right"
max_cnt = 0
max_pos = 0
for i in range(len(counts1)):
if counts1[i] > max_cnt or counts2[i] > max_cnt:
max_cnt = counts1[i] if counts1[i] > counts2[i] else counts2[i]
max_pos = i
if max_pos > 0.5*len(counts1) :
leg_loc="upper left"
plt.legend(loc=leg_loc,prop={'size':10})
plt.xlabel("Read Position")
plt.ylabel("Percent Reads with Database k-mer")
plt.title("Occurrence of reference k-mers (k = %i) in \n%s (# reads = %d)" % (args.kmer,fqName,total_reads))
plt.savefig(args.plot)
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
pass | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/kmercount_pos.py | kmercount_pos.py |
import sys
import math
import random
import fileinput
import re
#import os
import copy
class readSeq:
def __init__(self, fileName, paired = False, fileType = 'fastq', compression = 'auto', sampleRate = None):
self.fileName = fileName
self.fileType = fileType
self.fileType = self.__detectFileFormat()
self.compression = compression
self.__readObj = self.readFile()
self.paired = paired
self.sampleRate = sampleRate
def __iter__(self):
return self
def next(self):
record = self.__readObj.next()
self.header = record['header']
self.seq = record['seq']
self.seqNum = record['seqNum']
if self.fileType == 'fastq':
self.qual = record['qual']
if self.paired is True:
self2 = copy.copy(self)
record = self.__readObj.next()
self2.header = record['header']
self2.seq = record['seq']
self2.seqNum = record['seqNum']
if self.fileType == 'fastq':
self2.qual = record['qual']
return self, self2
else:
return self
def __detectFileFormat(self):
if self.fileType != 'auto':
return self.fileType
if ( re.search('fasta$', self.fileName) or
re.search('fa$', self.fileName) or
re.search('fna$', self.fileName) ):
self.fileType = 'fasta'
elif ( re.search('fastq$', self.fileName) or
re.search('fq$', self.fileName) ) :
self.fileType = 'fastq'
return self.fileType
def readFile(self):
record = dict()
# allow user to specify if compression should be auto detected using file names
# specified by fileinput documentation
if self.compression == 'auto':
inFH = fileinput.FileInput(self.fileName, openhook=fileinput.hook_compressed)
else:
inFH = fileinput.FileInput(self.fileName)
#TODO RAISE exception if file doesn't exist
# read fastq
if self.fileType == 'fastq':
self.seqNum = 0
for record['header'] in inFH:
record['seqNum'] = int(math.ceil(inFH.lineno()/8.0))
record['seq'] = inFH.readline().strip()
record['r1Plus'] = inFH.readline()
record['qual'] = inFH.readline().strip()
if not re.search('^@', record['header'], flags=0):
pass
#TODO add exception for not being a fastq
record['header'] = re.sub('^@', '', record['header'].strip())
yield record
elif self.fileType == 'fasta':
record['header'] = None
record['seq'] = None
record['seqNum'] = 0
for line in inFH:
line = line.strip()
if re.search('^>', line):
line = re.sub('^>', '', line)
if record['header'] is None:
record['header'] = line
if not re.search('>', record['header']):
pass
# TODO add exception for not being fasta
if record['seq'] is not None:
record['seqNum'] = record['seqNum'] + 1
if self.sampleRate:
#subsampling of reads is desired
if random.random() < self.sampleRate:
yield record
else:
yield record
record['seq'] = None
record['header'] = line
else:
if record['seq'] is not None:
record['seq'] += line
else:
record['seq'] = line
record['seqNum'] = record['seqNum'] + 1
if self.sampleRate:
#subsampling of reads is desired
if random.random() < self.sampleRate:
yield record
else:
yield record
        try:
            inFH.close()  ## close this FileInput stream once iteration is finished
        except:
            pass
def printRecord(self):
if self.fileType == 'fastq':
print("@%s" % self.header)
print(self.seq)
print("+")
print(self.qual)
elif self.fileType == 'fasta':
print(">%s" % self.header)
print(self.seq)
if __name__ == '__main__':
for inputfile in sys.argv[1:]:
#get a pair of reads
for r1, r2 in readSeq(inputfile, paired=True):
print r1.header
print r2.header | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/readSeq.py | readSeq.py |
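## Additional usage sketch (hypothetical file name): iterate a fasta file one record at a time
##   for rec in readSeq("contigs.fasta", fileType="fasta"):
##       print rec.header, len(rec.seq)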
import os
import subprocess
import matplotlib
import numpy as np
from common import checkpoint_step
from os_utility import make_dir, change_mod, run_sh_command, rm_dir
from readqc_constants import RQCReadQcConfig, RQCContamDb, RQCReadQcReferenceDatabases, RQCReadQc
from rqc_utility import safe_basename, get_cat_cmd, localize_file, safe_dirname
from rqc_constants import RQCExitCodes
matplotlib.use("Agg") ## This needs to skip the DISPLAY env var checking
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
import mpld3
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
""" STEP1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fast_subsample_fastq_sequences
Title : fast_subsample_fastq_sequences
Function : This function subsamples the data from a specified fastq file.
Usage : fast_subsample_fastq_sequences( $seq_unit, subsampledFile, $subsamplingRate, $max_count, \$totalBaseCount, $totalReadNum, log)
Args : 1) The source fastq file.
2) The destination fastq file.
3) The percentage of read subsampling.
4) The maximum number of reads at which to stop subsampling.
5) A reference to the variable that will store the
basecount.
6) A reference to the variable that will store the
number of reads.
7) A reference to a JGI_Log object.
Returns : JGI_SUCCESS: The fastq data was successfully sampled.
JGI_FAILURE: The fastq data could not be sampled.
Comments : Pass as parameters both the subsample_rate and the read_count
in order to stop subsampling at the read_count.
The read_count can also be null, in which case the number
of reads corresponding to the percentage subsample_rate will be subsampled.
@param fastq: source fastq file (full path)
@param outputFileName: subsampled output file name (basename)
@param subsamplingRate: sample rate < 1.0
@param isStatGenOnly: boolean -> generate stats output or not
@param log
@return retCode: success or failure
@return subsampledFile: subsampled output file name (full path)
@return totalBaseCount: total #bases (to be added to readqc_stats.txt)
@return totalReadNum: total #reads (to be added to readqc_stats.txt)
@return subsampledReadNum: total #reads sampled (to be added to readqc_stats.txt)
"""
def fast_subsample_fastq_sequences(sourceFastq, outputFileName, subsamplingRate, isStatGenOnly, log):
## Tools
cdir = os.path.dirname(__file__)
bbtoolsReformatShCmd = os.path.join(cdir, '../../reformat.sh') #RQCReadQcCommands.BBTOOLS_REFORMAT_CMD
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
log.info("Sampling %s at %.2f rate", sourceFastq, subsamplingRate)
retCode = None
totalBaseCount = 0
totalReadNum = 0
subsampledReadNum = 0
bIsPaired = False
readLength = 0
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
make_dir(subsamplePath)
change_mod(subsamplePath, "0755")
subsampledFile = os.path.join(subsamplePath, outputFileName)
fileSize = os.path.getsize(sourceFastq)
log.info("Source fastq file size = %s", fileSize)
## Replaced subsampler with reformat.sh
## subsample with bbtoolsReformatShCmd:
## $ reformat.sh in=7348.8.68143.fastq out=subsample.fastq samplerate=0.01 qout=33
## - 21G == 180.399 seconds ~ 6x faster than subsample_fastq_pl
## new subampler from BBTOOLS
## without qin=33 then it uses auto detect, Illumina is phread64 but we need to convert to phred33
##reformat.sh in=7257.1.64419.CACATTGTGAG.s1.0.fastq out=temp.out samplerate=0.02 qin=33 qout=33 overwrite
## 20140820
## bhist=<file> Write a base composition histogram to file. ## Cycle Nucleotide Composition
## gchist=<file> Write a gc content histogram to file. ## Read GC, mean, std
## qhist=<file> Write a quality histogram to file. ## Average Base Position Quality
## bqhist=<file> Write a quality histogram designed for box plots. ## Average Base Position Quality Boxplot
## obqhist=<file> Write a base quality histogram to file. ## Base quality histogram; *.base_qual.stats
reformatPrefix = os.path.basename(subsampledFile).replace(".fastq", "")
reformatLogFile = os.path.join(subsamplePath, reformatPrefix + ".reformat.log")
reformatGchistFile = os.path.join(subsamplePath, reformatPrefix + ".reformat.gchist.txt") ## Read GC
reformatBhistFile = os.path.join(subsamplePath, reformatPrefix + ".reformat.bhist.txt") ## Cycle Nucleotide Composition
reformatQhistFile = os.path.join(subsamplePath, reformatPrefix + ".reformat.qhist.txt") ## Average Base Position Quality
reformatBqhistFile = os.path.join(subsamplePath, reformatPrefix + ".reformat.bqhist.txt") ## Average Base Position Quality Boxplot
reformatObqhistFile = os.path.join(subsamplePath, reformatPrefix + ".reformat.obqhist.txt") ## Base quality histogram
if not isStatGenOnly: ## if subsampling for blast, do not generate the stat files
subsampleCmd = "%s in=%s out=%s samplerate=%s qin=33 qout=33 ow=t > %s 2>&1 " % \
(bbtoolsReformatShCmd, sourceFastq, subsampledFile, subsamplingRate, reformatLogFile)
else:
subsampleCmd = "%s in=%s out=%s samplerate=%s qin=33 qout=33 ow=t gcplot=t bhist=%s qhist=%s gchist=%s gcbins=auto bqhist=%s obqhist=%s > %s 2>&1 " % \
(bbtoolsReformatShCmd, sourceFastq, subsampledFile, subsamplingRate, reformatBhistFile,
reformatQhistFile, reformatGchistFile, reformatBqhistFile, reformatObqhistFile, reformatLogFile)
_, _, exitCode = run_sh_command(subsampleCmd, True, log, True)
if exitCode == 0:
##java -ea -Xmx200m -cp /usr/common/jgi/utilities/bbtools/prod-33.18/lib/BBTools.jar jgi.ReformatReads in=7257.1.64419.CACATTGTGAG.s1.0.fastq out=temp.out samplerate=0.02 qin=33 qout=33 overwrite
##Executing jgi.ReformatReads [in=7257.1.64419.CACATTGTGAG.s1.0.fastq, out=temp.out, samplerate=0.02, qin=33, qout=33, overwrite]
##
##Unspecified format for output temp.out; defaulting to fastq.
##Input is being processed as paired
##Writing interleaved.
##Input: 6661 reads 1671911 bases
##Processed: 278 reads 69778 bases
##Output: 278 reads (4.17%) 69778 bases (4.17%)
##
##Time: 0.181 seconds.
##Reads Processed: 278 1.54k reads/sec
##Bases Processed: 69778 0.39m bases/sec
## NEW
if os.path.isfile(reformatLogFile):
with open(reformatLogFile) as STAT_FH:
for l in STAT_FH.readlines():
if l.startswith("Input:"):
toks = l.split()
totalBaseCount = int(toks[3])
totalReadNum = int(toks[1])
# elif l.startswith("Processed:") or l.startswith("Output:"):
elif l.startswith("Output:"):
toks = l.split()
subsampledReadNum = int(toks[1])
elif l.startswith("Input is being processed as"):
toks = l.split()
if toks[-1].strip() == "paired":
bIsPaired = True
log.info("Total base count of input fastq = %s", totalBaseCount)
log.info("Total num reads of input fastq = %s", totalReadNum)
log.info("Total num reads of sampled = %s", subsampledReadNum)
            readLength = int(totalBaseCount / totalReadNum) if totalReadNum > 0 else 0  ## guard against empty input
log.info("Read length = %d", readLength)
log.info("Paired = %s", bIsPaired)
if totalReadNum > 0 and subsampledReadNum > 0:
##
## TODO: deal with sequnits with small number of reads
## How to record the status in readqc.log
##
retCode = RQCExitCodes.JGI_SUCCESS
log.info("illumina_readqc_subsampling complete: output file = %s", subsampledFile)
elif totalReadNum > 0 and subsampledReadNum <= 0:
retCode = RQCExitCodes.JGI_FAILURE
log.error("illumina_readqc_subsampling failure. subsampledReadNum <= 0.")
else:
retCode = RQCExitCodes.JGI_FAILURE
log.error("illumina_readqc_subsampling failure. totalReadNum <= 0 and subsampledReadNum <= 0.")
else:
retCode = RQCExitCodes.JGI_FAILURE
log.error("illumina_readqc_subsampling failure. Can't find stat file from subsampling.")
else:
retCode = RQCExitCodes.JGI_FAILURE
log.error("illumina_readqc_subsampling failure. Failed to run bbtoolsReformatShCmd. Exit code != 0")
with open(reformatLogFile, 'r') as f:
log.error(f.read())
retCode = -2
return retCode, subsampledFile, totalBaseCount, totalReadNum, subsampledReadNum, bIsPaired, readLength
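## Illustrative call (hypothetical paths; RQCReadQcConfig.CFG["output_path"] must already be set,
## and reformat.sh must be reachable two directories above this file):
##   rc, subFastq, nBases, nReads, nSampled, isPaired, readLen = \
##       fast_subsample_fastq_sequences("/path/raw.fastq.gz", "raw.s0.01.fastq", 0.01, True, log)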
""" STEP2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
write_unique_20_mers
Title: write_unique_k_mers (k=20 or 25)
Function: Given a fastq file, finds unique 20/25 mers from the start
of the read and along a random position of the read
Usage: write_unique_20_mers(\@seq_files, $log)
Args: 1) ref to an array containing bz2 zipped fastq file path(s)
2) output directory
3) log file object
Returns: exitCode, merSamplerOutFile, pngPlotFile, htmlPlotFile
Comments: Using bbcountunique.sh's output file named merSampler.<fastq_name>.m20.e25000
create a plot png file, merSampler.<fastq_name>.m20.e25000.png
bbcountunique.sh: Generates a kmer uniqueness histogram, binned by file position.
There are 3 columns for single reads, 6 columns for paired:
count number of reads or pairs processed
r1_first percent unique 1st kmer of read 1
r1_rand percent unique random kmer of read 1
r2_first percent unique 1st kmer of read 2
r2_rand percent unique random kmer of read 2
pair percent unique concatenated kmer from read 1 and 2
@param fastq: source fastq file (full path)
@param log
@return retCode: success or failure
@return mersampler_out_file: plot data (to be added to readqc_files.txt)
@return pngPlotFile: output plot (to be added to readqc_files.txt)
@return htmlPlotFile: output d3 interactive plot (to be added to readqc_files.txt)
"""
def write_unique_20_mers(fastq, log):
## Tools
cdir = os.path.dirname(__file__)
bbcountuniqueShCmd = os.path.join(cdir, '../../bbcountunique.sh') #RQCReadQcCommands.BBCOUNTUNIQUE_SH_CMD
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
uniqMerDir = "uniqueness"
uniqMerPath = os.path.join(READ_OUTPUT_PATH, uniqMerDir)
make_dir(uniqMerPath)
change_mod(uniqMerPath, "0755")
## cmd line for merSampler
uniqMerSize = RQCReadQc.ILLUMINA_MER_SAMPLE_MER_SIZE ## 20 ==> 25 RQC-823 08102016
reportFreq = RQCReadQc.ILLUMINA_MER_SAMPLE_REPORT_FRQ ## 25000
sequnitFileName, exitCode = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## bbcountunique.sh, a new version of sampler from bbtools
## bbcountunique.sh in=$FILENAME out=out.txt percent=t count=t cumulative=t
## ex) bbcountunique.sh in=$SEQDIR/dna/$SEQFILE.fastq.gz out=7378.1.69281.CGATG-2.txt percent=t count=t cumulative=f int=f
## ex2)
## cmd: bbcountunique.sh k=20 interval=25000 in=7601.1.77813.CTTGTA.fastq.gz out=7601.1.77813.CTTGTA.merSampler.m20.e25000_2 percent=t count=t cumulative=f int=f
log.info("bbcountunique.sh started.")
merSamplerOutFile = os.path.join(uniqMerPath, sequnitFileNamePrefix + ".merSampler.m" + str(uniqMerSize) + ".e" + str(reportFreq) + "_2")
## RQC-823
## Adding shuffling before bbcountunique
## 08302016 Reverted to no shuffling
##
## shuffle.sh in=input.fastq.gz out=stdout.fq -Xmx40g | bbcountunique.sh in=stdin.fq -Xmx40g ==> not working
# shuffledFastqFile = os.path.join(uniqMerPath, sequnitFileNamePrefix + ".shuffled.fq")
# suffleCmd = "%s in=%s out=%s" % (shuffleShCmd, fastq, shuffledFastqFile)
# stdOut, stdErr, exitCode = run_sh_command(suffleCmd, True, log, True)
# if exitCode != 0:
# log.error("failed to suffle fastq for unique mer analysis.")
# return RQCExitCodes.JGI_FAILURE, None, None, None
bbcountuniqCmd = "%s in=%s out=%s k=%s interval=%s percent=t count=t cumulative=f int=f ow=t" \
% (bbcountuniqueShCmd, fastq, merSamplerOutFile, uniqMerSize, reportFreq)
_, _, exitCode = run_sh_command(bbcountuniqCmd, True, log, True)
if exitCode != 0:
log.error("Failed to sample unique %s mers by bbcountunique.sh.", uniqMerSize)
return RQCExitCodes.JGI_FAILURE, None, None, None
log.info("bbcountunique.sh completed.")
## Old plotting data
# nSeq nStartUniMer fracStartUniMer nRandUniMer fracRandUniMer
## 0 1 2 3 4
##25000 2500 0.1 9704 0.3882
## New plotting data from bbcountunique
# count first rand first_cnt rand_cnt
# 0 1 2 3 4
# 25000 66.400 76.088 16600 19022
# 50000 52.148 59.480 13037 14870
# 75000 46.592 53.444 11648 13361
# 100000 43.072 49.184 10768 12296 ...
pngPlotFile = None
htmlPlotFile = None
if os.path.isfile(merSamplerOutFile):
## sanity check
## OLD
## #nSeq nStartUniMer fracStartUniMer nRandUniMer fracRandUniMer
## ex) 25000 16594 0.6638 18986 0.7594
## 50000 29622 0.5211 33822 0.5934
## 75000 41263 0.4656 47228 0.5362
## 100000 52026 0.4305 59545 0.4927 ...
"""
2016-09-07
#count first rand first_cnt rand_cnt avg_quality perfect_prob
25000 96.480 98.636 24120 24659 30.36 80.94
50000 96.204 97.996 24051 24499 30.41 81.17
75000 95.512 97.568 23878 24392 29.99 80.06
100000 95.408 97.588 23852 24397 30.24 80.78
125000 95.176 97.240 23794 24310 30.23 80.86
"""
line = None
numData = 0
with open(merSamplerOutFile, "r") as FH:
lines = FH.readlines()
line = lines[-1] ## get the last line
numData = sum(1 for l in lines)
toks = line.split()
assert len(toks) == 7, "ERROR: merSamplerOutFile format error: %s " % (merSamplerOutFile)
if numData < 3:
log.error("Not enough data in merSamplerOutFile: %s", merSamplerOutFile)
return RQCExitCodes.JGI_SUCCESS, None, None, None
## Generating plots
rawDataMatrix = np.loadtxt(merSamplerOutFile, delimiter='\t', comments='#')
# Bryce: 2016-09-07, its 7 now. Its failed 622 pipelines ...
# assert len(rawDataMatrix[1][:]) == 5
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
## Note: no need to show all the data points
## If the number of data points > 5k, get only 5k data.
jump = 1
if len(rawDataMatrix[:, 0]) > 10000:
jump = int(len(rawDataMatrix[:, 0]) / 5000)
xData = rawDataMatrix[:, 0][0::jump]
yData = rawDataMatrix[:, 1][0::jump]
yData2 = rawDataMatrix[:, 2][0::jump]
totalReadNum = rawDataMatrix[-1, 0] ## sampled read num from the last line of the data file
assert int(totalReadNum) > 0
maxX = int(totalReadNum) * 3
p1 = ax.plot(xData, yData, 'g', marker='x', markersize=markerSize, linewidth=lineWidth, label="Starting %s Mer Uniqueness" % (str(uniqMerSize)), alpha=0.5)
p2 = ax.plot(xData, yData2, 'b', marker='x', markersize=markerSize, linewidth=lineWidth, label="Random %s Mer Uniqueness" % (str(uniqMerSize)), alpha=0.5)
## Curve-fitting
from scipy.optimize import curve_fit
## fit function: f(x)=a*log(x)+b
def fit_func(x, a, b):
return a * np.log(x) + b
fitpars, _ = curve_fit(fit_func, rawDataMatrix[:, 0], rawDataMatrix[:, 1])
fix_x = [i for i in range(25000, maxX, 25000)]
ax.plot(fix_x, fit_func(fix_x, *fitpars), 'r', linewidth=lineWidth, label="fit", alpha=0.5)
ax.set_xlabel("Read Sampled", fontsize=12, alpha=0.5)
ax.set_ylabel("Percentage Unique", fontsize=12, alpha=0.5)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
fontProp = FontProperties()
fontProp.set_size("small")
fontProp.set_family("Bitstream Vera Sans")
ax.legend(loc=1, prop=fontProp)
ax.set_xlim([0, maxX])
ax.set_ylim([0, 100])
ax.grid(color="gray", linestyle=':')
## Add tooltip
labels = ["%.2f" % i for i in rawDataMatrix[:, 1]]
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=labels))
labels = ["%.2f" % i for i in rawDataMatrix[:, 2]]
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p2[0], labels=labels))
## Create both dynamic and static plots
pngPlotFile = merSamplerOutFile + "_mer_sampler_plot.png"
plt.savefig(pngPlotFile, dpi=fig.dpi)
htmlPlotFile = merSamplerOutFile + "_mer_sampler_plot_d3.html"
mpld3.save_html(fig, htmlPlotFile)
log.info("New data file from bbcountunique: %s", merSamplerOutFile)
log.info("New png plot: %s", pngPlotFile)
log.info("New D3 plot: %s", htmlPlotFile)
else:
log.error("Cannot find merSamplerOutFile by bbcountunique.sh, %s", merSamplerOutFile)
return RQCExitCodes.JGI_FAILURE, None, None, None
##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# if os.path.isfile(merSamplerOutFile):
# line = None
# numData = 0
#
# with open(merSamplerOutFile, "r") as FH:
# line = FH.readlines()[-1] ## get the last line
# numData = len(line)
#
# ## #nSeq nStartUniMer fracStartUniMer nRandUniMer fracRandUniMer
# ## ex) 25000 16594 0.6638 18986 0.7594
# ## 50000 29622 0.5211 33822 0.5934
# ## 75000 41263 0.4656 47228 0.5362
# ## 100000 52026 0.4305 59545 0.4927 ...
# """
# 2016-09-07
# #count first rand first_cnt rand_cnt avg_quality perfect_prob
# 25000 96.480 98.636 24120 24659 30.36 80.94
# 50000 96.204 97.996 24051 24499 30.41 81.17
# 75000 95.512 97.568 23878 24392 29.99 80.06
# 100000 95.408 97.588 23852 24397 30.24 80.78
# 125000 95.176 97.240 23794 24310 30.23 80.86
# """
#
# toks = line.split()
# assert len(toks)==7, "ERROR: merSamplerOutFile format error: %s " % (merSamplerOutFile)
#
# # totalReadNum = int(toks[0])
# # log.info("Total number of reads = %s." % (totalReadNum))
# # assert totalReadNum > 0
#
# else:
# log.error("cannot find mersampler_out_file, %s" % (merSamplerOutFile))
# return RQCExitCodes.JGI_FAILURE, None, None, None
#
# if numData < 3:
# log.error("not enough data in %s" % (merSamplerOutFile))
# return RQCExitCodes.JGI_FAILURE, None, None, None
## verify that the mer sampler output file was created
# if not os.path.isfile(merSamplerOutFile):
# log.error("failed to find output file for %s Mer Uniqueness: %s" % (str(uniqMerSize), merSamplerOutFile))
# else:
# log.info("MerSampler output file successfully generated (%s)." % (merSamplerOutFile))
## verify that the mer sampler plot png file was created
if not os.path.isfile(pngPlotFile):
log.warning("Failed to find output plot png file for %s Mer Uniqueness", str(uniqMerSize))
else:
log.info("MerSampler output png file successfully generated (%s)", pngPlotFile)
if not os.path.isfile(htmlPlotFile):
log.warning("Failed to find output d3 plot html file for %s Mer Uniqueness", str(uniqMerSize))
else:
log.info("MerSampler output png file successfully generated (%s)", htmlPlotFile)
return RQCExitCodes.JGI_SUCCESS, merSamplerOutFile, pngPlotFile, htmlPlotFile
""" STEP3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_read_gc
Title: illumina_read_gc
Function: Takes path to fastq file and generates
read gc histograms (txt and png)
Usage: illumina_read_gc($fastq_path, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: None.
@param fastq: source fastq file (full path)
@param log
@return reformatGchistFile: hist text data (to be added to readqc_files.txt)
@return pngFile: output plot (to be added to readqc_files.txt)
@return htmlFile: output d3 interactive plot (to be added to readqc_files.txt)
@return meanVal: gc mean (to be added to readqc_stats.txt)
@return stdevVal: gc stdev (to be added to readqc_stats.txt)
"""
def illumina_read_gc(fastq, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatGchistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.gchist.txt") ## gc hist
log.debug("gchist file: %s", reformatGchistFile)
## Gen Average Base Position Quality plot
if not os.path.isfile(reformatGchistFile):
log.error("Gchist file not found: %s", reformatGchistFile)
return None, None, None, None, None, None, None
## File format
## #Mean 41.647
## #Median 42.000
## #Mode 42.000
## #STDev 4.541
## #GC Count
## 0.0 0
## 1.0 0
## 2.0 0
## 3.0 0
## 4.0 0
meanVal = None
medVal = None
modeVal = None
stdevVal = None
with open(reformatGchistFile, "r") as STAT_FH:
for l in STAT_FH.readlines():
if l.startswith("#Mean"):
meanVal = l.strip().split('\t')[1]
elif l.startswith("#Median"):
medVal = l.strip().split('\t')[1]
elif l.startswith("#Mode"):
modeVal = l.strip().split('\t')[1]
elif l.startswith("#STDev"):
stdevVal = l.strip().split('\t')[1]
rawDataMatrix = np.loadtxt(reformatGchistFile, comments='#', usecols=(0, 1, 2)) ## only use 3 colums: GC, Count, Cumulative
## In addition to the %GC and # reads, the cumulative read % is added.
assert len(rawDataMatrix[1][:]) == 3
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
p1 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 1], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, alpha=0.5)
ax.set_xlabel("%GC", fontsize=12, alpha=0.5)
ax.set_ylabel("Read count", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
## Add tooltip
toolTipStrReadCnt = ["Read count=%d" % i for i in rawDataMatrix[:, 1]]
toolTipStrGcPerc = ["GC percent=%.1f" % i for i in rawDataMatrix[:, 0]]
toolTipStrReadPerc = ["Read percent=%.1f" % (i * 100.0) for i in rawDataMatrix[:, 2]]
toolTipStr = ["%s, %s, %s" % (i, j, k) for (i, j, k) in
zip(toolTipStrGcPerc, toolTipStrReadCnt, toolTipStrReadPerc)]
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=toolTipStr))
pngFile = os.path.join(qualPath, sequnitFileNamePrefix + ".gchist.png")
htmlFile = os.path.join(qualPath, sequnitFileNamePrefix + ".gchist.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(pngFile, dpi=fig.dpi)
return reformatGchistFile, pngFile, htmlFile, meanVal, stdevVal, medVal, modeVal
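## Illustrative unpacking of the return value (variable names are only suggestions):
##   gcHistTxt, gcPng, gcHtml, gcMean, gcStd, gcMedian, gcMode = illumina_read_gc(fastq, log)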
""" STEP5 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
write_avg_base_quality_stats
Title: write_base_quality_stats
Function: Takes path to fastq file and generates
quality plots for each read
Usage: write_base_quality_stats($fastq_path, $analysis, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: None.
Comments: None.
@param fastq: source fastq file (full path)
@param log
@return reformatObqhistFile: output data file (to be added to readqc_files.txt)
"""
def write_avg_base_quality_stats(fastq, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatObqhistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.obqhist.txt") ## base composition histogram
log.debug("obqhist file: %s", reformatObqhistFile)
## Gen base composition histogram
if not os.path.isfile(reformatObqhistFile):
log.error("Obqhist file not found: %s", reformatObqhistFile)
return None
else:
return reformatObqhistFile
""" STEP6 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_count_q_score
Title: count_q_score
Function: Given a fastq (bz2 zipped or unzipped)
file path, creates a histogram of
the quality scores in the file
Usage: count_q_score($fastq, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: Generates a file named <fastq_path>.qhist that has
the quality score histogram.
@param fastq: source fastq file (full path)
@param log
@return reformatObqhistFile: output data file (to be added to readqc_files.txt)
@return pngFile: output plot file (to be added to readqc_files.txt)
@return htmlFile: output d3 interactive plot file (to be added to readqc_files.txt)
"""
def illumina_count_q_score(fastq, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatObqhistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.obqhist.txt") ## base composition histogram
log.debug("obqhist file: %s", reformatObqhistFile)
rawDataMatrix = np.loadtxt(reformatObqhistFile, delimiter='\t', comments='#')
assert len(rawDataMatrix[1][:]) == 3
## Qavg nrd percent
## 0 1 2
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
p1 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 2], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, alpha=0.5)
ax.set_xlabel("Average Read Quality", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction of Reads", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=list(rawDataMatrix[:, 2])))
pngFile = os.path.join(qualPath, sequnitFileNamePrefix + ".avg_read_quality_histogram.png")
htmlFile = os.path.join(qualPath, sequnitFileNamePrefix + ".avg_read_quality_histogram.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(pngFile, dpi=fig.dpi)
return reformatObqhistFile, pngFile, htmlFile
""" STEP7 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_calculate_average_quality
Title: illumina_calculate_average_quality
Function: Given a fastq (subsampled) file, calculates average quality in 21 mer windows.
Usage: illumina_calculate_average_quality($fastq_path, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: Several output files are generated in the directory that
the script is run.
1) Text output file with 21mer start position, number of mers read,
total mers, and average accuracy of the bin
2) A gnuplot png file named <fastq_name>.21mer.qual.png
The function assumes that the fastq file exists.
The 21mer qual script was written by mli
@return retCode: success or failure
@return stat_file: plot data (to be added to readqc_files.txt)
@return pngFile: output plot (to be added to readqc_files.txt)
"""
## Removed!
##def illumina_calculate_average_quality(fastq, log):
""" STEP8 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_find_common_motifs
Title: illumina_find_common_motifs
Function: Given a fastq (subsampled) file, finds most common N-string motifs.
Usage: illumina_find_common_motifs($fastq_path, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: An output file is generated in the directory
1) Text output summary file with the most common motifs and percent of total motifs.
The function assumes that the fastq file exists.
The nstutter script was written by jel
@param fastq: source fastq file (full path)
@param log
@return retCode: success or failure
@return nstutterStatFile: output stutter data file (to be added to readqc_files.txt)
"""
def illumina_find_common_motifs(fastq, log):
## Tools
cdir = os.path.dirname(__file__)
patterNFastqPlCmd = os.path.join(cdir, '../tools/patterN_fastq.pl') #RQCReadQcCommands.PATTERN_FASTQ_PL
sequnitFileName, exitCode = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
stutterDir = "stutter"
stutterPath = os.path.join(READ_OUTPUT_PATH, stutterDir)
make_dir(stutterPath)
change_mod(stutterPath, "0755")
nstutterStatFile = os.path.join(stutterPath, sequnitFileNamePrefix + ".nstutter.stat")
## ex) patterN_fastq.pl -analog -PCT 0.1 -in 7601.1.77813.CTTGTA.s0.01.fastq > 7601.1.77813.CTTGTA.s0.01.nstutter.stat ; wait;
makeStatFileCmd = "%s -analog -PCT 0.1 -in %s > %s " % (patterNFastqPlCmd, fastq, nstutterStatFile)
combinedCmd = "%s; wait; " % (makeStatFileCmd)
_, _, exitCode = run_sh_command(combinedCmd, True, log, True)
if exitCode != 0:
log.error("failed to run patterNFastqPlCmd. Exit code != 0.")
return RQCExitCodes.JGI_FAILURE, None
if os.path.isfile(nstutterStatFile):
log.info("N stutter stat file successfully created (%s)", nstutterStatFile)
else:
log.warning("Could not locate N stutter stat file %s", nstutterStatFile)
nstutterStatFile = "failed to generate"
return RQCExitCodes.JGI_SUCCESS, nstutterStatFile
""" STEP9 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_run_bwa
Title: illumina_run_bwa
Function: Given a fastq (subsampled) file path, runs bwa aligner
Reads are aligned to each other.
Usage: illumina_run_bwa($fastq_file_path, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: None.
@param fastq: source fastq file (full path)
@param log
@return retCode: success or failure
@return summary_file: output bwa summary file (to be added to readqc_files.txt)
"""
## REMOVED!
##def illumina_run_bwa(fastq, log):
def illumina_run_dedupe(fastq, log):
## Tools
cdir = os.path.dirname(__file__)
bbdedupeShCmd = os.path.join(cdir, '../../bbdedupe.sh') #RQCReadQcCommands.BBDEDUPE_SH
sequnitFileName, exitCode = safe_basename(fastq, log)
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
dupDir = "dupes"
dupPath = os.path.join(READ_OUTPUT_PATH, dupDir)
make_dir(dupPath)
change_mod(dupPath, "0755")
dedupeSummaryFile = os.path.join(dupPath, sequnitFileName + ".bwa.summary")
## dedupe.sh in=reads.fq s=0 ftr=49 ac=f int=f
xmx = "-Xmx23G"
bbdedupeShCmd = "%s %s in=%s out=null qin=33 ow=t s=0 ftr=49 ac=f int=f> %s 2>&1 " % (bbdedupeShCmd, xmx, fastq, dedupeSummaryFile)
_, _, exitCode = run_sh_command(bbdedupeShCmd, True, log, True)
if exitCode != 0:
log.error("Failed to run bbdedupeShCmd.sh")
return RQCExitCodes.JGI_FAILURE, None
return RQCExitCodes.JGI_SUCCESS, dedupeSummaryFile
""" STEP10 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_run_tagdust
Title: illumina_run_tagdust
Function: Given a fastq (subsampled) file path, runs tag dust to
find common illumina artifacts.
Usage: illumina_run_tagdust($fastq_file_path, $log)
Args: 1) path to subsampled fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: None.
@param fastq: source fastq file (full path)
@param log
@return retCode: success or failure
@return tagdust_out: output tagdust file (to be added to readqc_files.txt)
"""
## No longer needed!!
##def illumina_run_tagdust(fastq, log):
""" STEP11 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_detect_read_contam
@param fastq: source fastq file (full path)
@param firstBp: first bp length to cut for read contam detection
@param log
@return retCode: success or failure
@return outFileList: output duk stat file list (to be added to readqc_files.txt)
@return ratioResultDict: output stat value dict (to be added to readqc_stats.txt)
"""
##def illumina_detect_read_contam(fastq, log):
## REMOVED!
# # def illumina_detect_read_contam2(fastq, firstBp, log):
## REMOVED! 08302016
"""
Contam removal by seal.sh
"""
def illumina_detect_read_contam3(fastq, firstBp, log):
## Tools
cdir = os.path.dirname(__file__)
sealShCmd = os.path.join(cdir, '../../seal.sh') #RQCReadQcCommands.SEAL_SH_CMD
sequnitFileName, exitCode = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
numBadFiles = 0
ratio = 0
outFileDict = {}
ratioResultDict = {}
contamStatDict = {}
catCmd, exitCode = get_cat_cmd(fastq, log)
## TODO: remove JGI Contaminants from CONTAM_DBS
# CONTAM_DBS['artifact'] = ARTIFACT_FILE_NO_SPIKEIN
# CONTAM_DBS['artifact_50bp'] = ARTIFACT_FILE_NO_SPIKEIN ## 20131203 Added for 50bp contam
# CONTAM_DBS['DNA_spikein'] = ARTIFACT_FILE_DNA_SPIKEIN
# CONTAM_DBS['RNA_spikein'] = ARTIFACT_FILE_RNA_SPIKEIN
# CONTAM_DBS['contaminants'] = CONTAMINANTS
# CONTAM_DBS['fosmid'] = FOSMID_VECTOR
# CONTAM_DBS['mitochondrion'] = MITOCHONDRION_NCBI_REFSEQ
# CONTAM_DBS['phix'] = PHIX
# CONTAM_DBS['plastid'] = CHLOROPLAST_NCBI_REFSEQ
# CONTAM_DBS['rrna'] = GENERAL_RRNA_FILE
# CONTAM_DBS['microbes'] = MICROBES ## non-synthetic
# CONTAM_DBS['synthetic'] = SYNTHETIC
# CONTAM_DBS['adapters'] = ADAPTERS
for db in RQCContamDb.CONTAM_DBS.iterkeys():
if db == "artifact_50bp" and int(firstBp) == 20:
sealStatsFile = os.path.join(qualPath, sequnitFileNamePrefix + ".artifact_20bp.seal.stats")
else:
sealStatsFile = os.path.join(qualPath, sequnitFileNamePrefix + "." + db + ".seal.stats")
## Localization file to /scratch/rqc
# log.info("Contam DB localization started for %s", db)
# localizedDb = localize_file(RQCContamDb.CONTAM_DBS[db], log)
## 04262017 Skip localization temporarily until /scratch can be mounted
## in shifter container
# if os.environ['NERSC_HOST'] == "genepool":
# localizedDb = localize_file(RQCContamDb.CONTAM_DBS[db], log)
# else:
# localizedDb = None
#
# if localizedDb is None:
# localizedDb = RQCContamDb.CONTAM_DBS[db] ## use the orig location
# else:
# log.info("Use the localized file, %s", localizedDb)
localizedDb = RQCContamDb.CONTAM_DBS[db]
## 09112017 Manually add -Xmx23G
xmx = "-Xmx23G"
if db == "artifact_50bp":
cutCmd = "cut -c 1-%s | " % (firstBp)
cmd = "set -e; %s %s | %s %s in=stdin.fq out=null ref=%s k=22 hdist=0 stats=%s ow=t statscolumns=3 %s " % \
(catCmd, fastq, cutCmd, sealShCmd, localizedDb, sealStatsFile, xmx)
elif db == "microbes":
cmd = "%s in=%s out=null ref=%s hdist=0 mm=f mkf=0.5 ambig=random minlen=120 qtrim=rl trimq=10 stats=%s ow=t statscolumns=3 %s " % \
(sealShCmd, fastq, localizedDb, sealStatsFile, xmx)
else:
cmd = "%s in=%s out=null ref=%s k=22 hdist=0 stats=%s ow=t statscolumns=3 %s " % \
(sealShCmd, fastq, localizedDb, sealStatsFile, xmx)
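## ex) illustrative expansion of the generic seal.sh command above (paths and db names are hypothetical):
##     seal.sh in=<prefix>.fastq.gz out=null ref=/path/to/phix.fa k=22 hdist=0 \
##         stats=<qualPath>/<prefix>.phix.seal.stats ow=t statscolumns=3 -Xmx23G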
_, _, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run seal.sh cmd")
return RQCExitCodes.JGI_FAILURE, None, None, None
## Parsing seal output
if not os.path.isfile(sealStatsFile):
log.warning("Cannot open contam output file %s", sealStatsFile)
numBadFiles += 1
continue ## continue to next contam db
maxStatCount = 0
with open(sealStatsFile, "r") as sealFH:
for line in sealFH:
line = line.strip()
if line.find("#Matched") != -1:
## ex) #Matched 1123123 77.31231%
toks = line.split()
assert len(toks) == 3
ratio = toks[-1].replace('%', '')
## contamination stat
if not line.startswith('#'):
t = line.rstrip().split('\t')
# contamStatDict["%s:%s" % (db, t[0])] = t[2].replace('%', '')
if maxStatCount < 10 and t[0].startswith("gi|"):
# contamStatDict["contam:%s:%s" % (db, t[0])] = t[2].replace('%', '')
contamStatDict["contam:%s:%s" % (db, "|".join(t[0].split('|')[:2]))] = t[2].replace('%', '') ## save only gi part (RQC-906)
maxStatCount += 1
## RQC-743
if db == "artifact_50bp" and int(firstBp) == 20:
db = "artifact_20bp"
outFileDict[db + ".seal.stats"] = sealStatsFile
log.debug("Contam db and matched ratio: %s = %f", RQCContamDb.CONTAM_KEYS[db], float(ratio))
ratioResultDict[RQCContamDb.CONTAM_KEYS[db]] = float(ratio)
if numBadFiles:
log.info("Number of bad I/O cases = %s", numBadFiles)
return RQCExitCodes.JGI_FAILURE, None, None, None
return RQCExitCodes.JGI_SUCCESS, outFileDict, ratioResultDict, contamStatDict
""" STEP12 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_sciclone_analysis
Title: illumina_sciclone_analysis
Function: Takes path to fastq file and determines
if it is from a multiplexed run or not
Usage: illumina_sciclone_analysis($subfastq, $log)
Args: 1) fastq file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: None.
@param origFastq: source fastq file (full path)
@param isPairedEnd: pair- or single-ended
@param log
@return retCode: success or failure
@return ratioResultDict: output stat value dict (to be added to readqc_stats.txt)
@return dnaCountFile: output sam stat file (to be added to readqc_files.txt)
@return rnaCountFile: output sam stat file (to be added to readqc_files.txt)
"""
## Removed!
##def illumina_sciclone_analysis(origFastq, isPairedEnd, log, libName=None, isRna=None):
def illumina_sciclone_analysis2(origFastq, isPairedEnd, log, libName=None, isRna=None):
## detect lib is rna or not
sequnitFileName, exitCode = safe_basename(origFastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
# if libName is None and isRna is None:
# _, _, libName, isRna = get_lib_info(sequnitFileNamePrefix, log) ## seqProjectId, ncbiOrganismName not used
#
# if isRna == "N/A":
# log.error("Failed to get lib info for %s", sequnitFileNamePrefix)
# return RQCExitCodes.JGI_FAILURE, -1, -1
if isRna == '1':
isRna = True
elif isRna == '0':
isRna = False
if isRna:
log.debug("The lib is RNA (%s)", libName)
else:
log.debug("The lib is DNA (%s)", libName)
if isPairedEnd:
log.debug("It's pair-ended.")
else:
log.debug("It's single-ended.")
## output dir
sciclone_dir = "sciclone_analysis"
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sciclone_path = os.path.join(READ_OUTPUT_PATH, sciclone_dir)
## Define the subdirectory. If it exists already, remove it
if os.path.isdir(sciclone_path):
rm_dir(sciclone_path)
make_dir(sciclone_path)
change_mod(sciclone_path, "0755")
## NOTE: Save count file in analysis object
dnaCountFile = None
rnaCountFile = None
if isRna:
rnaCountFile = os.path.join(sciclone_path, sequnitFileNamePrefix + "_bbduk_sciclone_rna_count.txt")
else:
dnaCountFile = os.path.join(sciclone_path, sequnitFileNamePrefix + "_bbduk_sciclone_dna_count.txt")
cdir = os.path.dirname(__file__)
bbdukShCmd = os.path.join(cdir, '../../bbduk.sh') #RQCReadQcCommands.BBDUK_SH_CMD
bbdukRnaDb = RQCReadQcReferenceDatabases.SCICLONE_RNA2
bbdukDnaDb = RQCReadQcReferenceDatabases.SCICLONE_DNA2
cmd = None
if isRna:
## Localization file to /scratch/rqc
# log.info("Sciclone RNA ref DB localization started for %s", bbdukRnaDb)
# localizedDb = localize_file(bbdukRnaDb, log)
## 04262017 Skip localization temporarily until /scratch can be mounted
## in shifter container
# if os.environ['NERSC_HOST'] == "genepool":
# localizedDb = localize_file(bbdukRnaDb, log)
# else:
# localizedDb = None
#
# if localizedDb is None:
# localizedDb = bbdukRnaDb ## use the orig location
# else:
# log.info("Use the localized file, %s", localizedDb)
localizedDb = bbdukRnaDb ## use the orig location
## bbduk.sh in=7365.2.69553.AGTTCC.fastq.gz ref=/global/projectb/sandbox/gaag/bbtools/data/sciclone_rna.fa out=null fbm=t k=31 mbk=0 stats=sciclone2.txt
cmd = "%s in=%s ref=%s out=null fbm=t k=31 mbk=0 stats=%s statscolumns=3 " % (bbdukShCmd, origFastq, localizedDb, rnaCountFile)
else:
## Localization file to /scratch/rqc
# log.info("Sciclone DNA ref DB localization started for %s", bbdukDnaDb)
# localizedDb = localize_file(bbdukDnaDb, log)
## 04262017 Skip localization temporarily until /scratch can be mounted
## in shifter container
# if os.environ['NERSC_HOST'] == "genepool":
# localizedDb = localize_file(bbdukRnaDb, log)
# else:
# localizedDb = None
#
# if localizedDb is None:
# localizedDb = bbdukRnaDb ## use the orig location
# else:
# log.info("Use the localized file, %s", localizedDb)
localizedDb = bbdukDnaDb ## use the orig location (DNA reference)
## bbduk.sh in=7257.1.64419.CACATTGTGAG.fastq.gz ref=/global/projectb/sandbox/gaag/bbtools/data/sciclone_dna.fa out=null fbm=t k=31 mbk=0 stats=sciclone1.txt
cmd = "%s in=%s ref=%s out=null fbm=t k=31 mbk=0 stats=%s statscolumns=3 " % (bbdukShCmd, origFastq, localizedDb, dnaCountFile)
_, _, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run bbduk.sh cmd")
return RQCExitCodes.JGI_FAILURE, None, None
log.debug("rnaCountFile = %s", rnaCountFile)
log.debug("dnaCountFile = %s", dnaCountFile)
return RQCExitCodes.JGI_SUCCESS, dnaCountFile, rnaCountFile
""" STEP13 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_read_megablast
Title: illumina_read_megablast
Function: Takes path(s) to bz2 zipped or gzipped fastq file
and runs megablast against the reads.
Usage: illumina_read_megablast(\@seq_files, $subsampledFile, $read_length, $log)
Args: 1) reference to an array containing bz2 zipped or gzipped fastq file path(s);
the files should all be compressed the same way
2) subsampled fastq file
3) read length
4) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: None.
@param subsampledFile: source fastq file (full path)
@param log
@return retCode: success or failure
"""
## No longer needed. Removed.
##def illumina_read_megablast(subsampledFile, read_num_to_pass, log, blastDbPath=None):
""" STEP14 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
##def illumina_read_blastn_refseq_microbial(subsampledFile, log, blastDbPath=None):
## 12212015 sulsj REMOVED!
""" STEP14 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def illumina_read_blastn_refseq_archaea(subsampledFile, log):
## Tools
cdir = os.path.dirname(__file__)
bbtoolsReformatShCmd = os.path.join(cdir, '../../reformat.sh') #RQCReadQcCommands.BBTOOLS_REFORMAT_CMD
## verify the fastq file
if not os.path.isfile(subsampledFile):
log.error("Failed to find fastq file")
return RQCExitCodes.JGI_FAILURE
log.info("Read level contamination analysis using blastn")
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
## output dir
megablastDir = "megablast"
megablastPath = os.path.join(READ_OUTPUT_PATH, megablastDir)
make_dir(megablastPath)
change_mod(megablastPath, "0755")
## 20140929 Replaced with reformat.sh
queryFastaFileName = "reads.fa"
## reformat.sh for converting fastq to fasta
cmd = "%s in=%s out=%s qin=33 qout=33 ow=t " % (bbtoolsReformatShCmd, subsampledFile, os.path.join(megablastPath, queryFastaFileName))
_, _, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run reformat.sh to convert fastq to fasta: %s", cmd)
return RQCExitCodes.JGI_FAILURE
megablastOutputFile = None
db = "refseq.archaea"
log.info("---------------------------------------------")
log.info("Start blastn search against %s", db)
log.info("---------------------------------------------")
## final output ==> READ_OUTPUT_PATH/megablast
retCode, megablastOutputFile = run_blastplus_py(os.path.join(megablastPath, queryFastaFileName), db, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
if megablastOutputFile is None:
log.error("Failed to run blastn against %s. Ret = %s", db, retCode)
retCode = RQCExitCodes.JGI_FAILURE
elif megablastOutputFile == -143:
log.warning("Blast overtime. Skip the search against %s.", db)
retCode = -143 ## blast overtime
else:
log.info("Successfully ran blastn of reads against %s", db)
retCode = RQCExitCodes.JGI_SUCCESS
return retCode
""" STEP15 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def illumina_read_blastn_refseq_bacteria(subsampledFile, log):
## Tools
cdir = os.path.dirname(__file__)
bbtoolsReformatShCmd = os.path.join(cdir, '../../reformat.sh') #RQCReadQcCommands.BBTOOLS_REFORMAT_CMD
## verify the fastq file
if not os.path.isfile(subsampledFile):
log.error("Failed to find fastq file for blastn")
return RQCExitCodes.JGI_FAILURE
log.info("Read level contamination analysis using blastn")
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
## output dir
megablastDir = "megablast"
megablastPath = os.path.join(READ_OUTPUT_PATH, megablastDir)
make_dir(megablastPath)
change_mod(megablastPath, "0755")
## 20140929 Replaced with reformat.sh
queryFastaFileName = "reads.fa"
## reformat.sh for converting fastq to fasta
cmd = "%s in=%s out=%s qin=33 qout=33 ow=t " % (bbtoolsReformatShCmd, subsampledFile, os.path.join(megablastPath, queryFastaFileName))
_, _, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run reformat.sh to convert fastq to fasta: %s", cmd)
return RQCExitCodes.JGI_FAILURE
megablastOutputFile = None
db = "refseq.bacteria"
log.info("---------------------------------------------")
log.info("Start blastn search against %s", db)
log.info("---------------------------------------------")
## final output ==> READ_OUTPUT_PATH/megablast
retCode, megablastOutputFile = run_blastplus_py(os.path.join(megablastPath, queryFastaFileName), db, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
if megablastOutputFile is None:
log.error("Failed to run blastn against %s. Ret = %s", db, retCode)
retCode = RQCExitCodes.JGI_FAILURE
elif megablastOutputFile == -143:
log.warning("Blast overtime. Skip the search against %s.", db)
retCode = -143 ## blast overtime
else:
log.info("Successfully ran blastn of reads against %s", db)
retCode = RQCExitCodes.JGI_SUCCESS
return retCode
""" STEP16 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
def illumina_read_blastn_nt(subsampledFile, log):
## Tools
cdir = os.path.dirname(__file__)
bbtoolsReformatShCmd = os.path.join(cdir, '../../reformat.sh') #RQCReadQcCommands.BBTOOLS_REFORMAT_CMD
## verify the fastq file
if not os.path.isfile(subsampledFile):
log.error("Failed to find fastq file for blastn")
return RQCExitCodes.JGI_FAILURE
log.info("Read level contamination analysis using blastn")
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
## output dir
megablastDir = "megablast"
megablastPath = os.path.join(READ_OUTPUT_PATH, megablastDir)
make_dir(megablastPath)
change_mod(megablastPath, "0755")
## 20140929 Replaced with reformat.sh
queryFastaFileName = "reads.fa"
## reformat.sh
cmd = "%s in=%s out=%s qin=33 qout=33 ow=t " % (bbtoolsReformatShCmd, subsampledFile, os.path.join(megablastPath, queryFastaFileName))
_, _, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run reformat.sh to convert fastq to fasta: %s", cmd)
return RQCExitCodes.JGI_FAILURE
megablastOutputFile = None
db = "nt"
log.info("----------------------------------")
log.info("Start blastn search against %s", db)
log.info("----------------------------------")
## final output ==> READ_OUTPUT_PATH/megablast
retCode, megablastOutputFile = run_blastplus_py(os.path.join(megablastPath, queryFastaFileName), db, log)
if retCode != RQCExitCodes.JGI_SUCCESS:
if megablastOutputFile is None:
log.error("Failed to run blastn against %s. Ret = %s", db, retCode)
retCode = RQCExitCodes.JGI_FAILURE
elif megablastOutputFile == -143:
log.warning("Blast overtime. Skip the search against %s.", db)
retCode = -143 ## blast overtime
else:
log.info("Successfully ran blastn of reads against %s", db)
retCode = RQCExitCodes.JGI_SUCCESS
return retCode
""" STEP17 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
illumina_generate_index_sequence_detection_plot
"""
def illumina_generate_index_sequence_detection_plot(fastq, log, isMultiplexed=None): ## TO BE REMOVED!
isMultiplexed = 0
if not os.path.isfile(fastq):
log.error("Failed to find the input fastq file, %s", fastq)
return RQCExitCodes.JGI_FAILURE, None, None, None
else:
log.info("fastq file for index sequence analysis: %s", fastq)
sequnitFileName, exitCode = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
# if isMultiplexed is None:
# isMultiplexed = get_multiplex_info(sequnitFileNamePrefix, log)
#retCode = None
demultiplexStatsFile = None
demultiplexPlotDataFile = None
detectionPlotPngFile = None
storedDemultiplexStatsFile = None
detectionPlotHtml = None ## initialized here so the final return is safe when the stats file already exists
if int(isMultiplexed) == 1:
log.info("Multiplexed - start analyzing...")
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
demul_dir = "demul"
demulPath = os.path.join(READ_OUTPUT_PATH, demul_dir)
make_dir(demulPath)
change_mod(demulPath, "0755")
## This version is sorted by percent for readability of stats
demultiplexStatsFile = os.path.join(demulPath, sequnitFileNamePrefix + ".demultiplex_stats")
## This version has index column and sort by index for plot
demultiplexPlotDataFile = os.path.join(demulPath, sequnitFileNamePrefix + ".demultiplex_stats.tmp")
## This path is relative to final qual location to be stored in analysis obj.
detectionPlotPngFile = os.path.join(demulPath, sequnitFileNamePrefix + ".index_sequence_detection.png")
storedDemultiplexStatsFile = os.path.join(demulPath, sequnitFileNamePrefix + ".demultiplex_stats")
if not os.path.isfile(demultiplexStatsFile):
indexSeq = None
line = None
header = None
indexSeqCounter = {}
catCmd, exitCode = get_cat_cmd(fastq, log)
if fastq.endswith(".gz"):
catCmd = "zcat" ## pigz does not work with subprocess
seqCount = 0
try:
proc = subprocess.Popen([catCmd, fastq], bufsize=2 ** 16, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while 1:
line = proc.stdout.readline()
if not line:
break
## First line is header
line = line.strip()
header = line
## Get second line of record - sequence
line = proc.stdout.readline()
## Get third line - junk
line = proc.stdout.readline()
## Get the final line (4th line) - quality
line = proc.stdout.readline()
## Parse the header
headerFields = header.split(":")
## The last index is the index
indexSeq = headerFields[-1].strip()
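## ex) for a CASAVA 1.8+ style header (illustrative) such as
##     "@MISEQ01:42:000000000-A1B2C:1:1101:15589:1331 1:N:0:ACAGTG"
##     the last ':'-delimited field, "ACAGTG", is taken as the index sequence;
##     older header formats may store the index elsewhere and are not handled here.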
assert indexSeq
## Increment the counts and store the index
if indexSeq in indexSeqCounter:
indexSeqCounter[indexSeq] += 1
else:
indexSeqCounter[indexSeq] = 1
seqCount += 1
except Exception as e:
if log:
log.error("Exception in file reading: %s", e)
log.error("Failed to read the given fastq file [%s]", fastq)
log.error("Fastq header doesn't have the index sequence: %s", header)
log.error("Index sequence analysis is skipped!")
return RQCExitCodes.JGI_SUCCESS, None, None, None
## Open the output file handles for writing
log.info("demultiplexPlotDataFile = %s", demultiplexPlotDataFile)
log.info("detectionPlotPngFile = %s", detectionPlotPngFile)
log.info("demultiplexStatsFile = %s", demultiplexStatsFile)
log.info("storedDemultiplexStatsFile = %s", storedDemultiplexStatsFile)
plotDataFH = open(demultiplexPlotDataFile, "w")
statsFH = open(demultiplexStatsFile, "w")
## Count the total number of indexes found
numIndexesFound = len(indexSeqCounter)
## Store the data header information for printing
reportHeader = """# Demultiplexing Summary
#
# Seq unit name: %s
# Total sequences: %s
# Total indexes found: %s
# 1=indexSeq 2=index_sequence_count 3=percent_of_total
#
""" % (sequnitFileName, seqCount, numIndexesFound)
statsFH.write(reportHeader)
## Sort by value, descending
log.debug("Sort by value of indexSeqCounter")
for indexSeq in sorted(indexSeqCounter, key=indexSeqCounter.get, reverse=True):
perc = float(indexSeqCounter[indexSeq]) / float(seqCount) * 100
l = "%s\t%s\t%.6f\n" % (indexSeq, indexSeqCounter[indexSeq], perc)
statsFH.write(l)
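## ex) worked example of the percentage above: 320 index reads out of ~58.3M total
##     -> 320 / 58300000 * 100 = 0.000549 (matches the sample data shown below)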
## Sort by index and add id column for plotting
log.debug("Sort by index of indexSeqCounter")
i = 1
for indexSeq in sorted(indexSeqCounter.iterkeys()):
perc = float(indexSeqCounter[indexSeq]) / float(seqCount) * 100
l = "%s\t%s\t%s\t%.6f\n" % (i, indexSeq, indexSeqCounter[indexSeq], perc)
plotDataFH.write(l)
i += 1
plotDataFH.close()
statsFH.close()
log.debug("demultiplex plotting...")
## matplotlib plotting
# data
# Index_seq_id Index_seq indexSeqCounter percent
# 1 AAAAAAAAAAAA 320 0.000549
# 2 AAAAAAAAAAAC 16 0.000027
# 3 AAAAAAAAAAAG 8 0.000014
# 4 AAAAAAAAAACA 4 0.000007
# 5 AAAAAAAAAACG 2 0.000003
# 6 AAAAAAAAAAGA 6 0.000010
# 7 AAAAAAAAAATA 6 0.000010
# rawDataMatrix = np.loadtxt(demultiplexPlotDataFile, delimiter='\t', comments='#')
# assert len(rawDataMatrix[1][:]) == 4
## For a textfile with 4000x4000 words this is about 10 times faster than loadtxt.
## http://stackoverflow.com/questions/14985233/load-text-file-as-strings-using-numpy-loadtxt
def load_data_file(fname):
data = []
with open(fname, 'r') as FH:
lineCnt = 0
for line in FH:
if lineCnt > 10000: ## out-of-memory errors were observed with a stat file containing 9,860,976 index sequences
log.warning("Too many index sequences. Only 10000 index sequences will be used for plotting.")
break
data.append(line.replace('\n', '').split('\t'))
lineCnt += 1
return data
def column(matrix, i, opt):
if opt == "int":
return [int(row[i]) for row in matrix]
elif opt == "float":
return [float(row[i]) for row in matrix]
else:
return [row[i] for row in matrix]
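## ex) with the sample rows shown above, column(rawDataMatrix, 3, "float") returns the
##     percent column, e.g. [0.000549, 0.000027, 0.000014, ...]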
rawDataMatrix = load_data_file(demultiplexPlotDataFile)
fig, ax = plt.subplots()
markerSize = 6.0
lineWidth = 1.5
p1 = ax.plot(column(rawDataMatrix, 0, "int"), column(rawDataMatrix, 3, "float"), 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Index Sequence ID", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
## Add tooltip
labels = ["%s" % i for i in column(rawDataMatrix, 1, "str")]
## Show index_seq in the plot
for i in rawDataMatrix:
ax.text(i[0], float(i[3]) + .2, "%s" % i[1])
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=labels))
detectionPlotHtml = os.path.join(demulPath, sequnitFileNamePrefix + ".index_sequence_detection.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, detectionPlotHtml)
## Save Matplotlib plot in png format
plt.savefig(detectionPlotPngFile, dpi=fig.dpi)
if exitCode != 0:
log.error("Failed to create demulplex plot")
return RQCExitCodes.JGI_FAILURE, None, None, None
log.info("demulplex stats and plot generation completed!")
else:
log.info("Not multiplexed - skip this analysis.")
return RQCExitCodes.JGI_SUCCESS, None, None, None
if detectionPlotPngFile is not None and storedDemultiplexStatsFile is not None and os.path.isfile(
detectionPlotPngFile) and os.path.isfile(storedDemultiplexStatsFile):
return RQCExitCodes.JGI_SUCCESS, demultiplexStatsFile, detectionPlotPngFile, detectionPlotHtml
else:
return RQCExitCodes.JGI_FAILURE, None, None, None
""" STEP18 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
end_of_read_illumina_adapter_check
usage: kmercount_pos.py [-h] [-k <int>] [-c <int>] [-t <int>] [-p <file>]
fastaFile fastqFile [fastqFile ...]
Count occurrence of database kmers in reads
positional arguments:
fastaFile Input FASTA file(s). Text or gzip
fastqFile Input FASTQ file(s). Text or gzip
optional arguments:
-h, --help show this help message and exit
-k <int> kmer length (default: 16)
-c <int> minimum allowed coverage (default: 2)
-t <int> target coverage (default: 30)
-p <file>, --plot <file> plot data and save as png to <file> (default: None)
* NOTE
- RQC-383 04082014: Updated to the newest version of kmercount_pos.py (readqc ver 5.0.4)
"""
def end_of_read_illumina_adapter_check(firstSubsampledFastqFile, log):
cdir = os.path.dirname(__file__)
kmercountPosCmd = os.path.join(cdir, 'kmercount_pos.py') #RQCReadQcCommands.KMERCOUNT_POS_CMD
adapterDbName = RQCReadQcReferenceDatabases.END_OF_READ_ILLUMINA_ADAPTER_CHECK_DB
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, exitCode = safe_basename(firstSubsampledFastqFile, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
plotFile = ""
dataFile = ""
adapterCheckDir = "adapter"
adapterCheckPath = os.path.join(READ_OUTPUT_PATH, adapterCheckDir)
make_dir(adapterCheckPath)
change_mod(adapterCheckPath, "0755")
plotFile = os.path.join(adapterCheckPath, sequnitFileNamePrefix + ".end_of_read_adapter_check.png") ## ignored
dataFile = os.path.join(adapterCheckPath, sequnitFileNamePrefix + ".end_of_read_adapter_check.txt")
## Localization file to /scratch/rqc
log.info("illumina_adapter_check DB localization started for %s", adapterDbName)
# if os.environ['NERSC_HOST'] == "genepool":
# localizedDb = localize_file(adapterDbName, log)
# else:
# localizedDb = None
#
# if localizedDb is None:
# localizedDb = adapterDbName ## use the orig location
# else:
# log.info("Use the localized file, %s", localizedDb)
localizedDb = adapterDbName ## use the orig location
## ex) kmercount_pos.py --plot plot.png Artifacts.adapters_primers_only.fa subsample.fastq > counts_by_pos.txt
cmd = "%s --plot %s %s %s > %s " % (kmercountPosCmd, plotFile, localizedDb, firstSubsampledFastqFile, dataFile)
## Run cmd
_, _, exitCode = run_sh_command(cmd, True, log, True)
assert exitCode == 0
## mpld3 plots gen
rawDataMatrix = np.loadtxt(dataFile, delimiter='\t', comments='#', skiprows=0)
assert len(rawDataMatrix[1][:]) == 3 or len(rawDataMatrix[1][:]) == 5
##pos read1 read2
## 0 1 2
## This output file format was changed on 2013.06.26 (RQC-442)
##
## pos read1_count read1_perc read2_count read2_perc
##
fig, ax = plt.subplots()
markerSize = 3.5
lineWidth = 1.0
if len(rawDataMatrix[1][:]) != 5: ## support for old file
p1 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 1], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label="read1", alpha=0.5)
p2 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 2], 'g', marker='d', markersize=markerSize, linewidth=lineWidth, label="read2", alpha=0.5)
ax.set_ylabel("Read Count with Database K-mer", fontsize=12, alpha=0.5)
else:
p1 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 2], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label="read1", alpha=0.5)
p2 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 4], 'g', marker='d', markersize=markerSize, linewidth=lineWidth, label="read2", alpha=0.5)
ax.set_ylim([0, 100])
ax.set_ylabel("Percent Reads with Database K-mer", fontsize=12, alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.yaxis.set_label_coords(-0.095, 0.75)
fontProp = FontProperties()
fontProp.set_size("small")
fontProp.set_family("Bitstream Vera Sans")
ax.legend(loc=1, prop=fontProp)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=list(rawDataMatrix[:, 1])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p2[0], labels=list(rawDataMatrix[:, 2])))
pngFile = os.path.join(adapterCheckPath, sequnitFileNamePrefix + ".end_of_read_adapter_check.png")
htmlFile = os.path.join(adapterCheckPath, sequnitFileNamePrefix + ".end_of_read_adapter_check.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(pngFile, dpi=fig.dpi)
if exitCode != 0:
log.error("Failed to run kmercountPosCmd")
return RQCExitCodes.JGI_FAILURE, None, None, None
if os.path.isfile(plotFile) and os.path.isfile(dataFile):
log.info("kmercount_pos completed.")
return RQCExitCodes.JGI_SUCCESS, dataFile, pngFile, htmlFile
else:
log.error("cannot find the output files from kmercount_pos. kmercount_pos failed.")
return RQCExitCodes.JGI_FAILURE, None, None, None
""" STEP19 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
insert_size_analysis
Using bbmerge.sh from bbtools, create insert size histogram (static/interactive) plots using D3 and data file
e.g.
java -ea -Xmx200m -cp /usr/common/jgi/utilities/bbtools/prod-v32.28/lib/BBTools.jar jgi.BBMerge
in=/global/projectb/scratch/brycef/rqc-dev/staging/00/00/66/26/6626.2.48981.TTCTCC.fastq ihist=ihist.txt
Executing jgi.BBMerge [in=/global/projectb/scratch/brycef/rqc-dev/staging/00/00/66/26/6626.2.48981.TTCTCC.fastq, ihist=ihist.txt]
e.g.
bbmerge.sh in=[path-to-fastq] hist=hist.txt
- you should use the whole fastq, not just the subsampled fastq
"""
def insert_size_analysis(fastq, log):
## Tools
cdir = os.path.dirname(__file__)
bbmergeShCmd = os.path.join(cdir, '../../bbmerge.sh') #RQCReadQcCommands.BBMERGE_SH_CMD
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, exitCode = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
plotFile = ""
dataFile = ""
insertSizeOutDir = "insert_size_analysis"
insertSizeOutPath = os.path.join(READ_OUTPUT_PATH, insertSizeOutDir)
make_dir(insertSizeOutPath)
change_mod(insertSizeOutPath, "0755")
plotFile = os.path.join(insertSizeOutPath, sequnitFileNamePrefix + ".insert_size_histo.png")
htmlFile = os.path.join(insertSizeOutPath, sequnitFileNamePrefix + ".insert_size_histo.html")
dataFile = os.path.join(insertSizeOutPath, sequnitFileNamePrefix + ".insert_size_histo.txt")
## TODO
## if it's single ended
## 1. rqcfilter.sh for adapter trim
## 2. reformat.sh for getting lhist
## 3. analyze lhist.txt
## ex) bbmerge.sh in=7601.1.77813.CTTGTA.fastq.gz hist=.../insert_size_analysis/7601.1.77813.CTTGTA.insert_size_histo.txt
## reads=1000000 --> 1M reads are enough for insert size analysis
cmd = "%s in=%s hist=%s reads=1000000 " % (bbmergeShCmd, fastq, dataFile)
## Run cmd
_, stdErr, exitCode = run_sh_command(cmd, True, log, True)
if exitCode != 0:
log.error("Failed to run bbmerge_sh_cmd.")
return RQCExitCodes.JGI_FAILURE, None, None, None, None
retCode = {}
## File format
## BBMerge version 5.0
## Finished reading
## Total time: 8.410 seconds.
##
## Pairs: 1000000
## Joined: 556805 55.681%
## Ambiguous: 9665 0.967%
## No Solution: 433474 43.347%
## Too Short: 56 0.006%
## Avg Insert: 234.6
## Standard Deviation: 33.9
## Mode: 250
##
## Insert range: 26 - 290
## 90th percentile: 277
## 75th percentile: 262
## 50th percentile: 238
## 25th percentile: 211
## 10th percentile: 188
for l in stdErr.split('\n'):
toks = l.split()
if l.startswith("Total time"):
retCode["total_time"] = toks[2]
elif l.startswith("Reads"):
retCode["num_reads"] = toks[1]
elif l.startswith("Pairs"):
retCode["num_reads"] = toks[1]
elif l.startswith("Joined"):
retCode["joined_num"] = toks[1]
retCode["joined_perc"] = toks[2]
elif l.startswith("Ambiguous"):
retCode["ambiguous_num"] = toks[1]
retCode["ambiguous_perc"] = toks[2]
elif l.startswith("No Solution"):
retCode["no_solution_num"] = toks[2]
retCode["no_solution_perc"] = toks[3]
elif l.startswith("Too Short"):
retCode["too_short_num"] = toks[2]
retCode["too_short_perc"] = toks[3]
elif l.startswith("Avg Insert"):
retCode["avg_insert"] = toks[2]
elif l.startswith("Standard Deviation"):
retCode["std_insert"] = toks[2]
elif l.startswith("Mode"):
retCode["mode_insert"] = toks[1]
elif l.startswith("Insert range"):
retCode["insert_range_start"] = toks[2]
retCode["insert_range_end"] = toks[4]
elif l.startswith("90th"):
retCode["perc_90th"] = toks[2]
elif l.startswith("50th"):
retCode["perc_50th"] = toks[2]
elif l.startswith("10th"):
retCode["perc_10th"] = toks[2]
elif l.startswith("75th"):
retCode["perc_75th"] = toks[2]
elif l.startswith("25th"):
retCode["perc_25th"] = toks[2]
log.debug("Insert size stats: %s", str(retCode))
## -------------------------------------------------------------------------
## plotting
rawDataMatrix = np.loadtxt(dataFile, delimiter='\t', comments='#')
## File format
## loc val
## 1 11
try:
rawDataX = rawDataMatrix[:, 0]
rawDataY = rawDataMatrix[:, 1]
except IndexError:
log.info("No solution from bbmerge.")
return RQCExitCodes.JGI_SUCCESS, dataFile, None, None, retCode
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
p1 = ax.plot(rawDataX, rawDataY, 'r', marker='o', markersize=markerSize, linewidth=lineWidth, alpha=0.5)
ax.set_xlabel("Insert Size", fontsize=12, alpha=0.5)
ax.set_ylabel("Count", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=list(rawDataY)))
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(plotFile, dpi=fig.dpi)
## Checking outputs
if os.path.isfile(plotFile) and os.path.isfile(dataFile) and os.path.isfile(htmlFile):
log.info("insert_size_analysis completed.")
return RQCExitCodes.JGI_SUCCESS, dataFile, plotFile, htmlFile, retCode
else:
log.error("cannot find the output files. insert_size_analysis failed.")
return RQCExitCodes.JGI_FAILURE, None, None, None, None
""" STEP20 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gc_divergence_analysis
"""
def gc_divergence_analysis(fastq, bIsPaired, srcDir, log):
## Tools
# rscriptCmd = RQCReadQcCommands.RSCRIPT_CMD
READ_FILES_FILE = RQCReadQcConfig.CFG["files_file"]
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, exitCode = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
## get the bhist.txt produced by reformat.sh
reformatBhistFile = None
with open(READ_FILES_FILE, "r") as FFH:
for l in FFH.readlines():
if l.startswith("read base count text 1"):
reformatBhistFile = l.split("=")[1].strip()
assert os.path.isfile(reformatBhistFile), "reformatBhistFile does not exist: %s" % (l)
break
assert reformatBhistFile, "ERROR: reformatBhistFile cannot be found in %s." % (READ_FILES_FILE)
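## ex) the matching entry in readqc_files.txt is assumed to look like (path illustrative):
##     read base count text 1 = <output_path>/subsample/<prefix>.reformat.bhist.txt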
log.debug("gc_divergence_analysis(): bhist file = %s", reformatBhistFile)
gcDivergenceTransformedFile = os.path.join(qualPath, sequnitFileNamePrefix + ".gc.divergence.transformed.csv") ## gc divergence transformed csv
gcDivergenceTransformedPlot = os.path.join(qualPath, sequnitFileNamePrefix + ".gc.divergence.transformed.png") ## gc divergence transformed plot
gcDivergenceCoefficientsFile = os.path.join(qualPath, sequnitFileNamePrefix + ".gc.divergence.coefficients.csv") ## gc divergence coefficients csv
## Check base composition histogram
if not os.path.isfile(reformatBhistFile):
log.error("Bhist file not found: %s", reformatBhistFile)
return RQCExitCodes.JGI_FAILURE, None, None, None, None
##
## Need R/3.1.2 because library(dplyr) and library(tidyr) are only installed for the R/3.1.2 version
##
## Transform bhist output from reformat.sh to csv
if bIsPaired:
# transformCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; format_signal_data --input %s --output %s --read both --type composition " %\
transformCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; format_signal_data --input %s --output %s --read both --type composition " %\
(reformatBhistFile, gcDivergenceTransformedFile)
# cmd = " ".join(["module load R; Rscript", "--vanilla", os.path.join(SRCDIR, "tools", "jgi-fastq-signal-processing", "format_signal_data"), ])
# transformCmd = "module unload R; module load R; module load R/3.3.2; Rscript --vanilla %s --input %s --output %s --read both --type composition" %\
if os.environ['NERSC_HOST'] in ("denovo", "cori"):
transformCmd = "Rscript --vanilla %s --input %s --output %s --read both --type composition" %\
(os.path.join(srcDir, "tools/jgi-fastq-signal-processing/bin", "format_signal_data"), reformatBhistFile, gcDivergenceTransformedFile)
else:
# transformCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; format_signal_data --input %s --output %s --read 1 --type composition " %\
transformCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; format_signal_data --input %s --output %s --read 1 --type composition " %\
(reformatBhistFile, gcDivergenceTransformedFile)
# transformCmd = "module unload R; module load R; module load R/3.3.2; Rscript --vanilla %s --input %s --output %s --read 1 --type composition" %\
if os.environ['NERSC_HOST'] in ("denovo", "cori"):
transformCmd = "Rscript --vanilla %s --input %s --output %s --read 1 --type composition" %\
(os.path.join(srcDir, "tools/jgi-fastq-signal-processing/bin", "format_signal_data"), reformatBhistFile, gcDivergenceTransformedFile)
log.debug("Transform cmd = %s", transformCmd)
_, _, exitCode = run_sh_command(transformCmd, True, log, True)
if exitCode != 0:
log.info("Failed to run GC_DIVERGENCE_TRANSFORM.")
return -1, None, None, None, None
## Compute divergence value
coeff = []
# modelCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; model_read_signal --input %s --output %s " %\
modelCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; model_read_signal --input %s --output %s " %\
(gcDivergenceTransformedFile, gcDivergenceCoefficientsFile)
# modelCmd = "module unload python; module unload R; module load R/3.3.2; Rscript --vanilla %s --input %s --output %s" %\
if os.environ['NERSC_HOST'] in ("denovo", "cori"):
modelCmd = "Rscript --vanilla %s --input %s --output %s" %\
(os.path.join(srcDir, "tools/jgi-fastq-signal-processing/bin", "model_read_signal"), gcDivergenceTransformedFile, gcDivergenceCoefficientsFile)
log.debug("Model cmd = %s", modelCmd)
_, _, exitCode = run_sh_command(modelCmd, True, log, True)
if exitCode != 0:
log.info("Failed to run GC_DIVERGENCE_MODEL.")
return -1, None, None, None, None
## Parsing coefficients.csv
## ex)
## "read","variable","coefficient"
## "Read 1","AT",2
## "Read 1","AT+CG",2.6
## "Read 1","CG",0.6
## "Read 2","AT",1.7
## "Read 2","AT+CG",2.3
## "Read 2","CG",0.7
assert os.path.isfile(gcDivergenceCoefficientsFile), "GC divergence coefficient file not found."
with open(gcDivergenceCoefficientsFile) as COEFF_FH:
for l in COEFF_FH.readlines():
if l.startswith("\"read\""):
continue ## skip header line
toks = l.strip().split(',')
assert len(toks) == 3, "Unexpected GC divergence coefficient file format."
coeff.append({"read": toks[0], "variable": toks[1], "coefficient": toks[2]})
## Plotting
# plotCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; plot_read_signal --input %s --output %s --type composition " %\
plotCmd = "module unload R; module load R/3.2.4; module load jgi-fastq-signal-processing/2.x; plot_read_signal --input %s --output %s --type composition " %\
(gcDivergenceTransformedFile, gcDivergenceTransformedPlot)
# plotCmd = "module unload python; module unload R; module load R/3.3.2; Rscript --vanilla %s --input %s --output %s --type composition" %\
if os.environ['NERSC_HOST'] in ("denovo", "cori"):
plotCmd = "Rscript --vanilla %s --input %s --output %s --type composition" %\
(os.path.join(srcDir, "tools/jgi-fastq-signal-processing/bin", "plot_read_signal"), gcDivergenceTransformedFile, gcDivergenceTransformedPlot)
log.debug("Plot cmd = %s", plotCmd)
_, _, exitCode = run_sh_command(plotCmd, True, log, True)
if exitCode != 0:
log.info("Failed to run GC_DIVERGENCE_PLOT.")
return -1, None, None, None, None
return RQCExitCodes.JGI_SUCCESS, gcDivergenceTransformedFile, gcDivergenceTransformedPlot, gcDivergenceCoefficientsFile, coeff
""" STEP22 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cleanup_readqc
Cleaning up the ReadQC analysis directory by removing unwanted files.
@param log
@return retCode: always return success
"""
def cleanup_readqc(log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
## Purge FASTQs
cmd = "rm -f %s/%s/*.fastq " % (READ_OUTPUT_PATH, "subsample")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
cmd = "rm -f %s/%s/*.fastq " % (READ_OUTPUT_PATH, "qual")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
## Purge FASTA qual and Fasta file
cmd = "rm -f %s/%s/reads.fa " % (READ_OUTPUT_PATH, "megablast")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
cmd = "rm -f %s/%s/reads.qual " % (READ_OUTPUT_PATH, "megablast")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
## Delete Sciclone files
cmd = "rm -f %s/%s/*.fastq " % (READ_OUTPUT_PATH, "sciclone_analysis")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
cmd = "rm -f %s/%s/*.fq " % (READ_OUTPUT_PATH, "sciclone_analysis")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
cmd = "rm -f %s/%s/*.sam " % (READ_OUTPUT_PATH, "sciclone_analysis")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
cmd = "rm -f %s/%s/*.sai " % (READ_OUTPUT_PATH, "sciclone_analysis")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
## Purge Blast files
cmd = "rm -f %s/%s/megablast*v*JFfTIT " % (READ_OUTPUT_PATH, "megablast")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
## purge megablast.reads.fa.v.nt.FmLD2a10p90E30JFfTITW45; megablast.reads.fa.v.refseq.microbial.FmLD2a10p90E30JFfTITW45
cmd = "rm -f %s/%s/megablast*v*FfTITW45 " % (READ_OUTPUT_PATH, "megablast")
_, _, exitCode = run_sh_command(cmd, True, log)
if exitCode != 0:
log.error("Failed to execute %s; may be already purged.", cmd)
return RQCExitCodes.JGI_SUCCESS
## ===========================================================================================================================
""" For NEW STEP4
QC plot generation using the outputs from reformat.sh
"""
def gen_average_base_position_quality_plot(fastq, bIsPaired, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatQhistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.qhist.txt") ## Average Base Position Quality
log.debug("qhist file: %s", reformatQhistFile)
## Gen Average Base Position Quality Plot
if not os.path.isfile(reformatQhistFile):
log.error("Qhist file not found: %s", reformatQhistFile)
return None, None, None
## New data format
## Load data from txt
rawDataMatrix = np.loadtxt(reformatQhistFile, delimiter='\t', comments='#', skiprows=0)
assert len(rawDataMatrix[1][:]) == 5 or len(rawDataMatrix[1][:]) == 3
## New data (paired)
# BaseNum Read1_linear Read1_log Read2_linear Read2_log
# 1 33.469 30.347 32.459 29.127
# 2 33.600 32.236 32.663 29.532
# 3 33.377 30.759 32.768 29.719
fig, ax = plt.subplots()
markerSize = 3.5
lineWidth = 1.0
p1 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 1], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='read1')
if bIsPaired:
p2 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 3], 'g', marker='d', markersize=markerSize, linewidth=lineWidth, label='read2')
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Average Quality Score", fontsize=12, alpha=0.5)
ax.set_ylim([0, 45])
fontProp = FontProperties()
fontProp.set_size("small")
fontProp.set_family("Bitstream Vera Sans")
ax.legend(loc=1, prop=fontProp)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=list(rawDataMatrix[:, 1])))
if bIsPaired:
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p2[0], labels=list(rawDataMatrix[:, 3])))
pngFile = os.path.join(qualPath, sequnitFileNamePrefix + ".r1_r2_baseposqual.png")
htmlFile = os.path.join(qualPath, sequnitFileNamePrefix + ".r1_r2_baseposqual.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(pngFile, dpi=fig.dpi)
return reformatQhistFile, pngFile, htmlFile
def gen_average_base_position_quality_boxplot(fastq, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatBqhistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.bqhist.txt") ## Average Base Position Quality Boxplot
log.debug("qhist file: %s", reformatBqhistFile)
## Gen Average Base Position Quality Boxplot
if not os.path.isfile(reformatBqhistFile):
log.error("Bqhist file not found: %s", reformatBqhistFile)
return None, None, None, None, None
## New data format (paired)
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
##BaseNum count_1 min_1 max_1 mean_1 Q1_1 med_1 Q3_1 LW_1 RW_1 count_2 min_2 max_2 mean_2 Q1_2 med_2 Q3_2 LW_2 RW_2
# 0 6900 0 36 33.48 33 34 34 29 36 6900 0 36 33.48 33 34 34 29 36
rawDataMatrix = np.loadtxt(reformatBqhistFile, delimiter='\t', comments='#', skiprows=0)
assert len(rawDataMatrix[1][:]) == 19 or len(rawDataMatrix[1][:]) == 10
bIsPaired = True
if len(rawDataMatrix[1][:]) == 10:
bIsPaired = False
## create data for boxplot
boxplot_data_r1 = []
boxplot_data_r2 = []
for i in rawDataMatrix:
idx = int(i[0]) - 1 ## read base loc
spread = [rawDataMatrix[idx, 5], rawDataMatrix[idx, 7]] # Q1 ~ Q3
center = [rawDataMatrix[idx, 6]] # median
flier_high = [rawDataMatrix[idx, 8]] # whisker lW 2%
flier_low = [rawDataMatrix[idx, 9]] # whisker rW 98%
boxplot_data_r1.append(np.concatenate((spread, center, flier_high, flier_low), 0))
if bIsPaired:
spread = [rawDataMatrix[idx, 14], rawDataMatrix[idx, 16]] # Q1 ~ Q3
center = [rawDataMatrix[idx, 15]] # median
flier_high = [rawDataMatrix[idx, 17]] # whisker lW 2%
flier_low = [rawDataMatrix[idx, 18]] # whisker rW 98%
boxplot_data_r2.append(np.concatenate((spread, center, flier_high, flier_low), 0))
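## Worked example of how one bqhist row maps to boxplot stats, using the sample row above:
## Q1=33 and Q3=34 form the spread, the median 34 is the center, and LW=29 / RW=36 become the
## whiskers; for read 2 the same five values are taken from columns 14-18.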
fig, ax = plt.subplots()
ax.boxplot(boxplot_data_r1)
plt.subplots_adjust(left=0.06, right=0.9, top=0.9, bottom=0.1)
F = plt.gcf()
## How to get the current size?
## DPI = F.get_dpi() ## = 80
## DefaultSize = F.get_size_inches() ## = (8, 6)
F.set_size_inches(18, 6) ## plot size (w, h)
ax.set_xlabel("Read Position", fontsize=11, alpha=0.5)
ax.set_ylabel("Quality Score (Solexa Scale: 40=Highest, -15=Lowest)", fontsize=11, alpha=0.5)
ax.set_ylim([-20, 45])
ax.yaxis.grid(True, linestyle=':', which="major")
majorLocator_x = MultipleLocator(5)
majorFormatter = FormatStrFormatter("%d")
minorLocator = MultipleLocator(1)
ax.xaxis.set_major_locator(majorLocator_x)
ax.xaxis.set_major_formatter(majorFormatter)
ax.xaxis.set_minor_locator(minorLocator)
majorLocator_y = MultipleLocator(5)
minorLocator = MultipleLocator(1)
ax.yaxis.set_major_locator(majorLocator_y)
ax.yaxis.set_minor_locator(minorLocator)
pngFileR1 = os.path.join(qualPath, sequnitFileNamePrefix + ".r1_average_base_position_quality_boxplot.png")
htmlFileR1 = os.path.join(qualPath, sequnitFileNamePrefix + ".r1_average_base_position_quality_boxplot.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFileR1)
## Save Matplotlib plot in png format
plt.savefig(pngFileR1, dpi=fig.dpi)
plotPngPlotFileR2 = None
plotHtmlPlotFileR2 = None
if bIsPaired:
fig, ax = plt.subplots()
ax.boxplot(boxplot_data_r2)
plt.subplots_adjust(left=0.06, right=0.9, top=0.9, bottom=0.1)
F = plt.gcf()
## How to get the current size?
## DPI = F.get_dpi() ## = 80
## DefaultSize = F.get_size_inches() ## = (8, 6)
F.set_size_inches(18, 6) ## plot size (w, h)
ax.set_xlabel("Read Position", fontsize=11, alpha=0.5)
ax.set_ylabel("Quality Score (Solexa Scale: 40=Highest, -15=Lowest)", fontsize=11, alpha=0.5)
ax.set_ylim([-20, 45])
ax.yaxis.grid(True, linestyle=':', which='major')
majorLocator_x = MultipleLocator(5)
majorFormatter = FormatStrFormatter('%d')
minorLocator = MultipleLocator(1)
ax.xaxis.set_major_locator(majorLocator_x)
ax.xaxis.set_major_formatter(majorFormatter)
ax.xaxis.set_minor_locator(minorLocator)
majorLocator_y = MultipleLocator(5)
minorLocator = MultipleLocator(1)
ax.yaxis.set_major_locator(majorLocator_y)
ax.yaxis.set_minor_locator(minorLocator)
plotPngPlotFileR2 = os.path.join(qualPath, sequnitFileNamePrefix + ".r2_average_base_position_quality_boxplot.png")
plotHtmlPlotFileR2 = os.path.join(qualPath, sequnitFileNamePrefix + ".r2_average_base_position_quality_boxplot.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, plotHtmlPlotFileR2)
## Save Matplotlib plot in png format
plt.savefig(plotPngPlotFileR2, dpi=fig.dpi)
return reformatBqhistFile, pngFileR1, plotPngPlotFileR2, htmlFileR1, plotHtmlPlotFileR2
def gen_cycle_nucleotide_composition_plot(fastq, readLength, isPairedEnd, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatBhistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.bhist.txt") ## base composition histogram
log.debug("gen_cycle_nucleotide_composition_plot(): bhist file = %s", reformatBhistFile)
## Gen base composition histogram
if not os.path.isfile(reformatBhistFile):
log.error("Bhist file not found: %s", reformatBhistFile)
return None, None, None, None, None
## data
# 0 1 2 3 4 5
##Pos A C G T N
# 0 0.15111 0.26714 0.51707 0.06412 0.00056
# 1 0.20822 0.20773 0.25543 0.32795 0.00068
rawDataMatrix = np.loadtxt(reformatBhistFile, delimiter='\t', comments='#')
assert len(rawDataMatrix[1][:]) == 6
if isPairedEnd:
rawDataR1 = rawDataMatrix[:readLength - 1][:]
rawDataR2 = rawDataMatrix[readLength:][:]
## r1 ------------------------------------------------------------------
fig, ax = plt.subplots()
markerSize = 2.5
lineWidth = 1.0
p2 = ax.plot(rawDataR1[:, 0], rawDataR1[:, 1], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='A', alpha=0.5)
p3 = ax.plot(rawDataR1[:, 0], rawDataR1[:, 4], 'g', marker='s', markersize=markerSize, linewidth=lineWidth, label='T', alpha=0.5)
p4 = ax.plot(rawDataR1[:, 0], rawDataR1[:, 3], 'b', marker='*', markersize=markerSize, linewidth=lineWidth, label='G', alpha=0.5)
p5 = ax.plot(rawDataR1[:, 0], rawDataR1[:, 2], 'm', marker='d', markersize=markerSize, linewidth=lineWidth, label='C', alpha=0.5)
p6 = ax.plot(rawDataR1[:, 0], rawDataR1[:, 5], 'c', marker='v', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
fontProp = FontProperties()
fontProp.set_size("small")
fontProp.set_family("Bitstream Vera Sans")
ax.legend(loc=1, prop=fontProp)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p2[0], labels=list(rawDataR1[:, 1])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p3[0], labels=list(rawDataR1[:, 4])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p4[0], labels=list(rawDataR1[:, 3])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p5[0], labels=list(rawDataR1[:, 2])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p6[0], labels=list(rawDataR1[:, 5])))
pngFileR1 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_nucl_composition_r1.png")
htmlFileR1 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_nucl_composition_r1.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFileR1)
## Save Matplotlib plot in png format
plt.savefig(pngFileR1, dpi=fig.dpi)
## r2 ------------------------------------------------------------------
fig, ax = plt.subplots()
markerSize = 2.5
lineWidth = 1.0
p2 = ax.plot([(x - readLength) for x in rawDataR2[:, 0]], rawDataR2[:, 1], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='A', alpha=0.5)
p3 = ax.plot([(x - readLength) for x in rawDataR2[:, 0]], rawDataR2[:, 4], 'g', marker='s', markersize=markerSize, linewidth=lineWidth, label='T', alpha=0.5)
p4 = ax.plot([(x - readLength) for x in rawDataR2[:, 0]], rawDataR2[:, 3], 'b', marker='*', markersize=markerSize, linewidth=lineWidth, label='G', alpha=0.5)
p5 = ax.plot([(x - readLength) for x in rawDataR2[:, 0]], rawDataR2[:, 2], 'm', marker='d', markersize=markerSize, linewidth=lineWidth, label='C', alpha=0.5)
p6 = ax.plot([(x - readLength) for x in rawDataR2[:, 0]], rawDataR2[:, 5], 'c', marker='v', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
fontProp = FontProperties()
fontProp.set_size("small")
fontProp.set_family("Bitstream Vera Sans")
ax.legend(loc=1, prop=fontProp)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p2[0], labels=list(rawDataR2[:, 1])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p3[0], labels=list(rawDataR2[:, 4])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p4[0], labels=list(rawDataR2[:, 3])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p5[0], labels=list(rawDataR2[:, 2])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p6[0], labels=list(rawDataR2[:, 5])))
plotPngPlotFileR2 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_nucl_composition_r2.png")
plotHtmlPlotFileR2 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_nucl_composition_r2.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, plotHtmlPlotFileR2)
## Save Matplotlib plot in png format
plt.savefig(plotPngPlotFileR2, dpi=fig.dpi)
return reformatBhistFile, pngFileR1, htmlFileR1, plotPngPlotFileR2, plotHtmlPlotFileR2
else:
fig, ax = plt.subplots()
markerSize = 2.5
lineWidth = 1.0
p2 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 1], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='A', alpha=0.5)
p3 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 4], 'g', marker='s', markersize=markerSize, linewidth=lineWidth, label='T', alpha=0.5)
p4 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 3], 'b', marker='*', markersize=markerSize, linewidth=lineWidth, label='G', alpha=0.5)
p5 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 2], 'm', marker='d', markersize=markerSize, linewidth=lineWidth, label='C', alpha=0.5)
p6 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 5], 'c', marker='v', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
fontProp = FontProperties()
fontProp.set_size("small")
fontProp.set_family("Bitstream Vera Sans")
ax.legend(loc=1, prop=fontProp)
ax.grid(color="gray", linestyle=':')
## Add tooltip
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p2[0], labels=list(rawDataMatrix[:, 1])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p3[0], labels=list(rawDataMatrix[:, 4])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p4[0], labels=list(rawDataMatrix[:, 3])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p5[0], labels=list(rawDataMatrix[:, 2])))
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p6[0], labels=list(rawDataMatrix[:, 5])))
pngFile = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_nucl_composition.png")
htmlFile = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_nucl_composition.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(pngFile, dpi=fig.dpi)
return reformatBhistFile, pngFile, htmlFile, None, None
def gen_cycle_n_base_percent_plot(fastq, readLength, isPairedEnd, log):
READ_OUTPUT_PATH = RQCReadQcConfig.CFG["output_path"]
sequnitFileName, _ = safe_basename(fastq, log)
sequnitFileNamePrefix = sequnitFileName.replace(".fastq", "").replace(".gz", "")
subsampleDir = "subsample"
subsamplePath = os.path.join(READ_OUTPUT_PATH, subsampleDir)
qualDir = "qual"
qualPath = os.path.join(READ_OUTPUT_PATH, qualDir)
make_dir(qualPath)
change_mod(qualPath, "0755")
reformatBhistFile = os.path.join(subsamplePath, sequnitFileNamePrefix + ".reformat.bhist.txt") ## base composition histogram
log.debug("gen_cycle_n_base_percent_plot(): bhist file = %s", reformatBhistFile)
## Check that the base composition histogram file exists
if not os.path.isfile(reformatBhistFile):
log.error("Bhist file not found: %s", reformatBhistFile)
return None, None, None, None, None
## data
# 0 1 2 3 4 5
##Pos A C G T N
# 0 0.15111 0.26714 0.51707 0.06412 0.00056
# 1 0.20822 0.20773 0.25543 0.32795 0.00068
rawDataMatrix = np.loadtxt(reformatBhistFile, delimiter='\t', comments='#')
assert len(rawDataMatrix[1][:]) == 6
if isPairedEnd:
rawDataR1 = rawDataMatrix[:readLength - 1][:]
rawDataR2 = rawDataMatrix[readLength:][:]
## r1 ------------------------------------------------------------------
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
p1 = ax.plot(rawDataR1[:, 0], rawDataR1[:, 5], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
ax.set_xlim([0, readLength])
## Add tooltip
labels = ["%.5f" % i for i in rawDataR1[:, 5]]
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=labels))
pngFileR1 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_n_base_percent_r1.png")
htmlFileR1 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_n_base_percent_r1.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFileR1)
## Save Matplotlib plot in png format
plt.savefig(pngFileR1, dpi=fig.dpi)
## r2 ------------------------------------------------------------------
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
p1 = ax.plot([(x - readLength) for x in rawDataR2[:, 0]], rawDataR2[:, 5], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
ax.set_xlim([0, readLength])
## Add tooltip
labels = ["%.5f" % i for i in rawDataR2[:, 5]]
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=labels))
plotPngPlotFileR2 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_n_base_percent_r2.png")
plotHtmlPlotFileR2 = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_n_base_percent_r2.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, plotHtmlPlotFileR2)
## Save Matplotlib plot in png format
plt.savefig(plotPngPlotFileR2, dpi=fig.dpi)
return reformatBhistFile, pngFileR1, htmlFileR1, plotPngPlotFileR2, plotHtmlPlotFileR2
else:
fig, ax = plt.subplots()
markerSize = 5.0
lineWidth = 1.5
p1 = ax.plot(rawDataMatrix[:, 0], rawDataMatrix[:, 5], 'r', marker='o', markersize=markerSize, linewidth=lineWidth, label='N', alpha=0.5)
ax.set_xlabel("Read Position", fontsize=12, alpha=0.5)
ax.set_ylabel("Fraction", fontsize=12, alpha=0.5)
ax.grid(color="gray", linestyle=':')
## Add tooltip
labels = ["%.5f" % i for i in rawDataMatrix[:, 5]]
mpld3.plugins.connect(fig, mpld3.plugins.PointLabelTooltip(p1[0], labels=labels))
pngFile = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_n_base_percent.png")
htmlFile = os.path.join(qualPath, sequnitFileNamePrefix + ".cycle_n_base_percent.html")
## Save D3 interactive plot in html format
mpld3.save_html(fig, htmlFile)
## Save Matplotlib plot in png format
plt.savefig(pngFile, dpi=fig.dpi)
return reformatBhistFile, pngFile, htmlFile, None, None
""" For STEP12
sciclone_sam2summary
@param sam_file: input sam file
@param count_file: stat file for writing
@return retCode: success or failure
@return count_file: output count file
"""
## Removed!
##def sciclone_sam2summary(sam_file, log):
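## Illustrative sketch only (the original sciclone_sam2summary implementation was removed).
## Counts mapped reads per reference sequence in a sam file and writes a simple
## tab-delimited summary; the function name and output layout here are assumptions,
## not the original count_file format.
def sciclone_sam2summary_sketch(sam_file, count_file, log):
    counts = {}
    with open(sam_file, "r") as SAM:
        for line in SAM:
            if line.startswith("@"):  ## skip sam header lines
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                continue
            flag, rname = int(fields[1]), fields[2]
            if flag & 0x4 or rname == "*":  ## skip unmapped reads
                continue
            counts[rname] = counts.get(rname, 0) + 1
    with open(count_file, "w") as OUT:
        for rname in sorted(counts):
            OUT.write("%s\t%d\n" % (rname, counts[rname]))
    log.debug("sciclone_sam2summary_sketch(): %d references counted", len(counts))
    return RQCExitCodes.JGI_SUCCESS, count_file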
""" For STEP12
run_rna_strandedness
Title: run_rna_strandedness
Function: Takes a sam file generated from an RNA data set and determines
the number of reads mapping to the sense and antisense strands
Usage: run_rna_strandedness($sam_file, $log)
Args: 1) sam file
2) log file object
Returns: JGI_SUCCESS
JGI_FAILURE
Comments: None.
@param sam_file
@return retCode
@return outputLogFile: log file
@return outResultDict: stat value dict (to be added to readqc_stats.txt)
"""
## Removed!
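## Illustrative sketch only (the original run_rna_strandedness implementation was removed).
## Tallies mapped reads on the forward (sense) vs reverse (antisense) strand using the
## 0x10 bit of the sam flag; the function name and dict keys are assumptions, not the
## original readqc_stats.txt key names.
def run_rna_strandedness_sketch(sam_file, log):
    outResultDict = {"sense": 0, "antisense": 0}
    with open(sam_file, "r") as SAM:
        for line in SAM:
            if line.startswith("@"):  ## skip sam header lines
                continue
            fields = line.split("\t")
            if len(fields) < 3:
                continue
            flag = int(fields[1])
            if flag & 0x4:  ## skip unmapped reads
                continue
            if flag & 0x10:
                outResultDict["antisense"] += 1
            else:
                outResultDict["sense"] += 1
    log.debug("run_rna_strandedness_sketch(): %s", outResultDict)
    return RQCExitCodes.JGI_SUCCESS, outResultDict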
""" For STEP12
separate_paired_end_fq
Title : separate_paired_end_fq
Function : Given a fastq file, this function splits the fastq into
read1 and read2 fastq files.
Usage : separate_paired_end_fq( $fastq, $read1_fq, $read2_fq, $log )
Args : 1) The name of a fastq sequence file.
2) The name of the read1 fastq output file
3) The name of the read2 fastq output file
4) A JGI_Log object.
Returns : JGI_SUCCESS: The fastq file was successfully separated.
JGI_FAILURE: The fastq file could not be separated.
Comments : For paired end fastq files only.
sulsj
- Added gzip'd fastq support
## TODO: need to write a fastq IO class
@param fastq
@param read1_outfile
@param read2_outfile
@param log
@return retCode
"""
## Removed!
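## Illustrative sketch only (the original separate_paired_end_fq implementation was removed).
## Splits an interleaved paired-end fastq (optionally gzip'd) into read1/read2 files,
## assuming strict 4-line records with read1/read2 alternating.
def separate_paired_end_fq_sketch(fastq, read1_outfile, read2_outfile, log):
    import gzip
    fh = gzip.open(fastq, "rt") if fastq.endswith(".gz") else open(fastq, "r")
    try:
        with open(read1_outfile, "w") as R1, open(read2_outfile, "w") as R2:
            while True:
                rec1 = [fh.readline() for _ in range(4)]
                rec2 = [fh.readline() for _ in range(4)]
                if not rec1[0]:  ## end of file
                    break
                if not rec2[0]:  ## odd record count; file is not properly interleaved
                    log.error("separate_paired_end_fq_sketch(): unpaired record in %s", fastq)
                    return RQCExitCodes.JGI_FAILURE
                R1.writelines(rec1)
                R2.writelines(rec2)
    finally:
        fh.close()
    return RQCExitCodes.JGI_SUCCESS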
""" For STEP12
NOTE: Borrowed from alignment.py and updated to get lib name and isRna
get_lib_info
Look up the seqProjectId, bio_name from the library_info table
- to look up the references
@param sequnitFileName: name of the seq unit in the seq_units.sequnitFileName field
@return seqProjectId
@return ncbiOrganismName
@return libraryName
@return isRna
"""
""" For STEP14 & STEP15
run_blastplus_py
Call jgi-rqc-pipeline/tools/run_blastplus.py
"""
def run_blastplus_py(queryFastaFile, db, log):
timeoutCmd = 'timeout' #RQCReadQcCommands.TIMEOUT_CMD
blastOutFileNamePrefix = None
# retCode = None
outDir, exitCode = safe_dirname(queryFastaFile, log)
queryFastaFileBaseName, exitCode = safe_basename(queryFastaFile, log)
dbFileBaseName, exitCode = safe_basename(db, log)
# runBlastnCmd = "/global/homes/s/sulsj/work/bitbucket-repo/jgi-rqc-pipeline/tools/run_blastplus.py" ## debug
# runBlastnCmd = "/global/dna/projectdirs/PI/rqc/prod/jgi-rqc-pipeline/tools/run_blastplus.py"
# runBlastnCmd = "/global/homes/s/sulsj/work/bitbucket-repo/jgi-rqc-pipeline/tools/run_blastplus_taxserver.py" ## debug
runBlastnCmd = "/global/dna/projectdirs/PI/rqc/prod/jgi-rqc-pipeline/tools/run_blastplus_taxserver.py"
blastOutFileNamePrefix = outDir + "/megablast." + queryFastaFileBaseName + ".vs." + dbFileBaseName
## Should use jigsaw/2.6.0 so that the database reference fasta file is not checked
## 07212016 Added -s to add lineage to the subject field
cmd = "%s 21600s %s -d %s -o %s -q %s -s > %s.log 2>&1 " % (timeoutCmd, runBlastnCmd, db, outDir, queryFastaFile, blastOutFileNamePrefix)
_, _, exitCode = run_sh_command(cmd, True, log, True)
## Added timeout to terminate the blast run after 6 hours
## If exitCode == 124 or exitCode == 143, the process was terminated by the timeout.
## Timeout exits with 128 plus the signal number. 143 = 128 + 15 (SIGTERM)
## Ref) http://stackoverflow.com/questions/4189136/waiting-for-a-command-to-return-in-a-bash-script
## timeout man page ==> If the command times out, and --preserve-status is not set, then exit with status 124.
##
if exitCode in (124, 143):
## BLAST timeout
## Exit with success so that the blast step can be skipped.
log.warning("##################################")
log.warning("BLAST TIMEOUT. JUST SKIP THE STEP.")
log.warning("##################################")
return RQCExitCodes.JGI_FAILURE, -143
elif exitCode != 0:
log.error("Failed to run_blastplus_py. Exit code != 0")
return RQCExitCodes.JGI_FAILURE, None
else:
log.info("run_blastplus_py complete.")
return RQCExitCodes.JGI_SUCCESS, blastOutFileNamePrefix
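## Example usage (sketch; the paths below are placeholders, not real database locations):
# retCode, blastOutPrefix = run_blastplus_py("/path/to/query.fa", "/path/to/blastdb/nt", log)
# if retCode == RQCExitCodes.JGI_SUCCESS and blastOutPrefix is not None:
#     log.info("megablast output prefix: %s", blastOutPrefix)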
"""===========================================================================
checkpoint_step_wrapper
"""
def checkpoint_step_wrapper(status):
assert RQCReadQcConfig.CFG["status_file"]
checkpoint_step(RQCReadQcConfig.CFG["status_file"], status)
"""===========================================================================
get the file content
"""
def get_analysis_file_contents(fullPath):
retCode = ""
if os.path.isfile(fullPath):
with open(fullPath, "r") as FH:
retCode = FH.readlines()
return retCode
else:
return "file not found"
'''===========================================================================
file_name_trim
'''
def file_name_trim(fname):
return fname.replace(".gz", "").replace(".fastq", "").replace(".fasta", "")
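## e.g. file_name_trim("mylib.ACGTGT.fastq.gz") returns "mylib.ACGTGT"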
## EOF | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/readqc_utils.py | readqc_utils.py |
import os
# from rqc_constants import *
from common import get_run_path
# from os_utility import get_tool_path
# RQCReadQcConfig, RQCReadQc, ReadqcStats, RQCReadQcCommands
# global configuration variables
class RQCReadQcConfig:
CFG = {}
CFG["output_path"] = ""
CFG["status_file"] = ""
CFG["files_file"] = ""
CFG["stats_file"] = ""
CFG["log_file"] = ""
CFG["no_cleanup"] = ""
CFG["run_path"] = ""
# ReadQC constants
class RQCReadQc:
ILLUMINA_SAMPLE_COUNT = 50000 ## Used in STEP13
ILLUMINA_SAMPLE_PERCENTAGE = 0.01 ## Used in STEP1 and STEP13
ILLUMINA_MER_SAMPLE_MER_SIZE = 25 ## Used in STEP2 (20 ==> 25 RQC-823 08102016)
ILLUMINA_MER_SAMPLE_REPORT_FRQ = 25000 ## Used in STEP2 for mer sampling
ILLUMINA_MIN_NUM_READS = 1000 ## Used in STEP1 to check min number of reads needed for processing
#Reference databases
class RQCReadQcReferenceDatabases:
## STEP10
ARTIFACT_FILE = "/global/dna/shared/rqc/ref_databases/qaqc/databases/illumina.artifacts/Illumina.artifacts.2012.04.fa"
## STEP12 for bbduk
SCICLONE_RNA2 = "/global/projectb/sandbox/gaag/bbtools/data/sciclone_rna.fa"
SCICLONE_DNA2 = "/global/projectb/sandbox/gaag/bbtools/data/sciclone_dna.fa"
## STEP17
END_OF_READ_ILLUMINA_ADAPTER_CHECK_DB = "/global/dna/shared/rqc/ref_databases/qaqc/databases/Artifacts.adapters_primers_only.fa"
class RQCContamDb:
# ARTIFACT_FILE_NO_SPIKEIN = '/global/dna/shared/rqc/ref_databases/qaqc/databases/illumina.artifacts/Illumina.artifacts.2012.10.no_DNA_RNA_spikeins.fa' #this db includes no DNA/RNA spike-in sequences
ARTIFACT_FILE_NO_SPIKEIN = '/global/dna/shared/rqc/ref_databases/qaqc/databases/illumina.artifacts/Illumina.artifacts.2013.12.no_DNA_RNA_spikeins.fa'
ARTIFACT_FILE_DNA_SPIKEIN = '/global/dna/shared/rqc/ref_databases/qaqc/databases/illumina.artifacts/DNA_spikeins.artifacts.2012.10.fa.bak' #this db has just DNA spike-in sequences
ARTIFACT_FILE_RNA_SPIKEIN = '/global/dna/shared/rqc/ref_databases/qaqc/databases/illumina.artifacts/RNA_spikeins.artifacts.2012.10.NoPolyA.fa' #this db has just RNA spike-in sequences
CONTAMINANTS = '/global/dna/shared/rqc/ref_databases/qaqc/databases/JGIContaminants.fa' ## '/home/blast_db2_admin/qaqc_db/2010-11-19/JGIContaminants.fa'
FOSMID_VECTOR = '/global/dna/shared/rqc/ref_databases/qaqc/databases/pCC1Fos.ref.fa'
PHIX = '/global/dna/shared/rqc/ref_databases/qaqc/databases/phix174_ill.ref.fa'
GENERAL_RRNA_FILE = '/global/dna/shared/rqc/ref_databases/qaqc/databases/rRNA.fa' ## including Chlamy
CHLOROPLAST_NCBI_REFSEQ = '/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.plastid/refseq.plastid'
MITOCHONDRION_NCBI_REFSEQ = '/global/dna/shared/rqc/ref_databases/ncbi/CURRENT/refseq.mitochondrion/refseq.mitochondrion'
MICROBES = "/global/projectb/sandbox/gaag/bbtools/commonMicrobes/fusedERPBBmasked.fa.gz" ## non-synthetic contaminants
SYNTHETIC = "/global/projectb/sandbox/gaag/bbtools/data/Illumina.artifacts.2013.12.no_DNA_RNA_spikeins.fa.gz" ## synthetic contaminants
ADAPTERS = "/global/projectb/sandbox/gaag/bbtools/data/adapters.fa"
ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT = "illumina read percent contamination artifact"
ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT_50BP = "illumina read percent contamination artifact 50bp" ## 20131203 Added for 50bp contam
ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT_20BP = "illumina read percent contamination artifact 20bp" ## 11092015 Added for 20bp contam for smRNA
ILLUMINA_READ_PERCENT_CONTAMINATION_DNA_SPIKEIN = "illumina read percent contamination DNA spikein"
ILLUMINA_READ_PERCENT_CONTAMINATION_RNA_SPIKEIN = "illumina read percent contamination RNA spikein"
ILLUMINA_READ_PERCENT_CONTAMINATION_CONTAMINANTS = "illumina read percent contamination contaminants"
# ILLUMINA_READ_PERCENT_CONTAMINATION_ECOLI_COMBINED = "illumina read percent contamination ecoli combined"
ILLUMINA_READ_PERCENT_CONTAMINATION_FOSMID = "illumina read percent contamination fosmid"
ILLUMINA_READ_PERCENT_CONTAMINATION_MITOCHONDRION = "illumina read percent contamination mitochondrion"
ILLUMINA_READ_PERCENT_CONTAMINATION_PHIX = "illumina read percent contamination phix"
ILLUMINA_READ_PERCENT_CONTAMINATION_PLASTID = "illumina read percent contamination plastid"
ILLUMINA_READ_PERCENT_CONTAMINATION_RRNA = "illumina read percent contamination rrna"
ILLUMINA_READ_PERCENT_CONTAMINATION_MICROBES = "illumina read percent contamination microbes" ## non-synthetic
ILLUMINA_READ_PERCENT_CONTAMINATION_SYNTH = "illumina read percent contamination adapters"
ILLUMINA_READ_PERCENT_CONTAMINATION_ADAPTERS = "illumina read percent contamination adapters"
CONTAM_DBS = {}
CONTAM_DBS['artifact'] = ARTIFACT_FILE_NO_SPIKEIN
CONTAM_DBS['artifact_50bp'] = ARTIFACT_FILE_NO_SPIKEIN ## 20131203 Added for 50bp contam
CONTAM_DBS['DNA_spikein'] = ARTIFACT_FILE_DNA_SPIKEIN
CONTAM_DBS['RNA_spikein'] = ARTIFACT_FILE_RNA_SPIKEIN
CONTAM_DBS['contaminants'] = CONTAMINANTS
# CONTAM_DBS['ecoli_combined'] = ECOLI_COMBINED
CONTAM_DBS['fosmid'] = FOSMID_VECTOR
CONTAM_DBS['mitochondrion'] = MITOCHONDRION_NCBI_REFSEQ
CONTAM_DBS['phix'] = PHIX
CONTAM_DBS['plastid'] = CHLOROPLAST_NCBI_REFSEQ
CONTAM_DBS['rrna'] = GENERAL_RRNA_FILE
CONTAM_DBS['microbes'] = MICROBES ## non-synthetic
CONTAM_DBS['synthetic'] = SYNTHETIC
CONTAM_DBS['adapters'] = ADAPTERS
CONTAM_KEYS = {}
CONTAM_KEYS['artifact'] = ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT
CONTAM_KEYS['artifact_50bp'] = ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT_50BP ## 12032013 Added for 50bp contam
CONTAM_KEYS['artifact_20bp'] = ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT_20BP ## 11092015 Added for 20bp contam for smRNA
CONTAM_KEYS['DNA_spikein'] = ILLUMINA_READ_PERCENT_CONTAMINATION_DNA_SPIKEIN
CONTAM_KEYS['RNA_spikein'] = ILLUMINA_READ_PERCENT_CONTAMINATION_RNA_SPIKEIN
CONTAM_KEYS['contaminants'] = ILLUMINA_READ_PERCENT_CONTAMINATION_CONTAMINANTS
# CONTAM_KEYS['ecoli_combined'] = ILLUMINA_READ_PERCENT_CONTAMINATION_ECOLI_COMBINED
CONTAM_KEYS['fosmid'] = ILLUMINA_READ_PERCENT_CONTAMINATION_FOSMID
CONTAM_KEYS['mitochondrion'] = ILLUMINA_READ_PERCENT_CONTAMINATION_MITOCHONDRION
CONTAM_KEYS['phix'] = ILLUMINA_READ_PERCENT_CONTAMINATION_PHIX
CONTAM_KEYS['plastid'] = ILLUMINA_READ_PERCENT_CONTAMINATION_PLASTID
CONTAM_KEYS['rrna'] = ILLUMINA_READ_PERCENT_CONTAMINATION_RRNA
CONTAM_KEYS['microbes'] = ILLUMINA_READ_PERCENT_CONTAMINATION_MICROBES ## non-synthetic
CONTAM_KEYS['synthetic'] = ILLUMINA_READ_PERCENT_CONTAMINATION_SYNTH
CONTAM_KEYS['adapters'] = ILLUMINA_READ_PERCENT_CONTAMINATION_ADAPTERS
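## Usage sketch (not part of the original module): each contaminant label maps to both a
## reference database path and a stats key, e.g.
# for label in RQCContamDb.CONTAM_DBS:
#     dbPath = RQCContamDb.CONTAM_DBS[label]
#     statsKey = RQCContamDb.CONTAM_KEYS.get(label)
#     ## screen the reads against dbPath and record the hit percentage under statsKey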
## Constants for reporting
class ReadqcStats:
ILLUMINA_READ_Q20_READ1 = "read q20 read1"
ILLUMINA_READ_Q20_READ2 = "read q20 read2"
ILLUMINA_READ_QHIST_TEXT = "read qhist text"
ILLUMINA_READ_QHIST_PLOT = "read qhist plot"
ILLUMINA_READ_QHIST_D3_HTML_PLOT = "ILLUMINA_READ_QHIST_D3_HTML_PLOT"
ILLUMINA_READ_QUAL_POS_PLOT_1 = "read qual pos plot 1"
ILLUMINA_READ_QUAL_POS_PLOT_2 = "read qual pos plot 2"
ILLUMINA_READ_QUAL_POS_PLOT_MERGED = "read qual pos plot merged"
ILLUMINA_READ_QUAL_POS_PLOT_MERGED_D3_HTML_PLOT = "ILLUMINA_READ_QUAL_POS_PLOT_MERGED_D3_HTML_PLOT"
ILLUMINA_READ_QUAL_POS_QRPT_1 = "read qual pos qrpt 1"
ILLUMINA_READ_QUAL_POS_QRPT_2 = "read qual pos qrpt 2"
ILLUMINA_READ_QUAL_POS_QRPT_MERGED = "ILLUMINA_READ_QUAL_POS_QRPT_MERGED"
ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_1 = "ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_1"
ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_2 = "ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_2"
ILLUMINA_READ_QUAL_POS_QRPT_D3_HTML_BOXPLOT_1 = "ILLUMINA_READ_QUAL_POS_QRPT_D3_HTML_BOXPLOT_1"
ILLUMINA_READ_QUAL_POS_QRPT_D3_HTML_BOXPLOT_2 = "ILLUMINA_READ_QUAL_POS_QRPT_D3_HTML_BOXPLOT_2"
ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_TEXT = "ILLUMINA_READ_QUAL_POS_QRPT_BOXPLOT_TEXT"
ILLUMINA_READ_BASE_COUNT_TEXT_1 = "read base count text 1"
ILLUMINA_READ_BASE_COUNT_TEXT_2 = "read base count text 2"
ILLUMINA_READ_BASE_COUNT_PLOT_1 = "read base count plot 1"
ILLUMINA_READ_BASE_COUNT_PLOT_2 = "read base count plot 2"
ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_1 = "ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_1" # cyclone nucleotide comp plot
ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_2 = "ILLUMINA_READ_BASE_COUNT_D3_HTML_PLOT_2" # cyclone nucleotide comp plot
ILLUMINA_READ_BASE_PERCENTAGE_TEXT_1 = "read base percentage text 1"
ILLUMINA_READ_BASE_PERCENTAGE_TEXT_2 = "read base percentage text 2"
ILLUMINA_READ_BASE_PERCENTAGE_PLOT_1 = "read base percentage plot 1"
ILLUMINA_READ_BASE_PERCENTAGE_PLOT_2 = "read base percentage plot 2"
ILLUMINA_READ_BASE_PERCENTAGE_D3_HTML_PLOT_1 = "ILLUMINA_READ_BASE_PERCENTAGE_D3_HTML_PLOT_1"
ILLUMINA_READ_BASE_PERCENTAGE_D3_HTML_PLOT_2 = "ILLUMINA_READ_BASE_PERCENTAGE_D3_HTML_PLOT_2"
ILLUMINA_READ_20MER_SAMPLE_SIZE = "read 20mer sample size"
ILLUMINA_READ_20MER_PERCENTAGE_STARTING_MERS = "read 20mer percentage starting mers"
ILLUMINA_READ_20MER_PERCENTAGE_RANDOM_MERS = "read 20mer percentage random mers"
ILLUMINA_READ_20MER_UNIQUENESS_TEXT = "read 20mer uniqueness text"
ILLUMINA_READ_20MER_UNIQUENESS_PLOT = "read 20mer uniqueness plot"
ILLUMINA_READ_BWA_ALIGNED = "read bwa aligned"
ILLUMINA_READ_BWA_ALIGNED_DUPLICATE = "read bwa aligned duplicate"
ILLUMINA_READ_BWA_ALIGNED_DUPLICATE_PERCENT = "read bwa aligned duplicate percent"
ILLUMINA_READ_N_FREQUENCE = "read N frequence"
ILLUMINA_READ_N_PATTERN = "read N pattern"
ILLUMINA_READ_GC_MEAN = "read GC mean"
ILLUMINA_READ_GC_STD = "read GC std"
ILLUMINA_READ_GC_MED = "read GC median"
ILLUMINA_READ_GC_MODE = "read GC mode"
ILLUMINA_READ_GC_PLOT = "read GC plot"
ILLUMINA_READ_GC_D3_HTML_PLOT = "ILLUMINA_READ_GC_D3_HTML_PLOT"
ILLUMINA_READ_GC_TEXT = "read GC text hist"
ILLUMINA_READ_LENGTH_1 = "read length 1"
ILLUMINA_READ_LENGTH_2 = "read length 2"
ILLUMINA_READ_BASE_COUNT = "read base count"
ILLUMINA_READ_COUNT = "read count"
ILLUMINA_READ_TOPHIT_FILE = "read tophit file of"
ILLUMINA_READ_TOP100HIT_FILE = "read top100hit file of"
ILLUMINA_READ_TAXLIST_FILE = "read taxlist file of"
ILLUMINA_READ_TAX_SPECIES = "read tax species of"
ILLUMINA_READ_TOP_HITS = "read top hits of"
ILLUMINA_READ_TOP_100HITS = "read top 100 hits of"
ILLUMINA_READ_PARSED_FILE = "read parsed file of"
ILLUMINA_READ_MATCHING_HITS = "read matching hits of"
ILLUMINA_READS_NUMBER = "reads number"
ILLUMINA_READ_DEMULTIPLEX_STATS = "demultiplex stats"
ILLUMINA_READ_DEMULTIPLEX_STATS_PLOT = "demultiplex stats plot"
ILLUMINA_READ_DEMULTIPLEX_STATS_D3_HTML_PLOT = "ILLUMINA_READ_DEMULTIPLEX_STATS_D3_HTML_PLOT"
ILLUMINA_READ_BASE_QUALITY_STATS = "read base quality stats"
ILLUMINA_READ_BASE_QUALITY_STATS_PLOT = "read base quality stats plot"
ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT = "illumina read percent contamination artifact"
ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT_50BP = "illumina read percent contamination artifact 50bp" ## 20131203 Added for 50bp contam
ILLUMINA_READ_PERCENT_CONTAMINATION_ARTIFACT_20BP = "illumina read percent contamination artifact 20bp" ## 11092015 Added for 20bp contam for smRNA
ILLUMINA_READ_PERCENT_CONTAMINATION_DNA_SPIKEIN = "illumina read percent contamination DNA spikein"
ILLUMINA_READ_PERCENT_CONTAMINATION_RNA_SPIKEIN = "illumina read percent contamination RNA spikein"
ILLUMINA_READ_PERCENT_CONTAMINATION_CONTAMINANTS = "illumina read percent contamination contaminants"
ILLUMINA_READ_PERCENT_CONTAMINATION_ECOLI_B = "illumina read percent contamination ecoli b"
ILLUMINA_READ_PERCENT_CONTAMINATION_ECOLI_K12 = "illumina read percent contamination ecoli k12"
# ILLUMINA_READ_PERCENT_CONTAMINATION_ECOLI_COMBINED = "illumina read percent contamination ecoli combined"
ILLUMINA_READ_PERCENT_CONTAMINATION_FOSMID = "illumina read percent contamination fosmid"
ILLUMINA_READ_PERCENT_CONTAMINATION_MITOCHONDRION = "illumina read percent contamination mitochondrion"
ILLUMINA_READ_PERCENT_CONTAMINATION_PLASTID = "illumina read percent contamination plastid"
ILLUMINA_READ_PERCENT_CONTAMINATION_PHIX = "illumina read percent contamination phix"
ILLUMINA_READ_PERCENT_CONTAMINATION_RRNA = "illumina read percent contamination rrna"
ILLUMINA_READ_SCICLONE_DNA_COUNT_FILE = "illumina read sciclone DNA count file"
ILLUMINA_READ_SCICLONE_RNA_COUNT_FILE = "illumina read sciclone RNA count file"
ILLUMINA_READ_SCICLONE_RNA_COUNT_TOTAL = "ILLUMINA_READ_SCICLONE_RNA_COUNT_TOTAL"
ILLUMINA_READ_SCICLONE_RNA_COUNT_MATCHED = "ILLUMINA_READ_SCICLONE_RNA_COUNT_MATCHED"
ILLUMINA_READ_SCICLONE_RNA_COUNT_MATCHED_PERC = "ILLUMINA_READ_SCICLONE_RNA_COUNT_MATCHED_PERC"
ILLUMINA_READ_SCICLONE_DNA_COUNT_TOTAL = "ILLUMINA_READ_SCICLONE_DNA_COUNT_TOTAL"
ILLUMINA_READ_SCICLONE_DNA_COUNT_MATCHED = "ILLUMINA_READ_SCICLONE_DNA_COUNT_MATCHED"
ILLUMINA_READ_SCICLONE_DNA_COUNT_MATCHED_PERC = "ILLUMINA_READ_SCICLONE_DNA_COUNT_MATCHED_PERC"
ILLUMINA_BASE_Q30 = 'base Q30'
ILLUMINA_BASE_Q25 = 'base Q25'
ILLUMINA_BASE_Q20 = 'base Q20'
ILLUMINA_BASE_Q15 = 'base Q15'
ILLUMINA_BASE_Q10 = 'base Q10'
ILLUMINA_BASE_Q5 = 'base Q5'
ILLUMINA_BASE_C30 = 'base C30'
ILLUMINA_BASE_C25 = 'base C25'
ILLUMINA_BASE_C20 = 'base C20'
ILLUMINA_BASE_C15 = 'base C15'
ILLUMINA_BASE_C10 = 'base C10'
ILLUMINA_BASE_C5 = 'base C5'
ILLUMINA_READ_Q30 = 'read Q30'
ILLUMINA_READ_Q25 = 'read Q25'
ILLUMINA_READ_Q20 = 'read Q20'
ILLUMINA_READ_Q15 = 'read Q15'
ILLUMINA_READ_Q10 = 'read Q10'
ILLUMINA_READ_Q5 = 'read Q5'
ILLUMINA_BASE_Q30_SCORE_MEAN = 'Q30 bases Q score mean'
ILLUMINA_BASE_Q30_SCORE_STD = 'Q30 bases Q score std'
ILLUMINA_BASE_OVERALL_BASES_Q_SCORE_MEAN = 'overall bases Q score mean'
ILLUMINA_BASE_OVERALL_BASES_Q_SCORE_STD = 'overall bases Q score std'
## read qc step 1
ILLUMINA_TOO_SMALL_NUM_READS = "ILLUMINA_TOO_SMALL_NUM_READS" ## flag to notify too low number of reads
## read qc step 2
ILLUMINA_READ_20MER_UNIQUENESS_D3_HTML_PLOT = "ILLUMINA_READ_20MER_UNIQUENESS_D3_HTML_PLOT"
## read qc step 13, 14, 15
ILLUMINA_SKIP_BLAST = "ILLUMINA_SKIP_BLAST" ## to record the blast step was skipped
## read qc step 17
ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_DATA = "ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_DATA"
ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_PLOT = "ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_PLOT"
ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_D3_HTML_PLOT = "ILLUMINA_READ_END_OF_READ_ADAPTER_CHECK_D3_HTML_PLOT"
## read qc step 18
ILLUMINA_READ_INSERT_SIZE_HISTO_PLOT = "ILLUMINA_READ_INSERT_SIZE_HISTO_PLOT"
ILLUMINA_READ_INSERT_SIZE_HISTO_DATA = "ILLUMINA_READ_INSERT_SIZE_HISTO_DATA"
ILLUMINA_READ_INSERT_SIZE_HISTO_D3_HTML_PLOT = "ILLUMINA_READ_INSERT_SIZE_HISTO_D3_HTML_PLOT"
ILLUMINA_READ_INSERT_SIZE_TOTAL_TIME = "ILLUMINA_READ_INSERT_SIZE_TOTAL_TIME"
ILLUMINA_READ_INSERT_SIZE_NUM_READS = "ILLUMINA_READ_INSERT_SIZE_NUM_READS"
ILLUMINA_READ_INSERT_SIZE_JOINED_NUM = "ILLUMINA_READ_INSERT_SIZE_JOINED_NUM"
ILLUMINA_READ_INSERT_SIZE_JOINED_PERC = "ILLUMINA_READ_INSERT_SIZE_JOINED_PERC"
ILLUMINA_READ_INSERT_SIZE_AMBIGUOUS_NUM = "ILLUMINA_READ_INSERT_SIZE_AMBIGUOUS_NUM"
ILLUMINA_READ_INSERT_SIZE_AMBIGUOUS_PERC = "ILLUMINA_READ_INSERT_SIZE_AMBIGUOUS_PERC"
ILLUMINA_READ_INSERT_SIZE_NO_SOLUTION_NUM = "ILLUMINA_READ_INSERT_SIZE_NO_SOLUTION_NUM"
ILLUMINA_READ_INSERT_SIZE_NO_SOLUTION_PERC = "ILLUMINA_READ_INSERT_SIZE_NO_SOLUTION_PERC"
ILLUMINA_READ_INSERT_SIZE_TOO_SHORT_NUM = "ILLUMINA_READ_INSERT_SIZE_TOO_SHORT_NUM"
ILLUMINA_READ_INSERT_SIZE_TOO_SHORT_PERC = "ILLUMINA_READ_INSERT_SIZE_TOO_SHORT_PERC"
ILLUMINA_READ_INSERT_SIZE_AVG_INSERT = "ILLUMINA_READ_INSERT_SIZE_AVG_INSERT"
ILLUMINA_READ_INSERT_SIZE_STD_INSERT = "ILLUMINA_READ_INSERT_SIZE_STD_INSERT"
ILLUMINA_READ_INSERT_SIZE_MODE_INSERT = "ILLUMINA_READ_INSERT_SIZE_MODE_INSERT"
ILLUMINA_READ_INSERT_SIZE_INSERT_RANGE_START = "ILLUMINA_READ_INSERT_SIZE_INSERT_RANGE_START"
ILLUMINA_READ_INSERT_SIZE_INSERT_RANGE_END = "ILLUMINA_READ_INSERT_SIZE_INSERT_RANGE_END"
ILLUMINA_READ_INSERT_SIZE_90TH_PERC = "ILLUMINA_READ_INSERT_SIZE_90TH_PERC"
ILLUMINA_READ_INSERT_SIZE_50TH_PERC = "ILLUMINA_READ_INSERT_SIZE_50TH_PERC"
ILLUMINA_READ_INSERT_SIZE_10TH_PERC = "ILLUMINA_READ_INSERT_SIZE_10TH_PERC"
ILLUMINA_READ_INSERT_SIZE_75TH_PERC = "ILLUMINA_READ_INSERT_SIZE_75TH_PERC"
ILLUMINA_READ_INSERT_SIZE_25TH_PERC = "ILLUMINA_READ_INSERT_SIZE_25TH_PERC"
## step 19
GC_DIVERGENCE_CSV_FILE = "GC_DIVERGENCE_CSV_FILE"
GC_DIVERGENCE_PLOT_FILE = "GC_DIVERGENCE_PLOT_FILE"
GC_DIVERGENCE_COEFFICIENTS_CSV_FILE = "GC_DIVERGENCE_COEFFICIENTS_CSV_FILE"
#GC_DIVERGENCE_VAL = "GC_DIVERGENCE_VAL"
GC_DIVERGENCE_COEFF_R1_AT = "GC_DIVERGENCE_COEFF_R1_AT"
GC_DIVERGENCE_COEFF_R1_ATCG = "GC_DIVERGENCE_COEFF_R1_ATCG"
GC_DIVERGENCE_COEFF_R1_CG = "GC_DIVERGENCE_COEFF_R1_CG"
GC_DIVERGENCE_COEFF_R2_AT = "GC_DIVERGENCE_COEFF_R2_AT"
GC_DIVERGENCE_COEFF_R2_ATCG = "GC_DIVERGENCE_COEFF_R2_ATCG"
GC_DIVERGENCE_COEFF_R2_CG = "GC_DIVERGENCE_COEFF_R2_CG"
## EOF | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pytools/lib/readqc_constants.py | readqc_constants.py |
ln -s /global/projectb/sandbox/gaag/bbtools/silva/latest/both_deduped_sorted.fa.gz .
#Aggressively remove similar sequences
dedupe.sh in=both_deduped_sorted.fa.gz out=dd2_e2_s5.fa.gz s=5 e=2 ordered zl=9 fastawrap=4000
#Less-aggressively remove duplicate sequences
clumpify.sh in=both_deduped_sorted.fa.gz out=clumped_s2.fa.gz zl=9 s=2 dedupe fastawrap=4000 ow passes=2
#Entropy-mask the sequences
bbduk.sh -Xmx1g in=dd2_e2_s5.fa.gz out=dd2_e2_s5_masked.fa.gz zl=9 entropy=0.6 entropyk=4 entropywindow=24 maskentropy ordered ow qtrim=rl trimq=1 fastawrap=4000
bbduk.sh -Xmx1g in=clumped_s2.fa.gz out=clumped_s2_masked.fa.gz zl=9 entropy=0.6 entropyk=4 entropywindow=24 maskentropy ordered ow qtrim=rl trimq=1 fastawrap=4000
#Generate synthetic reads
randomreads.sh -Xmx31g adderrors=f ref=clumped_s2_masked.fa.gz reads=300m out=synth_s2.fa.gz len=100 zl=6 fastawrap=4000 illuminanames
randomreads.sh -Xmx8g adderrors=f ref=dd2_e2_s5_masked.fa.gz reads=300m out=synth_e2_s5.fa.gz len=100 zl=6 fastawrap=4000 illuminanames
#Remove duplicate reads
clumpify.sh in=synth_s2.fa.gz out=synth_s2_clumped_s1.fa.gz reorder zl=9 fastawrap=4000 groups=1 dedupe s=1 rcomp
clumpify.sh in=synth_e2_s5.fa.gz out=synth_e2_s5_clumped_s1.fa.gz reorder zl=9 fastawrap=4000 groups=1 dedupe s=1 rcomp
#Create baseline kmer sets at different depths for different sensitivities (only one depth is needed)
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=1000 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers1000A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=500 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers500A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=200 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers200A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=100 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers100A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=50 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers50A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=40 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers40A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=30 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers30A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=20 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers20A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=10 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers10A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=8 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers8A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=5 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers5A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=3 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers3A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=2 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers2A.fa.gz
kcompress.sh -Xmx31g ow zl=9 pigz=16 min=1 in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers1A.fa.gz
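#The fourteen commands above follow one pattern and could equivalently be written as a loop
#(sketch; the per-depth bbduk "find missed reads" block below follows the same pattern):
#for d in 1000 500 200 100 50 40 30 20 10 8 5 3 2 1; do
# kcompress.sh -Xmx31g ow zl=9 pigz=16 min=$d in=dd2_e2_s5_masked.fa.gz out=stdout.fa | clumpify.sh -Xmx16g in=stdin.fa k=16 reorder groups=1 fastawrap=4000 ow zl=9 pigz=16 out=riboKmers${d}A.fa.gz
#done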
#Find the missed synthetic reads
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers1000A.fa.gz out=read_misses1000A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers500A.fa.gz out=read_misses500A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers200A.fa.gz out=read_misses200A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers100A.fa.gz out=read_misses100A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers50A.fa.gz out=read_misses50A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers40A.fa.gz out=read_misses40A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers30A.fa.gz out=read_misses30A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers20A.fa.gz out=read_misses20A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers10A.fa.gz out=read_misses10A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers8A.fa.gz out=read_misses8A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers5A.fa.gz out=read_misses5A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers3A.fa.gz out=read_misses3A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers2A.fa.gz out=read_misses2A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
bbduk.sh -Xmx8g in=synth_e2_s5_clumped_s1.fa.gz ref=riboKmers1A.fa.gz out=read_misses1A.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
#Iterate over each depth to add missed kmers (again, only one depth is needed)
kcompress.sh -Xmx31g ow min=2000 in=read_misses1000A.fa.gz out=riboKmers1000B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses1000A.fa.gz ref=riboKmers1000A.fa.gz,riboKmers1000B.fa.gz out=read_misses1000B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=1000 in=read_misses1000B.fa.gz out=riboKmers1000C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers1000A.fa.gz,riboKmers1000B.fa.gz,riboKmers1000C.fa.gz out=read_misses1000C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=500 in=read_misses1000C.fa.gz out=riboKmers1000D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses1000C.fa.gz ref=riboKmers1000A.fa.gz,riboKmers1000B.fa.gz,riboKmers1000C.fa.gz,riboKmers1000D.fa.gz out=read_misses1000D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=250 in=read_misses1000D.fa.gz out=riboKmers1000E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses1000D.fa.gz ref=riboKmers1000A.fa.gz,riboKmers1000B.fa.gz,riboKmers1000C.fa.gz,riboKmers1000D.fa.gz,riboKmers1000E.fa.gz out=read_misses1000E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers1000A.fa.gz,riboKmers1000B.fa.gz,riboKmers1000C.fa.gz,riboKmers1000D.fa.gz,riboKmers1000E.fa.gz out=riboKmers1000merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers1000merged.fa.gz out=riboKmers1000clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers1000clumped.fa.gz out=riboKmers1000fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=1000 in=read_misses500A.fa.gz out=riboKmers500B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses500A.fa.gz ref=riboKmers500A.fa.gz,riboKmers500B.fa.gz out=read_misses500B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=500 in=read_misses500B.fa.gz out=riboKmers500C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers500A.fa.gz,riboKmers500B.fa.gz,riboKmers500C.fa.gz out=read_misses500C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=250 in=read_misses500C.fa.gz out=riboKmers500D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses500C.fa.gz ref=riboKmers500A.fa.gz,riboKmers500B.fa.gz,riboKmers500C.fa.gz,riboKmers500D.fa.gz out=read_misses500D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=125 in=read_misses500D.fa.gz out=riboKmers500E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses500D.fa.gz ref=riboKmers500A.fa.gz,riboKmers500B.fa.gz,riboKmers500C.fa.gz,riboKmers500D.fa.gz,riboKmers500E.fa.gz out=read_misses500E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers500A.fa.gz,riboKmers500B.fa.gz,riboKmers500C.fa.gz,riboKmers500D.fa.gz,riboKmers500E.fa.gz out=riboKmers500merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers500merged.fa.gz out=riboKmers500clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers500clumped.fa.gz out=riboKmers500fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=500 in=read_misses200A.fa.gz out=riboKmers200B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses200A.fa.gz ref=riboKmers200A.fa.gz,riboKmers200B.fa.gz out=read_misses200B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=250 in=read_misses200B.fa.gz out=riboKmers200C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers200A.fa.gz,riboKmers200B.fa.gz,riboKmers200C.fa.gz out=read_misses200C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=125 in=read_misses200C.fa.gz out=riboKmers200D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses200C.fa.gz ref=riboKmers200A.fa.gz,riboKmers200B.fa.gz,riboKmers200C.fa.gz,riboKmers200D.fa.gz out=read_misses200D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=60 in=read_misses200D.fa.gz out=riboKmers200E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses200D.fa.gz ref=riboKmers200A.fa.gz,riboKmers200B.fa.gz,riboKmers200C.fa.gz,riboKmers200D.fa.gz,riboKmers200E.fa.gz out=read_misses200E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers200A.fa.gz,riboKmers200B.fa.gz,riboKmers200C.fa.gz,riboKmers200D.fa.gz,riboKmers200E.fa.gz out=riboKmers200merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers200merged.fa.gz out=riboKmers200clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers200clumped.fa.gz out=riboKmers200fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=200 in=read_misses100A.fa.gz out=riboKmers100B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses100A.fa.gz ref=riboKmers100A.fa.gz,riboKmers100B.fa.gz out=read_misses100B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=100 in=read_misses100B.fa.gz out=riboKmers100C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers100A.fa.gz,riboKmers100B.fa.gz,riboKmers100C.fa.gz out=read_misses100C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=50 in=read_misses100C.fa.gz out=riboKmers100D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses100C.fa.gz ref=riboKmers100A.fa.gz,riboKmers100B.fa.gz,riboKmers100C.fa.gz,riboKmers100D.fa.gz out=read_misses100D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=25 in=read_misses100D.fa.gz out=riboKmers100E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses100D.fa.gz ref=riboKmers100A.fa.gz,riboKmers100B.fa.gz,riboKmers100C.fa.gz,riboKmers100D.fa.gz,riboKmers100E.fa.gz out=read_misses100E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers100A.fa.gz,riboKmers100B.fa.gz,riboKmers100C.fa.gz,riboKmers100D.fa.gz,riboKmers100E.fa.gz out=riboKmers100merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers100merged.fa.gz out=riboKmers100clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers100clumped.fa.gz out=riboKmers100fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=100 in=read_misses50A.fa.gz out=riboKmers50B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses50A.fa.gz ref=riboKmers50A.fa.gz,riboKmers50B.fa.gz out=read_misses50B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=50 in=read_misses50B.fa.gz out=riboKmers50C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers50A.fa.gz,riboKmers50B.fa.gz,riboKmers50C.fa.gz out=read_misses50C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=25 in=read_misses50C.fa.gz out=riboKmers50D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses50C.fa.gz ref=riboKmers50A.fa.gz,riboKmers50B.fa.gz,riboKmers50C.fa.gz,riboKmers50D.fa.gz out=read_misses50D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=12 in=read_misses50D.fa.gz out=riboKmers50E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses50D.fa.gz ref=riboKmers50A.fa.gz,riboKmers50B.fa.gz,riboKmers50C.fa.gz,riboKmers50D.fa.gz,riboKmers50E.fa.gz out=read_misses50E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers50A.fa.gz,riboKmers50B.fa.gz,riboKmers50C.fa.gz,riboKmers50D.fa.gz,riboKmers50E.fa.gz out=riboKmers50merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers50merged.fa.gz out=riboKmers50clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers50clumped.fa.gz out=riboKmers50fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=80 in=read_misses40A.fa.gz out=riboKmers40B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses40A.fa.gz ref=riboKmers40A.fa.gz,riboKmers40B.fa.gz out=read_misses40B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=40 in=read_misses40B.fa.gz out=riboKmers40C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers40A.fa.gz,riboKmers40B.fa.gz,riboKmers40C.fa.gz out=read_misses40C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=20 in=read_misses40C.fa.gz out=riboKmers40D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses40C.fa.gz ref=riboKmers40A.fa.gz,riboKmers40B.fa.gz,riboKmers40C.fa.gz,riboKmers40D.fa.gz out=read_misses40D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=10 in=read_misses40D.fa.gz out=riboKmers40E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses40D.fa.gz ref=riboKmers40A.fa.gz,riboKmers40B.fa.gz,riboKmers40C.fa.gz,riboKmers40D.fa.gz,riboKmers40E.fa.gz out=read_misses40E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers40A.fa.gz,riboKmers40B.fa.gz,riboKmers40C.fa.gz,riboKmers40D.fa.gz,riboKmers40E.fa.gz out=riboKmers40merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers40merged.fa.gz out=riboKmers40clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers40clumped.fa.gz out=riboKmers40fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=60 in=read_misses30A.fa.gz out=riboKmers30B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses30A.fa.gz ref=riboKmers30A.fa.gz,riboKmers30B.fa.gz out=read_misses30B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=30 in=read_misses30B.fa.gz out=riboKmers30C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers30A.fa.gz,riboKmers30B.fa.gz,riboKmers30C.fa.gz out=read_misses30C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=15 in=read_misses30C.fa.gz out=riboKmers30D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses30C.fa.gz ref=riboKmers30A.fa.gz,riboKmers30B.fa.gz,riboKmers30C.fa.gz,riboKmers30D.fa.gz out=read_misses30D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=8 in=read_misses30D.fa.gz out=riboKmers30E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses30D.fa.gz ref=riboKmers30A.fa.gz,riboKmers30B.fa.gz,riboKmers30C.fa.gz,riboKmers30D.fa.gz,riboKmers30E.fa.gz out=read_misses30E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers30A.fa.gz,riboKmers30B.fa.gz,riboKmers30C.fa.gz,riboKmers30D.fa.gz,riboKmers30E.fa.gz out=riboKmers30merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers30merged.fa.gz out=riboKmers30clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers30clumped.fa.gz out=riboKmers30fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=40 in=read_misses20A.fa.gz out=riboKmers20B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses20A.fa.gz ref=riboKmers20A.fa.gz,riboKmers20B.fa.gz out=read_misses20B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=20 in=read_misses20B.fa.gz out=riboKmers20C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers20A.fa.gz,riboKmers20B.fa.gz,riboKmers20C.fa.gz out=read_misses20C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=12 in=read_misses20C.fa.gz out=riboKmers20D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses20C.fa.gz ref=riboKmers20A.fa.gz,riboKmers20B.fa.gz,riboKmers20C.fa.gz,riboKmers20D.fa.gz out=read_misses20D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=7 in=read_misses20D.fa.gz out=riboKmers20E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses20D.fa.gz ref=riboKmers20A.fa.gz,riboKmers20B.fa.gz,riboKmers20C.fa.gz,riboKmers20D.fa.gz,riboKmers20E.fa.gz out=read_misses20E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers20A.fa.gz,riboKmers20B.fa.gz,riboKmers20C.fa.gz,riboKmers20D.fa.gz,riboKmers20E.fa.gz out=riboKmers20merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers20merged.fa.gz out=riboKmers20clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers20clumped.fa.gz out=riboKmers20fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=20 in=read_misses10A.fa.gz out=riboKmers10B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses10A.fa.gz ref=riboKmers10A.fa.gz,riboKmers10B.fa.gz out=read_misses10B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=12 in=read_misses10B.fa.gz out=riboKmers10C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers10A.fa.gz,riboKmers10B.fa.gz,riboKmers10C.fa.gz out=read_misses10C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=8 in=read_misses10C.fa.gz out=riboKmers10D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses10C.fa.gz ref=riboKmers10A.fa.gz,riboKmers10B.fa.gz,riboKmers10C.fa.gz,riboKmers10D.fa.gz out=read_misses10D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=5 in=read_misses10D.fa.gz out=riboKmers10E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses10D.fa.gz ref=riboKmers10A.fa.gz,riboKmers10B.fa.gz,riboKmers10C.fa.gz,riboKmers10D.fa.gz,riboKmers10E.fa.gz out=read_misses10E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers10A.fa.gz,riboKmers10B.fa.gz,riboKmers10C.fa.gz,riboKmers10D.fa.gz,riboKmers10E.fa.gz out=riboKmers10merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers10merged.fa.gz out=riboKmers10clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers10clumped.fa.gz out=riboKmers10fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=16 in=read_misses8A.fa.gz out=riboKmers8B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses8A.fa.gz ref=riboKmers8A.fa.gz,riboKmers8B.fa.gz out=read_misses8B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=10 in=read_misses8B.fa.gz out=riboKmers8C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers8A.fa.gz,riboKmers8B.fa.gz,riboKmers8C.fa.gz out=read_misses8C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=7 in=read_misses8C.fa.gz out=riboKmers8D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses8C.fa.gz ref=riboKmers8A.fa.gz,riboKmers8B.fa.gz,riboKmers8C.fa.gz,riboKmers8D.fa.gz out=read_misses8D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=4 in=read_misses8D.fa.gz out=riboKmers8E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses8D.fa.gz ref=riboKmers8A.fa.gz,riboKmers8B.fa.gz,riboKmers8C.fa.gz,riboKmers8D.fa.gz,riboKmers8E.fa.gz out=read_misses8E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers8A.fa.gz,riboKmers8B.fa.gz,riboKmers8C.fa.gz,riboKmers8D.fa.gz,riboKmers8E.fa.gz out=riboKmers8merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers8merged.fa.gz out=riboKmers8clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers8clumped.fa.gz out=riboKmers8fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=12 in=read_misses5A.fa.gz out=riboKmers5B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses5A.fa.gz ref=riboKmers5A.fa.gz,riboKmers5B.fa.gz out=read_misses5B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=8 in=read_misses5B.fa.gz out=riboKmers5C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers5A.fa.gz,riboKmers5B.fa.gz,riboKmers5C.fa.gz out=read_misses5C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=5 in=read_misses5C.fa.gz out=riboKmers5D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses5C.fa.gz ref=riboKmers5A.fa.gz,riboKmers5B.fa.gz,riboKmers5C.fa.gz,riboKmers5D.fa.gz out=read_misses5D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=3 in=read_misses5D.fa.gz out=riboKmers5E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses5D.fa.gz ref=riboKmers5A.fa.gz,riboKmers5B.fa.gz,riboKmers5C.fa.gz,riboKmers5D.fa.gz,riboKmers5E.fa.gz out=read_misses5E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers5A.fa.gz,riboKmers5B.fa.gz,riboKmers5C.fa.gz,riboKmers5D.fa.gz,riboKmers5E.fa.gz out=riboKmers5merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers5merged.fa.gz out=riboKmers5clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers5clumped.fa.gz out=riboKmers5fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=10 in=read_misses3A.fa.gz out=riboKmers3B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses3A.fa.gz ref=riboKmers3A.fa.gz,riboKmers3B.fa.gz out=read_misses3B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=7 in=read_misses3B.fa.gz out=riboKmers3C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers3A.fa.gz,riboKmers3B.fa.gz,riboKmers3C.fa.gz out=read_misses3C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=4 in=read_misses3C.fa.gz out=riboKmers3D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses3C.fa.gz ref=riboKmers3A.fa.gz,riboKmers3B.fa.gz,riboKmers3C.fa.gz,riboKmers3D.fa.gz out=read_misses3D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=3 in=read_misses3D.fa.gz out=riboKmers3E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses3D.fa.gz ref=riboKmers3A.fa.gz,riboKmers3B.fa.gz,riboKmers3C.fa.gz,riboKmers3D.fa.gz,riboKmers3E.fa.gz out=read_misses3E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers3A.fa.gz,riboKmers3B.fa.gz,riboKmers3C.fa.gz,riboKmers3D.fa.gz,riboKmers3E.fa.gz out=riboKmers3merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers3merged.fa.gz out=riboKmers3clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers3clumped.fa.gz out=riboKmers3fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
kcompress.sh -Xmx31g ow min=8 in=read_misses2A.fa.gz out=riboKmers2B.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses2A.fa.gz ref=riboKmers2A.fa.gz,riboKmers2B.fa.gz out=read_misses2B.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=6 in=read_misses2B.fa.gz out=riboKmers2C.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=synth_s2_clumped_s1.fa.gz ref=riboKmers2A.fa.gz,riboKmers2B.fa.gz,riboKmers2C.fa.gz out=read_misses2C.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=4 in=read_misses2C.fa.gz out=riboKmers2D.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses2C.fa.gz ref=riboKmers2A.fa.gz,riboKmers2B.fa.gz,riboKmers2C.fa.gz,riboKmers2D.fa.gz out=read_misses2D.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow min=2 in=read_misses2D.fa.gz out=riboKmers2E.fa.gz fastawrap=4000 ow zl=9 pigz=16
bbduk.sh -Xmx8g in=read_misses2D.fa.gz ref=riboKmers2A.fa.gz,riboKmers2B.fa.gz,riboKmers2C.fa.gz,riboKmers2D.fa.gz,riboKmers2E.fa.gz out=read_misses2E.fa.gz zl=6 k=31 mm=f ordered fastawrap=4000 ow
kcompress.sh -Xmx31g ow in=riboKmers2A.fa.gz,riboKmers2B.fa.gz,riboKmers2C.fa.gz,riboKmers2D.fa.gz,riboKmers2E.fa.gz out=riboKmers2merged.fa.gz fastawrap=4000 ow zl=9 pigz=16
clumpify.sh k=16 in=riboKmers2merged.fa.gz out=riboKmers2clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers2clumped.fa.gz out=riboKmers2fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1
clumpify.sh k=16 in=riboKmers1A.fa.gz out=riboKmers1clumped.fa.gz g=1 zl=9 fastawrap=4000 reorder rcomp ow
fuse.sh -Xmx1g ow in=riboKmers1A.fa.gz out=riboKmers1fused.fa.gz fastawrap=8000 ow zl=11 pigz=32 maxlen=4000 npad=1 | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/makeRiboKmers.sh | makeRiboKmers.sh |
set -e
#Written by Brian Bushnell
#Last updated February 21, 2018
#Combines IMG (a JGI genome set) into a single flat file, renamed by taxonomy, and sketches it.
#This script only works on Genepool or other systems connected to NERSC.
#"time" before each command is optional.
#Rename all contigs by prefixing them with the IMG ID, and put them in a single file.
#"in=auto" reads the IMG ID, taxID, and file location from a file at /global/projectb/sandbox/gaag/bbtools/tax/img2/IMG_taxonID_ncbiID_fna.txt
#This is a 3-column tsv file.
#The "imghq" flag uses a different file, "IMG_taxonID_ncbiID_fna_HQ.txt", which contains only high-quality genomes.
time renameimg.sh in=auto imghq out=renamed.fa.gz fastawrap=255 zl=6
#Make the IMG blacklist of uninformative kmers occurring in over 300 different species.
#This is optional, but tends to increase query speed and reduce false positives.
time sketchblacklist.sh -Xmx31g in=renamed.fa.gz prepasses=1 tree=auto taxa taxlevel=species ow out=blacklist_img_species_300.sketch mincount=300 k=31,24 imghq
#Sketch the reference genomes, creating one sketch per IMG ID.
#They are written to 31 files, img0.sketch through img30.sketch.
#The only reason for this is to allow parallel loading by CompareSketch.
time sketch.sh -Xmx31g in=renamed.fa.gz out=img#.sketch files=31 mode=img tree=auto img=auto gi=null ow blacklist=blacklist_img_species_300.sketch k=31,24 imghq
#A query such as contigs.fa can now be compared to the new reference sketches like this:
#comparesketch.sh in=contigs.fa k=31,24 tree=auto img*.sketch blacklist=blacklist_img_species_300.sketch printimg
#On NERSC systems, you can then set the default path to img by pointing /global/projectb/sandbox/gaag/bbtools/img/current at the path to the new sketches.
#Then you can use the default set of img sketches like this:
#comparesketch.sh in=contigs.fa img tree=auto printimg
#That command automatically adds the default path to the sketches, the blacklist, and the correct values for K. | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/processIMG.sh | processIMG.sh |
set -e
#Written by Brian Bushnell
#Last updated March 4, 2019
#This script is designed to preprocess data for assembly of overlapping 2x150bp reads from Illumina HiSeq 2500.
#Some numbers and steps may need adjustment for different data types or file paths.
#For large genomes, tadpole and bbmerge (during the "Merge" phase) may need the flag "prefilter=2" to avoid running out of memory.
#"prefilter" makes these take twice as long though so don't use it if you have enough memory.
#The "rm temp.fq.gz; ln -s reads.fq.gz temp.fq.gz" is not necessary but added so that any pipeline stage can be easily disabled,
#without affecting the input file name of the next stage.
# --- Setup ---
#Load dependencies.
#These module load commands are for Genepool; getting the correct executables in your path will vary by system.
if [[ $NERSC_HOST == genepool ]]; then
#module load bbtools
module load spades/3.9.0
module load megahit
module load pigz
module load quast
elif [[ $NERSC_HOST == denovo ]]; then
#TODO
elif [[ $NERSC_HOST == cori ]]; then
#TODO
fi
#Link the interleaved input file as "temp.fq.gz"
rm temp.fq.gz; ln -s reads.fq.gz temp.fq.gz
# --- Preprocessing ---
#Remove optical duplicates
clumpify.sh in=temp.fq.gz out=clumped.fq.gz dedupe optical
rm temp.fq.gz; ln -s clumped.fq.gz temp.fq.gz
#Remove low-quality regions
filterbytile.sh in=temp.fq.gz out=filtered_by_tile.fq.gz
rm temp.fq.gz; ln -s filtered_by_tile.fq.gz temp.fq.gz
#Trim adapters. Optionally, reads with Ns can be discarded by adding "maxns=0" and reads with really low average quality can be discarded with "maq=8".
bbduk.sh in=temp.fq.gz out=trimmed.fq.gz ktrim=r k=23 mink=11 hdist=1 tbo tpe minlen=70 ref=adapters ftm=5 ordered
rm temp.fq.gz; ln -s trimmed.fq.gz temp.fq.gz
#Remove synthetic artifacts and spike-ins by kmer-matching.
bbduk.sh in=temp.fq.gz out=filtered.fq.gz k=31 ref=artifacts,phix ordered cardinality
rm temp.fq.gz; ln -s filtered.fq.gz temp.fq.gz
#Decontamination by mapping can be done here.
#JGI removes these in two phases:
#1) common microbial contaminants (E.coli, Pseudomonas, Delftia, others)
#2) common animal contaminants (Human, cat, dog, mouse)
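#As a sketch, a mapping-based decontamination stage could look like the commented commands below;
#"contam_genomes.fa" is a placeholder for whatever contaminant references apply, and outu captures the clean (unmapped) reads.
#bbmap.sh in=temp.fq.gz outu=decontam.fq.gz outm=contam.fq.gz ref=contam_genomes.fa nodisk fast minid=0.95
#rm temp.fq.gz; ln -s decontam.fq.gz temp.fq.gz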
#Error-correct phase 1
bbmerge.sh in=temp.fq.gz out=ecco.fq.gz ecco mix vstrict ordered ihist=ihist_merge1.txt
rm temp.fq.gz; ln -s ecco.fq.gz temp.fq.gz
#Error-correct phase 2
clumpify.sh in=temp.fq.gz out=eccc.fq.gz ecc passes=4 reorder
rm temp.fq.gz; ln -s eccc.fq.gz temp.fq.gz
#Error-correct phase 3
#Low-depth reads can be discarded here with the "tossjunk", "tossdepth", or "tossuncorrectable" flags.
#For very large datasets, "prefilter=1" or "prefilter=2" can be added to conserve memory.
#Alternatively, bbcms.sh can be used if Tadpole still runs out of memory.
tadpole.sh in=temp.fq.gz out=ecct.fq.gz ecc k=62 ordered
rm temp.fq.gz; ln -s ecct.fq.gz temp.fq.gz
#Normalize
#This phase can be very beneficial for data with uneven coverage like metagenomes, MDA-amplified single cells, and RNA-seq, but is not usually recommended for isolate DNA.
#So normally, this stage should be commented out, as it is here.
#bbnorm.sh in=temp.fq.gz out=normalized.fq.gz target=100 hist=khist.txt peaks=peaks.txt
#rm temp.fq.gz; ln -s normalized.fq.gz temp.fq.gz
#Merge
#This phase handles overlapping reads,
#and also nonoverlapping reads, if there is sufficient coverage and sufficiently short inter-read gaps
#For very large datasets, "prefilter=1" or "prefilter=2" can be added to conserve memory.
bbmerge-auto.sh in=temp.fq.gz out=merged.fq.gz outu=unmerged.fq.gz strict k=93 extend2=80 rem ordered ihist=ihist_merge.txt
#Quality-trim the unmerged reads.
bbduk.sh in=unmerged.fq.gz out=qtrimmed.fq.gz qtrim=r trimq=10 minlen=70 ordered
# --- Assembly ---
#You do not need to assemble with all assemblers, but I have listed the commands for the 3 I use most often
#Assemble with Tadpole
#For very large datasets, "prefilter=1" or "prefilter=2" can be added to conserve memory.
tadpole.sh in=merged.fq.gz,qtrimmed.fq.gz out=tadpole_contigs.fa k=124
#Or assemble with TadWrapper (which automatically finds the best value of K but takes longer)
tadwrapper.sh in=merged.fq.gz,qtrimmed.fq.gz out=tadwrapper_contigs_%.fa outfinal=tadwrapper_contigs k=40,124,217 bisect
#Assemble with Spades
spades.py -k 25,55,95,125 --phred-offset 33 -s merged.fq.gz --12 qtrimmed.fq.gz -o spades_out
#Assemble with Megahit
#Note that the above error correction phases tend to not be beneficial for Megahit
megahit --k-min 45 --k-max 225 --k-step 26 --min-count 2 -r merged.fq.gz --12 qtrimmed.fq.gz -o megahit_out
# --- Evaluation ---
#Evaluate assemblies with AssemblyStats
statswrapper.sh tadpole_contigs.fa spades_out/scaffolds.fasta megahit_out/contigs.fa format=3 out=
#Evaluate assemblies with Quast (leave out "-R ref.fa" if you don't have a reference)
quast.py -f -o quast -R ref.fa tadpole_contigs.fa spades_out/scaffolds.fasta megahit_out/contigs.fa
#Pick which assembly you like best
#Determine the overall taxonomic makeup of the assembly
sendsketch.sh in=tadpole_contigs.fa
#Or, try to determine taxonomy on a per-contig basis. If this is not sensitive enough, try BLAST instead.
sendsketch.sh in=tadpole_contigs.fa persequence minhits=1 records=4
#Calculate the coverage distribution, and capture reads that did not make it into the assembly
bbmap.sh in=filtered.fq.gz ref=tadpole_contigs.fa nodisk covhist=covhist.txt covstats=covstats.txt outm=assembled.fq.gz outu=unassembled.fq.gz maxindel=200 minid=90 qtrim=10 untrim ambig=all | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/assemblyPipeline.sh | assemblyPipeline.sh |
set -e
#Written by Brian Bushnell
#Last updated January 29, 2020
#This script is designed to preprocess and map data for variation calling of 2x150bp reads from Illumina HiSeq 2500.
#Some numbers and steps may need adjustment for different data types or file paths.
#The commands assume single-ended reads, or paired reads interleaved in a single file.
#For paired reads in 2 files, "in1=" and "in2=" may be used.
# --------------- #
#Remove duplicates
#Optical deduplication requires standard Illumina read headers and will not work with renamed reads, such as most SRA data.
#To perform PCR-duplicate removal (of all duplicates regardless of location), omit the "optical" flag.
#Deduplication is generally not recommended for quantification experiments such as RNA-seq.
clumpify.sh in=reads.fq.gz out=clumped.fq.gz dedupe optical
#Remove low-quality regions
#This step requires standard Illumina read headers and will not work with renamed reads, such as most SRA data.
filterbytile.sh in=clumped.fq.gz out=filtered_by_tile.fq.gz
#Trim adapters
bbduk.sh in=filtered_by_tile.fq.gz out=trimmed.fq.gz ktrim=r k=23 mink=11 hdist=1 tbo tpe minlen=100 ref=bbmap/resources/adapters.fa ftm=5 ordered
#Remove synthetic artifacts and spike-ins. Add "qtrim=r trimq=8" to also perform quality-trimming at this point, but not if quality recalibration will be done later.
bbduk.sh in=trimmed.fq.gz out=filtered.fq.gz k=27 ref=bbmap/resources/sequencing_artifacts.fa.gz,bbmap/resources/phix174_ill.ref.fa.gz ordered
#Map to reference
bbmap.sh in=filtered.fq.gz out=mapped.sam.gz bs=bs.sh pigz unpigz ref=reference.fa
#Call variants
callvariants.sh in=mapped.sam.gz out=vars.vcf.gz ref=reference.fa ploidy=1 prefilter
#You can stop at this point; the remainder is optional.
# --------------- #
#Optional error-correction and recalibration for better quality:
#Generate recalibration matrices
calctruequality.sh in=mapped.sam.gz vcf=vars.vcf.gz
#Recalibrate. This can be done on the mapped reads instead of remapping, but if error-correction is desired it needs to be done on the unmapped reads.
bbduk.sh in=filtered.fq.gz out=recal.fq.gz recalibrate ordered
#Error-correct by overlap (for paired reads only)
bbmerge.sh in=recal.fq.gz out=ecco.fq.gz ecco strict mix ordered
#Quality-trim, if not already done earlier
bbduk.sh in=ecco.fq.gz out=qtrimmed.fq.gz qtrim=r trimq=8 ordered
#Re-map to the reference
bbmap.sh in=qtrimmed.fq.gz out=mapped2.sam.gz pigz unpigz
#Re-call variants
callvariants.sh in=mapped2.sam.gz out=vars2.vcf.gz ref=reference.fa ploidy=1 prefilter
# --------------- #
#Optional removal of reads with multiple substitutions unique to that read (an occasional Illumina error mode, particularly on Novaseq):
#If this step is performed, it may be advisable to do the initial variant-calling (prior to this step) with very liberal settings, e.g:
#callvariants.sh in=mapped.sam vcf=vars.vcf.gz ref=reference.fa clearfilters minreads=2 ploidy=1 prefilter
#Remove bad reads
filtersam.sh in=mapped.sam.gz out=clean.sam.gz outb=dirty.sam.gz vcf=vars.vcf.gz maxbadsubs=2 mbsad=2 mbsrd=2
#Re-call variants
callvariants.sh in=clean.sam.gz out=vars2.vcf.gz ref=reference.fa ploidy=1 prefilter
# --------------- # | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/variantPipeline.sh | variantPipeline.sh |
set -e
TAXPATH="auto"
#Fetch
wget -nv http://ftp.arb-silva.de/release_132/Exports/SILVA_132_SSURef_tax_silva_trunc.fasta.gz
wget -nv http://ftp.arb-silva.de/release_132/Exports/SILVA_132_LSURef_tax_silva_trunc.fasta.gz
#Process LSU
time reformat.sh in=SILVA_132_LSURef_tax_silva_trunc.fasta.gz out=lsu_utot.fa.gz fastawrap=4000 utot zl=9 qtrim=rl trimq=1 ow minlen=80 pigz=20
time clumpify.sh in=lsu_utot.fa.gz reorder groups=1 -Xmx31g fastawrap=4000 out=lsu_clumped.fa.gz zl=9 pigz=20
time rename.sh addprefix prefix="lcl\|LSU" in=lsu_clumped.fa.gz out=lsu_prefix.fa.gz pigz=20 zl=9 fastawrap=4000
time gi2taxid.sh -Xmx8g tree=auto in=lsu_prefix.fa.gz out=lsu_renamed.fa.gz silva zl=9 pigz=20 taxpath=$TAXPATH
time sortbyname.sh -Xmx31g in=lsu_renamed.fa.gz out=lsu_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 allowtemp=f taxpath=$TAXPATH
time dedupe.sh in=lsu_sorted.fa.gz out=lsu_deduped100pct.fa.gz zl=9 pigz=20 ordered fastawrap=4000
#Process SSU
time reformat.sh in=SILVA_132_SSURef_tax_silva_trunc.fasta.gz out=ssu_utot.fa.gz fastawrap=4000 utot zl=9 qtrim=rl trimq=1 ow minlen=80 pigz=20
time clumpify.sh in=ssu_utot.fa.gz reorder groups=1 -Xmx31g fastawrap=4000 out=ssu_clumped.fa.gz zl=9 pigz=20
time rename.sh addprefix prefix="lcl\|SSU" in=ssu_clumped.fa.gz out=ssu_prefix.fa.gz pigz=20 zl=9 fastawrap=4000
time gi2taxid.sh -Xmx8g tree=auto in=ssu_prefix.fa.gz out=ssu_renamed.fa.gz silva zl=9 pigz=20 taxpath=$TAXPATH
time sortbyname.sh -Xmx31g in=ssu_renamed.fa.gz out=ssu_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 allowtemp=f taxpath=$TAXPATH
time dedupe.sh in=ssu_sorted.fa.gz out=ssu_deduped_sorted.fa.gz zl=9 pigz=20 ordered fastawrap=4000
#time sortbyname.sh -Xmx31g in=ssu_deduped100pct.fa.gz out=ssu_deduped_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 allowtemp=f
#Merge LSU and SSU
cat ssu_sorted.fa.gz lsu_sorted.fa.gz > both_sorted_temp.fa.gz
time sortbyname.sh -Xmx31g in=both_sorted_temp.fa.gz out=both_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 allowtemp=f taxpath=$TAXPATH
rm both_sorted_temp.fa.gz
cat ssu_deduped_sorted.fa.gz lsu_deduped_sorted.fa.gz > both_deduped_sorted_temp.fa.gz
time sortbyname.sh -Xmx31g in=both_deduped_sorted_temp.fa.gz out=both_deduped_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 allowtemp=f taxpath=$TAXPATH
rm both_deduped_sorted_temp.fa.gz
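#Exclude records from unclassified taxa; this filtered file is used to build the kmer blacklist below.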
filterbytaxa.sh include=f in=both_deduped_sorted.fa.gz id=2323,256318,12908 tree=auto requirepresent=f out=both_deduped_sorted_no_unclassified_bacteria.fa.gz zl=9 fastawrap=9999 ow
#Sketch steps
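#Build blacklists of kmers shared by many species or genera, merge them, and exclude these uninformative kmers when sketching.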
time sketchblacklist.sh -Xmx16g in=both_deduped_sorted_no_unclassified_bacteria.fa.gz prefilter=f tree=auto taxa silva taxlevel=species ow out=blacklist_silva_species_500.sketch mincount=500
time sketchblacklist.sh -Xmx16g in=both_deduped_sorted_no_unclassified_bacteria.fa.gz prefilter=f tree=auto taxa silva taxlevel=genus ow out=blacklist_silva_genus_200.sketch mincount=200
time mergesketch.sh -Xmx1g in=blacklist_silva_species_500.sketch,blacklist_silva_genus_200.sketch out=blacklist_silva_merged.sketch
time sketch.sh files=31 out=both_taxa#.sketch in=both_deduped_sorted.fa.gz size=200 maxgenomefraction=0.1 -Xmx8g tree=auto mode=taxa ow silva blacklist=blacklist_silva_merged.sketch autosize ow
time sketch.sh files=31 out=both_seq#.sketch in=both_deduped_sorted.fa.gz size=200 maxgenomefraction=0.1 -Xmx8g tree=auto mode=sequence ow silva blacklist=blacklist_silva_merged.sketch autosize ow parsesubunit
cp blacklist_silva_merged.sketch /global/projectb/sandbox/gaag/bbtools/jgi-bbtools/resources/ | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/silva/fetchSilva.sh | fetchSilva.sh |
##Written by Brian Bushnell
##Last modified May 4, 2020
##Description: Creates a recalibration matrix for Illumina data.
##Usage: recal.sh <prefix>
##For example, "recal.sh Sample1" if the data is in Sample1.fq.gz
##This script creates quality-score recalibration matrices for processing Illumina PE reads.
##It needs to be run ONCE on a single library (preferably a large one) from a sequencing run.
##Then the primary script will use the recalibration matrices for all of the libraries,
##assuming all the processing is done in the same directory.
##This script assumes input data is single-ended or paired and interleaved.
##If data is paired in twin files, you can run "reformat.sh in1=r1.fq.gz in2=r2.fq.gz out=both.fq.gz" to interleave it.
##Grab the sample name from the command line
NAME="$1"
##Specify the viral reference file.
##NC_045512.fasta contains the SARS-CoV-2 genome, equivalent to bbmap/resources/Covid19_ref.fa
REF="NC_045512.fasta"
##Discover adapter sequence for this library based on read overlap.
##This step should be skipped for single-ended reads.
bbmerge.sh in="$NAME".fq.gz outa="$NAME"_adapters.fa ow reads=1m
##Adapter-trim and discard everything with adapter sequence so the reads are uniform length.
##This assumes PE 2x150bp reads; minlen should be set to read length
bbduk.sh -Xmx1g in="$NAME".fq.gz out=trimmed.fq.gz minlen=150 ktrim=r k=21 mink=9 hdist=2 hdist2=1 ref="$NAME"_adapters.fa ow tbo tpe
##Note - if the reads are single-ended, use this command instead:
#bbduk.sh -Xmx1g in="$NAME".fq.gz out=trimmed.fq.gz minlen=150 ktrim=r k=21 mink=9 hdist=2 hdist2=1 ref=adapters ow
##Map the reads with very high sensitivity.
bbmap.sh ref="$REF" in=trimmed.fq.gz outm=mapped.sam.gz vslow -Xmx6g ow
##Discover true variants.
callvariants.sh in=mapped.sam.gz ref="$REF" out=recal.vcf -Xmx6g ow
##Generate recalibration matrices.
calctruequality.sh in=mapped.sam.gz vcf=recal.vcf
##Now the recalibration matrices are stored in ./ref
##BBDuk can be run with the 'recal' flag to recalibrate data (mapped or unmapped).
##It should be run from the directory containing /ref | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/covid/recal.sh | recal.sh |
##Written by Brian Bushnell
##Last modified May 4, 2020
##Description: Calls SARS-CoV-2 variants from Illumina amplicon data.
## This script assumes input data is paired-end.
##Usage: processCorona.sh <prefix>
##For example, "processCorona.sh Sample1" if the data is in Sample1.fq.gz
##Grab the sample name from the command line.
NAME="$1"
##Specify the viral reference file.
##NC_045512.fasta contains the SARS-CoV-2 genome, equivalent to bbmap/resources/Covid19_ref.fa
REF="NC_045512.fasta"
##Set minimum coverage for genotype calls.
##Areas below this depth will be set to N in the consensus genome.
MINCOV=3
##This line is in case the script is being re-run, to clear the old output.
rm "$NAME"*.sam.gz "$NAME"*.bam "$NAME"*.bai "$NAME"*.txt "$NAME"*.fa "$NAME"*.vcf
##If data is paired in twin files, interleave it into a single file.
##Otherwise, skip this step.
##In this case, the files are assumed to be named "Sample1_R1.fq.gz" and "Sample1_R2.fq.gz"
#reformat.sh in="$NAME"_R#.fq.gz out="$NAME".fq.gz
##Split into Covid and non-Covid reads if this has not already been done.
##This step can be skipped if non-Covid was already removed.
bbduk.sh ow -Xmx1g in="$NAME".fq.gz ref="$REF" outm="$NAME"_viral.fq.gz outu="$NAME"_nonviral.fq.gz k=25
##Recalibrate quality scores prior to any trimming.
##*** Requires a recalibration matrix in the working directory (see recal.sh for details). ***
##This step is optional but useful for Illumina binned quality scores.
bbduk.sh in="$NAME"_viral.fq.gz out="$NAME"_recal.fq.gz recalibrate -Xmx1g ow
##Discover adapter sequence for this library based on read overlap.
##You can examine the adapters output file afterward if desired;
##If there were too few short-insert pairs this step will fail (and you can just use the default Illumina adapters).
bbmerge.sh in="$NAME"_recal.fq.gz outa="$NAME"_adapters.fa ow reads=1m
##Remove duplicates by sequence similarity.
##This is more memory-efficient than dedupebymapping.
clumpify.sh in="$NAME"_recal.fq.gz out="$NAME"_clumped.fq.gz zl=9 dedupe s=2 passes=4 -Xmx31g
##Perform adapter-trimming on the reads.
##Also do quality trimming and filtering.
##If desired, also do primer-trimming here by adding, e.g., 'ftl=20' to trim the leftmost 20 bases.
##If the prior adapter-detection step failed, use "ref=adapters"
bbduk.sh in="$NAME"_clumped.fq.gz out="$NAME"_trimmed.fq.gz minlen=60 ktrim=r k=21 mink=9 hdist=2 hdist2=1 ref="$NAME"_adapters.fa maq=14 qtrim=r trimq=10 maxns=0 tbo tpe ow -Xmx1g
##Align reads to the reference.
bbmap.sh ref="$REF" in="$NAME"_trimmed.fq.gz outm="$NAME"_mapped.sam.gz nodisk local maxindel=500 -Xmx4g ow k=12
##Deduplicate based on mapping coordinates.
##Note that if you use single-ended amplicon data, you will lose most of your data here.
dedupebymapping.sh in="$NAME"_mapped.sam.gz out="$NAME"_deduped.sam.gz -Xmx31g ow
##Remove junk reads with unsupported unique deletions; these are often chimeric.
filtersam.sh ref="$REF" ow in="$NAME"_deduped.sam.gz out="$NAME"_filtered.sam.gz mbad=1 del sub=f mbv=0 -Xmx4g
##Remove junk reads with multiple unsupported unique substitutions; these are often junk, particularly on Novaseq.
##This step is not essential but reduces noise.
filtersam.sh ref="$REF" ow in="$NAME"_filtered.sam.gz out="$NAME"_filtered2.sam.gz mbad=1 sub mbv=2 -Xmx4g
##Trim soft-clipped bases.
bbduk.sh in="$NAME"_filtered2.sam.gz trimclip out="$NAME"_trimclip.sam.gz -Xmx1g ow
##Call variants from the sam files.
##The usebias=f/minstrandratio=0 flags are necessary for amplicon data due to strand bias,
##and should be removed if the data is exclusively shotgun/metagenomic or otherwise randomly fragmented.
callvariants.sh in="$NAME"_trimclip.sam.gz ref="$REF" out="$NAME"_vars.vcf -Xmx4g ow strandedcov usebias=f minstrandratio=0 maf=0.6 minreads="$MINCOV" mincov="$MINCOV" minedistmax=30 minedist=16
##Calculate reduced coverage as per CallVariants defaults (ignoring outermost 5bp of reads).
pileup.sh in="$NAME"_trimclip.sam.gz basecov="$NAME"_basecov_border5.txt -Xmx4g ow border=5
##Generate a mutant reference by applying the detected variants to the reference.
##This is essentially the reference-guided assembly of the strain.
##Also changes anything below depth MINCOV to N (via the mindepth flag).
applyvariants.sh in="$REF" out="$NAME"_genome.fa vcf="$NAME"_vars.vcf basecov="$NAME"_basecov_border5.txt ow mindepth="$MINCOV"
##Make bam/bai files; requires samtools to be installed.
##This step is only necessary for visualization, not variant-calling.
#samtools view -bShu "$NAME"_trimclip.sam.gz | samtools sort -m 2G -@ 3 - -o "$NAME"_sorted.bam
#samtools index "$NAME"_sorted.bam
##At this point, "$NAME"_sorted.bam, "$NAME"_sorted.bam.bai, "$REF", and "$NAME"_vars.vcf can be used for visualization in IGV. | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/covid/processCorona.sh | processCorona.sh |
Written by Brian Bushnell
Last modified April 8, 2020
Contents:
This directory contains a collection of scripts written for calling variants and generating consensus genomes from SARS-CoV-2 ("Covid") Illumina data using BBTools. They were designed and optimized for several sets of libraries (from NextSeq and NovaSeq platforms) containing a mix of shotgun and amplicon data using various primers, and may need adjustment for your specific experimental design.
These scripts, and those in BBTools, should work in Bash on Linux, MacOS, or Windows 10; other shells generally work as well.
Usage:
These scripts are just guidelines showing how I am processing Covid data.
If you want to use them without modification, follow these steps:
1) Install BBTools (you need the latest version - 38.83+ - due to some new features I added for Covid!).
a) If you don't have Java, install Java.
i) If you run java --version on the command line and it reports version 8+, you're fine.
ii) Otherwise, you can get it from https://openjdk.java.net/install/index.html
b) Download BBTools 38.83 or higher from https://sourceforge.net/projects/bbmap/
c) Unzip the archive: tar -xzf BBMap_38.83.tar.gz
d) Add it to the path: export PATH=/path/to/bbmap/:$PATH
i) Now you can run the shell scripts from wherever.
e) Samtools is not necessary, but recommended if you want to make the bam files for visualization.
2) Rename and interleave the files if they are paired, so libraries are in the format "prefix.fq.gz" with one file per library (an example command is given after this list).
3) Copy the Covid reference to the directory.
4) Modify the template processCoronaWrapper.sh:
a) Pick a library to use for quality-score calibration if desired (line 16).
   b) Add lines, or make a loop, so that processCorona.sh is called on all of the libraries (lines 20-21); see the example loop after this list.
c) Delete lines 8-9 to let the script run.
5) Run processCoronaWrapper.sh.
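
Example commands for steps 2 and 4b (sample and file names are placeholders; adjust them to your data):
reformat.sh in1=Sample1_R1.fq.gz in2=Sample1_R2.fq.gz out=Sample1.fq.gz
for f in *.fq.gz; do bash processCorona.sh "${f%.fq.gz}"; done
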
Note:
You can remove human reads with a command like this (where reads.fq can be single-ended or paired/interleaved):
bbmap.sh ref=hg19.fa in=reads.fq outm=human.fq outu=nonhuman.fq bloom fast
| ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/covid/readme.txt | readme.txt |
set -e
#Written by Brian Bushnell
#Last updated August 6, 2019
#Fetches and sketches Silva.
#Be sure taxonomy is updated first!
#To use this script outside of NERSC, modify $TAXPATH to point to your directory with the BBTools taxonomy data,
#e.g. TAXPATH="/path/to/taxonomy_directory/"
#Also, check Silva's archive (https://www.arb-silva.de/no_cache/download/archive/) for a newer version.
TAXPATH="auto"
#Fetch files.
#"_tax_silva_trunc" means it has taxonomic information, and it is truncated to retain only alignable 16S bases, not additional sequence.
wget -nv http://ftp.arb-silva.de/release_132/Exports/SILVA_132_SSURef_tax_silva_trunc.fasta.gz
wget -nv http://ftp.arb-silva.de/release_132/Exports/SILVA_132_LSURef_tax_silva_trunc.fasta.gz
#Process SSU
#This transforms U to T, trims leading or trailing Ns, discards sequences under 80bp, and changes the line wrap to 4000 to save space.
time reformat.sh in=SILVA_132_SSURef_tax_silva_trunc.fasta.gz out=ssu_utot.fa.gz fastawrap=4000 utot zl=9 qtrim=rl trimq=1 ow minlen=80 pigz=20
#Clumpify step is not strictly necessary
time clumpify.sh in=ssu_utot.fa.gz reorder groups=1 -Xmx31g fastawrap=4000 out=ssu_clumped.fa.gz zl=9 pigz=20 bgzip
#Prefix with the subunit identifier to allow unique names when SSU and LSU are merged
time rename.sh addprefix prefix="lcl\|SSU" in=ssu_clumped.fa.gz out=ssu_prefix.fa.gz pigz=20 bgzip zl=9 fastawrap=4000
#Rename with the prefix tid|number based on the TaxID.
time gi2taxid.sh -Xmx8g tree=auto gi=null accession=null in=ssu_prefix.fa.gz out=ssu_renamed.fa.gz silva zl=9 pigz=20 bgzip taxpath=$TAXPATH
#Sort by taxonomy to put records from the same organism together, and increase compression
time sortbyname.sh -Xmx31g in=ssu_renamed.fa.gz out=ssu_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 bgzip allowtemp=f taxpath=$TAXPATH
#Remove duplicate or contained sequences
time dedupe.sh in=ssu_sorted.fa.gz out=ssu_deduped_sorted.fa.gz zl=9 pigz=20 bgzip ordered fastawrap=4000
#Process LSU
time reformat.sh in=SILVA_132_LSURef_tax_silva_trunc.fasta.gz out=lsu_utot.fa.gz fastawrap=4000 utot zl=9 qtrim=rl trimq=1 ow minlen=80 pigz=20 bgzip
time clumpify.sh in=lsu_utot.fa.gz reorder groups=1 -Xmx31g fastawrap=4000 out=lsu_clumped.fa.gz zl=9 pigz=20 bgzip
time rename.sh addprefix prefix="lcl\|LSU" in=lsu_clumped.fa.gz out=lsu_prefix.fa.gz pigz=20 bgzip zl=9 fastawrap=4000
time gi2taxid.sh -Xmx8g tree=auto gi=null accession=null in=lsu_prefix.fa.gz out=lsu_renamed.fa.gz silva zl=9 pigz=20 bgzip taxpath=$TAXPATH
time sortbyname.sh -Xmx31g in=lsu_renamed.fa.gz out=lsu_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 bgzip allowtemp=f
time dedupe.sh in=lsu_sorted.fa.gz out=lsu_deduped100pct.fa.gz zl=9 pigz=20 bgzip ordered fastawrap=4000
#time sortbyname.sh -Xmx31g in=lsu_deduped_sorted.fa.gz out=lsu_deduped_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 bgzip allowtemp=f taxpath=$TAXPATH
#Merge SSU and LSU
mergesorted.sh -Xmx31g ssu_sorted.fa.gz lsu_sorted.fa.gz bgzip unbgzip zl=9 out=both_sorted.fa.gz taxa tree=auto fastawrap=4000 minlen=60 ow taxpath=$TAXPATH
mergesorted.sh -Xmx31g ssu_deduped_sorted.fa.gz lsu_deduped_sorted.fa.gz bgzip unbgzip zl=9 out=both_deduped_sorted.fa.gz taxa tree=auto fastawrap=4000 minlen=60 ow taxpath=$TAXPATH
#Old version of merge
#cat ssu_sorted.fa.gz lsu_sorted.fa.gz > both_sorted_temp.fa.gz
#time sortbyname.sh -Xmx31g in=both_sorted_temp.fa.gz out=both_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 bgzip allowtemp=f taxpath=$TAXPATH
#rm both_sorted_temp.fa.gz
#cat ssu_deduped_sorted.fa.gz lsu_deduped_sorted.fa.gz > both_deduped_sorted_temp.fa.gz
#time sortbyname.sh -Xmx31g in=both_deduped_sorted_temp.fa.gz out=both_deduped_sorted.fa.gz ow taxa tree=auto fastawrap=4000 zl=9 pigz=20 bgzip allowtemp=f taxpath=$TAXPATH
#rm both_deduped_sorted_temp.fa.gz
#Sketch steps
#Make a blacklist of kmers occurring in at least 500 different species.
#A blacklist is HIGHLY recommended for Silva or any ribo database.
#This needs to be copied to bbmap/resources/
time sketchblacklist.sh -Xmx31g in=both_deduped_sorted.fa.gz prefilter=f tree=auto taxa silva taxlevel=species ow out=blacklist_silva_species_500.sketch mincount=500 taxpath=$TAXPATH
#Make one sketch per sequence
time sketch.sh files=31 out=both_seq#.sketch in=both_deduped_sorted.fa.gz size=200 maxgenomefraction=0.1 -Xmx8g tree=auto mode=sequence ow silva blacklist=blacklist_silva_species_500.sketch autosize ow parsesubunit taxpath=$TAXPATH
#Make one sketch per TaxID
time sketch.sh files=31 out=both_taxa#.sketch in=both_deduped_sorted.fa.gz size=200 maxgenomefraction=0.1 -Xmx8g tree=auto mode=taxa ow silva blacklist=blacklist_silva_species_500.sketch autosize ow taxpath=$TAXPATH | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/fetch/fetchSilva.sh | fetchSilva.sh |
set -e
#Written by Brian Bushnell
#Last updated August 19, 2019
#Filters prokaryotic clades, translates them to protein, and sketches them.
#To use this script outside of NERSC, modify $TAXPATH to point to your directory with the BBTools taxonomy data,
#e.g. TAXPATH="/path/to/taxonomy_directory/"
TAXPATH="auto"
#Ensure necessary executables are in your path
#module load pigz
mkdir prot
time filterbytaxa.sh in=sorted.fa.gz out=prot/prok.fa.gz fastawrap=4095 ids=Viruses,Bacteria,Archaea,plasmids tree=auto -Xmx16g include taxpath=$TAXPATH zl=9 requirepresent=f
cd prot
wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/protozoa/*genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fa.gz out=protozoa.fa.gz zl=9 server ow
wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/mitochondrion/*genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fa.gz out=mito.fa.gz zl=9 server ow
wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/plastid/*genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fa.gz out=chloro.fa.gz zl=9 server ow
time callgenes.sh in=prok.fa.gz outa=prok.faa.gz -Xmx16g ow ordered=f zl=9
time callgenes.sh in=protozoa.fa.gz outa=protozoa.faa.gz -Xmx16g ow ordered=f zl=9
time callgenes.sh in=mito.fa.gz outa=mito.faa.gz -Xmx16g ow ordered=f zl=9
time callgenes.sh in=chloro.fa.gz outa=chloro.faa.gz -Xmx16g ow ordered=f zl=9
cat prok.faa.gz mito.faa.gz chloro.faa.gz protozoa.faa.gz > all.faa.gz
time sketchblacklist.sh -Xmx63g in=prok.faa.gz prepasses=1 tree=auto taxa taxlevel=family ow out=blacklist_prokprot_family_40.sketch mincount=40 k=9,12 sizemult=3 amino taxpath=$TAXPATH
time sketchblacklist.sh -Xmx63g in=prok.faa.gz prepasses=1 tree=auto taxa taxlevel=genus ow out=blacklist_prokprot_genus_80.sketch mincount=80 k=9,12 sizemult=3 amino taxpath=$TAXPATH
mergesketch.sh -Xmx1g in=blacklist_prokprot_genus_80.sketch,blacklist_prokprot_family_40.sketch out=blacklist_prokprot_merged.sketch amino name0=blacklist_prokprot_merged k=9,12 ow
time bbsketch.sh -Xmx63g in=all.faa.gz out=taxa#.sketch mode=taxa tree=auto files=31 ow unpigz minsize=200 prefilter autosize blacklist=blacklist_prokprot_merged.sketch k=9,12 depth sizemult=3 amino taxpath=$TAXPATH | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/fetch/runRefSeqProtein.sh | runRefSeqProtein.sh |
set -e
#Written by Brian Bushnell
#Last updated August 7, 2019
#Fetches specific RefSeq clades, plus nt and nr.
#Be sure taxonomy is updated first!
#To use this script outside of NERSC, modify $TAXPATH to point to your directory with the BBTools taxonomy data,
#e.g. TAXPATH="/path/to/taxonomy_directory/"
TAXPATH="auto"
#Ensure necessary executables are in your path
#module load pigz
#These commands require pigz!
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/invertebrate/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.invertebrate.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_invertebrate.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/plant/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.plant.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_plant.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/plasmid/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.plasmid.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_plasmid.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/plastid/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.plastid.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_plastid.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/protozoa/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.protozoa.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_protozoa.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/mitochondrion/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.mitochondrion.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_mitochondrion.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/vertebrate_mammalian/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.vertebrate_mammalian.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_vertebrate_mammalian.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/vertebrate_other/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.vertebrate_other.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_vertebrate_other.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/archaea/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.archaea.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_archaea.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/bacteria/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.bacteria.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_bacteria.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/fungi/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.fungi.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_fungi.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nlm.nih.gov/refseq/release/viral/*.genomic.fna.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=refseq.viral.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_viral.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nih.gov/blast/db/FASTA/nt.gz | gi2taxid.sh -Xmx1g in=stdin.fna.gz out=nt.fna.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=5000 badheaders=badHeaders_nt.txt taxpath=$TAXPATH
time wget -q -O - ftp://ftp.ncbi.nih.gov/blast/db/FASTA/nr.gz | gi2taxid.sh -Xmx1g in=stdin.faa.gz out=nr.faa.gz pigz=6 unpigz=t bgzip=t preferbgzip=t zl=9 server ow maxbadheaders=-1 badheaders=badHeaders_nr.txt taxpath=$TAXPATH
touch done | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/fetch/fetchRefSeqClades.sh | fetchRefSeqClades.sh |
set -e
#Written by Brian Bushnell
#Last updated August 7, 2019
#Sketches RefSeq.
#Be sure taxonomy is updated first!
#To use this script outside of NERSC, modify $TAXPATH to point to your directory with the BBTools taxonomy data,
#e.g. TAXPATH="/path/to/taxonomy_directory/"
TAXPATH="auto"
#Ensure necessary executables are in your path
#module load pigz
#Sort by taxonomy.
#This makes sketching by taxa use much less memory because sketches can be written to disk as soon as they are finished.
#If this stage still runs out of memory please increase the -Xmx flag or add the flag 'maxfiles=4'.
#Or you can just skip it; sorting should no longer be necessary.
time sortbyname.sh -Xmx100g in=renamed.fa.gz memmult=0.33 out=sorted.fa.gz zl=9 pigz=64 bgzip taxa tree=auto gi=ignore fastawrap=1023 minlen=60 readbufferlen=2 readbuffers=1 taxpath=$TAXPATH
#If sortbyname runs out of memory in the last stage (when sorted.fa.gz is being written), you can usually run this:
#mergesorted.sh -Xmx100g sort_temp* bgzip unbgzip zl=8 out=sorted.fa.gz taxa tree=auto fastawrap=1023 minlen=60 readbufferlen=2 readbuffers=1 ow taxpath=$TAXPATH
#Make a blacklist of kmers occurring in at least 140 different genera.
time sketchblacklist.sh -Xmx63g in=sorted.fa.gz prepasses=1 tree=auto taxa taxlevel=species ow out=blacklist_refseq_genus_140.sketch mincount=140 k=32,24 sizemult=2 taxpath=$TAXPATH
#Generate 31 sketch files, with one sketch per species.
time bbsketch.sh -Xmx63g in=sorted.fa.gz out=taxa#.sketch mode=taxa tree=auto accession=null gi=null files=31 ow unpigz minsize=400 prefilter autosize blacklist=blacklist_refseq_genus_140.sketch k=32,24 depth sizemult=2 taxpath=$TAXPATH
#A query such as contigs.fa can now be compared to the new reference sketches like this:
#comparesketch.sh in=contigs.fa k=32,24 tree=auto taxa*.sketch blacklist=blacklist_refseq_genus_140 taxpath=$TAXPATH
#On NERSC systems, you can then set the default path to nt by pointing /global/projectb/sandbox/gaag/bbtools/refseq/current at the path to the new sketches.
#Then you can use the default set of refseq sketches like this:
#comparesketch.sh in=contigs.fa refseq tree=auto
#That command automatically adds the default path to the sketches, the blacklist, and the correct values for K. | ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/pipelines/fetch/sketchRefSeq.sh | sketchRefSeq.sh |
The accelerated versions of BBMap, Dedupe, BBMerge, and IceCreamFinder rely on the addition of a small amount C code that you may need to compile on your specific machine to take full advantage of its specific architecture. However, compiled versions are already provided for Linux and OSX (libbbtoolsjni.so and libbbtoolsjni.dylib) with libbbtoolsjni.so compiled by Brian Bushnell on a Sandy Bridge machine and libbbtoolsjni.dylib compiled by Jie Wang on a Macbook; these seem to work fine on different architectures and operating systems.
To recompile, most C compilers should suffice. On Linux and OS X, we use gcc (gcc.gnu.org). The compiling process will create a library that Java can then load and use during execution. Simple makefiles for OSX and Linux have been provided. To compile the accelerated versions of BBTools on OS X or Linux, change your directory to "bbmap/jni" and refer to the following commands:
Linux:
make -f makefile.linux
OS X:
make -f makefile.osx
Windows:
If you are familiar with cmake and have it installed on your system, there is a CMakeLists.txt file that should get you most of the way there, but a full Windows build has not been tested at the moment.
After the "make" command, a "libbbtoolsjni.xx" file should appear in the "jni" directory. Once you have the libraries built, run BBMap, Dedupe, BBMerge, or IceCreamFinder with the addition of the "usejni" flag. If Java complains that it can't find the library you just compiled, the -Djava.library.path=<dir> flag in the bash scripts is what tells Java where to look for native library, and it should already be pointing to the "jni" directory. Java also looks for specific library suffixes on different operating systems, e.g. OSX: .dylib, Linux: .so, Windows: .dll.
Note:
Before building this, you must have a JDK installed (Java 8+) and the JAVA_HOME environment variable set (to something like /usr/bin/java/jdk-11.0.2/).
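For example (the path is a placeholder for wherever your JDK is installed): export JAVA_HOME=/usr/lib/jvm/jdk-11.0.2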
| ARGs-OAP | /ARGs_OAP-2.3.2.tar.gz/ARGs_OAP-2.3.2/ARGs_OAP/bin/bbmap/jni/README.txt | README.txt |
# **--ARIclicker--** #
## Runtime Dependencies ##
* python>=3.0
## Requires ##
* pynput
## Installation ##
`pip install ARIclicker`
## Function ##
* `def autoclick(start_stop_key_character, end_key_character, button, delay_min, delay_max)`
## Attention ##
* This clicker can reduce the risk of a ban to a certain extent, but it cannot guarantee that you will not be banned!
* Please do not leave the clicker running for long periods! Doing so may put some strain on the hardware and may also increase the risk of a ban!
### have fun ~ ###
<br><br>
## Change log ##
### **2.0.1** (2023/7/17): ###
* Fixed several bugs
* Remove the "autopress" function
| ARIclicker | /ARIclicker-2.0.1.tar.gz/ARIclicker-2.0.1/README.md | README.md |
Copyright (c) 2022 Boris Senkovskiy
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | ARPES-GUI | /ARPES_GUI-0.0.4.tar.gz/ARPES_GUI-0.0.4/LICENSE.md | LICENSE.md |
# ANTARES data browser for micro-ARPES spatial maps
This is an interactive software package for visualizing and processing micro-ARPES spatial maps from the ANTARES beamline at the synchrotron radiation facility SOLEIL (France).
The micro-ARPES data are stored in NeXus (HDF5) files.
Main features:
- visualize the ARPES image from the selected point in the spatial map,
- visualize the integrated ARPES image from a selected area,
- convert ARPES data from real (E vs. angle) to momentum (E vs. k) space,
- select a region of interest in real and momentum space and integrate the spatial map inside the selected region,
- save images (svg, jpg, etc.) and data (ARPES or spatial) in 'csv' format.

| ARPES-GUI | /ARPES_GUI-0.0.4.tar.gz/ARPES_GUI-0.0.4/README.md | README.md |
# ARPOC
A simple reverse proxy that adds OpenID Connect Authentication and lets you
write access rules for services you want to protect.
## Fast tutorial
You will need:
* A domain name `<domain>`
* A tls keypair (`<fullchain>`, `<privkey>`)
* A server with python (3.7 or newer) `<python3>`
### Install
* Download the repository and run `<python3> setup.py install`, or install via pip: `pip install arpoc`
* If successful you should now have the `arpoc` command.
* Make yourself familiar with the basic interface with `arpoc --help`.
* Create a configuration file with `arpoc --print-sample-config`
* Save the configuration file (preferably under /etc/arpoc/config.yml)
* Create a default access control hierarchy (ARPOC can print a sample one; see `arpoc --help` for the corresponding option)
* Save the access control hierarchy in a json file (defaultdir: /etc/arpoc/acl/)
### Edit the sample configuration
Fill in the right values for `<keyfile>`, `<certfile>`, `<domainname>`, `<redirect>`
urls (the paths the openid connect providers will redirect the user to, with a leading
slash) and the contacts field (at least one valid mail address).
### Add an openid connect provider
You need the configuration url (it should end with .well-known/openid-configuration; cut this part off, it is added automatically).
You also need either:
* A configuration token
* A registration url and a registration token
* Client ID and Client Secret
#### Configuration URL and Token:
Choose a key which arpoc uses internally for the provider.
Add both parameters to the config.yml under
`openid_providers -> <key> -> configuration_url`
`openid_providers -> <key> -> configuration_token`
#### Registration URL and registration token:
If you already registered your client and have a registration token, add
the configuration url, the registration url and the registration token
to the config.yml under `openid_providers -> <key>`,
using the `configuration_url`, `registration_url`
and `registration_token` keys.
#### Client ID and Client Secret
Add the configuration url to the config.yml.
Call `arpoc --add-provider <key> --client-id <client_id> --client-secret <client-secret>`
### Add a service you want to protect.
You need the origin url, the proxy url and the key of an access control policy
set (the key of an ac entity in the json file with type policy set).
Choose a key which arpoc will internally use for the service.
Add the origin url, the proxy url (the path under which the service will be
available, with a leading slash) and the access control policy set using the
`origin_URL`, `proxy_URL` and `AC` keys under `services -> <service key>` in the config.yml.
*Now you should be able to access the service.*
## Dependencies
* [pyjwkest](https://github.com/IdentityPython/pyjwkest/) -- a python library for web tokens
* [lark-parser](https://github.com/lark-parser/lark) -- a parser for the access control language
* [pyoidc](https://github.com/OpenIDC/pyoidc) -- a python library for Open ID Connect
* ...
| ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/README.md | README.md |
import inspect
import logging
import os
from dataclasses import InitVar, asdict, dataclass, field, replace
from typing import Any, Dict, List, Optional
import yaml
from arpoc.exceptions import ConfigError
LOGGING = logging.getLogger()
@dataclass
class ProviderConfig:
"""
Configuration for a single Open ID Connect Provider
Attributes:
- **human_readable_name**: A name which arpoc uses when communicating
with the user / operator
- **configuration_url**: The base url of the OIDC provider. Without the
.well-known/ part
- **configuration_token**: The token ARPOC can use to register itself with
the OIDC provider
- **registration_token**: The token issued from the OIDC provider for a
specific client to obtain its configuration
- **registration_url**: The url where arpoc can obtain its configuration
after registration.
- **method**: Either 'auto', 'GET', or 'POST'. The HTTP method ARPOC will
use if the OIDC / OAuth standard gives a choice.
- **special_claim2scope**: A mapping from claim to scopes that will deliver
the claims.
**Mandatory arguments**:
- configuration_url
And either:
- configuration_token
Or:
- registration_token
- registration_url
"""
baseuri: InitVar[str]
""" arpoc's base uri
:meta private:
"""
human_readable_name: str
configuration_url: str = ""
configuration_token: str = ""
registration_token: str = ""
registration_url: str = ""
method: str = "auto"
special_claim2scope: InitVar[dict] = None
claim2scope: dict = field(init=False)
redirect_paths: List[str] = field(default_factory=list)
do_token_introspection: bool = True
def __post_init__(self, baseuri: str, special_claim2scope: Dict) -> None:
self.claim2scope = {
"name": ['profile'],
"family_name": ['profile'],
"given_name": ['profile'],
"middle_name": ['profile'],
"nickname": ['profile'],
"preferred_username": ['profile'],
"profile": ['profile'],
"picture": ['profile'],
"website": ['profile'],
"gender": ['profile'],
"birthdate": ['profile'],
"zoneinfo": ['profile'],
"locale": ['profile'],
"updated_at": ['profile'],
"email": ["email"],
"email_verified": ["email"],
"address": ["address"],
"phone": ["phone"],
"phone_number_verified": ["phone"]
}
if special_claim2scope:
for key, val in special_claim2scope.items():
self.claim2scope[key] = val
self.redirect_uris = []
for redirect_path in self.redirect_paths:
self.redirect_uris.append("{}{}".format(baseuri, redirect_path))
def check_method(self):
if self.method not in ["auto", "POST", "GET"]:
raise ConfigError(f"Method of Provider {self.human_readable_name} is not valid; must be auto, POST or GET")
def __getitem__(self, key: str) -> Any:
return getattr(self, key)
def default_redirect() -> List:
""" Default Redirect Path"""
return ["/secure/redirect_uris"]
def default_json_dir() -> List:
""" Default json path for access control entities """
return ["/etc/arpoc/acl"]
#pylint: disable=too-many-instance-attributes
@dataclass
class ProxyConfig:
"""
Configuration for the Proxy Setup
Attributes:
- **keyfile**: The path to the private key file of the TLS keypair
- **certfile**: The path to the certificate chain file (full chain)
- **domainname**: The domain name where ARPOC will be available
    - **contacts**: A list of mail contact addresses responsible for the ARPOC
instance
Mandatory: **keyfile**, **certfile**, **domainname**, **contacts**
"""
keyfile: str
certfile: str
domainname: str
contacts: List[str]
address: str = "0.0.0.0"
tls_port: int = 443
plain_port: int = 80
https_only: bool = True
username: str = "www-data"
groupname: str = "www-data"
secrets: str = "/var/lib/arpoc/secrets.yml"
tls_redirect: str = "/TLSRedirect"
auth: str = "/auth"
redirect: List[str] = field(default_factory=default_redirect)
def __getitem__(self, key: str) -> Any:
return getattr(self, key)
def __post_init__(self) -> None:
assert isinstance(self.redirect, List)
self.baseuri = "https://{}/".format(self.domainname)
self.redirect_uris = []
for redirect_path in self.redirect:
# skip first slash if redirect_path has one
rp = redirect_path[1:] if redirect_path.startswith('/') else redirect_path
self.redirect_uris.append(f"{self.baseuri}{rp}")
@dataclass
class ServiceConfig:
"""
Configuration for a single proxied Service
Attributes:
- **origin_URL**: The URL that will be proxied, or the special page string; see :ref:`Special Pages <specialpagessection>`
- **proxy_URL**: The *path* under which *origin_URL* will be available.
- **AC**: The policy set which is evaluated to decide the access request
- **objectsetters**: Configuration for the objectsetters
- **obligations**: Configuration for obligations
    - **authentication**: Authentication information which will be used to request *origin_URL*
Mandatory Arguments:
- *origin_URL*
- *proxy_URL*
- *AC*
"""
origin_URL: str
proxy_URL: str
AC: str
objectsetters: dict = field(default_factory=dict)
obligations: dict = field(default_factory=dict)
authentication: dict = field(default_factory=dict)
def __getitem__(self, key: str) -> Any:
return getattr(self, key)
@dataclass
class ACConfig:
""" Configuration for the access control
Attributes:
- **json_dir**: The directory where the AC Entities are stored. The files
must end with ".json"
"""
json_dir: List[str] = field(default_factory=default_json_dir)
def __getitem__(self, key: str) -> Any:
return getattr(self, key)
@dataclass
class Misc:
""" Misc Config Class
Attributes:
- **access_log**: The location to store the access log (HTTP requests)
- **error_log**: The location to store the error_log
- **daemonize**: If arpoc should start daemonized.
- **log_level**: ARPOC's log level. (DEBUG/INFO/ERROR/WARN). Affects also underlying libraries
- **pid_file**: Where ARPOC should store the process id file. Only used when daemonized.
- **plugin_dirs**: Where ARPOC should load plugins
No mandatory arguments.
"""
pid_file: str = "/var/run/arpoc.pid"
daemonize: bool = True
log_level: str = "INFO"
access_log: str = "/var/log/arpoc/access.log"
error_log: str = "/var/log/arpoc/error.log"
plugin_dirs: List[str] = field(default_factory=list)
class OIDCProxyConfig:
""" Config Container Which for all specific configuration """
def __init__(self,
config_file: Optional[str] = None,
std_config: Optional[str] = '/etc/arpoc/config.yml'):
self.openid_providers: Dict[str, ProviderConfig] = {}
self.proxy: ProxyConfig = ProxyConfig("", "", "", [""])
self.services: Dict[str, ServiceConfig] = {}
self.access_control = ACConfig()
self.misc: Misc = Misc()
default_paths = [std_config]
if 'OIDC_PROXY_CONFIG' in os.environ:
default_paths.append(os.environ['OIDC_PROXY_CONFIG'])
if config_file:
default_paths.append(config_file)
for filepath in default_paths:
if filepath:
try:
self.read_file(filepath)
except IOError:
pass
self.check_config()
def add_provider(self, name: str, prov_cfg: ProviderConfig) -> None:
""" Adds the provider with key <name> to the configuration """
self.openid_providers[name] = prov_cfg
def check_redirect_uri(self) -> None:
""" Checks if every redirect uri in the provider config is also in the proxy list """
for _, provider_obj in self.openid_providers.items():
for redirect_url in provider_obj.redirect_uris:
if redirect_url not in self.proxy.redirect:
raise ConfigError(f"{provider_obj.human_readable_name} has an invalid redirect_path")
def check_config_proxy_url(self) -> None:
""" Checks for duplicates in the proxy_url """
proxy_urls: List[str] = []
for key, service in self.services.items():
if service.proxy_URL in proxy_urls:
raise ConfigError("Bound different services to the same URL")
proxy_urls.append(service.proxy_URL)
assert self.proxy is not None
def check_config(self) -> None:
""" Make consistency checks for the arpoc config """
LOGGING.debug("checking config consistency")
for provider in self.openid_providers.values():
attrs = (getattr(provider, name) for name in dir(provider) if name.startswith("check_"))
methods = filter(inspect.ismethod, attrs)
for method in methods:
method()
attrs = (getattr(self, name) for name in dir(self) if name.startswith("check_") and name != "check_config")
methods = filter(inspect.ismethod, attrs)
for method in methods:
method()
def merge_config(self, new_cfg: Dict) -> None:
"""Merges the current configuration with a new configuration dict """
if 'proxy' in new_cfg:
if self.proxy:
self.proxy = replace(self.proxy, **new_cfg['proxy'])
else:
self.proxy = ProxyConfig(**new_cfg['proxy'])
if 'services' in new_cfg:
for key, val in new_cfg['services'].items():
service_cfg = ServiceConfig(**val)
self.services[key] = service_cfg
if 'openid_providers' in new_cfg:
for key, val in new_cfg['openid_providers'].items():
provider_cfg = ProviderConfig(self.proxy.baseuri, **val)
self.openid_providers[key] = provider_cfg
if 'access_control' in new_cfg:
self.access_control = ACConfig(**new_cfg['access_control'])
if 'misc' in new_cfg:
if self.misc:
self.misc = replace(self.misc, **new_cfg['misc'])
else:
self.misc = Misc(**new_cfg['misc'])
def read_file(self, filepath: str) -> None:
""" Read the YAML file <filepath> and add the contents to the current configuration """
with open(filepath, 'r') as ymlfile:
new_cfg = yaml.safe_load(ymlfile)
self.merge_config(new_cfg)
def print_config(self) -> None:
""" Print the current config """
cfg: Dict[str, Dict] = dict()
cfg['services'] = dict()
cfg['openid_providers'] = dict()
for services_key, services_obj in self.services.items():
cfg['services'][services_key] = asdict(services_obj)
for providers_key, providers_obj in self.openid_providers.items():
cfg['openid_providers'][providers_key] = asdict(providers_obj)
cfg['proxy'] = asdict(self.proxy)
cfg['access_control'] = asdict(self.access_control)
print(yaml.dump(cfg, sort_keys=False))
@staticmethod
def print_sample_config() -> None:
""" Prints a sample config """
provider = ProviderConfig("", "", "", "", "", "")
proxy = ProxyConfig("", "", "", [""], "")
service = ServiceConfig("", "", "", {}, {})
ac_config = ACConfig()
misc = Misc()
# delete the default values of claim2scope
provider_dict = asdict(provider)
del provider_dict['claim2scope']
del provider_dict['do_token_introspection']
cfg = {
"openid_providers": {
"example": provider_dict
},
"proxy": asdict(proxy),
"services": {
"example": asdict(service)
},
"access_control": asdict(ac_config),
"misc": asdict(misc)
}
print(yaml.dump(cfg, sort_keys=False))
cfg: Optional[OIDCProxyConfig] = None
if __name__ == "__main__":
cfg = OIDCProxyConfig()
cfg.print_sample_config() | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/config.py | config.py |
import logging
import logging.config
import copy
import os
import hashlib
import urllib.parse
#from http.client import HTTPConnection
#HTTPConnection.debuglevel = 1
from typing import List, Dict, Union, Tuple, Callable, Iterable, Optional, Any
# side packages
##oic
import oic.oic
from oic.utils.authn.client import CLIENT_AUTHN_METHOD
from oic.oic.message import RegistrationResponse, AuthorizationResponse
from oic import rndstr
#from oic.utils.http_util import Redirect
import oic.extension.client
import oic.exception
import requests
import cherrypy
from cherrypy._cpdispatch import Dispatcher
#from cherrypy.process.plugins import DropPrivileges, Daemonizer, PIDFile
from jinja2 import Environment, FileSystemLoader
from jwkest import jwt, BadSyntax
#### Own Imports
import arpoc.ac as ac
import arpoc.exceptions
import arpoc.config as config
import arpoc.cache
import arpoc.utils
from arpoc.plugins import EnvironmentDict, ObjectDict, ObligationsDict
#logging.basicConfig(level=logging.DEBUG)
LOGGING = logging.getLogger(__name__)
env = Environment(loader=FileSystemLoader(
os.path.join(os.path.dirname(__file__), 'resources', 'templates')))
class OidcHandler:
""" A class to handle the connection to OpenID Connect Providers """
def __init__(self, cfg: config.OIDCProxyConfig):
self.__oidc_provider: Dict[str, oic.oic.Client] = dict()
self.cfg = cfg
self._secrets: Dict[str, dict] = dict()
self._cache = arpoc.cache.Cache()
# assert self.cfg.proxy is not None
def get_secrets(self) -> Dict[str, dict]:
""" Returns the secrets (client_id, client_secret) of the OIDC Relying Partys"""
return self._secrets
def set_secrets(self, secrets):
""" Set the secrets dict """
self._secrets = secrets
def register_first_time(self, name: str,
provider: config.ProviderConfig) -> None:
""" Registers a client or reads an existing client registration.
If registration_url and registration_token are present in the configuration
file, the existing registration is read from the registration endpoint
using the registration_token.
Otherwise, if configuration_url and configuration_token are present, the
client is registered dynamically at the registration endpoint announced
via the well-known location url (configuration_url).
"""
client = oic.oic.Client(client_authn_method=CLIENT_AUTHN_METHOD)
registration_response: RegistrationResponse
try:
if provider.registration_url and provider.registration_token:
provider_info = client.provider_config(
provider['configuration_url'])
# Only read configuration
registration_response = client.registration_read(
url=provider['registration_url'],
registration_access_token=provider['registration_token'])
args = dict()
args['redirect_uris'] = registration_response['redirect_uris']
elif provider.configuration_url and provider.configuration_token:
assert self.cfg.proxy is not None
provider_info = client.provider_config(
provider['configuration_url'])
redirect_uris = (provider.redirect_uris
if provider.redirect_uris
else self.cfg.proxy['redirect_uris'])
args = {
"redirect_uris": redirect_uris,
"contacts": self.cfg.proxy['contacts']
}
registration_response = client.register(
provider_info["registration_endpoint"],
registration_token=provider['configuration_token'],
**args)
else:
raise arpoc.exceptions.OIDCProxyException(
"Error in the configuration file")
except oic.exception.RegistrationError:
LOGGING.warning("Provider %s returned an error on registration",
name)
LOGGING.debug("Seems to be permament, so not retrying")
return
except (requests.exceptions.MissingSchema,
requests.exceptions.InvalidSchema,
requests.exceptions.InvalidURL):
raise arpoc.exceptions.OIDCProxyException(
"Error in the configuration file")
self.__oidc_provider[name] = client
self.__oidc_provider[name].redirect_uris = args["redirect_uris"]
self._secrets[name] = registration_response.to_dict()
def create_client_from_secrets(self, name: str,
provider: config.ProviderConfig) -> None:
""" Try to create an openid connect client from the secrets that are
saved in the secrets file"""
client_secrets = self._secrets[name]
client = oic.oic.Client(client_authn_method=CLIENT_AUTHN_METHOD)
client.provider_config(provider.configuration_url)
client_reg = RegistrationResponse(**client_secrets)
client.store_registration_info(client_reg)
client.redirect_uris = client_secrets[
'redirect_uris']
self.__oidc_provider[name] = client
self._secrets[name] = client_reg.to_dict()
def get_userinfo_access_token(self, access_token: str) -> Tuple[int, Dict]:
""" Get the user info if the user supplied an access token"""
# TODO: error handling (no jwt)
# TODO: allow issuer parameter in header here
userinfo = {}
LOGGING.debug(access_token)
try:
access_token_obj = jwt.JWT()
access_token_obj.unpack(access_token)
LOGGING.debug(access_token_obj.payload())
issuer = access_token_obj.payload()['iss']
except BadSyntax:
LOGGING.debug("Decoding Access Token failed")
if 'x-arpoc-issuer' in cherrypy.request.headers:
LOGGING.debug("issuer hint found")
issuer = cherrypy.request.headers['x-arpoc-issuer']
else:
raise Exception("400 - Bad Request") # TODO
# check if issuer is in provider list
client = None
for provider_name, obj in self.__oidc_provider.items():
LOGGING.debug(obj)
if obj.issuer == issuer:
client = obj
client_name = provider_name
valid_until = 0
if client:
if self.cfg.openid_providers[client_name].do_token_introspection:
# do userinfo with provided AT
# we need here the oauth extension client
args = ["client_id", "client_authn_method", "keyjar", "config"]
kwargs = {x: client.__getattribute__(x) for x in args}
oauth_client = oic.extension.client.Client(**kwargs)
for key, val in client.__dict__.items():
if key.endswith("_endpoint"):
oauth_client.__setattr__(key, val)
oauth_client.client_secret = client.client_secret
introspection_kwargs = {'authn_method' : 'client_secret_basic'}
if self.cfg.openid_providers[client_name].method != "auto":
introspection_kwargs['method'] = self.cfg.openid_providers[client_name].method
introspection_res = oauth_client.do_token_introspection(
request_args={
'token': access_token,
'state': rndstr()
},
**introspection_kwargs)
if introspection_res['active']:
if 'exp' in introspection_res:
valid_until = introspection_res['exp']
else:
valid_until = arpoc.utils.now() + 30
else:
valid_until = arpoc.utils.now() + 30
userinfo_kwargs = {'token' : access_token}
if self.cfg.openid_providers[client_name].method != "auto":
userinfo_kwargs['method'] = self.cfg.openid_providers[client_name].method
userinfo = client.do_user_info_request(**userinfo_kwargs)
else:
LOGGING.info(
"Access token received, but no suitable provider in configuration"
)
LOGGING.info("Access token issuer %s", issuer)
return valid_until, dict(userinfo)
@staticmethod
def _check_session_refresh() -> bool:
""" checks if the session must be refreshed. If there is no session,
then False is returned"""
if 'refresh' in cherrypy.session:
now = arpoc.utils.now()
LOGGING.debug("refresh necessary: %s, now: %s",
cherrypy.session['refresh'], now)
return cherrypy.session['refresh'] < now
return False
def need_claims(self, claims: List[str]) -> None:
""" Maps claims to scopes and checks
if the scopes were already requested.
Else start auth procedure to get requested scopes"""
cherrypy.session["url"] = cherrypy.url()
if 'provider' in cherrypy.session:
provider = cherrypy.session['provider']
scopes = set(["openid"])
for claim in claims:
LOGGING.debug("Need claim %s", claim)
scopes |= set(
self.cfg.openid_providers[provider].claim2scope[claim])
LOGGING.debug("Need scopes %s", scopes)
self._auth(scopes)
else:
raise cherrypy.HTTPRedirect(self.cfg.proxy.auth)
@staticmethod
def get_access_token_from_headers() -> Union[None, str]:
""" Returns the Access Token from the authorization header.
Strips the bearer part """
if 'authorization' in cherrypy.request.headers:
auth_header = cherrypy.request.headers['authorization']
len_bearer = len("bearer")
if len(auth_header) > len_bearer:
auth_header_start = auth_header[0:len_bearer]
if auth_header_start.lower() == 'bearer':
access_token = auth_header[len_bearer + 1:]
return access_token
return None
def refresh_access_token(self, hash_access_token: str) -> Tuple[str, Dict]:
""" Refreshes the access token.
This can only be done, if we are Client (normal web interface). """
client_name = cherrypy.session['provider']
client = self._get_oidc_client(client_name)
cache_entry = self._cache[hash_access_token]
state = cache_entry['state']
try:
del self._cache[hash_access_token]
userinfo_kwargs = {'state' : state}
if self.cfg.openid_providers[client_name].method != "auto":
userinfo_kwargs['method'] = self.cfg.openid_providers[client_name].method
userinfo = dict(client.do_user_info_request(**userinfo_kwargs))
new_token = client.get_token(state=state)
LOGGING.debug("New token: %s", new_token)
hash_access_token = hashlib.sha256(
str(new_token.access_token).encode()).hexdigest()
cherrypy.session['hash_at'] = hash_access_token
valid_until, refresh_valid = self.get_validity_from_token(
new_token)
self._cache.put(
hash_access_token, {
"state": state,
"valid_until": valid_until,
"userinfo": userinfo,
"scopes": new_token.scope
}, refresh_valid)
return hash_access_token, userinfo
except Exception as excep:
LOGGING.debug(excep.__class__)
raise
def get_userinfo(self) -> Tuple[Optional[str], Dict]:
"""
Gets the userinfo from the OIDC Provider.
This works in two steps:
1. Check if the user supplied an Access Token
2. Otherwise, check the session management if the user is logged in
"""
access_token_header = self.get_access_token_from_headers()
if access_token_header:
hash_access_token = hashlib.sha256(
access_token_header.encode()).hexdigest()
try:
return hash_access_token, self._cache[hash_access_token][
'userinfo']
except KeyError:
pass
# how long is the token valid?
valid_until, userinfo = self.get_userinfo_access_token(
access_token_header)
self._cache.put(hash_access_token, {"userinfo": userinfo},
valid_until)
return hash_access_token, userinfo
# check if refresh is needed
if 'hash_at' in cherrypy.session:
hash_access_token = cherrypy.session['hash_at']
now = arpoc.utils.now()
# is the access token still valid?
try:
cache_entry = self._cache[hash_access_token]
except KeyError:
# hash_at is not in cache!
LOGGING.debug('Hash at not in cache!')
LOGGING.debug("Cache %s", self._cache.keys())
return (None, {})
# the entry valid_until is the validity of the refresh token, not of the cache entry
if cache_entry['valid_until'] > now:
return hash_access_token, cache_entry['userinfo']
return self.refresh_access_token(hash_access_token)
return None, {}
def _get_oidc_client(self, name: str) -> oic.oic.Client:
return self.__oidc_provider[name]
@staticmethod
def get_validity_from_token(token: oic.oic.Token) -> Tuple[int, int]:
"""Find the validity of the id_token, access_token and refresh_token """
# how long is the information valid?
# oauth has the expires_in (but only RECOMMENDED)
# oidc has exp and iat required.
# so if: iat + expires_in < exp -> weird stuff (but standard compliant)
(iat, exp) = (token.id_token['iat'], token.id_token['exp'])
at_exp = exp
try:
at_exp = token.expires_in + iat
except AttributeError:
pass
valid_until = min(at_exp, exp)
refresh_valid = valid_until
try:
refresh_valid = arpoc.utils.now() + token.refresh_expires_in
except AttributeError:
try:
if token.refresh_token:
refresh_valid = valid_until
# TODO: add token introspection for the refresh_token (if jwt)
except AttributeError:
# we don't have a refresh token
pass
return (valid_until, refresh_valid)
def do_userinfo_request_with_state(self, state: str) -> Dict:
""" Perform the userinfo request with given state """
client_name = cherrypy.session['provider']
client = self._get_oidc_client(client_name)
try:
userinfo_kwargs = {'state' : state}
if self.cfg.openid_providers[client_name].method != "auto":
userinfo_kwargs['method'] = self.cfg.openid_providers[client_name].method
userinfo = client.do_user_info_request(**userinfo_kwargs)
except oic.exception.CommunicationError as excep:
exception_args = excep.args
LOGGING.debug(exception_args)
if exception_args[
0] == "Server responded with HTTP Error Code 405":
# allowed methods in [1]
if exception_args[1][0] in ["GET", "POST"]:
userinfo = client.do_user_info_request(
state=state, method=exception_args[1][0])
else:
raise
except oic.exception.RequestError as excep:
LOGGING.debug(excep.args)
raise
return userinfo
def get_access_token_from_code(self, state: str,
code: str) -> oic.oic.Token:
""" Takes the OIDC Authorization Code,
Performs the Access Token Request
Returns: The Access Token Request Response"""
# Get Access Token
qry = {'state': state, 'code': code}
client_name = cherrypy.session['provider']
client = self._get_oidc_client(client_name)
aresp = client.parse_response(AuthorizationResponse,
info=qry,
sformat="dict")
if state != aresp['state']:
raise RuntimeError
LOGGING.debug("Authorization Response %s",
dict(aresp)) # just code and state
args = {"code": aresp["code"]}
at_request_kwargs = { 'state' : aresp['state'], 'authn_method' : 'client_secret_basic' }
# this is forbidden in the oauth standard (3.2)
#if self.cfg.openid_providers[client_name].method != "auto":
# at_request_kwargs['method'] = self.cfg.openid_providers[client_name].method
resp = client.do_access_token_request(
request_args=args, **at_request_kwargs)
LOGGING.debug("Access Token Request %s", resp)
token = client.get_token(state=aresp["state"])
assert isinstance(token, oic.oic.Token)
return token
@staticmethod
def check_scopes(request: List, response: List) -> Optional[str]:
""" Compares the requested scopes with the scopes in the response and
returns an error page if the granted scopes are insufficient"""
requested_scopes = set(request)
response_scopes = set(response)
# Did we get the requested scopes?
if not requested_scopes.issubset(response_scopes):
tmpl = env.get_template('500.html')
info = {
"error":
"The openid provider did not respond with the requested scopes",
"requested scopes": request,
"scopes in answer": response
}
return tmpl.render(info=info)
return None
def redirect(self, **kwargs: Any) -> str:
"""Handler for the redirect URI (entry point after returning from the OIDC Provider)"""
# We are trying to get the user info here from the provider
LOGGING.debug(cherrypy.session)
LOGGING.debug('kwargs is %s', kwargs)
# Errors?
if 'error' in kwargs:
tmpl = env.get_template('500.html')
return tmpl.render(info=kwargs)
# TODO: Here we should check that state has not been altered!
token = self.get_access_token_from_code(kwargs['state'],
kwargs['code'])
hash_at = hashlib.sha256(str(token).encode()).hexdigest()
cherrypy.session['hash_at'] = hash_at
# check for scopes:
response_check = self.check_scopes(cherrypy.session["scopes"],
token.scope)
if response_check:
return response_check
cherrypy.session["scopes"] = token.scope
valid_until, refresh_valid = self.get_validity_from_token(token)
userinfo = self.do_userinfo_request_with_state(state=kwargs["state"])
self._cache.put(
hash_at, {
"state": kwargs['state'],
"valid_until": valid_until,
"userinfo": dict(userinfo),
"scopes": token.scope
}, refresh_valid)
# There should be an url in the session so we can redirect
if "url" in cherrypy.session:
raise cherrypy.HTTPRedirect(cherrypy.session["url"])
raise RuntimeError
def _auth(self, scopes: Optional[Iterable[str]] = None) -> None:
if not scopes:
scopes = ["openid"]
if "hash_at" in cherrypy.session:
hash_at = cherrypy.session["hash_at"]
try:
scopes_set = set(scopes)
scopes_set_session = set(self._cache[hash_at]["scopes"])
if scopes_set.issubset(scopes_set_session):
return None
except KeyError:
pass
if "state" in cherrypy.session:
LOGGING.debug("state is already present")
cherrypy.session["state"] = rndstr()
cherrypy.session["nonce"] = rndstr()
# we need to test the scopes later
cherrypy.session["scopes"] = list(scopes)
client = self._get_oidc_client(cherrypy.session['provider'])
args = {
"client_id": client.client_id,
"response_type": "code",
"scope": cherrypy.session["scopes"],
"nonce": cherrypy.session["nonce"],
"redirect_uri": client.redirect_uris[0],
"state": cherrypy.session['state']
}
auth_req = client.construct_AuthorizationRequest(request_args=args)
login_url = auth_req.request(client.authorization_endpoint)
raise cherrypy.HTTPRedirect(login_url)
def auth(self, **kwargs: Any) -> Optional[str]:
""" Start an authentication request.
Redirects to OIDC Provider if given"""
# Do we have only one openid provider? -> use this
if len(self.__oidc_provider) == 1:
cherrypy.session['provider'] = next(iter(self.__oidc_provider))
else:
if 'name' in kwargs and kwargs['name'] in self.__oidc_provider:
cherrypy.session['provider'] = kwargs['name']
else:
LOGGING.debug(self.__oidc_provider)
tmpl = env.get_template('auth.html')
provider = dict()
for key in self.__oidc_provider:
provider[key] = self.cfg.openid_providers[key][
'human_readable_name']
return tmpl.render(auth_page=self.cfg.proxy.auth,
provider=provider)
self._auth()
return None # we won't get here
class ServiceProxy:
""" A class to perform the actual proxying """
ac = ac.container
def __init__(self, service_name: str, oidc_handler: OidcHandler,
cfg: config.ServiceConfig):
self.service_name = service_name
self.cfg = cfg
self._oidc_handler = oidc_handler
def _proxy(self, url: str, access: Dict) -> str:
""" Actually perform the proxying.
1. Setup request
2. Setup authentication
3. Get library method to use
4. Perform outgoing request
5. Answer the request
"""
# Strip any incoming Authorization header; outgoing authentication is set below
access['headers'].pop('Authorization', None)
# Setup authentication (bearer/cert)
cert = None
if self.cfg.authentication:
# bearer?
if self.cfg['authentication']['type'] == "Bearer":
access['headers']['Authorization'] = "Bearer {}".format(
self.cfg['authentication']['token'])
if self.cfg['authentication']['type'] == "Certificate":
cert = (self.cfg['authentication']['certfile'],
self.cfg['authentication']['keyfile'])
# Get requests method
method_switcher: Dict[str, Callable] = {
"GET": requests.get,
"PUT": requests.put,
"POST": requests.post,
"DELETE": requests.delete
}
method = method_switcher.get(access['method'], None)
if not method:
raise NotImplementedError
# Outgoing request
kwargs = {"headers": access['headers'], "data": access['body']}
if cert:
kwargs['cert'] = cert
resp = method(url, **kwargs)
# Answer the request
for header in resp.headers.items():
if header[0].lower() == 'transfer-encoding':
continue
logging.debug("Proxy Request Header: %s", header)
cherrypy.response.headers[header[0]] = header[1]
cherrypy.response.status = resp.status_code
return resp
def _build_url(self, url: str, kwargs: Any) -> str:
url = "{}/{}".format(self.cfg['origin_URL'], url)
if kwargs:
url = "{}?{}".format(url, urllib.parse.urlencode(kwargs))
return url
def _build_proxy_url(self, path: str = '', kwargs: Any = None) -> str:
kwargs = {} if kwargs is None else kwargs
this_url = "{}{}/{}".format(self._oidc_handler.cfg.proxy['baseuri'],
self.cfg['proxy_URL'][1:], path)
if kwargs:
this_url = "{}?{}".format(this_url, urllib.parse.urlencode(kwargs))
return this_url
@staticmethod
def _send_403(message: str = '') -> str:
cherrypy.response.status = 403
return "<h1>Forbidden</h1><br>%s" % message
@staticmethod
def build_access_dict(query_dict: Optional[Dict] = None) -> Dict:
"""Creates the access dict for the evaluation context """
query_dict = query_dict if query_dict is not None else {}
method = cherrypy.request.method
headers = copy.copy(cherrypy.request.headers)
headers.pop('host', None)
headers.pop('Content-Length', None)
headers['connection'] = "close"
# Read request body
request_body = ""
if cherrypy.request.method in cherrypy.request.methods_with_bodies:
request_body = cherrypy.request.body.read()
return {"method": method,
"body": request_body,
"headers": headers,
"query_dict" : query_dict}
# pylint: disable=W0613
# disable unused arguments
@cherrypy.expose
def index(self, *args: Any, **kwargs: Any) -> str:
"""
Connects to the origin_URL of the proxied service.
Important: If a request parameter "url" is in the REQUEST, it will
override the path information.
/serviceA/urlinformation?url=test will translate to:
<ServiceA>/test
"""
try:
del kwargs['_']
except KeyError:
pass
_, userinfo = self._oidc_handler.get_userinfo()
#called_url = cherrypy.url(qs=cherrypy.request.query_string)
called_url_wo_qs = cherrypy.url()
path = called_url_wo_qs[len(self._build_proxy_url()):]
#LOGGING.debug("Called url was %s ", called_url)
target_url = self._build_url(path, kwargs)
object_dict = ObjectDict(objsetter=self.cfg['objectsetters'],
initialdata={
"path": path,
"target_url": target_url,
"service": self.service_name,
})
access = self.build_access_dict(query_dict=kwargs)
context = {
"subject": userinfo,
"object": object_dict,
"environment": EnvironmentDict(),
"access": access
}
LOGGING.debug("Container is %s", self.ac)
evaluation_result = self.ac.evaluate_by_entity_id(
self.cfg['AC'], context)
(effect, missing,
obligations) = (evaluation_result.results[self.cfg['AC']],
evaluation_result.missing_attr,
evaluation_result.obligations)
LOGGING.debug("Obligations are: %s", obligations)
obligations_dict = ObligationsDict()
obligations_result = obligations_dict.run_all(obligations, effect,
context,
self.cfg.obligations)
if effect == ac.Effects.GRANT and all(obligations_result):
return self._proxy(target_url, access)
if len(missing) > 0:
# -> Are we logged in?
attr = set(missing)
self._oidc_handler.need_claims(list(attr))
warn = ("Failed to get the claims even though we requested the " +
"right scopes.<br>Missing claims are:<br>")
warn += "<br>".join(attr)
return self._send_403(warn)
return self._send_403("")
class TLSOnlyDispatcher(Dispatcher):
""" Dispatcher for cherrypy to force TLS """
def __init__(self, tls_url: str, next_dispatcher: Dispatcher):
super().__init__()
self._tls_url = tls_url
self._next_dispatcher = next_dispatcher
def __call__(self, path_info: str) -> Dispatcher:
if cherrypy.request.scheme == 'https':
return self._next_dispatcher(path_info)
return self._next_dispatcher(self._tls_url + path_info) | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/base.py | base.py |
# Python imports
import logging
import logging.config
import warnings
import copy
import argparse
# For scheduling auth & registration to providers
import sched
import threading
import time
import importlib.resources
import os
import pwd
import grp
import hashlib
import urllib.parse
from http.client import HTTPConnection
#HTTPConnection.debuglevel = 1
from dataclasses import dataclass, field
from typing import List, Dict, Union, Tuple, Callable, Iterable, Optional, Any
# side packages
##oic
import oic.oic
from oic.utils.authn.client import CLIENT_AUTHN_METHOD
from oic.oic.message import RegistrationResponse, AuthorizationResponse
from oic import rndstr
from oic.utils.http_util import Redirect
import oic.extension.client
import oic.exception
import yaml
import requests
import cherrypy
from cherrypy._cpdispatch import Dispatcher
from cherrypy.process.plugins import DropPrivileges, Daemonizer, PIDFile
from jinja2 import Environment, FileSystemLoader
from jwkest import jwt
#### Own Imports
from arpoc.base import ServiceProxy, OidcHandler, TLSOnlyDispatcher
import arpoc.ac as ac
import arpoc.exceptions
import arpoc.config as config
import arpoc.pap
import arpoc.special_pages
import arpoc.cache
import arpoc.utils
import arpoc.plugins
#from arpoc.plugins import EnvironmentDict, ObjectDict, ObligationsDict
#logging.basicConfig(level=logging.DEBUG)
LOGGING = logging.getLogger(__name__)
env = Environment(loader=FileSystemLoader(
os.path.join(os.path.dirname(__file__), 'resources', 'templates')))
def get_argparse_instance():
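""" Builds the argparse parser for the arpoc command line interface """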
parser = argparse.ArgumentParser(description='ARPOC')
parser.add_argument('-c', '--config-file', help="Path to the configuration file")
sample_group = parser.add_argument_group('Sample configuration', 'Options to print sample configuration file')
sample_group.add_argument('--print-sample-config', action='store_true', help="Prints a sample configuration file and exit")
sample_group.add_argument('--print-sample-ac', action='store_true', help="Prints a sample AC hierarchy and exit")
add_provider_group = parser.add_argument_group('Adding an OIDC Provider', "Add an OIDC provider to the secrets file")
add_provider_group.add_argument('--add-provider', help="The key which is used in the configuration file")
add_provider_group.add_argument('--client-id', help="The client id which is used at the provider")
add_provider_group.add_argument('--client-secret', help="The client secret which is used at the provider")
parser.add_argument('-d', '--daemonize', action='store_true', help='Daemonize arpoc')
parser.add_argument('--no-daemonize', action='store_true', help='Do not daemonize arpoc')
parser.add_argument('--check-ac', action='store_true')
return parser
class App:
""" Class for application handling.
Reads configuration files,
sets up the oidc client classes
and the dispatcher for the services"""
def __init__(self) -> None:
self._scheduler = sched.scheduler(time.time, time.sleep)
self.thread = threading.Thread(target=self._scheduler.run)
self.oidc_handler: OidcHandler
self.config: config.OIDCProxyConfig
self.uid = 0
self.gid = 0
def cancel_scheduler(self):
""" Cancels every event in the scheduler queue """
if not self._scheduler.empty():
for event in self._scheduler.queue:
self._scheduler.cancel(event)
self.thread.join()
def setup_loggers(self) -> None:
""" Read the loggers configuration and configure the loggers"""
with importlib.resources.path(
'arpoc.resources',
'loggers.yml') as loggers_path, open(loggers_path) as ymlfile:
log_config_str = ymlfile.read()
log_config_str = log_config_str.replace('DEFAULTLEVEL',
self.config.misc.log_level)
log_config_str = log_config_str.replace(
'ACCESS_LOG', self.config.misc.access_log)
log_config_str = log_config_str.replace('ERROR_LOG',
self.config.misc.error_log)
log_conf = yaml.safe_load(log_config_str)
try:
logging.config.dictConfig(log_conf)
except ValueError:
# pylint: disable=C0415
import pprint
print("Problem with log setup")
print("Probably, the log directory (%s) was not found or is "
"not writeable" % (log_conf['handlers']['cherrypy_access']['filename']))
print("Here is the log configuration:")
print()
pprint.pprint(log_conf)
raise
def retry(self,
function: Callable,
exceptions: Tuple,
*args: Any,
retries: int = 5,
retry_delay: int = 30) -> None:
""" Retries function <retries> times, as long as <exceptions> are thrown"""
try:
function(*args)
except exceptions as excep:
if retries > 0:
LOGGING.debug(
"Retrying %s, parameters %s, failed with exception %s",
function, args,
type(excep).__name__)
LOGGING.debug("Delaying for %s seconds", retry_delay)
self._scheduler.enter(retry_delay,
1,
self.retry,
(function, exceptions, *args),
kwargs={
'retries': retries - 1,
'retry_delay': retry_delay
})
# pylint: disable=W0613
# (unused arguments)
def tls_redirect(self, *args: Any, **kwargs: Any) -> None:
""" Rewrites the url so that we use https.
May alter the hostname (localhost -> domainname)"""
url = cherrypy.url(qs=cherrypy.request.query_string)
# find starting / of path
index = url.index('/', len('http://')) +1
path = url[index:]
https_url = "{}{}".format(self.config.proxy.baseuri, path)
raise cherrypy.HTTPRedirect(https_url)
def get_routes_dispatcher(self) -> cherrypy.dispatch.RoutesDispatcher:
""" Sets up the CherryPy dispatcher.
This makes the proxied services accessible"""
dispatcher = cherrypy.dispatch.RoutesDispatcher()
# Connect the Proxied Services
for name, service_cfg in self.config.services.items():
logging.debug(service_cfg)
if service_cfg.origin_URL == "pap":
pap = arpoc.pap.PolicyAdministrationPoint('pap', self.oidc_handler, service_cfg)
dispatcher.connect('pap',
service_cfg.proxy_URL,
controller=pap,
action='index')
dispatcher.connect('pap',
service_cfg.proxy_URL + "{_:/.*?}",
controller=pap,
action='index')
elif service_cfg.origin_URL == "userinfo":
userinfo_page = arpoc.special_pages.Userinfo('userinfo',
self.oidc_handler,
service_cfg)
dispatcher.connect('userinfo',
service_cfg.proxy_URL,
controller=userinfo_page,
action='index')
else:
service_proxy_obj = ServiceProxy(name, self.oidc_handler,
service_cfg)
dispatcher.connect(name,
service_cfg['proxy_URL'],
controller=service_proxy_obj,
action='index')
dispatcher.connect(name,
service_cfg['proxy_URL'] + "{_:/.*?}",
controller=service_proxy_obj,
action='index')
# Connect the Redirect URI
LOGGING.debug(self.config.proxy['redirect'])
for i in self.config.proxy['redirect']:
dispatcher.connect('redirect',
i,
controller=self.oidc_handler,
action='redirect')
# Test auth required
dispatcher.connect('auth',
"%s" % self.config.proxy.auth,
controller=self.oidc_handler,
action='auth')
dispatcher.connect('auth',
"%s/{name:.*?}" % self.config.proxy.auth,
controller=self.oidc_handler,
action='auth')
if self.config.proxy['https_only']:
dispatcher.connect('TLSRedirect',
'%s/{url:.*?}' % self.config.proxy.tls_redirect,
controller=self,
action='tls_redirect')
tls_dispatcher = TLSOnlyDispatcher(self.config.proxy.tls_redirect,
dispatcher)
return tls_dispatcher
return dispatcher
@staticmethod
def read_secrets(filepath: str) -> Dict:
""" Reads the secrets file from the filepath """
try:
with open(filepath, 'r') as ymlfile:
secrets = yaml.safe_load(ymlfile)
except FileNotFoundError:
secrets = dict()
if secrets is None:
secrets = dict()
return secrets
def save_secrets(self) -> None:
""" Saves the oidc rp secrets into the secrets file"""
with open(self.config.proxy['secrets'], 'w') as ymlfile:
yaml.safe_dump(self.oidc_handler.get_secrets(), ymlfile)
def create_secrets_dir(self) -> None:
""" Create the secrets dir and sets permission and ownership """
assert isinstance(self.config.proxy, config.ProxyConfig)
secrets_dir = os.path.dirname(self.config.proxy['secrets'])
os.makedirs(secrets_dir, exist_ok=True)
self.uid = pwd.getpwnam(self.config.proxy['username'])[2]
self.gid = grp.getgrnam(self.config.proxy['groupname'])[2]
for dirpath, _, filenames in os.walk(secrets_dir):
if len(filenames) > 1:
# ignore files with a dot
if len([x for x in filenames if not x.startswith(".")]) > 1:
raise arpoc.exceptions.ConfigError(
"Please specify an own directory for oidproxy secrets")
os.chown(dirpath, self.uid, self.gid)
for filename in filenames:
os.chown(os.path.join(dirpath, filename), self.uid, self.gid)
def setup_oidc_provider(self) -> None:
"""Setup the connection to all oidc providers in the config """
assert isinstance(self.config, config.OIDCProxyConfig)
# Read secrets
secrets = self.read_secrets(self.config.proxy['secrets'])
self.oidc_handler.set_secrets(secrets)
for name, provider in self.config.openid_providers.items():
# check if the client is/was already registered
if name in secrets.keys():
self.retry(self.oidc_handler.create_client_from_secrets,
(requests.exceptions.RequestException,
oic.exception.CommunicationError), name, provider)
else:
self.retry(self.oidc_handler.register_first_time,
(requests.exceptions.RequestException,
oic.exception.CommunicationError), name, provider)
self.thread.start()
def run(self) -> None:
""" Starts the application """
#### Command Line Argument Parsing
parser = get_argparse_instance()
args = parser.parse_args()
config.cfg = config.OIDCProxyConfig(config_file=args.config_file)
self.config = config.cfg
assert self.config.proxy is not None
#### Read Configuration
if args.print_sample_config:
config.cfg.print_sample_config()
return
if args.print_sample_ac:
arpoc.ac.print_sample_ac()
return
try:
self.setup_loggers()
except ValueError:
return
#### Create secrets dir and change ownership (perm)
self.create_secrets_dir()
self.oidc_handler = OidcHandler(self.config)
if args.add_provider and args.client_id and args.client_secret:
# read secrets
secrets = self.read_secrets(self.config.proxy['secrets'])
provider_cfg = self.config.openid_providers[args.add_provider]
redirect_uris = provider_cfg.redirect_uris or self.config.proxy['redirect_uris']
# add secrets
secret_dict = {
"client_id": args.client_id,
"client_secret": args.client_secret,
"redirect_uris": redirect_uris
}
secrets[args.add_provider] = secret_dict
self.oidc_handler.set_secrets(secrets)
self.oidc_handler.create_client_from_secrets(args.add_provider, provider_cfg)
self.save_secrets()
return
arpoc.plugins.import_plugins(self.config.misc.plugin_dirs)
#### Read AC Rules
for acl_dir in self.config.access_control['json_dir']:
ServiceProxy.ac.load_dir(acl_dir)
if args.check_ac:
ServiceProxy.ac.check()
return
if not args.no_daemonize and (args.daemonize or self.config.misc.daemonize):
daemonizer = Daemonizer(cherrypy.engine)
daemonizer.subscribe()
# check if pid file exists
try:
with open(self.config.misc.pid_file) as pidfile:
pid = int(pidfile.read().strip())
try:
os.kill(pid, 0) # check if running
except OSError:
PIDFile(cherrypy.engine,
self.config.misc.pid_file).subscribe()
# not running
else:
# running
print("PID File %s exists" % self.config.misc.pid_file)
print(
"Another instance of arpoc seems to be running"
)
return
except FileNotFoundError:
PIDFile(cherrypy.engine, self.config.misc.pid_file).subscribe()
#### Setup OIDC Provider
cherrypy.engine.subscribe('start', self.setup_oidc_provider, 80)
cherrypy.engine.subscribe('stop', self.cancel_scheduler, 80)
cherrypy.engine.subscribe('stop', self.save_secrets, 80)
#### Setup Cherrypy
global_conf = {
'log.screen': False,
'log.access_file': '',
'log.error_file': '',
'server.socket_host': config.cfg.proxy['address'],
'server.socket_port': config.cfg.proxy['tls_port'],
'server.ssl_private_key': config.cfg.proxy['keyfile'],
'server.ssl_certificate': config.cfg.proxy['certfile'],
'engine.autoreload.on': False
}
cherrypy.config.update(global_conf)
app_conf = {
'/': {
'tools.sessions.on': True,
'request.dispatch': self.get_routes_dispatcher()
}
}
DropPrivileges(cherrypy.engine, uid=self.uid, gid=self.gid).subscribe()
#### Start Web Server
cherrypy.tree.mount(None, '/', app_conf)
if self.config.proxy['plain_port']:
# pylint: disable=W0212
server2 = cherrypy._cpserver.Server()
server2.socket_port = self.config.proxy['plain_port']
server2._socket_host = self.config.proxy['address']
server2.thread_pool = 30
server2.subscribe()
cherrypy.engine.start()
cherrypy.engine.block()
# cherrypy.quickstart(None, '/', app_conf) | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/__init__.py | __init__.py |
from dataclasses import dataclass, field
from typing import List, Tuple, Union, Optional, Dict
import ast
from collections.abc import Mapping
import os
import itertools
from jinja2 import Environment, FileSystemLoader
from arpoc.base import ServiceProxy, ObjectDict, EnvironmentDict
import arpoc.ac as ac
env = Environment(loader=FileSystemLoader(
os.path.join(os.path.dirname(__file__), 'resources', 'templates')))
@dataclass
class PAPNode:
ID: str
node_type: str
resolver: str
target: str
effect: str
condition: str
policy_sets: Optional[List['PAPNode']]
policies: Optional[List['PAPNode']]
rules: Optional[List['PAPNode']]
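# The create_PAPNode_* helpers below flatten the entities of the global
# ac.container into PAPNode trees that the PAP and testbed templates render.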
def create_PAPNode_Rule(rule: ac.Rule) -> PAPNode:
return PAPNode(rule.entity_id, "rule", "", rule.target, str(rule.effect),
rule.condition, None, None, None)
def create_PAPNode_Policy(policy: ac.Policy) -> PAPNode:
rules = [create_PAPNode_Rule(ac.container.rules[x]) for x in policy.rules]
return PAPNode(policy.entity_id, "policy", policy.conflict_resolution,
policy.target, "", "", None, None, rules)
def create_PAPNode_Policy_Set(policy_set: ac.Policy_Set) -> PAPNode:
policies = [
create_PAPNode_Policy(ac.container.policies[x])
for x in policy_set.policies
]
policy_sets = [
create_PAPNode_Policy_Set(ac.container.policy_sets[x])
for x in policy_set.policy_sets
]
return PAPNode(policy_set.entity_id, "policy set",
policy_set.conflict_resolution, policy_set.target, "", "",
policy_sets, policies, None)
class PolicyAdministrationPoint(ServiceProxy):
# def __init__(self):
# pass
def _proxy(self, url: str, access: Dict) -> str:
context = {}
if url.startswith('pap/testbed'):
services = self._oidc_handler.cfg.services.keys()
s = []
result: Optional[ac.EvaluationResult] = None
obj_setters = False
env_setters = False
try:
# access.query_dict
entity_id, sub, obj, acc, env_attr, service = (
access['query_dict'][x].strip() for x in
['entity', 'subject', 'object', 'access', 'environment', 'service'])
if "object_setters" in access['query_dict']:
obj_setters = True
if "environment_setters" in access['query_dict']:
env_setters = True
except KeyError:
pass
else:
if service not in services:
return ""
entity_obj: ac.AC_Entity
# get the entity_id object
if entity_id in ac.container.policy_sets:
typ = "policy_set"
entity_obj = ac.container.policy_sets[entity_id]
s.append(create_PAPNode_Policy_Set(entity_obj))
elif entity_id in ac.container.policies:
typ = "policy"
entity_obj = ac.container.policies[entity_id]
s.append(create_PAPNode_Policy(entity_obj))
elif entity_id in ac.container.rules:
typ = "rule"
entity_obj = ac.container.rules[entity_id]
s.append(create_PAPNode_Rule(entity_obj))
else:
return ""
# literal_eval raises SyntaxError for empty/malformed input and ValueError for non-literals
try:
sub_dict = ast.literal_eval(sub)
except (SyntaxError, ValueError):
sub_dict = {}
try:
obj_dict = ast.literal_eval(obj)
except (SyntaxError, ValueError):
obj_dict = {}
try:
env_dict = ast.literal_eval(env_attr)
except (SyntaxError, ValueError):
env_dict = {}
try:
acc_dict = ast.literal_eval(acc)
except (SyntaxError, ValueError):
acc_dict = {}
obj_dict['service'] = service
if obj_setters:
obj_dict = ObjectDict(self._oidc_handler.cfg.services[service].objectsetters)
if env_setters:
env_dict = EnvironmentDict()
context = {
'subject': sub_dict,
'object': obj_dict,
'environment': env_dict,
'access': acc_dict
}
result = entity_obj.evaluate(context)
entity_ids = itertools.chain(ac.container.policy_sets.keys(),
ac.container.policies.keys(),
ac.container.rules.keys())
tmpl = env.get_template('testbed.html')
return tmpl.render(pap_nodes=s,
entity_ids=entity_ids,
result=result, services=services, **context)
tmpl = env.get_template('pap.html')
s = []
for ps in ac.container.policy_sets:
s.append(create_PAPNode_Policy_Set(ac.container.policy_sets[ps]))
#url.startswith('pap/view')
return tmpl.render(pap_nodes=s) | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/pap/__init__.py | __init__.py |
import re
from collections.abc import Mapping
from functools import reduce
from copy import deepcopy
from typing import Dict
import logging
import logging.config
from arpoc.plugins._lib import Obligation, Optional, deep_dict_update
from arpoc.ac.common import Effects
"""
Obligation Log Module.
The classes here log either
- every access (Log),
- every granted access (LogSuccessful), or
- every denied access (LogFailed).
The log can be configured via a dict cfg:
cfg['logger_cfg'] -- the logger cfg of the python logging module
cfg['log_format'] -- a format string for the message generation
The log entries are created with INFO level.
"""
logger_cfg = {
"version": 1,
"disable_existing_loggers": False,
"handlers": {
"obligation_file": {
"class": "logging.handlers.RotatingFileHandler",
"filename": "obligation.log",
"maxBytes": 1024,
"backupCount": 3,
"level": "INFO",
}
},
"loggers": {
"obligation_logger": {
"level": "INFO",
"handlers": ["obligation_file"]
}
}
}
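# Illustrative service configuration snippet (the obligation name comes from the
# classes below; the format string and the exact nesting are assumptions, not defaults):
#   obligations:
#     obl_log_failed:
#       log_format: "{} subject.email was denied access to object.service"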
class Log(Obligation):
"""
Log the access request.
Name: obl_log
Configuration:
- logger_cfg -- the logger cfg of the python logging module
- log_format -- A format string for the message generation
default: `{} subject.email accessed object.service [object.path] -- object.target_url`
The log will be created with INFO level
"""
name = "obl_log"
@staticmethod
def replace_subjectattr(logtext, subject_info):
regex = r"subject\.(?P<subject>[.\w]+)"
func = lambda x: reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None,
x.group('subject').split("."), subject_info)
return re.sub(regex, func, logtext)
@staticmethod
def replace_objectattr(logtext, object_info):
regex = r"object\.(?P<object>[.\w]+)"
func = lambda x: reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None,
x.group('object').split("."), object_info)
return re.sub(regex, func, logtext)
@staticmethod
def replace_envattr(logtext, env_info):
regex = r"environment\.(?P<env>[.\w]+)"
func = lambda x: reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None,
x.group('env').split("."), env_info)
return re.sub(regex, func, logtext)
@staticmethod
def replace_accessattr(logtext, access_info):
regex = r"access\.(?P<access>[.\w]+)"
func = lambda x: reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None,
x.group('access').split("."), access_info)
return re.sub(regex, func, logtext)
@staticmethod
def replace_attr(logtext, context):
logtext = Log.replace_subjectattr(logtext, context['subject'])
logtext = Log.replace_objectattr(logtext, context['object'])
logtext = Log.replace_envattr(logtext, context['environment'])
logtext = Log.replace_accessattr(logtext, context['access'])
return logtext
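# Illustrative example (values assumed): with context
# {'subject': {'email': '[email protected]'}, 'object': {'service': 'app'}, ...}
# the text "subject.email accessed object.service" becomes
# "[email protected] accessed app".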
@staticmethod
def run(effect: Optional[Effects], context: Dict, cfg: Dict) -> bool:
if 'logger_cfg' in cfg:
copy_logger_cfg = deepcopy(logger_cfg)
merged_cfg = deep_dict_update(copy_logger_cfg, cfg['logger_cfg'])
else:
merged_cfg = deepcopy(logger_cfg)
logger = logging.getLogger("obligation_logger")
logging.config.dictConfig(merged_cfg)
if 'log_format' in cfg:
log_format = cfg['log_format']
else:
log_format = ("{} subject.email accessed object.service "
"[object.path] -- object.target_url")
log_format = log_format.format(str(effect))
logger.info(Log.replace_attr(log_format, context))
return True
class LogFailed(Obligation):
"""
Log failed access requests.
Name: obl_log_failed
Configuration: Same as obl_log
"""
name = "obl_log_failed"
@staticmethod
def run(effect: Optional[Effects], context: Dict, cfg: Dict) -> bool:
if effect == Effects.DENY:
Log.run(effect, context, cfg)
return True
class LogSuccessful(Obligation):
"""
Log successful access requests.
Name: obl_log_successful
Configuration: Same as obl_log
"""
name = "obl_log_successful"
@staticmethod
def run(effect: Optional[Effects], context: Dict, cfg: Dict) -> bool:
if effect == Effects.GRANT:
Log.run(effect, context, cfg)
return True | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/plugins/obl_loggers.py | obl_loggers.py |
import importlib
import importlib.util
from pathlib import Path
from typing import Dict, Callable, Optional, List, Type, TypeVar, Any
from queue import PriorityQueue
import collections
from dataclasses import dataclass, field
import os
from collections.abc import Mapping
import logging
from arpoc.plugins import _lib
from arpoc.ac.common import Effects
import arpoc.config as config
from arpoc.exceptions import DuplicateKeyError
LOGGING = logging.getLogger()
__all__ = [
f"{f.stem}" for f in Path(__file__).parent.glob("*.py")
if not str(f.stem).startswith("_")
]
for m in __all__:
importlib.import_module("." + m, "arpoc.plugins")
plugins = []
def import_plugins(plugin_dirs) -> None:
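""" Imports every .py file found directly in the given plugin directories as a plugin module """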
global plugins
for plugin_dir in plugin_dirs:
for entry in os.listdir(plugin_dir):
wholepath = os.path.join(plugin_dir, entry)
if os.path.isfile(wholepath):
module_name = os.path.splitext(entry)[0]
LOGGING.debug("module_name: %s", module_name)
LOGGING.debug("wholepath: %s", wholepath)
if wholepath.endswith(".py"):
spec = importlib.util.spec_from_file_location(
module_name, wholepath)
if spec is None or not isinstance(
spec.loader, importlib.abc.Loader):
raise RuntimeError("Failed to load %s" % wholepath)
module = importlib.util.module_from_spec(spec)
plugins.append(module)
spec.loader.exec_module(module)
@dataclass(order=True)
class PrioritizedItem:
priority: int
item: Any = field(compare=False)
class ObjectDict(collections.UserDict):
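"""Dict-like view of object attributes.
ObjectSetter plugins enabled in the service configuration are queued by
priority (lower number runs first, default 100) and executed once, lazily,
the first time a key is not found in the initial data."""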
def __init__(self,
objsetter: Dict,
initialdata: Optional[Dict] = None) -> None:
if not initialdata:
initialdata = {}
super().__init__(initialdata)
self._executed_flag = False
# sort "plugins" according to priority
self._queue: PriorityQueue = PriorityQueue()
for plugin in _lib.ObjectSetter.__subclasses__():
LOGGING.debug("Found object setter %s, name: %s", plugin,
plugin.name)
if plugin.name in objsetter:
plugin_cfg = objsetter[plugin.name]
if plugin_cfg['enable']:
# give configuration to the plugin and set priority
priority = plugin_cfg[
'priority'] if 'priority' in plugin_cfg else 100
item = PrioritizedItem(priority, plugin(plugin_cfg))
self._queue.put(item)
def get(self, key: str, default: Any = None) -> Any:
if key in self.data:
return self.data[key]
if not self._executed_flag:
while not self._queue.empty():
self._queue.get().item.run(self.data)
self._executed_flag = True
if key in self.data:
return self.data[key]
return default
def __getitem__(self, key: str) -> Any:
elem = self.get(key)
if elem is None:
raise KeyError
return elem
class ObligationsDict():
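"""Maps obligation plugin names to their run callables and executes the
obligations requested by a policy evaluation (see run_all)."""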
def __init__(self) -> None:
self.__get_obligations_dict()
LOGGING.debug("Obligations found %s", self._obligations)
def __get_obligations_dict(self) -> None:
d: Dict[str, Callable] = dict()
for plugin in _lib.Obligation.__subclasses__():
if plugin.name in d.keys():
raise DuplicateKeyError(
"obligation name {} is already used by another plugin".format(
plugin.name))
d[plugin.name] = plugin.run
# def () -> arpoc.plugins._lib.Obligation
T = TypeVar('T', bound='_lib.Obligation')
self._obligations: Dict[str, Callable] = d
def run_all(self, obligations: List[str], effect: Optional[Effects],
context: Dict, cfg: Dict) -> List[bool]:
results: List[bool] = []
LOGGING.debug("Obligations found %s", self._obligations)
for key in obligations:
obl = self.get(key)
if obl is not None:
obl_cfg = cfg[key] if key in cfg else {}
results.append(obl(effect, context, obl_cfg))
else:
LOGGING.debug("Failed to run obligation %s", key)
raise ValueError
return results
def get(self, key: str, default: Any = None) -> Any:
try:
return self.__getitem__(key)
except KeyError:
return default
def __getitem__(self, key: str) -> Any:
return self._obligations[key]
class EnvironmentDict(collections.UserDict):
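"""Dict-like view of environment attributes, resolved lazily (and cached)
through the EnvironmentAttribute plugins registered for the requested key."""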
def __init__(self, initialdata: Optional[Dict] = None) -> None:
if not initialdata:
initialdata = {}
super().__init__(initialdata)
self.__get_env_attr_dict()
def __get_env_attr_dict(self) -> None:
d: Dict[str, Callable] = dict()
for plugin in _lib.EnvironmentAttribute.__subclasses__():
if plugin.target in d.keys():
raise DuplicateKeyError(
"target {} is already provided by another plugin".format(
plugin.target))
d[plugin.target] = plugin.run
self._getter = d
def get(self, key: str, default: Any = None) -> Any:
if key in self.data:
return self.data[key]
if key in self._getter:
self.data[key] = self._getter[key]()
return self.data[key]
return default
def __getitem__(self, key: str) -> Any:
elem = self.get(key)
if elem is None:
raise KeyError
return elem | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/plugins/__init__.py | __init__.py |
import importlib.resources
import logging
import re
from abc import abstractmethod
from functools import reduce
from typing import Any, Dict, List, TypeVar, Union, Callable
from ast import literal_eval
from collections.abc import Mapping
import lark.exceptions
from lark import Lark, Tree
from arpoc.ac.lark_adapter import MyTransformer
from arpoc.exceptions import (AttributeMissing, BadRuleSyntax, BadSemantics,
SubjectAttributeMissing,
ObjectAttributeMissing,
EnvironmentAttributeMissing)
with importlib.resources.path(
'arpoc.resources',
'grammar.lark') as grammar_path, open(grammar_path) as fp:
grammar = fp.read()
lark_condition = Lark(grammar, start="condition")
lark_target = Lark(grammar, start="target")
LOGGER = logging.getLogger(__name__)
TNum = TypeVar('TNum', int, float)
class BinaryOperator:
@classmethod
@abstractmethod
def eval(cls, op1: Any, op2: Any) -> Any:
pass
@classmethod
def __str__(cls):
return cls.__name__
@classmethod
def __call__(cls, *args):
return cls.eval(*args)
class BinaryOperatorAnd(BinaryOperator):
@classmethod
def eval(cls, op1: Any, op2: Any) -> bool:
return op1 and op2
class BinaryOperatorOr(BinaryOperator):
@classmethod
def eval(cls, op1: Any, op2: Any) -> bool:
return op1 or op2
class BinarySameTypeOperator(BinaryOperator):
@classmethod
@abstractmethod
def eval(cls, op1: Any, op2: Any) -> bool:
if type(op1) != type(op2):
raise BadSemantics(
"op1 '{}' and op2 '{}' need to have the same type, found {}, {}"
.format(op1, op2, type(op1), type(op2)))
return False
class BinaryStringOperator(BinaryOperator):
@classmethod
def eval(cls, op1: str, op2: str) -> bool:
if not isinstance(op1, str):
raise BadSemantics("op1 '{}' is not a string".format(op1))
if not isinstance(op2, str):
raise BadSemantics("op2 '{}' is not a string".format(op2))
return False
class BinaryNumeralOperator(BinaryOperator):
@classmethod
def eval(cls, op1: TNum, op2: TNum) -> bool:
NumberTypes = (int, float)
if not isinstance(op1, NumberTypes):
raise BadSemantics("op1 '{}' is not a number".format(op1))
if not isinstance(op2, NumberTypes):
raise BadSemantics("op2 '{}' is not a number".format(op2))
return False
class BinaryOperatorIn(BinaryOperator):
@classmethod
def eval(cls, op1: Any, op2: Union[list, dict]) -> bool:
if not isinstance(op2, list):
if isinstance(op2, dict):
return op1 in op2.keys()
raise BadSemantics("op2 '{}' is not a list".format(op2))
return op1 in op2
class Lesser(BinarySameTypeOperator):
@classmethod
def eval(cls, op1: Any, op2: Any) -> bool:
super().eval(op1, op2)
return op1 < op2
class Greater(BinarySameTypeOperator):
@classmethod
def eval(cls, op1: Any, op2: Any) -> bool:
super().eval(op1, op2)
return op1 > op2
class Equal(BinaryOperator):
@classmethod
def eval(cls, op1: Any, op2: Any) -> bool:
return op1 == op2
class NotEqual(BinaryOperator):
@classmethod
def eval(cls, op1: Any, op2: Any) -> bool:
return op1 != op2
class startswith(BinaryStringOperator):
@classmethod
def eval(cls, op1: str, op2: str) -> bool:
super().eval(op1, op2)
return op1.startswith(op2)
class matches(BinaryStringOperator):
@classmethod
def eval(cls, op1: str, op2: str) -> bool:
super().eval(op1, op2)
LOGGER.debug("regex matching op1: '%s', op2: '%s'", op1, op2)
LOGGER.debug("result is %s", re.fullmatch(op2, op1))
return re.fullmatch(op2, op1) is not None
binary_operators = {
"startswith": startswith,
"matches": matches,
"in": BinaryOperatorIn,
"<": Lesser,
">": Greater,
"==": Equal,
"!=": NotEqual
}
class UOP:
@staticmethod
def exists(elem: Any) -> bool:
return elem is not None
class TransformAttr(MyTransformer):
def __init__(self, data: Dict):
super().__init__(self)
self.data = data
def subject_attr(self, args: List) -> Any:
LOGGER.debug("data is %s", self.data['subject'])
LOGGER.debug("args are %s", str(args[0]))
attr_str = str(args[0])
attr = reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None, attr_str.split("."),
self.data["subject"])
if attr is None:
raise SubjectAttributeMissing("No subject_attr %s" % str(args[0]),
args[0])
return attr
def access_attr(self, args: List) -> Any:
attr = reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None, args[0].split("."),
self.data["access"])
# attr = self.data["access"].get(str(args[0]), None)
return attr
def object_attr(self, args: List) -> Any:
LOGGER.debug("data is %s", self.data['object'])
LOGGER.debug("args are %s", args)
attr_str = str(args[0])
attr = reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None, attr_str.split("."),
self.data["object"])
if attr is None:
raise ObjectAttributeMissing("No object attr %s" % attr_str,
args[0])
return attr
def environment_attr(self, args: List) -> Any:
attr = reduce(
lambda d, key: d.get(key, None)
if isinstance(d, Mapping) else None, args[0].split("."),
self.data["environment"])
if attr is None:
raise EnvironmentAttributeMissing(
"No object attr %s" % str(args[0]), args[0])
# warnings.warn("No environment_attr %s" % str(args[0]),
# EnvironmentAttributeMissingWarning)
return attr
def list_inner(self, args: List) -> Any:
# either we have two children (one list, one literal) or one child (literal)
if len(args) == 1:
return [args[0]]
return [args[0]] + args[1]
def lit(self, args: List) -> Union[Dict, List, str, int, float]:
if isinstance(args[0], (list, )):
return args[0]
if args[0].type in ["SINGLE_QUOTED_STRING", "DOUBLE_QUOTED_STRING", "RAW_STRING"]:
return str(literal_eval(args[0]))
if args[0].type == "BOOL":
return args[0] == "True"
return int(args[0])
class ExistsTransformer(MyTransformer):
""" The exists Transformer must run before the normal transformers
in order to catch exceptions """
def __init__(self, attr_transformer: TransformAttr):
super().__init__(self)
self.attr_transformer = attr_transformer
def _exists(self, args: List) -> bool:
try:
getattr(self.attr_transformer, args[0].data)(args[0].children)
return True
except AttributeMissing:
return False
def single(self, args: List) -> Any:
if args[0] == "exists":
return self._exists(args[1:])
return Tree("single", args)
def uop(self, args: List) -> Any:
if args[0] == "exists":
return "exists"
return Tree("uop", args)
#return getattr(UOP, str(args[0]))
class TopLevelTransformer(MyTransformer):
def condition(self, args: List) -> Any:
if isinstance(args[0], Tree):
return Tree("condition", args)
if len(args) == 1:
return bool(args[0])
raise ValueError
#return Tree("condition", args)
def target(self, args: List) -> Any:
if isinstance(args[0], Tree):
return Tree("target", args)
if len(args) == 1:
return bool(args[0])
raise ValueError
def statement(self, args: List) -> Any:
if isinstance(args[0], Tree):
return Tree("statement", args)
if len(args) == 1:
return bool(args[0])
raise ValueError
class OperatorTransformer(MyTransformer):
def cbop(self, args: List) -> Callable:
LOGGER.debug("cbop got called")
str_op = str(args[0])
op = binary_operators.get(str_op, None)
if op is None:
raise NotImplementedError()
return op
def lbop(self, args: List) -> Callable:
str_op = str(args[0])
if str_op == 'and':
return BinaryOperatorAnd
elif str_op == 'or':
return BinaryOperatorOr
else:
raise NotImplementedError
def uop(self, args: List) -> Any:
return getattr(UOP, str(args[0]))
class MiddleLevelTransformer(MyTransformer):
def comparison(self, args: List) -> bool:
# xor check for none attributes
if bool(args[0] is None) ^ bool(args[2] is None):
return False
# assert op is not None # for mypy
LOGGER.debug("{} {} {}".format(args[0], args[1], args[2]))
return args[1].eval(args[0], args[2])
def linked(self, args: List) -> bool:
if isinstance(args[0], Tree) or isinstance(args[2], Tree):
return Tree("linked", args)
allowed_types = (bool, dict, list, str, float, int)
if args[0] is None:
args[0] = False
assert issubclass(args[1], BinaryOperator)
if isinstance(args[0], allowed_types) and isinstance(
args[2], allowed_types):
return args[1].eval(args[0], args[2])
LOGGER.debug("Types are %s and %s", type(args[0]), type(args[2]))
raise ValueError
# return Tree("linked", args)
def single(self, args: List) -> Any:
if len(args) == 2:
return args[0](args[1])
if len(args) == 1:
return args[0]
raise ValueError
def parseable(lark_handle: Lark, rule: str) -> bool:
try:
lark_handle.parse(rule)
return True
except (lark.exceptions.UnexpectedCharacters,
lark.exceptions.UnexpectedEOF):
return False
def parse_and_transform(lark_handle: Lark, rule: str, data: Dict) -> bool:
try:
ast = lark_handle.parse(rule)
except (lark.exceptions.UnexpectedCharacters,
lark.exceptions.UnexpectedEOF):
raise BadRuleSyntax('Rule has a bad syntax %s' % rule)
# Eval exists
attr_transformer = TransformAttr(data)
new_ast = ExistsTransformer(attr_transformer).transform(ast)
T = attr_transformer + OperatorTransformer() + MiddleLevelTransformer() + TopLevelTransformer()
return T.transform(new_ast)
def check_condition(condition: str, data: Dict) -> bool:
global lark_condition
LOGGER.debug("Check condition %s with data %s", condition, data)
ret_value = parse_and_transform(lark_condition, condition, data)
LOGGER.debug("Condition %s evaluated to %s", condition, ret_value)
return ret_value
def check_target(rule: str, data: Dict) -> bool:
global lark_target
LOGGER.debug("Check target rule %s with data %s", rule, data)
try:
ret_value = parse_and_transform(lark_target, rule, data)
except Exception as e:
LOGGER.debug(e)
raise
LOGGER.debug("Target Rule %s evaluated to %s", rule, ret_value)
return ret_value
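# Illustrative use (attribute values are assumptions):
# check_condition("subject.email == '[email protected]'",
#                 {"subject": {"email": "[email protected]"}, "object": {},
#                  "environment": {}, "access": {}})  # -> True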
if __name__ == "__main__":
#
l = Lark(grammar, start="condition")
data = {"subject": {"email": "hello"}}
attr_transformer = TransformAttr(data)
ast = l.parse("exists subject.email")
new_ast = ExistsTransformer(attr_transformer).transform(ast)
print(new_ast)
T = attr_transformer * OperatorTransformer() * MiddleLevelTransformer() * TopLevelTransformer()
print(T.transform(new_ast))
ast = l.parse("exists subject.notexisting")
new_ast = ExistsTransformer(attr_transformer).transform(ast)
print(new_ast)
T = attr_transformer * OperatorTransformer() * MiddleLevelTransformer() * TopLevelTransformer()
print(T.transform(new_ast))
print(ast)
new_ast = ExistsTransformer(attr_transformer).transform(ast)
print(new_ast)
#print(new_ast)
#
# ast = l.parse("[5, '4', True]")
# print(ast)
# #data = {}
# ast = TransformAttr(data).transform(ast)
# tree.pydot__tree_to_png(ast, "graph.png")
# ast = l.parse("subject.email != object.email")
# #print(ast)
#
# data = {
# "subject": {
# "email": "blub"
# },
# "object": {
# "email": "blab"
# },
# "environment": {
# "time": 2
# }
# }
# transformed = TransformAttr(data).transform(ast)
# transformed = EvalTree().transform(transformed)
# tree.pydot__tree_to_png(ast, "graph.png")
# tree.pydot__tree_to_png(transformed, "graph02.png")
# ast = l.parse("exists environment.time")
# tree.pydot__tree_to_png(ast, "graph01.png")
# T = TransformAttr(data) * EvalTree() * EvalComplete()
# t1 = TransformAttr(data).transform(ast)
# t2 = EvalTree().transform(t1)
# t3 = EvalComplete().transform(t2)
# print(T.transform(ast))
# print(t3)
# ast = l.parse("True")
# tree.pydot__tree_to_png(ast, "graph04.png")
# ast = l.parse("environment.time < 3")
# tree.pydot__tree_to_png(ast, "graph03_orig.png")
# ast = TransformAttr(data).transform(ast)
# tree.pydot__tree_to_png(ast, "graph03_attr.png")
# ast = EvalTree().transform(ast)
# tree.pydot__tree_to_png(ast, "graph03.png") | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/ac/parser.py | parser.py |
import os
import json
import traceback
from abc import ABC
from enum import Enum
from typing import List, Union, Dict, Type, Any, Tuple, Callable, Optional, ClassVar, MutableMapping
import itertools
import logging
from collections.abc import Mapping
from dataclasses import dataclass, InitVar, field
import glob
import lark.exceptions
from arpoc.exceptions import *
#import arpoc
#import arpoc.ac
import arpoc.ac.common as common
import arpoc.ac.parser as parser
from .conflict_resolution import *
#logging.basicConfig(level=logging.DEBUG)
LOGGER = logging.getLogger(__name__)
#__all__ = ["conflict_resolution", "common", "parser"]
@dataclass
class EvaluationResult:
missing_attr: List[str] = field(default_factory=list)
results: Dict[str, Optional[common.Effects]] = field(default_factory=dict)
obligations: List[Any] = field(default_factory=list)
@dataclass
class AC_Entity(ABC):
""" Class for all access control entities (policy sets, policies, rules"""
container: ClassVar[Optional['AC_Container']]
entity_id: str
target: str
description: str
obligations: List[str]
def _evaluate(self, entity_id: str, getter: Dict,
evaluation_result: EvaluationResult, cr: ConflictResolution,
context: Dict) -> None:
if entity_id not in evaluation_result.results:
LOGGER.debug("Considering entity_id %s", entity_id)
try:
evaluation_result = getter[entity_id].evaluate(
context, evaluation_result)
except KeyError:
raise ACEntityMissing(entity_id)
cr.update(entity_id, evaluation_result.results[entity_id])
def evaluate(
self,
context: Dict,
evaluation_result: Optional[EvaluationResult] = None
) -> EvaluationResult:
""" Evaluate Policy Set"""
evaluation_result = (evaluation_result
if evaluation_result is not None
else EvaluationResult())
try:
cr_str = getattr(self, "conflict_resolution")
cr_obj = cr_switcher[cr_str]()
except KeyError:
raise NotImplementedError(
"Conflict Resolution %s is not implemented" % cr_str)
except AttributeError:
# This happens if we are evaluating a rule here
pass
try:
if self._check_match(context):
assert self.container is not None
if hasattr(self, "policy_sets"):
for policy_set_id in getattr(self, "policy_sets"):
self._evaluate(policy_set_id,
self.container.policy_sets,
evaluation_result, cr_obj, context)
if cr_obj.check_break():
break
if hasattr(self, "policies"):
for policy_id in getattr(self, "policies"):
self._evaluate(policy_id, self.container.policies,
evaluation_result, cr_obj, context)
if cr_obj.check_break():
break
if hasattr(self, "rules"):
for rule_id in getattr(self, "rules"):
self._evaluate(rule_id, self.container.rules,
evaluation_result, cr_obj, context)
if cr_obj.check_break():
break
evaluation_result.obligations.extend(self.obligations)
except ACEntityMissing as excep:
LOGGER.warning(
"%s requested entity %s, but was not found in container",
self.entity_id, excep.args[0])
LOGGER.warning(traceback.format_exc())
evaluation_result.results[self.entity_id] = None
return evaluation_result
except lark.exceptions.VisitError as e:
if e.orig_exc.__class__ == parser.SubjectAttributeMissing:
evaluation_result.results[self.entity_id] = None
evaluation_result.missing_attr.append(e.orig_exc.attr)
return evaluation_result
if e.orig_exc.__class__ == parser.ObjectAttributeMissing:
evaluation_result.results[self.entity_id] = None
return evaluation_result
if e.orig_exc.__class__ == parser.EnvironmentAttributeMissing:
evaluation_result.results[self.entity_id] = None
return evaluation_result
raise
# Update Evaluation Result
evaluation_result.results[self.entity_id] = cr_obj.get_effect()
return evaluation_result
def _check_match(self, context: Dict) -> bool:
return parser.check_target(self.target, context)
@dataclass
class Policy_Set(AC_Entity):
conflict_resolution: str
policy_sets: List[str]
policies: List[str]
@dataclass
class Policy(AC_Entity):
conflict_resolution: str
rules: List[str]
@dataclass
class Rule(AC_Entity):
condition: str
effect: InitVar[str]
def __post_init__(self, effect: str) -> None:
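# Map the effect name from the JSON definition (e.g. "GRANT") to the common.Effects enum member.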
self.effect = common.Effects[effect]
def evaluate(
self,
context: Dict,
evaluation_result: Optional[EvaluationResult] = None
) -> EvaluationResult:
evaluation_result = (evaluation_result
if evaluation_result is not None
else EvaluationResult())
try:
evaluate_to_if_missing = None
if self._check_match(context):
evaluate_to_if_missing = common.Effects(not self.effect)
if self._check_condition(context):
evaluation_result.results[self.entity_id] = self.effect
else:
evaluation_result.results[self.entity_id] = common.Effects(
not self.effect)
evaluation_result.obligations.extend(self.obligations)
return evaluation_result
evaluation_result.results[self.entity_id] = None
return evaluation_result
except lark.exceptions.VisitError as e:
if e.orig_exc.__class__ == parser.SubjectAttributeMissing:
evaluation_result.results[
self.entity_id] = evaluate_to_if_missing
evaluation_result.missing_attr.append(e.orig_exc.attr)
return evaluation_result
if e.orig_exc.__class__ == parser.ObjectAttributeMissing:
evaluation_result.results[
self.entity_id] = evaluate_to_if_missing
return evaluation_result
if e.orig_exc.__class__ == parser.EnvironmentAttributeMissing:
evaluation_result.results[
self.entity_id] = evaluate_to_if_missing
return evaluation_result
raise
return evaluation_result
def _check_condition(self, context: Dict[str, Dict]) -> bool:
return parser.check_condition(self.condition, context)
class AC_Container:
def __init__(self) -> None:
self.policies: Dict[str, Policy] = dict()
self.policy_sets: Dict[str, Policy_Set] = dict()
self.rules: Dict[str, Rule] = dict()
def __str__(self) -> str:
string = ""
for key, val in itertools.chain(self.policies.items(),
self.policy_sets.items(),
self.rules.items()):
string += "\n{}\n{}".format(str(key), str(val))
return string
def load_file(self, filename: str) -> None:
try:
with open(filename) as f:
data = json.load(f)
for entity_id, definition in data.items():
self.add_entity(entity_id, definition)
except json.decoder.JSONDecodeError:
LOGGER.error("JSON File %s is no valid json", filename)
except TypeError:
LOGGER.error("Error handling file: %s", filename)
def load_dir(self, path: str) -> None:
for f in glob.glob(path + "/*.json"):
self.load_file(f)
def evaluate_by_entity_id(
self,
entity_id: str,
context: Dict[str, MutableMapping],
evaluation_result: Optional[EvaluationResult] = None
) -> EvaluationResult:
if evaluation_result is None:
evaluation_result = EvaluationResult()
if entity_id in evaluation_result.results:
return evaluation_result
# Effect, Missing
try:
evaluation_result = self.policy_sets[entity_id].evaluate(
context, evaluation_result)
except KeyError:
LOGGER.debug("Requested ps %s, but was not found in container",
entity_id)
raise ACEntityMissing(entity_id)
return evaluation_result
def add_entity(self, entity_id: str, definition: Dict[str, str]) -> None:
if not isinstance(definition, Dict):
LOGGER.warning("Cannot find ac entity type or cannot initialize")
LOGGER.warning('Error at: %s', definition)
raise TypeError("Cannot add ac entity without definition as dict")
# if AC_Entity.container is None:
# AC_Entity.container = self
switcher = {"Policy": Policy, "PolicySet": Policy_Set, "Rule": Rule}
switcher_dict: Dict[str, Dict[str, Any]] = {
"Policy": self.policies,
"PolicySet": self.policy_sets,
"Rule": self.rules
}
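# Translate the capitalized keys used in the JSON policy files to constructor argument names;
# "Type" maps to "--filter--" and is dropped below, since it only selects which class to instantiate.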
cleaner = {
"Description": "description",
"Target": "target",
"PolicySets": "policy_sets",
"Policies": "policies",
"Rules": "rules",
"Resolver": "conflict_resolution",
"Effect": "effect",
"Condition": "condition",
"Obligations": "obligations",
"Type": "--filter--"
}
kwargs = {
cleaner[key]: value
for (key, value) in definition.items()
if cleaner.get(key) != "--filter--"
}
LOGGER.debug('Creating %s with parameters %s', definition['Type'],
str(kwargs))
try:
obj: AC_Entity = switcher[definition['Type']](entity_id, **kwargs)
AC_Entity.container = self
obj_container: Dict[str,
AC_Entity] = switcher_dict[definition['Type']]
obj_container[entity_id] = obj
except KeyError:
LOGGER.warning("Cannot find ac entity type or cannot initialize")
LOGGER.warning('Error at: %s', definition)
except TypeError:
LOGGER.warning("Probably error in AC Entity Definition")
LOGGER.warning('Error at: %s with parameters %s',
definition['Type'], str(kwargs))
def check(self) -> bool:
consistent = True
for key, entity in itertools.chain(self.policy_sets.items(),
self.policies.items(),
self.rules.items()):
if hasattr(entity, "conflict_resolution"):
cr_str = getattr(entity, "conflict_resolution")
if cr_str not in cr_switcher:
print("Conflict Resolution %s not found requested by %s" % (cr_str, key))
if hasattr(entity, "policy_sets"):
for policy_set in getattr(entity, "policy_sets"):
if policy_set not in self.policy_sets:
consistent = False
print("Could not find policy set %s requested by %s" % (policy_set, key))
if hasattr(entity, "policies"):
for policy in getattr(entity, "policies"):
if policy not in self.policies:
consistent = False
print("Could not find policy %s requested by %s" % (policy, key))
if hasattr(entity, "rules"):
for rule in getattr(entity, "rules"):
if rule not in self.rules:
consistent = False
print("Could not find rule %s requested by %s" % (policy, key))
if not parser.parseable(parser.lark_target, entity.target):
consistent = False
print("Target rule is not parseable: %s in %s" % (entity.target, key))
if hasattr(entity, "condition"):
if not parser.parseable(parser.lark_condition, getattr(entity, "condition")):
consistent = False
print("Target rule is not parseable: %s in %s" %
(getattr(entity, "condition"), key))
return consistent
def print_sample_ac() -> None:
ac = """
{
"com.example.policysets.default": {
"Type": "PolicySet",
"Description": "Default Policy Set",
"Target": "True",
"Policies": ["com.example.policies.default"],
"PolicySets": [],
"Resolver": "ANY",
"Obligations" : []
},
"com.example.policies.default": {
"Type": "Policy",
"Description": "Default Policy",
"Target" : "True",
"Rules" : [ "com.example.rules.default" ],
"Resolver": "AND",
"Obligations" : []
},
"com.example.rules.default" : {
"Type": "Rule",
"Target": "True",
"Description": "Default Rule",
"Condition" : "True",
"Effect": "GRANT",
"Obligations" : []
}
}
"""
print(ac)
container = AC_Container() | ARPOC | /ARPOC-0.3.1.tar.gz/ARPOC-0.3.1/arpoc/ac/__init__.py | __init__.py |
"""Automated downloading of queue items from AlphaRatio."""
import json
import sys
from pathlib import Path
import click
from environs import Env, EnvError
from httpx import Client, Headers
from loguru import logger
__version__ = "1.2.0"
def _check_version(context: click.core.Context, _param: click.core.Option, value: bool) -> None: # noqa: FBT001
"""Check current version at Pypi."""
if not value or context.resilient_parsing:
return
logger.configure(handlers=[{"sink": sys.stdout, "format": "{message}", "level": "INFO"}])
try:
client = Client(http2=True)
latest = client.get("https://pypi.org/pypi/arqueue/json").json()["info"]["version"]
logger.info("You are currently using v{} the latest is v{}", __version__, latest)
client.close()
except TimeoutError:
logger.exception("Timeout reached fetching current version from Pypi - ARQueue v{}", __version__)
raise
context.exit()
def set_logging(context: click.core.Context) -> None:
"""Set logging level."""
if context.params["verbose"] and context.params["verbose"] == 1:
level = "DEBUG"
elif context.params["verbose"] and context.params["verbose"] >= 2:
level = "TRACE"
else:
level = "INFO"
logger.configure(
handlers=[
{"sink": sys.stdout, "format": "{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}", "level": level},
],
)
def get_config(context: click.core.Context) -> dict:
"""Gather config data."""
config = {}
config_path = None
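# Resolve the config location in order of precedence: explicit -c/--config path,
# then ~/.config/arqueue/config, then a .env file in the current directory.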
if context.params["config"]:
logger.info("Setting config location to {}", context.params["config"])
config_path = context.params["config"]
if not Path(config_path).expanduser().exists():
logger.error("Config file not found at {}", config_path)
sys.exit(5)
elif Path("~/.config/arqueue/config").expanduser().is_file():
logger.debug("Using config at {}", Path("~/.config/arqueue/config").expanduser())
config_path = Path("~/.config/arqueue/config").expanduser()
elif Path(".env").is_file() and not config_path:
logger.debug("Using config at {}", Path(Path(), ".env"))
config_path = ".env"
elif not Path(Path(__file__).parent, ".env").is_file() and not config_path:
logger.error("No .env file found or provided")
logger.error(
"Provide one with -c or place one at {} or {}",
Path("~/.config/arqueue/config").expanduser(),
Path(Path(__file__).parent, ".env"),
)
sys.exit(5)
env = Env()
env.read_env(path=config_path, recurse=False) # type: ignore[arg-type]
try:
config["auth_key"] = env("auth_key")
config["torr_pass"] = env("torrent_pass")
config["watch_dirs"] = env.dict("watch_dirs")
except EnvError:
logger.exception("Key error in .env")
sys.exit(11)
return config
@click.command()
@click.option("-c", "--config", "config", type=str, default=None, help="Specify a config file to use.")
@click.help_option("-h", "--help")
@click.option("-v", "--verbose", count=True, default=None, help="Increase verbosity of output.")
@click.option(
"--version",
is_flag=True,
is_eager=True,
expose_value=False,
callback=_check_version,
help="Check version and exit.",
)
@click.pass_context
def main(context: click.Context, **_) -> None:
"""Automated downloading of queue items from AlphaRatio."""
set_logging(context)
config = get_config(context)
headers = Headers({"User-Agent": "AlphaRatio Queue"})
client = Client(headers=headers, http2=True, base_url="https://alpharatio.cc")
url_keys = f"&authkey={config['auth_key']}&torrent_pass={config['torr_pass']}"
url = f"/torrents.php?action=getqueue{url_keys}"
logger.trace("Queue request URL: https://alpharatio.cc{}", url)
response = client.get(url)
result = json.loads(response.text)
logger.debug("Queue response: {}", result)
if result["status"] == "error":
logger.debug("No torrents queued for download")
sys.exit()
try:
queue = result["response"]
logger.debug("Queue length: {}", len(queue))
except KeyError:
logger.exception("No response key found and status is not error")
sys.exit(18)
for item in queue:
logger.debug("Processing queue item: {}", item)
torrent_id = item["TorrentID"]
download_link = f"/torrents.php?action=download&id={torrent_id}{url_keys}"
if int(item["FreeLeech"]):
download_link = f"{download_link}&usetoken=1"
logger.debug("Freeleech download")
logger.trace("Download link: https://alpharatio.cc{}", download_link)
category = item["Category"]
watch_dirs = config["watch_dirs"]
try:
watch_dir = watch_dirs[category]
except KeyError:
watch_dir = watch_dirs["Default"]
logger.debug("Watch dir: {} with category {}", watch_dir, category)
torrent_response = client.get(download_link)
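# Extract the torrent file name from the Content-Disposition header (dropping the trailing quote).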
filename = torrent_response.headers["Content-Disposition"].split('filename="')[1][:-1]
torrent_path = Path(watch_dir, filename)
Path(torrent_path).parent.mkdir(parents=True, exist_ok=True)
Path(torrent_path).write_bytes(torrent_response.read())
logger.info("Downloaded {} to {} successfully", filename[:-8], watch_dir)
client.close()
if __name__ == "__main__":
main() | ARQueue | /ARQueue-1.2.0.tar.gz/ARQueue-1.2.0/arqueue.py | arqueue.py |
# AR Queue Watcher
[](https://pypi.python.org/pypi/arqueue)
[](https://pypi.python.org/pypi/arqueue)
[](https://github.com/OMEGARAZER/arqueue/actions/workflows/test.yml)
[](https://github.com/charliermarsh/ruff)
[](https://github.com/psf/black)
[](https://github.com/pre-commit/pre-commit)
Automated downloading of queue items from AlphaRatio.
## Installation
### From pypi
Suggested to install via [pipx](https://pypa.github.io/pipx) with:
```bash
pipx install arqueue
```
or pip with:
```bash
pip install arqueue
```
### From repo
Clone the repo with:
```bash
git clone https://github.com/OMEGARAZER/arqueue.git
cd ./arqueue
```
Suggested to install via [pipx](https://pypa.github.io/pipx) with:
```bash
pipx install -e .
```
or pip with:
```bash
pip install -e .
```
## Configuration
Configuration can be done in three ways:
1. Create a file containing your auth_key, torrent_pass and watch_dirs (in the same format as the `.env.sample` file; a sample is sketched after this list) and pass it to the script with `-c`.
2. Copy the `.env.sample` file to `.config/arqueue/config` and edit to contain your auth_key, torrent_pass and your watch_dirs.
3. Rename `.env.sample` to `.env` and edit to contain your auth_key, torrent_pass and your watch_dirs (not recommended unless installed from repo).
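A minimal sketch of what such a config file could look like (all values and category names are placeholders; `watch_dirs` uses the comma-separated `Category=path` mapping read by the script and should include a `Default` entry, used when no directory is configured for a torrent's category):
```
auth_key=your_auth_key
torrent_pass=your_torrent_pass
watch_dirs=Default=/home/user/watch,TV=/home/user/watch/tv
```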
## Running
After configuring you can run it with:
```bash
arqueue
```
or if passing a config file:
```bash
arqueue -c <path to config>
```
You can increase the verbosity of the output by passing `-v` or `-vv`.
* `-v` enables debug output to show request responses and process updates.
* `-vv` enables trace output which shows debug output as well as requested urls (Which include the secrets, use only when required).
### Crontab
To run via crontab you can use this line, replacing {HOME} with your home directory.
```bash
* * * * * {HOME}/.local/bin/arqueue >/dev/null 2>&1
```
Unless [configured](#configuration) through option 3, you will need to pass your config as well.
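For example, a crontab entry that passes an explicit config path (the path is only an illustration) could look like this:
```bash
* * * * * {HOME}/.local/bin/arqueue -c {HOME}/arqueue.conf >/dev/null 2>&1
```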
| ARQueue | /ARQueue-1.2.0.tar.gz/ARQueue-1.2.0/README.md | README.md |
ARS: Autonomous Robot Simulator
===============================
.. image:: https://pypip.in/d/ARS/badge.png
:target: https://crate.io/packages/ARS/
ARS is a physically-accurate open-source simulation suite for research and
development of mobile manipulators and, in general, any multi-body system. It
is modular, easy to learn and use, and can be a valuable tool in the process
of robot design, in the development of control and reasoning algorithms, as
well as in teaching and educational activities.
It will encompass a wide range of tools spanning from kinematics and dynamics
simulation to robot interfacing and control.
The software is implemented in Python integrating the
`Open Dynamics Engine (ODE) <https://sourceforge.net/projects/opende/>`_
and the `Visualization Toolkit (VTK) <http://www.vtk.org/>`_.
Installation and Requirements
-----------------------------
For installation instructions and requirements, see the
`online documentation <http://ars-project.readthedocs.org/en/latest/installation/>`_.
ARS relies on the following software:
* Python 2.7
* ODE (Open Dynamics Engine) 0.12 with Python bindings (revisions from 1864 onward may work, but are untested)
* VTK (Visualization Toolkit) 5.8 with Python bindings
* NumPy 1.6
Provided ODE and VTK are already installed, execute this to get ARS up and running:
.. code-block:: bash
$ pip install ARS
Documentation
-------------
The documentation is hosted at
`ReadTheDocs.org <http://ars-project.readthedocs.org>`_
and it is generated dynamically after each commit to the repository.
License
-------
This software is licensed under the OSI-approved "BSD License". To avoid
confusion with the original BSD license from 1990, the FSF refers to it as
"Modified BSD License". Other names include "New BSD", "revised BSD", "BSD-3",
or "3-clause BSD".
See the included LICENSE.txt file.
Tests
-----
To run the included test suite you need two additional packages (``tox`` and ``mock``):
.. code-block:: bash
~/ars$ pip install -r requirements_test.txt
~/ars$ tox
| ARS | /ARS-0.5a2.zip/ARS-0.5a2/README.rst | README.rst |
from importlib import import_module
import sys
DEMOS_PACKAGE_PREFIX = 'demos'
DEMOS = [
# (module, class_name),
('CentrifugalForceTest', 'CentrifugalForceTest'),
('ControlledSimpleArm', 'ControlledSimpleArm'),
('FallingBall', 'FallingBall'),
('FallingBalls', 'FallingBalls'),
('SimpleArm', 'SimpleArm'),
('Vehicle1', 'Vehicle1'),
('Vehicle2', 'Vehicle2'),
('Vehicle2WithScreenshots', 'Vehicle2WithScreenshots'),
('VehicleWithArm', 'VehicleWithArm'),
('VehicleWithControlledArm', 'VehicleWithControlledArm'),
('IROS.example1_bouncing_ball', 'Example1'),
('IROS.example1_bouncing_balls-no_data', 'Example1NoData'),
('IROS.example2_conical_pendulum', 'Example2'),
('IROS.example3_speed_profile', 'Example3'),
('IROS.example4_sinusoidal_terrain', 'Example4'),
('IROS.example4b_sinusoidal_terrain_with_screenshot_recorder', 'Example4SR'),
('IROS.example5_vehicle_with_user_input', 'Example5'),
('sensors.accelerometer1', 'Accelerometer'),
('sensors.accelerometer2', 'Accelerometer'),
('sensors.body', 'GPSSensor'),
('sensors.joint_power1', 'JointPower'),
('sensors.joint_torque1', 'JointTorque'),
('sensors.kinetic_energy1', 'KineticEnergy'),
('sensors.laser', 'LaserSensor'),
('sensors.laser', 'VisualLaser'),
('sensors.potential_energy1', 'PotentialEnergy'),
('sensors.rotary_joint', 'RotaryJointSensor'),
('sensors.system_total_energy1', 'SystemTotalEnergy'),
('sensors.total_energy1', 'TotalEnergy'),
('sensors.velometer1', 'Velometer'),
]
INTRODUCTION_MSG = """
This executable can run all the demos included in ARS.
"""
INSTRUCTIONS = """
Enter one of the following values:
d: print demo list
(number): run a demo (patience, the first launch is a little slower)
q: quit
"""
QUIT_STR = 'q'
def show_demo_list():
print("-" * 30)
print("\nDEMO LIST\n\nindex: (module, class name)")
for i, option in enumerate(DEMOS):
print('%s: %s' % (i, option))
def run_demo(selection):
try:
selected_demo_index = int(selection)
except ValueError:
print('Error, invalid input')
return 1
try:
selected_demo = DEMOS[selected_demo_index] # (module, class_name)
except IndexError:
print('Error, option number is out of range')
return 2
module = import_module(DEMOS_PACKAGE_PREFIX + '.' + selected_demo[0])
klass = getattr(module, selected_demo[1])
print("-" * 30)
print('%s: %s' % (selected_demo_index, selected_demo))
sim_program = klass()
sim_program.start()
try:
sim_program.print_final_data()
except AttributeError:
pass
sim_program.finalize()
def main():
user_input = None
print(INTRODUCTION_MSG)
while True:
print(INSTRUCTIONS)
user_input = raw_input('value: ')
user_input = user_input.strip().lower()
if user_input == 'd':
show_demo_list()
elif user_input == QUIT_STR:
print('Bye')
return 0
else:
run_demo(user_input)
if __name__ == '__main__':
exit_value = main()
sys.exit(exit_value) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demo_runner.py | demo_runner.py |
from ars.app import Program, dispatcher, logger
import ars.utils.mathematical as mut
import ars.constants as cts
from ars.model.simulator import signals
#View point
#==============================================================================
# vp_hpr = (90.0, 0.0, 0.0) # orientation [degrees]
#
# QUICKSTEP_ITERS = 20 # # of iterations for the QuickStep function
#
# GRAVITY = -9.81
# GLOBAL_CFM = 1e-10 # default for ODE with double precision
# GLOBAL_ERP = 0.8
#==============================================================================
def get_sphere_volume(radius):
return 4.0 / 3.0 * mut.pi * (radius ** 3)
class CentrifugalForceTest(Program):
OFFSET = (2, 0.5, 2)
# ((size, center), density)
BOX_PARAMS = (((5, 0.5, 5), (0, -0.25, 0)), {'density': 1})
WINDOW_SIZE = (900, 600)
CAMERA_POSITION = (2, 5, 10) # position [meters]
FPS = 50
STEPS_PER_FRAME = 20 # 200 #STEP_SIZE = 1e-5 # 0.01 ms
POLE_SPEED_STEP = 0.01
POLE_VISUAL_RADIUS = 0.05 # 5 cm. how it will be displayed
POLE_HEIGHT = 2 # [meters]
POLE_INITIAL_POS = (0.0, 1.0, 0.0)
BALL_MASS = 1.0 # [kg]
BALL_RADIUS = 0.01 # 1 cm, in [m]
BALL_VISUAL_RADIUS = 0.1 # 10 cm, in [m]
BALL_INITIAL_POS = (0.0, 1.0, 1.0)
JOINT1_ANCHOR = (0.0, 0.0, 0.0)
JOINT1_AXIS = (0.0, 1.0, 0.0) # Y-axis
JOINT1_FMAX = 100
JOINT2_ANCHOR = (0.0, 2.0, 1.0)
JOINT2_AXIS = (1.0, 0.0, 0.0) # X-axis
CABLE_LENGTH = mut.length3(mut.sub3(BALL_INITIAL_POS, JOINT2_ANCHOR))
JOINT2_ANGLE_RATE_CONTROLLER_KP = 500.0
JOINT1_ANGLE_RATE_INITIAL = 3.0
def __init__(self):
Program.__init__(self)
self.key_press_functions.add('a', self.inc_joint1_vel)
self.key_press_functions.add('z', self.dec_joint1_vel)
#self.key_press_functions.add('f', self.rotate_clockwise)
dispatcher.connect(self.on_pre_frame, signals.SIM_PRE_FRAME)
self.joint1_vel_user = self.JOINT1_ANGLE_RATE_INITIAL
self.large_speed_steps = True
#TODO: set ERP, CFM
def create_sim_objects(self):
box = self.sim.add_box(*self.BOX_PARAMS[0], **self.BOX_PARAMS[1])
# It doesn't really matter if pole has mass since speed is set
pole = self.sim.add_cylinder(
self.POLE_HEIGHT,
self.POLE_VISUAL_RADIUS,
self.POLE_INITIAL_POS,
density=1.0)
ball_density = self.BALL_MASS / get_sphere_volume(self.BALL_RADIUS)
# FIXME: visual radius => did not affect the results noticeably
ball = self.sim.add_sphere(
self.BALL_RADIUS,
self.BALL_INITIAL_POS,
density=ball_density)
# bodies are rotated before attaching themselves through joints
self.sim.get_object(pole).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(box).offset_by_position(self.OFFSET)
self.sim.get_object(pole).offset_by_position(self.OFFSET)
self.sim.get_object(ball).offset_by_position(self.OFFSET)
self.sim.add_rotary_joint(
'r1', # name
self.sim.get_object(box), # obj1
self.sim.get_object(pole), # obj2
None, # anchor
self.JOINT1_AXIS) # axis
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(pole),
self.sim.get_object(ball),
mut.add3(self.OFFSET, self.JOINT2_ANCHOR),
self.JOINT2_AXIS)
self.box = box
self.pole = pole
self.ball = ball
def on_pre_frame(self):
try:
self.set_joint1_speed()
self.apply_friction()
ball_pos = self.sim.get_object(self.ball).get_position()
ball_vel = self.sim.get_object(self.ball).get_linear_velocity()
ball_omega = self.sim.get_object(self.ball).get_angular_velocity()
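# Reconstruct the pendulum angle with respect to the vertical from the ball height and the cable length.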
z_top = self.JOINT2_ANCHOR[1]
theta_sim = mut.acos(
(z_top - ball_pos[1] + self.OFFSET[1]) / self.CABLE_LENGTH)
print((ball_pos, ball_vel, ball_omega, theta_sim))
except Exception:
logger.exception("Exception when executing on_pre_frame")
def inc_joint1_vel(self):
self.joint1_vel_user += self.POLE_SPEED_STEP
def dec_joint1_vel(self):
self.joint1_vel_user -= self.POLE_SPEED_STEP
def set_joint1_speed(self):
self.sim.get_joint('r1').joint.set_speed(self.joint1_vel_user, self.JOINT1_FMAX)
def apply_friction(self):
torque = -self.JOINT2_ANGLE_RATE_CONTROLLER_KP * self.sim.get_joint('r2').joint.angle_rate
self.sim.get_joint('r2').joint.add_torque(torque) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/CentrifugalForceTest.py | CentrifugalForceTest.py |
========================================
ARS demos
========================================
This directory contains many demos of how to use ARS to simulate a wide
range of situations and model complexities.
Run
====================
To run them, execute ``demo_runner.py`` (located at the root-level directory,
alongside ``setup.py``).
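For instance, from the repository root (assuming the Python 2.x interpreter
required by ARS is available as ``python``):
.. code-block:: bash
    $ python demo_runner.py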
A list of all the available demos will be displayed. The user is prompted to
enter a selection.
Stop simulation
--------------------
To end a demo/simulation, press the ``E`` or ``Q`` key (case insensitive) while
the simulation window has focus.
Stop demo_runner
--------------------
Just hit ``Q`` at the demo selection prompt.
| ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/README.rst | README.rst |
from ars.app import Program, dispatcher, logger
import ars.utils.mathematical as mut
import ars.constants as cts
from ars.model.simulator import signals
class VehicleWithArm(Program):
STEPS_PER_FRAME = 200
BACKGROUND_COLOR = (0.8, 0.8, 0.8)
FLOOR_BOX_SIZE = (20, 0.01, 20)
WHEEL_TORQUE = 3
VEHICLE_OFFSET = (2, 0.35, 4)
# ((radius, center), density)
BALL_PARAMS = ((0.3, (1, 0, 0)), {'density': 1})
# ((size, center), mass)
CHASSIS_PARAMS = (((2, 0.2, 1.5), (0.5, 0.45, 0)), {'mass': 6})
# ((length, radius, center), density)
WHEEL_R_PARAMS = ((0.4, 0.3, (0, 0, -0.5)), {'density': 1})
WHEEL_L_PARAMS = ((0.4, 0.3, (0, 0, 0.5)), {'density': 1})
# ((length, radius, center), density)
LINK1_PARAMS = ((0.8, 0.1, (0, 0, 0)), {'density': 1})
LINK2_PARAMS = ((0.6, 0.1, (0, 0.7, 0.2)), {'density': 1})
R1_TORQUE = 3
Q1_FRICTION_COEFF = 0.01
Q2_FRICTION_COEFF = 0.01
def __init__(self, use_capsule_wheels=False, frictionless_arm=True):
"""Constructor, calls the superclass constructor first."""
self._use_capsule_wheels = use_capsule_wheels
self._frictionless_arm = frictionless_arm
Program.__init__(self)
self.key_press_functions.add('up', self.go_forwards, repeat=True)
self.key_press_functions.add('down', self.go_backwards, repeat=True)
self.key_press_functions.add('left', self.turn_left, repeat=True)
self.key_press_functions.add('right', self.turn_right, repeat=True)
self.key_press_functions.add('a', self.rotate_clockwise)
self.key_press_functions.add('z', self.rotate_counterlockwise)
dispatcher.connect(self.on_pre_step, signals.SIM_PRE_STEP)
def create_sim_objects(self):
"""Implementation of the required method.
Creates and sets up all the objects of the simulation.
"""
arm_offset = (0, 0.5, 0)
#=======================================================================
# VEHICLE
#=======================================================================
if self._use_capsule_wheels:
wheelR = self.sim.add_capsule(
*self.WHEEL_R_PARAMS[0], **self.WHEEL_R_PARAMS[1])
wheelL = self.sim.add_capsule(
*self.WHEEL_L_PARAMS[0], **self.WHEEL_L_PARAMS[1])
else:
wheelR = self.sim.add_cylinder(
*self.WHEEL_R_PARAMS[0], **self.WHEEL_R_PARAMS[1])
wheelL = self.sim.add_cylinder(
*self.WHEEL_L_PARAMS[0], **self.WHEEL_L_PARAMS[1])
ball = self.sim.add_sphere(
*self.BALL_PARAMS[0], **self.BALL_PARAMS[1])
chassis = self.sim.add_box(
*self.CHASSIS_PARAMS[0], **self.CHASSIS_PARAMS[1])
# create joints: 2 rotary, 1 ball & socket
self.sim.add_rotary_joint(
'w1', # name
self.sim.get_object(chassis), # obj1
self.sim.get_object(wheelR), # obj2
None, # anchor
cts.Z_AXIS) # axis
self.sim.add_rotary_joint(
'w2',
self.sim.get_object(chassis),
self.sim.get_object(wheelL),
None,
cts.Z_AXIS)
self.sim.add_ball_socket_joint(
'bs', # name
self.sim.get_object(chassis), # obj1
self.sim.get_object(ball), # obj2
None) # anchor
self.sim.get_object(wheelR).offset_by_position(self.VEHICLE_OFFSET)
self.sim.get_object(wheelL).offset_by_position(self.VEHICLE_OFFSET)
self.sim.get_object(ball).offset_by_position(self.VEHICLE_OFFSET)
self.sim.get_object(chassis).offset_by_position(self.VEHICLE_OFFSET)
#=======================================================================
# ROBOTIC ARM
#=======================================================================
link1 = self.sim.add_capsule(
*self.LINK1_PARAMS[0], **self.LINK1_PARAMS[1])
link2 = self.sim.add_capsule(
*self.LINK2_PARAMS[0], **self.LINK2_PARAMS[1])
# bodies are rotated before attaching themselves through joints
self.sim.get_object(link1).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(link2).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(link1).offset_by_object(self.sim.get_object(chassis))
self.sim.get_object(link1).offset_by_position(arm_offset)
self.sim.get_object(link2).offset_by_object(self.sim.get_object(link1))
self.sim.add_rotary_joint(
'r1',
self.sim.get_object(chassis),
self.sim.get_object(link1),
None,
cts.Y_AXIS)
r2_anchor = mut.sub3(
self.sim.get_object(link2).get_position(),
(0, self.LINK2_PARAMS[0][0] / 2, 0)) # (0, length/2, 0)
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(link1),
self.sim.get_object(link2),
r2_anchor,
cts.Z_AXIS)
try:
self.sim.get_object(chassis).actor.set_color(cts.COLOR_RED)
self.sim.get_object(link1).actor.set_color(cts.COLOR_YELLOW)
self.sim.get_object(link2).actor.set_color(cts.COLOR_NAVY)
except AttributeError:
# if visualization is deactivated, there is no actor
pass
self.chassis = chassis
self.wheelR = wheelR
self.wheelL = wheelL
self.ball = ball
self.link1 = link1
self.link2 = link2
def go_forwards(self):
"""Rotate both powered wheels in the same direction, forwards."""
self.apply_torque_to_wheels(self.WHEEL_TORQUE, self.WHEEL_TORQUE)
def go_backwards(self):
"""Rotate both powered wheels in the same direction, backwards."""
self.apply_torque_to_wheels(-self.WHEEL_TORQUE, -self.WHEEL_TORQUE)
def turn_left(self):
"""Rotate vehicle counter-clockwise (from above)."""
self.apply_torque_to_wheels(-self.WHEEL_TORQUE, self.WHEEL_TORQUE)
def turn_right(self):
"""Rotate vehicle clockwise (from above)."""
self.apply_torque_to_wheels(self.WHEEL_TORQUE, -self.WHEEL_TORQUE)
def on_pre_step(self):
#print(self.sim.get_object(self.chassis).get_position())
try:
#time = self.sim.sim_time
q1p = self.get_q1p()
q2p = self.get_q2p()
if not self._frictionless_arm:
self.apply_friction(q1p, q2p)
print('q1p: %f, q2p: %f' % (q1p, q2p))
except Exception:
logger.exception("Exception when executing on_pre_step")
def apply_torque_to_wheels(self, torque1, torque2):
if torque1 is not None:
self.sim.get_joint('w1').add_torque(torque1)
if torque2 is not None:
self.sim.get_joint('w2').add_torque(torque2)
def rotate_clockwise(self):
self.apply_torque_to_joints(self.R1_TORQUE, 0)
def rotate_counterlockwise(self):
self.apply_torque_to_joints(-self.R1_TORQUE, 0)
def apply_torque_to_joints(self, torque1, torque2):
if torque1 is not None:
self.sim.get_joint('r1').add_torque(torque1)
if torque2 is not None:
self.sim.get_joint('r2').add_torque(torque2)
def get_q1p(self):
return self.sim.get_joint('r1').joint.angle_rate
def get_q2p(self):
return self.sim.get_joint('r2').joint.angle_rate
def apply_friction(self, q1p, q2p):
self.apply_torque_to_joints(
-q1p * self.Q1_FRICTION_COEFF,
-q2p * self.Q2_FRICTION_COEFF) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/VehicleWithArm.py | VehicleWithArm.py |
from ars.app import Program
import ars.constants as cts
class Vehicle2(Program):
TORQUE = 30
OFFSET = (3, 1, 3)
FLOOR_BOX_SIZE = (20, 0.01, 20)
def __init__(self):
"""Constructor, calls the superclass constructor first."""
Program.__init__(self)
self.key_press_functions.add('up', self.go_forwards, True)
self.key_press_functions.add('down', self.go_backwards, True)
self.key_press_functions.add('left', self.turn_left, True)
self.key_press_functions.add('right', self.turn_right, True)
def create_sim_objects(self):
"""Implementation of the required method.
Creates and sets up all the objects of the simulation.
"""
offset = self.OFFSET
# (length, radius, center, density)
wheelR = self.sim.add_cylinder(0.4, 0.3, (0, 0, -0.5), density=1)
wheelL = self.sim.add_cylinder(0.4, 0.3, (0, 0, 0.5), density=1)
# (radius, center, density)
ball = self.sim.add_sphere(0.3, (1, 0, 0), density=1)
# (size, center, density)
chassis = self.sim.add_box((2, 0.2, 1.5), (0.5, 0.45, 0), density=10)
self.sim.add_rotary_joint(
'r1', # name
self.sim.get_object(chassis), # obj1
self.sim.get_object(wheelR), # obj2
None, # anchor
cts.Z_AXIS) # axis
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(chassis),
self.sim.get_object(wheelL),
None,
cts.Z_AXIS)
self.sim.add_ball_socket_joint(
'bs', # name
self.sim.get_object(chassis), # obj1
self.sim.get_object(ball), # obj2
None) # anchor
self.sim.get_object(wheelR).offset_by_position(offset)
self.sim.get_object(wheelL).offset_by_position(offset)
self.sim.get_object(ball).offset_by_position(offset)
self.sim.get_object(chassis).offset_by_position(offset)
# test
# try:
# self.sim.get_object(wheelR).actor.set_color((0.8, 0, 0))
# except AttributeError:
# # if visualization is deactivated, there is no actor
# pass
def go_forwards(self):
"""Rotate both powered wheels in the same direction, forwards."""
self.sim.get_joint('r1').add_torque(self.TORQUE)
self.sim.get_joint('r2').add_torque(self.TORQUE)
def go_backwards(self):
"""Rotate both powered wheels in the same direction, backwards."""
self.sim.get_joint('r1').add_torque(-self.TORQUE)
self.sim.get_joint('r2').add_torque(-self.TORQUE)
def turn_left(self):
"""Rotate vehicle counter-clockwise (from above)."""
self.sim.get_joint('r1').add_torque(-self.TORQUE)
self.sim.get_joint('r2').add_torque(self.TORQUE)
def turn_right(self):
"""Rotate vehicle clockwise (from above)."""
self.sim.get_joint('r1').add_torque(self.TORQUE)
self.sim.get_joint('r2').add_torque(-self.TORQUE) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/Vehicle2.py | Vehicle2.py |
from ars.app import Program, dispatcher, logger
import ars.utils.mathematical as mut
import ars.constants as cts
from ars.model.simulator import signals
def output_data(time, sp, cv, error, torque):
print('time: %f, sp: %f, cv: %f, error: %f, torque: %f' %
(time, sp, cv, error, torque))
class ControlledSimpleArm(Program):
R1_TORQUE = 3
OFFSET = (2.5, 1, 2.5)
# ((size, center), density)
BOX_PARAMS = (((3, 0.5, 3), (0, -0.75, 0)), {'density': 1})
# ((length, radius, center), density)
LINK1_PARAMS = ((0.8, 0.1, (0, 0, 0)), {'density': 1})
LINK2_PARAMS = ((0.6, 0.1, (0, 0.7, 0.2)), {'density': 1})
SP_STEP = 0.1
q2_INITIAL_SP = 0.0 # mut.pi/3 # set point
R2_KP = 1.0 # controller proportional action
R2_KD = 0.5 # controller derivative action
Q1_FRICTION_COEFF = 0.01
Q2_FRICTION_COEFF = 0.01
def __init__(self):
Program.__init__(self)
self.key_press_functions.add('a', self.rotate_clockwise)
self.key_press_functions.add('z', self.rotate_counterlockwise)
self.key_press_functions.add('d', self.increase_sp)
self.key_press_functions.add('c', self.decrease_sp)
dispatcher.connect(self.on_pre_step, signals.SIM_PRE_STEP)
self.sp = self.q2_INITIAL_SP
self.previous_error = 0.0
def create_sim_objects(self):
box = self.sim.add_box(*self.BOX_PARAMS[0], **self.BOX_PARAMS[1])
link1 = self.sim.add_capsule(*self.LINK1_PARAMS[0], **self.LINK1_PARAMS[1])
link2 = self.sim.add_capsule(*self.LINK2_PARAMS[0], **self.LINK2_PARAMS[1])
# bodies are rotated before attaching themselves through joints
self.sim.get_object(link1).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(link2).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(box).offset_by_position(self.OFFSET)
self.sim.get_object(link1).offset_by_position(self.OFFSET)
self.sim.get_object(link2).offset_by_position(self.OFFSET)
self.sim.add_rotary_joint(
'r1', # name
self.sim.get_object(box), # obj1
self.sim.get_object(link1), # obj2
None, # anchor
cts.Y_AXIS) # axis
r2_anchor = mut.sub3(
self.sim.get_object(link2).get_position(),
(0, self.LINK2_PARAMS[0][0] / 2, 0)) # (0, length/2, 0)
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(link1),
self.sim.get_object(link2),
r2_anchor,
cts.Z_AXIS)
def on_pre_step(self):
try:
time = self.sim.sim_time
time_step = self.sim.time_step
cv = self.get_q2()
q1p = self.get_q1p()
q2p = self.get_q2p()
mv = self.get_compensation(self.sp, cv, time_step)
self.apply_torque_to_joints(0, mv)
self.apply_friction(q1p, q2p)
output_data(time, self.sp, cv, self.sp - cv, mv)
#print('q1p: %f, q2p: %f' % (q1p, q2p))
except Exception:
logger.exception("Exception when executing on_pre_step")
def rotate_clockwise(self):
self.apply_torque_to_joints(self.R1_TORQUE, 0)
def rotate_counterlockwise(self):
self.apply_torque_to_joints(-self.R1_TORQUE, 0)
def apply_torque_to_joints(self, torque1, torque2):
if torque1 is not None:
self.sim.get_joint('r1').add_torque(torque1)
if torque2 is not None:
self.sim.get_joint('r2').add_torque(torque2)
def increase_sp(self):
self.sp += self.SP_STEP
def decrease_sp(self):
self.sp -= self.SP_STEP
def get_q2(self):
return self.sim.get_joint('r2').joint.angle
def get_q1p(self):
return self.sim.get_joint('r1').joint.angle_rate
def get_q2p(self):
return self.sim.get_joint('r2').joint.angle_rate
def get_compensation(self, sp, q, time_step):
"""PD controller."""
error = (sp - q)
error_p = (error - self.previous_error) / time_step
torque = self.R2_KP * error + self.R2_KD * error_p
self.previous_error = error
return torque
def apply_friction(self, q1p, q2p):
self.apply_torque_to_joints(
-q1p * self.Q1_FRICTION_COEFF,
-q2p * self.Q2_FRICTION_COEFF) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/ControlledSimpleArm.py | ControlledSimpleArm.py |
from ars.app import Program
class Vehicle1(Program):
TORQUE = 500
def __init__(self):
Program.__init__(self)
self.key_press_functions.add('up', self.go_forwards)
self.key_press_functions.add('down', self.go_backwards)
self.key_press_functions.add('left', self.turn_left)
self.key_press_functions.add('right', self.turn_right)
def create_sim_objects(self):
# POR: point of reference
# (length, radius, center, density)
wheel1 = self.sim.add_cylinder(0.1, 0.2, (1, 1, 1), density=1)
wheel2 = self.sim.add_cylinder(0.1, 0.2, (0, 0, 1), density=1)
wheel3 = self.sim.add_cylinder(0.1, 0.2, (1, 0, 0), density=1)
wheel4 = self.sim.add_cylinder(0.1, 0.2, (1, 0, 1), density=1)
# (size, center, density)
chassis = self.sim.add_box((1.3, 0.2, 0.6), (0.5, 0, 0.5), density=10)
self.sim.get_object(wheel2).offset_by_object(
self.sim.get_object(wheel1))
self.sim.get_object(wheel3).offset_by_object(
self.sim.get_object(wheel1))
self.sim.get_object(wheel4).offset_by_object(
self.sim.get_object(wheel1))
self.sim.get_object(chassis).offset_by_object(
self.sim.get_object(wheel1))
self.sim.add_rotary_joint(
'r1', # name
self.sim.get_object(chassis), # obj1
self.sim.get_object(wheel1), # obj2
(1, 1, 1), # anchor
(0, 0, 1)) # axis
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(chassis),
self.sim.get_object(wheel2),
(1, 1, 2),
(0, 0, 1))
self.sim.add_rotary_joint(
'r3',
self.sim.get_object(chassis),
self.sim.get_object(wheel3),
(2, 1, 1),
(0, 0, 1))
self.sim.add_rotary_joint(
'r4',
self.sim.get_object(chassis),
self.sim.get_object(wheel4),
(2, 1, 2),
(0, 0, 1))
def go_forwards(self):
self.sim.get_joint('r1').add_torque(self.TORQUE)
self.sim.get_joint('r2').add_torque(self.TORQUE)
def go_backwards(self):
self.sim.get_joint('r1').add_torque(-self.TORQUE)
self.sim.get_joint('r2').add_torque(-self.TORQUE)
def turn_left(self):
self.sim.get_joint('r1').add_torque(-self.TORQUE / 2)
self.sim.get_joint('r3').add_torque(-self.TORQUE / 2)
self.sim.get_joint('r2').add_torque(self.TORQUE / 2)
self.sim.get_joint('r4').add_torque(self.TORQUE / 2)
def turn_right(self):
self.sim.get_joint('r1').add_torque(self.TORQUE / 2)
self.sim.get_joint('r3').add_torque(self.TORQUE / 2)
self.sim.get_joint('r2').add_torque(-self.TORQUE / 2)
self.sim.get_joint('r4').add_torque(-self.TORQUE / 2) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/Vehicle1.py | Vehicle1.py |
import ars.app
from ars.app import Program, Simulation, logger
from ars.model.simulator import signals
import ars.utils.mathematical as mut
import ars.constants as cts
class Example2(Program):
# simulation & window parameters
CAMERA_POSITION = (6, 3, 6)
FPS = 50
STEPS_PER_FRAME = 80
# bodies' parameters
DELTA = 0.01 # to prevent the collision of the 2nd link with the floor
OFFSET = (1, 0, 2)
# ((size, center), density)
BOX_PARAMS = (((10, 0.5, 10), (0, -0.25, 0)), {'density': 100})
POLE_RADIUS = 0.141421 # 1/(5*sqrt(2))
POLE_HEIGHT = 1
POLE_INITIAL_POS = (0.0, 0.5 + DELTA, 0.0)
POLE_MASS = 10.0
ARM_RADIUS = 0.141421
ARM_LENGTH = 1.0
ARM_INITIAL_POS = (0.0, 0.5 + DELTA, 0.1)
ARM_MASS = 10.0
JOINT1_ANCHOR = (0.0, 0.0, 0.0)
JOINT1_AXIS = cts.Y_AXIS
JOINT2_ANCHOR = (0.0, 1.0 + DELTA, 0.1)
JOINT2_AXIS = cts.X_AXIS
Q1_FRICTION_COEFF = 50e-3 * 100
Q2_FRICTION_COEFF = 50e-3 * 100
# control
MAX_TORQUE = 20
SATURATION_TIME = 1
def __init__(self):
Program.__init__(self)
ars.app.dispatcher.connect(self.on_pre_step, signals.SIM_PRE_STEP)
self.q1p_prev = 0.0
self.q2p_prev = 0.0
def create_simulation(self, *args, **kwargs):
# overriding this method would not be necessary,
# but we do it here in order to customize the floor
# set up the simulation parameters
self.sim = Simulation(self.FPS, self.STEPS_PER_FRAME)
self.sim.graph_adapter = ars.app.gp
self.sim.add_basic_simulation_objects()
self.sim.add_axes()
self.sim.add_floor(
normal=(0,1,0), box_size=self.FLOOR_BOX_SIZE,
color=(0.7, 0.7, 0.7), dist=-0.5, box_center=(0, -0.5, 0))
self.create_sim_objects()
# add the graphic objects
self.gAdapter.add_objects_list(self.sim.actors.values())
self.sim.update_actors()
def create_sim_objects(self):
box = self.sim.add_box(*self.BOX_PARAMS[0], **self.BOX_PARAMS[1])
pole = self.sim.add_cylinder(
self.POLE_HEIGHT, self.POLE_RADIUS, self.POLE_INITIAL_POS,
mass=self.POLE_MASS)
arm = self.sim.add_cylinder(self.ARM_LENGTH, self.ARM_RADIUS,
self.ARM_INITIAL_POS, mass=self.ARM_MASS)
# bodies are rotated before attaching themselves through joints
self.sim.get_object(pole).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(arm).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(box).offset_by_position(self.OFFSET)
self.sim.get_object(pole).offset_by_position(self.OFFSET)
self.sim.get_object(arm).offset_by_position(self.OFFSET)
self.sim.add_rotary_joint(
'r1', # name
self.sim.get_object(box), # obj1
self.sim.get_object(pole), # obj2
None, # anchor
self.JOINT1_AXIS) # axis
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(pole),
self.sim.get_object(arm),
mut.add3(self.OFFSET, self.JOINT2_ANCHOR),
self.JOINT2_AXIS)
try:
#self.sim.get_object(box).actor.set_color(cts.COLOR_RED)
self.sim.get_object(pole).actor.set_color(cts.COLOR_YELLOW)
self.sim.get_object(arm).actor.set_color(cts.COLOR_NAVY)
except AttributeError:
# if visualization is deactivated, there is no actor
pass
self.box = box
self.pole = pole
self.arm = arm
def on_pre_step(self):
try:
time = self.sim.sim_time
torque1 = self.get_torque_to_apply(time)
self.apply_torque_to_joints(torque1, None)
self.apply_friction(self.q1p_prev, self.q2p_prev)
q1 = self.get_q1()
q2 = self.get_q2()
q1p = self.get_q1p()
q2p = self.get_q2p()
self.q1p_prev = q1p
self.q2p_prev = q2p
print('%.7e\t%.7e\t%.7e\t%.7e\t%.7e' % (time, q1, q1p, q2, q2p))
except Exception:
logger.exception("Exception when executing on_pre_step")
def get_torque_to_apply(self, time):
if time < self.SATURATION_TIME:
torque = time * self.MAX_TORQUE
else:
torque = self.MAX_TORQUE
return torque
def get_q1(self):
return self.sim.get_joint('r1').joint.angle
def get_q2(self):
return self.sim.get_joint('r2').joint.angle
def get_q1p(self):
return self.sim.get_joint('r1').joint.angle_rate
def get_q2p(self):
return self.sim.get_joint('r2').joint.angle_rate
def apply_torque_to_joints(self, torque1, torque2):
if torque1 is not None:
self.sim.get_joint('r1').add_torque(torque1)
if torque2 is not None:
self.sim.get_joint('r2').add_torque(torque2)
def apply_friction(self, q1p, q2p):
self.apply_torque_to_joints(
-q1p * self.Q1_FRICTION_COEFF,
-q2p * self.Q2_FRICTION_COEFF)
def print_final_data(self):
# print arm links' inertia matrices
pole_body = self.sim.get_object(self.pole).body
arm_body = self.sim.get_object(self.arm).body
print(pole_body.get_inertia_tensor())
print(arm_body.get_inertia_tensor()) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/IROS/example2_conical_pendulum.py | example2_conical_pendulum.py |
import ars.exceptions as exc
from ..VehicleWithArm import VehicleWithArm, logger, mut
class Example3(VehicleWithArm):
#WINDOW_SIZE = (1024,630)
CAMERA_POSITION = (0, 8, 25) # (0,8,15)
FPS = 50
STEPS_PER_FRAME = 80
VEHICLE_OFFSET = (-4, 0.5, 4)
# ((length, radius, center), mass)
LINK1_PARAMS = ((0.8, 0.1, (0, 0, 0)), {'mass': 1})
LINK2_PARAMS = ((0.6, 0.1, (0, 0.7, 0.2)), {'mass': 1})
Q1_FRICTION_COEFF = 0.02
Q2_FRICTION_COEFF = 0.02
KP = 10 # controller proportional action
# speed profile setup
speeds = ((0, 0), (1, 0), (5, 1), (9, 1), (13, 0), (14, 0)) # (time,speed)
speed_i = 0
def __init__(self):
"""Constructor, calls the superclass constructor first."""
VehicleWithArm.__init__(self)
try:
self.sim.get_object(self.chassis).actor.set_color((0.8, 0, 0))
except AttributeError:
# if visualization is deactivated, there is no actor
pass
self.r1_angle_rate_prev = 0.0
self.r2_angle_rate_prev = 0.0
def on_pre_step(self):
try:
time = self.sim.sim_time
if self.speed_i < len(self.speeds) - 1:
if time > self.speeds[self.speed_i + 1][0]:
self.speed_i += 1
elif self.speed_i == len(self.speeds) - 1:
pass
pos = self.sim.get_object(self.chassis).get_position()
q1 = self.sim.get_joint('r1').joint.angle
q1p = self.sim.get_joint('r1').joint.angle_rate
q2 = self.sim.get_joint('r2').joint.angle
q2p = self.sim.get_joint('r2').joint.angle_rate
linear_vel = self.sim.get_object(self.chassis).get_linear_velocity()
linear_vel_XZ = (linear_vel[0], linear_vel[2])
cv = mut.length2(linear_vel_XZ) * mut.sign(linear_vel[0])
sp = self.calc_desired_speed(time)
torque = self.compensate(sp, cv)
self.apply_torque_to_wheels(torque, torque)
self.apply_friction(q1p, q2p)
print('%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e' %
(time, pos[0], cv, sp, q1, q1p, q2, q2p, torque))
except Exception:
logger.exception("Exception when executing on_pre_step")
def calc_desired_speed(self, time):
if self.speed_i == len(self.speeds) - 1:
return float(self.speeds[self.speed_i][1])
elif 0 <= self.speed_i < len(self.speeds) - 1:
time_diff = time - self.speeds[self.speed_i][0]
time_period = self.speeds[self.speed_i + 1][0] - self.speeds[self.speed_i][0]
prev_speed = float(self.speeds[self.speed_i][1])
next_speed = float(self.speeds[self.speed_i + 1][1])
return (next_speed - prev_speed) * (time_diff / time_period) + prev_speed
else:
raise exc.ArsError('invalid speed_i value: %d' % self.speed_i)
def compensate(self, sp, cv):
return (sp - cv) * self.KP
def print_final_data(self):
# print mass of bodies defined with a density value
ball_body = self.sim.get_object(self.ball).body
chassis_body = self.sim.get_object(self.chassis).body
wheelR_body = self.sim.get_object(self.wheelR).body
wheelL_body = self.sim.get_object(self.wheelL).body
print(ball_body.get_mass())
print(chassis_body.get_mass())
print(wheelR_body.get_mass())
print(wheelL_body.get_mass()) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/IROS/example3_speed_profile.py | example3_speed_profile.py |
from random import random
import ars.app
from ars.model.collision.base import HeightfieldTrimesh
from ars.model.simulator import Simulation, signals
import ars.utils.mathematical as mut
from ..VehicleWithArm import VehicleWithArm, logger
def random_heightfield(num_x, num_z, scale=1.0):
"""A heightfield where values are completely random."""
# that x and z are integers, not floats, does not matter
verts = []
for x in range(num_x):
for z in range(num_z):
verts.append((x, random() * scale, z))
return verts
def sinusoidal_heightfield(num_x, num_z, height_scale=1.0, frequency_x=1.0):
"""A sinusoidal heightfield along the X axis.
:param height_scale: controls the amplitude of the wave
:param frequency_x: controls the frequency of the wave
"""
# TODO: fix the frequency units
verts = []
for x in range(num_x):
for z in range(num_z):
verts.append((x, mut.sin(x * frequency_x) * height_scale, z))
return verts
def constant_heightfield(num_x, num_z, height=0.0):
"""A heightfield where all the values are the same."""
# that x and z are integers, not floats, does not matter
verts = []
for x in range(num_x):
for z in range(num_z):
verts.append((x, height, z))
return verts
class Example4(VehicleWithArm):
FPS = 50
STEPS_PER_FRAME = 80
CAMERA_POSITION = (0, 8, 30)
VEHICLE_OFFSET = (-1.05, -0.35, 5)
TM_X, TM_Z = (40, 20)
# ((length, radius, center), mass)
LINK1_PARAMS = ((0.8, 0.1, (0, 0, 0)), {'mass': 1})
LINK2_PARAMS = ((0.6, 0.1, (0, 0.7, 0.2)), {'mass': 1})
Q1_FRICTION_COEFF = 0.02
Q2_FRICTION_COEFF = 0.02
WHEELS_TORQUE = 4.0
MAX_SPEED = 2.0
# arm controller
q1_SP = 0.0 # set point
R1_KP = 20.0 # controller proportional action
R1_KD = 15.0 # controller derivative action
q2_SP = 0.0
R2_KP = 20.0
R2_KD = 15.0
def __init__(self):
"""Constructor, calls the superclass constructor first."""
VehicleWithArm.__init__(self)
ars.app.dispatcher.connect(self.on_pre_step, signals.SIM_PRE_STEP)
self.q1_previous_error = 0.0
self.q2_previous_error = 0.0
def create_simulation(self, *args, **kwargs):
tm_x, tm_z = self.TM_X, self.TM_Z
vertices = sinusoidal_heightfield(
tm_x, tm_z, height_scale=0.7, frequency_x=0.5)
faces = HeightfieldTrimesh.calc_faces(tm_x, tm_z)
#vertices = constant_heightfield(tm_x, tm_z, height=0.0)
#vertices = random_heightfield(tm_x, tm_z, 0.5)
#shrink_factor = (1.0,1.0)
#vertices = shrink_XZ_heightfield(vertices, shrink_factor)
# set up the simulation parameters
self.sim = Simulation(self.FPS, self.STEPS_PER_FRAME)
self.sim.graph_adapter = ars.app.gp
self.sim.add_basic_simulation_objects()
self.sim.add_axes()
self.sim.add_trimesh_floor(vertices, faces, center=(-10, 0, -10),
color=(0.7, 0.7, 0.7))
self.create_sim_objects()
# add the graphic objects
self.gAdapter.add_objects_list(self.sim.actors.values())
self.sim.update_actors()
def on_pre_step(self):
try:
time = self.sim.sim_time
pos = self.sim.get_object(self.chassis).get_position()
vel = self.sim.get_object(self.chassis).get_linear_velocity()
q1 = self.sim.get_joint('r1').joint.angle
q1p = self.sim.get_joint('r1').joint.angle_rate
q2 = self.sim.get_joint('r2').joint.angle
q2p = self.sim.get_joint('r2').joint.angle_rate
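# Drive the wheels with a constant torque only while the chassis is below the speed limit.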
if mut.length3(vel) < self.MAX_SPEED:
wheels_torque = self.WHEELS_TORQUE
else:
wheels_torque = 0.0
self.apply_torque_to_wheels(wheels_torque, wheels_torque)
self.apply_friction(q1p, q2p)
torque1, torque2 = self.get_arm_compensation(q1, q2)
self.apply_torque_to_joints(torque1, torque2)
print('%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e' %
(time, pos[0], pos[1], pos[2], q1, torque1, q2, torque2,
wheels_torque))
except Exception:
logger.exception("Exception when executing on_pre_step")
def get_arm_compensation(self, q1, q2):
"""Calculate the control torque with a PD controller."""
time_step = self.sim.time_step
error_q1 = (self.q1_SP - q1)
error_q2 = (self.q2_SP - q2)
error_q1_p = (error_q1 - self.q1_previous_error) / time_step
error_q2_p = (error_q2 - self.q2_previous_error) / time_step
torque1 = self.R1_KP * error_q1 + self.R1_KD * error_q1_p
torque2 = self.R2_KP * error_q2 + self.R2_KD * error_q2_p
self.q1_previous_error = error_q1
self.q2_previous_error = error_q2
return torque1, torque2
#==============================================================================
# def shrink_XZ_heightfield(vertices, factor=(1.0,1.0)):
# """
# test
# """
# new_vertices = []
# for vertex in vertices:
# new_vertices.append((vertex[0]/factor[0], vertex[1], vertex[2]/factor[1]))
# return new_vertices
#============================================================================== | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/IROS/example4_sinusoidal_terrain.py | example4_sinusoidal_terrain.py |
from ..VehicleWithArm import VehicleWithArm, logger
class Example5(VehicleWithArm):
FPS = 50
STEPS_PER_FRAME = 80
CAMERA_POSITION = (15, 10, 15)
WHEEL_TORQUE = 3
# ((length, radius, center), mass)
WHEEL_R_PARAMS = ((0.4, 0.3, (0, 0, -0.5)), {'mass': 1})
WHEEL_L_PARAMS = ((0.4, 0.3, (0, 0, 0.5)), {'mass': 1})
# joint 2 controller params
SP_STEP = 0.1 # set point step
q2_INITIAL_SP = 0.0 # initial set point
R2_KP = 3.0 # controller proportional action
R2_KD = 3.0 # controller derivative action
def __init__(self, use_capsule_wheels=False, frictionless_arm=False):
VehicleWithArm.__init__(self, use_capsule_wheels, frictionless_arm)
self.key_press_functions.add('d', self.increase_sp)
self.key_press_functions.add('c', self.decrease_sp)
self.sp = self.q2_INITIAL_SP
self.previous_error = 0.0
self.torque_w1 = 0.0
self.torque_w2 = 0.0
def on_pre_step(self):
try:
time = self.sim.sim_time
time_step = self.sim.time_step
pos = self.sim.get_object(self.chassis).get_position()
q1 = self.sim.get_joint('r1').joint.angle
q2 = self.sim.get_joint('r2').joint.angle
mv = self.get_compensation(self.sp, q2, time_step)
self.apply_torque_to_joints(0, mv) # torque1, torque2
print('%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e\t%.7e' %
(time, pos[0], pos[2], q1, q2, self.sp, mv,
self.torque_w1, self.torque_w2))
self.torque_w1 = 0.0
self.torque_w2 = 0.0
except Exception:
logger.exception("Exception when executing on_pre_step")
def apply_torque_to_wheels(self, torque1, torque2):
VehicleWithArm.apply_torque_to_wheels(self, torque1, torque2)
self.torque_w1 = torque1
self.torque_w2 = torque2
def increase_sp(self):
"""Increase angle set point."""
self.sp += self.SP_STEP
def decrease_sp(self):
"""Decrease angle set point."""
self.sp -= self.SP_STEP
def get_compensation(self, sp, q, time_step):
"""Calculate the control torque with a PD controller."""
error = (sp - q)
error_p = (error - self.previous_error) / time_step
torque = self.R2_KP * error + self.R2_KD * error_p
self.previous_error = error
return torque | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/IROS/example5_vehicle_with_user_input.py | example5_vehicle_with_user_input.py |
from ars.app import Program, dispatcher, logger
import ars.constants as cts
from ars.model.simulator import signals
import ars.utils.mathematical as mut
class PrintDataMixin(object):
def print_final_data(self):
collected_data = self.sensor.data_queue
print('sensor data queue count %d' % collected_data.count())
print(collected_data)
class CentrifugalForce(Program):
"""Demo of a system where inertia and centrifugal force intervene."""
OFFSET = (2, 0.5, 2)
# ((size, center), density)
BOX_PARAMS = (((5, 0.5, 5), (0, -0.25, 0)), {'density': 1})
WINDOW_SIZE = (900, 600)
CAMERA_POSITION = (2, 5, 10) # position [meters]
FPS = 50
STEPS_PER_FRAME = 20 # 200 # STEP_SIZE = 1e-5 # 0.01 ms
POLE_SPEED_STEP = 0.01
POLE_VISUAL_RADIUS = 0.05 # 5 cm. how it will be displayed
POLE_HEIGHT = 2 # 2 m
POLE_INITIAL_POS = (0.0, 1.0, 0.0) # (0.0,0.0,1.0) in C++ example
BALL_MASS = 1.0 # 1kg
BALL_RADIUS = 0.01 # 1 cm
BALL_VISUAL_RADIUS = 0.1 # 10 cm
BALL_INITIAL_POS = (0.0, 1.0, 1.0)
JOINT1_ANCHOR = (0.0, 0.0, 0.0)
JOINT1_AXIS = (0.0, 1.0, 0.0) # (0.0,0.0,1.0) Z-axis in C++ example
JOINT1_FMAX = 100
JOINT2_ANCHOR = (
0.0, 2.0, 1.0) # (0.0,2.0,1.0) # (0.0,1.0,2.0) in C++ example
JOINT2_AXIS = (1.0, 0.0, 0.0) # X-axis
CABLE_LENGTH = mut.length3(mut.sub3(BALL_INITIAL_POS, JOINT2_ANCHOR))
JOINT2_ANGLE_RATE_CONTROLLER_KP = 500.0
JOINT1_ANGLE_RATE_INITIAL = 3.0
def __init__(self):
Program.__init__(self)
self.key_press_functions.add('a', self.inc_joint1_vel)
self.key_press_functions.add('z', self.dec_joint1_vel)
dispatcher.connect(self.on_pre_frame, signals.SIM_PRE_FRAME)
self.joint1_vel_user = self.JOINT1_ANGLE_RATE_INITIAL
self.large_speed_steps = True
def create_sim_objects(self):
"""Implementation of the required method.
Creates and sets up all the objects of the simulation.
"""
box = self.sim.add_box(*self.BOX_PARAMS[0], **self.BOX_PARAMS[1])
# Q: Shouldn't pole have mass?
# A: Does not really matter because speed is set and fixed.
pole = self.sim.add_cylinder(
self.POLE_HEIGHT, self.POLE_VISUAL_RADIUS,
self.POLE_INITIAL_POS, density=1.0)
ball = self.sim.add_sphere(
self.BALL_RADIUS, self.BALL_INITIAL_POS, mass=self.BALL_MASS)
# bodies are rotated before attaching themselves through joints
self.sim.get_object(pole).rotate(cts.X_AXIS, mut.pi / 2)
self.sim.get_object(box).offset_by_position(self.OFFSET)
self.sim.get_object(pole).offset_by_position(self.OFFSET)
self.sim.get_object(ball).offset_by_position(self.OFFSET)
self.joint1 = self.sim.add_rotary_joint(
'r1', # name
self.sim.get_object(box), # obj1
self.sim.get_object(pole), # obj2
None, # anchor
self.JOINT1_AXIS) # axis
self.sim.add_rotary_joint(
'r2',
self.sim.get_object(pole),
self.sim.get_object(ball),
mut.add3(self.OFFSET, self.JOINT2_ANCHOR),
self.JOINT2_AXIS)
self.box = box
self.pole = pole
self.ball = ball
def on_pre_frame(self):
"""Handle simulation's pre-frame signal."""
try:
self.set_joint1_speed()
self.apply_friction()
except Exception:
logger.exception("Exception when executing on_pre_frame")
def inc_joint1_vel(self):
"""Increase joint1's speed set point."""
self.joint1_vel_user += self.POLE_SPEED_STEP
def dec_joint1_vel(self):
"""Decrease joint1's speed set point."""
self.joint1_vel_user -= self.POLE_SPEED_STEP
def set_joint1_speed(self):
"""Set joint1's speed."""
joint = self.sim.get_joint('r1').joint
joint.set_speed(self.joint1_vel_user, self.JOINT1_FMAX)
def apply_friction(self):
"""Calculate friction torque and apply it to joint 'r2'."""
kp = self.JOINT2_ANGLE_RATE_CONTROLLER_KP
joint = self.sim.get_joint('r2').joint
joint.add_torque(-kp * joint.angle_rate)
class CentrifugalForce2(CentrifugalForce):
"""Small modification of :class:`CentrifugalForce`.
Adds :meth:`on_post_step` handler to simulation's post-step signal.
"""
def __init__(self):
CentrifugalForce.__init__(self)
dispatcher.connect(self.on_post_step, signals.SIM_POST_STEP)
def on_post_step(self):
"""Handle simulation's post-step signal."""
pass
class FallingBalls(Program):
"""A very simple simulation: two falling balls impact the floor."""
#==========================================================================
# constants
#==========================================================================
CAMERA_POSITION = (-8, 6, 16)
BALL_CENTER = (3, 3, 1)
BALL_RADIUS = 1.0
BALL_MASS = 1.0
BALL2_OFFSET = (2, 1, 0)
STEPS_PER_FRAME = 100
def create_sim_objects(self):
"""Implementation of the required method.
Creates and sets up all the objects of the simulation.
"""
self.ball1 = self.sim.add_sphere(
self.BALL_RADIUS,
self.BALL_CENTER,
mass=self.BALL_MASS)
self.ball2 = self.sim.add_sphere(
self.BALL_RADIUS,
mut.add3(self.BALL_CENTER, self.BALL2_OFFSET),
mass=self.BALL_MASS) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/sensors/base.py | base.py |
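# --- Editor's usage sketch (not part of the original module) ----------------
# The demo classes above follow the two-step Program API: instantiate, then
# call start(); finalize() releases the window resources afterwards. Running
# this requires a working VTK install and an interactive display.
if __name__ == '__main__':
    demo = CentrifugalForce()
    demo.start()
    demo.finalize()
    # FallingBalls() or CentrifugalForce2() can be run the same way.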
import ars.app
from ars.constants import Z_AXIS
from ars.model.robot import sensors
from ars.model.simulator import signals
from ars.utils.geometry import calc_rotation_matrix
from ars.utils.mathematical import add3, mult_by_scalar3, pi, rotate3
from .base import FallingBalls, PrintDataMixin, logger
class LaserSensor(FallingBalls, PrintDataMixin):
"""Simulation of a laser sensor detecting intersection with a ball as it
falls.
The laser is positioned at :attr:`RAY_POS` and its range equals
:attr:`RAY_LENGTH`. The default orientation (positive Z-axis) of the
:class:`ars.model.robot.sensors.Laser` may be modified with
:attr:`RAY_ROTATION`.
.. seealso::
:class:`ars.model.robot.sensors.BaseSourceSensor`
Sensor is created in the `create_sim_objects` method.
`self.sensor = sensors.Laser(space, self.RAY_LENGTH)`
`self.sensor.set_position(self.RAY_POS)`
It is updated in the `on_post_step` method
`self.sensor.on_change(time)`
"""
RAY_LENGTH = 1000.0
RAY_POS = (0, 2, 1)
BACKGROUND_COLOR = (0, 0, 0)
# no rotation is the same as (any_axis, 0)
# RAY_ROTATION = calc_rotation_matrix((1, 0, 0), 0)
RAY_ROTATION = calc_rotation_matrix((0, 1, 0), pi / 2)
def __init__(self):
FallingBalls.__init__(self)
ars.app.dispatcher.connect(self.on_post_step, signals.SIM_POST_STEP)
def create_simulation(self, *args, **kwargs):
FallingBalls.create_simulation(self, add_axes=True, add_floor=False)
def create_sim_objects(self):
FallingBalls.create_sim_objects(self)
space = self.sim.collision_space
self.sensor = sensors.Laser(space, self.RAY_LENGTH)
self.sensor.set_position(self.RAY_POS)
self.sensor.set_rotation(self.RAY_ROTATION)
def on_post_step(self):
try:
time = self.sim.sim_time
self.sensor.on_change(time)
except Exception:
logger.exception("Exception when executing on_post_step")
class VisualLaser(LaserSensor):
"""A simulation identical to :class:`LaserSensor` but more
interesting visually. For each intersection of the laser and an object,
a small colored sphere is shown.
.. warning::
The viewer may need to rotate the scene to see these spheres. To do
that, just click anywhere on the visualization, hold, and drag.
"""
SIGNAL = sensors.signals.SENSOR_POST_ON_CHANGE
SPHERE_RADIUS = 0.05
SPHERE_COLOR = (1.0, 1.0, 0.0)
def __init__(self):
super(VisualLaser, self).__init__()
self.intersection_point = None
ars.app.dispatcher.connect(self.on_post_on_change, self.SIGNAL)
def on_post_step(self):
if self.intersection_point is not None:
self.gAdapter.remove_object(self.intersection_point)
self.intersection_point = None
super(VisualLaser, self).on_post_step()
def on_post_on_change(self, sender, *args, **kwargs):
"""Create and paint laser's closest contact point.
The sensor data is included in ``kwargs``. This method is to be
called at the end of :meth:`sensors.LaserSensor.on_change`, when
the measurement has already been calculated.
:param sender: signal sender
:param args: signal data
:param kwargs: signal data
"""
distance = kwargs.get('data').get_kwarg('distance')
if distance is not None:
position = self._calc_intersection(distance)
self.intersection_point = ars.app.gp.Sphere(
radius=self.SPHERE_RADIUS, center=position)
self.intersection_point.set_color(self.SPHERE_COLOR)
self.gAdapter.add_object(self.intersection_point)
def _calc_intersection(self, distance):
ray = self.sensor.get_ray()
laser_pos = ray.get_position()
laser_rot = ray.get_rotation()
distance_vector = mult_by_scalar3(Z_AXIS, distance)
offset = rotate3(laser_rot, distance_vector)
position = add3(laser_pos, offset)
return position | ARS | /ARS-0.5a2.zip/ARS-0.5a2/demos/sensors/laser.py | laser.py |
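# --- Editor's sketch (not part of the original demo) ------------------------
# How _calc_intersection composes the laser pose with a measured distance:
# with no rotation, a hit at 2.5 m simply offsets the ray origin along +Z.
# The distance value below is an illustrative assumption.
if __name__ == '__main__':
    laser_pos = (0, 2, 1)  # same as LaserSensor.RAY_POS
    no_rotation = calc_rotation_matrix((1, 0, 0), 0)
    offset = rotate3(no_rotation, mult_by_scalar3(Z_AXIS, 2.5))
    print(add3(laser_pos, offset))  # (0.0, 2.0, 3.5)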
from math import pi
from .utils.mathematical import mult_by_scalar3
#==============================================================================
# GEOMETRY
#==============================================================================
X_AXIS = (1.0, 0.0, 0.0)
X_AXIS_NEG = (-1.0, 0.0, 0.0)
Y_AXIS = (0.0, 1.0, 0.0)
Y_AXIS_NEG = (0.0, -1.0, 0.0)
Z_AXIS = (0.0, 0.0, 1.0)
Z_AXIS_NEG = (0.0, 0.0, -1.0)
RIGHT_AXIS = X_AXIS
LEFT_AXIS = X_AXIS_NEG
UP_AXIS = Y_AXIS
DOWN_AXIS = Y_AXIS_NEG
OUT_AXIS = Z_AXIS # out of the screen
IN_AXIS = Z_AXIS_NEG # into the screen
#==============================================================================
# MATH & ALGEBRA
#==============================================================================
EYE_3X3 = ((1,0,0),(0,1,0),(0,0,1))
#==============================================================================
# PHYSICS
#==============================================================================
# "typical" constant of gravity acceleration
G_NORM = 9.81
G_VECTOR = mult_by_scalar3(DOWN_AXIS, G_NORM)
#==============================================================================
# COLORS
#==============================================================================
def convert_color(R_int, G_int, B_int):
return mult_by_scalar3((R_int,G_int,B_int), 1.0 / 256)
# names according to W3C Recommendation - 4.4 Recognized color keyword names
# http://www.w3.org/TR/SVG/types.html#ColorKeywords
COLOR_BLACK = convert_color(0,0,0)
COLOR_BLUE = convert_color(0,0,255)
COLOR_BROWN = convert_color(165,42,42)
COLOR_CYAN = convert_color(0,255,255)
COLOR_GOLD = convert_color(255,215,0)
COLOR_GRAY = convert_color(128,128,128)
COLOR_GREEN = convert_color(0,128,0)
COLOR_LIME = convert_color(0,255,0)
COLOR_LIME_GREEN = convert_color(50,205,50)
COLOR_MAROON = convert_color(128,0,0)
COLOR_MAGENTA = convert_color(255,0,255)
COLOR_NAVY = convert_color(0,0,128)
COLOR_OLIVE = convert_color(128,128,0)
COLOR_ORANGE = convert_color(255,165,0)
COLOR_PINK = convert_color(255,192,203)
COLOR_PURPLE = convert_color(128,0,128)
COLOR_RED = convert_color(255,0,0)
COLOR_SILVER = convert_color(192,192,192)
COLOR_SNOW = convert_color(255,250,250)
COLOR_VIOLET = convert_color(238,130,238)
COLOR_YELLOW = convert_color(255,255,0)
COLOR_WHITE = convert_color(255,255,255) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/constants.py | constants.py |
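# --- Editor's note (not part of the original module) ------------------------
# Colors are plain RGB 3-tuples scaled into [0, 1), ready to be passed to
# set_color(); G_VECTOR is gravity along the DOWN (negative Y) axis.
if __name__ == '__main__':
    print(COLOR_RED)  # (0.99609375, 0.0, 0.0)
    print(G_VECTOR)   # (0.0, -9.81, 0.0)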
from abc import abstractmethod
import logging
from .. import exceptions as exc
from ..graphics import vtk_adapter as gp
from ..lib.pydispatch import dispatcher
from ..model.simulator import Simulation, signals
logger = logging.getLogger(__name__)
class Program(object):
"""Main class of ARS.
To run a custom simulation, create a subclass.
It must contain an implementation of the 'create_sim_objects' method
which will be called during the simulation creation.
To use it, only two statements are necessary:
* create an object of this class
>>> sim_program = ProgramSubclass()
* call its 'start' method
>>> sim_program.start()
"""
WINDOW_TITLE = "Autonomous Robot Simulator"
WINDOW_POSITION = (0, 0)
WINDOW_SIZE = (1024, 768) # (width,height)
WINDOW_ZOOM = 1.0
CAMERA_POSITION = (10, 8, 10)
BACKGROUND_COLOR = (1, 1, 1)
FPS = 50
STEPS_PER_FRAME = 50
FLOOR_BOX_SIZE = (10, 0.01, 10)
def __init__(self):
"""Constructor. Defines some attributes and calls some initialization
methods to:
* set the basic mapping of key to action,
* create the visualization window according to class constants,
* create the simulation.
"""
self.do_create_window = True
self.key_press_functions = None
self.sim = None
self._screenshot_recorder = None
# (key -> action) mapping
self.set_key_2_action_mapping()
self.gAdapter = gp.Engine(
self.WINDOW_TITLE,
self.WINDOW_POSITION,
self.WINDOW_SIZE,
zoom=self.WINDOW_ZOOM,
background_color=self.BACKGROUND_COLOR,
cam_position=self.CAMERA_POSITION)
self.create_simulation()
def start(self):
"""Starts (indirectly) the simulation handled by this class by starting
the visualization window. If it is closed, the simulation ends. It will
restart if :attr:`do_create_window` has been previously set to ``True``.
"""
while self.do_create_window:
self.do_create_window = False
self.gAdapter.start_window(self.sim.on_idle, self.reset_simulation,
self.on_action_selection)
def finalize(self):
"""Finalize the program, deleting or releasing all associated resources.
Currently, the following is done:
* the graphics engine is told to
:meth:`ars.graphics.base.Engine.finalize_window`
* all attributes are set to None or False
A finalized program object cannot be used for further simulations.
.. note::
This method may be called more than once without error.
"""
if self.gAdapter is not None:
try:
self.gAdapter.finalize_window()
except AttributeError:
pass
self.do_create_window = False
self.key_press_functions = None
self.sim = None
self._screenshot_recorder = None
self.gAdapter = None
def reset_simulation(self):
"""Resets the simulation by resetting the graphics adapter and creating
a new simulation.
"""
logger.info("reset simulation")
self.do_create_window = True
self.gAdapter.reset()
self.create_simulation()
def create_simulation(self, add_axes=True, add_floor=True):
"""Creates an empty simulation and:
#. adds basic simulation objects (:meth:`add_basic_simulation_objects`),
#. (if ``add_axes`` is ``True``) adds axes to the visualization at the
coordinates-system origin,
#. (if ``add_floor`` is ``True``) adds a floor with a defined normal
vector and some visualization parameters,
#. calls :meth:`create_sim_objects` (which must be implemented by
subclasses),
#. gets the actors representing the simulation objects and adds them to
the graphics adapter.
"""
# set up the simulation parameters
self.sim = Simulation(self.FPS, self.STEPS_PER_FRAME)
self.sim.graph_adapter = gp
self.sim.add_basic_simulation_objects()
if add_axes:
self.sim.add_axes()
if add_floor:
self.sim.add_floor(normal=(0, 1, 0), box_size=self.FLOOR_BOX_SIZE,
color=(0.7, 0.7, 0.7))
self.create_sim_objects()
# add the graphic objects
self.gAdapter.add_objects_list(self.sim.actors.values())
self.sim.update_actors()
@abstractmethod
def create_sim_objects(self):
"""This method must be overridden (at least once in the inheritance tree)
by the subclass that will be instantiated to run the simulator.
It shall contain statements calling its 'sim' attribute's methods for
adding objects (e.g. add_sphere).
For example:
>>> self.sim.add_sphere(0.5, (1,10,1), density=1)
"""
pass
def set_key_2_action_mapping(self):
"""Creates an :class:`ActionMap`, assigns it to :attr:`key_press_functions`
and then adds some ``(key, function)`` tuples.
"""
# TODO: add to constructor ``self.key_press_functions = None``?
self.key_press_functions = ActionMap()
self.key_press_functions.add('r', self.reset_simulation)
def on_action_selection(self, key):
"""Method called after an action is selected by pressing a key."""
logger.info("key: %s" % key)
try:
if self.key_press_functions.has_key(key):
if self.key_press_functions.is_repeat(key):
f = self.key_press_functions.get_function(key)
self.sim.all_frame_steps_callbacks.append(f)
else:
self.key_press_functions.call(key)
else:
logger.info("unregistered key: %s" % key)
except Exception:
logger.exception("")
#==========================================================================
# other
#==========================================================================
def on_pre_step(self):
"""This method will be called before each integration step of the simulation.
It is meant to be, optionally, implemented by subclasses.
"""
raise NotImplementedError()
def on_pre_frame(self):
"""This method will be called before each visualization frame is created.
It is meant to be, optionally, implemented by subclasses.
"""
raise NotImplementedError()
def create_screenshot_recorder(self, base_filename, periodically=False):
"""Create a screenshot (of the frames displayed in the graphics window)
recorder.
Each image will be written to a numbered file according to
``base_filename``. By default it will create an image each time
:meth:`record_frame` is called. If ``periodically`` is ``True`` then
screenshots will be saved in sequence. The time period between each
frame is determined according to :attr:`FPS`.
"""
self._screenshot_recorder = gp.ScreenshotRecorder(base_filename,
self.gAdapter)
if periodically:
period = 1.0 / self.FPS
self._screenshot_recorder.period = period
dispatcher.connect(self.record_frame, signals.SIM_PRE_FRAME)
def record_frame(self):
"""Record a frame using a screenshot recorder.
If frames are meant to be written periodically, a new one will be
recorded only if enough time has elapsed, otherwise it will return
``False``. The filename index will be ``time / period``.
If frames are not meant to be written periodically, then index equals
simulator's frame number.
"""
if self._screenshot_recorder is None:
raise exc.ArsError('Screenshot recorder is not initialized')
try:
time = self.sim.sim_time
period = self._screenshot_recorder.period
if period is None:
self._screenshot_recorder.write(self.sim.num_frame)
else:
self._screenshot_recorder.write(self.sim.num_frame, time)
except Exception:
raise exc.ArsError('Could not record frame')
class ActionMap(object):
def __init__(self):
self._map = {}
def add(self, key, value, repeat=False):
self._map[key] = (value, repeat)
def has_key(self, key):
return key in self._map
def get(self, key, default=None):
return self._map.get(key, default)
def get_function(self, key):
return self._map.get(key)[0]
def call(self, key):
try:
self._map[key][0]()
except Exception:
logger.exception("")
def is_repeat(self, key):
return self._map.get(key)[1]
def __str__(self):
raise NotImplementedError() | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/app/__init__.py | __init__.py |
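# --- Editor's usage sketch (not part of the original module) ----------------
# ActionMap binds a key to a zero-argument callable plus a "repeat" flag;
# Program uses the flag to decide whether the action runs once or is queued
# for every frame step. The bindings below are illustrative only.
if __name__ == '__main__':
    actions = ActionMap()
    actions.add('r', lambda: logger.info("reset requested"))
    actions.add('d', lambda: logger.info("torque applied"), repeat=True)
    if actions.has_key('r') and not actions.is_repeat('r'):
        actions.call('r')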
import itertools
from math import sqrt, pi, cos, sin, acos, atan, atan2, degrees, radians
import operator
import numpy as np
from . import generic as gut
# TODO: attribute the code sections that were taken from somewhere else
#==============================================================================
# added to the original refactored code
#==============================================================================
def radians_to_degrees(radians_):
return degrees(radians_)
# TODO: combine with the corresponding scalar-argument function
def vec3_radians_to_degrees(vector_):
result = []
for radians_ in vector_:
result.append(radians_to_degrees(radians_))
return tuple(result)
def degrees_to_radians(degrees_):
return radians(degrees_)
# TODO: combine with the corresponding scalar-argument function
def vec3_degrees_to_radians(vector_):
result = []
for degrees_ in vector_:
result.append(degrees_to_radians(degrees_))
return tuple(result)
def np_matrix_to_tuple(array_):
"""Convert Numpy 2D array (i.e. matrix) to a tuple of tuples.
source: http://stackoverflow.com/a/10016379/556413
Example:
>>> arr = numpy.array(((2, 2), (2, -2)))
>>> np_matrix_to_tuple(arr)
((2, 2), (2, -2))
:param array_: 2D array (i.e. matrix)
:type array_: :class:`numpy.ndarray`
:return: matrix as tuple of tuples
"""
return tuple(tuple(x) for x in array_)
def matrix_multiply(matrix1, matrix2):
"""Return the matrix multiplication of ``matrix1`` and ``matrix2``.
:param matrix1: LxM matrix
:param matrix2: MxN matrix
:return: LxN matrix, product of ``matrix1`` and ``matrix2``
:rtype: tuple of tuples
"""
# TODO: check objects are valid, or use exceptions to catch errors raised
# by numpy
a1 = np.array(matrix1)
a2 = np.array(matrix2)
result = np.dot(a1, a2)
if result.ndim == 1:
return tuple(result)
else:
return np_matrix_to_tuple(result)
def matrix_as_tuple(matrix_):
"""Convert ``matrix_`` to a tuple.
Example:
>>> matrix_as_tuple(((1, 2), (3, 4)))
(1, 2, 3, 4)
:param matrix_: nested tuples
:type matrix_: tuple
:return: ``matrix_`` flattened as a tuple
:rtype: tuple
"""
#TODO: improve a lot
return gut.nested_iterable_to_tuple(matrix_)
def matrix_as_3x3_tuples(tuple_9):
"""Return ``tuple_9`` as a 3-tuple of 3-tuples.
:param tuple_9: tuple of 9 elements
:return: ``tuple_9`` formatted as tuple of tuples
"""
#TODO: convert it to handle square matrices of any size
matrix = None
if isinstance(tuple_9, tuple):
if len(tuple_9) == 9:
matrix = (tuple_9[0:3], tuple_9[3:6], tuple_9[6:9])
return matrix
def calc_acceleration(time_step, vel0, vel1):
"""Calculate the vectorial subtraction ``vel1 - vel0`` divided by
``time_step``. If any of the vectors is ``None``, then ``None`` is returned.
``vel1`` is the velocity measured ``time_step`` seconds after ``vel0``.
"""
if vel0 is None or vel1 is None:
return None
vel_diff = sub3(vel1, vel0)
return div_by_scalar3(vel_diff, time_step)
def vector_matrix_vector(vector_, matrix_):
r"""Return the product of ``vector_`` transposed, ``matrix_`` and
``vector_`` again, which is a scalar value.
.. math::
v^\top \mathbf{M} v
"""
return np.dot(np.dot(np.array(vector_).T, np.array(matrix_)),
np.array(vector_))
#==============================================================================
# Original code but formatted and some refactor
#==============================================================================
def sign(x):
"""Return ``1.0`` if ``x`` is positive, ``-1.0`` otherwise."""
if x > 0.0:
return 1.0
else:
return -1.0
def length2(vector):
"""Return the length of a 2-dimension ``vector``."""
return sqrt(vector[0] ** 2 + vector[1] ** 2)
def length3(vector):
"""Return the length of a 3-dimension ``vector``."""
#TODO: convert it so it can handle vector of any dimension
return sqrt(vector[0] ** 2 + vector[1] ** 2 + vector[2] ** 2)
def neg3(vector):
"""Return the negation of 3-dimension ``vector``."""
#TODO: convert it so it can handle vector of any dimension
return (-vector[0], -vector[1], -vector[2])
def add3(vector1, vector2):
"""Return the sum of 3-dimension ``vector1`` and ``vector2``."""
#TODO: convert it so it can handle vector of any dimension
return (vector1[0] + vector2[0],
vector1[1] + vector2[1],
vector1[2] + vector2[2])
def sub3(vector1, vector2):
"""Return the difference between 3-dimension ``vector1`` and ``vector2``."""
#TODO: convert it so it can handle vector of any dimension
return (vector1[0] - vector2[0],
vector1[1] - vector2[1],
vector1[2] - vector2[2])
def mult_by_scalar3(vector, scalar):
"""Return 3-dimension ``vector`` multiplied by ``scalar``."""
#TODO: convert it so it can handle vector of any dimension
return (vector[0] * scalar, vector[1] * scalar, vector[2] * scalar)
def div_by_scalar3(vector, scalar):
"""Return 3-dimension ``vector`` divided by ``scalar``."""
#TODO: convert it so it can handle vector of any dimension
return (vector[0] / scalar, vector[1] / scalar, vector[2] / scalar)
def dist3(vector1, vector2):
"""Return the distance between the 3-dimension points ``vector1`` and
``vector2``.
"""
#TODO: convert it so it can handle vector of any dimension
return length3(sub3(vector1, vector2))
def norm3(vector):
"""Return the unit length vector parallel to 3-dimension ``vector``."""
#l = length3(vector)
#if l > 0.0:
# return (vector[0] / l, vector[1] / l, vector[2] / l)
#else:
# return (0.0, 0.0, 0.0)
return unitize(vector)
def unitize(vector_):
"""Unitize a vector, i.e. return a unit-length vector parallel to
``vector``.
"""
len_ = sqrt(sum(itertools.imap(operator.mul, vector_, vector_)))
size_ = len(vector_)
if len_ > 0.0:
div_vector = (len_,) * size_ # (len_, len_, len_, ...)
return tuple(itertools.imap(operator.div, vector_, div_vector))
else:
return (0.0, 0.0, 0.0)
def dot_product3(vector1, vector2):
"""Return the dot product of 3-dimension ``vector1`` and ``vector2``."""
return dot_product(vector1, vector2)
def dot_product(vec1, vec2):
"""Efficient dot-product operation between two vectors of the same size.
source: http://docs.python.org/library/itertools.html
"""
return sum(itertools.imap(operator.mul, vec1, vec2))
def cross_product(vector1, vector2):
"""Return the cross product of 3-dimension ``vector1`` and ``vector2``."""
return (vector1[1] * vector2[2] - vector1[2] * vector2[1],
vector1[2] * vector2[0] - vector1[0] * vector2[2],
vector1[0] * vector2[1] - vector1[1] * vector2[0])
def project3(vector, unit_vector):
"""Return projection of 3-dimension ``vector`` onto unit 3-dimension
``unit_vector``.
"""
#TODO: convert it so it can handle vector of any dimension
return mult_by_scalar3(vector, dot_product3(norm3(vector), unit_vector))
def acos_dot3(vector1, vector2):
"""Return the angle between unit 3-dimension ``vector1`` and ``vector2``."""
x = dot_product3(vector1, vector2)
if x < -1.0:
return pi
elif x > 1.0:
return 0.0
else:
return acos(x)
def rotate3(rot, vector):
"""Return the rotation of 3-dimension ``vector`` by 3x3 (row major) matrix
``rot``.
"""
return (vector[0] * rot[0] + vector[1] * rot[1] +
vector[2] * rot[2],
vector[0] * rot[3] + vector[1] * rot[4] +
vector[2] * rot[5],
vector[0] * rot[6] + vector[1] * rot[7] +
vector[2] * rot[8])
def transpose3(matrix):
"""Return the inversion (transpose) of 3x3 rotation matrix ``matrix``."""
#TODO: convert it so it can handle vector of any dimension
return (matrix[0], matrix[3], matrix[6],
matrix[1], matrix[4], matrix[7],
matrix[2], matrix[5], matrix[8])
def z_axis(rot):
"""Return the z-axis vector from 3x3 (row major) rotation matrix
``rot``.
"""
#TODO: convert it so it can handle vector of any dimension, and any column
return (rot[2], rot[5], rot[8])
#==============================================================================
# TESTS
#==============================================================================
def _test_angular_conversions(angle_):
#x = 2.0/3*pi
y = radians_to_degrees(angle_)
z = degrees_to_radians(y)
dif = angle_ - z
print('radians: %f' % angle_)
print('degrees: %f' % y)
print('difference: %f' % dif)
if __name__ == '__main__':
_test_angular_conversions(2.0 / 3 * pi)
_test_angular_conversions(4.68 * pi)
radians_ = (2.0 / 3 * pi, 2.0 * pi, 1.0 / 4 * pi)
degrees_ = (120, 360, 45)
print(vec3_radians_to_degrees(radians_))
print(vec3_degrees_to_radians(degrees_))
print(radians_) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/utils/mathematical.py | mathematical.py |
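# --- Editor's examples (not part of the original module) --------------------
# Worked calls for the matrix helpers defined above; kept under a separate
# main guard so importing the module is unaffected.
if __name__ == '__main__':
    print(matrix_multiply(((1, 2), (3, 4)), ((5,), (6,))))  # ((17,), (39,))
    print(vector_matrix_vector((1, 2), ((2, 0), (0, 3))))   # 2*1**2 + 3*2**2 = 14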
from __future__ import unicode_literals
import datetime
import subprocess
def get_version(version=None, length='full'):
"""Return a PEP 386-compliant version number from ``version``.
:param version: the value to format, expressed as a tuple of strings, of
length 5, with the element before last (i.e. version[3]) equal to
any of the following: ``('alpha', 'beta', 'rc', 'final')``
:type version: tuple of strings
:param length: the format of the returned value, equal to any of
the following: ``('short', 'medium', 'full')``
:type length: basestring
:return: version as a string
:rtype: str
>>> get_version(version=(0, 4, 0, 'alpha', 0))
0.4.dev20130401011455
>>> get_version(version=(0, 4, 0, 'alpha', 1))
0.4a1
>>> get_version(version=(0, 4, 1, 'alpha', 0))
0.4.1.dev20130401011455
>>> get_version(version=(0, 4, 1, 'alpha', 1))
0.4.1a1
>>> get_version(version=(0, 4, 0, 'beta', 0))
0.4b0
>>> get_version(version=(0, 4, 0, 'rc', 0))
0.4c0
>>> get_version(version=(0, 4, 0, 'final', 0))
0.4
>>> get_version(version=(0, 4, 0, 'final', 1))
0.4
>>> get_version(version=(0, 4, 1, 'final', 0))
0.4.1
>>> get_version(version=(0, 4, 0, 'alpha', 0), length='medium')
0.4.dev
>>> get_version(version=(0, 4, 0, 'alpha', 0), length='short')
0.4
Based on: ``django.utils.version`` @ commit 9098504.
Django's license is included at docs/Django BSD-LICENSE.txt
"""
assert length in ('short', 'medium', 'full')
if version is None:
from ars import VERSION as version
else:
assert len(version) == 5
assert version[3] in ('alpha', 'beta', 'rc', 'final')
# Now build the two parts of the version number:
# main = X.Y[.Z]
# sub = .devN - for pre-alpha releases
# | {a|b|c}N - for alpha, beta and rc releases
parts = 2 if version[2] == 0 else 3
main = '.'.join(str(x) for x in version[:parts])
if length == 'short':
return str(main)
sub = ''
if version[3] == 'alpha' and version[4] == 0:
hg_timestamp = get_hg_tip_timestamp()
if length == 'full':
if hg_timestamp:
sub = '.dev%s' % hg_timestamp
else:
sub = '.dev'
elif version[3] != 'final':
mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'c'}
sub = mapping[version[3]] + str(version[4])
return str(main + sub)
def get_hg_changeset():
"""Return the global revision id that identifies the working copy.
To obtain the value it runs the command ``hg identify --id``, whose short
form is ``hg id -i``.
>>> get_hg_changeset()
1a4b04cf687a
>>> get_hg_changeset()
1a4b04cf687a+
.. note::
When there are outstanding (i.e. uncommitted) changes in the working
copy, a ``+`` character will be appended to the current revision id.
"""
pipe = subprocess.Popen(['hg', 'identify', '--id'], stdout=subprocess.PIPE)
changeset = pipe.stdout.read()
#return changeset.strip().strip('+')
return changeset.strip()
def get_hg_tip_timestamp():
"""Return a numeric identifier of the latest changeset of the current
repository based on its timestamp.
To obtain the value it runs the command ``hg tip --template '{date}'``
>> get_hg_tip_timestamp()
'20130328021918'
Based on: ``django.utils.get_git_changeset`` @ commit 9098504, and
http://hgbook.red-bean.com/read/customizing-the-output-of-mercurial.html
Django's license is included at docs/Django BSD-LICENSE.txt
"""
# Timestamp conversion process:
# '1364437158.010800'
# datetime.datetime(2013, 3, 28, 2, 19, 18, 10800)
# '20130328021918'
pipe = subprocess.Popen(
['hg', 'tip', '--template', '{date}'], # don't use "'{date}'"
stdout=subprocess.PIPE)
timestamp = pipe.stdout.read()
try:
timestamp = datetime.datetime.utcfromtimestamp(float(timestamp))
except ValueError:
return None
return timestamp.strftime('%Y%m%d%H%M%S') | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/utils/version.py | version.py |
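# --- Editor's usage sketch (not part of the original module) ----------------
# How a release script might call get_version(); only pre-alpha versions
# (version[3] == 'alpha' and version[4] == 0) consult get_hg_tip_timestamp(),
# so the calls below do not require Mercurial.
if __name__ == '__main__':
    print(get_version(version=(0, 5, 0, 'alpha', 2), length='short'))  # 0.5
    print(get_version(version=(0, 5, 0, 'alpha', 2)))                  # 0.5a2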
import numpy as np
from . import mathematical as mut
rtb_license = """RTB (The Robotics Toolbox for Matlab) is free software: you
can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
"""
def _rot_matrix_to_rpy_angles(rot, zyx=False):
"""The roll-pitch-yaw angles corresponding to a rotation matrix.
The 3 angles RPY correspond to sequential rotations about the X, Y and Z
axes respectively.
WARNING: for the convention where Y axis points upwards, swap the returned
pitch and yaw. The input remains the same.
Translated to Python by German Larrain.
Original version in Matlab, part of 'The Robotics Toolbox for Matlab (RTB)'
as '/robot/tr2rpy.m'
Copyright (C) 1993-2011, by Peter I. Corke. See `rtb_license`.
"""
m = mut.matrix_as_3x3_tuples(rot)
# "eps: distance from 1.0 to the next largest double-precision number"
eps = 2 ** -52 # i.e. ~2.22e-16; http://www.mathworks.com/help/techdoc/ref/eps.html
rpy_1 = 0.0
rpy_2 = 0.0
rpy_3 = 0.0
if not zyx:
# XYZ order
if abs(m[2][2]) < eps and abs(m[1][2]) < eps: # if abs(m(3,3)) < eps && abs(m(2,3)) < eps
# singularity
rpy_1 = 0.0
rpy_2 = mut.atan2(m[0][2], m[2][2]) # atan2(m(1,3), m(3,3))
rpy_3 = mut.atan2(m[1][0], m[1][1]) # atan2(m(2,1), m(2,2))
else:
rpy_1 = mut.atan2(-m[1][2], m[2][2]) # atan2(-m(2,3), m(3,3))
# compute sin/cos of roll angle
sr = mut.sin(rpy_1) # sr = sin(rpy(1))
cr = mut.cos(rpy_1) # cr = cos(rpy(1))
rpy_2 = mut.atan2(m[0][2], cr * m[2][2] - sr * m[1][2]) # atan2(m(1,3), cr * m(3,3) - sr * m(2,3))
rpy_3 = mut.atan2(-m[0][1], m[0][0]) # atan2(-m(1,2), m(1,1))
else:
# old ZYX order (as per Paul book)
if abs(m[0][0]) < eps and abs(m[1][0]) < eps: # if abs(m(1,1)) < eps && abs(m(2,1)) < eps
# singularity
rpy_1 = 0.0
rpy_2 = mut.atan2(-m[2][0], m[0][0]) # atan2(-m(3,1), m(1,1))
rpy_3 = mut.atan2(-m[1][2], m[1][1]) # atan2(-m(2,3), m(2,2))
else:
rpy_1 = mut.atan2(m[1][0], m[0][0]) # atan2(m(2,1), m(1,1))
sp = mut.sin(rpy_1) # sp = sin(rpy(1))
cp = mut.cos(rpy_1) # cp = cos(rpy(1))
rpy_2 = mut.atan2(-m[2][0], # atan2(-m(3,1),
cp * m[0][0] + sp * m[1][0]) # cp * m(1,1) + sp * m(2,1))
rpy_3 = mut.atan2(sp * m[0][2] - cp * m[1][2], # atan2(sp * m(1,3) - cp * m(2,3),
cp * m[1][1] - sp * m[0][1]) # cp*m(2,2) - sp*m(1,2))
return rpy_1, rpy_2, rpy_3
class Transform(object):
r"""An homogeneous transform.
It is a composition of rotation and translation. Mathematically it can be
expressed as
.. math::
\left[
\begin{array}{ccc|c}
& & & \\
& R & & T \\
& & & \\
\hline
0 & 0 & 0 & 1
\end{array}
\right]
where `R` is the 3x3 submatrix describing rotation and `T` is the
3x1 submatrix describing translation.
source:
http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters#Denavit-Hartenberg_matrix
"""
def __init__(self, pos=None, rot=None):
"""Constructor.
With empty arguments it's just a 4x4 identity matrix.
:param pos: a size 3 vector, or 3x1 or 1x3 matrix
:type pos: tuple, :class:`numpy.ndarray` or None
:param rot: 3x3 or 9x1 rotation matrix
:type rot: tuple, :class:`numpy.ndarray` or None
"""
if pos is None:
pos = (0, 0, 0)
pos = np.array(pos)
if pos.shape != (3, 1):
pos = pos.reshape((3, 1))
if rot is None:
rot = np.identity(3)
else:
rot = np.array(rot)
if rot.shape != (3, 3):
rot = rot.reshape((3, 3))
temp = np.hstack((rot, pos))
self._matrix = np.vstack((temp, np.array([0, 0, 0, 1])))
def __str__(self):
return str(self._matrix)
@property
def matrix(self):
r"""Return matrix that contains the transform values.
:return: 4x4 matrix
:rtype: :class:`numpy.ndarray`
"""
return self._matrix
def get_long_tuple(self):
return tuple(self._matrix.flatten())
def get_translation(self, as_numpy=False):
"""Get the translation component (vector).
:param as_numpy: whether to return a numpy object or a tuple
:return: 3-sequence
:rtype: tuple or :class:`numpy.ndarray`
"""
rot = self._matrix[0:3,3]
if as_numpy:
return rot
return tuple(rot)
def get_rotation(self, as_numpy=False):
"""Get the rotation component (matrix).
:param as_numpy: whether to return a numpy object or a tuple
:return: 3x3 rotation matrix
:rtype: tuple of tuples or :class:`numpy.ndarray`
"""
rot = self._matrix[0:3,0:3]
if as_numpy:
return rot
return mut.np_matrix_to_tuple(rot)
def rot_matrix_to_hom_transform(rot):
"""Convert a rotation matrix to a homogeneous transform.
source: transform.r2t in Corke's Robotic Toolbox (python)
:param rot: 3x3 rotation matrix
:type rot: a tuple, a tuple of tuples or :class:`numpy.ndarray`
"""
if isinstance(rot, tuple):
if len(rot) == 9:
rot = (rot[0:3], rot[3:6], rot[6:9])
return np.concatenate((np.concatenate((rot, np.zeros((3, 1))), 1),
np.mat([0, 0, 0, 1])))
def calc_rotation_matrix(axis, angle):
r"""Return the row-major 3x3 rotation matrix defining a rotation of
magnitude ``angle`` around ``axis``.
Formula is the same as the one presented here (as of 2011.12.01):
http://goo.gl/RkW80
.. math::
R = \begin{bmatrix}
\cos \theta +u_x^2 \left(1-\cos \theta\right) &
u_x u_y \left(1-\cos \theta\right) - u_z \sin \theta &
u_x u_z \left(1-\cos \theta\right) + u_y \sin \theta \\
u_y u_x \left(1-\cos \theta\right) + u_z \sin \theta &
\cos \theta + u_y^2\left(1-\cos \theta\right) &
u_y u_z \left(1-\cos \theta\right) - u_x \sin \theta \\
u_z u_x \left(1-\cos \theta\right) - u_y \sin \theta &
u_z u_y \left(1-\cos \theta\right) + u_x \sin \theta &
\cos \theta + u_z^2\left(1-\cos \theta\right)
\end{bmatrix}
The returned matrix format is a length-9 tuple.
"""
cos_theta = mut.cos(angle)
sin_theta = mut.sin(angle)
t = 1.0 - cos_theta
return (t * axis[0]**2 + cos_theta,
t * axis[0] * axis[1] - sin_theta * axis[2],
t * axis[0] * axis[2] + sin_theta * axis[1],
t * axis[0] * axis[1] + sin_theta * axis[2],
t * axis[1]**2 + cos_theta,
t * axis[1] * axis[2] - sin_theta * axis[0],
t * axis[0] * axis[2] - sin_theta * axis[1],
t * axis[1] * axis[2] + sin_theta * axis[0],
t * axis[2]**2 + cos_theta)
def make_OpenGL_matrix(rot, pos):
"""Return an OpenGL compatible (column-major, 4x4 homogeneous)
transformation matrix built from the ODE compatible (row-major, 3x3)
rotation matrix ``rot`` and position vector ``pos``.
The returned matrix format is a length-16 tuple.
"""
return (rot[0], rot[3], rot[6], 0.0,
rot[1], rot[4], rot[7], 0.0,
rot[2], rot[5], rot[8], 0.0,
pos[0], pos[1], pos[2], 1.0)
def get_body_relative_vector(body, vector):
"""Return the 3-vector ``vector`` transformed into the local coordinate
system of ODE body ``body``."""
return mut.rotate3(mut.transpose3(body.get_rotation()), vector)
def rot_matrix_to_euler_angles(rot):
r"""Return the 3-1-3 Euler angles `phi`, `theta` and `psi` (using the
x-convention) corresponding to the rotation matrix `rot`, which
is a tuple of three 3-element tuples, where each one is a row (what is
called row-major order).
Using the x-convention, the 3-1-3 Euler angles `phi`, `theta` and `psi`
(around the Z, X and again the Z-axis) can be obtained as follows:
.. math::
\phi &= \arctan2(A_{31}, A_{32}) \\
\theta &= \arccos(A_{33}) \\
\psi &= -\arctan2(A_{13}, A_{23})
http://en.wikipedia.org/wiki/Rotation_representation_(mathematics)%23Rotation_matrix_.E2.86.94_Euler_angles
"""
A = rot
phi = mut.atan2(A[2][0], A[2][1]) # arctan2(A_{31}, A_{32})
theta = mut.acos(A[2][2]) # arccos(A_{33})
psi = -mut.atan2(A[0][2], A[1][2]) # -arctan2(A_{13}, A_{23})
angles = (phi, theta, psi)
return angles
def calc_inclination(rot):
"""Return the inclination (as ``pitch`` and ``roll``) inherent in rotation
matrix ``rot``, with respect to the horizontal plane (`XZ`, since the vertical
axis is `Y`). ``pitch`` is the rotation around `Z` and ``roll`` around `X`.
Examples:
>>> rot = calc_rotation_matrix((1.0, 0.0, 0.0), pi/6)
>>> pitch, roll = gemut.calc_inclination(rot)
0.0, pi/6
>>> rot = calc_rotation_matrix((0.0, 1.0, 0.0), whatever)
>>> pitch, roll = gemut.calc_inclination(rot)
0.0, 0.0
>>> rot = calc_rotation_matrix((0.0, 0.0, 1.0), pi/6)
>>> pitch, roll = gemut.calc_inclination(rot)
pi/6, 0.0
"""
# THE FOLLOWING worked only in some cases, damn
#y_up = UP_AXIS
#z_front = OUT_AXIS
#x_right = RIGHT_AXIS
#
#up_rotated = mut.rotate3(rot, y_up)
#pitch_proj = mut.dot_product(mut.cross_product(y_up, up_rotated), x_right)
#pitch = mut.sign(pitch_proj) * mut.acos_dot3(y_up, up_rotated)
#
#front_rotated = mut.rotate3(rot, z_front)
#roll_proj = mut.dot_product(mut.cross_product(z_front, front_rotated), y_up)
#roll = mut.sign(roll_proj) * mut.acos_dot3(z_front, front_rotated)
#
#return pitch, roll
roll_x, pitch_y, yaw_z = _rot_matrix_to_rpy_angles(rot)
roll = roll_x
pitch = yaw_z
#yaw = pitch_y # we don't need it
return pitch, roll
def calc_compass_angle(rot):
"""Return the angle around the vertical axis with respect to the `X+` axis,
i.e. the angular orientation inherent in a rotation matrix ``rot``,
constrained to the plane aligned with the horizon (`XZ`, since the vertical
axis is `Y`).
"""
roll_x, pitch_y, yaw_z = _rot_matrix_to_rpy_angles(rot)
yaw = pitch_y
return yaw | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/utils/geometry.py | geometry.py |
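# --- Editor's example (not part of the original module) ---------------------
# Composing a Transform from a rotation about Z and reading back its parts.
# The numbers are illustrative only.
if __name__ == '__main__':
    rot_z = calc_rotation_matrix((0.0, 0.0, 1.0), mut.pi / 2)
    trans = Transform((1.0, 2.0, 3.0), rot_z)
    print(trans.get_translation())      # (1.0, 2.0, 3.0)
    print(len(trans.get_long_tuple()))  # 16, i.e. the flattened 4x4 matrix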
from abc import ABCMeta, abstractmethod
class Entity(object):
"""Renderable and movable object.
It has position and orientation. The underlying object is :attr:`actor`,
which connects to the real entity handled by the graphics library in use.
"""
__metaclass__ = ABCMeta
adapter = None
@abstractmethod
def __init__(self, pos, rot):
self._position = pos
self._rotation = rot
self._actor = None
def set_pose(self, pos, rot):
self._position = pos
self._rotation = rot
self.adapter._update_pose(self._actor, pos, rot)
@property
def actor(self):
return self._actor
class Axes(Entity):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, pos=(0, 0, 0), rot=None, cylinder_radius=0.05):
super(Axes, self).__init__(pos, rot)
class Body(Entity):
"""Entity representing a defined body with a given color."""
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, center, rot): # TODO: rename 'center' to 'pos'
super(Body, self).__init__(center, rot)
self._color = None
@abstractmethod
def get_color(self):
return self._color
@abstractmethod
def set_color(self, color):
self._color = color
class Box(Body):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, size, pos, rot=None):
super(Box, self).__init__(pos, rot)
class Cone(Body):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, height, radius, center, rot=None, resolution=100):
"""Constructor.
:param resolution: it is the circumferential number of facets
:type resolution: int
"""
super(Cone, self).__init__(center, rot)
class Sphere(Body):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, radius, center, rot=None, phi_resolution=50,
theta_resolution=50):
"""Constructor.
:param phi_resolution: resolution in the latitude (phi) direction
:type phi_resolution: int
:param theta_resolution: resolution in the longitude (theta) direction
:type theta_resolution: int
"""
super(Sphere, self).__init__(center, rot)
class Cylinder(Body):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, length, radius, center, rot=None, resolution=10):
super(Cylinder, self).__init__(center, rot)
class Capsule(Body):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, length, radius, center, rot=None, resolution=10):
super(Capsule, self).__init__(center, rot)
class Trimesh(Body):
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, vertices, faces, pos=None, rot=None):
super(Trimesh, self).__init__(pos, rot)
class Engine(object):
"""
Abstract class, not coupled (at all) with VTK or any other graphics library.
"""
__metaclass__ = ABCMeta
@abstractmethod
def __init__(self, *args, **kwargs):
self.timer_count = 0
self.on_idle_parent_callback = None
self.on_reset_parent_callback = None
self.on_key_press_parent_callback = None
self._window_started = False
@abstractmethod
def add_object(self, obj):
"""Add ``obj`` to the visualization controlled by this adapter.
:param obj:
:type obj: :class:`Body`
"""
pass
@abstractmethod
def remove_object(self, obj):
"""Remove ``obj`` from the visualization controlled by this adapter.
:param obj:
:type obj: :class:`Body`
"""
pass
def add_objects_list(self, obj_list):
for obj in obj_list:
self.add_object(obj)
@abstractmethod
def start_window(
self, on_idle_callback, on_reset_callback, on_key_press_callback):
pass
@abstractmethod
def restart_window(self):
pass
@abstractmethod
def finalize_window(self):
"""Finalize window and remove/clear associated resources."""
pass
@abstractmethod
def _timer_callback(self, obj, event):
pass
@abstractmethod
def _key_press_callback(self, obj, event):
pass
@abstractmethod
def reset(self):
pass
@classmethod
def _update_pose(cls, obj, pos, rot):
# Raising an exception effectively makes this definition that of
# an abstract method (i.e. calling it directly raises an exception),
# except that it does not require the subclass to implement it if it
# is not used. We would like to use @classmethod AND @abstractmethod,
# but until Python 3.3 that doesn't work correctly.
# http://docs.python.org/3/library/abc.html
raise NotImplementedError()
class ScreenshotRecorder(object):
__metaclass__ = ABCMeta
file_extension = None # e.g. 'png'
def __init__(self, base_filename):
self.base_filename = base_filename
self.last_write_time = None
self.period = None
@abstractmethod
def write(self, index, time):
"""Write render-window's currently displayed image to a file.
The image format (thus the file extension too) to use must be defined
by the implementation.
Image's filename is determined by :meth:`calc_filename`.
:param index: image's index to use for filename calculation
:type index: int
:param time:
:type time:
"""
pass
def calc_filename(self, index=1):
"""Calculate a filename using ``index`` for a new image.
:param index: image's index to use for filename calculation
:type index: int
:return: image's filename
:rtype: str
"""
return '%s%d.%s' % (self.base_filename, index, self.file_extension) | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/graphics/base.py | base.py |
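# --- Editor's sketch (not part of the original module) ----------------------
# The minimal surface a concrete recorder must provide: only write() is
# abstract, while filename numbering comes from calc_filename(). The class
# below is illustrative only and simply logs the target filename.
class _NullScreenshotRecorder(ScreenshotRecorder):
    file_extension = 'png'

    def write(self, index, time=None):
        print(self.calc_filename(index))


if __name__ == '__main__':
    recorder = _NullScreenshotRecorder('screenshot_')
    recorder.write(3)  # screenshot_3.png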
import vtk
from .. import exceptions as exc
from ..utils import geometry as gemut
from . import base
TIMER_PERIOD = 50 # milliseconds
TIMER_EVENT = 'TimerEvent'
KEY_PRESS_EVENT = 'KeyPressEvent'
class Engine(base.Engine):
"""Graphics adapter to the Visualization Toolkit (VTK) library"""
def __init__(
self, title, pos=None, size=(1000, 600), zoom=1.0,
cam_position=(10, 8, 10), background_color=(0.1, 0.1, 0.4),
**kwargs):
super(Engine, self).__init__()
self.renderer = vtk.vtkRenderer()
self.render_window = None
self.interactor = None
self._title = title
self._size = size
self._zoom = zoom
self._cam_position = cam_position
self._background_color = background_color
def add_object(self, obj):
self.renderer.AddActor(obj.actor)
def remove_object(self, obj):
self.renderer.RemoveActor(obj.actor)
def start_window(self, on_idle_callback=None, on_reset_callback=None,
on_key_press_callback=None):
# TODO: refactor according to restart_window(), reset() and the
# desired behavior.
self.on_idle_parent_callback = on_idle_callback
self.on_reset_parent_callback = on_reset_callback
self.on_key_press_parent_callback = on_key_press_callback
# Create the RenderWindow and the RenderWindowInteractor and
# link between them and the Renderer.
self.render_window = vtk.vtkRenderWindow()
self.interactor = vtk.vtkRenderWindowInteractor()
self.render_window.AddRenderer(self.renderer)
self.interactor.SetRenderWindow(self.render_window)
# set properties
self.renderer.SetBackground(self._background_color)
self.render_window.SetSize(*self._size)
self.render_window.SetWindowName(self._title)
self.interactor.SetInteractorStyle(
vtk.vtkInteractorStyleTrackballCamera())
# create and configure a Camera, and set it as renderer's active one
camera = vtk.vtkCamera()
camera.SetPosition(self._cam_position)
camera.Zoom(self._zoom)
self.renderer.SetActiveCamera(camera)
self.render_window.Render()
# add observers to the RenderWindowInteractor
self.interactor.AddObserver(TIMER_EVENT, self._timer_callback)
#noinspection PyUnusedLocal
timerId = self.interactor.CreateRepeatingTimer(TIMER_PERIOD)
self.interactor.AddObserver(
KEY_PRESS_EVENT, self._key_press_callback)
self.interactor.Start()
def restart_window(self):
# TODO: code according to start_window(), reset() and the desired behavior
raise exc.ArsError()
def finalize_window(self):
"""Finalize and delete :attr:`renderer`, :attr:`render_window`
and :attr:`interactor`.
.. seealso::
http://stackoverflow.com/questions/15639762/
and
http://docs.python.org/2/reference/datamodel.html#object.__del__
"""
self.render_window.Finalize()
self.interactor.TerminateApp()
# Instead of `del render_window, interactor` as would be done in a
# script, this works too. Clearing `renderer` is not necessary to close
# the window, just a good practice.
self.renderer = None
self.render_window = None
self.interactor = None
@classmethod
def _update_pose(cls, obj, pos, rot):
trans = gemut.Transform(pos, rot)
vtk_tm = cls._create_transform_matrix(trans)
cls._set_object_transform_matrix(obj, vtk_tm)
def _timer_callback(self, obj, event):
self.timer_count += 1
if self.on_idle_parent_callback is not None:
self.on_idle_parent_callback()
iren = obj
iren.GetRenderWindow().Render() # same as self.render_window.Render()?
def _key_press_callback(self, obj, event):
"""
obj: the vtkRenderWindowInteractor
event: "KeyPressEvent"
"""
key = obj.GetKeySym().lower()
if self.on_key_press_parent_callback:
self.on_key_press_parent_callback(key)
def reset(self):
# remove all actors
try:
self.renderer.RemoveAllViewProps()
self.interactor.ExitCallback()
except AttributeError:
pass
#self.restartWindow()
#===========================================================================
# Functions and methods not overriding base class functions and methods
#===========================================================================
@staticmethod
def _set_object_transform_matrix(obj, vtk_tm):
"""Set ``obj``'s pose according to the transform ``vtk_tm``.
:param obj: object to be modified
:type obj: :class:`vtk.vtkProp3D`
:param vtk_tm: homogeneous transform
:type vtk_tm: :class:`vtk.vtkMatrix4x4`
"""
obj.PokeMatrix(vtk_tm)
@staticmethod
def _create_transform_matrix(trans):
"""Create a homogeneous transform matrix valid for VTK.
:param trans: homogeneous transform
:type trans: :class:`ars.utils.geometry.Transform`
:return: a VTK-valid transform matrix
:rtype: :class:`vtk.vtkMatrix4x4`
"""
vtk_matrix = vtk.vtkMatrix4x4()
vtk_matrix.DeepCopy(trans.get_long_tuple())
return vtk_matrix
class Entity(object):
adapter = Engine
def __init__(self, *args, **kwargs):
self._actor = None
class Body(Entity):
def get_color(self):
"""
Returns the color of the body. If it is an assembly,
it is not checked whether all the objects' colors are equal.
"""
# dealing with vtkAssembly properties is more complex
if isinstance(self._actor, vtk.vtkAssembly):
props_3D = self._actor.GetParts()
props_3D.InitTraversal()
actor_ = props_3D.GetNextProp3D()
while actor_ is not None:
self._color = actor_.GetProperty().GetColor()
actor_ = props_3D.GetNextProp3D()
else:
self._color = self._actor.GetProperty().GetColor()
return self._color
def set_color(self, color):
"""
Sets the color of the body. If it is an assembly,
all the objects' color is set.
"""
# dealing with vtkAssembly properties is more complex
if isinstance(self._actor, vtk.vtkAssembly):
props_3D = self._actor.GetParts()
props_3D.InitTraversal()
actor_ = props_3D.GetNextProp3D()
while actor_ is not None:
actor_.GetProperty().SetColor(color)
actor_ = props_3D.GetNextProp3D()
else:
self._actor.GetProperty().SetColor(color)
self._color = color
class Axes(Entity, base.Axes):
def __init__(self, pos=(0, 0, 0), rot=None, cylinder_radius=0.05):
base.Axes.__init__(self, pos, rot, cylinder_radius)
# 2 different methods may be used here. See
# http://stackoverflow.com/questions/7810632/
self._actor = vtk.vtkAxesActor()
self._actor.AxisLabelsOn()
self._actor.SetShaftTypeToCylinder()
self._actor.SetCylinderRadius(cylinder_radius)
self.set_pose(pos, rot)
class Box(Body, base.Box):
def __init__(self, size, pos, rot=None):
base.Box.__init__(self, size, pos, rot)
box = vtk.vtkCubeSource()
box.SetXLength(size[0])
box.SetYLength(size[1])
box.SetZLength(size[2])
boxMapper = vtk.vtkPolyDataMapper()
boxMapper.SetInputConnection(box.GetOutputPort())
self._actor = vtk.vtkActor()
self.set_pose(pos, rot)
self._actor.SetMapper(boxMapper) # TODO: does the order matter?
class Cone(Body, base.Cone):
def __init__(self, height, radius, center, rot=None, resolution=20):
base.Cone.__init__(self, height, radius, center, rot, resolution)
cone = vtk.vtkConeSource()
cone.SetHeight(height)
cone.SetRadius(radius)
cone.SetResolution(resolution)
# TODO: cone.SetDirection(*direction)
# The vector does not have to be normalized
coneMapper = vtk.vtkPolyDataMapper()
coneMapper.SetInputConnection(cone.GetOutputPort())
self._actor = vtk.vtkActor()
self.set_pose(center, rot)
self._actor.SetMapper(coneMapper) # TODO: does the order matter?
class Sphere(Body, base.Sphere):
"""
VTK: sphere (represented by polygons) of specified radius centered at the
origin. The resolution (polygonal discretization) in both the latitude
(phi) and longitude (theta) directions can be specified.
"""
def __init__(
self, radius, center, rot=None, phi_resolution=20,
theta_resolution=20):
base.Sphere.__init__(
self, radius, center, rot, phi_resolution, theta_resolution)
sphere = vtk.vtkSphereSource()
sphere.SetRadius(radius)
sphere.SetPhiResolution(phi_resolution)
sphere.SetThetaResolution(theta_resolution)
sphereMapper = vtk.vtkPolyDataMapper()
sphereMapper.SetInputConnection(sphere.GetOutputPort())
self._actor = vtk.vtkActor()
self.set_pose(center, rot)
self._actor.SetMapper(sphereMapper) # TODO: does the order matter?
class Cylinder(Body, base.Cylinder):
def __init__(self, length, radius, center, rot=None, resolution=20):
base.Cylinder.__init__(self, length, radius, center, rot, resolution)
# VTK: The axis of the cylinder is aligned along the global y-axis.
cyl = vtk.vtkCylinderSource()
cyl.SetHeight(length)
cyl.SetRadius(radius)
cyl.SetResolution(resolution)
# set it to be aligned along the global Z-axis, ODE-like
userTransform = vtk.vtkTransform()
userTransform.RotateX(90.0)
# TODO: add argument to select the orientation axis, like
# cylDirection in Mass.setCylinder()
transFilter = vtk.vtkTransformPolyDataFilter()
transFilter.SetInputConnection(cyl.GetOutputPort())
transFilter.SetTransform(userTransform)
cylMapper = vtk.vtkPolyDataMapper()
cylMapper.SetInputConnection(transFilter.GetOutputPort())
self._actor = vtk.vtkActor()
self.set_pose(center, rot)
self._actor.SetMapper(cylMapper) # TODO: does the order matter?
class Capsule(Body, base.Capsule):
def __init__(self, length, radius, center, rot=None, resolution=20):
base.Capsule.__init__(self, length, radius, center, rot, resolution)
# TODO: simplify this construction using those corresponding to
# Cylinder and Sphere?
sphere1 = vtk.vtkSphereSource()
sphere1.SetRadius(radius)
sphere1.SetPhiResolution(resolution)
sphere1.SetThetaResolution(resolution)
sphereMapper1 = vtk.vtkPolyDataMapper()
sphereMapper1.SetInputConnection(sphere1.GetOutputPort())
sphereActor1 = vtk.vtkActor()
sphereActor1.SetMapper(sphereMapper1)
sphereActor1.SetPosition(0, 0, -length / 2.0)
sphere2 = vtk.vtkSphereSource()
sphere2.SetRadius(radius)
sphere2.SetPhiResolution(resolution)
sphere2.SetThetaResolution(resolution)
sphereMapper2 = vtk.vtkPolyDataMapper()
sphereMapper2.SetInputConnection(sphere2.GetOutputPort())
sphereActor2 = vtk.vtkActor()
sphereActor2.SetMapper(sphereMapper2)
sphereActor2.SetPosition(0, 0, length / 2.0)
# set it to be aligned along the global Z-axis, ODE-like
cylinder = vtk.vtkCylinderSource()
cylinder.SetRadius(radius)
cylinder.SetHeight(length)
cylinder.SetResolution(resolution)
userTransform = vtk.vtkTransform()
userTransform.RotateX(90.0)
# TODO: add argument to select the orientation axis, like
# cylDirection in Mass.setCylinder()
transFilter = vtk.vtkTransformPolyDataFilter()
transFilter.SetInputConnection(cylinder.GetOutputPort())
transFilter.SetTransform(userTransform)
cylinderMapper = vtk.vtkPolyDataMapper()
cylinderMapper.SetInputConnection(transFilter.GetOutputPort())
cylinderActor = vtk.vtkActor()
cylinderActor.SetMapper(cylinderMapper)
assembly = vtk.vtkAssembly()
assembly.AddPart(cylinderActor)
assembly.AddPart(sphereActor1)
assembly.AddPart(sphereActor2)
self._actor = assembly
self.set_pose(center, rot)
class Trimesh(Body, base.Trimesh):
def __init__(self, vertices, faces, pos, rot=None):
base.Trimesh.__init__(self, vertices, faces, pos, rot)
# create points
points = vtk.vtkPoints()
triangles = vtk.vtkCellArray()
triangle_list = []
for face in faces:
# get the 3 points of each face
p_id = points.InsertNextPoint(*vertices[face[0]])
points.InsertNextPoint(*vertices[face[1]])
points.InsertNextPoint(*vertices[face[2]])
# the triangle is defined by 3 points
triangle = vtk.vtkTriangle()
triangle.GetPointIds().SetId(0, p_id) # point 0
triangle.GetPointIds().SetId(1, p_id + 1) # point 1
triangle.GetPointIds().SetId(2, p_id + 2) # point 2
triangle_list.append(triangle)
# insert each triangle into the Vtk data structure
for triangle in triangle_list:
triangles.InsertNextCell(triangle)
# polydata object: represents a geometric structure consisting of
# vertices, lines, polygons, and/or triangle strips
trianglePolyData = vtk.vtkPolyData()
trianglePolyData.SetPoints(points)
trianglePolyData.SetPolys(triangles)
# mapper
mapper = vtk.vtkPolyDataMapper()
mapper.SetInput(trianglePolyData)
# actor: represents an object (geometry & properties) in a rendered scene
self._actor = vtk.vtkActor()
self.set_pose(pos, rot)
self._actor.SetMapper(mapper) # TODO: does the order matter?
class ScreenshotRecorder(base.ScreenshotRecorder):
"""
Based on an official example script, very simple:
http://www.vtk.org/Wiki/VTK/Examples/Python/Screenshot
"""
file_extension = 'png'
def __init__(self, base_filename='screenshot_', graphics_adapter=None):
self.base_filename = base_filename
self.gAdapter = graphics_adapter
self.last_write_time = None
self.period = None
def write(self, index=1, time=None):
"""
.. note::
Image files format is PNG, and extension is ``.png``.
"""
# TODO: see if the workaround (get render_window and create
# image_getter every time) was needed because we used
# image_getter.SetInput instead of SetInputConnection.
render_window = self.gAdapter.render_window
image_getter = vtk.vtkWindowToImageFilter()
image_getter.SetInput(render_window)
image_getter.Update()
writer = vtk.vtkPNGWriter()
writer.SetFileName(self.calc_filename(index))
writer.SetInputConnection(image_getter.GetOutputPort())
writer.Write()
if time is not None:
self.last_write_time = time | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/graphics/vtk_adapter.py | vtk_adapter.py |
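# --- Editor's usage sketch (not part of the original module) ----------------
# The adapter objects are normally created through ars.app.Program, but they
# can also be driven directly. Requires a working VTK build and a display;
# the sizes and colors below are illustrative only.
if __name__ == '__main__':
    engine = Engine("vtk adapter demo", size=(640, 480))
    box = Box(size=(1, 1, 1), pos=(0, 0.5, 0))
    box.set_color((0.8, 0.1, 0.1))
    engine.add_object(box)
    engine.start_window()  # interactive; close the window to return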
import operator
import sys
import types
__author__ = "Benjamin Peterson <[email protected]>"
__version__ = "1.4.1"
# Useful for very coarse version differentiation.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform.startswith("java"):
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result)
# This is a bit ugly, but it avoids running this again.
delattr(tp, self.name)
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _MovedItems(types.ModuleType):
"""Lazy loading of moved objects"""
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("UserString", "UserString", "collections"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"),
MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"),
MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"),
MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"),
MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
del attr
moves = sys.modules[__name__ + ".moves"] = _MovedItems(__name__ + ".moves")
class Module_six_moves_urllib_parse(types.ModuleType):
"""Lazy loading of moved objects in six.moves.urllib_parse"""
_urllib_parse_moved_attributes = [
MovedAttribute("ParseResult", "urlparse", "urllib.parse"),
MovedAttribute("parse_qs", "urlparse", "urllib.parse"),
MovedAttribute("parse_qsl", "urlparse", "urllib.parse"),
MovedAttribute("urldefrag", "urlparse", "urllib.parse"),
MovedAttribute("urljoin", "urlparse", "urllib.parse"),
MovedAttribute("urlparse", "urlparse", "urllib.parse"),
MovedAttribute("urlsplit", "urlparse", "urllib.parse"),
MovedAttribute("urlunparse", "urlparse", "urllib.parse"),
MovedAttribute("urlunsplit", "urlparse", "urllib.parse"),
MovedAttribute("quote", "urllib", "urllib.parse"),
MovedAttribute("quote_plus", "urllib", "urllib.parse"),
MovedAttribute("unquote", "urllib", "urllib.parse"),
MovedAttribute("unquote_plus", "urllib", "urllib.parse"),
MovedAttribute("urlencode", "urllib", "urllib.parse"),
]
for attr in _urllib_parse_moved_attributes:
setattr(Module_six_moves_urllib_parse, attr.name, attr)
del attr
sys.modules[__name__ + ".moves.urllib_parse"] = Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse")
sys.modules[__name__ + ".moves.urllib.parse"] = Module_six_moves_urllib_parse(__name__ + ".moves.urllib.parse")
class Module_six_moves_urllib_error(types.ModuleType):
"""Lazy loading of moved objects in six.moves.urllib_error"""
_urllib_error_moved_attributes = [
MovedAttribute("URLError", "urllib2", "urllib.error"),
MovedAttribute("HTTPError", "urllib2", "urllib.error"),
MovedAttribute("ContentTooShortError", "urllib", "urllib.error"),
]
for attr in _urllib_error_moved_attributes:
setattr(Module_six_moves_urllib_error, attr.name, attr)
del attr
sys.modules[__name__ + ".moves.urllib_error"] = Module_six_moves_urllib_error(__name__ + ".moves.urllib_error")
sys.modules[__name__ + ".moves.urllib.error"] = Module_six_moves_urllib_error(__name__ + ".moves.urllib.error")
class Module_six_moves_urllib_request(types.ModuleType):
"""Lazy loading of moved objects in six.moves.urllib_request"""
_urllib_request_moved_attributes = [
MovedAttribute("urlopen", "urllib2", "urllib.request"),
MovedAttribute("install_opener", "urllib2", "urllib.request"),
MovedAttribute("build_opener", "urllib2", "urllib.request"),
MovedAttribute("pathname2url", "urllib", "urllib.request"),
MovedAttribute("url2pathname", "urllib", "urllib.request"),
MovedAttribute("getproxies", "urllib", "urllib.request"),
MovedAttribute("Request", "urllib2", "urllib.request"),
MovedAttribute("OpenerDirector", "urllib2", "urllib.request"),
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"),
MovedAttribute("ProxyHandler", "urllib2", "urllib.request"),
MovedAttribute("BaseHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"),
MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"),
MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"),
MovedAttribute("FileHandler", "urllib2", "urllib.request"),
MovedAttribute("FTPHandler", "urllib2", "urllib.request"),
MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"),
MovedAttribute("UnknownHandler", "urllib2", "urllib.request"),
MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"),
MovedAttribute("urlretrieve", "urllib", "urllib.request"),
MovedAttribute("urlcleanup", "urllib", "urllib.request"),
MovedAttribute("URLopener", "urllib", "urllib.request"),
MovedAttribute("FancyURLopener", "urllib", "urllib.request"),
]
for attr in _urllib_request_moved_attributes:
setattr(Module_six_moves_urllib_request, attr.name, attr)
del attr
sys.modules[__name__ + ".moves.urllib_request"] = Module_six_moves_urllib_request(__name__ + ".moves.urllib_request")
sys.modules[__name__ + ".moves.urllib.request"] = Module_six_moves_urllib_request(__name__ + ".moves.urllib.request")
class Module_six_moves_urllib_response(types.ModuleType):
"""Lazy loading of moved objects in six.moves.urllib_response"""
_urllib_response_moved_attributes = [
MovedAttribute("addbase", "urllib", "urllib.response"),
MovedAttribute("addclosehook", "urllib", "urllib.response"),
MovedAttribute("addinfo", "urllib", "urllib.response"),
MovedAttribute("addinfourl", "urllib", "urllib.response"),
]
for attr in _urllib_response_moved_attributes:
setattr(Module_six_moves_urllib_response, attr.name, attr)
del attr
sys.modules[__name__ + ".moves.urllib_response"] = Module_six_moves_urllib_response(__name__ + ".moves.urllib_response")
sys.modules[__name__ + ".moves.urllib.response"] = Module_six_moves_urllib_response(__name__ + ".moves.urllib.response")
class Module_six_moves_urllib_robotparser(types.ModuleType):
"""Lazy loading of moved objects in six.moves.urllib_robotparser"""
_urllib_robotparser_moved_attributes = [
MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"),
]
for attr in _urllib_robotparser_moved_attributes:
setattr(Module_six_moves_urllib_robotparser, attr.name, attr)
del attr
sys.modules[__name__ + ".moves.urllib_robotparser"] = Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib_robotparser")
sys.modules[__name__ + ".moves.urllib.robotparser"] = Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser")
class Module_six_moves_urllib(types.ModuleType):
"""Create a six.moves.urllib namespace that resembles the Python 3 namespace"""
parse = sys.modules[__name__ + ".moves.urllib_parse"]
error = sys.modules[__name__ + ".moves.urllib_error"]
request = sys.modules[__name__ + ".moves.urllib_request"]
response = sys.modules[__name__ + ".moves.urllib_response"]
robotparser = sys.modules[__name__ + ".moves.urllib_robotparser"]
sys.modules[__name__ + ".moves.urllib"] = Module_six_moves_urllib(__name__ + ".moves.urllib")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_closure = "__closure__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_func_globals = "__globals__"
_iterkeys = "keys"
_itervalues = "values"
_iteritems = "items"
_iterlists = "lists"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_closure = "func_closure"
_func_code = "func_code"
_func_defaults = "func_defaults"
_func_globals = "func_globals"
_iterkeys = "iterkeys"
_itervalues = "itervalues"
_iteritems = "iteritems"
_iterlists = "iterlists"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
try:
callable = callable
except NameError:
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
if PY3:
def get_unbound_function(unbound):
return unbound
create_bound_method = types.MethodType
Iterator = object
else:
def get_unbound_function(unbound):
return unbound.im_func
def create_bound_method(func, obj):
return types.MethodType(func, obj, obj.__class__)
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_closure = operator.attrgetter(_func_closure)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
get_function_globals = operator.attrgetter(_func_globals)
def iterkeys(d, **kw):
"""Return an iterator over the keys of a dictionary."""
return iter(getattr(d, _iterkeys)(**kw))
def itervalues(d, **kw):
"""Return an iterator over the values of a dictionary."""
return iter(getattr(d, _itervalues)(**kw))
def iteritems(d, **kw):
"""Return an iterator over the (key, value) pairs of a dictionary."""
return iter(getattr(d, _iteritems)(**kw))
def iterlists(d, **kw):
"""Return an iterator over the (key, [values]) pairs of a dictionary."""
return iter(getattr(d, _iterlists)(**kw))
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
unichr = chr
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
byte2int = operator.itemgetter(0)
indexbytes = operator.getitem
iterbytes = iter
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
def b(s):
return s
def u(s):
return unicode(s, "unicode_escape")
unichr = unichr
int2byte = chr
def byte2int(bs):
return ord(bs[0])
def indexbytes(buf, i):
return ord(buf[i])
def iterbytes(buf):
return (ord(byte) for byte in buf)
import StringIO
StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
if PY3:
import builtins
exec_ = getattr(builtins, "exec")
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
print_ = getattr(builtins, "print")
del builtins
else:
def exec_(_code_, _globs_=None, _locs_=None):
"""Execute code in a namespace."""
if _globs_ is None:
frame = sys._getframe(1)
_globs_ = frame.f_globals
if _locs_ is None:
_locs_ = frame.f_locals
del frame
elif _locs_ is None:
_locs_ = _globs_
exec("""exec _code_ in _globs_, _locs_""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
def print_(*args, **kwargs):
"""The new-style print function."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
_add_doc(reraise, """Reraise an exception.""")
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
return meta("NewBase", bases, {})
def add_metaclass(metaclass):
"""Class decorator for creating a class with a metaclass."""
def wrapper(cls):
orig_vars = cls.__dict__.copy()
orig_vars.pop('__dict__', None)
orig_vars.pop('__weakref__', None)
for slots_var in orig_vars.get('__slots__', ()):
orig_vars.pop(slots_var)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/lib/six/__init__.py | __init__.py |
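# --- Hedged usage sketch for the compatibility helpers above (illustrative only) ---
# Typical client code: iterate a dict, build a query string via the lazy
# six.moves.urllib namespace, emit byte/text literals, and declare a metaclass
# portably. Assumes the module above is installed and importable as "six";
# names such as _six_demo and Registered are made up for this example.
import six
from six.moves.urllib import parse as compat_parse
def _six_demo(settings):
    parts = []
    for key, value in six.iteritems(settings):        # dict.iteritems() on PY2, dict.items() on PY3
        parts.append("%s=%s" % (key, value))
    query = compat_parse.urlencode({"q": "dispatcher", "page": 2})
    header = six.b("content-type: text/plain")        # bytes literal on both versions
    label = six.u("caf\u00e9")                         # text literal on both versions
    return "&".join(parts), query, header, label
class Registered(six.with_metaclass(type, object)):   # portable metaclass declaration
    pass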
import weakref, traceback, sys
if sys.hexversion >= 0x3000000:
im_func = '__func__'
im_self = '__self__'
else:
im_func = 'im_func'
im_self = 'im_self'
def safeRef(target, onDelete = None):
"""Return a *safe* weak reference to a callable target
    target -- the object to be weakly referenced; if it is a
        bound method, a BoundMethodWeakref is created,
        otherwise a simple weakref.
onDelete -- if provided, will have a hard reference stored
to the callable to be called after the safe reference
goes out of scope with the reference object, (either a
weakref or a BoundMethodWeakref) as argument.
"""
if hasattr(target, im_self):
if getattr(target, im_self) is not None:
# Turn a bound method into a BoundMethodWeakref instance.
# Keep track of these instances for lookup by disconnect().
assert hasattr(target, im_func), """safeRef target %r has %s, but no %s, don't know how to create reference"""%( target,im_self,im_func)
reference = BoundMethodWeakref(
target=target,
onDelete=onDelete
)
return reference
if onDelete is not None:
return weakref.ref(target, onDelete)
else:
return weakref.ref( target )
class BoundMethodWeakref(object):
"""'Safe' and reusable weak references to instance methods
BoundMethodWeakref objects provide a mechanism for
referencing a bound method without requiring that the
method object itself (which is normally a transient
object) is kept alive. Instead, the BoundMethodWeakref
object keeps weak references to both the object and the
function which together define the instance method.
Attributes:
key -- the identity key for the reference, calculated
by the class's calculateKey method applied to the
target instance method
deletionMethods -- sequence of callable objects taking
single argument, a reference to this object which
will be called when *either* the target object or
target function is garbage collected (i.e. when
this object becomes invalid). These are specified
as the onDelete parameters of safeRef calls.
weakSelf -- weak reference to the target object
weakFunc -- weak reference to the target function
Class Attributes:
_allInstances -- class attribute pointing to all live
BoundMethodWeakref objects indexed by the class's
calculateKey(target) method applied to the target
objects. This weak value dictionary is used to
short-circuit creation so that multiple references
to the same (object, function) pair produce the
same BoundMethodWeakref instance.
"""
_allInstances = weakref.WeakValueDictionary()
def __new__( cls, target, onDelete=None, *arguments,**named ):
"""Create new instance or return current instance
Basically this method of construction allows us to
short-circuit creation of references to already-
referenced instance methods. The key corresponding
to the target is calculated, and if there is already
an existing reference, that is returned, with its
deletionMethods attribute updated. Otherwise the
new instance is created and registered in the table
of already-referenced methods.
"""
key = cls.calculateKey(target)
current =cls._allInstances.get(key)
if current is not None:
current.deletionMethods.append( onDelete)
return current
else:
base = super( BoundMethodWeakref, cls).__new__( cls )
cls._allInstances[key] = base
base.__init__( target, onDelete, *arguments,**named)
return base
def __init__(self, target, onDelete=None):
"""Return a weak-reference-like instance for a bound method
target -- the instance-method target for the weak
reference, must have <im_self> and <im_func> attributes
and be reconstructable via:
target.<im_func>.__get__( target.<im_self> )
which is true of built-in instance methods.
onDelete -- optional callback which will be called
when this weak reference ceases to be valid
(i.e. either the object or the function is garbage
collected). Should take a single argument,
which will be passed a pointer to this object.
"""
def remove(weak, self=self):
"""Set self.isDead to true when method or instance is destroyed"""
methods = self.deletionMethods[:]
del self.deletionMethods[:]
try:
del self.__class__._allInstances[ self.key ]
except KeyError:
pass
for function in methods:
try:
if hasattr(function, '__call__' ):
function( self )
                except Exception as e:
try:
traceback.print_exc()
except AttributeError:
                        print('''Exception during saferef %s cleanup function %s: %s''' % (
                            self, function, e
                        ))
self.deletionMethods = [onDelete]
self.key = self.calculateKey( target )
self.weakSelf = weakref.ref(getattr(target,im_self), remove)
self.weakFunc = weakref.ref(getattr(target,im_func), remove)
self.selfName = getattr(target,im_self).__class__.__name__
self.funcName = str(getattr(target,im_func).__name__)
def calculateKey( cls, target ):
"""Calculate the reference key for this reference
Currently this is a two-tuple of the id()'s of the
target object and the target function respectively.
"""
return (id(getattr(target,im_self)),id(getattr(target,im_func)))
calculateKey = classmethod( calculateKey )
def __str__(self):
"""Give a friendly representation of the object"""
return """%s( %s.%s )"""%(
self.__class__.__name__,
self.selfName,
self.funcName,
)
__repr__ = __str__
def __nonzero__( self ):
"""Whether we are still a valid reference"""
return self() is not None
def __cmp__( self, other ):
"""Compare with another reference"""
if not isinstance (other,self.__class__):
return cmp( self.__class__, type(other) )
return cmp( self.key, other.key)
def __call__(self):
"""Return a strong reference to the bound method
If the target cannot be retrieved, then will
return None, otherwise returns a bound instance
method for our object and function.
Note:
You may call this method any number of times,
as it does not invalidate the reference.
"""
target = self.weakSelf()
if target is not None:
function = self.weakFunc()
if function is not None:
return function.__get__(target)
return None | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/lib/pydispatch/saferef.py | saferef.py |
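# --- Hedged usage sketch for safeRef / BoundMethodWeakref above (illustrative only) ---
# Demonstrates that the reference does not keep the bound method (or its
# instance) alive, and that calling the reference yields the live method or
# None. Assumes the module above is importable as "saferef"; Listener and
# _saferef_demo are made-up names for this example.
from saferef import safeRef
class Listener(object):
    def __init__(self):
        self.received = []
    def on_event(self, value):
        self.received.append(value)
def _saferef_demo():
    listener = Listener()
    ref = safeRef(listener.on_event)   # BoundMethodWeakref: weak refs to object and function
    method = ref()                     # dereference -> live bound method
    if method is not None:
        method(42)                     # listener.received == [42]
    del method, listener               # no strong refs left (CPython refcounting)
    return ref()                       # now returns None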
from __future__ import generators
import weakref
import saferef, robustapply, errors
__author__ = "Patrick K. O'Brien <[email protected]>"
__cvsid__ = "$Id: dispatcher.py,v 1.1 2010/03/30 15:45:55 mcfletch Exp $"
__version__ = "$Revision: 1.1 $"[11:-2]
class _Parameter:
"""Used to represent default parameter values."""
def __repr__(self):
return self.__class__.__name__
class _Any(_Parameter):
"""Singleton used to signal either "Any Sender" or "Any Signal"
The Any object can be used with connect, disconnect,
send, or sendExact to signal that the parameter given
Any should react to all senders/signals, not just
a particular sender/signal.
"""
Any = _Any()
class _Anonymous(_Parameter):
"""Singleton used to signal "Anonymous Sender"
The Anonymous object is used to signal that the sender
of a message is not specified (as distinct from being
"any sender"). Registering callbacks for Anonymous
will only receive messages sent without senders. Sending
with anonymous will only send messages to those receivers
registered for Any or Anonymous.
Note:
The default sender for connect is Any, while the
default sender for send is Anonymous. This has
the effect that if you do not specify any senders
in either function then all messages are routed
as though there was a single sender (Anonymous)
being used everywhere.
"""
Anonymous = _Anonymous()
WEAKREF_TYPES = (weakref.ReferenceType, saferef.BoundMethodWeakref)
connections = {}
senders = {}
sendersBack = {}
def connect(receiver, signal=Any, sender=Any, weak=True):
"""Connect receiver to sender for signal
receiver -- a callable Python object which is to receive
messages/signals/events. Receivers must be hashable
objects.
if weak is True, then receiver must be weak-referencable
(more precisely saferef.safeRef() must be able to create
a reference to the receiver).
Receivers are fairly flexible in their specification,
as the machinery in the robustApply module takes care
of most of the details regarding figuring out appropriate
subsets of the sent arguments to apply to a given
receiver.
Note:
if receiver is itself a weak reference (a callable),
it will be de-referenced by the system's machinery,
so *generally* weak references are not suitable as
receivers, though some use might be found for the
facility whereby a higher-level library passes in
pre-weakrefed receiver references.
signal -- the signal to which the receiver should respond
if Any, receiver will receive any signal from the
indicated sender (which might also be Any, but is not
necessarily Any).
Otherwise must be a hashable Python object other than
None (DispatcherError raised on None).
sender -- the sender to which the receiver should respond
if Any, receiver will receive the indicated signals
from any sender.
if Anonymous, receiver will only receive indicated
signals from send/sendExact which do not specify a
sender, or specify Anonymous explicitly as the sender.
Otherwise can be any python object.
weak -- whether to use weak references to the receiver
By default, the module will attempt to use weak
references to the receiver objects. If this parameter
is false, then strong references will be used.
returns None, may raise DispatcherTypeError
"""
if signal is None:
raise errors.DispatcherTypeError(
'Signal cannot be None (receiver=%r sender=%r)'%( receiver,sender)
)
if weak:
receiver = saferef.safeRef(receiver, onDelete=_removeReceiver)
senderkey = id(sender)
if senderkey in connections:
signals = connections[senderkey]
else:
connections[senderkey] = signals = {}
# Keep track of senders for cleanup.
# Is Anonymous something we want to clean up?
if sender not in (None, Anonymous, Any):
def remove(object, senderkey=senderkey):
_removeSender(senderkey=senderkey)
# Skip objects that can not be weakly referenced, which means
# they won't be automatically cleaned up, but that's too bad.
try:
weakSender = weakref.ref(sender, remove)
senders[senderkey] = weakSender
except:
pass
receiverID = id(receiver)
# get current set, remove any current references to
# this receiver in the set, including back-references
if signal in signals:
receivers = signals[signal]
_removeOldBackRefs(senderkey, signal, receiver, receivers)
else:
receivers = signals[signal] = []
try:
current = sendersBack.get( receiverID )
if current is None:
sendersBack[ receiverID ] = current = []
if senderkey not in current:
current.append(senderkey)
except:
pass
receivers.append(receiver)
def disconnect(receiver, signal=Any, sender=Any, weak=True):
"""Disconnect receiver from sender for signal
receiver -- the registered receiver to disconnect
signal -- the registered signal to disconnect
sender -- the registered sender to disconnect
weak -- the weakref state to disconnect
disconnect reverses the process of connect,
the semantics for the individual elements are
logically equivalent to a tuple of
(receiver, signal, sender, weak) used as a key
to be deleted from the internal routing tables.
(The actual process is slightly more complex
but the semantics are basically the same).
Note:
Using disconnect is not required to cleanup
routing when an object is deleted, the framework
will remove routes for deleted objects
automatically. It's only necessary to disconnect
if you want to stop routing to a live object.
returns None, may raise DispatcherTypeError or
DispatcherKeyError
"""
if signal is None:
raise errors.DispatcherTypeError(
'Signal cannot be None (receiver=%r sender=%r)'%( receiver,sender)
)
if weak: receiver = saferef.safeRef(receiver)
senderkey = id(sender)
try:
signals = connections[senderkey]
receivers = signals[signal]
except KeyError:
raise errors.DispatcherKeyError(
"""No receivers found for signal %r from sender %r""" %(
signal,
sender
)
)
try:
# also removes from receivers
_removeOldBackRefs(senderkey, signal, receiver, receivers)
except ValueError:
raise errors.DispatcherKeyError(
"""No connection to receiver %s for signal %s from sender %s""" %(
receiver,
signal,
sender
)
)
_cleanupConnections(senderkey, signal)
def getReceivers( sender = Any, signal = Any ):
"""Get list of receivers from global tables
This utility function allows you to retrieve the
raw list of receivers from the connections table
for the given sender and signal pair.
Note:
there is no guarantee that this is the actual list
stored in the connections table, so the value
should be treated as a simple iterable/truth value
rather than, for instance a list to which you
might append new records.
Normally you would use liveReceivers( getReceivers( ...))
to retrieve the actual receiver objects as an iterable
object.
"""
try:
return connections[id(sender)][signal]
except KeyError:
return []
def liveReceivers(receivers):
"""Filter sequence of receivers to get resolved, live receivers
This is a generator which will iterate over
the passed sequence, checking for weak references
and resolving them, then returning all live
receivers.
"""
for receiver in receivers:
if isinstance( receiver, WEAKREF_TYPES):
# Dereference the weak reference.
receiver = receiver()
if receiver is not None:
yield receiver
else:
yield receiver
def getAllReceivers( sender = Any, signal = Any ):
"""Get list of all receivers from global tables
This gets all receivers which should receive
the given signal from sender, each receiver should
be produced only once by the resulting generator
"""
receivers = {}
for set in (
# Get receivers that receive *this* signal from *this* sender.
getReceivers( sender, signal ),
# Add receivers that receive *any* signal from *this* sender.
getReceivers( sender, Any ),
# Add receivers that receive *this* signal from *any* sender.
getReceivers( Any, signal ),
# Add receivers that receive *any* signal from *any* sender.
getReceivers( Any, Any ),
):
for receiver in set:
if receiver: # filter out dead instance-method weakrefs
try:
if receiver not in receivers:
receivers[receiver] = 1
yield receiver
except TypeError:
# dead weakrefs raise TypeError on hash...
pass
def send(signal=Any, sender=Anonymous, *arguments, **named):
"""Send signal from sender to all connected receivers.
signal -- (hashable) signal value, see connect for details
sender -- the sender of the signal
if Any, only receivers registered for Any will receive
the message.
if Anonymous, only receivers registered to receive
messages from Anonymous or Any will receive the message
Otherwise can be any python object (normally one
registered with a connect if you actually want
something to occur).
arguments -- positional arguments which will be passed to
*all* receivers. Note that this may raise TypeErrors
if the receivers do not allow the particular arguments.
Note also that arguments are applied before named
arguments, so they should be used with care.
named -- named arguments which will be filtered according
to the parameters of the receivers to only provide those
acceptable to the receiver.
Return a list of tuple pairs [(receiver, response), ... ]
if any receiver raises an error, the error propagates back
through send, terminating the dispatch loop, so it is quite
    possible that not all receivers will be called if one of them
    raises an error.
"""
# Call each receiver with whatever arguments it can accept.
# Return a list of tuple pairs [(receiver, response), ... ].
responses = []
for receiver in liveReceivers(getAllReceivers(sender, signal)):
response = robustapply.robustApply(
receiver,
signal=signal,
sender=sender,
*arguments,
**named
)
responses.append((receiver, response))
return responses
def sendExact( signal=Any, sender=Anonymous, *arguments, **named ):
"""Send signal only to those receivers registered for exact message
sendExact allows for avoiding Any/Anonymous registered
handlers, sending only to those receivers explicitly
registered for a particular signal on a particular
sender.
"""
responses = []
for receiver in liveReceivers(getReceivers(sender, signal)):
response = robustapply.robustApply(
receiver,
signal=signal,
sender=sender,
*arguments,
**named
)
responses.append((receiver, response))
return responses
def _removeReceiver(receiver):
"""Remove receiver from connections."""
if not sendersBack:
# During module cleanup the mapping will be replaced with None
return False
backKey = id(receiver)
try:
backSet = sendersBack.pop(backKey)
except KeyError:
return False
else:
for senderkey in backSet:
try:
signals = connections[senderkey].keys()
except KeyError:
pass
else:
for signal in signals:
try:
receivers = connections[senderkey][signal]
except KeyError:
pass
else:
try:
receivers.remove( receiver )
except Exception:
pass
_cleanupConnections(senderkey, signal)
def _cleanupConnections(senderkey, signal):
"""Delete any empty signals for senderkey. Delete senderkey if empty."""
try:
receivers = connections[senderkey][signal]
except:
pass
else:
if not receivers:
# No more connected receivers. Therefore, remove the signal.
try:
signals = connections[senderkey]
except KeyError:
pass
else:
del signals[signal]
if not signals:
# No more signal connections. Therefore, remove the sender.
_removeSender(senderkey)
def _removeSender(senderkey):
"""Remove senderkey from connections."""
_removeBackrefs(senderkey)
try:
del connections[senderkey]
except KeyError:
pass
# Senderkey will only be in senders dictionary if sender
# could be weakly referenced.
try:
del senders[senderkey]
except:
pass
def _removeBackrefs( senderkey):
"""Remove all back-references to this senderkey"""
try:
signals = connections[senderkey]
except KeyError:
signals = None
else:
items = signals.items()
def allReceivers( ):
for signal,set in items:
for item in set:
yield item
for receiver in allReceivers():
_killBackref( receiver, senderkey )
def _removeOldBackRefs(senderkey, signal, receiver, receivers):
"""Kill old sendersBack references from receiver
This guards against multiple registration of the same
receiver for a given signal and sender leaking memory
as old back reference records build up.
Also removes old receiver instance from receivers
"""
try:
index = receivers.index(receiver)
# need to scan back references here and remove senderkey
except ValueError:
return False
else:
oldReceiver = receivers[index]
del receivers[index]
found = 0
signals = connections.get(signal)
if signals is not None:
for sig,recs in connections.get(signal,{}).iteritems():
if sig != signal:
for rec in recs:
if rec is oldReceiver:
found = 1
break
if not found:
_killBackref( oldReceiver, senderkey )
return True
return False
def _killBackref( receiver, senderkey ):
"""Do the actual removal of back reference from receiver to senderkey"""
receiverkey = id(receiver)
set = sendersBack.get( receiverkey, () )
while senderkey in set:
try:
set.remove( senderkey )
except:
break
if not set:
try:
del sendersBack[ receiverkey ]
except KeyError:
pass
return True | ARS | /ARS-0.5a2.zip/ARS-0.5a2/ars/lib/pydispatch/dispatcher.py | dispatcher.py |
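# --- Hedged usage sketch for the dispatcher API above (illustrative only) ---
# connect() registers a receiver for a (signal, sender) pair, send() delivers
# keyword arguments to every matching receiver (robustApply drops the ones a
# receiver does not accept), and disconnect() undoes the registration.
# Assumes the package above is importable as "pydispatch"; DATA_READY,
# Producer and handle_data are made-up names for this example.
from pydispatch import dispatcher
DATA_READY = "data-ready"
class Producer(object):
    pass
def handle_data(sender, rows=None):
    # Only declared keyword arguments are passed; 'signal' is filtered out here.
    return len(rows or [])
def _dispatcher_demo():
    producer = Producer()
    dispatcher.connect(handle_data, signal=DATA_READY, sender=producer)
    responses = dispatcher.send(signal=DATA_READY, sender=producer, rows=[1, 2, 3])
    # responses == [(handle_data, 3)]
    dispatcher.disconnect(handle_data, signal=DATA_READY, sender=producer)
    return responses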