file_name (large_string, lengths 4-140) | prefix (large_string, lengths 0-12.1k) | suffix (large_string, lengths 0-12k) | middle (large_string, lengths 0-7.51k) | fim_type (large_string, 4 classes)
---|---|---|---|---|
cartesian.rs | !(combinations.next(), None);
/// ```
///
/// Note that if any one of the passed containers is empty, the product
/// as a whole is empty, too.
///
/// ```rust
/// extern crate scenarios;
///
/// use scenarios::cartesian;
///
/// let vectors = [vec![1, 2], vec![11, 22], vec![]];
/// let mut combinations = cartesian::product(&vectors);
/// assert_eq!(combinations.next(), None);
/// ```
///
/// For mathematical correctness, the product of no collections at all
/// is one empty vector.
///
/// ```rust
/// extern crate scenarios;
///
/// use scenarios::cartesian;
///
/// let mut combinations = cartesian::product(&[]);
/// assert_eq!(combinations.next(), Some(Vec::new()));
/// assert_eq!(combinations.next(), None);
/// ```
pub fn product<'a, C: 'a, T: 'a>(collections: &'a [C]) -> Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
// We start with fresh iterators and seed `next_item` with their first elements (`None` overall if any collection is empty).
let mut iterators = collections.iter().map(<&C>::into_iter).collect::<Vec<_>>();
let next_item = iterators.iter_mut().map(Iterator::next).collect();
Product {
collections,
iterators,
next_item,
}
}
/// Iterator returned by [`product()`].
///
/// [`product()`]: ./fn.product.html
pub struct Product<'a, C: 'a, T: 'a>
where
&'a C: IntoIterator<Item = &'a T>,
{
/// The underlying collections that we iterate over.
collections: &'a [C],
/// Our own set of sub-iterators, taken from `collections`.
iterators: Vec<<&'a C as IntoIterator>::IntoIter>,
/// The next item to yield.
next_item: Option<Vec<&'a T>>,
}
impl<'a, C, T> Iterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
type Item = Vec<&'a T>;
fn next(&mut self) -> Option<Self::Item> {
let result = self.next_item.clone();
self.advance();
result
}
/// Calculate bounds on the number of remaining elements.
///
/// This is calculated the same way as [`Product::len()`], but uses
/// a helper type to deal with the return type of `size_hint()`.
/// See there for information on why the used formula is corrected.
///
/// [`Product::len()`]: #method.len
fn size_hint(&self) -> (usize, Option<usize>) {
if self.next_item.is_none() {
return (0, Some(0));
}
let SizeHint(lower, upper) = SizeHint(1, Some(1))
+ self
.iterators
.iter()
.enumerate()
.map(|(i, iterator)| {
SizeHint::from(iterator)
* self.collections[i + 1..]
.iter()
.map(|c| SizeHint::from(&c.into_iter()))
.product()
})
.sum();
(lower, upper)
}
}
impl<'a, C, T> ExactSizeIterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
<&'a C as IntoIterator>::IntoIter: ExactSizeIterator,
{
/// Calculates the exact number of remaining elements.
///
/// The length consists of the following contributions:
///
/// - 1 for the `next_item` to be yielded;
/// - `X` for each currently active iterator, where X is the
/// product of the iterator's length and the sizes of all
/// *collections* to the right of it in the product.
///
/// Example
/// -------
///
/// Assume the Cartesian product `[1, 2, 3]×[1, 2]×[1, 2, 3]`. Upon
/// construction, the `Product` type creates three iterators `A`,
/// `B`, and `C` – one iterator for each array. It also extracts
/// one item from each to form `next_item`. Hence, `next_item`
/// contributes `1` to the total length. The three iterators
/// contribute as follows:
///
/// - A: 2 items left × collection of size 2 × collection of size
/// 3 = 12;
/// - B: 1 item left × collection of size 3 = 3;
/// - C: 2 items left = 2.
///
/// Thus, we end up with a total length of `1+12+3+2=18`. This is
/// the same length we get when multiplying the size of all passed
/// collections. (`3*2*3=18`) However, our (complicated) formula
/// also works when the iterator has already yielded some elements.
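///
/// As a further illustration (added here, not part of the original
/// docs): after a single call to `next()`, `advance()` has consumed one
/// more element from `C`, so the sum becomes `1 + 12 + 3 + 1 = 17`,
/// exactly one fewer than before, as expected.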
fn len(&self) -> usize {
if self.next_item.is_none() {
| 1 + self
.iterators
.iter()
.enumerate()
.map(|(i, iterator)| {
iterator.len()
* self.collections[i + 1..]
.iter()
.map(|c| c.into_iter().len())
.product::<usize>()
})
.sum::<usize>()
}
}
impl<'a, C, T> ::std::iter::FusedIterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
<&'a C as IntoIterator>::IntoIter: ExactSizeIterator,
{}
impl<'a, C, T> Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
/// Advances the iterators and updates `self.next_item`.
///
/// This loop works like incrementing a number digit by digit. We
/// go over each iterator and its corresponding "digit" in
/// `next_item` in lockstep, starting at the back.
///
/// If we can advance the iterator, we update the "digit" and are
/// done. If the iterator is exhausted, we have to go from "9" to
/// "10": we restart the iterator, grab the first element, and move
/// on to the next digit.
///
/// The `break` expressions are to be understood literally: our
/// scheme can break in two ways.
/// 1. The very first iterator (`i==0`) is exhausted.
/// 2. A freshly restarted iterator is empty. (should never happen!)
/// In both cases, we want to exhaust `self` immediately. We do so
/// by breaking out of the loop, falling through to the very last
/// line, and manually setting `self.next_item` to `None`.
///
/// Note that there is a so-called nullary case, when
/// `cartesian::product()` is called with an empty slice. While
/// this use-case is debatable, the mathematically correct way to
/// deal with it is to yield some empty vector once and then
/// nothing.
///
/// Luckily, we already handle this correctly! Because of the way
/// `Iterator::collect()` works when collecting into an
/// `Option<Vec<_>>`, `next_item` is initialized to some empty
/// vector, so this will be the first thing we yield. Then, when
/// `self.advance()` is called, we fall through the `while` loop and
/// immediately exhaust this iterator, yielding nothing more.
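///
/// The following sketch (added for illustration; not from the original
/// docs) shows the resulting "odometer" order through the public API:
///
/// ```rust
/// extern crate scenarios;
///
/// use scenarios::cartesian;
///
/// let collections = [vec![1, 2], vec![10, 20]];
/// let mut combinations = cartesian::product(&collections);
/// assert_eq!(combinations.next(), Some(vec![&1, &10]));
/// // The rightmost "digit" advances first...
/// assert_eq!(combinations.next(), Some(vec![&1, &20]));
/// // ...then rolls over, advancing the digit to its left.
/// assert_eq!(combinations.next(), Some(vec![&2, &10]));
/// assert_eq!(combinations.next(), Some(vec![&2, &20]));
/// assert_eq!(combinations.next(), None);
/// ```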
fn advance(&mut self) {
if let Some(ref mut next_item) = self.next_item {
let mut i = self.iterators.len();
while i > 0 {
i -= 1;
// Grab the next item from the current sub-iterator.
if let Some(elt) = self.iterators[i].next() {
next_item[i] = elt;
// If that works, we're done!
return;
} else if i == 0 {
// Last sub-iterator is exhausted, so we're
// exhausted, too.
break;
}
// The current sub-iterator is exhausted, start anew.
self.iterators[i] = self.collections[i].into_iter();
if let Some(elt) = self.iterators[i].next() {
next_item[i] = elt;
// Roll over to the next sub-iterator.
} else {
// Should never happen: The freshly restarted
// sub-iterator is already empty.
break;
}
}
}
// Exhaust this iterator if the above loop `break`s.
self.next_item = None;
}
}
#[derive(Debug)]
struct SizeHint(usize, Option<usize>);
impl SizeHint {
fn into_inner(self) -> (usize, Option<usize>) {
(self.0, self.1)
}
}
impl<'a, I: Iterator> From<&'a I> for SizeHint {
fn from(iter: &'a I) -> Self {
let (lower, upper) = iter.size_hint();
SizeHint(lower, upper)
}
}
impl :: | return 0;
}
| conditional_block |
cartesian.rs | /// ```
pub fn product<'a, C: 'a, T: 'a>(collections: &'a [C]) -> Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
// We start with fresh iterators and seed `next_item` with their first elements (`None` overall if any collection is empty).
let mut iterators = collections.iter().map(<&C>::into_iter).collect::<Vec<_>>();
let next_item = iterators.iter_mut().map(Iterator::next).collect();
Product {
collections,
iterators,
next_item,
}
}
/// Iterator returned by [`product()`].
///
/// [`product()`]: ./fn.product.html
pub struct Product<'a, C: 'a, T: 'a>
where
&'a C: IntoIterator<Item = &'a T>,
{
/// The underlying collections that we iterate over.
collections: &'a [C],
/// Our own set of sub-iterators, taken from `collections`.
iterators: Vec<<&'a C as IntoIterator>::IntoIter>,
/// The next item to yield.
next_item: Option<Vec<&'a T>>,
}
impl<'a, C, T> Iterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
type Item = Vec<&'a T>;
fn next(&mut self) -> Option<Self::Item> {
let result = self.next_item.clone();
self.advance();
result
}
/// Calculate bounds on the number of remaining elements.
///
/// This is calculated the same way as [`Product::len()`], but uses
/// a helper type to deal with the return type of `size_hint()`.
/// See there for information on why the used formula is corrected.
///
/// [`Product::len()`]: #method.len
fn size_hint(&self) -> (usize, Option<usize>) {
if self.next_item.is_none() {
return (0, Some(0));
}
let SizeHint(lower, upper) = SizeHint(1, Some(1))
+ self
.iterators
.iter()
.enumerate()
.map(|(i, iterator)| {
SizeHint::from(iterator)
* self.collections[i + 1..]
.iter()
.map(|c| SizeHint::from(&c.into_iter()))
.product()
})
.sum();
(lower, upper)
}
}
impl<'a, C, T> ExactSizeIterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
<&'a C as IntoIterator>::IntoIter: ExactSizeIterator,
{
/// Calculates the exact number of remaining elements.
///
/// The length consists of the following contributions:
///
/// - 1 for the `next_item` to be yielded;
/// - `X` for each currently active iterator, where X is the
/// product of the iterator's length and the sizes of all
/// *collections* to the right of it in the product.
///
/// Example
/// -------
///
/// Assume the Cartesian product `[1, 2, 3]×[1, 2]×[1, 2, 3]`. Upon
/// construction, the `Product` type creates three iterators `A`,
/// `B`, and `C` – one iterator for each array. It also extracts
/// one item from each to form `next_item`. Hence, `next_item`
/// contributes `1` to the total length. The three iterators
/// contribute as follows:
///
/// - A: 2 items left × collection of size 2 × collection of size
/// 3 = 12;
/// - B: 1 item left × collection of size 3 = 3;
/// - C: 2 items left = 2.
///
/// Thus, we end up with a total length of `1+12+3+2=18`. This is
/// the same length we get when multiplying the size of all passed
/// collections. (`3*2*3=18`) However, our (complicated) formula
/// also works when the iterator has already yielded some elements.
fn len(&self) -> usize {
if self.next_item.is_none() {
return 0;
}
1 + self
.iterators
.iter()
.enumerate()
.map(|(i, iterator)| {
iterator.len()
* self.collections[i + 1..]
.iter()
.map(|c| c.into_iter().len())
.product::<usize>()
})
.sum::<usize>()
}
}
impl<'a, C, T> ::std::iter::FusedIterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
<&'a C as IntoIterator>::IntoIter: ExactSizeIterator,
{}
impl<'a, C, T> Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
/// Advances the iterators and updates `self.next_item`.
///
/// This loop works like incrementing a number digit by digit. We
/// go over each iterator and its corresponding "digit" in
/// `next_item` in lockstep, starting at the back.
///
/// If we can advance the iterator, we update the "digit" and are
/// done. If the iterator is exhausted, we have to go from "9" to
/// "10": we restart the iterator, grab the first element, and move
/// on to the next digit.
///
/// The `break` expressions are to be understood literally: our
/// scheme can break in two ways.
/// 1. The very first iterator (`i==0`) is exhausted.
/// 2. A freshly restarted iterator is empty. (should never happen!)
/// In both cases, we want to exhaust `self` immediately. We do so
/// by breaking out of the loop, falling through to the very last
/// line, and manually setting `self.next_item` to `None`.
///
/// Note that there is a so-called nullary case, when
/// `cartesian::product()` is called with an empty slice. While
/// this use-case is debatable, the mathematically correct way to
/// deal with it is to yield some empty vector once and then
/// nothing.
///
/// Luckily, we already handle this correctly! Because of the way
/// `Iterator::collect()` works when collecting into an
/// `Option<Vec<_>>`, `next_item` is initialized to some empty
/// vector, so this will be the first thing we yield. Then, when
/// `self.advance()` is called, we fall through the `while` loop and
/// immediately exhaust this iterator, yielding nothing more.
fn advance(&mut self) {
if let Some(ref mut next_item) = self.next_item {
let mut i = self.iterators.len();
while i > 0 {
i -= 1;
// Grab the next item from the current sub-iterator.
if let Some(elt) = self.iterators[i].next() {
next_item[i] = elt;
// If that works, we're done!
return;
} else if i == 0 {
// Last sub-iterator is exhausted, so we're
// exhausted, too.
break;
}
// The current sub-iterator is exhausted, start anew.
self.iterators[i] = self.collections[i].into_iter();
if let Some(elt) = self.iterators[i].next() {
next_item[i] = elt;
// Roll over to the next sub-iterator.
} else {
// Should never happen: The freshly restarted
// sub-iterator is already empty.
break;
}
}
}
// Exhaust this iterator if the above loop `break`s.
self.next_item = None;
}
}
#[derive(Debug)]
struct SizeHint(usize, Option<usize>);
impl SizeHint {
fn into_inner(self) -> (usize, Option<usize>) {
(self.0, self.1)
}
}
impl<'a, I: Iterator> From<&'a I> for SizeHint {
fn from(iter: &'a I) -> Self {
let (lower, upper) = iter.size_hint();
SizeHint(lower, upper)
}
}
impl ::std::ops::Add for SizeHint {
type Output = Self;
fn add(self, other: Self) -> Self {
let lower = self.0 + other.0;
let upper = match (self.1, other.1) {
(Some(left), Some(right)) => Some(left + right),
_ => None,
};
SizeHint(lower, upper)
}
}
impl ::std::ops::Mul for SizeHint {
type Output = Self;
fn mul(self, other: Self) -> Self {
| let lower = self.0 * other.0;
let upper = match (self.1, other.1) {
(Some(left), Some(right)) => Some(left * right),
_ => None,
};
SizeHint(lower, upper)
}
}
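// A minimal sketch (not in the original source): exercising the `SizeHint`
// helper directly to show how an unknown upper bound (`None`) propagates
// through the combined estimates, mirroring `Iterator::size_hint()` semantics.
#[cfg(test)]
mod size_hint_sketch {
    use super::SizeHint;

    #[test]
    fn unknown_upper_bound_propagates() {
        // Adding a bounded and an unbounded hint keeps the lower bound exact
        // but loses the upper bound.
        let sum = SizeHint(2, Some(2)) + SizeHint(3, None);
        assert_eq!(sum.into_inner(), (5, None));

        // Multiplying two fully bounded hints multiplies both bounds.
        let product = SizeHint(2, Some(2)) * SizeHint(3, Some(4));
        assert_eq!(product.into_inner(), (6, Some(8)));
    }
}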
impl | identifier_body |
|
cartesian.rs | ///
/// let mut combinations = cartesian::product(&[]);
/// assert_eq!(combinations.next(), Some(Vec::new()));
/// assert_eq!(combinations.next(), None);
/// ```
pub fn product<'a, C: 'a, T: 'a>(collections: &'a [C]) -> Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
// We start with fresh iterators and seed `next_item` with their first elements (`None` overall if any collection is empty).
let mut iterators = collections.iter().map(<&C>::into_iter).collect::<Vec<_>>();
let next_item = iterators.iter_mut().map(Iterator::next).collect();
Product {
collections,
iterators,
next_item,
}
}
/// Iterator returned by [`product()`].
///
/// [`product()`]: ./fn.product.html
pub struct Product<'a, C: 'a, T: 'a>
where
&'a C: IntoIterator<Item = &'a T>,
{
/// The underlying collections that we iterate over.
collections: &'a [C],
/// Our own set of sub-iterators, taken from `collections`.
iterators: Vec<<&'a C as IntoIterator>::IntoIter>,
/// The next item to yield.
next_item: Option<Vec<&'a T>>,
}
impl<'a, C, T> Iterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
type Item = Vec<&'a T>;
fn next(&mut self) -> Option<Self::Item> {
let result = self.next_item.clone();
self.advance();
result
}
/// Calculate bounds on the number of remaining elements.
///
/// This is calculated the same way as [`Product::len()`], but uses
/// a helper type to deal with the return type of `size_hint()`.
/// See there for information on why the used formula is corrected.
///
/// [`Product::len()`]: #method.len
fn size_hint(&self) -> (usize, Option<usize>) {
if self.next_item.is_none() {
return (0, Some(0));
}
let SizeHint(lower, upper) = SizeHint(1, Some(1))
+ self
.iterators
.iter()
.enumerate()
.map(|(i, iterator)| {
SizeHint::from(iterator)
* self.collections[i + 1..]
.iter()
.map(|c| SizeHint::from(&c.into_iter()))
.product()
})
.sum();
(lower, upper)
}
}
impl<'a, C, T> ExactSizeIterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
<&'a C as IntoIterator>::IntoIter: ExactSizeIterator,
{
/// Calculates the exact number of remaining elements.
///
/// The length consists of the following contributions:
///
/// - 1 for the `next_item` to be yielded;
/// - `X` for each currently active iterator, where X is the
/// product of the iterator's length and the sizes of all
/// *collections* to the right of it in the product.
///
/// Example
/// -------
///
/// Assume the Cartesian product `[1, 2, 3]×[1, 2]×[1, 2, 3]`. Upon
/// construction, the `Product` type creates three iterators `A`,
/// `B`, and `C` – one iterator for each array. It also extracts
/// one item from each to form `next_item`. Hence, `next_item`
/// contributes `1` to the total length. The three iterators
/// contribute as follows:
///
/// - A: 2 items left × collection of size 2 × collection of size
/// 3 = 12;
/// - B: 1 item left × collection of size 3 = 3;
/// - C: 2 items left = 2.
///
/// Thus, we end up with a total length of `1+12+3+2=18`. This is
/// the same length we get when multiplying the size of all passed
/// collections. (`3*2*3=18`) However, our (complicated) formula
/// also works when the iterator has already yielded some elements.
fn len(&self) -> usize {
if self.next_item.is_none() {
return 0;
}
1 + self
.iterators
.iter()
.enumerate()
.map(|(i, iterator)| {
iterator.len()
* self.collections[i + 1..]
.iter()
.map(|c| c.into_iter().len())
.product::<usize>()
})
.sum::<usize>()
}
}
impl<'a, C, T> ::std::iter::FusedIterator for Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
<&'a C as IntoIterator>::IntoIter: ExactSizeIterator,
{}
impl<'a, C, T> Product<'a, C, T>
where
&'a C: IntoIterator<Item = &'a T>,
{
/// Advances the iterators and updates `self.next_item`.
///
/// This loop works like incrementing a number digit by digit. We
/// go over each iterator and its corresponding "digit" in
/// `next_item` in lockstep, starting at the back.
///
/// If we can advance the iterator, we update the "digit" and are
/// done. If the iterator is exhausted, we have to go from "9" to
/// "10": we restart the iterator, grab the first element, and move
/// on to the next digit.
///
/// The `break` expressions are to be understood literally: our
/// scheme can break in two ways.
/// 1. The very first iterator (`i==0`) is exhausted.
/// 2. A freshly restarted iterator is empty. (should never happen!)
/// In both cases, we want to exhaust `self` immediately. We do so
/// by breaking out of the loop, falling through to the very last
/// line, and manually setting `self.next_item` to `None`.
///
/// Note that there is a so-called nullary case, when
/// `cartesian::product()` is called with an empty slice. While
/// this use-case is debatable, the mathematically correct way to
/// deal with it is to yield some empty vector once and then
/// nothing.
///
/// Luckily, we already handle this correctly! Because of the way
/// `Iterator::collect()` works when collecting into an
/// `Option<Vec<_>>`, `next_item` is initialized to some empty
/// vector, so this will be the first thing we yield. Then, when
/// `self.advance()` is called, we fall through the `while` loop and
/// immediately exhaust this iterator, yielding nothing more.
fn advance(&mut self) {
if let Some(ref mut next_item) = self.next_item {
let mut i = self.iterators.len();
while i > 0 {
i -= 1;
// Grab the next item from the current sub-iterator.
if let Some(elt) = self.iterators[i].next() {
next_item[i] = elt;
// If that works, we're done!
return;
} else if i == 0 {
// Last sub-iterator is exhausted, so we're
// exhausted, too.
break;
}
// The current sub-iterator is exhausted, start anew.
self.iterators[i] = self.collections[i].into_iter();
if let Some(elt) = self.iterators[i].next() {
next_item[i] = elt;
// Roll over to the next sub-iterator.
} else {
// Should never happen: The freshly restarted
// sub-iterator is already empty.
break;
}
}
}
// Exhaust this iterator if the above loop `break`s.
self.next_item = None;
}
}
#[derive(Debug)]
struct SizeHint(usize, Option<usize>);
impl SizeHint {
fn into_inner(self) -> (usize, Option<usize>) {
(self.0, self.1)
}
}
impl<'a, I: Iterator> From<&'a I> for SizeHint {
fn from(iter: &'a I) -> Self {
let (lower, upper) = iter.size_hint();
SizeHint(lower, upper)
}
}
impl ::std::ops::Add for SizeHint {
type Output = Self;
fn add(self, other: Self) -> Self {
let lower = self.0 + other.0;
let upper = match (self.1, other.1) {
(Some(left), Some(right)) => Some(left + right),
_ => None,
};
SizeHint(lower, upper)
}
}
impl ::std::ops::Mul for SizeHint { | type Output = Self;
fn mul(self, other: Self) -> Self {
let lower = self.0 * other.0;
let upper = match (self.1, other.1) { | random_line_split |
|
dnn1.py | #detector.setModelTypeAsTinyYOLOv3()
if localdir:
detector.setModelPath(os.path.join(execution_path , yolo_path))
else:
detector.setModelPath(yolo_path)
#dir(detector)
detector.loadModel()
#loaded_model = tf.keras.models.load_model("./src/mood-saved-models/"model + ".h5")
#loaded_model = tf.keras.models.load_model(detector.)
#path = "E:\capture_023_29092020_150305.jpg" #IMG_20200528_044908.jpg"
#pathOut = "E:\YOLO_capture_023_29092020_150305.jpg"
#path = "pose1.webp" #E:\\capture_046_29092020_150628.jpg"
pathOut = "yolo_out_2.jpg"
path = root + name
pathOut = root + name + "yolo_out" + ".jpg"
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , path), output_image_path=os.path.join(execution_path , pathOut), minimum_percentage_probability=10) #30)
for eachObject in detections:
print(eachObject["name"] , " : ", eachObject["percentage_probability"], " : ", eachObject["box_points"] )
print("--------------------------------")
return detections, path
det,path = yolo()
yoloImage = cv2.imread(path) #crop regions from it
for i in det:
print(i)
protoFile = "Z:\\pose\\mpi\\pose_deploy_linevec_faster_4_stages.prototxt"
#protoFile = "pose_deploy_linevec_faster_4_stages.prototxt"
#weightsFile = "Z:\\pose\\mpi\\pose_iter_440000.caffemodel"
weightsFile = "Z:\\pose\\mpi\\pose_iter_160000.caffemodel"
#weightsFile = "pose_iter_160000.caffemodel"
#weightsFile = "pose_iter_440000.caffemodel"
# Read the network into Memory
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
"""
{'name': 'person', 'percentage_probability': 99.86668229103088, 'box_points': [1
8, 38, 153, 397]}
{'name': 'person', 'percentage_probability': 53.89136075973511, 'box_points': [3
86, 93, 428, 171]}
{'name': 'person', 'percentage_probability': 11.339860409498215, 'box_points': [
585, 99, 641, 180]}
{'name': 'person', 'percentage_probability': 10.276197642087936, 'box_points': [
126, 178, 164, 290]}
{'name': 'person', 'percentage_probability': 99.94878768920898, 'box_points': [2
93, 80, 394, 410]}
{'name': 'person', 'percentage_probability': 99.95986223220825, 'box_points': [4
78, 88, 589, 410]}
{'name': 'person', 'percentage_probability': 67.95878410339355, 'box_points': [1
, 212, 39, 300]}
{'name': 'person', 'percentage_probability': 63.609880208969116, 'box_points': [
153, 193, 192, 306]}
{'name': 'person', 'percentage_probability': 23.985233902931213, 'box_points': [
226, 198, 265, 308]}
{'name': 'sports ball', 'percentage_probability': 20.820775628089905, 'box_point
s': [229, 50, 269, 94]}
{'name': 'person', 'percentage_probability': 40.28712213039398, 'box_points': [4
23, 110, 457, 160]}
H, W, Ch 407 211 3
"""
yolo_thr = 70 #in percents, not 0.7
collected = []
bWiden = False
for d in det:
if (d['name'] == 'person') and d['percentage_probability'] > yolo_thr:
x1,y1,x2,y2 = d['box_points']
if bWiden:
x1-=20
x2+=20
y1-=30
y2+=30
cropped = yoloImage[y1:y2, x1:x2]
cv2.imshow(d['name']+str(x1), cropped)
collected.append(cropped) #or copy first?
cv2.waitKey()
#x1,y1, ...
# for i in collected: cv2.imshow("COLLECTED?", i); cv2.waitKey() #OK
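# Hedged sketch (my addition, not in the original script): since the MPI pose
# model expects a single person, one option is to run the network on each
# YOLO-cropped person in `collected` instead of on the whole frame. The
# function and variable names below are illustrative assumptions.
def detect_pose_per_crop(crops, pose_net, in_size=(368, 368), threshold=0.1):
    all_points = []
    for crop in crops:
        h, w, _ = crop.shape
        blob = cv2.dnn.blobFromImage(crop, 1.0 / 255, in_size, (0, 0, 0), swapRB=False, crop=False)
        pose_net.setInput(blob)
        out = pose_net.forward()
        points = []
        for part in range(15):  # the MPI model returns 15 keypoints
            prob_map = out[0, part, :, :]
            _, prob, _, point = cv2.minMaxLoc(prob_map)
            # Scale the network-grid coordinates back to the crop size.
            x = (w * point[0]) / out.shape[3]
            y = (h * point[1]) / out.shape[2]
            points.append((int(x), int(y)) if prob > threshold else None)
        all_points.append(points)
    return all_points
# Example usage (illustrative): per_person_points = detect_pose_per_crop(collected, net)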
# Read image
#frame = cv2.imread("Z:\\23367640.png") #1.jpg")
#src = "Z:\\2w.jpg" #z:\\pose1.webp" #nacep1.jpg"
#src = "z:\\pose1.webp"
srcs = ["z:\\pose1.webp","Z:\\2w.jpg", "Z:\\grigor.jpg"]
id = 2
#src = srcs[2]
src = path #from first yolo, in order to compare
frame = cv2.imread(src)
cv2.imshow("FRAME"+src, frame)
#frameWidth, frameHeight, _ = frame.shape
frameHeight, frameWidth, ch = frame.shape
print("H, W, Ch", frameHeight, frameWidth, ch)
# Specify the input image dimensions
inWidth = 368 #184 #368
inHeight = 368 #184 #368
# Prepare the frame to be fed to the network
inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)
#cv2.imshow("G", inpBlob) #unsupported
#cv2.waitKey(0)
# Set the prepared object as the input blob of the network
net.setInput(inpBlob)
print(inpBlob)
output = net.forward()
print(output)
print("========")
H = output.shape[2]
W = output.shape[3]
# Empty list to store the detected keypoints
points = []
threshold = 0.3
maxKeypoints = 44
Keypoints = output.shape[1]
print("Keypoints from output?", Keypoints)
Keypoints = 15 #MPI ... returns only 15
labels = ["Head", "Neck", "Right Shoulder", "Right Elbow", "Right Wrist", "Left Shoulder", "Left Elbow", "Left Wrist", "Right Hip", "Right Knee", "Right Ankle", "Left Hip", "Left Knee", "Left Ankle", "Chest", "Background"]
#for i in range(len()):
for i in range(Keypoints): #?
# confidence map of corresponding body's part.
probMap = output[0, i, :, :]
# Find global maxima of the probMap.
minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)
# Scale the point to fit on the original image
x = (frameWidth * point[0]) / W
y = (frameHeight * point[1]) / H
if prob > threshold :
cv2.circle(frame, (int(x), int(y)), 5, (0, 255, 255), thickness=-1, lineType=cv2.FILLED)
cv2.putText(frame, "{}".format(i), (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, lineType=cv2.LINE_AA)
# Add the point to the list if the probability is greater than the threshold
print(i, labels[i])
print(x, y)
points.append((int(x), int(y)))
else :
points.append(None)
print(points)
cv2.imshow("Output-Keypoints",frame)
def | Detect | identifier_name |
|
dnn1.py | -Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210812%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210812T232641Z&X-Amz-Expires=300&X-Amz-Signature=a5b91876c83b83a6aafba333c63c5f4a880bea9a937b30e52e92bbb0ac784018&X-Amz-SignedHeaders=host&actor_id=23367640&key_id=0&repo_id=125932201&response-content-disposition=attachment%3B%20filename%3Dyolo-tiny.h5&response-content-type=application%2Foctet-stream
# Todor Arnaudov - Twenkid: debug and merging, LearnOpenCV python code had a few misses, 13.8.2021
# It seems the pose model expects only one person so the image must be segmented first! pose1.jpg
# Detect with YOLO or ImageAI etc. then use DNN
# Specify the paths for the 2 files | # My experiments results: disappointingly bad pose estimation on the images I tested. Sometimes good, sometimes terrible.
import cv2
import tensorflow.compat.v1 as tf
from imageai.Detection import ObjectDetection
import os
boxes = []
def yolo():
#name = "k.jpg"
root = "Z:\\"
name = "23367640.png" #t.jpg" #"p1.jpg" #"2w.jpg" #"grigor.jpg" #"2w.jpg" #"pose1.webp" #1.jpg"
execution_path = os.getcwd()
yolo_path = "Z:\\yolo.h5"
#yolo_path = "Z:\\yolo-tiny.h5"
localdir = False
detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
#detector.setModelTypeAsTinyYOLOv3()
if localdir:
detector.setModelPath(os.path.join(execution_path , yolo_path))
else:
detector.setModelPath(yolo_path)
#dir(detector)
detector.loadModel()
#loaded_model = tf.keras.models.load_model("./src/mood-saved-models/"model + ".h5")
#loaded_model = tf.keras.models.load_model(detector.)
#path = "E:\capture_023_29092020_150305.jpg" #IMG_20200528_044908.jpg"
#pathOut = "E:\YOLO_capture_023_29092020_150305.jpg"
#path = "pose1.webp" #E:\\capture_046_29092020_150628.jpg"
pathOut = "yolo_out_2.jpg"
path = root + name
pathOut = root + name + "yolo_out" + ".jpg"
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , path), output_image_path=os.path.join(execution_path , pathOut), minimum_percentage_probability=10) #30)
for eachObject in detections:
print(eachObject["name"] , " : ", eachObject["percentage_probability"], " : ", eachObject["box_points"] )
print("--------------------------------")
return detections, path
det,path = yolo()
yoloImage = cv2.imread(path) #crop regions from it
for i in det:
print(i)
protoFile = "Z:\\pose\\mpi\\pose_deploy_linevec_faster_4_stages.prototxt"
#protoFile = "pose_deploy_linevec_faster_4_stages.prototxt"
#weightsFile = "Z:\\pose\\mpi\\pose_iter_440000.caffemodel"
weightsFile = "Z:\\pose\\mpi\\pose_iter_160000.caffemodel"
#weightsFile = "pose_iter_160000.caffemodel"
#weightsFile = "pose_iter_440000.caffemodel"
# Read the network into Memory
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
"""
{'name': 'person', 'percentage_probability': 99.86668229103088, 'box_points': [1
8, 38, 153, 397]}
{'name': 'person', 'percentage_probability': 53.89136075973511, 'box_points': [3
86, 93, 428, 171]}
{'name': 'person', 'percentage_probability': 11.339860409498215, 'box_points': [
585, 99, 641, 180]}
{'name': 'person', 'percentage_probability': 10.276197642087936, 'box_points': [
126, 178, 164, 290]}
{'name': 'person', 'percentage_probability': 99.94878768920898, 'box_points': [2
93, 80, 394, 410]}
{'name': 'person', 'percentage_probability': 99.95986223220825, 'box_points': [4
78, 88, 589, 410]}
{'name': 'person', 'percentage_probability': 67.95878410339355, 'box_points': [1
, 212, 39, 300]}
{'name': 'person', 'percentage_probability': 63.609880208969116, 'box_points': [
153, 193, 192, 306]}
{'name': 'person', 'percentage_probability': 23.985233902931213, 'box_points': [
226, 198, 265, 308]}
{'name': 'sports ball', 'percentage_probability': 20.820775628089905, 'box_point
s': [229, 50, 269, 94]}
{'name': 'person', 'percentage_probability': 40.28712213039398, 'box_points': [4
23, 110, 457, 160]}
H, W, Ch 407 211 3
"""
yolo_thr = 70 #in percents, not 0.7
collected = []
bWiden = False
for d in det:
if (d['name'] == 'person') and d['percentage_probability'] > yolo_thr:
x1,y1,x2,y2 = d['box_points']
if bWiden:
x1-=20
x2+=20
y1-=30
y2+=30
cropped = yoloImage[y1:y2, x1:x2]
cv2.imshow(d['name']+str(x1), cropped)
collected.append(cropped) #or copy first?
cv2.waitKey()
#x1,y1, ...
# for i in collected: cv2.imshow("COLLECTED?", i); cv2.waitKey() #OK
# Read image
#frame = cv2.imread("Z:\\23367640.png") #1.jpg")
#src = "Z:\\2w.jpg" #z:\\pose1.webp" #nacep1.jpg"
#src = "z:\\pose1.webp"
srcs = ["z:\\pose1.webp","Z:\\2w.jpg", "Z:\\grigor.jpg"]
id = 2
#src = srcs[2]
src = path #from first yolo, in order to compare
frame = cv2.imread(src)
cv2.imshow | # I tried with yolo-tiny, but the accuracy of the bounding boxes didn't seem acceptable.
#tf 1.15 for older versions of ImageAI - but tf doesn't support Py 3.8
#ImageAI: older versions require tf 1.x
#tf 2.4 - required by ImageAI 2.1.6 -- no GPU supported on Win 7, tf requires CUDA 11.0 (Win10). Win7: CUDA 10.x. CPU: works
# Set the paths to models, images etc. | random_line_split |
dnn1.py | path = "pose1.webp" #E:\\capture_046_29092020_150628.jpg"
pathOut = "yolo_out_2.jpg"
path = root + name
pathOut = root + name + "yolo_out" + ".jpg"
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , path), output_image_path=os.path.join(execution_path , pathOut), minimum_percentage_probability=10) #30)
for eachObject in detections:
print(eachObject["name"] , " : ", eachObject["percentage_probability"], " : ", eachObject["box_points"] )
print("--------------------------------")
return detections, path
det,path = yolo()
yoloImage = cv2.imread(path) #crop regions from it
for i in det:
print(i)
protoFile = "Z:\\pose\\mpi\\pose_deploy_linevec_faster_4_stages.prototxt"
#protoFile = "pose_deploy_linevec_faster_4_stages.prototxt"
#weightsFile = "Z:\\pose\\mpi\\pose_iter_440000.caffemodel"
weightsFile = "Z:\\pose\\mpi\\pose_iter_160000.caffemodel"
#weightsFile = "pose_iter_160000.caffemodel"
#weightsFile = "pose_iter_440000.caffemodel"
# Read the network into Memory
net = cv2.dnn.readNetFromCaffe(protoFile, weightsFile)
"""
{'name': 'person', 'percentage_probability': 99.86668229103088, 'box_points': [1
8, 38, 153, 397]}
{'name': 'person', 'percentage_probability': 53.89136075973511, 'box_points': [3
86, 93, 428, 171]}
{'name': 'person', 'percentage_probability': 11.339860409498215, 'box_points': [
585, 99, 641, 180]}
{'name': 'person', 'percentage_probability': 10.276197642087936, 'box_points': [
126, 178, 164, 290]}
{'name': 'person', 'percentage_probability': 99.94878768920898, 'box_points': [2
93, 80, 394, 410]}
{'name': 'person', 'percentage_probability': 99.95986223220825, 'box_points': [4
78, 88, 589, 410]}
{'name': 'person', 'percentage_probability': 67.95878410339355, 'box_points': [1
, 212, 39, 300]}
{'name': 'person', 'percentage_probability': 63.609880208969116, 'box_points': [
153, 193, 192, 306]}
{'name': 'person', 'percentage_probability': 23.985233902931213, 'box_points': [
226, 198, 265, 308]}
{'name': 'sports ball', 'percentage_probability': 20.820775628089905, 'box_point
s': [229, 50, 269, 94]}
{'name': 'person', 'percentage_probability': 40.28712213039398, 'box_points': [4
23, 110, 457, 160]}
H, W, Ch 407 211 3
"""
yolo_thr = 70 #in percents, not 0.7
collected = []
bWiden = False
for d in det:
if (d['name'] == 'person') and d['percentage_probability'] > yolo_thr:
x1,y1,x2,y2 = d['box_points']
if bWiden:
x1-=20
x2+=20
y1-=30
y2+=30
cropped = yoloImage[y1:y2, x1:x2]
cv2.imshow(d['name']+str(x1), cropped)
collected.append(cropped) #or copy first?
cv2.waitKey()
#x1,y1, ...
# for i in collected: cv2.imshow("COLLECTED?", i); cv2.waitKey() #OK
# Read image
#frame = cv2.imread("Z:\\23367640.png") #1.jpg")
#src = "Z:\\2w.jpg" #z:\\pose1.webp" #nacep1.jpg"
#src = "z:\\pose1.webp"
srcs = ["z:\\pose1.webp","Z:\\2w.jpg", "Z:\\grigor.jpg"]
id = 2
#src = srcs[2]
src = path #from first yolo, in order to compare
frame = cv2.imread(src)
cv2.imshow("FRAME"+src, frame)
#frameWidth, frameHeight, _ = frame.shape
frameHeight, frameWidth, ch = frame.shape
print("H, W, Ch", frameHeight, frameWidth, ch)
# Specify the input image dimensions
inWidth = 368 #184 #368
inHeight = 368 #184 #368
# Prepare the frame to be fed to the network
inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)
#cv2.imshow("G", inpBlob) #unsupported
#cv2.waitKey(0)
# Set the prepared object as the input blob of the network
net.setInput(inpBlob)
print(inpBlob)
output = net.forward()
print(output)
print("========")
H = output.shape[2]
W = output.shape[3]
# Empty list to store the detected keypoints
points = []
threshold = 0.3
maxKeypoints = 44
Keypoints = output.shape[1]
print("Keypoints from output?", Keypoints)
Keypoints = 15 #MPI ... returns only 15
labels = ["Head", "Neck", "Right Shoulder", "Right Elbow", "Right Wrist", "Left Shoulder", "Left Elbow", "Left Wrist", "Right Hip", "Right Knee", "Right Ankle", "Left Hip", "Left Knee", "Left Ankle", "Chest", "Background"]
#for i in range(len()):
for i in range(Keypoints): #?
# confidence map of corresponding body's part.
probMap = output[0, i, :, :]
# Find global maxima of the probMap.
minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)
# Scale the point to fit on the original image
x = (frameWidth * point[0]) / W
y = (frameHeight * point[1]) / H
if prob > threshold :
cv2.circle(frame, (int(x), int(y)), 5, (0, 255, 255), thickness=-1, lineType=cv2.FILLED)
cv2.putText(frame, "{}".format(i), (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, lineType=cv2.LINE_AA)
# Add the point to the list if the probability is greater than the threshold
print(i, labels[i])
print(x, y)
points.append((int(x), int(y)))
else :
points.append(None)
print(points)
cv2.imshow("Output-Keypoints",frame)
def Detect(image): #inWidth, Height ... - global, set as params later
| frameHeight, frameWidth, ch = image.shape
# Prepare the image to be fed to the network
inpBlob = cv2.dnn.blobFromImage(image, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)
#cv2.imshow("G", inpBlob) #unsupported
#cv2.waitKey(0)
# Set the prepared object as the input blob of the network
net.setInput(inpBlob)
print(inpBlob)
output = net.forward()
print(output)
print("========")
H = output.shape[2]
W = output.shape[3]
# Empty list to store the detected keypoints
points = [] | identifier_body |
|
dnn1.py | 53, 397]}
{'name': 'person', 'percentage_probability': 53.89136075973511, 'box_points': [3
86, 93, 428, 171]}
{'name': 'person', 'percentage_probability': 11.339860409498215, 'box_points': [
585, 99, 641, 180]}
{'name': 'person', 'percentage_probability': 10.276197642087936, 'box_points': [
126, 178, 164, 290]}
{'name': 'person', 'percentage_probability': 99.94878768920898, 'box_points': [2
93, 80, 394, 410]}
{'name': 'person', 'percentage_probability': 99.95986223220825, 'box_points': [4
78, 88, 589, 410]}
{'name': 'person', 'percentage_probability': 67.95878410339355, 'box_points': [1
, 212, 39, 300]}
{'name': 'person', 'percentage_probability': 63.609880208969116, 'box_points': [
153, 193, 192, 306]}
{'name': 'person', 'percentage_probability': 23.985233902931213, 'box_points': [
226, 198, 265, 308]}
{'name': 'sports ball', 'percentage_probability': 20.820775628089905, 'box_point
s': [229, 50, 269, 94]}
{'name': 'person', 'percentage_probability': 40.28712213039398, 'box_points': [4
23, 110, 457, 160]}
H, W, Ch 407 211 3
"""
yolo_thr = 70 #in percents, not 0.7
collected = []
bWiden = False
for d in det:
if (d['name'] == 'person') and d['percentage_probability'] > yolo_thr:
x1,y1,x2,y2 = d['box_points']
if bWiden:
x1-=20
x2+=20
y1-=30
y2+=30
cropped = yoloImage[y1:y2, x1:x2]
cv2.imshow(d['name']+str(x1), cropped)
collected.append(cropped) #or copy first?
cv2.waitKey()
#x1,y1, ...
# for i in collected: cv2.imshow("COLLECTED?", i); cv2.waitKey() #OK
# Read image
#frame = cv2.imread("Z:\\23367640.png") #1.jpg")
#src = "Z:\\2w.jpg" #z:\\pose1.webp" #nacep1.jpg"
#src = "z:\\pose1.webp"
srcs = ["z:\\pose1.webp","Z:\\2w.jpg", "Z:\\grigor.jpg"]
id = 2
#src = srcs[2]
src = path #from first yolo, in order to compare
frame = cv2.imread(src)
cv2.imshow("FRAME"+src, frame)
#frameWidth, frameHeight, _ = frame.shape
frameHeight, frameWidth, ch = frame.shape
print("H, W, Ch", frameHeight, frameWidth, ch)
# Specify the input image dimensions
inWidth = 368 #184 #368
inHeight = 368 #184 #368
# Prepare the frame to be fed to the network
inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)
#cv2.imshow("G", inpBlob) #unsupported
#cv2.waitKey(0)
# Set the prepared object as the input blob of the network
net.setInput(inpBlob)
print(inpBlob)
output = net.forward()
print(output)
print("========")
H = output.shape[2]
W = output.shape[3]
# Empty list to store the detected keypoints
points = []
threshold = 0.3
maxKeypoints = 44
Keypoints = output.shape[1]
print("Keypoints from output?", Keypoints)
Keypoints = 15 #MPI ... returns only 15
labels = ["Head", "Neck", "Right Shoulder", "Right Elbow", "Right Wrist", "Left Shoulder", "Left Elbow", "Left Wrist", "Right Hip", "Right Knee", "Right Ankle", "Left Hip", "Left Knee", "Left Ankle", "Chest", "Background"]
#for i in range(len()):
for i in range(Keypoints): #?
# confidence map of corresponding body's part.
probMap = output[0, i, :, :]
# Find global maxima of the probMap.
minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)
# Scale the point to fit on the original image
x = (frameWidth * point[0]) / W
y = (frameHeight * point[1]) / H
if prob > threshold :
cv2.circle(frame, (int(x), int(y)), 5, (0, 255, 255), thickness=-1, lineType=cv2.FILLED)
cv2.putText(frame, "{}".format(i), (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, lineType=cv2.LINE_AA)
# Add the point to the list if the probability is greater than the threshold
print(i, labels[i])
print(x, y)
points.append((int(x), int(y)))
else :
points.append(None)
print(points)
cv2.imshow("Output-Keypoints",frame)
def Detect(image): #inWidth, Height ... - global, set as params later
frameHeight, frameWidth, ch = image.shape
# Prepare the image to be fed to the network
inpBlob = cv2.dnn.blobFromImage(image, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)
#cv2.imshow("G", inpBlob) #unsupported
#cv2.waitKey(0)
# Set the prepared object as the input blob of the network
net.setInput(inpBlob)
print(inpBlob)
output = net.forward()
print(output)
print("========")
H = output.shape[2]
W = output.shape[3]
# Empty list to store the detected keypoints
points = []
threshold = 0.1
maxKeypoints = 44
Keypoints = output.shape[1]
print("Keypoints from output?", Keypoints)
Keypoints = 15 #MPI ... returns only 15
labels = ["Head", "Neck", "Right Shoulder", "Right Elbow", "Right Wrist", "Left Shoulder", "Left Elbow", "Left Wrist", "Right Hip", "Right Knee", "Right Ankle", "Left Hip", "Left Knee", "Left Ankle", "Chest", "Background"]
#for i in range(len()):
for i in range(Keypoints): #?
# confidence map of corresponding body's part.
| probMap = output[0, i, :, :]
# Find global maxima of the probMap.
minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)
# Scale the point to fit on the original image
x = (frameWidth * point[0]) / W
y = (frameHeight * point[1]) / H
if prob > threshold :
cv2.circle(image, (int(x), int(y)), 5, (0, 255, 255), thickness=-1, lineType=cv2.FILLED)
cv2.putText(image, "{}".format(i), (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, lineType=cv2.LINE_AA)
# Add the point to the list if the probability is greater than the threshold
print(i, labels[i])
print(x, y)
points.append((int(x), int(y)))
else :
points.append(None) | conditional_block |
|
manifest.go | and makes it easy to ensure we are never processing the same item
// simultaneously in two different workers.
workqueue workqueue.RateLimitingInterface
manifestLister applisters.ManifestLister
manifestSynced cache.InformerSynced
}
//NewController new controller
func NewController(clusternetClient clusternetclientset.Interface,
manifestInformer appinformers.ManifestInformer) (*Controller, error) {
c := &Controller{
clusternetClient: clusternetClient,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "manifest"),
manifestLister: manifestInformer.Lister(),
manifestSynced: manifestInformer.Informer().HasSynced,
}
// Manage the addition/update of Manifest
manifestInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.addManifest,
UpdateFunc: c.updateManifest,
DeleteFunc: c.deleteManifest,
})
return c, nil
}
// Run will set up the event handlers for types we are interested in, as well
// as syncing informer caches and starting workers. It will block until stopCh
// is closed, at which point it will shutdown the workqueue and wait for
// workers to finish processing their current work items.
func (c *Controller) Run(workers int, stopCh <-chan struct{}) {
defer utilruntime.HandleCrash()
defer c.workqueue.ShutDown()
klog.Info("starting manifest controller...")
defer klog.Info("shutting down manifest controller")
// Wait for the caches to be synced before starting workers
if !cache.WaitForNamedCacheSync("manifest-controller", stopCh, c.manifestSynced) {
return
}
klog.V(5).Infof("starting %d worker threads", workers)
// Launch workers to process Manifest resources
for i := 0; i < workers; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh
}
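// Illustrative wiring sketch (my addition, not from this file): how the
// controller is typically constructed and started. It assumes Clusternet's
// code-generated clientset/informer factory; the exact package names and the
// Apps().V1alpha1().Manifests() accessor are assumptions to verify against
// the repository.
//
//	clusternetClient := clusternetclientset.NewForConfigOrDie(restConfig)
//	factory := informers.NewSharedInformerFactory(clusternetClient, 10*time.Minute)
//	controller, err := NewController(clusternetClient, factory.Apps().V1alpha1().Manifests())
//	if err != nil {
//		klog.Fatalf("failed to create manifest controller: %v", err)
//	}
//	factory.Start(stopCh)
//	go controller.Run(2, stopCh)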
func (c *Controller) addManifest(obj interface{}) {
manifest := obj.(*appsapi.Manifest)
klog.V(4).Infof("adding Manifest %q", klog.KObj(manifest))
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
// Filter out entries that have no annotations
annotations := utd.GetAnnotations()
if ok := util.MatchAnnotationsKeyPrefix(annotations); !ok {
klog.V(5).Infof("addManifest but manifest %s:%s does not find match annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
func (c *Controller) updateManifest(old, cur interface{}) {
oldManifest := old.(*appsapi.Manifest)
newManifest := cur.(*appsapi.Manifest)
if newManifest.DeletionTimestamp != nil {
c.enqueue(newManifest)
return
}
if oldManifest.Template.Raw == nil || newManifest.Template.Raw == nil {
klog.Warning("old manifest.Template.Raw or newManifest.Template.Raw is empty, %q, %q", klog.KObj(oldManifest), klog.KObj(newManifest))
return
}
oldUtd := &unstructured.Unstructured{}
err := json.Unmarshal(oldManifest.Template.Raw, &oldUtd.Object)
if err != nil {
klog.Errorf("oldManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(oldManifest), err)
return
}
newUtd := &unstructured.Unstructured{}
err = json.Unmarshal(newManifest.Template.Raw, &newUtd.Object)
if err != nil {
klog.Errorf("newManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(newManifest), err)
return
}
if reflect.DeepEqual(newUtd.GetAnnotations(), oldUtd.GetAnnotations()) {
klog.V(5).Infof("updateManifest oldManifest annotation and newManifest annotation is equal, skip")
return
}
klog.V(4).Infof("updating Manifest %q", klog.KObj(oldManifest))
c.enqueue(newManifest)
}
func (c *Controller) deleteManifest(obj interface{}) {
manifest, ok := obj.(*appsapi.Manifest)
if !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
utilruntime.HandleError(fmt.Errorf("couldn't get object from tombstone %#v", obj))
return
}
manifest, ok = tombstone.Obj.(*appsapi.Manifest)
if !ok {
utilruntime.HandleError(fmt.Errorf("tombstone contained object that is not a Manifest %#v", obj))
return
}
}
klog.V(4).Infof("deleting Manifest %q", klog.KObj(manifest))
// Filter out entries whose annotations lack the specified key
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
if ok := util.MatchAnnotationsKeyPrefix(utd.GetAnnotations()); !ok {
klog.V(5).Infof("deleteManifest , but manifest %s:%s does not find match annotation in annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
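// enqueue is called above but not included in this excerpt. A minimal sketch
// of the conventional implementation (an assumption, not the original code):
//
//	func (c *Controller) enqueue(manifest *appsapi.Manifest) {
//		key, err := cache.MetaNamespaceKeyFunc(manifest)
//		if err != nil {
//			utilruntime.HandleError(err)
//			return
//		}
//		c.workqueue.Add(key)
//	}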
// runWorker is a long-running function that will continually call the
// processNextWorkItem function in order to read and process a message on the
// workqueue.
func (c *Controller) runWorker() {
for c.processNextWorkItem() {
}
}
// processNextWorkItem will read a single work item off the workqueue and
// attempt to process it, by calling the syncHandler.
func (c *Controller) processNextWorkItem() bool {
obj, shutdown := c.workqu | // more up to date that when the item was initially put onto the
// workqueue.
if key, ok = obj.(string); !ok {
// As the item in the workqueue is actually invalid, we call
// Forget here else we'd go into a loop of attempting to
// process a work item that is invalid.
c.workqueue.Forget(obj)
utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
return nil
}
// Run the syncHandler, passing it the namespace/name string of the
// Manifest resource to be synced.
if err := c.syncHandler(key); err != nil {
// Put the item back on the workqueue to handle any transient errors.
c.workqueue.AddRateLimited(key)
return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
}
// Finally, if no error occurs we Forget this item so it does not
// get queued again until another change happens.
c.workqueue.Forget(obj)
klog.Infof("successfully synced Manifest %q", key)
return nil
}(obj)
if err != nil {
utilruntime.HandleError(err)
return true
}
return true
}
// syncHandler compares the actual state with the desired, and attempts to
// converge the two. It then updates the Status block of the Manifest resource
// with the current status of the resource.
func (c *Controller) syncHandler(key string) error {
// If an error occurs during handling, we'll requeue the item so we can
// attempt processing again later. This could have been caused by a
// temporary network failure, or any other transient reason.
// Convert the namespace/name string into a distinct namespace and name
ns, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
utilruntime.HandleError(fmt.Errorf("invalid resource key: %s", key))
return nil
}
klog.V(4).Infof("start processing Manifest %q", key)
// Get the Manifest resource with this name
manifest, err := c.manifestLister.Manifests(ns).Get(name)
// | eue.Get()
if shutdown {
return false
}
// We wrap this block in a func so we can defer c.workqueue.Done.
err := func(obj interface{}) error {
// We call Done here so the workqueue knows we have finished
// processing this item. We also must remember to call Forget if we
// do not want this work item being re-queued. For example, we do
// not call Forget if a transient error occurs, instead the item is
// put back on the workqueue and attempted again after a back-off
// period.
defer c.workqueue.Done(obj)
var key string
var ok bool
// We expect strings to come off the workqueue. These are of the
// form namespace/name. We do this as the delayed nature of the
// workqueue means the items in the informer cache may actually be | identifier_body |
manifest.go | , and makes it easy to ensure we are never processing the same item
// simultaneously in two different workers.
workqueue workqueue.RateLimitingInterface
manifestLister applisters.ManifestLister
manifestSynced cache.InformerSynced
}
//NewController new controller
func NewController(clusternetClient clusternetclientset.Interface,
manifestInformer appinformers.ManifestInformer) (*Controller, error) {
c := &Controller{
clusternetClient: clusternetClient,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "manifest"),
manifestLister: manifestInformer.Lister(),
manifestSynced: manifestInformer.Informer().HasSynced,
}
// Manage the addition/update of Manifest
manifestInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.addManifest,
UpdateFunc: c.updateManifest,
DeleteFunc: c.deleteManifest,
})
return c, nil
}
// Run will set up the event handlers for types we are interested in, as well
// as syncing informer caches and starting workers. It will block until stopCh
// is closed, at which point it will shutdown the workqueue and wait for
// workers to finish processing their current work items.
func (c *Controller) Run(workers int, stopCh <-chan struct{}) {
defer utilruntime.HandleCrash()
defer c.workqueue.ShutDown()
klog.Info("starting manifest controller...")
defer klog.Info("shutting down manifest controller")
// Wait for the caches to be synced before starting workers
if !cache.WaitForNamedCacheSync("manifest-controller", stopCh, c.manifestSynced) {
return
}
klog.V(5).Infof("starting %d worker threads", workers)
// Launch workers to process Manifest resources
for i := 0; i < workers; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh
}
func (c *Controller) addManifest(obj interface{}) {
manifest := obj.(*appsapi.Manifest)
klog.V(4).Infof("adding Manifest %q", klog.KObj(manifest))
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
// Filter out entries that have no annotations
annotations := utd.GetAnnotations()
if ok := util.MatchAnnotationsKeyPrefix(annotations); !ok {
klog.V(5).Infof("addManifest but manifest %s:%s does not find match annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
func (c *Controller) updateManifest(old, cur interface{}) {
oldManifest := old.(*appsapi.Manifest)
newManifest := cur.(*appsapi.Manifest)
if newManifest.DeletionTimestamp != nil {
c.enqueue(newManifest)
return
}
if oldManifest.Template.Raw == nil || newManifest.Template.Raw == nil {
klog.Warning("old manifest.Template.Raw or newManifest.Template.Raw is empty, %q, %q", klog.KObj(oldManifest), klog.KObj(newManifest))
return
}
oldUtd := &unstructured.Unstructured{}
err := json.Unmarshal(oldManifest.Template.Raw, &oldUtd.Object)
if err != nil {
klog.Errorf("oldManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(oldManifest), err)
return
}
newUtd := &unstructured.Unstructured{}
err = json.Unmarshal(newManifest.Template.Raw, &newUtd.Object)
if err != nil {
klog.Errorf("newManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(newManifest), err)
return
}
if reflect.DeepEqual(newUtd.GetAnnotations(), oldUtd.GetAnnotations()) {
klog.V(5).Infof("updateManifest oldManifest annotation and newManifest annotation is equal, skip")
return
}
klog.V(4).Infof("updating Manifest %q", klog.KObj(oldManifest))
c.enqueue(newManifest)
}
func (c *Controller) deleteManifest(obj interface{}) {
manifest, ok := obj.(*appsapi.Manifest)
if !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
utilruntime.HandleError(fmt.Errorf("couldn't get object from tombstone %#v", obj))
return
}
manifest, ok = tombstone.Obj.(*appsapi.Manifest)
if !ok {
utilruntime.HandleError(fmt.Errorf("tombstone contained object that is not a Manifest %#v", obj))
return
}
}
klog.V(4).Infof("deleting Manifest %q", klog.KObj(manifest))
// Filter out entries whose annotations lack the specified key
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
if ok := util.MatchAnnotationsKeyPrefix(utd.GetAnnotations()); !ok {
klog.V(5).Infof("deleteManifest , but manifest %s:%s does not find match annotation in annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
// runWorker is a long-running function that will continually call the
// processNextWorkItem function in order to read and process a message on the | // workqueue.
func (c *Controller) runWorker() {
for c.processNextWorkItem() {
}
}
// processNextWorkItem will read a single work item off the workqueue and
// attempt to process it, by calling the syncHandler.
func (c *Controller) processNextWorkItem() bool {
obj, shutdown := c.workqueue.Get()
if shutdown {
return false
}
// We wrap this block in a func so we can defer c.workqueue.Done.
err := func(obj interface{}) error {
// We call Done here so the workqueue knows we have finished
// processing this item. We also must remember to call Forget if we
// do not want this work item being re-queued. For example, we do
// not call Forget if a transient error occurs, instead the item is
// put back on the workqueue and attempted again after a back-off
// period.
defer c.workqueue.Done(obj)
var key string
var ok bool
// We expect strings to come off the workqueue. These are of the
// form namespace/name. We do this as the delayed nature of the
// workqueue means the items in the informer cache may actually be
// more up to date that when the item was initially put onto the
// workqueue.
if key, ok = obj.(string); !ok {
// As the item in the workqueue is actually invalid, we call
// Forget here else we'd go into a loop of attempting to
// process a work item that is invalid.
c.workqueue.Forget(obj)
utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
return nil
}
// Run the syncHandler, passing it the namespace/name string of the
// Manifest resource to be synced.
if err := c.syncHandler(key); err != nil {
// Put the item back on the workqueue to handle any transient errors.
c.workqueue.AddRateLimited(key)
return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
}
// Finally, if no error occurs we Forget this item so it does not
// get queued again until another change happens.
c.workqueue.Forget(obj)
klog.Infof("successfully synced Manifest %q", key)
return nil
}(obj)
if err != nil {
utilruntime.HandleError(err)
return true
}
return true
}
// syncHandler compares the actual state with the desired, and attempts to
// converge the two. It then updates the Status block of the Manifest resource
// with the current status of the resource.
func (c *Controller) syncHandler(key string) error {
// If an error occurs during handling, we'll requeue the item so we can
// attempt processing again later. This could have been caused by a
// temporary network failure, or any other transient reason.
// Convert the namespace/name string into a distinct namespace and name
ns, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
utilruntime.HandleError(fmt.Errorf("invalid resource key: %s", key))
return nil
}
klog.V(4).Infof("start processing Manifest %q", key)
// Get the Manifest resource with this name
manifest, err := c.manifestLister.Manifests(ns).Get(name)
// The Manifest resource | random_line_split |
|
manifest.go | , and makes it easy to ensure we are never processing the same item
// simultaneously in two different workers.
workqueue workqueue.RateLimitingInterface
manifestLister applisters.ManifestLister
manifestSynced cache.InformerSynced
}
//NewController new controller
func NewController(clusternetClient clusternetclientset.Interface,
manifestInformer appinformers.ManifestInformer) (*Controller, error) {
c := &Controller{
clusternetClient: clusternetClient,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "manifest"),
manifestLister: manifestInformer.Lister(),
manifestSynced: manifestInformer.Informer().HasSynced,
}
// Manage the addition/update of Manifest
manifestInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.addManifest,
UpdateFunc: c.updateManifest,
DeleteFunc: c.deleteManifest,
})
return c, nil
}
// Run will set up the event handlers for types we are interested in, as well
// as syncing informer caches and starting workers. It will block until stopCh
// is closed, at which point it will shutdown the workqueue and wait for
// workers to finish processing their current work items.
func (c *Controller) | (workers int, stopCh <-chan struct{}) {
defer utilruntime.HandleCrash()
defer c.workqueue.ShutDown()
klog.Info("starting manifest controller...")
defer klog.Info("shutting down manifest controller")
// Wait for the caches to be synced before starting workers
if !cache.WaitForNamedCacheSync("manifest-controller", stopCh, c.manifestSynced) {
return
}
klog.V(5).Infof("starting %d worker threads", workers)
// Launch workers to process Manifest resources
for i := 0; i < workers; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh
}
func (c *Controller) addManifest(obj interface{}) {
manifest := obj.(*appsapi.Manifest)
klog.V(4).Infof("adding Manifest %q", klog.KObj(manifest))
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
//过滤没有annotation的
annotations := utd.GetAnnotations()
if ok := util.MatchAnnotationsKeyPrefix(annotations); !ok {
klog.V(5).Infof("addManifest but manifest %s:%s does not find match annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
func (c *Controller) updateManifest(old, cur interface{}) {
oldManifest := old.(*appsapi.Manifest)
newManifest := cur.(*appsapi.Manifest)
if newManifest.DeletionTimestamp != nil {
c.enqueue(newManifest)
return
}
if oldManifest.Template.Raw == nil || newManifest.Template.Raw == nil {
klog.Warning("old manifest.Template.Raw or newManifest.Template.Raw is empty, %q, %q", klog.KObj(oldManifest), klog.KObj(newManifest))
return
}
oldUtd := &unstructured.Unstructured{}
err := json.Unmarshal(oldManifest.Template.Raw, &oldUtd.Object)
if err != nil {
klog.Errorf("oldManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(oldManifest), err)
return
}
newUtd := &unstructured.Unstructured{}
err = json.Unmarshal(newManifest.Template.Raw, &newUtd.Object)
if err != nil {
klog.Errorf("newManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(newManifest), err)
return
}
if reflect.DeepEqual(newUtd.GetAnnotations(), oldUtd.GetAnnotations()) {
klog.V(5).Infof("updateManifest oldManifest annotation and newManifest annotation is equal, skip")
return
}
klog.V(4).Infof("updating Manifest %q", klog.KObj(oldManifest))
c.enqueue(newManifest)
}
func (c *Controller) deleteManifest(obj interface{}) {
manifest, ok := obj.(*appsapi.Manifest)
if !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
utilruntime.HandleError(fmt.Errorf("couldn't get object from tombstone %#v", obj))
return
}
manifest, ok = tombstone.Obj.(*appsapi.Manifest)
if !ok {
utilruntime.HandleError(fmt.Errorf("tombstone contained object that is not a Manifest %#v", obj))
return
}
}
klog.V(4).Infof("deleting Manifest %q", klog.KObj(manifest))
//过滤没有指定annotation key的数据
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
if ok := util.MatchAnnotationsKeyPrefix(utd.GetAnnotations()); !ok {
klog.V(5).Infof("deleteManifest , but manifest %s:%s does not find match annotation in annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
// runWorker is a long-running function that will continually call the
// processNextWorkItem function in order to read and process a message on the
// workqueue.
func (c *Controller) runWorker() {
for c.processNextWorkItem() {
}
}
// processNextWorkItem will read a single work item off the workqueue and
// attempt to process it, by calling the syncHandler.
func (c *Controller) processNextWorkItem() bool {
obj, shutdown := c.workqueue.Get()
if shutdown {
return false
}
// We wrap this block in a func so we can defer c.workqueue.Done.
err := func(obj interface{}) error {
// We call Done here so the workqueue knows we have finished
// processing this item. We also must remember to call Forget if we
// do not want this work item being re-queued. For example, we do
// not call Forget if a transient error occurs, instead the item is
// put back on the workqueue and attempted again after a back-off
// period.
defer c.workqueue.Done(obj)
var key string
var ok bool
// We expect strings to come off the workqueue. These are of the
// form namespace/name. We do this as the delayed nature of the
// workqueue means the items in the informer cache may actually be
// more up to date that when the item was initially put onto the
// workqueue.
if key, ok = obj.(string); !ok {
// As the item in the workqueue is actually invalid, we call
// Forget here else we'd go into a loop of attempting to
// process a work item that is invalid.
c.workqueue.Forget(obj)
utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
return nil
}
// Run the syncHandler, passing it the namespace/name string of the
// Manifest resource to be synced.
if err := c.syncHandler(key); err != nil {
// Put the item back on the workqueue to handle any transient errors.
c.workqueue.AddRateLimited(key)
return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
}
// Finally, if no error occurs we Forget this item so it does not
// get queued again until another change happens.
c.workqueue.Forget(obj)
klog.Infof("successfully synced Manifest %q", key)
return nil
}(obj)
if err != nil {
utilruntime.HandleError(err)
return true
}
return true
}
// syncHandler compares the actual state with the desired, and attempts to
// converge the two. It then updates the Status block of the Manifest resource
// with the current status of the resource.
func (c *Controller) syncHandler(key string) error {
// If an error occurs during handling, we'll requeue the item so we can
// attempt processing again later. This could have been caused by a
// temporary network failure, or any other transient reason.
// Convert the namespace/name string into a distinct namespace and name
ns, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
utilruntime.HandleError(fmt.Errorf("invalid resource key: %s", key))
return nil
}
klog.V(4).Infof("start processing Manifest %q", key)
// Get the Manifest resource with this name
manifest, err := c.manifestLister.Manifests(ns).Get(name)
// The | Run | identifier_name |
manifest.go | , and makes it easy to ensure we are never processing the same item
// simultaneously in two different workers.
workqueue workqueue.RateLimitingInterface
manifestLister applisters.ManifestLister
manifestSynced cache.InformerSynced
}
//NewController new controller
func NewController(clusternetClient clusternetclientset.Interface,
manifestInformer appinformers.ManifestInformer) (*Controller, error) {
c := &Controller{
clusternetClient: clusternetClient,
workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "manifest"),
manifestLister: manifestInformer.Lister(),
manifestSynced: manifestInformer.Informer().HasSynced,
}
// Manage the addition/update of Manifest
manifestInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: c.addManifest,
UpdateFunc: c.updateManifest,
DeleteFunc: c.deleteManifest,
})
return c, nil
}
// Run will set up the event handlers for types we are interested in, as well
// as syncing informer caches and starting workers. It will block until stopCh
// is closed, at which point it will shutdown the workqueue and wait for
// workers to finish processing their current work items.
func (c *Controller) Run(workers int, stopCh <-chan struct{}) {
defer utilruntime.HandleCrash()
defer c.workqueue.ShutDown()
klog.Info("starting manifest controller...")
defer klog.Info("shutting down manifest controller")
// Wait for the caches to be synced before starting workers
if !cache.WaitForNamedCacheSync("manifest-controller", stopCh, c.manifestSynced) {
return
}
klog.V(5).Infof("starting %d worker threads", workers)
// Launch workers to process Manifest resources
for i := 0; i < workers; i++ {
go wait.Until(c.runWorker, time.Second, stopCh)
}
<-stopCh
}
func (c *Controller) addManifest(obj interface{}) {
manifest := obj.(*appsapi.Manifest)
klog.V(4).Infof("adding Manifest %q", klog.KObj(manifest))
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil |
//过滤没有annotation的
annotations := utd.GetAnnotations()
if ok := util.MatchAnnotationsKeyPrefix(annotations); !ok {
klog.V(5).Infof("addManifest but manifest %s:%s does not find match annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
func (c *Controller) updateManifest(old, cur interface{}) {
oldManifest := old.(*appsapi.Manifest)
newManifest := cur.(*appsapi.Manifest)
if newManifest.DeletionTimestamp != nil {
c.enqueue(newManifest)
return
}
if oldManifest.Template.Raw == nil || newManifest.Template.Raw == nil {
klog.Warning("old manifest.Template.Raw or newManifest.Template.Raw is empty, %q, %q", klog.KObj(oldManifest), klog.KObj(newManifest))
return
}
oldUtd := &unstructured.Unstructured{}
err := json.Unmarshal(oldManifest.Template.Raw, &oldUtd.Object)
if err != nil {
klog.Errorf("oldManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(oldManifest), err)
return
}
newUtd := &unstructured.Unstructured{}
err = json.Unmarshal(newManifest.Template.Raw, &newUtd.Object)
if err != nil {
klog.Errorf("newManifest.Template.Raw unmarshal error, %q, err=%v", klog.KObj(newManifest), err)
return
}
if reflect.DeepEqual(newUtd.GetAnnotations(), oldUtd.GetAnnotations()) {
klog.V(5).Infof("updateManifest oldManifest annotation and newManifest annotation is equal, skip")
return
}
klog.V(4).Infof("updating Manifest %q", klog.KObj(oldManifest))
c.enqueue(newManifest)
}
func (c *Controller) deleteManifest(obj interface{}) {
manifest, ok := obj.(*appsapi.Manifest)
if !ok {
tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
utilruntime.HandleError(fmt.Errorf("couldn't get object from tombstone %#v", obj))
return
}
manifest, ok = tombstone.Obj.(*appsapi.Manifest)
if !ok {
utilruntime.HandleError(fmt.Errorf("tombstone contained object that is not a Manifest %#v", obj))
return
}
}
klog.V(4).Infof("deleting Manifest %q", klog.KObj(manifest))
//过滤没有指定annotation key的数据
if manifest.Template.Raw == nil {
klog.Warning("manifest.Template.Raw is empty, %q", klog.KObj(manifest))
return
}
utd := &unstructured.Unstructured{}
err := json.Unmarshal(manifest.Template.Raw, &utd.Object)
if err != nil {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
}
if ok := util.MatchAnnotationsKeyPrefix(utd.GetAnnotations()); !ok {
klog.V(5).Infof("deleteManifest , but manifest %s:%s does not find match annotation in annotation", manifest.Namespace, manifest.Name)
return
}
c.enqueue(manifest)
}
// runWorker is a long-running function that will continually call the
// processNextWorkItem function in order to read and process a message on the
// workqueue.
func (c *Controller) runWorker() {
for c.processNextWorkItem() {
}
}
// processNextWorkItem will read a single work item off the workqueue and
// attempt to process it, by calling the syncHandler.
func (c *Controller) processNextWorkItem() bool {
obj, shutdown := c.workqueue.Get()
if shutdown {
return false
}
// We wrap this block in a func so we can defer c.workqueue.Done.
err := func(obj interface{}) error {
// We call Done here so the workqueue knows we have finished
// processing this item. We also must remember to call Forget if we
// do not want this work item being re-queued. For example, we do
// not call Forget if a transient error occurs, instead the item is
// put back on the workqueue and attempted again after a back-off
// period.
defer c.workqueue.Done(obj)
var key string
var ok bool
// We expect strings to come off the workqueue. These are of the
// form namespace/name. We do this as the delayed nature of the
// workqueue means the items in the informer cache may actually be
// more up to date that when the item was initially put onto the
// workqueue.
if key, ok = obj.(string); !ok {
// As the item in the workqueue is actually invalid, we call
// Forget here else we'd go into a loop of attempting to
// process a work item that is invalid.
c.workqueue.Forget(obj)
utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
return nil
}
// Run the syncHandler, passing it the namespace/name string of the
// Manifest resource to be synced.
if err := c.syncHandler(key); err != nil {
// Put the item back on the workqueue to handle any transient errors.
c.workqueue.AddRateLimited(key)
return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
}
// Finally, if no error occurs we Forget this item so it does not
// get queued again until another change happens.
c.workqueue.Forget(obj)
klog.Infof("successfully synced Manifest %q", key)
return nil
}(obj)
if err != nil {
utilruntime.HandleError(err)
return true
}
return true
}
// syncHandler compares the actual state with the desired, and attempts to
// converge the two. It then updates the Status block of the Manifest resource
// with the current status of the resource.
func (c *Controller) syncHandler(key string) error {
// If an error occurs during handling, we'll requeue the item so we can
// attempt processing again later. This could have been caused by a
// temporary network failure, or any other transient reason.
// Convert the namespace/name string into a distinct namespace and name
ns, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
utilruntime.HandleError(fmt.Errorf("invalid resource key: %s", key))
return nil
}
klog.V(4).Infof("start processing Manifest %q", key)
// Get the Manifest resource with this name
manifest, err := c.manifestLister.Manifests(ns).Get(name)
// | {
klog.Errorf("unmarshal error, %q, err=%v", klog.KObj(manifest), err)
return
} | conditional_block |
rsqf.rs | struct Metadata {
n: usize,
qbits: usize,
rbits: usize,
nblocks: usize,
nelements: usize,
ndistinct_elements: usize,
nslots: usize,
noccupied_slots: usize,
max_slots: usize,
}
/// Standard filter result type, on success returns a count on error returns a message
/// This should probably be richer over time
type FilterResult = Result<usize, &'static str>;
#[allow(dead_code)] // for now
#[allow(unused_variables)] // for now
impl RSQF {
pub fn new(n: usize, rbits: usize) -> RSQF {
RSQF::from_n_and_r(n, rbits)
}
/// Creates a structure for the filter based on `n` the number of expected elements
/// and `rbits` which specifies the false positive rate at `1/(2^rbits - 1)`
fn from_n_and_r(n: usize, rbits: usize) -> RSQF {
RSQF::from_metadata(Metadata::from_n_and_r(n, rbits))
}
/// Creates an instance of the filter given the description of the filter parameters stored in
/// a `Metadata` structure
fn from_metadata(meta: Metadata) -> RSQF {
let logical = logical::LogicalData::new(meta.nslots, meta.rbits);
return RSQF { meta, logical };
}
/// Queries the filter for the presence of `hash`.
///
/// If `hash` is not present, returns 0
/// If `hash` is likely to be present, returns an approximate count of the number of times
/// `hash` has been inserted. Note that this is approximate; it is possible that `hash` is
/// actually not present but a non-zero count is returned, with a probability no worse than
/// `2^-rbits`
pub fn get_count(&self, hash: Murmur3Hash) -> usize {
panic!("NYI");
}
/// Adds `count` to the total count for `hash` in the filter.
///
/// If `hash` is not present in the filter, it is added with `count` count.
/// If `hash` is present, `count` is added to its existing count.
///
/// As with the `query` method it is possible `hash` collides with another hash value
///
/// Returns the new total count of `hash` on success, or `Err` if the filter is already at max
/// capacity
pub fn add_count(&self, hash: Murmur3Hash, count: usize) -> FilterResult {
panic!("NYI");
}
/// Increments the count of `hash` by one.
///
/// If `hash` is not present, it's added with a count of one.
/// If `hash` is present, its existing count is incremented.
///
/// Returns the new total count of `hash` on success, or `Err` if the filter is already at max
/// capacity
pub fn inc_count(&self, hash: Murmur3Hash) -> FilterResult {
return self.add_count(hash, 1);
}
/// Subtracts `count` from the total count for `hash` in the filter.
///
/// If `hash` is not present in the filter, an error is returned
/// If `hash` is present, `count` is subtracted from the existing count. The resulting count
/// is returned. If subtracting `count` from the existing count results in a value less than
/// 1, a resulting count of 0 is returned and `hash` is removed from the filter.
///
/// As with the `query` method it is possible `hash` collides with another hash value
///
/// Returns the new total count of `hash` on success, which may be 0 in which case `hash` has
/// been removed from the filter, or an error if `hash` was not found in the filter
pub fn sub_count(&self, hash: Murmur3Hash, count: usize) -> FilterResult {
panic!("NYI");
}
pub fn dec_count(&self, hash: Murmur3Hash) -> FilterResult {
return self.sub_count(hash, 1);
}
/// Given a Murmur3 hash as input, extracts the quotient `q` and remainder `r` which will be
/// used to look up this item in the filter.
///
/// Though both values are of type `u64`, the number of bits used in each is based on the size
/// (`n`) and false-positive rate (`rbits`) specified when the filter was created
fn get_q_and_r(&self, hash: Murmur3Hash) -> (u64, u64) {
//Use only the 64-bit hash and pull out the bits we'll use for q and r
let hash = hash.value64();
// To compute the quotient q for this hash, shift right to remove the bits to be used as
// the remainder r, then mask out q bits
let q = (hash.wrapping_shr(self.meta.rbits as u32)) & bitmask!(self.meta.qbits);
let r = hash & bitmask!(self.meta.rbits as u32);
(q, r)
}
}
#[cfg(test)]
mod rsqf_tests {
use super::*;
use murmur::Murmur3Hash;
#[test]
fn creates_empty_filter() {
let _filter = RSQF::new(10000, 9);
}
#[test]
#[should_panic]
fn panics_on_invalid_r() {
RSQF::new(10000, 8);
}
#[test]
fn computes_valid_metadata() {
let filter = RSQF::new(10000, 9);
assert_eq!(filter.meta.n, 10000);
assert_eq!(filter.meta.rbits, 9);
assert_eq!(filter.meta.qbits, 14);
assert_eq!(filter.meta.nslots, 1usize << 14);
assert_eq!(filter.meta.nblocks, (filter.meta.nslots + 64 - 1) / 64);
assert_eq!(filter.meta.noccupied_slots, 0);
assert_eq!(filter.meta.nelements, 0);
assert_eq!(filter.meta.ndistinct_elements, 0);
assert_eq!(
filter.meta.max_slots,
((filter.meta.nslots as f64) * 0.95) as usize
);
}
#[test]
#[ignore]
fn get_count_nonexistent_item_returns_zero() {
let filter = RSQF::new(10000, 9);
assert_eq!(0, filter.get_count(Murmur3Hash::new(1)));
}
#[test]
fn get_q_and_r_returns_correct_results() {
let test_data = [
// (n, rbits, hash)
(30usize, 9usize, 0x0000_0000u128),
(30usize, 9usize, 0b0000_0001_1111_1111u128),
(30usize, 9usize, 0b1111_0001_1111_0000u128),
];
for (n, rbits, hash) in test_data.into_iter() {
let filter = RSQF::new(*n, *rbits);
println!(
"n={} qbits={} rbits={} hash={:x}",
n, filter.meta.qbits, rbits, hash
);
let hash = Murmur3Hash::new(*hash);
let (q, r) = filter.get_q_and_r(hash);
println!("q={:x}", q);
println!("r={:x}", r);
let rbitmask = u128::max_value() >> (128 - *rbits);
let qbitmask = u128::max_value() >> (128 - filter.meta.qbits);
//The lower rbits bits of the hash should be r
assert_eq!(hash.value128() & rbitmask, r as u128);
assert_eq!((hash.value128() >> rbits) & qbitmask, q as u128);
}
}
}
#[allow(dead_code)] // for now
#[allow(unused_variables)] // for now
impl Metadata {
/// Creates a metadata structure for the filter based on `n` the number of expected elements
/// and `rbits` which specifies the false positive rate at `1/(2^rbits - 1)`
fn from_n_and_r(n: usize, rbits: usize) -> Metadata {
assert!(block::SLOTS_PER_BLOCK == 64usize); //this code assumes 64 slots per block always
assert!(rbits as usize == block::BITS_PER_SLOT); //TODO: figure out how to make this configurable
let qbits = Metadata::calculate_qbits(n, rbits);
let total_slots = | #[derive(Default, PartialEq)] | random_line_split |
|
rsqf.rs | rbits))
}
/// Creates an instance of the filter given the description of the filter parameters stored in
/// a `Metadata` structure
fn from_metadata(meta: Metadata) -> RSQF {
let logical = logical::LogicalData::new(meta.nslots, meta.rbits);
return RSQF { meta, logical };
}
/// Queries the filter for the presence of `hash`.
///
/// If `hash` is not present, returns 0
/// If `hash` is likely to be present, returns an approximate count of the number of times
/// `hash` has been inserted. Note that this is approximate; it is possible that `hash` is
/// actually not present but a non-zero count is returned, with a probability no worse than
/// `2^-rbits`
pub fn get_count(&self, hash: Murmur3Hash) -> usize {
panic!("NYI");
}
/// Adds `count` to the total count for `hash` in the filter.
///
/// If `hash` is not present in the filter, it is added with `count` count.
/// If `hash` is present, `count` is added to its existing count.
///
/// As with the `query` method it is possible `hash` collides with another hash value
///
/// Returns the new total count of `hash` on success, or `Err` if the filter is already at max
/// capacity
pub fn add_count(&self, hash: Murmur3Hash, count: usize) -> FilterResult {
panic!("NYI");
}
/// Increments the count of `hash` by one.
///
/// If `hash` is not present, it's added with a count of one.
/// If `hash` is present, its existing count is incremented.
///
/// Returns the new total count of `hash` on success, or `Err` if the filter is already at max
/// capacity
pub fn inc_count(&self, hash: Murmur3Hash) -> FilterResult {
return self.add_count(hash, 1);
}
/// Subtracts `count` from the total count for `hash` in the filter.
///
/// If `hash` is not present in the filter, an error is returned
/// If `hash` is present, `count` is subtracted from the existing count. The resulting count
/// is returned. If subtracting `count` from the existing count results in a value less than
/// 1, a resulting count of 0 is returned and `hash` is removed from the filter.
///
/// As with the `query` method it is possible `hash` collides with another hash value
///
/// Returns the new total count of `hash` on success, which may be 0 in which case `hash` has
/// been removed from the filter, or an error if `hash` was not found in the filter
pub fn sub_count(&self, hash: Murmur3Hash, count: usize) -> FilterResult {
panic!("NYI");
}
pub fn dec_count(&self, hash: Murmur3Hash) -> FilterResult {
return self.sub_count(hash, 1);
}
/// Given a Murmur3 hash as input, extracts the quotient `q` and remainder `r` which will be
/// used to look up this item in the filter.
///
/// Though both values are of type `u64`, the number of bits used in each is based on the size
/// (`n`) and false-positive rate (`rbits`) specified when the filter was created
fn get_q_and_r(&self, hash: Murmur3Hash) -> (u64, u64) {
//Use only the 64-bit hash and pull out the bits we'll use for q and r
let hash = hash.value64();
// To compute the quotient q for this hash, shift right to remove the bits to be used as
// the remainder r, then mask out q bits
let q = (hash.wrapping_shr(self.meta.rbits as u32)) & bitmask!(self.meta.qbits);
let r = hash & bitmask!(self.meta.rbits as u32);
(q, r)
}
}
#[cfg(test)]
mod rsqf_tests {
use super::*;
use murmur::Murmur3Hash;
#[test]
fn creates_empty_filter() {
let _filter = RSQF::new(10000, 9);
}
#[test]
#[should_panic]
fn | () {
RSQF::new(10000, 8);
}
#[test]
fn computes_valid_metadata() {
let filter = RSQF::new(10000, 9);
assert_eq!(filter.meta.n, 10000);
assert_eq!(filter.meta.rbits, 9);
assert_eq!(filter.meta.qbits, 14);
assert_eq!(filter.meta.nslots, 1usize << 14);
assert_eq!(filter.meta.nblocks, (filter.meta.nslots + 64 - 1) / 64);
assert_eq!(filter.meta.noccupied_slots, 0);
assert_eq!(filter.meta.nelements, 0);
assert_eq!(filter.meta.ndistinct_elements, 0);
assert_eq!(
filter.meta.max_slots,
((filter.meta.nslots as f64) * 0.95) as usize
);
}
#[test]
#[ignore]
fn get_count_nonexistent_item_returns_zero() {
let filter = RSQF::new(10000, 9);
assert_eq!(0, filter.get_count(Murmur3Hash::new(1)));
}
#[test]
fn get_q_and_r_returns_correct_results() {
let test_data = [
// (n, rbits, hash)
(30usize, 9usize, 0x0000_0000u128),
(30usize, 9usize, 0b0000_0001_1111_1111u128),
(30usize, 9usize, 0b1111_0001_1111_0000u128),
];
for (n, rbits, hash) in test_data.into_iter() {
let filter = RSQF::new(*n, *rbits);
println!(
"n={} qbits={} rbits={} hash={:x}",
n, filter.meta.qbits, rbits, hash
);
let hash = Murmur3Hash::new(*hash);
let (q, r) = filter.get_q_and_r(hash);
println!("q={:x}", q);
println!("r={:x}", r);
let rbitmask = u128::max_value() >> (128 - *rbits);
let qbitmask = u128::max_value() >> (128 - filter.meta.qbits);
//The lower rbits bits of the hash should be r
assert_eq!(hash.value128() & rbitmask, r as u128);
assert_eq!((hash.value128() >> rbits) & qbitmask, q as u128);
}
}
}
#[allow(dead_code)] // for now
#[allow(unused_variables)] // for now
impl Metadata {
/// Creates a metadata structure for the filter based on `n` the number of expected elements
/// and `rbits` which specifies the false positive rate at `1/(2^rbits - 1)`
fn from_n_and_r(n: usize, rbits: usize) -> Metadata {
assert!(block::SLOTS_PER_BLOCK == 64usize); //this code assumes 64 slots per block always
assert!(rbits as usize == block::BITS_PER_SLOT); //TODO: figure out how to make this configurable
let qbits = Metadata::calculate_qbits(n, rbits);
let total_slots = 1usize << qbits; //2^qbits slots in the filter
let nblocks = (total_slots + block::SLOTS_PER_BLOCK - 1) / block::SLOTS_PER_BLOCK;
//Conservatively, set the maximum number of elements to 95% of the total capacity
//Realistically this structure can go higher than that but there starts to be a performance
//penalty and it's better to resize at that point
let max_slots = ((total_slots as f64) * 0.95) as usize;
return Metadata {
n,
rbits,
qbits,
nblocks,
max_slots,
nslots: total_slots,
..Default::default()
};
}
/// Given the insert count `n` and the remainder bits `rbits`, calculates the quotient size
/// `qbits` which will provide a false positive rate of no worse than `1/(2^rbits - 1)`
fn calculate_qbits(n: usize, rbits: usize) -> usize {
assert!(rbits > 1);
assert!( | panics_on_invalid_r | identifier_name |
rsqf.rs | 3Hash) -> FilterResult {
return self.add_count(hash, 1);
}
/// Subtracts `count` from the total count for `hash` in the filter.
///
/// If `hash` is not present in the filter, an error is returned
/// If `hash` is present, `count` is subtracted from the existing count. The resulting count
/// is returned. If subtracting `count` from the existing count results in a value less than
/// 1, a resulting count of 0 is returned and `hash` is removed from the filter.
///
/// As with the `query` method it is possible `hash` collides with another hash value
///
/// Returns the new total count of `hash` on success, which may be 0 in which case `hash` has
/// been removed from the filter, or an error if `hash` was not found in the filter
pub fn sub_count(&self, hash: Murmur3Hash, count: usize) -> FilterResult {
panic!("NYI");
}
pub fn dec_count(&self, hash: Murmur3Hash) -> FilterResult {
return self.sub_count(hash, 1);
}
/// Given a Murmur3 hash as input, extracts the quotient `q` and remainder `r` which will be
/// used to look up this item in the filter.
///
/// Though both values are of type `u64`, the number of bits used in each is based on the size
/// (`n`) and false-positive rate (`rbits`) specified when the filter was created
fn get_q_and_r(&self, hash: Murmur3Hash) -> (u64, u64) {
//Use only the 64-bit hash and pull out the bits we'll use for q and r
let hash = hash.value64();
// To compute the quotient q for this hash, shift right to remove the bits to be used as
// the remainder r, then mask out q bits
let q = (hash.wrapping_shr(self.meta.rbits as u32)) & bitmask!(self.meta.qbits);
let r = hash & bitmask!(self.meta.rbits as u32);
(q, r)
}
}
#[cfg(test)]
mod rsqf_tests {
use super::*;
use murmur::Murmur3Hash;
#[test]
fn creates_empty_filter() {
let _filter = RSQF::new(10000, 9);
}
#[test]
#[should_panic]
fn panics_on_invalid_r() {
RSQF::new(10000, 8);
}
#[test]
fn computes_valid_metadata() {
let filter = RSQF::new(10000, 9);
assert_eq!(filter.meta.n, 10000);
assert_eq!(filter.meta.rbits, 9);
assert_eq!(filter.meta.qbits, 14);
assert_eq!(filter.meta.nslots, 1usize << 14);
assert_eq!(filter.meta.nblocks, (filter.meta.nslots + 64 - 1) / 64);
assert_eq!(filter.meta.noccupied_slots, 0);
assert_eq!(filter.meta.nelements, 0);
assert_eq!(filter.meta.ndistinct_elements, 0);
assert_eq!(
filter.meta.max_slots,
((filter.meta.nslots as f64) * 0.95) as usize
);
}
#[test]
#[ignore]
fn get_count_nonexistent_item_returns_zero() {
let filter = RSQF::new(10000, 9);
assert_eq!(0, filter.get_count(Murmur3Hash::new(1)));
}
#[test]
fn get_q_and_r_returns_correct_results() {
let test_data = [
// (n, rbits, hash)
(30usize, 9usize, 0x0000_0000u128),
(30usize, 9usize, 0b0000_0001_1111_1111u128),
(30usize, 9usize, 0b1111_0001_1111_0000u128),
];
for (n, rbits, hash) in test_data.into_iter() {
let filter = RSQF::new(*n, *rbits);
println!(
"n={} qbits={} rbits={} hash={:x}",
n, filter.meta.qbits, rbits, hash
);
let hash = Murmur3Hash::new(*hash);
let (q, r) = filter.get_q_and_r(hash);
println!("q={:x}", q);
println!("r={:x}", r);
let rbitmask = u128::max_value() >> (128 - *rbits);
let qbitmask = u128::max_value() >> (128 - filter.meta.qbits);
//The lower rbits bits of the hash should be r
assert_eq!(hash.value128() & rbitmask, r as u128);
assert_eq!((hash.value128() >> rbits) & qbitmask, q as u128);
}
}
}
#[allow(dead_code)] // for now
#[allow(unused_variables)] // for now
impl Metadata {
/// Creates a metadata structure for the filter based on `n` the number of expected elements
/// and `rbits` which specifies the false positive rate at `1/(2^rbits - 1)`
fn from_n_and_r(n: usize, rbits: usize) -> Metadata {
assert!(block::SLOTS_PER_BLOCK == 64usize); //this code assumes 64 slots per block always
assert!(rbits as usize == block::BITS_PER_SLOT); //TODO: figure out how to make this configurable
let qbits = Metadata::calculate_qbits(n, rbits);
let total_slots = 1usize << qbits; //2^qbits slots in the filter
let nblocks = (total_slots + block::SLOTS_PER_BLOCK - 1) / block::SLOTS_PER_BLOCK;
//Conservatively, set the maximum number of elements to 95% of the total capacity
//Realistically this structure can go higher than that but there starts to be a performance
//penalty and it's better to resize at that point
let max_slots = ((total_slots as f64) * 0.95) as usize;
return Metadata {
n,
rbits,
qbits,
nblocks,
max_slots,
nslots: total_slots,
..Default::default()
};
}
/// Given the insert count `n` and the remainder bits `rbits`, calculates the quotient size
/// `qbits` which will provide a false positive rate of no worse than `1/(2^rbits - 1)`
fn calculate_qbits(n: usize, rbits: usize) -> usize {
assert!(rbits > 1);
assert!(n > 0);
let sigma = 2.0f64.powi(-(rbits as i32));
let p = ((n as f64) / sigma).log2().ceil() as usize;
assert!(p > rbits);
let qbits = p - rbits;
qbits
}
}
#[cfg(test)]
mod metadata_tests {
use super::*;
#[test]
#[should_panic]
fn panics_on_invalid_rbits() {
Metadata::from_n_and_r(10000, 8);
}
#[test]
fn computes_valid_q_for_n_and_r() | {
// Test data data values were computed from a Google Sheet using formulae from the RSQF
// paper
let test_data = [
// (n, r, expected_q)
(100_000_usize, 6_usize, 17),
(1_000_000_usize, 6_usize, 20),
(10_000_000_usize, 6_usize, 24),
(100_000_usize, 8_usize, 17),
(1_000_000_usize, 8_usize, 20),
(10_000_000_usize, 8_usize, 24),
(100_000_usize, 9_usize, 17),
(1_000_000_usize, 9_usize, 20),
(10_000_000_usize, 9_usize, 24),
];
for (n, r, expected_qbits) in test_data.into_iter() {
let q = Metadata::calculate_qbits(*n, *r);
assert_eq!(*expected_qbits, q, "n={} r={}", *n, *r);
} | identifier_body |
|
chekcPermission.go | .Userid)
} else {
log.Debugf("success GetUserByUserid(%d), user:[%+v]", input.Userid, user)
}
args := map[string]interface{}{}
if input.Args == "" {
//log.Debugf("Not args input")
} else if err := json.Unmarshal([]byte(input.Args), &args); err == nil {
//log.Debugf("args input is json")
} else if values, err := url.ParseQuery(input.Args); err == nil {
//log.Debugf("args input is querystring")
for k, varray := range values {
if varray != nil {
args[k] = varray[0]
}
}
}
//Step 4. 获取用户角色树
roletree, err := GetUserRoleTreeFromDb(c.Mysql(), input.Userid)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取用户角色树错误")
}
log.PrintPreety("roletree", roletree)
//Step 5. 获取所有权限
privilegeids := GetPrivilegeIds(roletree)
if len(privilegeids) == 0 {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "用户无任何权限")
}
privileges, err := t_rbac_privilege.FindPrivilegeByIds(c.Mysql(), privilegeids)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取权限列表失败")
}
//Step 6. 判断用户有权限
pass := false
for _, privilege := range privileges {
if privilege.F_uri == input.Uri {
if ok, err := CheckPrivilege(privilege.F_expression, user, args); ok && err == nil {
pass = true
}
}
}
//Step 4. set output
if pass {
output.Code = 0
output.Msg = "success"
} else {
output.Code = ERR_PERMISSION_DENIED
output.Msg = "Permission denied"
}
return c.RESULT(output)
}
type NodeId struct {
Id uint64
Type int
}
type NodeInfo struct {
Id NodeId
Name string
Parents RoleTree
Children RoleTree
}
type RoleTree map[NodeId]*NodeInfo
func (roletree RoleTree) String() string {
ret := "roletree:\n"
for id, rolemap := range roletree {
ret += fmt.Sprintf("== %s->%s,parents:%d,children:%d\n",
id.String(), rolemap.Name, len(rolemap.Parents), len(rolemap.Children))
}
ret += "=="
return ret
}
func (nodeid *NodeId) String() string {
if nodeid.Type == 1 {
return fmt.Sprintf("ROLE%d", nodeid.Id)
} else {
return fmt.Sprintf("PRIVILEGE%d", nodeid.Id)
}
}
func GetUserRoleTreeFromDb(db *gorm.DB, userid uint64) (roletree RoleTree, err error) {
//Step 1. 获取用户所有角色ID
roleids := []uint64{}
roles := map[uint64]interface{}{}
userroles, err := t_rbac_user_role.FindUserRoles(db, userid)
for _, userrole := range userroles {
if id, ok := roles[userrole.F_role_id]; ok {
log.Warningf("User role %d duplicated", id)
} else {
roleids = append(roleids, userrole.F_role_id)
roles[userrole.F_role_id] = userrole.F_role_id
}
}
//Step 2. 获取子树
roletree = GetRoleTreeByRoleIds(db, roleids)
return
}
var roleCache = map[NodeId]t_rbac_role.Role{}
var privCache = map[NodeId]t_rbac_privilege.Privilege{}
func getName(db *gorm.DB, nid NodeId) string {
if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if v, ok := roleCache[nid]; ok {
return v.F_name
} else if roles, err := t_rbac_role.FindRolesByIds(db, []uint64{nid.Id}); err == nil && len(roles) == 1 {
roleCache[nid] = roles[0]
return roles[0].F_name
} else {
return "ROLE_NOT_EXIST"
}
} else if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if v, ok := privCache[nid]; ok {
return v.F_name
} else if privileges, err := t_rbac_privilege.FindPrivilegeByIds(db, []uint64{nid.Id}); err == nil && len(privileges) == 1 {
privCache[nid] = privileges[0]
return privileges[0].F_name
} else {
return "PRIVILEGE_NOT_EXIST"
}
}
return "UNKOWN_TYPE_ERROR"
}
func GetRoleTreeByRoleIds(db *gorm.DB, roleids []uint64) (roletree RoleTree) {
roletree = RoleTree{}
roles := map[uint64]interface{}{}
for _, rid := range roleids {
roles[rid] = rid
}
for len(roleids) != 0 {
rolemaps, _ := t_rbac_role_map.FindChildrenMap(db, roleids)
roleids = []uint64{}
for _, rolemap := range rolemaps {
//父节点ID
pnid := NodeId{rolemap.F_role_id, t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE}
//子节点ID
cnid := NodeId{rolemap.F_target_id, rolemap.F_target_type}
//父节点不在树内, 新建一个顶级节点
if _, ok := roletree[pnid]; !ok {
roletree[pnid] = &NodeInfo{
Id: pnid,
Name: getName(db, pnid),
Children: RoleTree{},
Parents: RoleTree{},
}
}
//添加子节点
if child, ok := roletree[cnid]; ok {
//子节点已存在,直接添加
roletree[pnid].Children[cnid] = child
roletree[cnid].Parents[pnid] = roletree[pnid]
} else {
//子节点不存在,新建添加
roletree[cnid] = &NodeInfo{
Id: cnid,
Name: getName(db, cnid),
Children: RoleTree{},
Parents: RoleTree{},
}
roletree[pnid].Children[cnid] = roletree[cnid]
roletree[cnid].Parents[pnid] = roletree[pnid]
}
//获取下一层级的节点信息
if rolemap.F_target_type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
target_id := rolemap.F_target_id
if id, ok := roles[target_id]; ok {
log.Debugf("role %d cached", id)
} else {
roleids = append(roleids, target_id)
roles[target_id] = target_id
}
}
}
}
return
}
func GetPrivilegeIds(roletree RoleTree) (privilegeids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("privilege %d cached", node.Id.Id)
} else {
privilegeids = append(privilegeids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func GetRoleIds(roletree RoleTree) (roleids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("role %d cached", node.Id.Id)
} else {
roleids | ogicexp)
if err != nil {
return false, err
}
//step 2. get all token value
for token, value := range args {
tokens[token] = value
}
for token, _ := range tokens {
switch token {
case "alluser":
tokens["alluser"] = true
case "inneruser":
inneruser, err := EnvIsInnerUser(user)
if err != nil {
log.Panicf("check inneruser failed:% | = append(roleids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func CheckPrivilege(logicexp string, user t_user.User, args map[string]interface{}) (ok bool, err error) {
if len(logicexp) == 0 {
return true, nil
}
//Step 1. get need tokens
tokens, expression, err := GetLogicexpTokensAndExpression(l | identifier_body |
chekcPermission.go | .Userid)
} else {
log.Debugf("success GetUserByUserid(%d), user:[%+v]", input.Userid, user)
}
args := map[string]interface{}{}
if input.Args == "" {
//log.Debugf("Not args input")
} else if err := json.Unmarshal([]byte(input.Args), &args); err == nil {
//log.Debugf("args input is json")
} else if values, err := url.ParseQuery(input.Args); err == nil {
//log.De | 获取用户角色树
roletree, err := GetUserRoleTreeFromDb(c.Mysql(), input.Userid)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取用户角色树错误")
}
log.PrintPreety("roletree", roletree)
//Step 5. 获取所有权限
privilegeids := GetPrivilegeIds(roletree)
if len(privilegeids) == 0 {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "用户无任何权限")
}
privileges, err := t_rbac_privilege.FindPrivilegeByIds(c.Mysql(), privilegeids)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取权限列表失败")
}
//Step 6. 判断用户有权限
pass := false
for _, privilege := range privileges {
if privilege.F_uri == input.Uri {
if ok, err := CheckPrivilege(privilege.F_expression, user, args); ok && err == nil {
pass = true
}
}
}
//Step 4. set output
if pass {
output.Code = 0
output.Msg = "success"
} else {
output.Code = ERR_PERMISSION_DENIED
output.Msg = "Permission denied"
}
return c.RESULT(output)
}
type NodeId struct {
Id uint64
Type int
}
type NodeInfo struct {
Id NodeId
Name string
Parents RoleTree
Children RoleTree
}
type RoleTree map[NodeId]*NodeInfo
func (roletree RoleTree) String() string {
ret := "roletree:\n"
for id, rolemap := range roletree {
ret += fmt.Sprintf("== %s->%s,parents:%d,children:%d\n",
id.String(), rolemap.Name, len(rolemap.Parents), len(rolemap.Children))
}
ret += "=="
return ret
}
func (nodeid *NodeId) String() string {
if nodeid.Type == 1 {
return fmt.Sprintf("ROLE%d", nodeid.Id)
} else {
return fmt.Sprintf("PRIVILEGE%d", nodeid.Id)
}
}
func GetUserRoleTreeFromDb(db *gorm.DB, userid uint64) (roletree RoleTree, err error) {
//Step 1. 获取用户所有角色ID
roleids := []uint64{}
roles := map[uint64]interface{}{}
userroles, err := t_rbac_user_role.FindUserRoles(db, userid)
for _, userrole := range userroles {
if id, ok := roles[userrole.F_role_id]; ok {
log.Warningf("User role %d duplicated", id)
} else {
roleids = append(roleids, userrole.F_role_id)
roles[userrole.F_role_id] = userrole.F_role_id
}
}
//Step 2. 获取子树
roletree = GetRoleTreeByRoleIds(db, roleids)
return
}
var roleCache = map[NodeId]t_rbac_role.Role{}
var privCache = map[NodeId]t_rbac_privilege.Privilege{}
func getName(db *gorm.DB, nid NodeId) string {
if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if v, ok := roleCache[nid]; ok {
return v.F_name
} else if roles, err := t_rbac_role.FindRolesByIds(db, []uint64{nid.Id}); err == nil && len(roles) == 1 {
roleCache[nid] = roles[0]
return roles[0].F_name
} else {
return "ROLE_NOT_EXIST"
}
} else if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if v, ok := privCache[nid]; ok {
return v.F_name
} else if privileges, err := t_rbac_privilege.FindPrivilegeByIds(db, []uint64{nid.Id}); err == nil && len(privileges) == 1 {
privCache[nid] = privileges[0]
return privileges[0].F_name
} else {
return "PRIVILEGE_NOT_EXIST"
}
}
return "UNKOWN_TYPE_ERROR"
}
func GetRoleTreeByRoleIds(db *gorm.DB, roleids []uint64) (roletree RoleTree) {
roletree = RoleTree{}
roles := map[uint64]interface{}{}
for _, rid := range roleids {
roles[rid] = rid
}
for len(roleids) != 0 {
rolemaps, _ := t_rbac_role_map.FindChildrenMap(db, roleids)
roleids = []uint64{}
for _, rolemap := range rolemaps {
//父节点ID
pnid := NodeId{rolemap.F_role_id, t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE}
//子节点ID
cnid := NodeId{rolemap.F_target_id, rolemap.F_target_type}
//父节点不在树内, 新建一个顶级节点
if _, ok := roletree[pnid]; !ok {
roletree[pnid] = &NodeInfo{
Id: pnid,
Name: getName(db, pnid),
Children: RoleTree{},
Parents: RoleTree{},
}
}
//添加子节点
if child, ok := roletree[cnid]; ok {
//子节点已存在,直接添加
roletree[pnid].Children[cnid] = child
roletree[cnid].Parents[pnid] = roletree[pnid]
} else {
//子节点不存在,新建添加
roletree[cnid] = &NodeInfo{
Id: cnid,
Name: getName(db, cnid),
Children: RoleTree{},
Parents: RoleTree{},
}
roletree[pnid].Children[cnid] = roletree[cnid]
roletree[cnid].Parents[pnid] = roletree[pnid]
}
//获取下一层级的节点信息
if rolemap.F_target_type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
target_id := rolemap.F_target_id
if id, ok := roles[target_id]; ok {
log.Debugf("role %d cached", id)
} else {
roleids = append(roleids, target_id)
roles[target_id] = target_id
}
}
}
}
return
}
func GetPrivilegeIds(roletree RoleTree) (privilegeids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("privilege %d cached", node.Id.Id)
} else {
privilegeids = append(privilegeids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func GetRoleIds(roletree RoleTree) (roleids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("role %d cached", node.Id.Id)
} else {
roleids = append(roleids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func CheckPrivilege(logicexp string, user t_user.User, args map[string]interface{}) (ok bool, err error) {
if len(logicexp) == 0 {
return true, nil
}
//Step 1. get need tokens
tokens, expression, err := GetLogicexpTokensAndExpression(logicexp)
if err != nil {
return false, err
}
//step 2. get all token value
for token, value := range args {
tokens[token] = value
}
for token, _ := range tokens {
switch token {
case "alluser":
tokens["alluser"] = true
case "inneruser":
inneruser, err := EnvIsInnerUser(user)
if err != nil {
log.Panicf("check inneruser failed:% | bugf("args input is querystring")
for k, varray := range values {
if varray != nil {
args[k] = varray[0]
}
}
}
//Step 4. | conditional_block |
chekcPermission.go | .RESULT_ERROR(ERR_PERMISSION_DENIED, "用户无任何权限")
}
privileges, err := t_rbac_privilege.FindPrivilegeByIds(c.Mysql(), privilegeids)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取权限列表失败")
}
//Step 6. 判断用户有权限
pass := false
for _, privilege := range privileges {
if privilege.F_uri == input.Uri {
if ok, err := CheckPrivilege(privilege.F_expression, user, args); ok && err == nil {
pass = true
}
}
}
//Step 4. set output
if pass {
output.Code = 0
output.Msg = "success"
} else {
output.Code = ERR_PERMISSION_DENIED
output.Msg = "Permission denied"
}
return c.RESULT(output)
}
type NodeId struct {
Id uint64
Type int
}
type NodeInfo struct {
Id NodeId
Name string
Parents RoleTree
Children RoleTree
}
type RoleTree map[NodeId]*NodeInfo
func (roletree RoleTree) String() string {
ret := "roletree:\n"
for id, rolemap := range roletree {
ret += fmt.Sprintf("== %s->%s,parents:%d,children:%d\n",
id.String(), rolemap.Name, len(rolemap.Parents), len(rolemap.Children))
}
ret += "=="
return ret
}
func (nodeid *NodeId) String() string {
if nodeid.Type == 1 {
return fmt.Sprintf("ROLE%d", nodeid.Id)
} else {
return fmt.Sprintf("PRIVILEGE%d", nodeid.Id)
}
}
func GetUserRoleTreeFromDb(db *gorm.DB, userid uint64) (roletree RoleTree, err error) {
//Step 1. 获取用户所有角色ID
roleids := []uint64{}
roles := map[uint64]interface{}{}
userroles, err := t_rbac_user_role.FindUserRoles(db, userid)
for _, userrole := range userroles {
if id, ok := roles[userrole.F_role_id]; ok {
log.Warningf("User role %d duplicated", id)
} else {
roleids = append(roleids, userrole.F_role_id)
roles[userrole.F_role_id] = userrole.F_role_id
}
}
//Step 2. 获取子树
roletree = GetRoleTreeByRoleIds(db, roleids)
return
}
var roleCache = map[NodeId]t_rbac_role.Role{}
var privCache = map[NodeId]t_rbac_privilege.Privilege{}
func getName(db *gorm.DB, nid NodeId) string {
if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if v, ok := roleCache[nid]; ok {
return v.F_name
} else if roles, err := t_rbac_role.FindRolesByIds(db, []uint64{nid.Id}); err == nil && len(roles) == 1 {
roleCache[nid] = roles[0]
return roles[0].F_name
} else {
return "ROLE_NOT_EXIST"
}
} else if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if v, ok := privCache[nid]; ok {
return v.F_name
} else if privileges, err := t_rbac_privilege.FindPrivilegeByIds(db, []uint64{nid.Id}); err == nil && len(privileges) == 1 {
privCache[nid] = privileges[0]
return privileges[0].F_name
} else {
return "PRIVILEGE_NOT_EXIST"
}
}
return "UNKOWN_TYPE_ERROR"
}
func GetRoleTreeByRoleIds(db *gorm.DB, roleids []uint64) (roletree RoleTree) {
roletree = RoleTree{}
roles := map[uint64]interface{}{}
for _, rid := range roleids {
roles[rid] = rid
}
for len(roleids) != 0 {
rolemaps, _ := t_rbac_role_map.FindChildrenMap(db, roleids)
roleids = []uint64{}
for _, rolemap := range rolemaps {
//父节点ID
pnid := NodeId{rolemap.F_role_id, t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE}
//子节点ID
cnid := NodeId{rolemap.F_target_id, rolemap.F_target_type}
//父节点不在树内, 新建一个顶级节点
if _, ok := roletree[pnid]; !ok {
roletree[pnid] = &NodeInfo{
Id: pnid,
Name: getName(db, pnid),
Children: RoleTree{},
Parents: RoleTree{},
}
}
//添加子节点
if child, ok := roletree[cnid]; ok {
//子节点已存在,直接添加
roletree[pnid].Children[cnid] = child
roletree[cnid].Parents[pnid] = roletree[pnid]
} else {
//子节点不存在,新建添加
roletree[cnid] = &NodeInfo{
Id: cnid,
Name: getName(db, cnid),
Children: RoleTree{},
Parents: RoleTree{},
}
roletree[pnid].Children[cnid] = roletree[cnid]
roletree[cnid].Parents[pnid] = roletree[pnid]
}
//获取下一层级的节点信息
if rolemap.F_target_type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
target_id := rolemap.F_target_id
if id, ok := roles[target_id]; ok {
log.Debugf("role %d cached", id)
} else {
roleids = append(roleids, target_id)
roles[target_id] = target_id
}
}
}
}
return
}
func GetPrivilegeIds(roletree RoleTree) (privilegeids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("privilege %d cached", node.Id.Id)
} else {
privilegeids = append(privilegeids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func GetRoleIds(roletree RoleTree) (roleids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("role %d cached", node.Id.Id)
} else {
roleids = append(roleids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func CheckPrivilege(logicexp string, user t_user.User, args map[string]interface{}) (ok bool, err error) {
if len(logicexp) == 0 {
return true, nil
}
//Step 1. get need tokens
tokens, expression, err := GetLogicexpTokensAndExpression(logicexp)
if err != nil {
return false, err
}
//step 2. get all token value
for token, value := range args {
tokens[token] = value
}
for token, _ := range tokens {
switch token {
case "alluser":
tokens["alluser"] = true
case "inneruser":
inneruser, err := EnvIsInnerUser(user)
if err != nil {
log.Panicf("check inneruser failed:%s, treat as not inneruser", err.Error())
}
tokens["inneruser"] = inneruser
case "linuxpamuser":
ispamuser, err := EnvIsPamUser(user)
if err != nil {
log.Panicf("check inneruser failed:%s, treat as not inneruser", err.Error())
}
tokens["linuxpamuser"] = ispamuser
default:
log.Panicf("not supported token[%s]", token)
}
}
log.PrintPreety("tokens:", tokens)
//step 3. valuate
result, err := expression.Evaluate(tokens)
if err != nil {
log.Panicf("CheckScenePrivilege Evaluate(%+v) failed:%s", tokens, err.Error())
return false, err
}
return result == true, err
}
func GetLogicexpTokensAndExpression(logicexp string) (tokens map[string]interface{}, expr *govaluate.EvaluableExpression, err error) {
//Step 1. get need tokens
tokens = map[string]interface{}{}
expression, err := govaluate.NewEvaluableExpression | (logicexp)
if err != nil {
| identifier_name |
|
chekcPermission.go | input.Userid)
} else {
log.Debugf("success GetUserByUserid(%d), user:[%+v]", input.Userid, user)
}
args := map[string]interface{}{}
if input.Args == "" {
//log.Debugf("Not args input")
} else if err := json.Unmarshal([]byte(input.Args), &args); err == nil {
//log.Debugf("args input is json")
} else if values, err := url.ParseQuery(input.Args); err == nil {
//log.Debugf("args input is querystring")
for k, varray := range values {
if varray != nil {
args[k] = varray[0]
}
}
}
//Step 4. 获取用户角色树
roletree, err := GetUserRoleTreeFromDb(c.Mysql(), input.Userid)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取用户角色树错误")
}
log.PrintPreety("roletree", roletree)
//Step 5. 获取所有权限
privilegeids := GetPrivilegeIds(roletree)
if len(privilegeids) == 0 {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "用户无任何权限")
}
privileges, err := t_rbac_privilege.FindPrivilegeByIds(c.Mysql(), privilegeids)
if err != nil {
return c.RESULT_ERROR(ERR_PERMISSION_DENIED, "获取权限列表失败")
}
//Step 6. 判断用户有权限
pass := false
for _, privilege := range privileges {
if privilege.F_uri == input.Uri {
if ok, err := CheckPrivilege(privilege.F_expression, user, args); ok && err == nil {
pass = true
}
}
}
//Step 4. set output
if pass {
output.Code = 0
output.Msg = "success"
} else {
output.Code = ERR_PERMISSION_DENIED
output.Msg = "Permission denied"
}
return c.RESULT(output)
}
type NodeId struct {
Id uint64
Type int
}
type NodeInfo struct {
Id NodeId
Name string
Parents RoleTree
Children RoleTree
}
type RoleTree map[NodeId]*NodeInfo
func (roletree RoleTree) String() string {
ret := "roletree:\n"
for id, rolemap := range roletree {
ret += fmt.Sprintf("== %s->%s,parents:%d,children:%d\n",
id.String(), rolemap.Name, len(rolemap.Parents), len(rolemap.Children))
}
ret += "=="
return ret
}
func (nodeid *NodeId) String() string {
if nodeid.Type == 1 {
return fmt.Sprintf("ROLE%d", nodeid.Id)
} else {
return fmt.Sprintf("PRIVILEGE%d", nodeid.Id)
}
}
func GetUserRoleTreeFromDb(db *gorm.DB, userid uint64) (roletree RoleTree, err error) {
//Step 1. 获取用户所有角色ID
roleids := []uint64{}
roles := map[uint64]interface{}{}
userroles, err := t_rbac_user_role.FindUserRoles(db, userid)
for _, userrole := range userroles {
if id, ok := roles[userrole.F_role_id]; ok {
log.Warningf("User role %d duplicated", id)
} else {
roleids = append(roleids, userrole.F_role_id)
roles[userrole.F_role_id] = userrole.F_role_id
}
}
//Step 2. 获取子树
roletree = GetRoleTreeByRoleIds(db, roleids)
return
}
var roleCache = map[NodeId]t_rbac_role.Role{}
var privCache = map[NodeId]t_rbac_privilege.Privilege{}
func getName(db *gorm.DB, nid NodeId) string {
if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if v, ok := roleCache[nid]; ok {
return v.F_name
} else if roles, err := t_rbac_role.FindRolesByIds(db, []uint64{nid.Id}); err == nil && len(roles) == 1 {
roleCache[nid] = roles[0]
return roles[0].F_name
} else {
return "ROLE_NOT_EXIST"
}
} else if nid.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if v, ok := privCache[nid]; ok {
return v.F_name
} else if privileges, err := t_rbac_privilege.FindPrivilegeByIds(db, []uint64{nid.Id}); err == nil && len(privileges) == 1 {
privCache[nid] = privileges[0]
return privileges[0].F_name
} else {
return "PRIVILEGE_NOT_EXIST"
}
}
return "UNKOWN_TYPE_ERROR"
}
func GetRoleTreeByRoleIds(db *gorm.DB, roleids []uint64) (roletree RoleTree) {
roletree = RoleTree{}
roles := map[uint64]interface{}{}
for _, rid := range roleids {
roles[rid] = rid
}
for len(roleids) != 0 {
rolemaps, _ := t_rbac_role_map.FindChildrenMap(db, roleids)
roleids = []uint64{}
for _, rolemap := range rolemaps {
//父节点ID
pnid := NodeId{rolemap.F_role_id, t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE}
//子节点ID
cnid := NodeId{rolemap.F_target_id, rolemap.F_target_type}
//父节点不在树内, 新建一个顶级节点
if _, ok := roletree[pnid]; !ok {
roletree[pnid] = &NodeInfo{
Id: pnid,
Name: getName(db, pnid),
Children: RoleTree{},
Parents: RoleTree{},
}
}
//添加子节点
if child, ok := roletree[cnid]; ok {
//子节点已存在,直接添加
roletree[pnid].Children[cnid] = child
roletree[cnid].Parents[pnid] = roletree[pnid] | //子节点不存在,新建添加
roletree[cnid] = &NodeInfo{
Id: cnid,
Name: getName(db, cnid),
Children: RoleTree{},
Parents: RoleTree{},
}
roletree[pnid].Children[cnid] = roletree[cnid]
roletree[cnid].Parents[pnid] = roletree[pnid]
}
//获取下一层级的节点信息
if rolemap.F_target_type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
target_id := rolemap.F_target_id
if id, ok := roles[target_id]; ok {
log.Debugf("role %d cached", id)
} else {
roleids = append(roleids, target_id)
roles[target_id] = target_id
}
}
}
}
return
}
func GetPrivilegeIds(roletree RoleTree) (privilegeids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_PRIVILEGE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("privilege %d cached", node.Id.Id)
} else {
privilegeids = append(privilegeids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func GetRoleIds(roletree RoleTree) (roleids []uint64) {
cached := map[uint64]interface{}{}
for _, node := range roletree {
if node.Id.Type == t_rbac_role_map.TARGET_TYPE_ROLE_TO_ROLE {
if _, ok := cached[node.Id.Id]; ok {
log.Debugf("role %d cached", node.Id.Id)
} else {
roleids = append(roleids, node.Id.Id)
cached[node.Id.Id] = node.Id.Id
}
}
}
return
}
func CheckPrivilege(logicexp string, user t_user.User, args map[string]interface{}) (ok bool, err error) {
if len(logicexp) == 0 {
return true, nil
}
//Step 1. get need tokens
tokens, expression, err := GetLogicexpTokensAndExpression(logicexp)
if err != nil {
return false, err
}
//step 2. get all token value
for token, value := range args {
tokens[token] = value
}
for token, _ := range tokens {
switch token {
case "alluser":
tokens["alluser"] = true
case "inneruser":
inneruser, err := EnvIsInnerUser(user)
if err != nil {
log.Panicf("check inneruser failed:%s | } else { | random_line_split |
myds_retrain.py | _argument(
'-d',
'--data_path',
help='path to HDF5 file containing own dataset',
default='data/phaseI-dataset.hdf5')
argparser.add_argument(
'-a',
'--anchors_path',
help='path to anchors file, defaults to yolo_anchors.txt',
default='model_data/yolo_anchors.txt')
argparser.add_argument(
'-c',
'--classes_path',
help='path to classes file, defaults to labels.txt',
default='model_data/labels.txt')
def _main(args):
data_path = os.path.expanduser(args.data_path)
classes_path = os.path.expanduser(args.classes_path)
anchors_path = os.path.expanduser(args.anchors_path)
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
if os.path.isfile(anchors_path):
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
anchors = np.array(anchors).reshape(-1, 2)
else:
anchors = YOLO_ANCHORS
data = h5py.File(data_path, 'r')
#Pre-processing data
boxes_list, image_data_list = get_preprocessed_data(data)
detectors_mask, matching_true_boxes = get_detector_mask(boxes_list, anchors)
#Create model
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=False)
#train model
train(model, class_names, anchors, image_data_list, boxes_list, detectors_mask, matching_true_boxes)
draw(model_body, class_names, anchors, image_data_list, image_set='val', # assumes training/validation split is 0.9
weights_name='trained_stage_3_best.h5',
save_all=False)
def get_preprocessed_data(data):
| image = image.resize((416,416), PIL.Image.BICUBIC)
image_data = np.array(image, dtype=np.float)
image_data /= 255.0
image_data.resize((image_data.shape[0], image_data.shape[1], 1))
image_data = np.repeat(image_data, 3, 2)
image_list.append(image)
image_data_list.append(image_data)
#Box preprocessing
boxes = processed_box_data[i]
#Get box parameters as x_center, y_center, box_width, box_height, class
boxes_xy = 0.5 * (boxes[:, 3:5] + boxes[:, 1:3])
boxes_wh = boxes[:, 3:5] - boxes[:, 1:3]
boxes_xy = boxes_xy / orig_size
boxes_wh = boxes_wh / orig_size
boxes = np.concatenate((boxes_xy, boxes_wh, boxes[:, 0:1]), axis=1)
boxes_list.append(boxes)
boxes_list = np.array(boxes_list, float)
image_data_list = np.array(image_data_list, dtype=np.float)
return np.array(boxes_list, float), np.array(image_data_list, dtype=np.float)
def boxprocessing(box_data):
#function assumes there are at most 4 bounding boxes per image
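# Editor's note (assumption): each image's flattened box list is zero-padded and truncated to
# 20 values, i.e. 4 boxes x 5 numbers per box (class, x_min, y_min, x_max, y_max, judging from
# how get_preprocessed_data slices columns 0, 1:3 and 3:5).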
processed_box_data = []
processed_box_data = np.array(processed_box_data)
for i in range(len(box_data)):
z = np.zeros([1,20]) #change here, multiple of 5 - for more bbox
y = np.append(box_data[i], z)
y = y[0:20] # also here
processed_box_data = np.append(processed_box_data, y)
return processed_box_data
def get_detector_mask(boxes_list, anchors):
'''
Precompute detectors_mask and matching_true_boxes for training.
Detectors mask is 1 for each spatial position in the final conv layer and
anchor that should be active for the given boxes and 0 otherwise.
Matching true boxes gives the regression targets for the ground truth box
that caused a detector to be active or 0 otherwise.
'''
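# Editor's note (assumption): with the 416x416 inputs and 5 anchors used in this script,
# preprocess_true_boxes is expected to return detectors_mask with shape (13, 13, 5, 1) and
# matching_true_boxes with shape (13, 13, 5, 5), matching the placeholder shapes declared
# in create_model below.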
detectors_mask = [0 for i in range(len(boxes_list))]
matching_true_boxes = [0 for i in range(len(boxes_list))]
for i, box in enumerate(boxes_list):
detectors_mask[i], matching_true_boxes[i] = preprocess_true_boxes(box, anchors, [416, 416])
return np.array(detectors_mask), np.array(matching_true_boxes)
def create_model(anchors, class_names, load_pretrained=True, freeze_body=True):
detectors_mask_shape = (13, 13, 5, 1)
matching_boxes_shape = (13, 13, 5, 5)
#Create model input layers
image_input = Input(shape=(416,416, 3))
boxes_input = Input(shape=(None, 5))
detectors_mask_input = Input(shape=detectors_mask_shape)
matching_boxes_input = Input(shape=matching_boxes_shape)
#Create model body
yolo_model = yolo_body(image_input,len(anchors),len(class_names))
topless_yolo = Model(yolo_model.input, yolo_model.layers[-2].output)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
if load_pretrained:
# Save topless yolo:
topless_yolo_path = os.path.join('model_data', 'yolo_topless.h5')
if not os.path.exists(topless_yolo_path):
print("CREATING TOPLESS WEIGHTS FILE")
yolo_path = os.path.join('model_data', 'yolo.h5')
model_body = load_model(yolo_path)
model_body = Model(model_body.inputs, model_body.layers[-2].output)
model_body.save_weights(topless_yolo_path)
topless_yolo.load_weights(topless_yolo_path)
if freeze_body:
for layer in topless_yolo.layers:
layer.trainable = False
final_layer = Conv2D(len(anchors)*(5+len(class_names)), (1, 1), activation='linear')(topless_yolo.output)
model_body = Model(image_input, final_layer)
#model_body = Model(image_input, model_body.output)
with tf.device('/cpu:0'):
model_loss = Lambda(
yolo_loss,
output_shape=(1,),
name='yolo_loss',
arguments={'anchors': anchors,'num_classes': len(class_names)})([
model_body.output, boxes_input,
detectors_mask_input, matching_boxes_input])
model = Model(
[model_body.input, boxes_input, detectors_mask_input,
matching_boxes_input], model_loss)
model.summary()
return model_body, model
def train(model, class_names, anchors, image_data, boxes, detectors_mask, matching_true_boxes, validation_split=0.1):
'''
retrain/fine-tune the model
logs training with tensorboard
saves training weights in current directory
best weights according to val_loss is saved as trained_stage_3_best.h5
'''
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
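# Editor's note: the Lambda layer named 'yolo_loss' already computes the loss as the model's
# only output, so this pass-through "loss" simply returns that output; the np.zeros(...) arrays
# passed as y_true in the model.fit() calls below are dummy targets (reading of this script,
# not a documented Keras guarantee).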
logging = TensorBoard()
checkpoint = ModelCheckpoint("trained_stage_3_best.h5", monitor='val_loss',
save_weights_only=True, save_best_only=True)
#uncomment following line to implement early stopping
#early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto')
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=validation_split,
batch_size=32,
epochs=5,
callbacks=[logging])
model.save_weights('trained_stage_1.h5')
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=True)
#model.load_weights('trained_stage_1.h5')
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=0.1,
batch_size=8,
epochs=30,
callbacks=[logging])
model.save_weights('trained_stage_ | '''
function to preprocess hdf5 data
borrowed code from train_overfit and retrain_yolo and modified to suit my input dataset type (hdf5)
'''
image_list = []
boxes_list = []
image_data_list = []
processed_box_data = []
# boxes processing
box_dataset = data['train/boxes']
processed_box_data = boxprocessing(box_dataset)
processed_box_data = processed_box_data.reshape(len(box_dataset),4,5)
for i in range(len(box_dataset)):
image = PIL.Image.open(io.BytesIO(data['train/images'][i]))
orig_size = np.array([image.width, image.height])
orig_size = np.expand_dims(orig_size, axis=0)
#Image preprocessing | identifier_body |
myds_retrain.py | anchors = np.array(anchors).reshape(-1, 2)
else:
anchors = YOLO_ANCHORS
data = h5py.File(data_path, 'r')
#Pre-processing data
boxes_list, image_data_list = get_preprocessed_data(data)
detectors_mask, matching_true_boxes = get_detector_mask(boxes_list, anchors)
#Create model
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=False)
#train model
train(model, class_names, anchors, image_data_list, boxes_list, detectors_mask, matching_true_boxes)
draw(model_body, class_names, anchors, image_data_list, image_set='val', # assumes training/validation split is 0.9
weights_name='trained_stage_3_best.h5',
save_all=False)
def get_preprocessed_data(data):
'''
function to preprocess hdf5 data
borrowed code from train_overfit and retrain_yolo and modified to suit my input dataset type (hdf5)
'''
image_list = []
boxes_list = []
image_data_list = []
processed_box_data = []
# boxes processing
box_dataset = data['train/boxes']
processed_box_data = boxprocessing(box_dataset)
processed_box_data = processed_box_data.reshape(len(box_dataset),4,5)
for i in range(len(box_dataset)):
image = PIL.Image.open(io.BytesIO(data['train/images'][i]))
orig_size = np.array([image.width, image.height])
orig_size = np.expand_dims(orig_size, axis=0)
#Image preprocessing
image = image.resize((416,416), PIL.Image.BICUBIC)
image_data = np.array(image, dtype=np.float)
image_data /= 255.0
image_data.resize((image_data.shape[0], image_data.shape[1], 1))
image_data = np.repeat(image_data, 3, 2)
image_list.append(image)
image_data_list.append(image_data)
#Box preprocessing
boxes = processed_box_data[i]
#Get box parameters as x_center, y_center, box_width, box_height, class
boxes_xy = 0.5 * (boxes[:, 3:5] + boxes[:, 1:3])
boxes_wh = boxes[:, 3:5] - boxes[:, 1:3]
boxes_xy = boxes_xy / orig_size
boxes_wh = boxes_wh / orig_size
boxes = np.concatenate((boxes_xy, boxes_wh, boxes[:, 0:1]), axis=1)
boxes_list.append(boxes)
boxes_list = np.array(boxes_list, float)
image_data_list = np.array(image_data_list, dtype=np.float)
return np.array(boxes_list, float), np.array(image_data_list, dtype=np.float)
def boxprocessing(box_data):
#function assumes there are at most 4 bounding boxes per image
processed_box_data = []
processed_box_data = np.array(processed_box_data)
for i in range(len(box_data)):
z = np.zeros([1,20]) #change here, multiple of 5 - for more bbox
y = np.append(box_data[i], z)
y = y[0:20] # also here
processed_box_data = np.append(processed_box_data, y)
return processed_box_data
def get_detector_mask(boxes_list, anchors):
'''
Precompute detectors_mask and matching_true_boxes for training.
Detectors mask is 1 for each spatial position in the final conv layer and
anchor that should be active for the given boxes and 0 otherwise.
Matching true boxes gives the regression targets for the ground truth box
that caused a detector to be active or 0 otherwise.
'''
detectors_mask = [0 for i in range(len(boxes_list))]
matching_true_boxes = [0 for i in range(len(boxes_list))]
for i, box in enumerate(boxes_list):
detectors_mask[i], matching_true_boxes[i] = preprocess_true_boxes(box, anchors, [416, 416])
return np.array(detectors_mask), np.array(matching_true_boxes)
def create_model(anchors, class_names, load_pretrained=True, freeze_body=True):
detectors_mask_shape = (13, 13, 5, 1)
matching_boxes_shape = (13, 13, 5, 5)
#Create model input layers
image_input = Input(shape=(416,416, 3))
boxes_input = Input(shape=(None, 5))
detectors_mask_input = Input(shape=detectors_mask_shape)
matching_boxes_input = Input(shape=matching_boxes_shape)
#Create model body
yolo_model = yolo_body(image_input,len(anchors),len(class_names))
topless_yolo = Model(yolo_model.input, yolo_model.layers[-2].output)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
if load_pretrained:
# Save topless yolo:
topless_yolo_path = os.path.join('model_data', 'yolo_topless.h5')
if not os.path.exists(topless_yolo_path):
print("CREATING TOPLESS WEIGHTS FILE")
yolo_path = os.path.join('model_data', 'yolo.h5')
model_body = load_model(yolo_path)
model_body = Model(model_body.inputs, model_body.layers[-2].output)
model_body.save_weights(topless_yolo_path)
topless_yolo.load_weights(topless_yolo_path)
if freeze_body:
for layer in topless_yolo.layers:
layer.trainable = False
final_layer = Conv2D(len(anchors)*(5+len(class_names)), (1, 1), activation='linear')(topless_yolo.output)
model_body = Model(image_input, final_layer)
#model_body = Model(image_input, model_body.output)
with tf.device('/cpu:0'):
model_loss = Lambda(
yolo_loss,
output_shape=(1,),
name='yolo_loss',
arguments={'anchors': anchors,'num_classes': len(class_names)})([
model_body.output, boxes_input,
detectors_mask_input, matching_boxes_input])
model = Model(
[model_body.input, boxes_input, detectors_mask_input,
matching_boxes_input], model_loss)
model.summary()
return model_body, model
def train(model, class_names, anchors, image_data, boxes, detectors_mask, matching_true_boxes, validation_split=0.1):
'''
retrain/fine-tune the model
logs training with tensorboard
saves training weights in current directory
best weights according to val_loss is saved as trained_stage_3_best.h5
'''
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
logging = TensorBoard()
checkpoint = ModelCheckpoint("trained_stage_3_best.h5", monitor='val_loss',
save_weights_only=True, save_best_only=True)
#uncomment following line to implement early stopping
#early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto')
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=validation_split,
batch_size=32,
epochs=5,
callbacks=[logging])
model.save_weights('trained_stage_1.h5')
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=True)
#model.load_weights('trained_stage_1.h5')
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=0.1,
batch_size=8,
epochs=30,
callbacks=[logging])
model.save_weights('trained_stage_2.h5')
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=0.1,
batch_size=32,
epochs=500,
callbacks=[logging, checkpoint])
model.save_weights('trained_stage_3.h5')
def draw(model_body, class_names, anchors, image_data, image_set='val',
weights_name='trained_stage_3_best.h5', out_path="output_images", save_all=True):
'''
Draw bounding boxes on image data
'''
if image_set == 'train':
image_data = np.array([np.expand_dims(image, axis=0)
for image in image_data[:int(len(image_data)*.9)]])
elif image_set == 'val':
image_data = np.array([np.expand_dims(image, axis=0) | for image in image_data[int(len(image_data)*.9):]])
elif image_set == 'all':
image_data = np.array([np.expand_dims(image, axis=0) | random_line_split |
|
myds_retrain.py | 5py.File(data_path, 'r')
#Pre-processing data
boxes_list, image_data_list = get_preprocessed_data(data)
detectors_mask, matching_true_boxes = get_detector_mask(boxes_list, anchors)
#Create model
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=False)
#train model
train(model, class_names, anchors, image_data_list, boxes_list, detectors_mask, matching_true_boxes)
draw(model_body, class_names, anchors, image_data_list, image_set='val', # assumes training/validation split is 0.9
weights_name='trained_stage_3_best.h5',
save_all=False)
def get_preprocessed_data(data):
'''
function to preprocess hdf5 data
borrowed code from train_overfit and retrain_yolo and modified to suit my input dataset type (hdf5)
'''
image_list = []
boxes_list = []
image_data_list = []
processed_box_data = []
# boxes processing
box_dataset = data['train/boxes']
processed_box_data = boxprocessing(box_dataset)
processed_box_data = processed_box_data.reshape(len(box_dataset),4,5)
for i in range(len(box_dataset)):
image = PIL.Image.open(io.BytesIO(data['train/images'][i]))
orig_size = np.array([image.width, image.height])
orig_size = np.expand_dims(orig_size, axis=0)
#Image preprocessing
image = image.resize((416,416), PIL.Image.BICUBIC)
image_data = np.array(image, dtype=np.float)
image_data /= 255.0
image_data.resize((image_data.shape[0], image_data.shape[1], 1))
image_data = np.repeat(image_data, 3, 2)
image_list.append(image)
image_data_list.append(image_data)
#Box preprocessing
boxes = processed_box_data[i]
#Get box parameters as x_center, y_center, box_width, box_height, class
boxes_xy = 0.5 * (boxes[:, 3:5] + boxes[:, 1:3])
boxes_wh = boxes[:, 3:5] - boxes[:, 1:3]
boxes_xy = boxes_xy / orig_size
boxes_wh = boxes_wh / orig_size
boxes = np.concatenate((boxes_xy, boxes_wh, boxes[:, 0:1]), axis=1)
boxes_list.append(boxes)
boxes_list = np.array(boxes_list, float)
image_data_list = np.array(image_data_list, dtype=np.float)
return np.array(boxes_list, float), np.array(image_data_list, dtype=np.float)
def boxprocessing(box_data):
#function assumes there are at most 4 bounding boxes per image
processed_box_data = []
processed_box_data = np.array(processed_box_data)
for i in range(len(box_data)):
z = np.zeros([1,20]) #change here, multiple of 5 - for more bbox
y = np.append(box_data[i], z)
y = y[0:20] # also here
processed_box_data = np.append(processed_box_data, y)
return processed_box_data
def get_detector_mask(boxes_list, anchors):
'''
Precompute detectors_mask and matching_true_boxes for training.
Detectors mask is 1 for each spatial position in the final conv layer and
anchor that should be active for the given boxes and 0 otherwise.
Matching true boxes gives the regression targets for the ground truth box
that caused a detector to be active or 0 otherwise.
'''
detectors_mask = [0 for i in range(len(boxes_list))]
matching_true_boxes = [0 for i in range(len(boxes_list))]
for i, box in enumerate(boxes_list):
detectors_mask[i], matching_true_boxes[i] = preprocess_true_boxes(box, anchors, [416, 416])
return np.array(detectors_mask), np.array(matching_true_boxes)
def create_model(anchors, class_names, load_pretrained=True, freeze_body=True):
detectors_mask_shape = (13, 13, 5, 1)
matching_boxes_shape = (13, 13, 5, 5)
#Create model input layers
image_input = Input(shape=(416,416, 3))
boxes_input = Input(shape=(None, 5))
detectors_mask_input = Input(shape=detectors_mask_shape)
matching_boxes_input = Input(shape=matching_boxes_shape)
#Create model body
yolo_model = yolo_body(image_input,len(anchors),len(class_names))
topless_yolo = Model(yolo_model.input, yolo_model.layers[-2].output)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
if load_pretrained:
# Save topless yolo:
topless_yolo_path = os.path.join('model_data', 'yolo_topless.h5')
if not os.path.exists(topless_yolo_path):
print("CREATING TOPLESS WEIGHTS FILE")
yolo_path = os.path.join('model_data', 'yolo.h5')
model_body = load_model(yolo_path)
model_body = Model(model_body.inputs, model_body.layers[-2].output)
model_body.save_weights(topless_yolo_path)
topless_yolo.load_weights(topless_yolo_path)
if freeze_body:
for layer in topless_yolo.layers:
layer.trainable = False
final_layer = Conv2D(len(anchors)*(5+len(class_names)), (1, 1), activation='linear')(topless_yolo.output)
model_body = Model(image_input, final_layer)
#model_body = Model(image_input, model_body.output)
with tf.device('/cpu:0'):
model_loss = Lambda(
yolo_loss,
output_shape=(1,),
name='yolo_loss',
arguments={'anchors': anchors,'num_classes': len(class_names)})([
model_body.output, boxes_input,
detectors_mask_input, matching_boxes_input])
model = Model(
[model_body.input, boxes_input, detectors_mask_input,
matching_boxes_input], model_loss)
model.summary()
return model_body, model
def train(model, class_names, anchors, image_data, boxes, detectors_mask, matching_true_boxes, validation_split=0.1):
'''
retrain/fine-tune the model
logs training with tensorboard
saves training weights in current directory
best weights according to val_loss is saved as trained_stage_3_best.h5
'''
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
logging = TensorBoard()
checkpoint = ModelCheckpoint("trained_stage_3_best.h5", monitor='val_loss',
save_weights_only=True, save_best_only=True)
#uncomment following line to implement early stopping
#early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto')
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=validation_split,
batch_size=32,
epochs=5,
callbacks=[logging])
model.save_weights('trained_stage_1.h5')
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=True)
#model.load_weights('trained_stage_1.h5')
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=0.1,
batch_size=8,
epochs=30,
callbacks=[logging])
model.save_weights('trained_stage_2.h5')
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=0.1,
batch_size=32,
epochs=500,
callbacks=[logging, checkpoint])
model.save_weights('trained_stage_3.h5')
def draw(model_body, class_names, anchors, image_data, image_set='val',
weights_name='trained_stage_3_best.h5', out_path="output_images", save_all=True):
'''
Draw bounding boxes on image data
'''
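# Editor's note (assumption): the 'train'/'val' options rely on the same fixed ordering used at
# training time, treating the first 90% of image_data as training images and the last 10% as
# validation images.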
if image_set == 'train':
image_data = np.array([np.expand_dims(image, axis=0)
for image in image_data[:int(len(image_data)*.9)]])
elif image_set == 'val':
image_data = np.array([np.expand_dims(image, axis=0)
for image in image_data[int(len(image_data)*.9):]])
elif image_set == 'all':
image_data = np.array([np.expand_dims(image, axis=0)
for image in image_data])
else:
| ValueError("draw argument image_set must be 'train', 'val', or 'all'") | conditional_block |
|
myds_retrain.py | (
'-d',
'--data_path',
help='path to HDF5 file containing own dataset',
default='data/phaseI-dataset.hdf5')
argparser.add_argument(
'-a',
'--anchors_path',
help='path to anchors file, defaults to yolo_anchors.txt',
default='model_data/yolo_anchors.txt')
argparser.add_argument(
'-c',
'--classes_path',
help='path to classes file, defaults to labels.txt',
default='model_data/labels.txt')
def _main(args):
data_path = os.path.expanduser(args.data_path)
classes_path = os.path.expanduser(args.classes_path)
anchors_path = os.path.expanduser(args.anchors_path)
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
if os.path.isfile(anchors_path):
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
anchors = np.array(anchors).reshape(-1, 2)
else:
anchors = YOLO_ANCHORS
data = h5py.File(data_path, 'r')
#Pre-processing data
boxes_list, image_data_list = get_preprocessed_data(data)
detectors_mask, matching_true_boxes = get_detector_mask(boxes_list, anchors)
#Create model
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=False)
#train model
train(model, class_names, anchors, image_data_list, boxes_list, detectors_mask, matching_true_boxes)
draw(model_body, class_names, anchors, image_data_list, image_set='val', # assumes training/validation split is 0.9
weights_name='trained_stage_3_best.h5',
save_all=False)
def get_preprocessed_data(data):
'''
function to preprocess hdf5 data
borrowed code from train_overfit and retrain_yolo and modified to suit my input dataset type (hdf5)
'''
image_list = []
boxes_list = []
image_data_list = []
processed_box_data = []
# boxes processing
box_dataset = data['train/boxes']
processed_box_data = boxprocessing(box_dataset)
processed_box_data = processed_box_data.reshape(len(box_dataset),4,5)
for i in range(len(box_dataset)):
image = PIL.Image.open(io.BytesIO(data['train/images'][i]))
orig_size = np.array([image.width, image.height])
orig_size = np.expand_dims(orig_size, axis=0)
#Image preprocessing
image = image.resize((416,416), PIL.Image.BICUBIC)
image_data = np.array(image, dtype=np.float)
image_data /= 255.0
image_data.resize((image_data.shape[0], image_data.shape[1], 1))
image_data = np.repeat(image_data, 3, 2)
image_list.append(image)
image_data_list.append(image_data)
#Box preprocessing
boxes = processed_box_data[i]
#Get box parameters as x_center, y_center, box_width, box_height, class
boxes_xy = 0.5 * (boxes[:, 3:5] + boxes[:, 1:3])
boxes_wh = boxes[:, 3:5] - boxes[:, 1:3]
boxes_xy = boxes_xy / orig_size
boxes_wh = boxes_wh / orig_size
boxes = np.concatenate((boxes_xy, boxes_wh, boxes[:, 0:1]), axis=1)
boxes_list.append(boxes)
boxes_list = np.array(boxes_list, float)
image_data_list = np.array(image_data_list, dtype=np.float)
return np.array(boxes_list, float), np.array(image_data_list, dtype=np.float)
def boxprocessing(box_data):
#function assumes there are at most 4 bounding boxes per image
processed_box_data = []
processed_box_data = np.array(processed_box_data)
for i in range(len(box_data)):
z = np.zeros([1,20]) #change here, multiple of 5 - for more bbox
y = np.append(box_data[i], z)
y = y[0:20] # also here
processed_box_data = np.append(processed_box_data, y)
return processed_box_data
def | (boxes_list, anchors):
'''
Precompute detectors_mask and matching_true_boxes for training.
Detectors mask is 1 for each spatial position in the final conv layer and
anchor that should be active for the given boxes and 0 otherwise.
Matching true boxes gives the regression targets for the ground truth box
that caused a detector to be active or 0 otherwise.
'''
detectors_mask = [0 for i in range(len(boxes_list))]
matching_true_boxes = [0 for i in range(len(boxes_list))]
for i, box in enumerate(boxes_list):
detectors_mask[i], matching_true_boxes[i] = preprocess_true_boxes(box, anchors, [416, 416])
return np.array(detectors_mask), np.array(matching_true_boxes)
def create_model(anchors, class_names, load_pretrained=True, freeze_body=True):
detectors_mask_shape = (13, 13, 5, 1)
matching_boxes_shape = (13, 13, 5, 5)
#Create model input layers
image_input = Input(shape=(416,416, 3))
boxes_input = Input(shape=(None, 5))
detectors_mask_input = Input(shape=detectors_mask_shape)
matching_boxes_input = Input(shape=matching_boxes_shape)
#Create model body
yolo_model = yolo_body(image_input,len(anchors),len(class_names))
topless_yolo = Model(yolo_model.input, yolo_model.layers[-2].output)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
if load_pretrained:
# Save topless yolo:
topless_yolo_path = os.path.join('model_data', 'yolo_topless.h5')
if not os.path.exists(topless_yolo_path):
print("CREATING TOPLESS WEIGHTS FILE")
yolo_path = os.path.join('model_data', 'yolo.h5')
model_body = load_model(yolo_path)
model_body = Model(model_body.inputs, model_body.layers[-2].output)
model_body.save_weights(topless_yolo_path)
topless_yolo.load_weights(topless_yolo_path)
if freeze_body:
for layer in topless_yolo.layers:
layer.trainable = False
final_layer = Conv2D(len(anchors)*(5+len(class_names)), (1, 1), activation='linear')(topless_yolo.output)
model_body = Model(image_input, final_layer)
#model_body = Model(image_input, model_body.output)
with tf.device('/cpu:0'):
model_loss = Lambda(
yolo_loss,
output_shape=(1,),
name='yolo_loss',
arguments={'anchors': anchors,'num_classes': len(class_names)})([
model_body.output, boxes_input,
detectors_mask_input, matching_boxes_input])
model = Model(
[model_body.input, boxes_input, detectors_mask_input,
matching_boxes_input], model_loss)
model.summary()
return model_body, model
def train(model, class_names, anchors, image_data, boxes, detectors_mask, matching_true_boxes, validation_split=0.1):
'''
retrain/fine-tune the model
logs training with tensorboard
saves training weights in current directory
best weights according to val_loss is saved as trained_stage_3_best.h5
'''
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
logging = TensorBoard()
checkpoint = ModelCheckpoint("trained_stage_3_best.h5", monitor='val_loss',
save_weights_only=True, save_best_only=True)
#uncomment following line to implement early stopping
#early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto')
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=validation_split,
batch_size=32,
epochs=5,
callbacks=[logging])
model.save_weights('trained_stage_1.h5')
model_body, model = create_model(anchors, class_names, load_pretrained=True, freeze_body=True)
#model.load_weights('trained_stage_1.h5')
model.compile(
optimizer='adam', loss={
'yolo_loss': lambda y_true, y_pred: y_pred
}) # This is a hack to use the custom loss function in the last layer.
model.fit([image_data, boxes, detectors_mask, matching_true_boxes],
np.zeros(len(image_data)),
validation_split=0.1,
batch_size=8,
epochs=30,
callbacks=[logging])
model.save_weights('trained_stage | get_detector_mask | identifier_name |
server.ts | != WebSocket.OPEN || this.webSocket.url.indexOf(server.address) == -1) {
console.log('[S]: not connected or new server selected, creating a new WS connection...');
this.wsConnect(server, skipQueue);
} else if (this.webSocket.readyState == WebSocket.OPEN) {
console.log('[S]: already connected to a server, no action taken');
this.serverQueue = [];
this.connected = true;
this.wsEventObservable.next({ name: wsEvent.EVENT_ALREADY_OPEN, ws: this.webSocket });
}
//console.log('[S]: queue: ', this.serverQueue);
}
disconnect() {
this.wsDisconnect(false);
}
isConnected() {
return this.connected;
}
isReconnecting() {
return this.reconnecting;
}
private wsDisconnect(reconnect = false) |
private isTransitioningState() {
return this.webSocket && (this.webSocket.readyState == WebSocket.CLOSING || this.webSocket.readyState == WebSocket.CONNECTING);
}
private wsConnect(server: ServerModel, skipQueue: boolean = false) {
//console.log('[S]: wsConnect(' + server.address + ')', new Date())
if (skipQueue) {
console.log('[S]: WS: skipQueue is true, skipping the queue and disconnecting from the old one')
this.serverQueue = [];
this.serverQueue.push(server);
this.reconnecting = true;
} else if (this.isTransitioningState()) {
//console.log('[S]: WS: the connection is in a transitioning state');
// If the connection is in one of these two transitioning states the new connection should be queued
if (!this.serverQueue.find(x => x.equals(server))) {
this.serverQueue.push(server);
//console.log('[S]: WS: the server has been added to the connections list')
} else {
//console.log('[S]: WS: the server is already in the connections queue');
}
setTimeout(() => {
if (this.isTransitioningState()/* && this.webSocket.url.indexOf(server.address) != -1*/) {
//console.log('[S]: the server ' + server.address + ' is still in transitioning state after 5 secs of connect(), closing the connection...')
this.wsDisconnect();
this.webSocket = null;
}
}, 5000);
return;
}
this.wsDisconnect();
let wsUrl = 'ws://' + server.address + ':' + Config.SERVER_PORT + '/';
this.webSocket = new WebSocket(wsUrl);
//console.log('[S]: WS: A new WebSocket has been created')
this.webSocket.onmessage = message => {
//console.log('[S]: this.webSocket.onmessage()', message)
let messageData = null;
if (message.data) {
messageData = JSON.parse(message.data);
}
if (messageData.action == responseModel.ACTION_HELO) {
// fallBack for old server versions
console.log('FallBack: new HELO request received, aborting fallback')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
// fallBack for old server versions
// Given a version number MAJOR.MINOR.PATCH, increment the:
// MAJOR version when you make incompatible API changes,
// MINOR version when you add functionality in a backwards-compatible manner
// PATCH version when you make backwards-compatible bug fixes.
// See: https://semver.org/
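// Editor's note: only the MAJOR components are compared below, so MINOR or PATCH differences
// between app and server do not trigger onVersionMismatch() (observation of the following
// check, not a documented guarantee).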
let heloResponse: responseModelHelo = messageData;
this.appVersion.getVersionNumber().then(appVersionString => {
let appVersion = new SemVer(appVersionString);
let serverVersion = new SemVer(heloResponse.version);
if (appVersion.major != serverVersion.major) {
this.onVersionMismatch();
}
});
this.settings.setQuantityEnabled(heloResponse.quantityEnabled);
} else if (messageData.action == responseModel.ACTION_PONG) {
//console.log('[S]: WS: pong received, stop waiting 5 secs')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
} else if (messageData.action == responseModel.ACTION_POPUP) {
let responseModelPopup: responseModelPopup = messageData;
if (this.popup) {
this.popup.dismiss();
}
this.popup = this.alertCtrl.create({
title: responseModelPopup.title,
message: responseModelPopup.message,
buttons: ['Ok']
});
this.popup.present();
} else if (messageData.action == responseModel.ACTION_ENABLE_QUANTITY) {
let responseModelEnableQuantity: responseModelEnableQuantity = messageData;
this.settings.setQuantityEnabled(responseModelEnableQuantity.enable);
} else if (messageData.action == responseModel.ACTION_GET_VERSION) {
// fallBack for old server versions
console.log('FallBack: old getVersion received, showing version mismatch');
this.onVersionMismatch();
// fallBack for old server versions
} else if (messageData.action == responseModel.ACTION_KICK) {
let responseModelKick: responseModelKick = messageData;
this.kickedOut = true;
if (responseModelKick.message != '') {
this.alertCtrl.create({
title: 'Limit reached', message: responseModelKick.message,
buttons: [{ text: 'Close', role: 'cancel' }]
}).present();
}
} else {
this.responseObservable.next(messageData);
}
}
this.webSocket.onopen = () => {
//console.log('[S]: onopen')
this.connectionProblemAlert = false;
this.everConnected = true; // for current instance
this.settings.setEverConnected(true); // for statistics usage
this.serverQueue = [];
if (this.pongTimeout) clearTimeout(this.pongTimeout);
console.log("[S]: WS: reconnected successfully...")
this.clearReconnectInterval();
this.settings.saveServer(server);
this.connected = true;
if (!this.continuoslyWatchForServers) {
console.log("[S]: stopping watching for servers")
this.unwatch();
} else {
console.log("[S]: stopping watching for servers")
}
this.wsEventObservable.next({ name: 'open', ws: this.webSocket });
this.lastToast.present('Connection established with ' + server.name)
//console.log('[S]: WS: new heartbeat started');
if (this.heartBeatInterval) clearInterval(this.heartBeatInterval);
this.heartBeatInterval = setInterval(() => {
//console.log('[S]: WS: sending ping')
let request = new requestModelPing();
this.send(request);
//console.log('[S]: WS: waiting 5 secs before starting the connection again')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
this.pongTimeout = setTimeout(() => { // give it 5 seconds to respond
console.log('[S]: WS pong not received, closing connection...')
this.wsDisconnect(false);
this.scheduleNewWsConnection(server); // if this timeout wasn't cleared by a response first, schedule a new connection
}, 1000 * 5);
}, 1000 * 60); // send a ping every 60 seconds
/** Since we are inside onopen it means that we're connected to a server
* and we can try to reconnect to it up until another onopen event
* occurs.
* When the next onopen occurs it'll use a new 'server' variable, and
* it'll try to reconnect to it on every resume event */
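/** Editor's note: the previous resume subscription is dropped before re-subscribing so that
 * app-resume events only attempt to reconnect to the most recently connected server
 * (reading of the code below, not a documented contract). */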
if (this.lastOnResumeSubscription != null) {
this.lastOnResumeSubscription.unsubscribe();
this.lastOnResumeSubscription = null;
}
this.lastOnResumeSubscription = this.platform.resume.subscribe(next => {
console.log('resume()')
if (!this.connected) {
console.log('onResume: not connected -> scheduling new connection immediately')
this.scheduleNewWsConnection(server);
}
});
this.settings.getDeviceName().then(deviceName => {
console.log('promise join: getDeviceName getRated getLastScanDate ')
let request = new requestModelHelo().fromObject({
deviceName: deviceName,
deviceId: this.device.uuid,
});
this.send(request);
// fallBack for old server versions
console.log('FallBack: new Helo sent, waiting for response...')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
this.fallBackTimeout = setTimeout(() => {
console.log('FallBack: new Helo response not received, sending old getVersion');
let request = new requestModelGetVersion().fromObject({});
this.send(request);
}, 5000);
// fallBack for old server versions
});
};
this.webSocket.onerror = err => {
console.log('[S]: WS | {
console.log('[S]: wsDisconnect(reconnect=' + reconnect + ')', this.webSocket);
if (this.webSocket) {
if (this.everConnected && !this.reconnecting) {
this.lastToast.present('Connection lost');
this.connected = false;
this.wsEventObservable.next({ name: wsEvent.EVENT_ERROR, ws: this.webSocket });
}
let code = reconnect ? ServerProvider.EVENT_CODE_CLOSE_NORMAL : ServerProvider.EVENT_CODE_DO_NOT_ATTEMP_RECCONECTION;
this.webSocket.close(code);
this.webSocket.onmessage = null;
this.webSocket.onopen = null;
this.webSocket.onerror = null;
this.webSocket.onclose = null;
this.webSocket = null;
}
} | identifier_body |
server.ts | .readyState != WebSocket.OPEN || this.webSocket.url.indexOf(server.address) == -1) {
console.log('[S]: not connected or new server selected, creating a new WS connection...');
this.wsConnect(server, skipQueue);
} else if (this.webSocket.readyState == WebSocket.OPEN) {
console.log('[S]: already connected to a server, no action taken');
this.serverQueue = [];
this.connected = true;
this.wsEventObservable.next({ name: wsEvent.EVENT_ALREADY_OPEN, ws: this.webSocket });
} | //console.log('[S]: queue: ', this.serverQueue);
}
disconnect() {
this.wsDisconnect(false);
}
isConnected() {
return this.connected;
}
isReconnecting() {
return this.reconnecting;
}
private wsDisconnect(reconnect = false) {
console.log('[S]: wsDisconnect(reconnect=' + reconnect + ')', this.webSocket);
if (this.webSocket) {
if (this.everConnected && !this.reconnecting) {
this.lastToast.present('Connection lost');
this.connected = false;
this.wsEventObservable.next({ name: wsEvent.EVENT_ERROR, ws: this.webSocket });
}
let code = reconnect ? ServerProvider.EVENT_CODE_CLOSE_NORMAL : ServerProvider.EVENT_CODE_DO_NOT_ATTEMP_RECCONECTION;
this.webSocket.close(code);
this.webSocket.onmessage = null;
this.webSocket.onopen = null;
this.webSocket.onerror = null;
this.webSocket.onclose = null;
this.webSocket = null;
}
}
private isTransitioningState() {
return this.webSocket && (this.webSocket.readyState == WebSocket.CLOSING || this.webSocket.readyState == WebSocket.CONNECTING);
}
private wsConnect(server: ServerModel, skipQueue: boolean = false) {
//console.log('[S]: wsConnect(' + server.address + ')', new Date())
if (skipQueue) {
console.log('[S]: WS: skipQueue is true, skipping the queue and disconnecting from the old one')
this.serverQueue = [];
this.serverQueue.push(server);
this.reconnecting = true;
} else if (this.isTransitioningState()) {
//console.log('[S]: WS: the connection is in a transitioning state');
// If the connection is in one of these two transitioning states the new connection should be queued
if (!this.serverQueue.find(x => x.equals(server))) {
this.serverQueue.push(server);
//console.log('[S]: WS: the server has been added to the connections list')
} else {
//console.log('[S]: WS: the server is already in the connections queue');
}
setTimeout(() => {
if (this.isTransitioningState()/* && this.webSocket.url.indexOf(server.address) != -1*/) {
//console.log('[S]: the server ' + server.address + ' is still in transitioning state after 5 secs of connect(), closing the connection...')
this.wsDisconnect();
this.webSocket = null;
}
}, 5000);
return;
}
this.wsDisconnect();
let wsUrl = 'ws://' + server.address + ':' + Config.SERVER_PORT + '/';
this.webSocket = new WebSocket(wsUrl);
//console.log('[S]: WS: A new WebSocket has been created')
this.webSocket.onmessage = message => {
//console.log('[S]: this.webSocket.onmessage()', message)
let messageData = null;
if (message.data) {
messageData = JSON.parse(message.data);
}
if (messageData.action == responseModel.ACTION_HELO) {
// fallBack for old server versions
console.log('FallBack: new HELO request received, aborting fallback')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
// fallBack for old server versions
// Given a version number MAJOR.MINOR.PATCH, increment the:
// MAJOR version when you make incompatible API changes,
// MINOR version when you add functionality in a backwards-compatible manner
// PATCH version when you make backwards-compatible bug fixes.
// See: https://semver.org/
let heloResponse: responseModelHelo = messageData;
this.appVersion.getVersionNumber().then(appVersionString => {
let appVersion = new SemVer(appVersionString);
let serverVersion = new SemVer(heloResponse.version);
if (appVersion.major != serverVersion.major) {
this.onVersionMismatch();
}
});
this.settings.setQuantityEnabled(heloResponse.quantityEnabled);
} else if (messageData.action == responseModel.ACTION_PONG) {
//console.log('[S]: WS: pong received, stop waiting 5 secs')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
} else if (messageData.action == responseModel.ACTION_POPUP) {
let responseModelPopup: responseModelPopup = messageData;
if (this.popup) {
this.popup.dismiss();
}
this.popup = this.alertCtrl.create({
title: responseModelPopup.title,
message: responseModelPopup.message,
buttons: ['Ok']
});
this.popup.present();
} else if (messageData.action == responseModel.ACTION_ENABLE_QUANTITY) {
let responseModelEnableQuantity: responseModelEnableQuantity = messageData;
this.settings.setQuantityEnabled(responseModelEnableQuantity.enable);
} else if (messageData.action == responseModel.ACTION_GET_VERSION) {
// fallBack for old server versions
console.log('FallBack: old getVersion received, showing version mismatch');
this.onVersionMismatch();
// fallBack for old server versions
} else if (messageData.action == responseModel.ACTION_KICK) {
let responseModelKick: responseModelKick = messageData;
this.kickedOut = true;
if (responseModelKick.message != '') {
this.alertCtrl.create({
title: 'Limit reached', message: responseModelKick.message,
buttons: [{ text: 'Close', role: 'cancel' }]
}).present();
}
} else {
this.responseObservable.next(messageData);
}
}
this.webSocket.onopen = () => {
//console.log('[S]: onopen')
this.connectionProblemAlert = false;
this.everConnected = true; // for current instance
this.settings.setEverConnected(true); // for statistics usage
this.serverQueue = [];
if (this.pongTimeout) clearTimeout(this.pongTimeout);
console.log("[S]: WS: reconnected successfully...")
this.clearReconnectInterval();
this.settings.saveServer(server);
this.connected = true;
if (!this.continuoslyWatchForServers) {
console.log("[S]: stopping watching for servers")
this.unwatch();
} else {
console.log("[S]: stopping watching for servers")
}
this.wsEventObservable.next({ name: 'open', ws: this.webSocket });
this.lastToast.present('Connection established with ' + server.name)
//console.log('[S]: WS: new heartbeat started');
if (this.heartBeatInterval) clearInterval(this.heartBeatInterval);
this.heartBeatInterval = setInterval(() => {
//console.log('[S]: WS: sending ping')
let request = new requestModelPing();
this.send(request);
//console.log('[S]: WS: waiting 5 secs before starting the connection again')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
this.pongTimeout = setTimeout(() => { // give it 5 seconds to respond
console.log('[S]: WS pong not received, closing connection...')
this.wsDisconnect(false);
this.scheduleNewWsConnection(server); // if this timeout wasn't cleared by a response first, schedule a new connection
}, 1000 * 5);
}, 1000 * 60); // send a ping every 60 seconds
/** Since we are inside onopen it means that we're connected to a server
* and we can try to reconnect to it up until another onopen event
* occurs.
* When the next onopen occurs it'll use a new 'server' variable, and
* it'll try to reconnect to it on every resume event */
if (this.lastOnResumeSubscription != null) {
this.lastOnResumeSubscription.unsubscribe();
this.lastOnResumeSubscription = null;
}
this.lastOnResumeSubscription = this.platform.resume.subscribe(next => {
console.log('resume()')
if (!this.connected) {
console.log('onResume: not connected -> scheduling new connection immediately')
this.scheduleNewWsConnection(server);
}
});
this.settings.getDeviceName().then(deviceName => {
console.log('promise join: getDeviceName getRated getLastScanDate ')
let request = new requestModelHelo().fromObject({
deviceName: deviceName,
deviceId: this.device.uuid,
});
this.send(request);
// fallBack for old server versions
console.log('FallBack: new Helo sent, waiting for response...')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
this.fallBackTimeout = setTimeout(() => {
console.log('FallBack: new Helo response not received, sending old getVersion');
let request = new requestModelGetVersion().fromObject({});
this.send(request);
}, 5000);
// fallBack for old server versions
});
};
this.webSocket.onerror = err => {
console.log('[S]: WS: | random_line_split |
|
server.ts | this.popup.present();
} else if (messageData.action == responseModel.ACTION_ENABLE_QUANTITY) {
let responseModelEnableQuantity: responseModelEnableQuantity = messageData;
this.settings.setQuantityEnabled(responseModelEnableQuantity.enable);
} else if (messageData.action == responseModel.ACTION_GET_VERSION) {
// fallBack for old server versions
console.log('FallBack: old getVersion received, showing version mismatch');
this.onVersionMismatch();
// fallBack for old server versions
} else if (messageData.action == responseModel.ACTION_KICK) {
let responseModelKick: responseModelKick = messageData;
this.kickedOut = true;
if (responseModelKick.message != '') {
this.alertCtrl.create({
title: 'Limit reached', message: responseModelKick.message,
buttons: [{ text: 'Close', role: 'cancel' }]
}).present();
}
} else {
this.responseObservable.next(messageData);
}
}
this.webSocket.onopen = () => {
//console.log('[S]: onopen')
this.connectionProblemAlert = false;
this.everConnected = true; // for current instance
this.settings.setEverConnected(true); // for statistics usage
this.serverQueue = [];
if (this.pongTimeout) clearTimeout(this.pongTimeout);
console.log("[S]: WS: reconnected successfully...")
this.clearReconnectInterval();
this.settings.saveServer(server);
this.connected = true;
if (!this.continuoslyWatchForServers) {
console.log("[S]: stopping watching for servers")
this.unwatch();
} else {
console.log("[S]: stopping watching for servers")
}
this.wsEventObservable.next({ name: 'open', ws: this.webSocket });
this.lastToast.present('Connection established with ' + server.name)
//console.log('[S]: WS: new heartbeat started');
if (this.heartBeatInterval) clearInterval(this.heartBeatInterval);
this.heartBeatInterval = setInterval(() => {
//console.log('[S]: WS: sending ping')
let request = new requestModelPing();
this.send(request);
//console.log('[S]: WS: waiting 5 secs before starting the connection again')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
this.pongTimeout = setTimeout(() => { // give it 5 seconds to respond
console.log('[S]: WS pong not received, closing connection...')
this.wsDisconnect(false);
this.scheduleNewWsConnection(server); // if this timeout wasn't cleared by a response first, schedule a new connection
}, 1000 * 5);
}, 1000 * 60); // send a ping every 60 seconds
/** Since we are inside onopen it means that we're connected to a server
* and we can try to reconnect to it up until another onopen event
* occurs.
* When the next onopen occurs it'll use a new 'server' variable, and
* it'll try to reconnect to it on every resume event */
if (this.lastOnResumeSubscription != null) {
this.lastOnResumeSubscription.unsubscribe();
this.lastOnResumeSubscription = null;
}
this.lastOnResumeSubscription = this.platform.resume.subscribe(next => {
console.log('resume()')
if (!this.connected) {
console.log('onResume: not connected -> scheduling new connection immediately')
this.scheduleNewWsConnection(server);
}
});
this.settings.getDeviceName().then(deviceName => {
console.log('promise join: getDeviceName getRated getLastScanDate ')
let request = new requestModelHelo().fromObject({
deviceName: deviceName,
deviceId: this.device.uuid,
});
this.send(request);
// fallBack for old server versions
console.log('FallBack: new Helo sent, waiting for response...')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
this.fallBackTimeout = setTimeout(() => {
console.log('FallBack: new Helo response not received, sending old getVersion');
let request = new requestModelGetVersion().fromObject({});
this.send(request);
}, 5000);
// fallBack for old server versions
});
};
this.webSocket.onerror = err => {
console.log('[S]: WS: onerror ')
if (!this.reconnecting) {
this.lastToast.present('Unable to connect. Select Help from the app menu in order to determine the cause');
}
this.connected = false;
this.wsEventObservable.next({ name: wsEvent.EVENT_ERROR, ws: this.webSocket });
this.scheduleNewWsConnection(server);
}
this.webSocket.onclose = (ev: CloseEvent) => {
console.log('[S]: onclose')
if (this.everConnected && !this.reconnecting) {
this.lastToast.present('Connection closed');
}
if (ev.code != ServerProvider.EVENT_CODE_DO_NOT_ATTEMP_RECCONECTION) {
this.scheduleNewWsConnection(server);
}
this.connected = false;
this.kickedOut = false;
this.wsEventObservable.next({ name: wsEvent.EVENT_CLOSE, ws: this.webSocket });
if (!this.watchForServersObserver) {
this.watchForServersObserver = this.watchForServers().subscribe((discoveryResult: discoveryResultModel) => {
this.settings.getDefaultServer().then(defaultServer => {
if (defaultServer.name == discoveryResult.server.name && discoveryResult.server.name.length && defaultServer.address != discoveryResult.server.address) { // if the server has the same name, but a different ip => ask to reconnect
let alert = this.alertCtrl.create({
title: "Reconnect",
message: "It seems that the computer " + defaultServer.name + " changed ip address from \
" + defaultServer.address + " to " + discoveryResult.server.address + ", do you want to reconnect?",
buttons: [{
text: 'No',
role: 'cancel',
handler: () => { }
}, {
text: 'Reconnect',
handler: () => {
this.settings.setDefaultServer(discoveryResult.server); // override the defaultServer
this.settings.getSavedServers().then(savedServers => {
this.settings.setSavedServers(
savedServers
.filter(x => x.name != discoveryResult.server.name) // remove the old server
.concat(discoveryResult.server)) // add a new one
});
this.wsConnect(discoveryResult.server, true);
}
}]
});
alert.present();
} else if (defaultServer.name == discoveryResult.server.name && defaultServer.address == discoveryResult.server.address && this.everConnected) { // if the server was closed and opened again => reconnect without asking
this.wsConnect(discoveryResult.server, true);
}
})
})
}
}
} // wsConnect() end
send(request: requestModel) {
if (this.kickedOut) {
return;
}
if (this.webSocket) {
if (this.webSocket.readyState == WebSocket.OPEN) {
//console.log(request, JSON.stringify(request));
this.webSocket.send(JSON.stringify(request));
} else if (!this.connectionProblemAlert) {
this.connectionProblemAlert = true;
this.alertCtrl.create({
title: 'Connection problem', message: 'To determine the cause check the help page',
buttons: [{ text: 'Close', role: 'cancel' }, {
text: 'Help page', handler: () => {
this.events.publish('setPage', HelpPage);
}
}]
}).present();
}
} else {
// //console.log("offline mode, cannot send!")
}
}
watchForServers(): Observable<discoveryResultModel> {
if (this.watchForServersObservable) {
return this.watchForServersObservable;
}
this.watchForServersObservable = Observable.create(observer => {
if (!this.platform.is('cordova')) { // for browser support
setTimeout(() => {
let dummyServer: discoveryResultModel = { server: new ServerModel('localhost', 'localhost'), action: 'added' };
observer.next(dummyServer);
}, 1000)
return;
}
this.unwatch();
this.zeroconf.watch('_http._tcp.', 'local.').subscribe(result => {
var action = result.action;
var service = result.service;
if (service.port == Config.SERVER_PORT && service.ipv4Addresses && service.ipv4Addresses.length) {
console.log("ZEROCONF:", result);
this.NgZone.run(() => {
service.ipv4Addresses.forEach(ipv4 => {
if (ipv4 && ipv4.length) {
observer.next({ server: new ServerModel(ipv4, service.hostname), action: action });
}
})
});
}
});
});
return this.watchForServersObservable;
}
unwatch() {
this.watchForServersObservable = null;
if (this.watchForServersObserver) {
this.watchForServersObserver.unsubscribe();
this.watchForServersObserver = null;
}
//console.log('[S]: UNWATCHED ')
this.zeroconf.close();
}
// isConnectedWith(server: ServerModel) {
// if (this.webSocket.readyState != WebSocket.OPEN || this.webSocket.url.indexOf(server.address) == -1) {
// return false;
// }
// return true;
// }
public s | etContinuoslyWatchForServers( | identifier_name |
|
server.ts | .readyState != WebSocket.OPEN || this.webSocket.url.indexOf(server.address) == -1) {
console.log('[S]: not connected or new server selected, creating a new WS connection...');
this.wsConnect(server, skipQueue);
} else if (this.webSocket.readyState == WebSocket.OPEN) {
console.log('[S]: already connected to a server, no action taken');
this.serverQueue = [];
this.connected = true;
this.wsEventObservable.next({ name: wsEvent.EVENT_ALREADY_OPEN, ws: this.webSocket });
}
//console.log('[S]: queue: ', this.serverQueue);
}
disconnect() {
this.wsDisconnect(false);
}
isConnected() {
return this.connected;
}
isReconnecting() {
return this.reconnecting;
}
private wsDisconnect(reconnect = false) {
console.log('[S]: wsDisconnect(reconnect=' + reconnect + ')', this.webSocket);
if (this.webSocket) {
if (this.everConnected && !this.reconnecting) {
this.lastToast.present('Connection lost');
this.connected = false;
this.wsEventObservable.next({ name: wsEvent.EVENT_ERROR, ws: this.webSocket });
}
let code = reconnect ? ServerProvider.EVENT_CODE_CLOSE_NORMAL : ServerProvider.EVENT_CODE_DO_NOT_ATTEMP_RECCONECTION;
this.webSocket.close(code);
this.webSocket.onmessage = null;
this.webSocket.onopen = null;
this.webSocket.onerror = null;
this.webSocket.onclose = null;
this.webSocket = null;
}
}
private isTransitioningState() {
return this.webSocket && (this.webSocket.readyState == WebSocket.CLOSING || this.webSocket.readyState == WebSocket.CONNECTING);
}
private wsConnect(server: ServerModel, skipQueue: boolean = false) {
//console.log('[S]: wsConnect(' + server.address + ')', new Date())
if (skipQueue) {
console.log('[S]: WS: skipQueue is true, skipping the queue and disconnecting from the old one')
this.serverQueue = [];
this.serverQueue.push(server);
this.reconnecting = true;
} else if (this.isTransitioningState()) {
//console.log('[S]: WS: the connection is in a transitioning state');
// If the connection is in one of these two transitioning states the new connection should be queued
if (!this.serverQueue.find(x => x.equals(server))) | else {
//console.log('[S]: WS: the server is already in the connections queue');
}
setTimeout(() => {
if (this.isTransitioningState()/* && this.webSocket.url.indexOf(server.address) != -1*/) {
//console.log('[S]: the server ' + server.address + ' is still in transitioning state after 5 secs of connect(), closing the connection...')
this.wsDisconnect();
this.webSocket = null;
}
}, 5000);
return;
}
this.wsDisconnect();
let wsUrl = 'ws://' + server.address + ':' + Config.SERVER_PORT + '/';
this.webSocket = new WebSocket(wsUrl);
//console.log('[S]: WS: A new WebSocket has been created')
this.webSocket.onmessage = message => {
//console.log('[S]: this.webSocket.onmessage()', message)
let messageData = null;
if (message.data) {
messageData = JSON.parse(message.data);
}
if (messageData.action == responseModel.ACTION_HELO) {
// fallBack for old server versions
console.log('FallBack: new HELO request received, aborting fallback')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
// fallBack for old server versions
// Given a version number MAJOR.MINOR.PATCH, increment the:
// MAJOR version when you make incompatible API changes,
// MINOR version when you add functionality in a backwards-compatible manner
// PATCH version when you make backwards-compatible bug fixes.
// See: https://semver.org/
let heloResponse: responseModelHelo = messageData;
this.appVersion.getVersionNumber().then(appVersionString => {
let appVersion = new SemVer(appVersionString);
let serverVersion = new SemVer(heloResponse.version);
if (appVersion.major != serverVersion.major) {
this.onVersionMismatch();
}
});
this.settings.setQuantityEnabled(heloResponse.quantityEnabled);
} else if (messageData.action == responseModel.ACTION_PONG) {
//console.log('[S]: WS: pong received, stop waiting 5 secs')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
} else if (messageData.action == responseModel.ACTION_POPUP) {
let responseModelPopup: responseModelPopup = messageData;
if (this.popup) {
this.popup.dismiss();
}
this.popup = this.alertCtrl.create({
title: responseModelPopup.title,
message: responseModelPopup.message,
buttons: ['Ok']
});
this.popup.present();
} else if (messageData.action == responseModel.ACTION_ENABLE_QUANTITY) {
let responseModelEnableQuantity: responseModelEnableQuantity = messageData;
this.settings.setQuantityEnabled(responseModelEnableQuantity.enable);
} else if (messageData.action == responseModel.ACTION_GET_VERSION) {
// fallBack for old server versions
console.log('FallBack: old getVersion received, showing version mismatch');
this.onVersionMismatch();
// fallBack for old server versions
} else if (messageData.action == responseModel.ACTION_KICK) {
let responseModelKick: responseModelKick = messageData;
this.kickedOut = true;
if (responseModelKick.message != '') {
this.alertCtrl.create({
title: 'Limit reached', message: responseModelKick.message,
buttons: [{ text: 'Close', role: 'cancel' }]
}).present();
}
} else {
this.responseObservable.next(messageData);
}
}
this.webSocket.onopen = () => {
//console.log('[S]: onopen')
this.connectionProblemAlert = false;
this.everConnected = true; // for current instance
this.settings.setEverConnected(true); // for statistics usage
this.serverQueue = [];
if (this.pongTimeout) clearTimeout(this.pongTimeout);
console.log("[S]: WS: reconnected successfully...")
this.clearReconnectInterval();
this.settings.saveServer(server);
this.connected = true;
if (!this.continuoslyWatchForServers) {
console.log("[S]: stopping watching for servers")
this.unwatch();
} else {
console.log("[S]: stopping watching for servers")
}
this.wsEventObservable.next({ name: 'open', ws: this.webSocket });
this.lastToast.present('Connection established with ' + server.name)
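// Application-level heartbeat: send a ping every 60 seconds; if no pong arrives within 5 seconds, drop the socket and schedule a new connection.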
//console.log('[S]: WS: new heartbeat started');
if (this.heartBeatInterval) clearInterval(this.heartBeatInterval);
this.heartBeatInterval = setInterval(() => {
//console.log('[S]: WS: sending ping')
let request = new requestModelPing();
this.send(request);
//console.log('[S]: WS: waiting 5 secs before starting the connection again')
if (this.pongTimeout) clearTimeout(this.pongTimeout);
this.pongTimeout = setTimeout(() => { // give the server 5 seconds to answer
console.log('[S]: WS pong not received, closing connection...')
this.wsDisconnect(false);
this.scheduleNewWsConnection(server); // if this timeout wasn't cleared by a pong first, schedule a new connection
}, 1000 * 5);
}, 1000 * 60); // send a ping every 60 seconds
/** Since we are inside onopen we know we're connected to a server,
* and we can keep trying to reconnect to it until another onopen event
* occurs.
* When the next onopen occurs it'll capture a new 'server' variable and
* try to reconnect to that one on every resume event */
if (this.lastOnResumeSubscription != null) {
this.lastOnResumeSubscription.unsubscribe();
this.lastOnResumeSubscription = null;
}
this.lastOnResumeSubscription = this.platform.resume.subscribe(next => {
console.log('resume()')
if (!this.connected) {
console.log('onResume: not connected -> scheduling new connection immediately')
this.scheduleNewWsConnection(server);
}
});
this.settings.getDeviceName().then(deviceName => {
console.log('promise join: getDeviceName getRated getLastScanDate ')
let request = new requestModelHelo().fromObject({
deviceName: deviceName,
deviceId: this.device.uuid,
});
this.send(request);
// fallBack for old server versions
console.log('FallBack: new Helo sent, waiting for response...')
if (this.fallBackTimeout) clearTimeout(this.fallBackTimeout);
this.fallBackTimeout = setTimeout(() => {
console.log('FallBack: new Helo response not received, sending old getVersion');
let request = new requestModelGetVersion().fromObject({});
this.send(request);
}, 5000);
// fallBack for old server versions
});
};
this.webSocket.onerror = err => {
console.log('[S]: WS | {
this.serverQueue.push(server);
//console.log('[S]: WS: the server has been added to the connections list')
} | conditional_block |
ash-v3_8h.js | [ "ASH_FLAG", "ash-v3_8h.html#a35d6a5603fa48cc59eb18417e4376ace", null ],
[ "ASH_FRAME_COUNTER_ROLLOVER", "ash-v3_8h.html#a3898f887b8d025d7b9b58ed70ca31a9c", null ],
[ "ASH_PAYLOAD_LENGTH_BYTE_ESCAPED", "ash-v3_8h.html#acdbd78ad666c177bb3111175b6b20c97", null ],
[ "ASH_STATE_STRINGS", "ash-v3_8h.html#aac5fff083788dced19c12c979b348264", null ],
[ "ASH_WAKEUP", "ash-v3_8h.html#a2bf6e3a6c9e80d86301b9ac32c2caf33", null ],
[ "ASH_XOFF", "ash-v3_8h.html#a0a5eca103d9652529a4dfa180e3c9bfb", null ],
[ "ASH_XON", "ash-v3_8h.html#a4babb5a5068de0f896475917c68acced", null ],
[ "AshHeaderEscapeType", "ash-v3_8h.html#a864ff08698c94ae369d5c6ea26fb4e2a", null ],
[ "MAX_ASH_PACKET_SIZE", "ash-v3_8h.html#ad4d321059e5e86ff2b6e60659eefcbf2", null ],
[ "MAX_ASH_PAYLOAD_SIZE", "ash-v3_8h.html#a598f6925f7287486ead2b884f292a77a", null ],
[ "MAX_ASH_RESEND_COUNT", "ash-v3_8h.html#a757a2286e8381d0e81461ad7533862ac", null ],
[ "NEXT_ASH_OUTGOING_FRAME_COUNTER", "ash-v3_8h.html#a127ad961982873f1612c912a39358f28", null ],
[ "AshHeaderBytesLocation", "ash-v3_8h.html#a39ebc096bfa274794691e6e06c9d3149", [
[ "ASH_FLAG_INDEX", "ash-v3_8h.html#ga39ebc096bfa274794691e6e06c9d3149a7e5b7e42faf91da5417137f876f6b176", null ],
[ "ASH_HEADER_ESCAPE_BYTE_INDEX", "ash-v3_8h.html#ga39ebc096bfa274794691e6e06c9d3149a2d59eadc683a2dcb0f89bb9e9b7e202c", null ],
[ "ASH_CONTROL_BYTE_INDEX", "ash-v3_8h.html#ga39ebc096bfa274794691e6e06c9d3149a9246d2e83b1617b15d75b01eab70673b", null ],
[ "ASH_PAYLOAD_LENGTH_INDEX", "ash-v3_8h.html#ga39ebc096bfa274794691e6e06c9d3149af4d7f75ef08f8698ddcd332db622a0ea", null ],
[ "ASH_HEADER_LENGTH", "ash-v3_8h.html#ga39ebc096bfa274794691e6e06c9d3149a2b5aa1efcc9a06887f6e09330fa21fef", null ]
] ],
[ "AshMessageType", "ash-v3_8h.html#a8e66d148c8384fee7b668b8663125e3b", [
[ "ASH_RESET", "ash-v3_8h.html#ga8e66d148c8384fee7b668b8663125e3baf5b51a9f6ac7aceeaf97f1a6c42d0934", null ],
[ "ASH_RESET_ACK", "ash-v3_8h.html#ga8e66d148c8384fee7b668b8663125e3ba0b0c42982b0882fcd7fcea93c085a862", null ],
[ "ASH_ACK", "ash-v3_8h.html#ga8e66d148c8384fee7b668b8663125e3ba243879639919eaa265e07c7424977d13", null ],
[ "ASH_NACK", "ash-v3_8h.html#ga8e66d148c8384fee7b668b8663125e3bacf2c86fca269752b5a74d5bcceaa446a", null ],
[ "LAST_ASH_MESSAGE_TYPE", "ash-v3_8h.html#ga8e66d148c8384fee7b668b8663125e3ba9a25a69ebcfb34836ca156ff5dc01157", null ]
] ],
[ "AshRxFrameState", "ash-v3_8h.html#acdd10d54faf7759bee0f28b833b0f38f", [
[ "ASH_INACTIVE", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38fab40619eac08151e902646a93e8b55f58", null ],
[ "ASH_NEED_HEADER_ESCAPE_BYTE", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38fa1a9cca817cb56e176481f968b9810924", null ],
[ "ASH_NEED_CONTROL_BYTE", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38facb7438b8b297ba11690273ffde15512d", null ],
[ "ASH_NEED_PAYLOAD_LENGTH", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38fab5c20157b3852b3f2386c9aba39004dc", null ],
[ "ASH_NEED_PAYLOAD", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38fab021580c630ae256bef08ce0e46674db", null ],
[ "ASH_NEED_HIGH_CRC", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38fa5f6b913ddf077b7c814102c367d0a404", null ],
[ "ASH_NEED_IN_BETWEEN_CRC", "ash-v3_8h.html#gacdd10d54faf7759bee0f28b833b0f38fa1d355c99f5f9833aa0db0bdcd0f76e14", null ],
| random_line_split |
||
lpc55_flash.rs | , clap::ValueEnum)]
enum CfpaChoice {
Scratch,
Ping,
Pong,
}
#[derive(Debug, Parser)]
#[clap(name = "isp")]
struct Isp {
/// UART port
#[clap(name = "port")]
port: String,
/// How fast to run the UART. 57,600 baud seems very reliable but is rather
/// slow. In certain test setups we've gotten rates of up to 1Mbaud to work
/// reliably -- your mileage may vary!
#[clap(short = 'b', default_value = "57600")]
baud_rate: u32,
#[clap(subcommand)]
cmd: ISPCommand,
}
fn | (prop: BootloaderProperty, params: Vec<u32>) {
match prop {
BootloaderProperty::BootloaderVersion => {
println!("Version {:x}", params[1]);
}
BootloaderProperty::AvailablePeripherals => {
println!("Bitmask of peripherals {:x}", params[1]);
}
BootloaderProperty::FlashStart => {
println!("Flash start = 0x{:x}", params[1]);
}
BootloaderProperty::FlashSize => {
println!("Flash Size = {:x}", params[1]);
}
BootloaderProperty::FlashSectorSize => {
println!("Flash Sector Size = {:x}", params[1]);
}
BootloaderProperty::AvailableCommands => {
println!("Bitmask of commands = {:x}", params[1]);
}
BootloaderProperty::CRCStatus => {
println!("CRC status = {}", params[1]);
}
BootloaderProperty::VerifyWrites => {
println!("Verify Writes (bool) {}", params[1]);
}
BootloaderProperty::MaxPacketSize => {
println!("Max Packet Size = {}", params[1]);
}
BootloaderProperty::ReservedRegions => {
println!("Reserved regions? = {:x?}", params);
}
BootloaderProperty::RAMStart => {
println!("RAM start = 0x{:x}", params[1]);
}
BootloaderProperty::RAMSize => {
println!("RAM size = 0x{:x}", params[1]);
}
BootloaderProperty::SystemDeviceID => {
println!("DEVICE_ID0 register = 0x{:x}", params[1]);
}
BootloaderProperty::SecurityState => {
println!(
"Security State = {}",
if params[1] == 0x5aa55aa5 {
"UNLOCKED"
} else {
"LOCKED"
}
);
}
BootloaderProperty::UniqueID => {
println!(
"UUID = {:x}{:x}{:x}{:x}",
params[1], params[2], params[3], params[4]
);
}
BootloaderProperty::TargetVersion => {
println!("Target version = {:x}", params[1]);
}
BootloaderProperty::FlashPageSize => {
println!("Flash page size = {:x}", params[1]);
}
BootloaderProperty::IRQPinStatus => {
println!("IRQ Pin Status = {}", params[1]);
}
BootloaderProperty::FFRKeyStoreStatus => {
println!("FFR Store Status = {}", params[1]);
}
}
}
fn pretty_print_error(params: Vec<u32>) {
let reason = params[1] & 0xfffffff0;
if reason == 0 {
println!("No errors reported");
} else if reason == 0x0602f300 {
println!("Passive boot failed, reason:");
let specific_reason = params[2] & 0xfffffff0;
match specific_reason {
0x0b36f300 => {
println!("Secure image authentication failed. Check:");
println!("- Is the image you are booting signed?");
println!("- Is the image signed with the corresponding key?");
}
0x0b37f300 => {
println!("Application CRC failed");
}
0x0b35f300 => {
println!("Application entry point and/or stack is invalid");
}
0x0b38f300 => {
println!("DICE failure. Check:");
println!("- Key store is set up properly (UDS)");
}
0x0d70f300 => {
println!("Trying to boot a TZ image on a device that doesn't have TZ!");
}
0x0d71f300 => {
println!("Error reading TZ Image type from CMPA");
}
0x0d72f300 => {
println!("Bad TZ image mode, check your image");
}
0x0c00f500 => {
println!("Application returned to the ROM?");
}
_ => {
println!("Some other reason, raw bytes: {:x?}", params);
}
}
} else {
println!("Something bad happen: {:x?}", params);
}
}
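// Illustrative invocations (the binary name and exact argument shapes are assumptions,
// not taken from the project's docs -- check `--help` for the real interface):
//
//   isp /dev/ttyUSB0 ping
//   isp /dev/ttyUSB0 -b 115200 flash-erase-all
//   isp /dev/ttyUSB0 read-cmpa cmpa.bin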
fn main() -> Result<()> {
let cmd = Isp::parse();
// The target _technically_ has autobaud but it's very flaky
// and these seem to be the preferred settings
//
// We initially set the timeout short so we can drain the incoming buffer in
// a portable manner below. We'll adjust it up after that.
let mut port = serialport::new(&cmd.port, cmd.baud_rate)
.timeout(Duration::from_millis(100))
.data_bits(DataBits::Eight)
.flow_control(FlowControl::None)
.parity(Parity::None)
.stop_bits(StopBits::One)
.open()?;
// Extract any bytes left over in the serial port driver from previous
// interaction.
loop {
let mut throwaway = [0; 16];
match port.read(&mut throwaway) {
Ok(0) => {
// This should only happen on nonblocking reads, which we
// haven't asked for, but it does mean the buffer is empty so
// treat it as success.
break;
}
Ok(_) => {
// We've collected some characters to throw away, keep going.
}
Err(e) if e.kind() == ErrorKind::TimedOut => {
// Buffer is empty!
break;
}
Err(e) => {
return Err(e.into());
}
}
}
// Crank the timeout back up.
port.set_timeout(Duration::from_secs(1))?;
match cmd.cmd {
ISPCommand::Ping => {
do_ping(&mut *port)?;
println!("ping success.");
}
ISPCommand::ReadMemory {
address,
count,
path,
} => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, address, count)?;
let mut out = std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(&path)?;
out.write_all(&m)?;
println!("Output written to {:?}", path);
}
ISPCommand::WriteMemory { address, file } => {
do_ping(&mut *port)?;
println!("If you didn't already erase the flash this operation will fail!");
println!("This operation may take a while");
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, address, &infile)?;
println!("Write complete!");
}
ISPCommand::FlashEraseAll => {
do_ping(&mut *port)?;
do_isp_flash_erase_all(&mut *port)?;
println!("Flash erased!");
}
ISPCommand::FlashEraseRegion {
start_address,
byte_count,
} => {
do_ping(&mut *port)?;
do_isp_flash_erase_region(&mut *port, start_address, byte_count)?;
println!("Flash region erased!");
}
// Yes this is just another write-memory call but remembering addresses
// is hard.
ISPCommand::WriteCMPA { file } => {
do_ping(&mut *port)?;
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, 0x9e400, &infile)?;
println!("Write to CMPA done!");
}
ISPCommand::EraseCMPA => {
do_ping(&mut *port)?;
// Write 512 bytes of zero
let bytes = [0; 512];
do_isp_write_memory(&mut *port, 0x9e400, &bytes)?;
println!("CMPA region erased!");
println!("You can now boot unsigned images");
}
ISPCommand::ReadCMPA { file } => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, 0x9e400, 512)?;
let mut out = match file {
Some(ref path) => Box::new(
std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(path)?,
) as Box<dyn Write>,
None => Box::new(std::io::stdout()) as Box<dyn Write>,
};
out.write_all(&m)?;
eprintln | pretty_print_bootloader_prop | identifier_name |
lpc55_flash.rs | , clap::ValueEnum)]
enum CfpaChoice {
Scratch,
Ping,
Pong,
}
#[derive(Debug, Parser)]
#[clap(name = "isp")]
struct Isp {
/// UART port
#[clap(name = "port")]
port: String,
/// How fast to run the UART. 57,600 baud seems very reliable but is rather
/// slow. In certain test setups we've gotten rates of up to 1Mbaud to work
/// reliably -- your mileage may vary!
#[clap(short = 'b', default_value = "57600")]
baud_rate: u32,
#[clap(subcommand)]
cmd: ISPCommand,
}
fn pretty_print_bootloader_prop(prop: BootloaderProperty, params: Vec<u32>) | BootloaderProperty::CRCStatus => {
println!("CRC status = {}", params[1]);
}
BootloaderProperty::VerifyWrites => {
println!("Verify Writes (bool) {}", params[1]);
}
BootloaderProperty::MaxPacketSize => {
println!("Max Packet Size = {}", params[1]);
}
BootloaderProperty::ReservedRegions => {
println!("Reserved regions? = {:x?}", params);
}
BootloaderProperty::RAMStart => {
println!("RAM start = 0x{:x}", params[1]);
}
BootloaderProperty::RAMSize => {
println!("RAM size = 0x{:x}", params[1]);
}
BootloaderProperty::SystemDeviceID => {
println!("DEVICE_ID0 register = 0x{:x}", params[1]);
}
BootloaderProperty::SecurityState => {
println!(
"Security State = {}",
if params[1] == 0x5aa55aa5 {
"UNLOCKED"
} else {
"LOCKED"
}
);
}
BootloaderProperty::UniqueID => {
println!(
"UUID = {:x}{:x}{:x}{:x}",
params[1], params[2], params[3], params[4]
);
}
BootloaderProperty::TargetVersion => {
println!("Target version = {:x}", params[1]);
}
BootloaderProperty::FlashPageSize => {
println!("Flash page size = {:x}", params[1]);
}
BootloaderProperty::IRQPinStatus => {
println!("IRQ Pin Status = {}", params[1]);
}
BootloaderProperty::FFRKeyStoreStatus => {
println!("FFR Store Status = {}", params[1]);
}
}
}
fn pretty_print_error(params: Vec<u32>) {
let reason = params[1] & 0xfffffff0;
if reason == 0 {
println!("No errors reported");
} else if reason == 0x0602f300 {
println!("Passive boot failed, reason:");
let specific_reason = params[2] & 0xfffffff0;
match specific_reason {
0x0b36f300 => {
println!("Secure image authentication failed. Check:");
println!("- Is the image you are booting signed?");
println!("- Is the image signed with the corresponding key?");
}
0x0b37f300 => {
println!("Application CRC failed");
}
0x0b35f300 => {
println!("Application entry point and/or stack is invalid");
}
0x0b38f300 => {
println!("DICE failure. Check:");
println!("- Key store is set up properly (UDS)");
}
0x0d70f300 => {
println!("Trying to boot a TZ image on a device that doesn't have TZ!");
}
0x0d71f300 => {
println!("Error reading TZ Image type from CMPA");
}
0x0d72f300 => {
println!("Bad TZ image mode, check your image");
}
0x0c00f500 => {
println!("Application returned to the ROM?");
}
_ => {
println!("Some other reason, raw bytes: {:x?}", params);
}
}
} else {
println!("Something bad happen: {:x?}", params);
}
}
fn main() -> Result<()> {
let cmd = Isp::parse();
// The target _technically_ has autobaud but it's very flaky
// and these seem to be the preferred settings
//
// We initially set the timeout short so we can drain the incoming buffer in
// a portable manner below. We'll adjust it up after that.
let mut port = serialport::new(&cmd.port, cmd.baud_rate)
.timeout(Duration::from_millis(100))
.data_bits(DataBits::Eight)
.flow_control(FlowControl::None)
.parity(Parity::None)
.stop_bits(StopBits::One)
.open()?;
// Extract any bytes left over in the serial port driver from previous
// interaction.
loop {
let mut throwaway = [0; 16];
match port.read(&mut throwaway) {
Ok(0) => {
// This should only happen on nonblocking reads, which we
// haven't asked for, but it does mean the buffer is empty so
// treat it as success.
break;
}
Ok(_) => {
// We've collected some characters to throw away, keep going.
}
Err(e) if e.kind() == ErrorKind::TimedOut => {
// Buffer is empty!
break;
}
Err(e) => {
return Err(e.into());
}
}
}
// Crank the timeout back up.
port.set_timeout(Duration::from_secs(1))?;
match cmd.cmd {
ISPCommand::Ping => {
do_ping(&mut *port)?;
println!("ping success.");
}
ISPCommand::ReadMemory {
address,
count,
path,
} => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, address, count)?;
let mut out = std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(&path)?;
out.write_all(&m)?;
println!("Output written to {:?}", path);
}
ISPCommand::WriteMemory { address, file } => {
do_ping(&mut *port)?;
println!("If you didn't already erase the flash this operation will fail!");
println!("This operation may take a while");
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, address, &infile)?;
println!("Write complete!");
}
ISPCommand::FlashEraseAll => {
do_ping(&mut *port)?;
do_isp_flash_erase_all(&mut *port)?;
println!("Flash erased!");
}
ISPCommand::FlashEraseRegion {
start_address,
byte_count,
} => {
do_ping(&mut *port)?;
do_isp_flash_erase_region(&mut *port, start_address, byte_count)?;
println!("Flash region erased!");
}
// Yes this is just another write-memory call but remembering addresses
// is hard.
ISPCommand::WriteCMPA { file } => {
do_ping(&mut *port)?;
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, 0x9e400, &infile)?;
println!("Write to CMPA done!");
}
ISPCommand::EraseCMPA => {
do_ping(&mut *port)?;
// Write 512 bytes of zero
let bytes = [0; 512];
do_isp_write_memory(&mut *port, 0x9e400, &bytes)?;
println!("CMPA region erased!");
println!("You can now boot unsigned images");
}
ISPCommand::ReadCMPA { file } => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, 0x9e400, 512)?;
let mut out = match file {
Some(ref path) => Box::new(
std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(path)?,
) as Box<dyn Write>,
None => Box::new(std::io::stdout()) as Box<dyn Write>,
};
out.write_all(&m)?;
eprintln | {
match prop {
BootloaderProperty::BootloaderVersion => {
println!("Version {:x}", params[1]);
}
BootloaderProperty::AvailablePeripherals => {
println!("Bitmask of peripherals {:x}", params[1]);
}
BootloaderProperty::FlashStart => {
println!("Flash start = 0x{:x}", params[1]);
}
BootloaderProperty::FlashSize => {
println!("Flash Size = {:x}", params[1]);
}
BootloaderProperty::FlashSectorSize => {
println!("Flash Sector Size = {:x}", params[1]);
}
BootloaderProperty::AvailableCommands => {
println!("Bitmask of commands = {:x}", params[1]);
} | identifier_body |
lpc55_flash.rs | , clap::ValueEnum)]
enum CfpaChoice {
Scratch,
Ping,
Pong,
}
#[derive(Debug, Parser)]
#[clap(name = "isp")]
struct Isp {
/// UART port
#[clap(name = "port")]
port: String,
/// How fast to run the UART. 57,600 baud seems very reliable but is rather
/// slow. In certain test setups we've gotten rates of up to 1Mbaud to work
/// reliably -- your mileage may vary!
#[clap(short = 'b', default_value = "57600")]
baud_rate: u32,
#[clap(subcommand)]
cmd: ISPCommand,
}
fn pretty_print_bootloader_prop(prop: BootloaderProperty, params: Vec<u32>) {
match prop {
BootloaderProperty::BootloaderVersion => {
println!("Version {:x}", params[1]);
}
BootloaderProperty::AvailablePeripherals => {
println!("Bitmask of peripherals {:x}", params[1]);
}
BootloaderProperty::FlashStart => {
println!("Flash start = 0x{:x}", params[1]);
}
BootloaderProperty::FlashSize => {
println!("Flash Size = {:x}", params[1]);
}
BootloaderProperty::FlashSectorSize => {
println!("Flash Sector Size = {:x}", params[1]);
}
BootloaderProperty::AvailableCommands => {
println!("Bitmask of commands = {:x}", params[1]);
}
BootloaderProperty::CRCStatus => {
println!("CRC status = {}", params[1]);
}
BootloaderProperty::VerifyWrites => {
println!("Verify Writes (bool) {}", params[1]);
}
BootloaderProperty::MaxPacketSize => {
println!("Max Packet Size = {}", params[1]);
}
BootloaderProperty::ReservedRegions => {
println!("Reserved regions? = {:x?}", params);
}
BootloaderProperty::RAMStart => {
println!("RAM start = 0x{:x}", params[1]);
}
BootloaderProperty::RAMSize => {
println!("RAM size = 0x{:x}", params[1]);
}
BootloaderProperty::SystemDeviceID => {
println!("DEVICE_ID0 register = 0x{:x}", params[1]);
}
BootloaderProperty::SecurityState => {
println!(
"Security State = {}",
if params[1] == 0x5aa55aa5 {
"UNLOCKED"
} else {
"LOCKED"
}
);
}
BootloaderProperty::UniqueID => {
println!(
"UUID = {:x}{:x}{:x}{:x}",
params[1], params[2], params[3], params[4]
);
}
BootloaderProperty::TargetVersion => {
println!("Target version = {:x}", params[1]);
}
BootloaderProperty::FlashPageSize => {
println!("Flash page size = {:x}", params[1]);
}
BootloaderProperty::IRQPinStatus => {
println!("IRQ Pin Status = {}", params[1]);
}
BootloaderProperty::FFRKeyStoreStatus => {
println!("FFR Store Status = {}", params[1]);
}
}
}
fn pretty_print_error(params: Vec<u32>) {
let reason = params[1] & 0xfffffff0;
if reason == 0 {
println!("No errors reported");
} else if reason == 0x0602f300 {
println!("Passive boot failed, reason:");
let specific_reason = params[2] & 0xfffffff0;
match specific_reason {
0x0b36f300 => {
println!("Secure image authentication failed. Check:");
println!("- Is the image you are booting signed?");
println!("- Is the image signed with the corresponding key?");
}
0x0b37f300 => {
println!("Application CRC failed");
}
0x0b35f300 => |
0x0b38f300 => {
println!("DICE failure. Check:");
println!("- Key store is set up properly (UDS)");
}
0x0d70f300 => {
println!("Trying to boot a TZ image on a device that doesn't have TZ!");
}
0x0d71f300 => {
println!("Error reading TZ Image type from CMPA");
}
0x0d72f300 => {
println!("Bad TZ image mode, check your image");
}
0x0c00f500 => {
println!("Application returned to the ROM?");
}
_ => {
println!("Some other reason, raw bytes: {:x?}", params);
}
}
} else {
println!("Something bad happen: {:x?}", params);
}
}
fn main() -> Result<()> {
let cmd = Isp::parse();
// The target _technically_ has autobaud but it's very flaky
// and these seem to be the preferred settings
//
// We initially set the timeout short so we can drain the incoming buffer in
// a portable manner below. We'll adjust it up after that.
let mut port = serialport::new(&cmd.port, cmd.baud_rate)
.timeout(Duration::from_millis(100))
.data_bits(DataBits::Eight)
.flow_control(FlowControl::None)
.parity(Parity::None)
.stop_bits(StopBits::One)
.open()?;
// Extract any bytes left over in the serial port driver from previous
// interaction.
loop {
let mut throwaway = [0; 16];
match port.read(&mut throwaway) {
Ok(0) => {
// This should only happen on nonblocking reads, which we
// haven't asked for, but it does mean the buffer is empty so
// treat it as success.
break;
}
Ok(_) => {
// We've collected some characters to throw away, keep going.
}
Err(e) if e.kind() == ErrorKind::TimedOut => {
// Buffer is empty!
break;
}
Err(e) => {
return Err(e.into());
}
}
}
// Crank the timeout back up.
port.set_timeout(Duration::from_secs(1))?;
match cmd.cmd {
ISPCommand::Ping => {
do_ping(&mut *port)?;
println!("ping success.");
}
ISPCommand::ReadMemory {
address,
count,
path,
} => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, address, count)?;
let mut out = std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(&path)?;
out.write_all(&m)?;
println!("Output written to {:?}", path);
}
ISPCommand::WriteMemory { address, file } => {
do_ping(&mut *port)?;
println!("If you didn't already erase the flash this operation will fail!");
println!("This operation may take a while");
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, address, &infile)?;
println!("Write complete!");
}
ISPCommand::FlashEraseAll => {
do_ping(&mut *port)?;
do_isp_flash_erase_all(&mut *port)?;
println!("Flash erased!");
}
ISPCommand::FlashEraseRegion {
start_address,
byte_count,
} => {
do_ping(&mut *port)?;
do_isp_flash_erase_region(&mut *port, start_address, byte_count)?;
println!("Flash region erased!");
}
// Yes this is just another write-memory call but remembering addresses
// is hard.
ISPCommand::WriteCMPA { file } => {
do_ping(&mut *port)?;
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, 0x9e400, &infile)?;
println!("Write to CMPA done!");
}
ISPCommand::EraseCMPA => {
do_ping(&mut *port)?;
// Write 512 bytes of zero
let bytes = [0; 512];
do_isp_write_memory(&mut *port, 0x9e400, &bytes)?;
println!("CMPA region erased!");
println!("You can now boot unsigned images");
}
ISPCommand::ReadCMPA { file } => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, 0x9e400, 512)?;
let mut out = match file {
Some(ref path) => Box::new(
std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(path)?,
) as Box<dyn Write>,
None => Box::new(std::io::stdout()) as Box<dyn Write>,
};
out.write_all(&m)?;
e | {
println!("Application entry point and/or stack is invalid");
} | conditional_block |
lpc55_flash.rs | file: PathBuf,
},
/// Erase the CMPA region (use to boot non-secure binaries again)
#[clap(name = "erase-cmpa")]
EraseCMPA,
/// Save the CMPA region to a file
ReadCMPA {
/// Write to FILE, or stdout if omitted
file: Option<PathBuf>,
},
/// Save the CFPA region to a file
ReadCFPA {
#[clap(short, long)]
page: Option<CfpaChoice>,
file: PathBuf,
},
/// Write the CFPA region from the contents of a file.
WriteCFPA {
#[clap(short, long)]
update_version: bool,
file: PathBuf,
},
/// Put a minimalist program on to allow attaching via SWD
Restore,
/// Send SB update file
SendSBUpdate {
file: PathBuf,
},
/// Set up key store this involves
/// - Enroll
/// - Setting UDS
/// - Setting SBKEK
/// - Writing to persistent storage
SetupKeyStore {
file: PathBuf,
},
/// Trigger a new enrollment in the PUF
Enroll,
/// Generate a new device secret for use in DICE
GenerateUDS,
/// Write keystore to flash
WriteKeyStore,
/// Erase existing keystore
EraseKeyStore,
/// Set the SBKEK, required for SB Updates
SetSBKek {
file: PathBuf,
},
GetProperty {
#[arg(value_parser = BootloaderProperty::from_str)]
prop: BootloaderProperty,
},
LastError,
}
#[derive(Copy, Clone, Debug, clap::ValueEnum)]
enum CfpaChoice {
Scratch,
Ping,
Pong,
}
#[derive(Debug, Parser)]
#[clap(name = "isp")]
struct Isp {
/// UART port
#[clap(name = "port")]
port: String,
/// How fast to run the UART. 57,600 baud seems very reliable but is rather
/// slow. In certain test setups we've gotten rates of up to 1Mbaud to work
/// reliably -- your mileage may vary!
#[clap(short = 'b', default_value = "57600")]
baud_rate: u32,
#[clap(subcommand)]
cmd: ISPCommand,
}
fn pretty_print_bootloader_prop(prop: BootloaderProperty, params: Vec<u32>) {
match prop {
BootloaderProperty::BootloaderVersion => {
println!("Version {:x}", params[1]);
}
BootloaderProperty::AvailablePeripherals => {
println!("Bitmask of peripherals {:x}", params[1]);
}
BootloaderProperty::FlashStart => {
println!("Flash start = 0x{:x}", params[1]);
}
BootloaderProperty::FlashSize => {
println!("Flash Size = {:x}", params[1]);
}
BootloaderProperty::FlashSectorSize => {
println!("Flash Sector Size = {:x}", params[1]);
}
BootloaderProperty::AvailableCommands => {
println!("Bitmask of commands = {:x}", params[1]);
}
BootloaderProperty::CRCStatus => {
println!("CRC status = {}", params[1]);
}
BootloaderProperty::VerifyWrites => {
println!("Verify Writes (bool) {}", params[1]);
}
BootloaderProperty::MaxPacketSize => {
println!("Max Packet Size = {}", params[1]);
}
BootloaderProperty::ReservedRegions => {
println!("Reserved regions? = {:x?}", params);
}
BootloaderProperty::RAMStart => {
println!("RAM start = 0x{:x}", params[1]);
}
BootloaderProperty::RAMSize => {
println!("RAM size = 0x{:x}", params[1]);
}
BootloaderProperty::SystemDeviceID => {
println!("DEVICE_ID0 register = 0x{:x}", params[1]);
}
BootloaderProperty::SecurityState => {
println!(
"Security State = {}",
if params[1] == 0x5aa55aa5 {
"UNLOCKED"
} else {
"LOCKED"
}
);
}
BootloaderProperty::UniqueID => {
println!(
"UUID = {:x}{:x}{:x}{:x}",
params[1], params[2], params[3], params[4]
);
}
BootloaderProperty::TargetVersion => {
println!("Target version = {:x}", params[1]);
}
BootloaderProperty::FlashPageSize => {
println!("Flash page size = {:x}", params[1]);
}
BootloaderProperty::IRQPinStatus => {
println!("IRQ Pin Status = {}", params[1]);
}
BootloaderProperty::FFRKeyStoreStatus => {
println!("FFR Store Status = {}", params[1]);
}
}
}
fn pretty_print_error(params: Vec<u32>) {
let reason = params[1] & 0xfffffff0;
if reason == 0 {
println!("No errors reported");
} else if reason == 0x0602f300 {
println!("Passive boot failed, reason:");
let specific_reason = params[2] & 0xfffffff0;
match specific_reason {
0x0b36f300 => {
println!("Secure image authentication failed. Check:");
println!("- Is the image you are booting signed?");
println!("- Is the image signed with the corresponding key?");
}
0x0b37f300 => {
println!("Application CRC failed");
}
0x0b35f300 => {
println!("Application entry point and/or stack is invalid");
}
0x0b38f300 => {
println!("DICE failure. Check:");
println!("- Key store is set up properly (UDS)");
}
0x0d70f300 => {
println!("Trying to boot a TZ image on a device that doesn't have TZ!");
}
0x0d71f300 => {
println!("Error reading TZ Image type from CMPA");
}
0x0d72f300 => {
println!("Bad TZ image mode, check your image");
}
0x0c00f500 => {
println!("Application returned to the ROM?");
}
_ => {
println!("Some other reason, raw bytes: {:x?}", params);
}
}
} else {
println!("Something bad happen: {:x?}", params);
}
}
fn main() -> Result<()> {
let cmd = Isp::parse();
// The target _technically_ has autobaud but it's very flaky
// and these seem to be the preferred settings
//
// We initially set the timeout short so we can drain the incoming buffer in
// a portable manner below. We'll adjust it up after that.
let mut port = serialport::new(&cmd.port, cmd.baud_rate)
.timeout(Duration::from_millis(100))
.data_bits(DataBits::Eight)
.flow_control(FlowControl::None)
.parity(Parity::None)
.stop_bits(StopBits::One)
.open()?;
// Extract any bytes left over in the serial port driver from previous
// interaction.
loop {
let mut throwaway = [0; 16];
match port.read(&mut throwaway) {
Ok(0) => {
// This should only happen on nonblocking reads, which we
// haven't asked for, but it does mean the buffer is empty so
// treat it as success.
break;
}
Ok(_) => {
// We've collected some characters to throw away, keep going.
}
Err(e) if e.kind() == ErrorKind::TimedOut => {
// Buffer is empty!
break;
}
Err(e) => {
return Err(e.into());
}
}
}
// Crank the timeout back up.
port.set_timeout(Duration::from_secs(1))?;
match cmd.cmd {
ISPCommand::Ping => {
do_ping(&mut *port)?;
println!("ping success.");
}
ISPCommand::ReadMemory {
address,
count,
path,
} => {
do_ping(&mut *port)?;
let m = do_isp_read_memory(&mut *port, address, count)?;
let mut out = std::fs::OpenOptions::new()
.write(true)
.truncate(true)
.create(true)
.open(&path)?;
out.write_all(&m)?;
println!("Output written to {:?}", path);
}
ISPCommand::WriteMemory { address, file } => {
do_ping(&mut *port)?;
println!("If you didn't already erase the flash this operation will fail!");
println!("This operation may take a while");
let infile = std::fs::read(file)?;
do_isp_write_memory(&mut *port, address, & | },
/// Write a file to the CMPA region
#[clap(name = "write-cmpa")]
WriteCMPA { | random_line_split |
|
main.js | websocket.onmessage = function (evt) { onMessage(evt) };
websocket.onerror = function (evt) { onError(evt) };
}
function onOpen(evt) {
state.className = "success";
addMessage(127, '');
// state.innerHTML = "Connected to server";
}
function onClose(evt) {
state.className = "fail";
// state.innerHTML = "Not connected";
//connected.innerHTML = "0";
}
function onMessage(evt) {
var cad = evt.data;
var obj;
var opcode = cad.charCodeAt(0);
var message = cad.slice(1);
message_websocket_recibed = message;
log.innerHTML = '<li class="message">' + cad + "</li>" + log.innerHTML;
switch (opcode) {
case 0:
expressions_ListExpressions();
break;
case 1:
loadProgramsList(message);
break;
case 12:
arm_expressions_ListExpressions();
break;
case 19:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 20:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 21:
loadMenuMap(1);
break;
case 22:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.") {
alert("Tiene que tener el control.");
var room_number = parseInt(room_selected, 10); // We grab the scale here too, even without control, so we can still draw where Doris is
scaleSVG.x = document.getElementById("mySVG").getBBox().width / parseInt(room_properties.width[room_number], 10);
scaleSVG.y = document.getElementById("mySVG").getBBox().height / parseInt(room_properties.height[room_number], 10);
}
else if (obj.robot.error === "None.") {
alert(selected_map.nameSector.concat(" cargado con éxito."));
}
/// Load (or skip) all the map features depending on whether we have control ///
/// If we do have it, we request the features ///
break;
case 23: //We load landmarks
selected_map.landmarks = StringTo_Object(message_websocket_recibed);
addMessage(24, selected_map.idMap.concat(",", selected_map.idSector));
//we call features
break;
case 24:
selected_map.features = StringTo_Object(message_websocket_recibed);
addMessage(25, selected_map.idMap.concat(",", selected_map.idSector));
break;
case 25: //we load sites
selected_map.sites = StringTo_Object(message_websocket_recibed);
loadMenuMap(2);
// getFeaturesAndSites(map_points.sites);
// makeMenuProperties(); //We make the menu
// DrawPoints();
break;
/*case 27:
siteAdded(message);
break;*/
case 34:
// MakeMenuRooms();
loadMenuMap(0);
break;
case 124:
notifyMe(message);
break;
case 125:
changeControlStatus(message);
//log.innerHTML = '<li class="message"> case 125: ' + message + "</li>" + log.innerHTML;
break;
case 126:
releaseControlStatus(message);
break;
case 127:
processRTP(message);
break;
default:
break;
}
connected = document.getElementById("connected");
// log = document.getElementById("log");
//state = document.getElementById("status");
}
function onError(evt) {
state.className = "fail";
// state.innerHTML = "Communication error";
}
function addMessage(command, complement) {
var message = String.fromCharCode(command);
message = message + complement;
//chat.value = "";
websocket.send(message);
}
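// Illustrative examples: addMessage(24, selected_map.idMap + "," + selected_map.idSector) sends the character with code 24 followed by something like "3,1"; addMessage(127, '') asks the server for the real-time streaming port (handled by processRTP).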
function processRTP(message) { |
function onRTPMessage(evt) {
var cad = evt.data;
var message = cad.slice(1, cad.length);
messageSplitted = message.split("|");
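// Real-time frames look like "$POSE_VEL|<data>" or "$DORIS|<data>": a type tag and a payload separated by '|'.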
if (messageSplitted[0] === "$POSE_VEL") {
showPositionVel(messageSplitted[1]);
if(svg_first_sector_load){
drawDorisPosition(messageSplitted[1]);
}
} else if (messageSplitted[0] === "$DORIS"){
showDorisInfo(messageSplitted[1]);
}
}
function onRTPOpen(evt){
console.log("2 websocket creado con exito");
}
function onRTPClose(evt){
console.log("2 websocket se ha cerrado");
}
function onRTPError(evt){
console.log("2 websocket ha tenido un error");
}
/*
function siteAdded(message){
var obj = JSON.parse(message);
alert("Added at index: " + obj.robot.index);
}
*/
function requestReleaseControl() {
if (controlStatus === 0) {
addMessage(124, '');
} else if (controlStatus === 1) {
addMessage(126, '');
}
}
function notifyMe(message) {
var obj = JSON.parse(message);
var notificationUser = noty({
text: "Hey there! The user from " + obj.control.requester + " is requesting control.",
type: "information",
dismissQueue: true,
layout: "bottomRight",
theme: 'defaultTheme',
buttons: [
{
addClass: 'btn btn-primary', text: 'Ok', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '1');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Ok" button', type: 'success' });
}
},
{
addClass: 'btn btn-danger', text: 'Cancel', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '0');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Cancel" button', type: 'error' });
}
}
]
});
}
function changeControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var grantedStatus = parseInt(obj.control.granted);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (grantedStatus === 0) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
} else if (grantedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-control"
controlLink.className = "hi-icon hi-icon-locked";
controlStatus = 1;
}
}
}
function releaseControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var releasedStatus = parseInt(obj.control.released);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (releasedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
}
}
}
////////////////for not duplicating SCRIPTS........ http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml http://www.javascriptkit.com/javatutors/loadjavascriptcss2.shtml
function loadjscssfile(filename, filetype) {
if (filetype == "js") { //if filename is a external JavaScript file
var fileref = document.createElement('script')
fileref.setAttribute("type", "text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype == "css") { //if filename is an external CSS file
var fileref = document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
if (typeof fileref != "undefined")
document.getElementsByTagName |
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.streaming.error);
if (errorStatus === 0) {
var port = obj.streaming.port;
var rtpURL = 'ws://192.168.1.101:' + port;
rtPackages = new WebSocket(rtpURL);
rtPackages.onmessage = function (evt) { onRTPMessage(evt) };
rtPackages.onopen = function (evt) {onRTPOpen(evt)};
rtPackages.onclose = function (evt) {onRTPClose(evt)};
rtPackages.onerror = function (evt) {onRTPError(evt)};
}
}
| identifier_body |
main.js | websocket.onmessage = function (evt) { onMessage(evt) };
websocket.onerror = function (evt) { onError(evt) };
}
function onOpen(evt) {
state.className = "success";
addMessage(127, '');
// state.innerHTML = "Connected to server";
}
function onClose(evt) {
state.className = "fail";
// state.innerHTML = "Not connected";
//connected.innerHTML = "0";
}
function onMessage(evt) {
var cad = evt.data;
var obj;
var opcode = cad.charCodeAt(0);
var message = cad.slice(1);
message_websocket_recibed = message;
log.innerHTML = '<li class="message">' + cad + "</li>" + log.innerHTML;
switch (opcode) {
case 0:
expressions_ListExpressions();
break;
case 1:
loadProgramsList(message);
break;
case 12:
arm_expressions_ListExpressions();
break;
case 19:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 20:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 21:
loadMenuMap(1);
break;
case 22:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.") {
alert("Tiene que tener el control.");
var room_number = parseInt(room_selected, 10); // We grab the scale here too, even without control, so we can still draw where Doris is
scaleSVG.x = document.getElementById("mySVG").getBBox().width / parseInt(room_properties.width[room_number], 10);
scaleSVG.y = document.getElementById("mySVG").getBBox().height / parseInt(room_properties.height[room_number], 10);
}
else if (obj.robot.error === "None.") {
alert(selected_map.nameSector.concat(" cargado con éxito."));
}
/// Load (or skip) all the map features depending on whether we have control ///
/// If we do have it, we request the features ///
break;
case 23: //We load landmarks
selected_map.landmarks = StringTo_Object(message_websocket_recibed);
addMessage(24, selected_map.idMap.concat(",", selected_map.idSector));
//we call features
break;
case 24:
selected_map.features = StringTo_Object(message_websocket_recibed);
addMessage(25, selected_map.idMap.concat(",", selected_map.idSector));
break;
case 25: //we load sites
selected_map.sites = StringTo_Object(message_websocket_recibed);
loadMenuMap(2);
// getFeaturesAndSites(map_points.sites);
// makeMenuProperties(); //We make the menu
// DrawPoints();
break;
| /*case 27:
siteAdded(message);
break;*/
case 34:
// MakeMenuRooms();
loadMenuMap(0);
break;
case 124:
notifyMe(message);
break;
case 125:
changeControlStatus(message);
//log.innerHTML = '<li class="message"> case 125: ' + message + "</li>" + log.innerHTML;
break;
case 126:
releaseControlStatus(message);
break;
case 127:
processRTP(message);
break;
default:
break;
}
connected = document.getElementById("connected");
// log = document.getElementById("log");
//state = document.getElementById("status");
}
function onError(evt) {
state.className = "fail";
// state.innerHTML = "Communication error";
}
function addMessage(command, complement) {
var message = String.fromCharCode(command);
message = message + complement;
//chat.value = "";
websocket.send(message);
}
function processRTP(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.streaming.error);
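// error 0 means the server opened a real-time channel: connect a second WebSocket to the advertised port for pose/telemetry packets.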
if (errorStatus === 0) {
var port = obj.streaming.port;
var rtpURL = 'ws://192.168.1.101:' + port;
rtPackages = new WebSocket(rtpURL);
rtPackages.onmessage = function (evt) { onRTPMessage(evt) };
rtPackages.onopen = function (evt) {onRTPOpen(evt)};
rtPackages.onclose = function (evt) {onRTPClose(evt)};
rtPackages.onerror = function (evt) {onRTPError(evt)};
}
}
function onRTPMessage(evt) {
var cad = evt.data;
var message = cad.slice(1, cad.length);
messageSplitted = message.split("|");
if (messageSplitted[0] === "$POSE_VEL") {
showPositionVel(messageSplitted[1]);
if(svg_first_sector_load){
drawDorisPosition(messageSplitted[1]);
}
} else if (messageSplitted[0] === "$DORIS"){
showDorisInfo(messageSplitted[1]);
}
}
function onRTPOpen(evt){
console.log("2 websocket creado con exito");
}
function onRTPClose(evt){
console.log("2 websocket se ha cerrado");
}
function onRTPError(evt){
console.log("2 websocket ha tenido un error");
}
/*
function siteAdded(message){
var obj = JSON.parse(message);
alert("Added at index: " + obj.robot.index);
}
*/
function requestReleaseControl() {
if (controlStatus === 0) {
addMessage(124, '');
} else if (controlStatus === 1) {
addMessage(126, '');
}
}
function notifyMe(message) {
var obj = JSON.parse(message);
var notificationUser = noty({
text: "Hey there! The user from " + obj.control.requester + " is requesting control.",
type: "information",
dismissQueue: true,
layout: "bottomRight",
theme: 'defaultTheme',
buttons: [
{
addClass: 'btn btn-primary', text: 'Ok', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '1');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Ok" button', type: 'success' });
}
},
{
addClass: 'btn btn-danger', text: 'Cancel', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '0');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Cancel" button', type: 'error' });
}
}
]
});
}
function changeControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var grantedStatus = parseInt(obj.control.granted);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (grantedStatus === 0) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
} else if (grantedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-control"
controlLink.className = "hi-icon hi-icon-locked";
controlStatus = 1;
}
}
}
function releaseControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var releasedStatus = parseInt(obj.control.released);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (releasedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
}
}
}
////////////////for not duplicating SCRIPTS........ http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml http://www.javascriptkit.com/javatutors/loadjavascriptcss2.shtml
function loadjscssfile(filename, filetype) {
if (filetype == "js") { //if filename is a external JavaScript file
var fileref = document.createElement('script')
fileref.setAttribute("type", "text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype == "css") { //if filename is an external CSS file
var fileref = document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
if (typeof fileref != "undefined")
document.getElementsByTagName("head")[ | random_line_split |
|
main.js | websocket.onmessage = function (evt) { onMessage(evt) };
websocket.onerror = function (evt) { onError(evt) };
}
function onOpen(evt) {
state.className = "success";
addMessage(127, '');
// state.innerHTML = "Connected to server";
}
function onClose(evt) {
state.className = "fail";
// state.innerHTML = "Not connected";
//connected.innerHTML = "0";
}
function onMessage(evt) {
var cad = evt.data;
var obj;
var opcode = cad.charCodeAt(0);
var message = cad.slice(1);
message_websocket_recibed = message;
log.innerHTML = '<li class="message">' + cad + "</li>" + log.innerHTML;
switch (opcode) {
case 0:
expressions_ListExpressions();
break;
case 1:
loadProgramsList(message);
break;
case 12:
arm_expressions_ListExpressions();
break;
case 19:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 20:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 21:
loadMenuMap(1);
break;
case 22:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.") {
alert("Tiene que tener el control.");
var room_number = parseInt(room_selected, 10); // We grab the scale here too, even without control, so we can still draw where Doris is
scaleSVG.x = document.getElementById("mySVG").getBBox().width / parseInt(room_properties.width[room_number], 10);
scaleSVG.y = document.getElementById("mySVG").getBBox().height / parseInt(room_properties.height[room_number], 10);
}
else if (obj.robot.error === "None.") {
alert(selected_map.nameSector.concat(" cargado con éxito."));
}
/// Load (or skip) all the map features depending on whether we have control ///
/// If we do have it, we request the features ///
break;
case 23: //We load landmarks
selected_map.landmarks = StringTo_Object(message_websocket_recibed);
addMessage(24, selected_map.idMap.concat(",", selected_map.idSector));
//we call features
break;
case 24:
selected_map.features = StringTo_Object(message_websocket_recibed);
addMessage(25, selected_map.idMap.concat(",", selected_map.idSector));
break;
case 25: //we load sites
selected_map.sites = StringTo_Object(message_websocket_recibed);
loadMenuMap(2);
// getFeaturesAndSites(map_points.sites);
// makeMenuProperties(); //We make the menu
// DrawPoints();
break;
/*case 27:
siteAdded(message);
break;*/
case 34:
// MakeMenuRooms();
loadMenuMap(0);
break;
case 124:
notifyMe(message);
break;
case 125:
changeControlStatus(message);
//log.innerHTML = '<li class="message"> case 125: ' + message + "</li>" + log.innerHTML;
break;
case 126:
releaseControlStatus(message);
break;
case 127:
processRTP(message);
break;
default:
break;
}
connected = document.getElementById("connected");
// log = document.getElementById("log");
//state = document.getElementById("status");
}
function onError(evt) {
state.className = "fail";
// state.innerHTML = "Communication error";
}
function addMessage(command, complement) {
var message = String.fromCharCode(command);
message = message + complement;
//chat.value = "";
websocket.send(message);
}
function processRTP(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.streaming.error);
if (errorStatus === 0) {
var port = obj.streaming.port;
var rtpURL = 'ws://192.168.1.101:' + port;
rtPackages = new WebSocket(rtpURL);
rtPackages.onmessage = function (evt) { onRTPMessage(evt) };
rtPackages.onopen = function (evt) {onRTPOpen(evt)};
rtPackages.onclose = function (evt) {onRTPClose(evt)};
rtPackages.onerror = function (evt) {onRTPError(evt)};
}
}
function onRTPMessage(evt) {
var cad = evt.data;
var message = cad.slice(1, cad.length);
messageSplitted = message.split("|");
if (messageSplitted[0] === "$POSE_VEL") {
showPositionVel(messageSplitted[1]);
if(svg_first_sector_load){
drawDorisPosition(messageSplitted[1]);
}
} else if (messageSplitted[0] === "$DORIS"){
showDorisInfo(messageSplitted[1]);
}
}
function onRTPOpen(evt){
console.log("2 websocket creado con exito");
}
function o | evt){
console.log("2 websocket se ha cerrado");
}
function onRTPError(evt){
console.log("2 websocket ha tenido un error");
}
/*
function siteAdded(message){
var obj = JSON.parse(message);
alert("Added at index: " + obj.robot.index);
}
*/
function requestReleaseControl() {
if (controlStatus === 0) {
addMessage(124, '');
} else if (controlStatus === 1) {
addMessage(126, '');
}
}
function notifyMe(message) {
var obj = JSON.parse(message);
var notificationUser = noty({
text: "Hey there! The user from " + obj.control.requester + " is requesting control.",
type: "information",
dismissQueue: true,
layout: "bottomRight",
theme: 'defaultTheme',
buttons: [
{
addClass: 'btn btn-primary', text: 'Ok', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '1');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Ok" button', type: 'success' });
}
},
{
addClass: 'btn btn-danger', text: 'Cancel', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '0');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Cancel" button', type: 'error' });
}
}
]
});
}
function changeControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var grantedStatus = parseInt(obj.control.granted);
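// granted: 1 -> this client now holds control, 0 -> control denied or revoked; the lock icon below mirrors this state.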
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (grantedStatus === 0) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
} else if (grantedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-control"
controlLink.className = "hi-icon hi-icon-locked";
controlStatus = 1;
}
}
}
function releaseControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var releasedStatus = parseInt(obj.control.released);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (releasedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
}
}
}
////////////////for not duplicating SCRIPTS........ http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml http://www.javascriptkit.com/javatutors/loadjavascriptcss2.shtml
function loadjscssfile(filename, filetype) {
if (filetype == "js") { //if filename is a external JavaScript file
var fileref = document.createElement('script')
fileref.setAttribute("type", "text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype == "css") { //if filename is an external CSS file
var fileref = document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
if (typeof fileref != "undefined")
document.getElementsByTagName("head | nRTPClose( | identifier_name |
main.js | websocket.onmessage = function (evt) { onMessage(evt) };
websocket.onerror = function (evt) { onError(evt) };
}
function onOpen(evt) {
state.className = "success";
addMessage(127, '');
// state.innerHTML = "Connected to server";
}
function onClose(evt) {
state.className = "fail";
// state.innerHTML = "Not connected";
//connected.innerHTML = "0";
}
function onMessage(evt) {
var cad = evt.data;
var obj;
var opcode = cad.charCodeAt(0);
var message = cad.slice(1);
message_websocket_recibed = message;
log.innerHTML = '<li class="message">' + cad + "</li>" + log.innerHTML;
switch (opcode) {
case 0:
expressions_ListExpressions();
break;
case 1:
loadProgramsList(message);
break;
case 12:
arm_expressions_ListExpressions();
break;
case 19:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 20:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.")
alert("Tiene que tener el control.");
break;
case 21:
loadMenuMap(1);
break;
case 22:
obj = JSON.parse(message);
if (obj.robot.error === "Permission denied.") |
else if (obj.robot.error === "None.") {
alert(selected_map.nameSector.concat(" cargado con éxito."));
}
/// Load (or skip) all the map features depending on whether we have control ///
/// If we do have it, we request the features ///
break;
case 23: //We load landmarks
selected_map.landmarks = StringTo_Object(message_websocket_recibed);
addMessage(24, selected_map.idMap.concat(",", selected_map.idSector));
//we call features
break;
case 24:
selected_map.features = StringTo_Object(message_websocket_recibed);
addMessage(25, selected_map.idMap.concat(",", selected_map.idSector));
break;
case 25: //we load sites
selected_map.sites = StringTo_Object(message_websocket_recibed);
loadMenuMap(2);
// getFeaturesAndSites(map_points.sites);
// makeMenuProperties(); //We make the menu
// DrawPoints();
break;
/*case 27:
siteAdded(message);
break;*/
case 34:
// MakeMenuRooms();
loadMenuMap(0);
break;
case 124:
notifyMe(message);
break;
case 125:
changeControlStatus(message);
//log.innerHTML = '<li class="message"> case 125: ' + message + "</li>" + log.innerHTML;
break;
case 126:
releaseControlStatus(message);
break;
case 127:
processRTP(message);
break;
default:
break;
}
connected = document.getElementById("connected");
// log = document.getElementById("log");
//state = document.getElementById("status");
}
function onError(evt) {
state.className = "fail";
// state.innerHTML = "Communication error";
}
function addMessage(command, complement) {
var message = String.fromCharCode(command);
message = message + complement;
//chat.value = "";
websocket.send(message);
}
function processRTP(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.streaming.error);
if (errorStatus === 0) {
var port = obj.streaming.port;
var rtpURL = 'ws://192.168.1.101:' + port;
rtPackages = new WebSocket(rtpURL);
rtPackages.onmessage = function (evt) { onRTPMessage(evt) };
rtPackages.onopen = function (evt) {onRTPOpen(evt)};
rtPackages.onclose = function (evt) {onRTPClose(evt)};
rtPackages.onerror = function (evt) {onRTPError(evt)};
}
}
function onRTPMessage(evt) {
var cad = evt.data;
var message = cad.slice(1, cad.length);
messageSplitted = message.split("|");
if (messageSplitted[0] === "$POSE_VEL") {
showPositionVel(messageSplitted[1]);
if(svg_first_sector_load){
drawDorisPosition(messageSplitted[1]);
}
} else if (messageSplitted[0] === "$DORIS"){
showDorisInfo(messageSplitted[1]);
}
}
function onRTPOpen(evt){
console.log("2 websocket creado con exito");
}
function onRTPClose(evt){
console.log("2 websocket se ha cerrado");
}
function onRTPError(evt){
console.log("2 websocket ha tenido un error");
}
/*
function siteAdded(message){
var obj = JSON.parse(message);
alert("Added at index: " + obj.robot.index);
}
*/
function requestReleaseControl() {
if (controlStatus === 0) {
addMessage(124, '');
} else if (controlStatus === 1) {
addMessage(126, '');
}
}
function notifyMe(message) {
var obj = JSON.parse(message);
var notificationUser = noty({
text: "Hey there! The user from " + obj.control.requester + " is requesting control.",
type: "information",
dismissQueue: true,
layout: "bottomRight",
theme: 'defaultTheme',
buttons: [
{
addClass: 'btn btn-primary', text: 'Ok', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '1');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Ok" button', type: 'success' });
}
},
{
addClass: 'btn btn-danger', text: 'Cancel', onClick: function ($noty) {
$.noty.closeAll();
addMessage(125, '0');
// noty({ dismissQueue: true, force: true, layout: layout, theme: 'defaultTheme', text: 'You clicked "Cancel" button', type: 'error' });
}
}
]
});
}
function changeControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var grantedStatus = parseInt(obj.control.granted);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (grantedStatus === 0) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
} else if (grantedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-control"
controlLink.className = "hi-icon hi-icon-locked";
controlStatus = 1;
}
}
}
function releaseControlStatus(message) {
var obj = JSON.parse(message);
var errorStatus = parseInt(obj.control.error);
if (errorStatus === 0) {
var releasedStatus = parseInt(obj.control.released);
var controlDiv = document.getElementById("control");
var controlLink = document.getElementById("control-link");
if (releasedStatus === 1) {
controlDiv.className = "hi-icon-effect-1 hi-icon-effect-1a has-no-control"
controlLink.className = "has-no-control hi-icon hi-icon-locked";
controlStatus = 0;
}
}
}
////////////////for not duplicating SCRIPTS........ http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml http://www.javascriptkit.com/javatutors/loadjavascriptcss2.shtml
function loadjscssfile(filename, filetype) {
if (filetype == "js") { //if filename is a external JavaScript file
var fileref = document.createElement('script')
fileref.setAttribute("type", "text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype == "css") { //if filename is an external CSS file
var fileref = document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
if (typeof fileref != "undefined")
document.getElementsByTagName("head | {
alert("Tiene que tener el control.");
var room_number = parseInt(room_selected, 10); //Cogemos la escala aqui tambien aunque no tengamos el control, para asi dibujar donde se encuentra DOris
scaleSVG.x = document.getElementById("mySVG").getBBox().width / parseInt(room_properties.width[room_number], 10);
scaleSVG.y = document.getElementById("mySVG").getBBox().height / parseInt(room_properties.height[room_number], 10);
} | conditional_block |
handler.go | // 星号相关:有几种情况要特殊处理,核心是要分清在标点内部还是外部
// -------------------------- 情况一 -------------------------
// 当前是中文,后面是星号或反引号对开头
// -----------------------------------------------------------
// 粗体中文**abc**
// 斜体中文*abc*
// 点中文`abc`
if isZh(currentRune) {
switch nextRune {
case '*':
doZhStar(&buffer, line, idx, boldCnt, italicCnt)
case '`':
doZhBackQuote(&buffer, line, idx, backQuoteCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况二 -------------------------
// 当前是星号对结尾,后面是中文
// -----------------------------------------------------------
// *abc*中文
if currentRune == '*' && isZh(nextRune) {
// * 之前的字符是英文则需要加空格
// 区分 bold 和 italic
switch preRune {
case '*':
doBoldStarZh(&buffer, line, idx, boldCnt)
default:
doSingleStarZh(&buffer, line, idx, italicCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况三 -------------------------
// 当前是反引号结尾,后面是中文
// -----------------------------------------------------------
// `abc`中文
if currentRune == '`' && isZh(nextRune) {
doBackQuoteZh(&buffer, line, idx, backQuoteCnt)
preRune = currentRune
continue
}
preRune = currentRune
}
}
return buffer.String()
}
func doZhStar(buffer *bytes.Buffer, line []rune, idx, boldCnt, italicCnt int) {
length := len(line)
if idx < length-2 {
cn2 := line[idx+2]
if cn2 == '*' {
// 粗体要看后面第三个字符是否是英文
if idx < length-3 {
cn3 := line[idx+3]
// 粗体中文**a**
if boldCnt%2 == 0 && isGeneralEn(cn3) {
// 一个新粗体的开始
buffer.WriteString(" ")
}
}
} else {
// 斜体要看后面第二个字符是否是英文
// 斜体中文*a*
if italicCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新斜体的开始
buffer.WriteString(" ")
}
}
}
}
func doZhBackQuote(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if idx < len(line)-2 {
cn2 := line[idx+2]
// 小代码块要看后面第二个字符是否是英文
// 点中文`a`
if backQuoteCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新代码块的开始
buffer.WriteString(" ")
}
}
}
func doBoldStarZh(buffer *bytes.Buffer, line []rune, idx, boldCnt int) {
if boldCnt%2 == 0 {
// **abc**粗体中文
if idx-2 > 0 && line[idx-2] != '*' && !isZh(line[idx-2]) {
buffer.WriteString(" ")
}
// ***abc***粗体中文
if line[idx-2] == '*' && idx-3 > 0 && !isZh(line[idx-3]) {
buffer.WriteString(" ")
}
}
}
func doSingleStarZh(buffer *bytes.Buffer, line []rune, idx, italicCnt int) {
if italicCnt%2 == 0 {
// *abc*粗体中文
if idx-1 > 0 && !isZh(line[idx-1]) {
buffer.WriteString(" ")
}
}
}
func doBackQuoteZh(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if backQuoteCnt%2 == 0 {
if idx-1 > 0 && !isZh(line[idx-1]) {
// `abc`点中文
buffer.WriteString(" ")
}
}
}
func handleLinks(text string) string {
if match := linkDetailRegex.FindStringSubmatch(text); len(match) > 3 {
linkText := doFormat([]rune(match[2]))
return fmt.Sprintf("%s[%s](%s)", match[1], linkText, match[3])
}
return text
}
func handleFileInput(inPath string, outPath string, backupPath string) error {
fstat, err := os.Stat(inPath)
if err != nil {
return err
}
var noBackup bool
// 处理文件:初始化 outPath、backupPath
if outPath == "" {
// 省略输出,则默认输出覆盖输入文件,需要备份文件
outPath = inPath
// 处理备份路径
if err := setBackupFilePath(fstat, inPath, outPath, &backupPath, &noBackup); err != nil {
return err
}
} else {
// 指定输出路径
if inPath == outPath {
// 当输出和输入一致,且未指定 backup 为 nobackup 时,还是要设置备份路径
if err := setBackupFilePath(fstat, inPath, outPath, &backupPath, &noBackup); err != nil {
return err
}
} else {
// 输入和输出不一致,不需要备份
noBackup = true
backupPath = "--"
}
// 非 .md 结尾,默认输出路径为目录
if !strings.HasSuffix(outPath, MarkdownSuffix) {
// 传入输出目录
if !existsDir(outPath) {
return errors.New("输出目录不存在")
}
outPath = fmt.Sprintf("%s%c%s%s%s", outPath, os.PathSeparator,
strings.TrimSuffix(fstat.Name(), MarkdownSuffix),
DefaultOutputSuffix, MarkdownSuffix)
}
// 其他情况 outPath 不处理
}
bf, err := os.Open(inPath)
if err != nil {
return err
}
// 内容处理
inContentBytes, err := ioutil.ReadAll(bf)
if err != nil {
return err
}
inContent := string(inContentBytes)
// 手动关闭输入文件(因为可能后面是覆盖该文件,要写入)
bf.Close()
// 备份
if !noBackup {
os.Rename(inPath, backupPath)
}
// 文本写入
of, err := os.Create(outPath)
if err != nil {
return err
}
defer of.Close()
of.WriteString(FormatMarkdown(inContent))
if !globalConfig.QuietMode {
log.Printf("【输入文件】: %s 【输出文件】: %s 【备份文件】: %s", inPath, outPath, backupPath)
}
return nil
}
// 处理文件和目录输入
func handlePathInput(inPath string, outPath string, backupPath string) error {
fstat, err := os.Stat(inPath)
if err != nil {
return err
}
if fstat.IsDir() {
var allSuccess = true
// 处理目录
if backupPath == "" {
// 备份路径为空,默认当前路径,inPath_bk
backupPath = fmt.Sprintf("%s%s", inPath, DefaultBackupSuffix)
}
// step1. 备份目录到 backupPath
if backupPath != NoBackupFlag {
log.Printf("目录已备份,备份路径 %s", backupPath)
copy.Copy(inPath, backupPath)
}
// step2. 遍历 inPath,逐个替换文件
log.Println("开始处理目录中的 Markdown 文件...")
filepath.Walk(inPath, func(path string, info os.FileInfo, err error) error {
// 忽略目录
if info.IsDir() {
return nil
}
if strings.HasSuffix(info.Name(), MarkdownSuffix) {
if err := handleFileInput(path, "", NoBackupFlag); err != nil {
allSuccess = false
return err
}
}
// 忽略非 Markdown 文件
return nil
})
if !allSuccess {
return errors.New("目录处理未成功,请从备份目录恢复")
}
} else {
return handleFileInput(inPath, outPath, backupPath)
}
return nil
}
// 设置文件备份路径
func setBackupFilePath(fstat os.FileInfo, inPath string, outPath string, backupPath *string, noBackup *bool) error {
// 备份路径为空,默认当前路径
if *backupPath == "" {
*backupPath = fmt.Sprintf("%s%c%s%s%s", | filepath.Dir(in | identifier_name |
|
handler.go | 块中,连续多个空行只保留一个
if strings.TrimSpace(newLine) == "" {
if emptyLineFlag {
continue
}
outputLines = append(outputLines, "")
emptyLineFlag = true
} else {
emptyLineFlag = false
outputLines = append(outputLines, newLine)
}
}
return strings.Join(outputLines, "\n")
}
func formatLine(line string) string {
// 位于代码块中
if codeFlag {
if strings.HasPrefix(line, "```") {
codeFlag = false
}
return line
}
// 位于非代码块中
if strings.HasPrefix(line, "```") {
codeFlag = true
return line
}
runes := []rune(line)
// 引用
if strings.HasPrefix(line, "> ") {
return "> " + doFormat([]rune(strings.TrimLeft(string(runes[2:]), " ")))
}
// 标题
if match := headerRegex.FindStringSubmatch(line); match != nil {
return match[1] + doFormat([]rune(strings.TrimLeft(string(match[2]), " ")))
}
// 有序列表
if match := orderedListRegex.FindStringSubmatch(line); match != nil {
return match[1] + doFormat([]rune(strings.TrimLeft(string(match[2]), " ")))
}
// 无序列表
if strings.HasPrefix(line, "- ") {
return "- " + doFormat([]rune(strings.TrimLeft(string(runes[2:]), " ")))
}
// 包含链接的文本
linkMatchIdx := linkRegex.FindAllStringSubmatchIndex(line, -1)
if len(linkMatchIdx) != 0 {
return doFormatWithLink(line, linkMatchIdx)
}
// 正常文本
return doFormat([]rune(line))
}
func doFormatWithLink(line string, linkMatchIdx [][]int) string {
pairs := make([]matchIdxPair, 0)
for _, idxList := range linkMatchIdx {
if len(idxList) == 0 {
return doFormat([]rune(line))
}
if len(idxList)%2 != 0 {
log.Println("idxList not in pairs")
return doFormat([]rune(line))
}
start := 0
end := 0
// get (start, end) pairs
for i, idx := range idxList {
// skip the first and second index
if i < 2 {
continue
}
if i%2 == 0 {
start = idx
} else {
end = idx
pairs = append(pairs, matchIdxPair{
startIdx: start,
endIdx: end,
})
}
}
}
// like 0 .... (10 ... 20) ... (30 ... 40) ...
resultBuf := bytes.Buffer{}
buf := bytes.Buffer{}
prevEndIdx := 0
resultBuf.Grow(len(line)>>1 + len(line))
for i, pair := range pairs {
buf.Reset()
// 处理 link 与文本之间的数据
buf.WriteString(doFormat([]rune(line[prevEndIdx:pair.startIdx])))
prevEndIdx = pair.endIdx
// 处理 link 数据
buf.WriteString(handleLinks(line[pair.startIdx:pair.endIdx]))
// 处理最后的 link 与文本直接的数据
if i == len(pairs)-1 {
buf.WriteString(doFormat([]rune(line[pair.endIdx:])))
prevEndIdx = pair.endIdx
}
resultBuf.WriteString(buf.String())
}
return resultBuf.String()
}
func doFormat(line []rune) string {
var (
preRune rune // 前一个字符
length = len(line) // 行字符数
buffer bytes.Buffer // 字符串缓冲区
)
var (
italicCnt = 0 // 斜体 * 计数
boldCnt = 0 // 粗体 ** 计数
backQuoteCnt = 0 // 反引号 ` 计数
)
// buffer 写入方式:先写字符,后判断是否写入空格
for idx, currentRune := range line {
buffer.WriteRune(currentRune)
// 相关符号数量统计
switch currentRune {
case '*':
if preRune == '*' {
boldCnt++
italicCnt--
} else {
italicCnt++
}
case '`':
backQuoteCnt++
}
// 判断当前字符后是否要加空格
if idx < length-1 {
nextRune := line[idx+1]
// 注:泛用英文不包括 Markdown 中的特殊符号 * ` [ ] ( )
if isZh(currentRune) && isGeneralEn(nextRune) {
// 中文 + 泛用英文 -> 加空格
buffer.WriteString(" ")
} else if isGeneralEn(currentRune) && isZh(nextRune) {
// 泛用英文 + 中文 -> 加空格
buffer.WriteString(" ")
} else if (isZh(currentRune) && isEnLeftBracket(nextRune)) || (isEnRightBracket(currentRune) && isZh(nextRune)) {
// 只用于这样的情况 “中文(” 或者 “)中文”,主要针对链接、图片等格式
buffer.WriteString(" ")
}
// 星号相关:有几种情况要特殊处理,核心是要分清在标点内部还是外部
// -------------------------- 情况一 -------------------------
// 当前是中文,后面是星号或反引号对开头
// -----------------------------------------------------------
// 粗体中文**abc**
// 斜体中文*abc*
// 点中文`abc`
if isZh(currentRune) {
switch nextRune {
case '*':
doZhStar(&buffer, line, idx, boldCnt, italicCnt)
case '`':
doZhBackQuote(&buffer, line, idx, backQuoteCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况二 -------------------------
// 当前是星号对结尾,后面是中文
// -----------------------------------------------------------
// *abc*中文
if currentRune == '*' && isZh(nextRune) {
// * 之前的字符是英文则需要加空格
// 区分 bold 和 italic
switch preRune {
case '*':
doBoldStarZh(&buffer, line, idx, boldCnt)
default:
doSingleStarZh(&buffer, line, idx, italicCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况三 -------------------------
// 当前是反引号结尾,后面是中文
// -----------------------------------------------------------
// `abc`中文
if currentRune == '`' && isZh(nextRune) {
doBackQuoteZh(&buffer, line, idx, backQuoteCnt)
preRune = currentRune
continue
}
preRune = currentRune
}
}
return buffer.String()
}
func doZhStar(buffer *bytes.Buffer, line []rune, idx, boldCnt, italicCnt int) {
length := len(line)
if idx < length-2 {
cn2 := line[idx+2]
if cn2 == '*' {
// 粗体要看后面第三个字符是否是英文
if idx < length-3 {
cn3 := line[idx+3]
// 粗体中文**a**
if boldCnt%2 == 0 && isGeneralEn(cn3) {
// 一个新粗体的开始
buffer.WriteString(" ")
}
}
} else {
// 斜体要看后面第二个字符是否是英文
// 斜体中文*a*
if italicCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新斜体的开始
buffer.WriteString(" ")
}
}
}
}
func doZhBackQuote(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if idx < len(line)-2 {
cn2 := line[idx+2]
// 小代码块要看后面第二个字符是否是英文
// 点中文`a`
if backQuoteCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新代码块的开始
buffer.WriteString(" ")
}
}
}
func doBoldStarZh(buffer *bytes.Buffer, line []rune, idx, boldCnt int) {
if boldCnt%2 == 0 {
// **abc**粗体中文
if idx-2 > 0 && line[idx-2] != '*' | neFlag = false
outputLines = append(outputLines, newLine)
continue
}
// 位于非代码 | conditional_block |
|
handler.go | .WriteString(doFormat([]rune(line[prevEndIdx:pair.startIdx])))
prevEndIdx = pair.endIdx
// 处理 link 数据
buf.WriteString(handleLinks(line[pair.startIdx:pair.endIdx]))
// 处理最后的 link 与文本直接的数据
if i == len(pairs)-1 {
buf.WriteString(doFormat([]rune(line[pair.endIdx:])))
prevEndIdx = pair.endIdx
}
resultBuf.WriteString(buf.String())
}
return resultBuf.String()
}
func doFormat(line []rune) string {
var (
preRune rune // 前一个字符
length = len(line) // 行字符数
buffer bytes.Buffer // 字符串缓冲区
)
var (
italicCnt = 0 // 斜体 * 计数
boldCnt = 0 // 粗体 ** 计数
backQuoteCnt = 0 // 反引号 ` 计数
)
// buffer 写入方式:先写字符,后判断是否写入空格
for idx, currentRune := range line {
buffer.WriteRune(currentRune)
// 相关符号数量统计
switch currentRune {
case '*':
if preRune == '*' {
boldCnt++
italicCnt--
} else {
italicCnt++
}
case '`':
backQuoteCnt++
}
// 判断当前字符后是否要加空格
if idx < length-1 {
nextRune := line[idx+1]
// 注:泛用英文不包括 Markdown 中的特殊符号 * ` [ ] ( )
if isZh(currentRune) && isGeneralEn(nextRune) {
// 中文 + 泛用英文 -> 加空格
buffer.WriteString(" ")
} else if isGeneralEn(currentRune) && isZh(nextRune) {
// 泛用英文 + 中文 -> 加空格
buffer.WriteString(" ")
} else if (isZh(currentRune) && isEnLeftBracket(nextRune)) || (isEnRightBracket(currentRune) && isZh(nextRune)) {
// 只用于这样的情况 “中文(” 或者 “)中文”,主要针对链接、图片等格式
buffer.WriteString(" ")
}
// 星号相关:有几种情况要特殊处理,核心是要分清在标点内部还是外部
// -------------------------- 情况一 -------------------------
// 当前是中文,后面是星号或反引号对开头
// -----------------------------------------------------------
// 粗体中文**abc**
// 斜体中文*abc*
// 点中文`abc`
if isZh(currentRune) {
switch nextRune {
case '*':
doZhStar(&buffer, line, idx, boldCnt, italicCnt)
case '`':
doZhBackQuote(&buffer, line, idx, backQuoteCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况二 -------------------------
// 当前是星号对结尾,后面是中文
// -----------------------------------------------------------
// *abc*中文
if currentRune == '*' && isZh(nextRune) {
// * 之前的字符是英文则需要加空格
// 区分 bold 和 italic
switch preRune {
case '*':
doBoldStarZh(&buffer, line, idx, boldCnt)
default:
doSingleStarZh(&buffer, line, idx, italicCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况三 -------------------------
// 当前是反引号结尾,后面是中文
// -----------------------------------------------------------
// `abc`中文
if currentRune == '`' && isZh(nextRune) {
doBackQuoteZh(&buffer, line, idx, backQuoteCnt)
preRune = currentRune
continue
}
preRune = currentRune
}
}
return buffer.String()
}
func doZhStar(buffer *bytes.Buffer, line []rune, idx, boldCnt, italicCnt int) {
length := len(line)
if idx < length-2 {
cn2 := line[idx+2]
if cn2 == '*' {
// 粗体要看后面第三个字符是否是英文
if idx < length-3 {
cn3 := line[idx+3]
// 粗体中文**a**
if boldCnt%2 == 0 && isGeneralEn(cn3) {
// 一个新粗体的开始
buffer.WriteString(" ")
}
}
} else {
// 斜体要看后面第二个字符是否是英文
// 斜体中文*a*
if italicCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新斜体的开始
buffer.WriteString(" ")
}
}
}
}
func doZhBackQuote(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if idx < len(line)-2 {
cn2 := line[idx+2]
// 小代码块要看后面第二个字符是否是英文
// 点中文`a`
if backQuoteCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新代码块的开始
buffer.WriteString(" ")
}
}
}
func doBoldStarZh(buffer *bytes.Buffer, line []rune, idx, boldCnt int) {
if boldCnt%2 == 0 {
// **abc**粗体中文
if idx-2 > 0 && line[idx-2] != '*' && !isZh(line[idx-2]) {
buffer.WriteString(" ")
}
// ***abc***粗体中文
if line[idx-2] == '*' && idx-3 > 0 && !isZh(line[idx-3]) {
buffer.WriteString(" ")
}
}
}
func doSingleStarZh(buffer *bytes.Buffer, line []rune, idx, italicCnt int) {
if italicCnt%2 == 0 {
// *abc*粗体中文
if idx-1 > 0 && !isZh(line[idx-1]) {
buffer.WriteString(" ")
}
}
}
func doBackQuoteZh(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if backQuoteCnt%2 == 0 {
if idx-1 > 0 && !isZh(line[idx-1]) {
// `abc`点中文
buffer.WriteString(" ")
}
}
}
func handleLinks(text string) string {
if match := linkDetailRegex.FindStringSubmatch(text); len(match) > 3 {
linkText := doFormat([]rune(match[2]))
return fmt.Sprintf("%s[%s](%s)", match[1], linkText, match[3])
}
return text
}
func handleFileInput(inPath string, outPath string, backupPath string) error {
fstat, err := os.Stat(inPath)
if err != nil {
return err
}
var noBackup bool
// 处理文件:初始化 outPath、backupPath
if outPath == "" {
// 省略输出,则默认输出覆盖输入文件,需要备份文件
outPath = inPath
// 处理备份路径
if err := setBackupFilePath(fstat, inPath, outPath, &backupPath, &noBackup); err != nil {
return err
}
} else {
// 指定输出路径
if inPath == outPath {
// 当输出和输入一致,且未指定 backup 为 nobackup 时,还是要设置备份路径
if err := setBackupFilePath(fstat, inPath, outPath, &backupPath, &noBackup); err != nil {
return err
}
} else {
// 输入和输出不一致,不需要备份
noBackup = true
backupPath = "--"
}
// 非 .md 结尾,默认输出路径为目录
if !strings.HasSuffix(outPath, MarkdownSuffix) {
// 传入输出目录
if !existsDir(outPath) {
return errors.New("输出目录不存在")
}
outPath = fmt.Sprintf("%s%c%s%s%s", outPath, os.PathSeparator,
strings.TrimSuffix(fstat.Name(), MarkdownSuffix),
Default | OutputSuffix, MarkdownSuffix)
}
// 其他情况 outPath 不处理
}
bf, err := os.Open(inPath)
if err != nil {
return err
}
// 内容处理
inContentBytes, err := ioutil.ReadAll(bf)
if err != nil {
return err
}
inContent := string(inContentBytes)
// 手动关闭输入文件(因为可能后面是覆盖该文件,要写入)
bf.Close()
// 备份 | identifier_body |
|
handler.go | '*' {
boldCnt++
italicCnt--
} else {
italicCnt++
}
case '`':
backQuoteCnt++
}
// 判断当前字符后是否要加空格
if idx < length-1 {
nextRune := line[idx+1]
// 注:泛用英文不包括 Markdown 中的特殊符号 * ` [ ] ( )
if isZh(currentRune) && isGeneralEn(nextRune) {
// 中文 + 泛用英文 -> 加空格
buffer.WriteString(" ")
} else if isGeneralEn(currentRune) && isZh(nextRune) {
// 泛用英文 + 中文 -> 加空格
buffer.WriteString(" ")
} else if (isZh(currentRune) && isEnLeftBracket(nextRune)) || (isEnRightBracket(currentRune) && isZh(nextRune)) {
// 只用于这样的情况 “中文(” 或者 “)中文”,主要针对链接、图片等格式
buffer.WriteString(" ")
}
// 星号相关:有几种情况要特殊处理,核心是要分清在标点内部还是外部
// -------------------------- 情况一 -------------------------
// 当前是中文,后面是星号或反引号对开头
// -----------------------------------------------------------
// 粗体中文**abc**
// 斜体中文*abc*
// 点中文`abc`
if isZh(currentRune) {
switch nextRune {
case '*':
doZhStar(&buffer, line, idx, boldCnt, italicCnt)
case '`':
doZhBackQuote(&buffer, line, idx, backQuoteCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况二 -------------------------
// 当前是星号对结尾,后面是中文
// -----------------------------------------------------------
// *abc*中文
if currentRune == '*' && isZh(nextRune) {
// * 之前的字符是英文则需要加空格
// 区分 bold 和 italic
switch preRune {
case '*':
doBoldStarZh(&buffer, line, idx, boldCnt)
default:
doSingleStarZh(&buffer, line, idx, italicCnt)
}
preRune = currentRune
continue
}
// -------------------------- 情况三 -------------------------
// 当前是反引号结尾,后面是中文
// -----------------------------------------------------------
// `abc`中文
if currentRune == '`' && isZh(nextRune) {
doBackQuoteZh(&buffer, line, idx, backQuoteCnt)
preRune = currentRune
continue
}
preRune = currentRune
}
}
return buffer.String()
}
func doZhStar(buffer *bytes.Buffer, line []rune, idx, boldCnt, italicCnt int) {
length := len(line)
if idx < length-2 {
cn2 := line[idx+2]
if cn2 == '*' {
// 粗体要看后面第三个字符是否是英文
if idx < length-3 {
cn3 := line[idx+3]
// 粗体中文**a**
if boldCnt%2 == 0 && isGeneralEn(cn3) {
// 一个新粗体的开始
buffer.WriteString(" ")
}
}
} else {
// 斜体要看后面第二个字符是否是英文
// 斜体中文*a*
if italicCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新斜体的开始
buffer.WriteString(" ")
}
}
}
}
func doZhBackQuote(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if idx < len(line)-2 {
cn2 := line[idx+2]
// 小代码块要看后面第二个字符是否是英文
// 点中文`a`
if backQuoteCnt%2 == 0 && isGeneralEn(cn2) {
// 一个新代码块的开始
buffer.WriteString(" ")
}
}
}
func doBoldStarZh(buffer *bytes.Buffer, line []rune, idx, boldCnt int) {
if boldCnt%2 == 0 {
// **abc**粗体中文
if idx-2 > 0 && line[idx-2] != '*' && !isZh(line[idx-2]) {
buffer.WriteString(" ")
}
// ***abc***粗体中文
if line[idx-2] == '*' && idx-3 > 0 && !isZh(line[idx-3]) {
buffer.WriteString(" ")
}
}
}
func doSingleStarZh(buffer *bytes.Buffer, line []rune, idx, italicCnt int) {
if italicCnt%2 == 0 {
// *abc*粗体中文
if idx-1 > 0 && !isZh(line[idx-1]) {
buffer.WriteString(" ")
}
}
}
func doBackQuoteZh(buffer *bytes.Buffer, line []rune, idx, backQuoteCnt int) {
if backQuoteCnt%2 == 0 {
if idx-1 > 0 && !isZh(line[idx-1]) {
// `abc`点中文
buffer.WriteString(" ")
}
}
}
func handleLinks(text string) string {
if match := linkDetailRegex.FindStringSubmatch(text); len(match) > 3 {
linkText := doFormat([]rune(match[2]))
return fmt.Sprintf("%s[%s](%s)", match[1], linkText, match[3])
}
return text
}
func handleFileInput(inPath string, outPath string, backupPath string) error {
fstat, err := os.Stat(inPath)
if err != nil {
return err
}
var noBackup bool
// 处理文件:初始化 outPath、backupPath
if outPath == "" {
// 省略输出,则默认输出覆盖输入文件,需要备份文件
outPath = inPath
// 处理备份路径
if err := setBackupFilePath(fstat, inPath, outPath, &backupPath, &noBackup); err != nil {
return err
}
} else {
// 指定输出路径
if inPath == outPath {
// 当输出和输入一致,且未指定 backup 为 nobackup 时,还是要设置备份路径
if err := setBackupFilePath(fstat, inPath, outPath, &backupPath, &noBackup); err != nil {
return err
}
} else {
// 输入和输出不一致,不需要备份
noBackup = true
backupPath = "--"
}
// 非 .md 结尾,默认输出路径为目录
if !strings.HasSuffix(outPath, MarkdownSuffix) {
// 传入输出目录
if !existsDir(outPath) {
return errors.New("输出目录不存在")
}
outPath = fmt.Sprintf("%s%c%s%s%s", outPath, os.PathSeparator,
strings.TrimSuffix(fstat.Name(), MarkdownSuffix),
DefaultOutputSuffix, MarkdownSuffix)
}
// 其他情况 outPath 不处理
}
bf, err := os.Open(inPath)
if err != nil {
return err
}
// 内容处理
inContentBytes, err := ioutil.ReadAll(bf)
if err != nil {
return err
}
inContent := string(inContentBytes)
// 手动关闭输入文件(因为可能后面是覆盖该文件,要写入)
bf.Close()
// 备份
if !noBackup {
os.Rename(inPath, backupPath)
}
// 文本写入
of, err := os.Create(outPath)
if err != nil {
return err
}
defer of.Close()
of.WriteString(FormatMarkdown(inContent))
if !globalConfig.QuietMode {
log.Printf("【输入文件】: %s 【输出文件】: %s 【备份文件】: %s", inPath, outPath, backupPath)
}
return nil
}
// 处理文件和目录输入
func handlePathInput(inPath string, outPath string, backupPath string) error {
fstat, err := os.Stat(inPath)
if err != nil {
return err
}
if fstat.IsDir() {
var allSuccess = true
// 处理目录
if backupPath == "" {
// 备份路径为空,默认当前路径,inPath_bk
backupPath = fmt.Sprintf("%s%s", inPath, DefaultBackupSuffix)
}
// step1. 备份目录到 backupPath
if backupPath != NoBackupFlag {
log.Printf("目录已备份,备份路径 %s", backupPath)
copy.Copy(inPath, backupPath)
}
| // step2. 遍历 inPath,逐个替换文件 | random_line_split |
|
manage-hospital.component.ts | : any;
public collect_email_array: any = [];
public collect_phone_array: any = [];
public id: any;
public btn_text: string = "SUBMIT";
public condition: any;
public message: string;
public salesrepname: string;
public ErrCode: boolean;
public action: string;
public defaultData: any;
public myDate: any;
public date: any;
public useridval: any = null;
public sharelink:any;
public countryList: any = [];
constructor(public formBuilder: FormBuilder, public http: HttpServiceService,
public cookieService: CookieService, public snackBar: MatSnackBar, public router: Router,
public activatedRoute: ActivatedRoute,public clipboardService:ClipboardService,public readonly meta: MetaService,
public readonly Title:Title) {
// this.meta.setTitle('MD Stock International - Your Medical Partner');
// this.meta.setTag('og:description', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('og:title', 'MD Stock International - Your Medical Partner');
// this.meta.setTag('og:type', 'website');
// this.meta.setTag('og:url', 'https://dev.mdstockinternational.com/');
// this.meta.setTag('og:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
// this.meta.setTag('og:keywords','');
// this.meta.setTag('twitter:description', 'MD-stock-international');
// this.meta.setTag('twitter:title', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('twitter:card', 'summary');
// this.meta.setTag('twitter:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
this.meta.setTitle('MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:description', '');
this.meta.setTag('twitter:description', '');
this.meta.setTag('og:keyword', '');
this.meta.setTag('twitter:keyword', '');
this.meta.setTag('og:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('twitter:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:type', 'website');
this.meta.setTag('og:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-fb.png');
this.meta.setTag('twitter:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-twitter.png');
this.activatedRoute.params.subscribe(params => {
if (params['_id'] != null) {
this.action = "edit";
this.condition = { id: params._id };
this.activatedRoute.data.subscribe(resolveData => {
console.log(resolveData);
this.defaultData = resolveData.data.res[0];
this.date = moment(this.defaultData.created_at).format('MM/DD/YYYY');
});
}
else
this.action = "add";
});
/** getting the sales rep information **/
let allData: any = {};
allData = cookieService.getAll()
this.userData = JSON.parse(allData.user_details);
this.id = this.userData.id;
this.salesrepname = this.userData.firstname + ' ' + this.userData.lastname;
this.sharelink='https://dev-hospital-signup.mdstockinternational.com/'+this.userData._id;
/** fetching the current date **/
if (this.action == 'add')
this.date = moment(this.myDate).format('MM/DD/YYYY');
}
ngOnInit() {
/** generating the form **/
this.generateForm();
/** calling all state **/
this.allStateCityData();
/** switch case **/
switch (this.action) {
case 'add':
/* Button text */
this.btn_text = "SUBMIT";
//Generating the form on ngOnInit
this.generateForm();
/** generating the current date **/
this.myDate = new Date();
this.message = "Hospital Added!!!"
/** generating the user id **/
this.http.httpViaPost('userid', undefined).subscribe((response: any) => {
this.useridval = response.userID;
//Generating the form on ngOnInit
this.generateForm();
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
});
break;
case 'edit':
/* Button text */
this.btn_text = "UPDATE";
this.message = "Hospital Information Updated";
// this.generateForm();
this.setDefaultValue(this.defaultData);
setTimeout(() => {
this.getCityByName(this.defaultData.state);
}, 2000);
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
break;
}
//country list
let data: any = {
"source": 'country',
};
this.http.httpViaPost('datalist', data).subscribe((res:any) => {
//console.log(res.res);
this.countryList = res.res;
})
}
/** setting the default data **/
setDefaultValue(defaultValue) {
this.manageHospitalForm.patchValue({
user_id: defaultValue.user_id,
date_added: defaultValue.date_added,
hospitalname: defaultValue.hospitalname,
email: defaultValue.email,
contactperson: defaultValue.contactperson,
zip: defaultValue.zip,
city: defaultValue.city,
state: defaultValue.state,
address:defaultValue.address,
country:defaultValue.country
})
this.collect_phone_array = this.defaultData.contactemails;
this.collect_email_array = this.defaultData.contactphones;
if(this.defaultData.mpimage!=null){
this.imgMedical=this.defaultData.mpimage.basepath+this.defaultData.mpimage.image;
}
}
/** configuration for image **/
public configData: any = {
baseUrl: "https://fileupload.influxhostserver.com/",
endpoint: "uploads",
size: "51200", // kb
format: ["jpg", "jpeg", "png"], // use all small font
type: "inventory-file",
path: "files",
prefix: "_inventory-file",
formSubmit: false,
conversionNeeded: 0,
bucketName: "crmfiles.influxhostserver"
}
/** generating the form **/
generateForm() {
this.manageHospitalForm = this.formBuilder.group({
user_id: [this.useridval],
date_added: [{ value: this.date, disabled: true }],
hospitalname: [],
contactperson: [],
password: [],
confirmpassword: [],
state: [],
city: [],
zip: [],
address: [],
type: ['hospital'],
contactemails: [],
contactphones: [],
salesrepselect: [this.id],
status: [0],
email: [],
salesrepname: [this.salesrepname],
mpimage: [],
country:[]
});
}
/** for getting all states & cities function start here **/
allStateCityData() {
this.http.getSiteSettingData("./assets/data-set/state.json").subscribe(response => {
this.states = response;
});
this.http.getSiteSettingData("./assets/data-set/city.json").subscribe(response => {
this.allCities = response;
});
}
/** for getting all states & cities function end here **/
getCity(event: any) {
var val = event;
this.cities = this.allCities[val];
}
/** activating the state operation **/
getCityByName(stateName) {
this.cities = this.allCities[stateName];
}
/** collecting the email **/
collect_email(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_email_array.push(event.target.value);
this.manageHospitalForm.controls['contactemails'].patchValue("");
return;
}
}
/** cleraring multiple emails **/
clearEmail(index) {
this.collect_email_array.splice(index, 1);
}
/** collecting the phone numbers **/
collect_phones(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_phone_array.push(event.target.value);
this.manageHospitalForm.controls['contactphones'].patchValue("");
return;
}
}
/** clearing the phones **/
clearPhones(index) {
this.collect_phone_array.splice(index, 1);
}
/** --------------------------submit------------------------**/
onSubmit() {
this.manageHospitalForm.value.user_id = this.manageHospitalForm.controls['user_id'].value;
this.manageHospitalForm.value.date_added = this.manageHospitalForm.controls['date_added'].value;
if (this.action == 'edit')
delete this.manageHospitalForm.value.confirmpassword;
// File Upload Works
if (this.configData.files) | {
if (this.configData.files.length > 1) { this.ErrCode = true; return; }
this.manageHospitalForm.value.mpimage =
{
"basepath": this.configData.files[0].upload.data.basepath + '/' + this.configData.path + '/',
"image": this.configData.files[0].upload.data.data.fileservername,
"name": this.configData.files[0].name,
"type": this.configData.files[0].type
};
} | conditional_block |
|
manage-hospital.component.ts | ManageHospitalComponent implements OnInit {
/** declarations **/
public manageHospitalForm: FormGroup;
public imgMedical:any;
public states: string;
public allCities: any;
public cities: string;
public userData: any;
public collect_email_array: any = [];
public collect_phone_array: any = [];
public id: any;
public btn_text: string = "SUBMIT";
public condition: any;
public message: string;
public salesrepname: string;
public ErrCode: boolean;
public action: string;
public defaultData: any;
public myDate: any;
public date: any;
public useridval: any = null;
public sharelink:any;
public countryList: any = [];
constructor(public formBuilder: FormBuilder, public http: HttpServiceService,
public cookieService: CookieService, public snackBar: MatSnackBar, public router: Router,
public activatedRoute: ActivatedRoute,public clipboardService:ClipboardService,public readonly meta: MetaService,
public readonly Title:Title) {
// this.meta.setTitle('MD Stock International - Your Medical Partner');
// this.meta.setTag('og:description', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('og:title', 'MD Stock International - Your Medical Partner');
// this.meta.setTag('og:type', 'website');
// this.meta.setTag('og:url', 'https://dev.mdstockinternational.com/');
// this.meta.setTag('og:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
// this.meta.setTag('og:keywords','');
// this.meta.setTag('twitter:description', 'MD-stock-international');
// this.meta.setTag('twitter:title', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('twitter:card', 'summary');
// this.meta.setTag('twitter:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
this.meta.setTitle('MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:description', '');
this.meta.setTag('twitter:description', '');
this.meta.setTag('og:keyword', '');
this.meta.setTag('twitter:keyword', '');
this.meta.setTag('og:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('twitter:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:type', 'website');
this.meta.setTag('og:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-fb.png');
this.meta.setTag('twitter:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-twitter.png');
this.activatedRoute.params.subscribe(params => {
if (params['_id'] != null) {
this.action = "edit";
this.condition = { id: params._id };
this.activatedRoute.data.subscribe(resolveData => {
console.log(resolveData);
this.defaultData = resolveData.data.res[0];
this.date = moment(this.defaultData.created_at).format('MM/DD/YYYY');
});
}
else
this.action = "add";
});
/** getting the sales rep information **/
let allData: any = {};
allData = cookieService.getAll()
this.userData = JSON.parse(allData.user_details);
this.id = this.userData.id;
this.salesrepname = this.userData.firstname + ' ' + this.userData.lastname;
this.sharelink='https://dev-hospital-signup.mdstockinternational.com/'+this.userData._id;
|
ngOnInit() {
/** generating the form **/
this.generateForm();
/** calling all state **/
this.allStateCityData();
/** switch case **/
switch (this.action) {
case 'add':
/* Button text */
this.btn_text = "SUBMIT";
//Generating the form on ngOnInit
this.generateForm();
/** generating the current date **/
this.myDate = new Date();
this.message = "Hospital Added!!!"
/** generating the user id **/
this.http.httpViaPost('userid', undefined).subscribe((response: any) => {
this.useridval = response.userID;
//Generating the form on ngOnInit
this.generateForm();
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
});
break;
case 'edit':
/* Button text */
this.btn_text = "UPDATE";
this.message = "Hospital Information Updated";
// this.generateForm();
this.setDefaultValue(this.defaultData);
setTimeout(() => {
this.getCityByName(this.defaultData.state);
}, 2000);
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
break;
}
//country list
let data: any = {
"source": 'country',
};
this.http.httpViaPost('datalist', data).subscribe((res:any) => {
//console.log(res.res);
this.countryList = res.res;
})
}
/** setting the default data **/
setDefaultValue(defaultValue) {
this.manageHospitalForm.patchValue({
user_id: defaultValue.user_id,
date_added: defaultValue.date_added,
hospitalname: defaultValue.hospitalname,
email: defaultValue.email,
contactperson: defaultValue.contactperson,
zip: defaultValue.zip,
city: defaultValue.city,
state: defaultValue.state,
address:defaultValue.address,
country:defaultValue.country
})
this.collect_phone_array = this.defaultData.contactemails;
this.collect_email_array = this.defaultData.contactphones;
if(this.defaultData.mpimage!=null){
this.imgMedical=this.defaultData.mpimage.basepath+this.defaultData.mpimage.image;
}
}
/** configuration for image **/
public configData: any = {
baseUrl: "https://fileupload.influxhostserver.com/",
endpoint: "uploads",
size: "51200", // kb
format: ["jpg", "jpeg", "png"], // use all small font
type: "inventory-file",
path: "files",
prefix: "_inventory-file",
formSubmit: false,
conversionNeeded: 0,
bucketName: "crmfiles.influxhostserver"
}
/** generating the form **/
generateForm() {
this.manageHospitalForm = this.formBuilder.group({
user_id: [this.useridval],
date_added: [{ value: this.date, disabled: true }],
hospitalname: [],
contactperson: [],
password: [],
confirmpassword: [],
state: [],
city: [],
zip: [],
address: [],
type: ['hospital'],
contactemails: [],
contactphones: [],
salesrepselect: [this.id],
status: [0],
email: [],
salesrepname: [this.salesrepname],
mpimage: [],
country:[]
});
}
/** for getting all states & cities function start here **/
allStateCityData() {
this.http.getSiteSettingData("./assets/data-set/state.json").subscribe(response => {
this.states = response;
});
this.http.getSiteSettingData("./assets/data-set/city.json").subscribe(response => {
this.allCities = response;
});
}
/** for getting all states & cities function end here **/
getCity(event: any) {
var val = event;
this.cities = this.allCities[val];
}
/** activating the state operation **/
getCityByName(stateName) {
this.cities = this.allCities[stateName];
}
/** collecting the email **/
collect_email(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_email_array.push(event.target.value);
this.manageHospitalForm.controls['contactemails'].patchValue("");
return;
}
}
/** cleraring multiple emails **/
clearEmail(index) {
this.collect_email_array.splice(index, 1);
}
/** collecting the phone numbers **/
collect_phones(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_phone_array.push(event.target.value);
this.manageHospitalForm.controls['contactphones'].patchValue("");
return;
}
}
/** clearing the phones **/
clearPhones(index) {
this.collect_phone_array.splice(index, 1);
}
/** --------------------------submit------------------------**/
onSubmit() {
this.manageHospitalForm.value.user_id = this.manageHospitalForm.controls['user_id'].value;
this.manageHospitalForm.value.date_added = this.manageHospitalForm.controls['date_added'].value;
if (this.action == 'edit')
delete this.manageHospitalForm.value.confirmpassword;
// File Upload Works
if (this.configData.files) {
if (this.configData.files.length > 1) { this.ErrCode = true; return; }
this.manageHospitalForm.value.mpimage =
{
"basepath": this.configData.files[0].upload.data.basepath + '/' + this.configData.path + | /** fetching the current date **/
if (this.action == 'add')
this.date = moment(this.myDate).format('MM/DD/YYYY');
}
| random_line_split |
manage-hospital.component.ts | ManageHospitalComponent implements OnInit {
/** declarations **/
public manageHospitalForm: FormGroup;
public imgMedical:any;
public states: string;
public allCities: any;
public cities: string;
public userData: any;
public collect_email_array: any = [];
public collect_phone_array: any = [];
public id: any;
public btn_text: string = "SUBMIT";
public condition: any;
public message: string;
public salesrepname: string;
public ErrCode: boolean;
public action: string;
public defaultData: any;
public myDate: any;
public date: any;
public useridval: any = null;
public sharelink:any;
public countryList: any = [];
constructor(public formBuilder: FormBuilder, public http: HttpServiceService,
public cookieService: CookieService, public snackBar: MatSnackBar, public router: Router,
public activatedRoute: ActivatedRoute,public clipboardService:ClipboardService,public readonly meta: MetaService,
public readonly Title:Title) {
// this.meta.setTitle('MD Stock International - Your Medical Partner');
// this.meta.setTag('og:description', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('og:title', 'MD Stock International - Your Medical Partner');
// this.meta.setTag('og:type', 'website');
// this.meta.setTag('og:url', 'https://dev.mdstockinternational.com/');
// this.meta.setTag('og:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
// this.meta.setTag('og:keywords','');
// this.meta.setTag('twitter:description', 'MD-stock-international');
// this.meta.setTag('twitter:title', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('twitter:card', 'summary');
// this.meta.setTag('twitter:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
this.meta.setTitle('MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:description', '');
this.meta.setTag('twitter:description', '');
this.meta.setTag('og:keyword', '');
this.meta.setTag('twitter:keyword', '');
this.meta.setTag('og:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('twitter:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:type', 'website');
this.meta.setTag('og:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-fb.png');
this.meta.setTag('twitter:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-twitter.png');
this.activatedRoute.params.subscribe(params => {
if (params['_id'] != null) {
this.action = "edit";
this.condition = { id: params._id };
this.activatedRoute.data.subscribe(resolveData => {
console.log(resolveData);
this.defaultData = resolveData.data.res[0];
this.date = moment(this.defaultData.created_at).format('MM/DD/YYYY');
});
}
else
this.action = "add";
});
/** getting the sales rep information **/
let allData: any = {};
allData = cookieService.getAll()
this.userData = JSON.parse(allData.user_details);
this.id = this.userData.id;
this.salesrepname = this.userData.firstname + ' ' + this.userData.lastname;
this.sharelink='https://dev-hospital-signup.mdstockinternational.com/'+this.userData._id;
/** fetching the current date **/
if (this.action == 'add')
this.date = moment(this.myDate).format('MM/DD/YYYY');
}
ngOnInit() {
/** generating the form **/
this.generateForm();
/** calling all state **/
this.allStateCityData();
/** switch case **/
switch (this.action) {
case 'add':
/* Button text */
this.btn_text = "SUBMIT";
//Generating the form on ngOnInit
this.generateForm();
/** generating the current date **/
this.myDate = new Date();
this.message = "Hospital Added!!!"
/** generating the user id **/
this.http.httpViaPost('userid', undefined).subscribe((response: any) => {
this.useridval = response.userID;
//Generating the form on ngOnInit
this.generateForm();
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
});
break;
case 'edit':
/* Button text */
this.btn_text = "UPDATE";
this.message = "Hospital Information Updated";
// this.generateForm();
this.setDefaultValue(this.defaultData);
setTimeout(() => {
this.getCityByName(this.defaultData.state);
}, 2000);
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
break;
}
//country list
let data: any = {
"source": 'country',
};
this.http.httpViaPost('datalist', data).subscribe((res:any) => {
//console.log(res.res);
this.countryList = res.res;
})
}
/** setting the default data **/
setDefaultValue(defaultValue) {
this.manageHospitalForm.patchValue({
user_id: defaultValue.user_id,
date_added: defaultValue.date_added,
hospitalname: defaultValue.hospitalname,
email: defaultValue.email,
contactperson: defaultValue.contactperson,
zip: defaultValue.zip,
city: defaultValue.city,
state: defaultValue.state,
address:defaultValue.address,
country:defaultValue.country
})
this.collect_phone_array = this.defaultData.contactemails;
this.collect_email_array = this.defaultData.contactphones;
if(this.defaultData.mpimage!=null){
this.imgMedical=this.defaultData.mpimage.basepath+this.defaultData.mpimage.image;
}
}
/** configuration for image **/
public configData: any = {
baseUrl: "https://fileupload.influxhostserver.com/",
endpoint: "uploads",
size: "51200", // kb
format: ["jpg", "jpeg", "png"], // use all small font
type: "inventory-file",
path: "files",
prefix: "_inventory-file",
formSubmit: false,
conversionNeeded: 0,
bucketName: "crmfiles.influxhostserver"
}
/** generating the form **/
generateForm() {
this.manageHospitalForm = this.formBuilder.group({
user_id: [this.useridval],
date_added: [{ value: this.date, disabled: true }],
hospitalname: [],
contactperson: [],
password: [],
confirmpassword: [],
state: [],
city: [],
zip: [],
address: [],
type: ['hospital'],
contactemails: [],
contactphones: [],
salesrepselect: [this.id],
status: [0],
email: [],
salesrepname: [this.salesrepname],
mpimage: [],
country:[]
});
}
/** for getting all states & cities function start here **/
allStateCityData() {
this.http.getSiteSettingData("./assets/data-set/state.json").subscribe(response => {
this.states = response;
});
this.http.getSiteSettingData("./assets/data-set/city.json").subscribe(response => {
this.allCities = response;
});
}
/** for getting all states & cities function end here **/
getCity(event: any) {
var val = event;
this.cities = this.allCities[val];
}
/** activating the state operation **/
getCityByName(stateName) {
this.cities = this.allCities[stateName];
}
/** collecting the email **/
collect_email(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_email_array.push(event.target.value);
this.manageHospitalForm.controls['contactemails'].patchValue("");
return;
}
}
/** cleraring multiple emails **/
| (index) {
this.collect_email_array.splice(index, 1);
}
/** collecting the phone numbers **/
collect_phones(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_phone_array.push(event.target.value);
this.manageHospitalForm.controls['contactphones'].patchValue("");
return;
}
}
/** clearing the phones **/
clearPhones(index) {
this.collect_phone_array.splice(index, 1);
}
/** --------------------------submit------------------------**/
onSubmit() {
this.manageHospitalForm.value.user_id = this.manageHospitalForm.controls['user_id'].value;
this.manageHospitalForm.value.date_added = this.manageHospitalForm.controls['date_added'].value;
if (this.action == 'edit')
delete this.manageHospitalForm.value.confirmpassword;
// File Upload Works
if (this.configData.files) {
if (this.configData.files.length > 1) { this.ErrCode = true; return; }
this.manageHospitalForm.value.mpimage =
{
"basepath": this.configData.files[0].upload.data.basepath + '/' + this.configData.path + '/',
| clearEmail | identifier_name |
manage-hospital.component.ts | ManageHospitalComponent implements OnInit {
/** declarations **/
public manageHospitalForm: FormGroup;
public imgMedical:any;
public states: string;
public allCities: any;
public cities: string;
public userData: any;
public collect_email_array: any = [];
public collect_phone_array: any = [];
public id: any;
public btn_text: string = "SUBMIT";
public condition: any;
public message: string;
public salesrepname: string;
public ErrCode: boolean;
public action: string;
public defaultData: any;
public myDate: any;
public date: any;
public useridval: any = null;
public sharelink:any;
public countryList: any = [];
constructor(public formBuilder: FormBuilder, public http: HttpServiceService,
public cookieService: CookieService, public snackBar: MatSnackBar, public router: Router,
public activatedRoute: ActivatedRoute,public clipboardService:ClipboardService,public readonly meta: MetaService,
public readonly Title:Title) {
// this.meta.setTitle('MD Stock International - Your Medical Partner');
// this.meta.setTag('og:description', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('og:title', 'MD Stock International - Your Medical Partner');
// this.meta.setTag('og:type', 'website');
// this.meta.setTag('og:url', 'https://dev.mdstockinternational.com/');
// this.meta.setTag('og:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
// this.meta.setTag('og:keywords','');
// this.meta.setTag('twitter:description', 'MD-stock-international');
// this.meta.setTag('twitter:title', 'MD Stock International is the Medical Equipment & Supplies Partner you want for Top-Quality On-Demand Supplies, Direct-to-Manufacturer Purchases and much more.');
// this.meta.setTag('twitter:card', 'summary');
// this.meta.setTag('twitter:image', 'https://dev.mdstockinternational.com/assets/images/mdstocklogometa.jpg');
this.meta.setTitle('MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:description', '');
this.meta.setTag('twitter:description', '');
this.meta.setTag('og:keyword', '');
this.meta.setTag('twitter:keyword', '');
this.meta.setTag('og:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('twitter:title', 'MedWorldOne - Manage Hospital Sales-rep');
this.meta.setTag('og:type', 'website');
this.meta.setTag('og:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-fb.png');
this.meta.setTag('twitter:image', 'https://medworldonebackend.influxiq.com/assets/images/logo-twitter.png');
this.activatedRoute.params.subscribe(params => {
if (params['_id'] != null) {
this.action = "edit";
this.condition = { id: params._id };
this.activatedRoute.data.subscribe(resolveData => {
console.log(resolveData);
this.defaultData = resolveData.data.res[0];
this.date = moment(this.defaultData.created_at).format('MM/DD/YYYY');
});
}
else
this.action = "add";
});
/** getting the sales rep information **/
let allData: any = {};
allData = cookieService.getAll()
this.userData = JSON.parse(allData.user_details);
this.id = this.userData.id;
this.salesrepname = this.userData.firstname + ' ' + this.userData.lastname;
this.sharelink='https://dev-hospital-signup.mdstockinternational.com/'+this.userData._id;
/** fetching the current date **/
if (this.action == 'add')
this.date = moment(this.myDate).format('MM/DD/YYYY');
}
ngOnInit() {
/** generating the form **/
this.generateForm();
/** calling all state **/
this.allStateCityData();
/** switch case **/
switch (this.action) {
case 'add':
/* Button text */
this.btn_text = "SUBMIT";
//Generating the form on ngOnInit
this.generateForm();
/** generating the current date **/
this.myDate = new Date();
this.message = "Hospital Added!!!"
/** generating the user id **/
this.http.httpViaPost('userid', undefined).subscribe((response: any) => {
this.useridval = response.userID;
//Generating the form on ngOnInit
this.generateForm();
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
});
break;
case 'edit':
/* Button text */
this.btn_text = "UPDATE";
this.message = "Hospital Information Updated";
// this.generateForm();
this.setDefaultValue(this.defaultData);
setTimeout(() => {
this.getCityByName(this.defaultData.state);
}, 2000);
setTimeout(() => {
this.manageHospitalForm.controls['user_id'].disable();
}, 500);
break;
}
//country list
let data: any = {
"source": 'country',
};
this.http.httpViaPost('datalist', data).subscribe((res:any) => {
//console.log(res.res);
this.countryList = res.res;
})
}
/** setting the default data **/
setDefaultValue(defaultValue) {
this.manageHospitalForm.patchValue({
user_id: defaultValue.user_id,
date_added: defaultValue.date_added,
hospitalname: defaultValue.hospitalname,
email: defaultValue.email,
contactperson: defaultValue.contactperson,
zip: defaultValue.zip,
city: defaultValue.city,
state: defaultValue.state,
address:defaultValue.address,
country:defaultValue.country
})
this.collect_phone_array = this.defaultData.contactemails;
this.collect_email_array = this.defaultData.contactphones;
if(this.defaultData.mpimage!=null){
this.imgMedical=this.defaultData.mpimage.basepath+this.defaultData.mpimage.image;
}
}
/** configuration for image **/
public configData: any = {
baseUrl: "https://fileupload.influxhostserver.com/",
endpoint: "uploads",
size: "51200", // kb
format: ["jpg", "jpeg", "png"], // use all small font
type: "inventory-file",
path: "files",
prefix: "_inventory-file",
formSubmit: false,
conversionNeeded: 0,
bucketName: "crmfiles.influxhostserver"
}
/** generating the form **/
generateForm() {
this.manageHospitalForm = this.formBuilder.group({
user_id: [this.useridval],
date_added: [{ value: this.date, disabled: true }],
hospitalname: [],
contactperson: [],
password: [],
confirmpassword: [],
state: [],
city: [],
zip: [],
address: [],
type: ['hospital'],
contactemails: [],
contactphones: [],
salesrepselect: [this.id],
status: [0],
email: [],
salesrepname: [this.salesrepname],
mpimage: [],
country:[]
});
}
/** for getting all states & cities function start here **/
allStateCityData() {
this.http.getSiteSettingData("./assets/data-set/state.json").subscribe(response => {
this.states = response;
});
this.http.getSiteSettingData("./assets/data-set/city.json").subscribe(response => {
this.allCities = response;
});
}
/** for getting all states & cities function end here **/
getCity(event: any) {
var val = event;
this.cities = this.allCities[val];
}
/** activating the state operation **/
getCityByName(stateName) {
this.cities = this.allCities[stateName];
}
/** collecting the email **/
collect_email(event: any) {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_email_array.push(event.target.value);
this.manageHospitalForm.controls['contactemails'].patchValue("");
return;
}
}
/** cleraring multiple emails **/
clearEmail(index) {
this.collect_email_array.splice(index, 1);
}
/** collecting the phone numbers **/
collect_phones(event: any) |
/** clearing the phones **/
clearPhones(index) {
this.collect_phone_array.splice(index, 1);
}
/** --------------------------submit------------------------**/
onSubmit() {
this.manageHospitalForm.value.user_id = this.manageHospitalForm.controls['user_id'].value;
this.manageHospitalForm.value.date_added = this.manageHospitalForm.controls['date_added'].value;
if (this.action == 'edit')
delete this.manageHospitalForm.value.confirmpassword;
// File Upload Works
if (this.configData.files) {
if (this.configData.files.length > 1) { this.ErrCode = true; return; }
this.manageHospitalForm.value.mpimage =
{
"basepath": this.configData.files[0].upload.data.basepath + '/' + this.configData.path + '/ | {
if (event.keyCode == 32 || event.keyCode == 13) {
this.collect_phone_array.push(event.target.value);
this.manageHospitalForm.controls['contactphones'].patchValue("");
return;
}
} | identifier_body |
option_definition.py | ForDictionary(tokens)
if tokens == -1:
print '''ERROR: found in definition of %s:\n
Please check your tokens and settings again''' %(name)
self.parents = parents
self.non_parents = non_parents
self.non_parent_exceptions = non_parent_exceptions
self.childs = childs
self.name = name
self.help = helpstr
self.plot = plot
self.tokens = tokens
self.showInGui = showInGui
self.continious_update = continious_update
self._mandatory=mandatory
self._extra_dependencies=extra_dependencies
self.non_unique=non_unique
# Please use empty lists
if self.childs == '': self.childs = []
if self.non_parents == '': self.non_parents = []
if self.non_parent_exceptions == '': self.non_parent_exceptions = []
self.dependencies = list(self.childs + self.non_parents + self.non_parent_exceptions + extra_dependencies)
if self._mandatory and not name in self.dependencies: self.dependencies.append(self.name)
for inp in gui_inputs:
assert inp.__class__.__name__ in GUI_definition.__all__
if not gui_inputs: gui_inputs = self._set_gui_input(gui_inputs,tokens)
self.gui_inputs = self._addBooleanToGuiInput(gui_inputs)
self.dict = {
'name' : name, #as defined in uvspec_lex.l
'group' : group, # e.g. WC
'help' : helpstr, # Help string (short), could appear as pop-up in GUI
'documentation' : documentation, # Full documentation
'tokens' : tokens, # Variable in uvspec inp_struct to change
'parents' : parents, # (specifies which options must also be defined together with this option)
# At least one of these options must also be set when this option is used
'non_parents' : non_parents, # specifies which options must not be defined together with this option
'non_parent_exceptions' : non_parent_exceptions, # specifies which options inside non_parents should be ignored
'childs' : childs, # (specifies which options can be defined with this option)
# Options which will be unlocked when defining this option
'mystic' : mystic, # mystic option
'threedmystic' : threedmystic, # 3D mystic option
'islidar' : islidar, # lidar option
'developer' : developer, # developer option, undocumented for esasLight
'plot' : plot, # Setup plotting for options which should be plotted
}
self.canEnable = self._canEnable
if speaker and enable_values:
self._speaker = speaker
assert not isinstance(enable_values, basestring), "Missing comma in one-item-tuple?"
self._enable_values = enable_values
self.canEnable = self._canEnableContinousOption
if extra_dependencies:
self.isMandatory = self._isMandatoryMixedOption
# Pretend to be a dictionary to avoid breaking old code
def __getitem__(self, *args, **kwargs):
return self.dict.__getitem__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
return self.dict.__setitem__(*args, **kwargs)
def __contains__(self, *args, **kwargs):
return self.dict.__contains__(*args, **kwargs)
def get(self, *args, **kwargs):
return self.dict.get(*args, **kwargs)
def _canEnableContinousOption(self, is_set, get_value):
"""
Remember that self.speaker must be a subclass of
continious_option.
"""
r = self._canEnable(is_set, get_value)
if r and is_set(self._speaker) and \
get_value(self._speaker)[0] in self._enable_values:
return r
else:
return False
def _canEnable(self, is_set, get_value):
"""
Tells the GUI whether the option should be enabled or disabled.
Returns True if the option should be enabled and False if it
should be disabled.
is_set is a function that returns True if an option is enabled
and has been edited by the user, else False. It takes one
argument, the name of an option as a string.
get_value returns the current value of an option
This is used to implement the logic in the GUI. If more
complex logic than the parent, non-parent, children logic is
needed this function should be overloaded.
Remember to update the dependency tuple, a tuple of options
which should be enabled or disabled depending on if this
option is set.
"""
parents = any([is_set(parent) for parent in self.parents]) \
or not self.parents
non_parents = all([(not is_set(non_parent) or self.non_parent_exceptions.count(non_parent) or non_parent==self.name) \
for non_parent in self.non_parents]) \
or not self.non_parents
return parents and non_parents
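# Editorial sketch (not in the original file): how a GUI might evaluate this logic.
# The option names below are hypothetical.
#
#   enabled = {'parent_a'}                      # options the user has already set
#   is_set = lambda name: name in enabled
#   get_value = lambda name: ()
#   opt.canEnable(is_set, get_value)
#
# This returns True only if at least one of opt.parents is set (or opt has no parents)
# and no option in opt.non_parents is set, apart from the listed exceptions and opt itself.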
def isMandatory(self, is_set, get_value):
"""
Returns True for mandatory options. Similar to canEnable.
"""
if self._mandatory and not is_set(self.name): return True
return False
def _isMandatoryMixedOption(self, is_set, get_value):
cond = [is_set(opt) for opt in self._extra_dependencies]
if all(cond):
return False
elif any(cond):
return True
else:
return False
def _set_gui_input(self,gui_inputs,tokens):
if not self.showInGui:
return gui_inputs
for inp in tokens:
try:
name = inp.gui_name
except KeyError:
pass
if not name: name = inp.get('name')
try:
vr = inp.get('valid_range')
except KeyError:
vr = None
if isinstance(inp, addSetting):
continue
elif isinstance(inp, addLogical):
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif isinstance(inp, addToken):
dtype = inp.get('datatype')
if dtype == float or dtype==Double:
if not vr: vr = (-1e99, 1e99)
gui_inp = (FloatInput(name=name,optional=inp.get('optional'),valid_range=vr,default=inp.get('default')),)
elif dtype == int:
if not vr: vr = (-1e99, 1e99)
gui_inp = (IntegerInput(name=name,valid_range=vr,optional=inp.get('optional'),default=inp.get('default')),)
elif vr:
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif dtype == file:
gui_inp = ( FileInput(name=name,optional=inp.get('optional')) ,)
else: gui_inp = (TextInput(name=name,optional=inp.get('optional')),)
gui_inputs = gui_inputs.__add__(gui_inp)
return gui_inputs
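# Editorial summary (not in the original file) of the token-to-widget mapping above:
# addSetting produces no widget; addLogical maps to ListInput; addToken maps to FloatInput
# for float/Double, IntegerInput for int, ListInput when a valid_range is given, FileInput
# for file, and TextInput otherwise.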
def _addBooleanToGuiInput(self,gui_inputs):
if not self.showInGui:
return ()
for inp in gui_inputs:
if not inp.optional or inp.__class__ == BooleanInput:
return gui_inputs
return ( BooleanInput(name=''), ).__add__(gui_inputs)
class Dimension():
"""
Options which can take dimensions (number+word) as argument ( 1D, 3D )
"""
def | (self):
self.valid_range = ["1d","3d"]
def get_valid_range(self):
return self.valid_range
class ProfileType():
"""
Options which can take several profile files as argument (e.g. 1D, 3D, moments, ipa_files)
"""
def __init__(self):
self.valid_range = ["1d","3d","ipa_files","moments"]
def get_valid_range(self):
return self.valid_range
class CaothType():
"""
Options which can take several profile as argument (e.g. wc, ic or any other profile)
"""
def __init__(self,caoth=None):
self.caoth = caoth
def get_caoth(self):
return self.caoth
class CaothoffType():
"""
Quick fix for new option names no_scattering and no_absorption
"""
class Double(float):
"""Double for c allocation double"""
class SignedFloats():
"""Signed floats for c allocation multiple floats"""
class Integers():
"""Integers for c allocation multiple integers"""
class VariableNumberOfLines():
pass
# valid_datatypes = ( # i.e. datatypes the GUI support (soon)
# ProfileType,
# CaothType,
# CaothoffType,
# Double,
# SignedFloats,
# VariableNumberOfLines,
# # In my opinion these should | __init__ | identifier_name |
option_definition.py | checkForDictionary(tokens)
if tokens == -1:
print '''ERROR: found in definition of %s:\n
Please check your tokens and settings again''' %(name)
self.parents = parents
self.non_parents = non_parents
self.non_parent_exceptions = non_parent_exceptions
self.childs = childs
self.name = name
self.help = helpstr
self.plot = plot
self.tokens = tokens
self.showInGui = showInGui
self.continious_update = continious_update
self._mandatory=mandatory
self._extra_dependencies=extra_dependencies
self.non_unique=non_unique
# Please use empty lists
if self.childs == '': self.childs = []
if self.non_parents == '': self.non_parents = []
if self.non_parent_exceptions == '': self.non_parent_exceptions = []
self.dependencies = list(self.childs + self.non_parents + self.non_parent_exceptions + extra_dependencies)
if self._mandatory and not name in self.dependencies: self.dependencies.append(self.name)
for inp in gui_inputs:
assert inp.__class__.__name__ in GUI_definition.__all__
if not gui_inputs: gui_inputs = self._set_gui_input(gui_inputs,tokens)
self.gui_inputs = self._addBooleanToGuiInput(gui_inputs)
self.dict = {
'name' : name, #as defined in uvspec_lex.l
'group' : group, # e.g. WC
'help' : helpstr, # Help string (short), could appear as pop-up in GUI
'documentation' : documentation, # Full documentation
'tokens' : tokens, # Variable in uvspec inp_struct to change
'parents' : parents, # (specifies which options must also be defined together with this option)
# At least one of these options must also be set when this option is used
'non_parents' : non_parents, # specifies which options must not be defined together with this option
'non_parent_exceptions' : non_parent_exceptions, # specifies which options inside non_parents should be ignored
'childs' : childs, # (specifies which options can be defined with this option)
# Options which will be unlocked when defining this option
'mystic' : mystic, # mystic option
'threedmystic' : threedmystic, # 3D mystic option
'islidar' : islidar, # lidar option
'developer' : developer, # developer option, undocumented for esasLight
'plot' : plot, # Setup plotting for options which should be plotted
}
self.canEnable = self._canEnable
if speaker and enable_values:
|
if extra_dependencies:
self.isMandatory = self._isMandatoryMixedOption
# Pretend to be a dictionary to avoid breaking old code
def __getitem__(self, *args, **kwargs):
return self.dict.__getitem__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
return self.dict.__setitem__(*args, **kwargs)
def __contains__(self, *args, **kwargs):
return self.dict.__contains__(*args, **kwargs)
def get(self, *args, **kwargs):
return self.dict.get(*args, **kwargs)
def _canEnableContinousOption(self, is_set, get_value):
"""
Remember that self.speaker must be a subclass of
continious_option.
"""
r = self._canEnable(is_set, get_value)
if r and is_set(self._speaker) and \
get_value(self._speaker)[0] in self._enable_values:
return r
else:
return False
def _canEnable(self, is_set, get_value):
"""
Tells the GUI whether the option should be enabled or disabled.
Returns True if the option should be enabled and False if it
should be disabled.
is_set is a function that returns True if an option is enabled
and has been edited by the user, else False. It takes one
argument, the name of an option as a string.
get_value returns the current value of an option
This is used to implement the logic in the GUI. If more
complex logic than the parent, non-parent, children logic is
needed this function should be overloaded.
Remember to update the dependency tuple, a tuple of options
which should be enabled or disabled depending on if this
option is set.
"""
parents = any([is_set(parent) for parent in self.parents]) \
or not self.parents
non_parents = all([(not is_set(non_parent) or self.non_parent_exceptions.count(non_parent) or non_parent==self.name) \
for non_parent in self.non_parents]) \
or not self.non_parents
return parents and non_parents
def isMandatory(self, is_set, get_value):
"""
Returns True for mandatory options. Similar to canEnable.
"""
if self._mandatory and not is_set(self.name): return True
return False
def _isMandatoryMixedOption(self, is_set, get_value):
cond = [is_set(opt) for opt in self._extra_dependencies]
if all(cond):
return False
elif any(cond):
return True
else:
return False
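# Editorial note (not in the original file): with extra_dependencies = ('opt_a', 'opt_b'),
# the option is reported as mandatory exactly when some but not all of them are set,
# e.g. is_set('opt_a') True and is_set('opt_b') False -> True; all set or none set -> False.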
def _set_gui_input(self,gui_inputs,tokens):
if not self.showInGui:
return gui_inputs
for inp in tokens:
try:
name = inp.gui_name
except KeyError:
pass
if not name: name = inp.get('name')
try:
vr = inp.get('valid_range')
except KeyError:
vr = None
if isinstance(inp, addSetting):
continue
elif isinstance(inp, addLogical):
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif isinstance(inp, addToken):
dtype = inp.get('datatype')
if dtype == float or dtype==Double:
if not vr: vr = (-1e99, 1e99)
gui_inp = (FloatInput(name=name,optional=inp.get('optional'),valid_range=vr,default=inp.get('default')),)
elif dtype == int:
if not vr: vr = (-1e99, 1e99)
gui_inp = (IntegerInput(name=name,valid_range=vr,optional=inp.get('optional'),default=inp.get('default')),)
elif vr:
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif dtype == file:
gui_inp = ( FileInput(name=name,optional=inp.get('optional')) ,)
else: gui_inp = (TextInput(name=name,optional=inp.get('optional')),)
gui_inputs = gui_inputs.__add__(gui_inp)
return gui_inputs
def _addBooleanToGuiInput(self,gui_inputs):
if not self.showInGui:
return ()
for inp in gui_inputs:
if not inp.optional or inp.__class__ == BooleanInput:
return gui_inputs
return ( BooleanInput(name=''), ).__add__(gui_inputs)
class Dimension():
"""
Options which can take dimensions (number+word) as argument ( 1D, 3D )
"""
def __init__(self):
self.valid_range = ["1d","3d"]
def get_valid_range(self):
return self.valid_range
class ProfileType():
"""
Options which can take several profile files as argument (e.g. 1D, 3D, moments, ipa_files)
"""
def __init__(self):
self.valid_range = ["1d","3d","ipa_files","moments"]
def get_valid_range(self):
return self.valid_range
class CaothType():
"""
Options which can take several profile as argument (e.g. wc, ic or any other profile)
"""
def __init__(self,caoth=None):
self.caoth = caoth
def get_caoth(self):
return self.caoth
class CaothoffType():
"""
Quick fix for new option names no_scattering and no_absorption
"""
class Double(float):
"""Double for c allocation double"""
class SignedFloats():
"""Signed floats for c allocation multiple floats"""
class Integers():
"""Integers for c allocation multiple integers"""
class VariableNumberOfLines():
pass
# valid_datatypes = ( # i.e. datatypes the GUI support (soon)
# ProfileType,
# CaothType,
# CaothoffType,
# Double,
# SignedFloats,
# VariableNumberOfLines,
# # In my opinion these should | self._speaker = speaker
assert not isinstance(enable_values, basestring), "Missing comma in one-item-tuple?"
self._enable_values = enable_values
self.canEnable = self._canEnableContinousOption | conditional_block |
option_definition.py | checkForDictionary(tokens)
if tokens == -1:
print '''ERROR: found in definition of %s:\n
Please check your tokens and settings again''' %(name)
self.parents = parents
self.non_parents = non_parents
self.non_parent_exceptions = non_parent_exceptions
self.childs = childs
self.name = name
self.help = helpstr
self.plot = plot
self.tokens = tokens
self.showInGui = showInGui
self.continious_update = continious_update
self._mandatory=mandatory
self._extra_dependencies=extra_dependencies
self.non_unique=non_unique
# Please use empty lists
if self.childs == '': self.childs = []
if self.non_parents == '': self.non_parents = []
if self.non_parent_exceptions == '': self.non_parent_exceptions = []
self.dependencies = list(self.childs + self.non_parents + self.non_parent_exceptions + extra_dependencies)
if self._mandatory and not name in self.dependencies: self.dependencies.append(self.name)
for inp in gui_inputs:
assert inp.__class__.__name__ in GUI_definition.__all__
if not gui_inputs: gui_inputs = self._set_gui_input(gui_inputs,tokens)
self.gui_inputs = self._addBooleanToGuiInput(gui_inputs)
self.dict = {
'name' : name, #as defined in uvspec_lex.l
'group' : group, # e.g. WC
'help' : helpstr, # Help string (short), could appear as pop-up in GUI
'documentation' : documentation, # Full documentation
'tokens' : tokens, # Variable in uvspec inp_struct to change
'parents' : parents, # (specifies which options must also be defined together with this option)
# At least one of these options must also be set when this option is used
'non_parents' : non_parents, # specifies which options must not be defined together with this option
'non_parent_exceptions' : non_parent_exceptions, # specifies which options inside non_parents should be ignored
'childs' : childs, # (specifies which options can be defined with this option)
# Options which will be unlocked when defining this option
'mystic' : mystic, # mystic option
'threedmystic' : threedmystic, # 3D mystic option
'islidar' : islidar, # lidar option
'developer' : developer, # developer option, undocumented for esasLight
'plot' : plot, # Setup plotting for options which should be plotted
}
self.canEnable = self._canEnable
if speaker and enable_values:
self._speaker = speaker
assert not isinstance(enable_values, basestring), "Missing comma in one-item-tuple?"
self._enable_values = enable_values
self.canEnable = self._canEnableContinousOption
if extra_dependencies:
self.isMandatory = self._isMandatoryMixedOption
# Pretend to be a dictionary to avoid breaking old code
def __getitem__(self, *args, **kwargs):
return self.dict.__getitem__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
return self.dict.__setitem__(*args, **kwargs)
def __contains__(self, *args, **kwargs):
return self.dict.__contains__(*args, **kwargs)
def get(self, *args, **kwargs):
return self.dict.get(*args, **kwargs)
def _canEnableContinousOption(self, is_set, get_value):
"""
Remember that self.speaker must be a subclass of
continious_option.
"""
r = self._canEnable(is_set, get_value)
if r and is_set(self._speaker) and \
get_value(self._speaker)[0] in self._enable_values:
return r
else:
return False
def _canEnable(self, is_set, get_value):
"""
Tells the GUI whether the option should be enabled or disabled.
Returns True if the option should be enabled and False if it
should be disabled.
is_set is a function that returns True if an option is enabled
and has been edited by the user, else False. It takes one
argument, the name of an option as a string.
get_value returns the current value of an option
This is used to implement the logic in the GUI. If more
complex logic than the parent, non-parent, children logic is
needed this function should be overloaded.
Remember to update the dependency tuple, a tuple of options
which should be enabled or disabled depending on if this
option is set.
"""
parents = any([is_set(parent) for parent in self.parents]) \
or not self.parents
non_parents = all([(not is_set(non_parent) or self.non_parent_exceptions.count(non_parent) or non_parent==self.name) \
for non_parent in self.non_parents]) \
or not self.non_parents
return parents and non_parents
def isMandatory(self, is_set, get_value):
"""
Returns True for mandatory options. Similar to canEnable.
"""
if self._mandatory and not is_set(self.name): return True
return False
def _isMandatoryMixedOption(self, is_set, get_value):
cond = [is_set(opt) for opt in self._extra_dependencies]
if all(cond):
return False
elif any(cond):
return True
else:
return False
def _set_gui_input(self,gui_inputs,tokens):
| if not vr: vr = (-1e99, 1e99)
gui_inp = (FloatInput(name=name,optional=inp.get('optional'),valid_range=vr,default=inp.get('default')),)
elif dtype == int:
if not vr: vr = (-1e99, 1e99)
gui_inp = (IntegerInput(name=name,valid_range=vr,optional=inp.get('optional'),default=inp.get('default')),)
elif vr:
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif dtype == file:
gui_inp = ( FileInput(name=name,optional=inp.get('optional')) ,)
else: gui_inp = (TextInput(name=name,optional=inp.get('optional')),)
gui_inputs = gui_inputs.__add__(gui_inp)
return gui_inputs
def _addBooleanToGuiInput(self,gui_inputs):
if not self.showInGui:
return ()
for inp in gui_inputs:
if not inp.optional or inp.__class__ == BooleanInput:
return gui_inputs
return ( BooleanInput(name=''), ).__add__(gui_inputs)
class Dimension():
"""
Options which can take dimensions (number+word) as argument ( 1D, 3D )
"""
def __init__(self):
self.valid_range = ["1d","3d"]
def get_valid_range(self):
return self.valid_range
class ProfileType():
"""
Options which can take several profile files as argument (e.g. 1D, 3D, moments, ipa_files)
"""
def __init__(self):
self.valid_range = ["1d","3d","ipa_files","moments"]
def get_valid_range(self):
return self.valid_range
class CaothType():
"""
Options which can take several profile as argument (e.g. wc, ic or any other profile)
"""
def __init__(self,caoth=None):
self.caoth = caoth
def get_caoth(self):
return self.caoth
class CaothoffType():
"""
Quick fix for new option names no_scattering and no_absorption
"""
class Double(float):
"""Double for c allocation double"""
class SignedFloats():
"""Signed floats for c allocation multiple floats"""
class Integers():
"""Integers for c allocation multiple integers"""
class VariableNumberOfLines():
pass
# valid_datatypes = ( # i.e. datatypes the GUI support (soon)
# ProfileType,
# CaothType,
# CaothoffType,
# Double,
# SignedFloats,
# VariableNumberOfLines,
# # In my opinion these should | if not self.showInGui:
return gui_inputs
for inp in tokens:
try:
name = inp.gui_name
except KeyError:
pass
if not name: name = inp.get('name')
try:
vr = inp.get('valid_range')
except KeyError:
vr = None
if isinstance(inp, addSetting):
continue
elif isinstance(inp, addLogical):
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif isinstance(inp, addToken):
dtype = inp.get('datatype')
if dtype == float or dtype==Double: | identifier_body |
option_definition.py | ForDictionary(tokens)
if tokens == -1:
print '''ERROR: found in definition of %s:\n
Please check your tokens and settings again''' %(name)
self.parents = parents
self.non_parents = non_parents
self.non_parent_exceptions = non_parent_exceptions
self.childs = childs
self.name = name
self.help = helpstr
self.plot = plot
self.tokens = tokens
self.showInGui = showInGui
self.continious_update = continious_update
self._mandatory=mandatory
self._extra_dependencies=extra_dependencies
self.non_unique=non_unique
# Please use empty lists
if self.childs == '': self.childs = []
if self.non_parents == '': self.non_parents = []
if self.non_parent_exceptions == '': self.non_parent_exceptions = []
self.dependencies = list(self.childs + self.non_parents + self.non_parent_exceptions + extra_dependencies)
if self._mandatory and not name in self.dependencies: self.dependencies.append(self.name)
for inp in gui_inputs:
assert inp.__class__.__name__ in GUI_definition.__all__
if not gui_inputs: gui_inputs = self._set_gui_input(gui_inputs,tokens)
self.gui_inputs = self._addBooleanToGuiInput(gui_inputs)
self.dict = {
'name' : name, #as defined in uvspec_lex.l
'group' : group, # e.g. WC
'help' : helpstr, # Help string (short), could appear as pop-up in GUI
'documentation' : documentation, # Full documentation
'tokens' : tokens, # Variable in uvspec inp_struct to change
'parents' : parents, # (specifies which options must also be defined together with this option)
# At least one of these options must also be set when this option is used
'non_parents' : non_parents, # specifies which options must not be defined together with this option
'non_parent_exceptions' : non_parent_exceptions, # specifies which options inside non_parents should be ignored
'childs' : childs, # (specifies which options can be defined with this option)
# Options which will be unlocked when defining this option
'mystic' : mystic, # mystic option
'threedmystic' : threedmystic, # 3D mystic option
'islidar' : islidar, # lidar option
'developer' : developer, # developer option, undocumented for esasLight
'plot' : plot, # Setup plotting for options which should be plotted
}
self.canEnable = self._canEnable
if speaker and enable_values:
self._speaker = speaker
assert not isinstance(enable_values, basestring), "Missing comma in one-item-tuple?"
self._enable_values = enable_values
self.canEnable = self._canEnableContinousOption
if extra_dependencies:
self.isMandatory = self._isMandatoryMixedOption
# Pretend to be a dictionary to avoid breaking old code
def __getitem__(self, *args, **kwargs):
return self.dict.__getitem__(*args, **kwargs)
def __setitem__(self, *args, **kwargs):
return self.dict.__setitem__(*args, **kwargs)
def __contains__(self, *args, **kwargs):
return self.dict.__contains__(*args, **kwargs)
def get(self, *args, **kwargs):
return self.dict.get(*args, **kwargs)
def _canEnableContinousOption(self, is_set, get_value):
"""
Remember that self.speaker must be a subclass of
continious_option.
"""
r = self._canEnable(is_set, get_value)
if r and is_set(self._speaker) and \
get_value(self._speaker)[0] in self._enable_values:
return r
else:
return False
def _canEnable(self, is_set, get_value):
"""
Tells the GUI whether the option should be enabled or disabled.
Returns True if the option should be enabled and False if it
should be disabled.
is_set is a function that returns True if an option is enabled
and has been edited by the user, else False. It takes one
argument, the name of an option as a string.
get_value returns the current value of an option
This is used to implement the logic in the GUI. If more
complex logic than the parent, non-parent, children logic is
needed this function should be overloaded.
Remember to update the dependency tuple, a tuple of options
which should be enabled or disabled depending on if this
option is set.
"""
parents = any([is_set(parent) for parent in self.parents]) \
or not self.parents
non_parents = all([(not is_set(non_parent) or self.non_parent_exceptions.count(non_parent) or non_parent==self.name) \
for non_parent in self.non_parents]) \
or not self.non_parents
return parents and non_parents
def isMandatory(self, is_set, get_value):
"""
Returns True for mandatory options. Similar to canEnable.
"""
if self._mandatory and not is_set(self.name): return True
return False
def _isMandatoryMixedOption(self, is_set, get_value):
cond = [is_set(opt) for opt in self._extra_dependencies]
if all(cond):
return False
elif any(cond):
return True
else:
return False
def _set_gui_input(self,gui_inputs,tokens):
if not self.showInGui:
return gui_inputs
for inp in tokens:
try:
name = inp.gui_name
except KeyError:
pass
if not name: name = inp.get('name')
try:
vr = inp.get('valid_range')
except KeyError:
vr = None
if isinstance(inp, addSetting):
continue
elif isinstance(inp, addLogical):
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif isinstance(inp, addToken):
dtype = inp.get('datatype') | if not vr: vr = (-1e99, 1e99)
gui_inp = (FloatInput(name=name,optional=inp.get('optional'),valid_range=vr,default=inp.get('default')),)
elif dtype == int:
if not vr: vr = (-1e99, 1e99)
gui_inp = (IntegerInput(name=name,valid_range=vr,optional=inp.get('optional'),default=inp.get('default')),)
elif vr:
gui_inp = (ListInput(name=name,valid_range=inp.get('valid_range'),optional=inp.get('optional'),default=inp.get('default'),logical_file=inp.get('logical_file')),)
elif dtype == file:
gui_inp = ( FileInput(name=name,optional=inp.get('optional')) ,)
else: gui_inp = (TextInput(name=name,optional=inp.get('optional')),)
gui_inputs = gui_inputs.__add__(gui_inp)
return gui_inputs
def _addBooleanToGuiInput(self,gui_inputs):
if not self.showInGui:
return ()
for inp in gui_inputs:
if not inp.optional or inp.__class__ == BooleanInput:
return gui_inputs
return ( BooleanInput(name=''), ).__add__(gui_inputs)
class Dimension():
"""
Options which can take dimensions (number+word) as argument ( 1D, 3D )
"""
def __init__(self):
self.valid_range = ["1d","3d"]
def get_valid_range(self):
return self.valid_range
class ProfileType():
"""
Options which can take several profile files as argument (e.g. 1D, 3D, moments, ipa_files)
"""
def __init__(self):
self.valid_range = ["1d","3d","ipa_files","moments"]
def get_valid_range(self):
return self.valid_range
class CaothType():
"""
Options which can take several profile as argument (e.g. wc, ic or any other profile)
"""
def __init__(self,caoth=None):
self.caoth = caoth
def get_caoth(self):
return self.caoth
class CaothoffType():
"""
Quick fix for new option names no_scattering and no_absorption
"""
class Double(float):
"""Double for c allocation double"""
class SignedFloats():
"""Signed floats for c allocation multiple floats"""
class Integers():
"""Integers for c allocation multiple integers"""
class VariableNumberOfLines():
pass
# valid_datatypes = ( # i.e. datatypes the GUI support (soon)
# ProfileType,
# CaothType,
# CaothoffType,
# Double,
# SignedFloats,
# VariableNumberOfLines,
# # In my opinion these should be | if dtype == float or dtype==Double: | random_line_split |
mod.rs | of changes to files under `path` from external sources, it
/// expects to have sole maintenance of the contents.
pub fn new<T>(path: T, size: u64) -> Result<Self>
where
PathBuf: From<T>,
{
LruDiskCache {
lru: LruCache::with_meter(size, FileSize),
root: PathBuf::from(path),
}
.init()
}
/// Return the current size of all the files in the cache.
pub fn size(&self) -> u64 {
self.lru.size()
}
/// Return the count of entries in the cache.
pub fn len(&self) -> usize {
self.lru.len()
}
pub fn is_empty(&self) -> bool {
self.lru.len() == 0
}
/// Return the maximum size of the cache.
pub fn capacity(&self) -> u64 {
self.lru.capacity()
}
/// Return the path in which the cache is stored.
pub fn path(&self) -> &Path {
self.root.as_path()
}
/// Return the path that `key` would be stored at.
fn rel_to_abs_path<K: AsRef<Path>>(&self, rel_path: K) -> PathBuf {
self.root.join(rel_path)
}
/// Scan `self.root` for existing files and store them.
fn init(mut self) -> Result<Self> {
fs::create_dir_all(&self.root)?;
for (file, size) in get_all_files(&self.root) {
if !self.can_store(size) {
fs::remove_file(file).unwrap_or_else(|e| {
error!(
"Error removing file `{}` which is too large for the cache ({} bytes)",
e, size
)
});
} else {
self.add_file(AddFile::AbsPath(file), size)
.unwrap_or_else(|e| error!("Error adding file: {}", e));
}
}
Ok(self)
}
/// Returns `true` if the disk cache can store a file of `size` bytes.
pub fn can_store(&self, size: u64) -> bool {
size <= self.lru.capacity()
}
/// Add the file at `path` of size `size` to the cache.
fn add_file(&mut self, addfile_path: AddFile<'_>, size: u64) -> Result<()> | }
fn insert_by<K: AsRef<OsStr>, F: FnOnce(&Path) -> io::Result<()>>(
&mut self,
key: K,
size: Option<u64>,
by: F,
) -> Result<()> {
if let Some(size) = size {
if !self.can_store(size) {
return Err(Error::FileTooLarge);
}
}
let rel_path = key.as_ref();
let path = self.rel_to_abs_path(rel_path);
fs::create_dir_all(path.parent().expect("Bad path?"))?;
by(&path)?;
let size = match size {
Some(size) => size,
None => fs::metadata(path)?.len(),
};
self.add_file(AddFile::RelPath(rel_path), size)
.map_err(|e| {
error!(
"Failed to insert file `{}`: {}",
rel_path.to_string_lossy(),
e
);
fs::remove_file(self.rel_to_abs_path(rel_path))
.expect("Failed to remove file we just created!");
e
})
}
/// Add a file by calling `with` with the open `File` corresponding to the cache at path `key`.
pub fn insert_with<K: AsRef<OsStr>, F: FnOnce(File) -> io::Result<()>>(
&mut self,
key: K,
with: F,
) -> Result<()> {
self.insert_by(key, None, |path| with(File::create(path)?))
}
/// Add a file with `bytes` as its contents to the cache at path `key`.
pub fn insert_bytes<K: AsRef<OsStr>>(&mut self, key: K, bytes: &[u8]) -> Result<()> {
self.insert_by(key, Some(bytes.len() as u64), |path| {
let mut f = File::create(path)?;
f.write_all(bytes)?;
Ok(())
})
}
/// Add an existing file at `path` to the cache at path `key`.
pub fn insert_file<K: AsRef<OsStr>, P: AsRef<OsStr>>(&mut self, key: K, path: P) -> Result<()> {
let size = fs::metadata(path.as_ref())?.len();
self.insert_by(key, Some(size), |new_path| {
fs::rename(path.as_ref(), new_path).or_else(|_| {
warn!("fs::rename failed, falling back to copy!");
fs::copy(path.as_ref(), new_path)?;
fs::remove_file(path.as_ref()).unwrap_or_else(|e| {
error!("Failed to remove original file in insert_file: {}", e)
});
Ok(())
})
})
}
/// Return `true` if a file with path `key` is in the cache.
pub fn contains_key<K: AsRef<OsStr>>(&self, key: K) -> bool {
self.lru.contains_key(key.as_ref())
}
/// Get an opened `File` for `key`, if one exists and can be opened. Updates the LRU state
/// of the file if present. Avoid using this method if at all possible, prefer `.get`.
pub fn get_file<K: AsRef<OsStr>>(&mut self, key: K) -> Result<File> {
let rel_path = key.as_ref();
let path = self.rel_to_abs_path(rel_path);
self.lru
.get(rel_path)
.ok_or(Error::FileNotInCache)
.and_then(|_| {
let t = FileTime::now();
set_file_times(&path, t, t)?;
File::open(path).map_err(Into::into)
})
}
/// Get an opened readable and seekable handle to the file at `key`, if one exists and can
/// be opened. Updates the LRU state of the file if present.
pub fn get<K: AsRef<OsStr>>(&mut self, key: K) -> Result<Box<dyn ReadSeek>> {
self.get_file(key).map(|f| Box::new(f) as Box<dyn ReadSeek>)
}
/// Remove the given key from the cache.
pub fn remove<K: AsRef<OsStr>>(&mut self, key: K) -> Result<()> {
match self.lru.remove(key.as_ref()) {
Some(_) => {
let path = self.rel_to_abs_path(key.as_ref());
fs::remove_file(&path).map_err(|e| {
error!("Error removing file from cache: `{:?}`: {}", path, e);
Into::into(e)
})
}
None => Ok(()),
}
}
}
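// Editorial usage sketch (not part of the original crate); the path, capacity and keys
// below are placeholders.
#[allow(dead_code)]
fn lru_disk_cache_example() -> Result<Vec<u8>> {
    // A 64 MiB cache rooted at a scratch directory.
    let mut cache = LruDiskCache::new("/tmp/example-cache", 64 * 1024 * 1024)?;
    // Store some bytes under a relative key, then read them back through a ReadSeek handle.
    cache.insert_bytes("objects/abc123", b"cached contents")?;
    let mut handle = cache.get("objects/abc123")?;
    let mut contents = Vec::new();
    handle.read_to_end(&mut contents)?;
    Ok(contents)
}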
#[cfg(test)]
mod tests {
use super::fs::{self, File};
use super::{Error, LruDiskCache};
use filetime::{set_file_times, FileTime};
use std::io::{self, Read, Write};
use std::path::{Path, PathBuf};
use tempfile::TempDir;
struct TestFixture {
/// Temp directory.
pub tempdir: TempDir,
}
fn create_file<T: AsRef<Path>, F: FnOnce(File) -> io::Result<()>>(
dir: &Path,
path: T,
fill_contents: F,
) -> io::Result<PathBuf> {
let b = dir.join(path);
fs::create_dir_all(b.parent().unwrap())?;
let f = fs::File::create(&b)?;
fill_contents(f)?;
b.canonicalize()
}
/// Set the last modified time of `path` backwards by `seconds` seconds.
fn set_mtime_back<T: AsRef<Path>>(path: T, seconds: usize) {
let m = fs::metadata(path.as_ref()).unwrap();
let t = FileTime::from_last_modification_time(&m);
let t = FileTime::from_unix_time(t.unix_seconds() - seconds as i64, t.nanoseconds());
set_file_times(path, | {
if !self.can_store(size) {
return Err(Error::FileTooLarge);
}
let rel_path = match addfile_path {
AddFile::AbsPath(ref p) => p.strip_prefix(&self.root).expect("Bad path?").as_os_str(),
AddFile::RelPath(p) => p,
};
//TODO: ideally LRUCache::insert would give us back the entries it had to remove.
while self.lru.size() + size > self.lru.capacity() {
let (rel_path, _) = self.lru.remove_lru().expect("Unexpectedly empty cache!");
let remove_path = self.rel_to_abs_path(rel_path);
//TODO: check that files are removable during `init`, so that this is only
// due to outside interference.
fs::remove_file(&remove_path).unwrap_or_else(|e| {
panic!("Error removing file from cache: `{:?}`: {}", remove_path, e)
});
}
self.lru.insert(rel_path.to_owned(), size);
Ok(()) | identifier_body |
mod.rs | <Q: ?Sized>(&self, _: &Q, v: &u64) -> usize
where
K: Borrow<Q>,
{
*v as usize
}
}
/// Return an iterator of `(path, size)` of files under `path` sorted by ascending last-modified
/// time, such that the oldest modified file is returned first.
fn get_all_files<P: AsRef<Path>>(path: P) -> Box<dyn Iterator<Item = (PathBuf, u64)>> {
let mut files: Vec<_> = WalkDir::new(path.as_ref())
.into_iter()
.filter_map(|e| {
e.ok().and_then(|f| {
// Only look at files
if f.file_type().is_file() {
// Get the last-modified time, size, and the full path.
f.metadata().ok().and_then(|m| {
m.modified()
.ok()
.map(|mtime| (mtime, f.path().to_owned(), m.len()))
})
} else {
None
}
})
})
.collect();
// Sort by last-modified-time, so oldest file first.
files.sort_by_key(|k| k.0);
Box::new(files.into_iter().map(|(_mtime, path, size)| (path, size)))
}
/// An LRU cache of files on disk.
pub struct LruDiskCache<S: BuildHasher = RandomState> {
lru: LruCache<OsString, u64, S, FileSize>,
root: PathBuf,
}
/// Errors returned by this crate.
#[derive(Debug)]
pub enum Error {
/// The file was too large to fit in the cache.
FileTooLarge,
/// The file was not in the cache.
FileNotInCache,
/// An IO Error occurred.
Io(io::Error),
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Error::FileTooLarge => write!(f, "File too large"),
Error::FileNotInCache => write!(f, "File not in cache"),
Error::Io(ref e) => write!(f, "{}", e),
}
}
}
impl StdError for Error {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match self {
Error::FileTooLarge => None,
Error::FileNotInCache => None,
Error::Io(ref e) => Some(e),
}
}
}
impl From<io::Error> for Error {
fn from(e: io::Error) -> Error {
Error::Io(e)
}
}
/// A convenience `Result` type
pub type Result<T> = std::result::Result<T, Error>;
/// Trait objects can't be bounded by more than one non-builtin trait.
pub trait ReadSeek: Read + Seek + Send {}
impl<T: Read + Seek + Send> ReadSeek for T {}
enum AddFile<'a> {
AbsPath(PathBuf),
RelPath(&'a OsStr),
}
impl LruDiskCache {
/// Create an `LruDiskCache` that stores files in `path`, limited to `size` bytes.
///
/// Existing files in `path` will be stored with their last-modified time from the filesystem
/// used as the order for the recency of their use. Any files that are individually larger
/// than `size` bytes will be removed.
///
/// The cache is not observant of changes to files under `path` from external sources, it
/// expects to have sole maintenance of the contents.
pub fn new<T>(path: T, size: u64) -> Result<Self>
where
PathBuf: From<T>,
{
LruDiskCache {
lru: LruCache::with_meter(size, FileSize),
root: PathBuf::from(path),
}
.init()
}
/// Return the current size of all the files in the cache.
pub fn size(&self) -> u64 {
self.lru.size()
}
/// Return the count of entries in the cache.
pub fn len(&self) -> usize {
self.lru.len()
}
pub fn is_empty(&self) -> bool {
self.lru.len() == 0
}
/// Return the maximum size of the cache.
pub fn capacity(&self) -> u64 {
self.lru.capacity()
}
/// Return the path in which the cache is stored.
pub fn path(&self) -> &Path {
self.root.as_path()
}
/// Return the path that `key` would be stored at.
fn rel_to_abs_path<K: AsRef<Path>>(&self, rel_path: K) -> PathBuf {
self.root.join(rel_path)
}
/// Scan `self.root` for existing files and store them.
fn init(mut self) -> Result<Self> {
fs::create_dir_all(&self.root)?;
for (file, size) in get_all_files(&self.root) {
if !self.can_store(size) {
fs::remove_file(file).unwrap_or_else(|e| {
error!(
"Error removing file `{}` which is too large for the cache ({} bytes)",
e, size
)
});
} else {
self.add_file(AddFile::AbsPath(file), size)
.unwrap_or_else(|e| error!("Error adding file: {}", e));
}
}
Ok(self)
}
/// Returns `true` if the disk cache can store a file of `size` bytes.
pub fn can_store(&self, size: u64) -> bool {
size <= self.lru.capacity()
}
/// Add the file at `path` of size `size` to the cache.
fn add_file(&mut self, addfile_path: AddFile<'_>, size: u64) -> Result<()> {
if !self.can_store(size) {
return Err(Error::FileTooLarge);
}
let rel_path = match addfile_path {
AddFile::AbsPath(ref p) => p.strip_prefix(&self.root).expect("Bad path?").as_os_str(),
AddFile::RelPath(p) => p,
};
//TODO: ideally LRUCache::insert would give us back the entries it had to remove.
while self.lru.size() + size > self.lru.capacity() {
let (rel_path, _) = self.lru.remove_lru().expect("Unexpectedly empty cache!");
let remove_path = self.rel_to_abs_path(rel_path);
//TODO: check that files are removable during `init`, so that this is only
// due to outside interference.
fs::remove_file(&remove_path).unwrap_or_else(|e| {
panic!("Error removing file from cache: `{:?}`: {}", remove_path, e)
});
}
self.lru.insert(rel_path.to_owned(), size);
Ok(())
}
fn insert_by<K: AsRef<OsStr>, F: FnOnce(&Path) -> io::Result<()>>(
&mut self,
key: K,
size: Option<u64>,
by: F,
) -> Result<()> {
if let Some(size) = size {
if !self.can_store(size) {
return Err(Error::FileTooLarge);
}
}
let rel_path = key.as_ref();
let path = self.rel_to_abs_path(rel_path);
fs::create_dir_all(path.parent().expect("Bad path?"))?;
by(&path)?;
let size = match size {
Some(size) => size,
None => fs::metadata(path)?.len(),
};
self.add_file(AddFile::RelPath(rel_path), size)
.map_err(|e| {
error!(
"Failed to insert file `{}`: {}",
rel_path.to_string_lossy(),
e
);
fs::remove_file(self.rel_to_abs_path(rel_path))
.expect("Failed to remove file we just created!");
e
})
}
/// Add a file by calling `with` with the open `File` corresponding to the cache at path `key`.
pub fn insert_with<K: AsRef<OsStr>, F: FnOnce(File) -> io::Result<()>>(
&mut self,
key: K,
with: F,
) -> Result<()> {
self.insert_by(key, None, |path| with(File::create(path)?))
}
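// Editorial sketch (not in the original crate): `insert_with` streams data straight into
// the cache file, e.g.
//   cache.insert_with("logs/run1", |mut f| f.write_all(b"line 1\n"))?;
// The closure receives the freshly created `File`; the entry's size is read from disk
// afterwards and accounted against the cache capacity.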
/// Add a file with `bytes` as its contents to the cache at path `key`.
pub fn insert_bytes<K: AsRef<OsStr>>(&mut self, key: K, bytes: &[u8]) -> Result<()> {
self.insert_by(key, Some(bytes.len() as u64), |path| {
let mut f = File::create(path)?;
f.write_all(bytes)?;
Ok(())
})
}
/// Add an existing file at `path` to the cache at path `key`.
pub fn insert_file<K: AsRef<OsStr>, P: AsRef<OsStr>>(&mut self, key: K, path: P) -> Result<()> {
let size = fs::metadata(path.as_ref())?.len();
self.insert_by(key, Some(size), |new_path| {
fs::rename(path.as_ref(), new_path).or_else(|_| {
warn!(" | measure | identifier_name |
|
mod.rs | : u64) -> Result<()> {
if !self.can_store(size) {
return Err(Error::FileTooLarge);
}
let rel_path = match addfile_path {
AddFile::AbsPath(ref p) => p.strip_prefix(&self.root).expect("Bad path?").as_os_str(),
AddFile::RelPath(p) => p,
};
//TODO: ideally LRUCache::insert would give us back the entries it had to remove.
while self.lru.size() + size > self.lru.capacity() {
let (rel_path, _) = self.lru.remove_lru().expect("Unexpectedly empty cache!");
let remove_path = self.rel_to_abs_path(rel_path);
//TODO: check that files are removable during `init`, so that this is only
// due to outside interference.
fs::remove_file(&remove_path).unwrap_or_else(|e| {
panic!("Error removing file from cache: `{:?}`: {}", remove_path, e)
});
}
self.lru.insert(rel_path.to_owned(), size);
Ok(())
}
fn insert_by<K: AsRef<OsStr>, F: FnOnce(&Path) -> io::Result<()>>(
&mut self,
key: K,
size: Option<u64>,
by: F,
) -> Result<()> {
if let Some(size) = size {
if !self.can_store(size) {
return Err(Error::FileTooLarge);
}
}
let rel_path = key.as_ref();
let path = self.rel_to_abs_path(rel_path);
fs::create_dir_all(path.parent().expect("Bad path?"))?;
by(&path)?;
let size = match size {
Some(size) => size,
None => fs::metadata(path)?.len(),
};
self.add_file(AddFile::RelPath(rel_path), size)
.map_err(|e| {
error!(
"Failed to insert file `{}`: {}",
rel_path.to_string_lossy(),
e
);
fs::remove_file(self.rel_to_abs_path(rel_path))
.expect("Failed to remove file we just created!");
e
})
}
/// Add a file by calling `with` with the open `File` corresponding to the cache at path `key`.
pub fn insert_with<K: AsRef<OsStr>, F: FnOnce(File) -> io::Result<()>>(
&mut self,
key: K,
with: F,
) -> Result<()> {
self.insert_by(key, None, |path| with(File::create(path)?))
}
/// Add a file with `bytes` as its contents to the cache at path `key`.
pub fn insert_bytes<K: AsRef<OsStr>>(&mut self, key: K, bytes: &[u8]) -> Result<()> {
self.insert_by(key, Some(bytes.len() as u64), |path| {
let mut f = File::create(path)?;
f.write_all(bytes)?;
Ok(())
})
}
/// Add an existing file at `path` to the cache at path `key`.
pub fn insert_file<K: AsRef<OsStr>, P: AsRef<OsStr>>(&mut self, key: K, path: P) -> Result<()> {
let size = fs::metadata(path.as_ref())?.len();
self.insert_by(key, Some(size), |new_path| {
fs::rename(path.as_ref(), new_path).or_else(|_| {
warn!("fs::rename failed, falling back to copy!");
fs::copy(path.as_ref(), new_path)?;
fs::remove_file(path.as_ref()).unwrap_or_else(|e| {
error!("Failed to remove original file in insert_file: {}", e)
});
Ok(())
})
})
}
/// Return `true` if a file with path `key` is in the cache.
pub fn contains_key<K: AsRef<OsStr>>(&self, key: K) -> bool {
self.lru.contains_key(key.as_ref())
}
/// Get an opened `File` for `key`, if one exists and can be opened. Updates the LRU state
/// of the file if present. Avoid using this method if at all possible, prefer `.get`.
pub fn get_file<K: AsRef<OsStr>>(&mut self, key: K) -> Result<File> {
let rel_path = key.as_ref();
let path = self.rel_to_abs_path(rel_path);
self.lru
.get(rel_path)
.ok_or(Error::FileNotInCache)
.and_then(|_| {
let t = FileTime::now();
set_file_times(&path, t, t)?;
File::open(path).map_err(Into::into)
})
}
/// Get an opened readable and seekable handle to the file at `key`, if one exists and can
/// be opened. Updates the LRU state of the file if present.
pub fn get<K: AsRef<OsStr>>(&mut self, key: K) -> Result<Box<dyn ReadSeek>> {
self.get_file(key).map(|f| Box::new(f) as Box<dyn ReadSeek>)
}
/// Remove the given key from the cache.
pub fn remove<K: AsRef<OsStr>>(&mut self, key: K) -> Result<()> {
match self.lru.remove(key.as_ref()) {
Some(_) => {
let path = self.rel_to_abs_path(key.as_ref());
fs::remove_file(&path).map_err(|e| {
error!("Error removing file from cache: `{:?}`: {}", path, e);
Into::into(e)
})
}
None => Ok(()),
}
}
}
#[cfg(test)]
mod tests {
use super::fs::{self, File};
use super::{Error, LruDiskCache};
use filetime::{set_file_times, FileTime};
use std::io::{self, Read, Write};
use std::path::{Path, PathBuf};
use tempfile::TempDir;
struct TestFixture {
/// Temp directory.
pub tempdir: TempDir,
}
fn create_file<T: AsRef<Path>, F: FnOnce(File) -> io::Result<()>>(
dir: &Path,
path: T,
fill_contents: F,
) -> io::Result<PathBuf> {
let b = dir.join(path);
fs::create_dir_all(b.parent().unwrap())?;
let f = fs::File::create(&b)?;
fill_contents(f)?;
b.canonicalize()
}
/// Set the last modified time of `path` backwards by `seconds` seconds.
fn set_mtime_back<T: AsRef<Path>>(path: T, seconds: usize) {
let m = fs::metadata(path.as_ref()).unwrap();
let t = FileTime::from_last_modification_time(&m);
let t = FileTime::from_unix_time(t.unix_seconds() - seconds as i64, t.nanoseconds());
set_file_times(path, t, t).unwrap();
}
fn read_all<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
let mut v = vec![];
r.read_to_end(&mut v)?;
Ok(v)
}
impl TestFixture {
pub fn new() -> TestFixture {
TestFixture {
tempdir: tempfile::Builder::new()
.prefix("lru-disk-cache-test")
.tempdir()
.unwrap(),
}
}
pub fn tmp(&self) -> &Path {
self.tempdir.path()
}
pub fn create_file<T: AsRef<Path>>(&self, path: T, size: usize) -> PathBuf {
create_file(self.tempdir.path(), path, |mut f| {
f.write_all(&vec![0; size])
})
.unwrap()
}
}
#[test]
fn test_empty_dir() {
let f = TestFixture::new();
LruDiskCache::new(f.tmp(), 1024).unwrap();
}
#[test]
fn test_missing_root() {
let f = TestFixture::new();
LruDiskCache::new(f.tmp().join("not-here"), 1024).unwrap();
}
#[test]
fn test_some_existing_files() {
let f = TestFixture::new();
f.create_file("file1", 10);
f.create_file("file2", 10);
let c = LruDiskCache::new(f.tmp(), 20).unwrap();
assert_eq!(c.size(), 20);
assert_eq!(c.len(), 2);
}
#[test]
fn test_existing_file_too_large() {
let f = TestFixture::new();
// Create files explicitly in the past.
set_mtime_back(f.create_file("file1", 10), 10);
set_mtime_back(f.create_file("file2", 10), 5);
let c = LruDiskCache::new(f.tmp(), 15).unwrap();
assert_eq!(c.size(), 10);
assert_eq!(c.len(), 1);
assert!(!c.contains_key("file1"));
assert!(c.contains_key("file2"));
}
#[test]
fn test_existing_files_lru_mtime() { | let f = TestFixture::new();
// Create files explicitly in the past. | random_line_split |
|
utils.py | 'XXXII': 27, 'IX': 28, 'L': 29, 'XXXI': 30, 'XXXVI': 31,
'XIII': 32, 'XXVII': 33, 'XXXIII': 34, 'VI': 35, 'XV': 36, 'XLI': 37, 'LI': 38, 'XLII': 39,
'XXVI': 40, 'XLV': 41, 'XVI': 42, 'LIII': 43, 'XX': 44, 'LII': 45, 'XL': 46, 'XLIII': 47, 'XXX': 48,
'XLVII': 49, 'XXXIV': 50, 'XLVI': 51, 'XXV': 52, 'XLVIII': 53, 'II': 54, 'XXVIII': 55}
ID2TAG = {v: k for k, v in MAP_DICT.items()}
class Params:
"""参数定义
"""
def __init__(self):
# Root path
self.root_path = Path(os.path.abspath(os.path.dirname(__file__)))
# Dataset directory
self.data_dir = self.root_path / 'data'
# Experiment parameters directory
self.params_path = self.root_path / 'experiments'
# Model checkpoint directory
self.model_dir = self.root_path / 'model'
# Pretrained model directory
self.pretrain_model_dir = self.root_path / 'pretrain_model'
# device
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.tags = list(MAP_DICT.values())
self.n_gpu = torch.cuda.device_count()
# Load previously saved data
self.data_cache = True
self.train_batch_size = 256
self.dev_batch_size = 128
self.test_batch_size = 128
# Minimum number of training epochs
self.min_epoch_num = 5
# Smallest f1-score improvement that counts as progress
self.patience = 0.001
# How many epochs without improvement to tolerate (early stopping)
self.patience_num = 5
self.seed = 2020
# Maximum sentence length (padding)
self.max_seq_length = 256
# learning_rate
self.fin_tuning_lr = 1e-4
# downstream lr
self.ds_lr = 1e-4 * 100
# Gradient clipping
self.clip_grad = 2
# dropout prob
self.drop_prob = 0.3
# Weight decay coefficient
self.weight_decay_rate = 0.01
def get(self):
"""Gives dict-like access to Params instance by `params.show['learning_rate']"""
return self.__dict__
def load(self, json_path):
"""Loads parameters from json file"""
with open(json_path) as f:
params = json.load(f)
self.__dict__.update(params)
def save(self, json_path):
"""保存配置到json文件
"""
params = {}
with open(json_path, 'w') as f:
for k, v in self.__dict__.items():
if isinstance(v, (str, int, float, bool)):
params[k] = v
json.dump(params, f, indent=4)
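# Editorial usage sketch (not in the original file); 'params.json' is a placeholder name:
#   params = Params()
#   params.save(params.params_path / 'params.json')
#   params.load(params.params_path / 'params.json')
#   params.get()['train_batch_size']  # 256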
class RunningAverage:
"""A simple class that maintains the running average of a quantity
Used to keep a running average of the loss.
Example:
```
loss_avg = RunningAverage()
loss_avg.update(2)
loss_avg.update(4)
loss_avg() = 3
```
"""
def __init__(self):
self.steps = 0
self.total = 0
def update(self, val):
self.total += val
self.steps += 1
def __call__(self):
return self.total / float(self.steps)
def set_logger(save=False, log_path=None):
"""Set the logger to log info in terminal and file `log_path`.
In general, it is useful to have a logger so that every output to the terminal is saved
in a permanent file. Here we save it to `model_dir/train.log`.
Example:
```
logging.info("Starting training...")
```
Args:
log_path: (string) where to log
"""
logger = logging.getLogger()
logger.setLevel(logging.INFO)
if not logger.handlers:
if save:
# Logging to a file
file_handler = logging.FileHandler(log_path)
file_handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s: %(message)s'))
logger.addHandler(file_handler)
# Logging to console
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(stream_handler)
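# Typical calls (editorial sketch): console-only logging, or console plus a log file:
#   set_logger()
#   set_logger(save=True, log_path='train.log')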
def save_checkpoint(state, is_best, checkpoint):
"""Saves model and training parameters at checkpoint + 'last.pth.tar'. If is_best==True, also saves
checkpoint + |
Args:
state: (dict) contains model's state_dict, may contain other keys such as epoch, optimizer state_dict
is_best: (bool) True if it is the best model seen till now
checkpoint: (string) folder where parameters are to be saved
"""
filepath = os.path.join(checkpoint, 'last.pth.tar')
if not os.path.exists(checkpoint):
print("Checkpoint Directory does not exist! Making directory {}".format(checkpoint))
os.mkdir(checkpoint)
torch.save(state, filepath)
# If this is the best checkpoint so far, also save it as 'best.pth.tar'
if is_best:
shutil.copyfile(filepath, os.path.join(checkpoint, 'best.pth.tar'))
def load_checkpoint(checkpoint, model, optimizer=None):
"""Loads model parameters (state_dict) from file_path. If optimizer is provided, loads state_dict of
optimizer assuming it is present in checkpoint.
Args:
checkpoint: (string) filename which needs to be loaded
model: (torch.nn.Module) model for which the parameters are loaded
optimizer: (torch.optim) optional: resume optimizer from checkpoint
"""
if not os.path.exists(checkpoint):
raise ("File doesn't exist {}".format(checkpoint))
checkpoint = torch.load(checkpoint, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'])
if optimizer:
optimizer.load_state_dict(checkpoint['optim_dict'])
return checkpoint
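# Editorial sketch (not part of the original project): a save/restore round trip using the
# two helpers above. `model`, `optimizer` and `checkpoint_dir` are placeholder names.
def _checkpoint_roundtrip_example(model, optimizer, checkpoint_dir, is_best):
    state = {'state_dict': model.state_dict(), 'optim_dict': optimizer.state_dict()}
    save_checkpoint(state, is_best, checkpoint_dir)
    # Later, resume from the best checkpoint seen so far:
    return load_checkpoint(os.path.join(checkpoint_dir, 'best.pth.tar'), model, optimizer)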
def initial_parameter(net, initial_method=None):
r"""A method used to initialize the weights of PyTorch models.
:param net: a PyTorch model or a List of Pytorch model
:param str initial_method: one of the following initializations.
- xavier_uniform
- xavier_normal (default)
- kaiming_normal, or msra
- kaiming_uniform
- orthogonal
- sparse
- normal
- uniform
"""
if initial_method == 'xavier_uniform':
init_method = init.xavier_uniform_
elif initial_method == 'xavier_normal':
init_method = init.xavier_normal_
elif initial_method == 'kaiming_normal' or initial_method == 'msra':
init_method = init.kaiming_normal_
elif initial_method == 'kaiming_uniform':
init_method = init.kaiming_uniform_
elif initial_method == 'orthogonal':
init_method = init.orthogonal_
elif initial_method == 'sparse':
init_method = init.sparse_
elif initial_method == 'normal':
init_method = init.normal_
elif initial_method == 'uniform':
init_method = init.uniform_
else:
init_method = init.xavier_normal_
def weights_init(m):
# classname = m.__class__.__name__
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Conv1d) or isinstance(m, nn.Conv3d): # for all the cnn
if initial_method is not None:
init_method(m.weight.data)
else:
init.xavier_normal_(m.weight.data)
init.normal_(m.bias.data)
elif isinstance(m, nn.LSTM):
for w in m.parameters():
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
elif m is not None and hasattr(m, 'weight') and \
hasattr(m.weight, "requires_grad"):
if len(m.weight.size()) > 1:
init_method(m.weight.data)
else:
init.normal_(m.weight.data)
else:
for w in m.parameters():
if w.requires_grad:
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
# print("init else")
if isinstance(net, list):
for n in net:
n.apply(weights_init)
else:
net.apply(weights_init)
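# Typical call (editorial sketch): initialize a freshly constructed network, e.g.
#   net = nn.LSTM(input_size=128, hidden_size=256)
#   initial_parameter(net, initial_method='kaiming_normal')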
class FGM:
"""扰动训练(Fast Gradient Method)"""
def __init__(self, model):
self.model = model
self.backup = {}
def attack(self, epsilon=1., emb_name='embeddings | 'best.pth.tar'
| identifier_name |
utils.py | 2, 'XXVII': 33, 'XXXIII': 34, 'VI': 35, 'XV': 36, 'XLI': 37, 'LI': 38, 'XLII': 39,
'XXVI': 40, 'XLV': 41, 'XVI': 42, 'LIII': 43, 'XX': 44, 'LII': 45, 'XL': 46, 'XLIII': 47, 'XXX': 48,
'XLVII': 49, 'XXXIV': 50, 'XLVI': 51, 'XXV': 52, 'XLVIII': 53, 'II': 54, 'XXVIII': 55}
ID2TAG = {v: k for k, v in MAP_DICT.items()}
class Params:
"""参数定义
"""
def __init__(self):
# Root path
self.root_path = Path(os.path.abspath(os.path.dirname(__file__)))
# Dataset directory
self.data_dir = self.root_path / 'data'
# Experiment parameters directory
self.params_path = self.root_path / 'experiments'
# Model checkpoint directory
self.model_dir = self.root_path / 'model'
# Pretrained model directory
self.pretrain_model_dir = self.root_path / 'pretrain_model'
# device
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.tags = list(MAP_DICT.values())
self.n_gpu = torch.cuda.device_count()
# Load previously saved data
self.data_cache = True
self.train_batch_size = 256
self.dev_batch_size = 128
self.test_batch_size = 128
# Minimum number of training epochs
self.min_epoch_num = 5
# Smallest f1-score improvement that counts as progress
self.patience = 0.001
# How many epochs without improvement to tolerate (early stopping)
self.patience_num = 5
self.seed = 2020
# Maximum sentence length (padding)
self.max_seq_length = 256
# learning_rate
self.fin_tuning_lr = 1e-4
# downstream lr
self.ds_lr = 1e-4 * 100
# Gradient clipping
self.clip_grad = 2
# dropout prob
self.drop_prob = 0.3
# Weight decay coefficient
self.weight_decay_rate = 0.01
def get(self):
"""Gives dict-like access to Params instance by `params.show['learning_rate']"""
return self.__dict__
def load(self, json_path):
"""Loads parameters from json file"""
with open(json_path) as f:
params = json.load(f)
self.__dict__.update(params)
def save(self, json_path):
"""保存配置到json文件
"""
params = {}
with open(json_path, 'w') as f:
for k, v in self.__dict__.items():
if isinstance(v, (str, int, float, bool)):
params[k] = v
json.dump(params, f, indent=4)
class RunningAverage:
"""A simple class that maintains the running average of a quantity
Used to keep a running average of the loss.
Example:
```
loss_avg = RunningAverage()
loss_avg.update(2)
loss_avg.update(4)
loss_avg() = 3
```
"""
def __init__(self):
self.steps = 0
self.total = 0
def update(self, val):
self.total += val
self.steps += 1
def __call__(self):
return self.total / float(self.steps)
def set_logger(save=False, log_path=None):
"""Set the logger to log info in terminal and file `log_path`.
In general, it is useful to have a logger so that every output to the terminal is saved
in a permanent file. Here we save it to `model_dir/train.log`.
Example:
```
logging.info("Starting training...")
```
Args:
log_path: (string) where to log
"""
logger = logging.getLogger()
logger.setLevel(logging.INFO)
if not logger.handlers:
if save:
# Logging to a file
file_handler = logging.FileHandler(log_path)
file_handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s: %(message)s'))
logger.addHandler(file_handler)
# Logging to console
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(stream_handler)
def save_checkpoint(state, is_best, checkpoint):
"""Saves model and training parameters at checkpoint + 'last.pth.tar'. If is_best==True, also saves
checkpoint + 'best.pth.tar'
Args:
state: (dict) contains model's state_dict, may contain other keys such as epoch, optimizer state_dict
is_best: (bool) True if it is the best model seen till now
checkpoint: (string) folder where parameters are to be saved
"""
filepath = os.path.join(checkpoint, 'last.pth.tar')
if not os.path.exists(checkpoint):
print("Checkpoint Directory does not exist! Making directory {}".format(checkpoint))
os.mkdir(checkpoint)
torch.save(state, filepath)
    # If this is the best checkpoint so far, also save a copy named best.pth.tar
if is_best:
shutil.copyfile(filepath, os.path.join(checkpoint, 'best.pth.tar'))
def load_checkpoint(checkpoint, model, optimizer=None):
"""Loads model parameters (state_dict) from file_path. If optimizer is provided, loads state_dict of
optimizer assuming it is present in checkpoint.
Args:
checkpoint: (string) filename which needs to be loaded
model: (torch.nn.Module) model for which the parameters are loaded
optimizer: (torch.optim) optional: resume optimizer from checkpoint
"""
if not os.path.exists(checkpoint):
raise ("File doesn't exist {}".format(checkpoint))
checkpoint = torch.load(checkpoint, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'])
if optimizer:
optimizer.load_state_dict(checkpoint['optim_dict'])
return checkpoint
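# Editor's note: illustrative usage of the two checkpoint helpers above; model,
# optimizer, epoch, val_f1 and best_f1 are assumed to exist in the training script.
# save_checkpoint({'epoch': epoch,
#                  'state_dict': model.state_dict(),
#                  'optim_dict': optimizer.state_dict()},
#                 is_best=(val_f1 > best_f1),
#                 checkpoint=str(params.model_dir))
# ...
# load_checkpoint(os.path.join(params.model_dir, 'best.pth.tar'), model, optimizer)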
def initial_parameter(net, initial_method=None):
r"""A method used to initialize the weights of PyTorch models.
:param net: a PyTorch model or a List of Pytorch model
:param str initial_method: one of the following initializations.
- xavier_uniform
- xavier_normal (default)
- kaiming_normal, or msra
- kaiming_uniform
- orthogonal
- sparse
- normal
- uniform
"""
if initial_method == 'xavier_uniform':
init_method = init.xavier_uniform_
elif initial_method == 'xavier_normal':
init_method = init.xavier_normal_
elif initial_method == 'kaiming_normal' or initial_method == 'msra':
init_method = init.kaiming_normal_
elif initial_method == 'kaiming_uniform':
init_method = init.kaiming_uniform_
elif initial_method == 'orthogonal':
init_method = init.orthogonal_
elif initial_method == 'sparse':
init_method = init.sparse_
elif initial_method == 'normal':
init_method = init.normal_
elif initial_method == 'uniform':
init_method = init.uniform_
else:
init_method = init.xavier_normal_
def weights_init(m):
# classname = m.__class__.__name__
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Conv1d) or isinstance(m, nn.Conv3d): # for all the cnn
if initial_method is not None:
init_method(m.weight.data)
else:
init.xavier_normal_(m.weight.data)
init.normal_(m.bias.data)
elif isinstance(m, nn.LSTM):
for w in m.parameters():
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
elif m is not None and hasattr(m, 'weight') and \
hasattr(m.weight, "requires_grad"):
if len(m.weight.size()) > 1:
init_method(m.weight.data)
else:
init.normal_(m.weight.data)
else:
for w in m.parameters():
if w.requires_grad:
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
# print("init else")
if isinstance(net, list):
for n in net:
n.apply(weights_init)
else:
net.apply(weights_init)
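# Editor's note: illustrative calls of initial_parameter(); `encoder`/`decoder`
# are placeholder nn.Module instances, not objects defined in this file.
# initial_parameter(encoder)                           # defaults to xavier_normal_
# initial_parameter(encoder, 'kaiming_normal')         # same as 'msra'
# initial_parameter([encoder, decoder], 'orthogonal')  # a list of models also works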
class FGM:
"""扰动训练(Fast Gradient Method)"""
def __init__(self, model):
self.model = model
self.backup = {}
def attack(self, epsilon=1., emb_name='embeddings.'):
"""在embedding层中加扰动
:param epsilon: 系数
:param emb_name: 模型中embedding的参数名
| """
#
for name, param | identifier_body |
|
utils.py | , 'XXXII': 27, 'IX': 28, 'L': 29, 'XXXI': 30, 'XXXVI': 31,
'XIII': 32, 'XXVII': 33, 'XXXIII': 34, 'VI': 35, 'XV': 36, 'XLI': 37, 'LI': 38, 'XLII': 39,
'XXVI': 40, 'XLV': 41, 'XVI': 42, 'LIII': 43, 'XX': 44, 'LII': 45, 'XL': 46, 'XLIII': 47, 'XXX': 48,
'XLVII': 49, 'XXXIV': 50, 'XLVI': 51, 'XXV': 52, 'XLVIII': 53, 'II': 54, 'XXVIII': 55}
ID2TAG = {v: k for k, v in MAP_DICT.items()}
class Params:
"""参数定义
"""
def __init__(self):
# 根路径
self.root_path = Path(os.path.abspath(os.path.dirname(__file__)))
# 数据集路径
self.data_dir = self.root_path / 'data'
# 参数路径
self.params_path = self.root_path / 'experiments'
# 模型保存路径
self.model_dir = self.root_path / 'model'
# 预训练模型路径
self.pretrain_model_dir = self.root_path / 'pretrain_model'
# device
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') | self.data_cache = True
self.train_batch_size = 256
self.dev_batch_size = 128
self.test_batch_size = 128
# 最小训练次数
self.min_epoch_num = 5
# 容纳的提高量(f1-score)
self.patience = 0.001
# 容纳多少次未提高
self.patience_num = 5
self.seed = 2020
# 句子最大长度(pad)
self.max_seq_length = 256
# learning_rate
self.fin_tuning_lr = 1e-4
# downstream lr
self.ds_lr = 1e-4 * 100
# 梯度截断
self.clip_grad = 2
# dropout prob
self.drop_prob = 0.3
# 权重衰减系数
self.weight_decay_rate = 0.01
def get(self):
"""Gives dict-like access to Params instance by `params.show['learning_rate']"""
return self.__dict__
def load(self, json_path):
"""Loads parameters from json file"""
with open(json_path) as f:
params = json.load(f)
self.__dict__.update(params)
def save(self, json_path):
"""保存配置到json文件
"""
params = {}
with open(json_path, 'w') as f:
for k, v in self.__dict__.items():
if isinstance(v, (str, int, float, bool)):
params[k] = v
json.dump(params, f, indent=4)
class RunningAverage:
"""A simple class that maintains the running average of a quantity
记录平均损失
Example:
```
loss_avg = RunningAverage()
loss_avg.update(2)
loss_avg.update(4)
loss_avg() = 3
```
"""
def __init__(self):
self.steps = 0
self.total = 0
def update(self, val):
self.total += val
self.steps += 1
def __call__(self):
return self.total / float(self.steps)
def set_logger(save=False, log_path=None):
"""Set the logger to log info in terminal and file `log_path`.
In general, it is useful to have a logger so that every output to the terminal is saved
in a permanent file. Here we save it to `model_dir/train.log`.
Example:
```
logging.info("Starting training...")
```
Args:
log_path: (string) where to log
"""
logger = logging.getLogger()
logger.setLevel(logging.INFO)
if not logger.handlers:
if save:
# Logging to a file
file_handler = logging.FileHandler(log_path)
file_handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s: %(message)s'))
logger.addHandler(file_handler)
# Logging to console
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(stream_handler)
def save_checkpoint(state, is_best, checkpoint):
"""Saves model and training parameters at checkpoint + 'last.pth.tar'. If is_best==True, also saves
checkpoint + 'best.pth.tar'
Args:
state: (dict) contains model's state_dict, may contain other keys such as epoch, optimizer state_dict
is_best: (bool) True if it is the best model seen till now
checkpoint: (string) folder where parameters are to be saved
"""
filepath = os.path.join(checkpoint, 'last.pth.tar')
if not os.path.exists(checkpoint):
print("Checkpoint Directory does not exist! Making directory {}".format(checkpoint))
os.mkdir(checkpoint)
torch.save(state, filepath)
# 如果是最好的checkpoint则以best为文件名保存
if is_best:
shutil.copyfile(filepath, os.path.join(checkpoint, 'best.pth.tar'))
def load_checkpoint(checkpoint, model, optimizer=None):
"""Loads model parameters (state_dict) from file_path. If optimizer is provided, loads state_dict of
optimizer assuming it is present in checkpoint.
Args:
checkpoint: (string) filename which needs to be loaded
model: (torch.nn.Module) model for which the parameters are loaded
optimizer: (torch.optim) optional: resume optimizer from checkpoint
"""
if not os.path.exists(checkpoint):
raise ("File doesn't exist {}".format(checkpoint))
checkpoint = torch.load(checkpoint, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'])
if optimizer:
optimizer.load_state_dict(checkpoint['optim_dict'])
return checkpoint
def initial_parameter(net, initial_method=None):
r"""A method used to initialize the weights of PyTorch models.
:param net: a PyTorch model or a List of Pytorch model
:param str initial_method: one of the following initializations.
- xavier_uniform
- xavier_normal (default)
- kaiming_normal, or msra
- kaiming_uniform
- orthogonal
- sparse
- normal
- uniform
"""
if initial_method == 'xavier_uniform':
init_method = init.xavier_uniform_
elif initial_method == 'xavier_normal':
init_method = init.xavier_normal_
elif initial_method == 'kaiming_normal' or initial_method == 'msra':
init_method = init.kaiming_normal_
elif initial_method == 'kaiming_uniform':
init_method = init.kaiming_uniform_
elif initial_method == 'orthogonal':
init_method = init.orthogonal_
elif initial_method == 'sparse':
init_method = init.sparse_
elif initial_method == 'normal':
init_method = init.normal_
elif initial_method == 'uniform':
init_method = init.uniform_
else:
init_method = init.xavier_normal_
def weights_init(m):
# classname = m.__class__.__name__
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Conv1d) or isinstance(m, nn.Conv3d): # for all the cnn
if initial_method is not None:
init_method(m.weight.data)
else:
init.xavier_normal_(m.weight.data)
init.normal_(m.bias.data)
elif isinstance(m, nn.LSTM):
for w in m.parameters():
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
elif m is not None and hasattr(m, 'weight') and \
hasattr(m.weight, "requires_grad"):
if len(m.weight.size()) > 1:
init_method(m.weight.data)
else:
init.normal_(m.weight.data)
else:
for w in m.parameters():
if w.requires_grad:
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
# print("init else")
if isinstance(net, list):
for n in net:
n.apply(weights_init)
else:
net.apply(weights_init)
class FGM:
"""扰动训练(Fast Gradient Method)"""
def __init__(self, model):
self.model = model
self.backup = {}
def attack(self, epsilon=1., emb_name='embeddings.'):
| self.tags = list(MAP_DICT.values())
self.n_gpu = torch.cuda.device_count()
# 读取保存的data | random_line_split |
utils.py | 'XXXII': 27, 'IX': 28, 'L': 29, 'XXXI': 30, 'XXXVI': 31,
'XIII': 32, 'XXVII': 33, 'XXXIII': 34, 'VI': 35, 'XV': 36, 'XLI': 37, 'LI': 38, 'XLII': 39,
'XXVI': 40, 'XLV': 41, 'XVI': 42, 'LIII': 43, 'XX': 44, 'LII': 45, 'XL': 46, 'XLIII': 47, 'XXX': 48,
'XLVII': 49, 'XXXIV': 50, 'XLVI': 51, 'XXV': 52, 'XLVIII': 53, 'II': 54, 'XXVIII': 55}
ID2TAG = {v: k for k, v in MAP_DICT.items()}
class Params:
"""参数定义
"""
def __init__(self):
# 根路径
self.root_path = Path(os.path.abspath(os.path.dirname(__file__)))
# 数据集路径
self.data_dir = self.root_path / 'data'
# 参数路径
self.params_path = self.root_path / 'experiments'
# 模型保存路径
self.model_dir = self.root_path / 'model'
# 预训练模型路径
self.pretrain_model_dir = self.root_path / 'pretrain_model'
# device
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.tags = list(MAP_DICT.values())
self.n_gpu = torch.cuda.device_count()
# 读取保存的data
self.data_cache = True
self.train_batch_size = 256
self.dev_batch_size = 128
self.test_batch_size = 128
# 最小训练次数
self.min_epoch_num = 5
# 容纳的提高量(f1-score)
self.patience = 0.001
# 容纳多少次未提高
self.patience_num = 5
self.seed = 2020
# 句子最大长度(pad)
self.max_seq_length = 256
# learning_rate
self.fin_tuning_lr = 1e-4
# downstream lr
self.ds_lr = 1e-4 * 100
# 梯度截断
self.clip_grad = 2
# dropout prob
self.drop_prob = 0.3
# 权重衰减系数
self.weight_decay_rate = 0.01
def get(self):
"""Gives dict-like access to Params instance by `params.show['learning_rate']"""
return self.__dict__
def load(self, json_path):
"""Loads parameters from json file"""
with open(json_path) as f:
params = json.load(f)
self.__dict__.update(params)
def save(self, json_path):
"""保存配置到json文件
"""
params = {}
with open(json_path, 'w') as f:
for k, v in self.__dict__.items():
if isinstance(v, (str, int, float, bool)):
params[k] = v
json.dump(params, f, indent=4)
class RunningAverage:
"""A simple class that maintains the running average of a quantity
记录平均损失
Example:
```
loss_avg = RunningAverage()
loss_avg.update(2)
loss_avg.update(4)
loss_avg() = 3
```
"""
def __init__(self):
self.steps = 0
self.total = 0
def update(self, val):
self.total += val
self.steps += 1
def __call__(self):
return self.total / float(self.steps)
def set_logger(save=False, log_path=None):
"""Set the logger to log info in terminal and file `log_path`.
In general, it is useful to have a logger so that every output to the terminal is saved
in a permanent file. Here we save it to `model_dir/train.log`.
Example:
```
logging.info("Starting training...")
```
Args:
log_path: (string) where to log
"""
logger = logging.getLogger()
logger.setLevel(logging.INFO)
if not logger.handlers:
if save:
# Logging to a file
file_handler = logging.FileHandler(log_path)
file_handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s: %(message)s'))
logger.addHandler(file_handler)
# Logging to console
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(stream_handler)
def save_checkpoint(state, is_best, checkpoint):
"""Saves model and training parameters at checkpoint + 'last.pth.tar'. If is_best==True, also saves
checkpoint + 'best.pth.tar'
Args:
state: (dict) contains model's state_dict, may contain other keys such as epoch, optimizer state_dict
is_best: (bool) True if it is the best model seen till now
checkpoint: (string) folder where parameters are to be saved
"""
filepath = os.path.join(checkpoint, 'last.pth.tar')
if not os.path.exists(checkpoint):
print("Checkpoint Directory does not exist! Making directory {}".format(checkpoint))
os.mkdir(checkpoint)
torch.save(state, filepath)
# 如果是最好的checkpoint则以best为文件名保存
if is_best:
shutil.copyfile(filepath, os.path.join(checkpoint, 'best.pth.tar'))
def load_checkpoint(checkpoint, model, optimizer=None):
"""Loads model parameters (state_dict) from file_path. If optimizer is provided, loads state_dict of
optimizer assuming it is present in checkpoint.
Args:
checkpoint: (string) filename which needs to be loaded
model: (torch.nn.Module) model for which the parameters are loaded
optimizer: (torch.optim) optional: resume optimizer from checkpoint
"""
if not os.path.exists(checkpoint):
raise ("File doesn't exist {}".format(checkpoint))
checkpoint = torch.load(checkpoint, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'])
if optimizer:
optimizer.load_state_dict(checkpoint['optim_dict'])
return checkpoint
def initial_parameter(net, initial_method=None):
r"""A method used to initialize the weights of PyTorch models.
:param net: a PyTorch model or a List of Pytorch model
:param str initial_method: one of the following initializations.
- xavier_uniform
- xavier_normal (default)
- kaiming_normal, or msra
- kaiming_uniform
- orthogonal
- sparse
- normal
- uniform
"""
if initial_method == 'xavier_uniform':
init_method = init.xavier_uniform_
elif initial_method == 'xavier_normal':
init_method = init.xavier_normal_
elif initial_method == 'kaiming_normal' or initial_method == 'msra':
init_method = init.kaiming_normal_
elif initial_method == 'kaiming_uniform':
init_method = init.kaiming_uniform_
elif initial_method == 'orthogonal':
init_method = init.orthogonal_
elif initial_method == 'sparse':
init_method = init.sparse_
elif initial_method == 'normal':
init_method = init.normal_
elif initial_method == 'uniform':
init_method = init.uniform_
else:
init_method = init.xavier_normal_
def weights_init(m):
# classname = m.__class__.__name__
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Conv1d) or isinstance(m, nn.Conv3d): # for all the cnn
if initial_method is not None:
init_method(m.weight.data)
else:
init.xavier_normal_(m.weight.data)
init.normal_(m.bias.data)
elif | init.normal_(w.data) # bias
elif m is not None and hasattr(m, 'weight') and \
hasattr(m.weight, "requires_grad"):
if len(m.weight.size()) > 1:
init_method(m.weight.data)
else:
init.normal_(m.weight.data)
else:
for w in m.parameters():
if w.requires_grad:
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
init.normal_(w.data) # bias
# print("init else")
if isinstance(net, list):
for n in net:
n.apply(weights_init)
else:
net.apply(weights_init)
class FGM:
"""扰动训练(Fast Gradient Method)"""
def __init__(self, model):
self.model = model
self.backup = {}
def attack(self, epsilon=1., emb_name='embed | isinstance(m, nn.LSTM):
for w in m.parameters():
if len(w.data.size()) > 1:
init_method(w.data) # weight
else:
| conditional_block |
Unity_K144.py | .get()
NR5G.NR_ChBW = int(entryCol.entry6_enum.get())
NR5G.NR_SubSp = int(entryCol.entry7_enum.get())
NR5G.NR_RB = int(entryCol.entry8.get())
NR5G.NR_RBO = int(entryCol.entry9.get())
NR5G.NR_Mod = entryCol.entry10_enum.get()
NR5G.NR_CC = int(entryCol.entry11.get())
NR5G.NR_TF = 'OFF'
return NR5G
def btn1():
"""*IDN Query"""
NR5G = VST().jav_Open(entryCol.entry0.get(),entryCol.entry1.get())
print(NR5G.SMW.query('*IDN?'))
print(NR5G.FSW.query('*IDN?'))
NR5G.jav_Close()
def btn2():
"""Display Max RB"""
topWind.writeN('-------------------------- --------------------------')
topWind.writeN('|u[<6GHz ]010 020 050 100| |u[>6GHz ]050 100 200 400|')
topWind.writeN('|-+------+---+---+---+---| |-+------+---+---+---+---|')
topWind.writeN('|0 015kHz|052 106 270 N/A| |0 015kHz|N/A N/A N/A N/A|')
topWind.writeN('|1 030kHz|024 051 133 273| |1 030kHz|N/A N/A N/A N/A|')
topWind.writeN('|2 060kHz|011 024 065 135| |2 060kHz|066 132 264 N/A|')
topWind.writeN('|3 120kHz|N/A N/A N/A N/A| |3 120kHz|032 066 132 264|')
topWind.writeN('-------------------------- --------------------------')
topWind.writeN(' ')
# NR5G = gui_reader()
# data = NR5G.SMW.Get_5GNR_RBMax()
# topWind.writeN("=== Max RB ===")
# topWind.writeN("Mode: %s %sMHz"%(NR5G.SMW.Get_5GNR_FreqRange(),NR5G.SMW.Get_5GNR_ChannelBW()))
# for i in data:
# topWind.writeN("SubC:%d RB Max:%d"%(i[0],i[1]))
# NR5G.jav_Close()
def btn3():
""" Get EVM """
NR5G = gui_reader()
NR5G.FSW.Set_InitImm()
topWind.writeN(f'EVM: {NR5G.FSW.Get_5GNR_EVM():.4f}')
NR5G.FSW.jav_Close()
def btn4():
"""Set 5GNR Parameters"""
NR5G = gui_reader()
print("SMW Creating Waveform.")
NR5G.Set_5GNR_All()
print(NR5G.FSW.jav_ClrErr())
print(NR5G.SMW.jav_ClrErr())
print("SMW/FSW Setting Written")
NR5G.jav_Close()
def btn5():
"""Read 5GNR Parameters"""
NR5G = gui_reader()
K144Data = NR5G.Get_5GNR_All()
topWind.writeN(" ")
botWind.writeH('Get_5GNR Differences ')
for i in range(len(K144Data[0])):
try:
topWind.writeN("%s\t%s\t%s"%(K144Data[0][i],K144Data[1][i],K144Data[2][i]))
if 'Direction' in K144Data[0][i]:
K144Data[1][i] = 'UL' if K144Data[1][i] == 'UP' else 'DL'
if 'FreqRange' in K144Data[0][i]:
K144Data[1][i] = 'HIGH' if K144Data[1][i] == 'GT6' else K144Data[1][i]
K144Data[1][i] = 'MIDD' if K144Data[1][i] == 'BT36' else K144Data[1][i]
K144Data[1][i] = 'LOW' if K144Data[1][i] == 'LT3' else K144Data[1][i]
if 'SubSpacing' in K144Data[0][i]:
|
if 'DMRS Config'in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('T','')
if 'L_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('TD','')
if 'K_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('FD','')
if 'RE-offs' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('RE','')
K144Data[2][i] = K144Data[2][i].replace('OS','')
if K144Data[1][i] != K144Data[2][i]:
botWind.writeH(f'{K144Data[0][i]}\t{K144Data[1][i]}\t{K144Data[2][i]}')
except:
pass
NR5G.jav_Close()
def btn6():
"""filename: 5GNR_UL_BW_SubCar_Mod"""
NR5G = gui_reader()
udl = NR5G.SMW.Get_5GNR_Direction()
filename = f'5GNR_{udl}_{NR5G.SMW.Get_5GNR_ChannelBW()}MHz_{NR5G.SMW.Get_5GNR_BWP_SubSpace()}kHz_{NR5G.SMW.Get_5GNR_BWP_Ch_Modulation()}'
topWind.writeN(f'Writing: {filename}')
NR5G.FSW.Set_5GNR_savesetting(filename)
for i in range(1):
NR5G.SMW.Set_5GNR_savesetting(filename+str(i))
topWind.writeN('Writing: DONE!')
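# Editor's note: with the pattern above a saved setting would be named, e.g.,
# '5GNR_UL_100MHz_120kHz_QAM64' -- illustrative values only; the exact tokens come
# from the SMW Get_5GNR_* queries at runtime.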
def click3(tkEvent):
"""Set FSW/SMW frequency"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_Freq(NR5G.Freq)
NR5G.FSW.Set_Freq(NR5G.Freq)
NR5G.jav_Close()
botWind.writeN('SMW/FSW Freq: %d Hz'%NR5G.Freq)
def click4(tkEvent):
"""Set SMW RF Pwr"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_RFPwr(int(NR5G.SWM_Out))
NR5G.jav_Close()
botWind.writeN('SMW RMS Pwr : %d dBm'%int(NR5G.SWM_Out))
def click14(tkEvent):
"""Set RB """
# print(tkEvent)
# NR5G = gui_reader()
# NR5G.SMW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.SMW.Set_5GNR_BWP_ResBlock(NR5G.NR_RB)
# NR5G.SMW.Set_5GNR_BWP_Ch_ResBlock(NR5G.NR_RB)
# NR5G.FSW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.FSW.Set_5GNR_BWP_ResBlock(NR5G.NR_RB)
# NR5G.FSW.Set_5GNR_BWP_Ch_ResBlock(NR5G.NR_RB)
# NR5G.jav_Close()
botWind.writeN('FSW:Signal Description-->RadioFrame-->BWP Config-->RB')
botWind.writeN('FSW:Signal Description-->RadioFrame-->PxSCH Config-->RB')
botWind.writeN('SMW:User/BWP-->UL BWP-->RB')
botWind | K144Data[1][i] = K144Data[1][i].replace('N','')
K144Data[2][i] = K144Data[2][i].replace('SS','') | conditional_block |
Unity_K144.py | ---+---+---|')
topWind.writeN('|0 015kHz|052 106 270 N/A| |0 015kHz|N/A N/A N/A N/A|')
topWind.writeN('|1 030kHz|024 051 133 273| |1 030kHz|N/A N/A N/A N/A|')
topWind.writeN('|2 060kHz|011 024 065 135| |2 060kHz|066 132 264 N/A|')
topWind.writeN('|3 120kHz|N/A N/A N/A N/A| |3 120kHz|032 066 132 264|')
topWind.writeN('-------------------------- --------------------------')
topWind.writeN(' ')
# NR5G = gui_reader()
# data = NR5G.SMW.Get_5GNR_RBMax()
# topWind.writeN("=== Max RB ===")
# topWind.writeN("Mode: %s %sMHz"%(NR5G.SMW.Get_5GNR_FreqRange(),NR5G.SMW.Get_5GNR_ChannelBW()))
# for i in data:
# topWind.writeN("SubC:%d RB Max:%d"%(i[0],i[1]))
# NR5G.jav_Close()
def btn3():
""" Get EVM """
NR5G = gui_reader()
NR5G.FSW.Set_InitImm()
topWind.writeN(f'EVM: {NR5G.FSW.Get_5GNR_EVM():.4f}')
NR5G.FSW.jav_Close()
def btn4():
"""Set 5GNR Parameters"""
NR5G = gui_reader()
print("SMW Creating Waveform.")
NR5G.Set_5GNR_All()
print(NR5G.FSW.jav_ClrErr())
print(NR5G.SMW.jav_ClrErr())
print("SMW/FSW Setting Written")
NR5G.jav_Close()
def btn5():
"""Read 5GNR Parameters"""
NR5G = gui_reader()
K144Data = NR5G.Get_5GNR_All()
topWind.writeN(" ")
botWind.writeH('Get_5GNR Differences ')
for i in range(len(K144Data[0])):
try:
topWind.writeN("%s\t%s\t%s"%(K144Data[0][i],K144Data[1][i],K144Data[2][i]))
if 'Direction' in K144Data[0][i]:
K144Data[1][i] = 'UL' if K144Data[1][i] == 'UP' else 'DL'
if 'FreqRange' in K144Data[0][i]:
K144Data[1][i] = 'HIGH' if K144Data[1][i] == 'GT6' else K144Data[1][i]
K144Data[1][i] = 'MIDD' if K144Data[1][i] == 'BT36' else K144Data[1][i]
K144Data[1][i] = 'LOW' if K144Data[1][i] == 'LT3' else K144Data[1][i]
if 'SubSpacing' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('N','')
K144Data[2][i] = K144Data[2][i].replace('SS','')
if 'DMRS Config'in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('T','')
if 'L_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('TD','')
if 'K_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('FD','')
if 'RE-offs' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('RE','')
K144Data[2][i] = K144Data[2][i].replace('OS','')
if K144Data[1][i] != K144Data[2][i]:
botWind.writeH(f'{K144Data[0][i]}\t{K144Data[1][i]}\t{K144Data[2][i]}')
except:
pass
NR5G.jav_Close()
def btn6():
"""filename: 5GNR_UL_BW_SubCar_Mod"""
NR5G = gui_reader()
udl = NR5G.SMW.Get_5GNR_Direction()
filename = f'5GNR_{udl}_{NR5G.SMW.Get_5GNR_ChannelBW()}MHz_{NR5G.SMW.Get_5GNR_BWP_SubSpace()}kHz_{NR5G.SMW.Get_5GNR_BWP_Ch_Modulation()}'
topWind.writeN(f'Writing: {filename}')
NR5G.FSW.Set_5GNR_savesetting(filename)
for i in range(1):
NR5G.SMW.Set_5GNR_savesetting(filename+str(i))
topWind.writeN('Writing: DONE!')
def click3(tkEvent):
"""Set FSW/SMW frequency"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_Freq(NR5G.Freq)
NR5G.FSW.Set_Freq(NR5G.Freq)
NR5G.jav_Close()
botWind.writeN('SMW/FSW Freq: %d Hz'%NR5G.Freq)
def click4(tkEvent):
"""Set SMW RF Pwr"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_RFPwr(int(NR5G.SWM_Out))
NR5G.jav_Close()
botWind.writeN('SMW RMS Pwr : %d dBm'%int(NR5G.SWM_Out))
def click14(tkEvent):
"""Set RB """
# print(tkEvent)
# NR5G = gui_reader()
# NR5G.SMW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.SMW.Set_5GNR_BWP_ResBlock(NR5G.NR_RB)
# NR5G.SMW.Set_5GNR_BWP_Ch_ResBlock(NR5G.NR_RB)
# NR5G.FSW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.FSW.Set_5GNR_BWP_ResBlock(NR5G.NR_RB)
# NR5G.FSW.Set_5GNR_BWP_Ch_ResBlock(NR5G.NR_RB)
# NR5G.jav_Close()
botWind.writeN('FSW:Signal Description-->RadioFrame-->BWP Config-->RB')
botWind.writeN('FSW:Signal Description-->RadioFrame-->PxSCH Config-->RB')
botWind.writeN('SMW:User/BWP-->UL BWP-->RB')
botWind.writeN('SMW:Scheduling-->PxSCH-->RB')
def click15(tkEvent):
"""Set RB Offset"""
botWind.writeN('FSW:Signal Description-->RadioFrame-->BWP Config-->RB Offset')
botWind.writeN('SMW:User/BWP-->UL BWP-->RB Offset')
def clearTopWind(tkEvent):
"""Clear Top Window"""
topWind.clear()
topWind.writeH("===Please Click Buttons Below===")
RSVar = GUIData()
for item in RSVar.List1:
topWind.writeN(item)
def dataLoad():
"""Read setting file --> GUI"""
try:
try: #Python3
f = open(__file__ + ".csv","rt")
except: #Python2
f = open(__file__ + ".csv","rb")
data = f.read().split(',')
entryCol.entry0.delete(0,END)
entryCol.entry0.insert(0,data[0])
entryCol.entry1.delete(0,END)
entryCol.entry1.insert(0,data[1])
entryCol.entry2.delete(0,END)
entryCol.entry2.insert(0,data[2]) | entryCol.entry3.delete(0,END)
entryCol.entry3.insert(0,data[3])
botWind.writeN("DataLoad: File") | random_line_split |
|
Unity_K144.py | 9.get())
NR5G.NR_Mod = entryCol.entry10_enum.get()
NR5G.NR_CC = int(entryCol.entry11.get())
NR5G.NR_TF = 'OFF'
return NR5G
def btn1():
"""*IDN Query"""
NR5G = VST().jav_Open(entryCol.entry0.get(),entryCol.entry1.get())
print(NR5G.SMW.query('*IDN?'))
print(NR5G.FSW.query('*IDN?'))
NR5G.jav_Close()
def btn2():
"""Display Max RB"""
topWind.writeN('-------------------------- --------------------------')
topWind.writeN('|u[<6GHz ]010 020 050 100| |u[>6GHz ]050 100 200 400|')
topWind.writeN('|-+------+---+---+---+---| |-+------+---+---+---+---|')
topWind.writeN('|0 015kHz|052 106 270 N/A| |0 015kHz|N/A N/A N/A N/A|')
topWind.writeN('|1 030kHz|024 051 133 273| |1 030kHz|N/A N/A N/A N/A|')
topWind.writeN('|2 060kHz|011 024 065 135| |2 060kHz|066 132 264 N/A|')
topWind.writeN('|3 120kHz|N/A N/A N/A N/A| |3 120kHz|032 066 132 264|')
topWind.writeN('-------------------------- --------------------------')
topWind.writeN(' ')
# NR5G = gui_reader()
# data = NR5G.SMW.Get_5GNR_RBMax()
# topWind.writeN("=== Max RB ===")
# topWind.writeN("Mode: %s %sMHz"%(NR5G.SMW.Get_5GNR_FreqRange(),NR5G.SMW.Get_5GNR_ChannelBW()))
# for i in data:
# topWind.writeN("SubC:%d RB Max:%d"%(i[0],i[1]))
# NR5G.jav_Close()
def btn3():
""" Get EVM """
NR5G = gui_reader()
NR5G.FSW.Set_InitImm()
topWind.writeN(f'EVM: {NR5G.FSW.Get_5GNR_EVM():.4f}')
NR5G.FSW.jav_Close()
def btn4():
"""Set 5GNR Parameters"""
NR5G = gui_reader()
print("SMW Creating Waveform.")
NR5G.Set_5GNR_All()
print(NR5G.FSW.jav_ClrErr())
print(NR5G.SMW.jav_ClrErr())
print("SMW/FSW Setting Written")
NR5G.jav_Close()
def btn5():
"""Read 5GNR Parameters"""
NR5G = gui_reader()
K144Data = NR5G.Get_5GNR_All()
topWind.writeN(" ")
botWind.writeH('Get_5GNR Differences ')
for i in range(len(K144Data[0])):
try:
topWind.writeN("%s\t%s\t%s"%(K144Data[0][i],K144Data[1][i],K144Data[2][i]))
if 'Direction' in K144Data[0][i]:
K144Data[1][i] = 'UL' if K144Data[1][i] == 'UP' else 'DL'
if 'FreqRange' in K144Data[0][i]:
K144Data[1][i] = 'HIGH' if K144Data[1][i] == 'GT6' else K144Data[1][i]
K144Data[1][i] = 'MIDD' if K144Data[1][i] == 'BT36' else K144Data[1][i]
K144Data[1][i] = 'LOW' if K144Data[1][i] == 'LT3' else K144Data[1][i]
if 'SubSpacing' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('N','')
K144Data[2][i] = K144Data[2][i].replace('SS','')
if 'DMRS Config'in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('T','')
if 'L_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('TD','')
if 'K_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('FD','')
if 'RE-offs' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('RE','')
K144Data[2][i] = K144Data[2][i].replace('OS','')
if K144Data[1][i] != K144Data[2][i]:
botWind.writeH(f'{K144Data[0][i]}\t{K144Data[1][i]}\t{K144Data[2][i]}')
except:
pass
NR5G.jav_Close()
def btn6():
"""filename: 5GNR_UL_BW_SubCar_Mod"""
NR5G = gui_reader()
udl = NR5G.SMW.Get_5GNR_Direction()
filename = f'5GNR_{udl}_{NR5G.SMW.Get_5GNR_ChannelBW()}MHz_{NR5G.SMW.Get_5GNR_BWP_SubSpace()}kHz_{NR5G.SMW.Get_5GNR_BWP_Ch_Modulation()}'
topWind.writeN(f'Writing: {filename}')
NR5G.FSW.Set_5GNR_savesetting(filename)
for i in range(1):
NR5G.SMW.Set_5GNR_savesetting(filename+str(i))
topWind.writeN('Writing: DONE!')
def click3(tkEvent):
"""Set FSW/SMW frequency"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_Freq(NR5G.Freq)
NR5G.FSW.Set_Freq(NR5G.Freq)
NR5G.jav_Close()
botWind.writeN('SMW/FSW Freq: %d Hz'%NR5G.Freq)
def click4(tkEvent):
"""Set SMW RF Pwr"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_RFPwr(int(NR5G.SWM_Out))
NR5G.jav_Close()
botWind.writeN('SMW RMS Pwr : %d dBm'%int(NR5G.SWM_Out))
def click14(tkEvent):
"""Set RB """
# print(tkEvent)
# NR5G = gui_reader()
# NR5G.SMW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.SMW.Set_5GNR_BWP_ResBlock(NR5G.NR_RB)
# NR5G.SMW.Set_5GNR_BWP_Ch_ResBlock(NR5G.NR_RB)
# NR5G.FSW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.FSW.Set_5GNR_BWP_ResBlock(NR5G.NR_RB)
# NR5G.FSW.Set_5GNR_BWP_Ch_ResBlock(NR5G.NR_RB)
# NR5G.jav_Close()
botWind.writeN('FSW:Signal Description-->RadioFrame-->BWP Config-->RB')
botWind.writeN('FSW:Signal Description-->RadioFrame-->PxSCH Config-->RB')
botWind.writeN('SMW:User/BWP-->UL BWP-->RB')
botWind.writeN('SMW:Scheduling-->PxSCH-->RB')
def click15(tkEvent):
| """Set RB Offset"""
botWind.writeN('FSW:Signal Description-->RadioFrame-->BWP Config-->RB Offset')
botWind.writeN('SMW:User/BWP-->UL BWP-->RB Offset') | identifier_body |
|
Unity_K144.py | (self):
self.List1 = ['- Utility does not validate settings against 3GPP 5G',
'- Click *IDN? to validate IP Addresses',
'- Frequency & SMW Power labels are clickable',
'']
def gui_reader():
"""Read values from GUI"""
SMW_IP = entryCol.entry0.get()
FSW_IP = entryCol.entry1.get()
### Set 5GNR Parameters
NR5G = VST().jav_Open(SMW_IP,FSW_IP)
NR5G.Freq = float(entryCol.entry2.get())
NR5G.SWM_Out = float(entryCol.entry3.get())
NR5G.NR_Dir = entryCol.entry4_enum.get()
NR5G.NR_Deploy = entryCol.entry5_enum.get()
NR5G.NR_ChBW = int(entryCol.entry6_enum.get())
NR5G.NR_SubSp = int(entryCol.entry7_enum.get())
NR5G.NR_RB = int(entryCol.entry8.get())
NR5G.NR_RBO = int(entryCol.entry9.get())
NR5G.NR_Mod = entryCol.entry10_enum.get()
NR5G.NR_CC = int(entryCol.entry11.get())
NR5G.NR_TF = 'OFF'
return NR5G
def btn1():
"""*IDN Query"""
NR5G = VST().jav_Open(entryCol.entry0.get(),entryCol.entry1.get())
print(NR5G.SMW.query('*IDN?'))
print(NR5G.FSW.query('*IDN?'))
NR5G.jav_Close()
def btn2():
"""Display Max RB"""
topWind.writeN('-------------------------- --------------------------')
topWind.writeN('|u[<6GHz ]010 020 050 100| |u[>6GHz ]050 100 200 400|')
topWind.writeN('|-+------+---+---+---+---| |-+------+---+---+---+---|')
topWind.writeN('|0 015kHz|052 106 270 N/A| |0 015kHz|N/A N/A N/A N/A|')
topWind.writeN('|1 030kHz|024 051 133 273| |1 030kHz|N/A N/A N/A N/A|')
topWind.writeN('|2 060kHz|011 024 065 135| |2 060kHz|066 132 264 N/A|')
topWind.writeN('|3 120kHz|N/A N/A N/A N/A| |3 120kHz|032 066 132 264|')
topWind.writeN('-------------------------- --------------------------')
topWind.writeN(' ')
# NR5G = gui_reader()
# data = NR5G.SMW.Get_5GNR_RBMax()
# topWind.writeN("=== Max RB ===")
# topWind.writeN("Mode: %s %sMHz"%(NR5G.SMW.Get_5GNR_FreqRange(),NR5G.SMW.Get_5GNR_ChannelBW()))
# for i in data:
# topWind.writeN("SubC:%d RB Max:%d"%(i[0],i[1]))
# NR5G.jav_Close()
def btn3():
""" Get EVM """
NR5G = gui_reader()
NR5G.FSW.Set_InitImm()
topWind.writeN(f'EVM: {NR5G.FSW.Get_5GNR_EVM():.4f}')
NR5G.FSW.jav_Close()
def btn4():
"""Set 5GNR Parameters"""
NR5G = gui_reader()
print("SMW Creating Waveform.")
NR5G.Set_5GNR_All()
print(NR5G.FSW.jav_ClrErr())
print(NR5G.SMW.jav_ClrErr())
print("SMW/FSW Setting Written")
NR5G.jav_Close()
def btn5():
"""Read 5GNR Parameters"""
NR5G = gui_reader()
K144Data = NR5G.Get_5GNR_All()
topWind.writeN(" ")
botWind.writeH('Get_5GNR Differences ')
for i in range(len(K144Data[0])):
try:
topWind.writeN("%s\t%s\t%s"%(K144Data[0][i],K144Data[1][i],K144Data[2][i]))
if 'Direction' in K144Data[0][i]:
K144Data[1][i] = 'UL' if K144Data[1][i] == 'UP' else 'DL'
if 'FreqRange' in K144Data[0][i]:
K144Data[1][i] = 'HIGH' if K144Data[1][i] == 'GT6' else K144Data[1][i]
K144Data[1][i] = 'MIDD' if K144Data[1][i] == 'BT36' else K144Data[1][i]
K144Data[1][i] = 'LOW' if K144Data[1][i] == 'LT3' else K144Data[1][i]
if 'SubSpacing' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('N','')
K144Data[2][i] = K144Data[2][i].replace('SS','')
if 'DMRS Config'in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('T','')
if 'L_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('TD','')
if 'K_PTRS' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('FD','')
if 'RE-offs' in K144Data[0][i]:
K144Data[1][i] = K144Data[1][i].replace('RE','')
K144Data[2][i] = K144Data[2][i].replace('OS','')
if K144Data[1][i] != K144Data[2][i]:
botWind.writeH(f'{K144Data[0][i]}\t{K144Data[1][i]}\t{K144Data[2][i]}')
except:
pass
NR5G.jav_Close()
def btn6():
"""filename: 5GNR_UL_BW_SubCar_Mod"""
NR5G = gui_reader()
udl = NR5G.SMW.Get_5GNR_Direction()
filename = f'5GNR_{udl}_{NR5G.SMW.Get_5GNR_ChannelBW()}MHz_{NR5G.SMW.Get_5GNR_BWP_SubSpace()}kHz_{NR5G.SMW.Get_5GNR_BWP_Ch_Modulation()}'
topWind.writeN(f'Writing: {filename}')
NR5G.FSW.Set_5GNR_savesetting(filename)
for i in range(1):
NR5G.SMW.Set_5GNR_savesetting(filename+str(i))
topWind.writeN('Writing: DONE!')
def click3(tkEvent):
"""Set FSW/SMW frequency"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_Freq(NR5G.Freq)
NR5G.FSW.Set_Freq(NR5G.Freq)
NR5G.jav_Close()
botWind.writeN('SMW/FSW Freq: %d Hz'%NR5G.Freq)
def click4(tkEvent):
"""Set SMW RF Pwr"""
#print(tkEvent)
NR5G = gui_reader()
NR5G.SMW.Set_RFPwr(int(NR5G.SWM_Out))
NR5G.jav_Close()
botWind.writeN('SMW RMS Pwr : %d dBm'%int(NR5G.SWM_Out))
def click14(tkEvent):
"""Set RB """
# print(tkEvent)
# NR5G = gui_reader()
# NR5G.SMW.Set_5GNR_Direction(NR5G.NR_Dir)
# NR5G.SMW.Set_5GNR_BWP_ResBlock(NR | __init__ | identifier_name |
|
lab1e.py | 30]
for i in range(5030):
unigram[uni[i][0]] = i
#The below chunk of code makes sure all words from table RG65 are included
#instead of just the 5030 most frequently occurring words. It ends up being the most
#frequently occurring 5000 words plus the other 30 words in table RG65 that were not
#already in the top 5030.
#comment the below code chunk out to keep the model using just the most frequent 5030 words
ctr = 0
for x,y in P:
if x != 'serf':
if x != uni[unigram[x]][0]:
uni[5000+ctr] = (x, old[x])
unigram[uni[5000+ctr][0]] = 5000+ctr
ctr += 1
if y != uni[unigram[y]][0]:
uni[5000+ctr] = (y, old[y])
unigram[uni[5000+ctr][0]] = 5000+ctr
ctr += 1
#STEP 3 word-context vector M1 based on bigram counts; modified to count both preceding and following words
M1 = np.zeros(shape=(5030,5030))
for i in range(len(words) - 1):
wi = re.sub('[^A-Za-z]+', '', words[i]).lower()
wi1 = re.sub('[^A-Za-z]+', '', words[i+1]).lower()
if wi != '' and wi1 != '' and wi == uni[unigram[wi]][0] and wi1 == uni[unigram[wi1]][0]:
        M1[unigram[wi], unigram[wi1]] += 1  # wi1 follows wi
        M1[unigram[wi1], unigram[wi]] += 1  # wi precedes wi1, so count the reverse direction too
#STEP 4 PPMI for M1 denoted M1plus
M1plus = np.zeros(shape=(5030,5030))
for i in range(5030):
for j in range(5030):
M1plus[i, j] = max(math.log((M1[i, j] / sum) / ((uni[i][1] / sum) * (uni[j][1] / sum) + 1e-31) + 1e-31, 2.0), 0)
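# Editor's note: the line above implements the usual PPMI definition,
#     PPMI(w, c) = max( log2( P(w, c) / (P(w) * P(c)) ), 0 )
# where P(w, c) = M1[w, c] / N and P(w) = count(w) / N for N total tokens; the
# 1e-31 terms only guard against log(0) and division by zero.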
#STEP 5 latent semantic model using SVD. M2_10, M2_50, and M2_100 denote
#truncated dimensions of 10, 50, 100 respectively
A, D, Q = np.linalg.svd(M1plus, full_matrices=False)
M2_10 = A[:, :10]
M2_50 = A[:, :50]
M2_100 = A[:, :100]
#STEP 6 done at beginning
#STEP 7 cosine similiarities for M1 (SM1), M1plus(SM1plus), M2_10 (SM2_10), M2_50 (SM2_50), M2_100 (SM2_100)
#a in front of name denotes matrix has cosine similarity for all pairs of words, later we pick relevant pairs
aSM1 = cosine_similarity(M1)
aSM1plus = cosine_similarity(M1plus)
aSM2_10 = cosine_similarity(M2_10)
aSM2_50 = cosine_similarity(M2_50)
aSM2_100 = cosine_similarity(M2_100)
#pick out the cosine similarity scores for the relevant pairs in P.
#SL only includes scores from S for pairs of words which actually exist in our top 5030 (so we have data)
#since I later forced all words in table RG65 into the top 5030, SL will contain all scores from S, except
#note the word 'serf' does not occur at all in the Brown Corpus, so its pair was omitted from analysis
L = []
SL = []
for i in range(len(P)):
x,y = P[i]
if x != 'serf' and x == uni[unigram[x]][0] and y == uni[unigram[y]][0]:
L.append((x, y))
SL.append(S[i])
SM1 = []
SM1plus = []
SM2_10 = []
SM2_50 = []
SM2_100 = []
for x,y in L:
SM1.append(aSM1[unigram[x], unigram[y]])
SM1plus.append(aSM1plus[unigram[x], unigram[y]])
SM2_10.append(aSM2_10[unigram[x], unigram[y]])
SM2_50.append(aSM2_50[unigram[x], unigram[y]])
SM2_100.append(aSM2_100[unigram[x], unigram[y]])
#STEP 8 Pearson correlation. outputs tuple (Pearson coefficient, 2-tailed p value)
print("Cosine Similarities:")
print("S and SM1: ", pearsonr(SL, SM1))
print("S and SM1+: ", pearsonr(SL, SM1plus))
print("S and SM2_10: ", pearsonr(SL, SM2_10))
print("S and SM2_50: ", pearsonr(SL, SM2_50))
print("S and SM2_100: ", pearsonr(SL, SM2_100))
#Lab 1 extension Step 2 extract vectors for all pairs of words in Table 1 of RG65
from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('word2vec_pretrain_vec/GoogleNews-vectors-negative300.bin', binary=True)
Mw = np.zeros(shape=(130,300))
for index, (i,j) in enumerate(P, start=0):
Mw[index] = model[i]
for index, (i,j) in enumerate(P, start=0):
Mw[index+65] = model[j]
#Step 3 calculate cosine similarities and report Pearson correlation with S
aSMw = cosine_similarity(Mw)
SMw = []
for i in range(len(P)):
if P[i][0] != 'serf':
SMw.append(aSMw[i][i+65])
print("S and SMw: ", pearsonr(SL, SMw))
#Step 4 Load analogy data from file
file = open('word-test.v1.txt', 'r')
text = file.read()
lines = text.split('\n')
words2 = []
for i in lines:
if i != '' and i[0] != '/' and i[0] != ':':
words2.append(i.split())
#Keep only the analogy tuples that have all relevant words in them (i.e. all 4 words in the
#analogy are part of our most common 5030 words from above), so we can use same set on LSA.
#Unfortunately, since the LSA model was built by converting everything to lowercase letters,
#anything with a capital letter such as city and country names will not be included.
rel_words = []
for w in words2:
try:
if w[0] == uni[unigram[w[0]]][0] and w[1] == uni[unigram[w[1]]][0] and w[2] == uni[unigram[w[2]]][0] and w[3] == uni[unigram[w[3]]][0]:
rel_words.append(w)
except KeyError:
pass
#The below code will perform the analogy test for LSA on semantic analogy tuples. We had 90
#semantic relevant analogy tuples left from the original data and they are the first 90
#instances in rel_words. It counts how many times LSA pick the right word for the analogy.
#(Note: picks the word from the pool of 5030 whose vector is closest in cosine distance to the
#added vectors)
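# Editor's note: this is the standard vector-offset analogy test. For a tuple
# (w0, w1, w2, w3) read as "w0 is to w1 as w2 is to w3", the loop below forms
#     v = vec(w0) - vec(w1) + vec(w3)
# and counts a hit when the nearest vocabulary vector to v (by cosine similarity,
# excluding w3 itself) is vec(w2).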
Mw3 = np.zeros(shape=(5031,100))
for index, (i, j) in enumerate(uni, start=0):
try:
Mw3[index] = M2_100[unigram[i]]
except:
pass
cnt = 0
for ww in rel_words[0:89]:
| Mw3[len(Mw3)-1] = M2_100[unigram[ww[0].lower()]] - M2_100[unigram[ww[1].lower()]] + M2_100[unigram[ww[3].lower()]]
SMw3 = cosine_similarity(Mw3)
    best_sim = -10          # renamed from `max` to avoid shadowing the built-in
    best_idx = -1
    for index, sim in enumerate(SMw3[len(Mw3)-1], start=0):
        if sim > best_sim and index < 5030 and uni[index][0] != ww[3].lower():
            best_sim = sim
            best_idx = index
    if ww[2] == uni[best_idx][0]:
cnt += 1 | conditional_block |
|
lab1e.py | 3.66,
3.68,
3.82,
3.84,
3.88,
3.92,
3.94,
3.94]
#STEP 2 5030 most frequent words stored using uni and unigram
#unigram['example'] returns the index of the tuple ('example', count) in
#the uni list, where count is the unigram count of 'example'.
unigram = dict()
bigram = dict()
words = brown.words()
for w in words:
w = re.sub('[^A-Za-z]+', '', w)
if w != '':
if not w.lower() in unigram:
unigram[w.lower()] = 1
else:
unigram[w.lower()] += 1
old = unigram
uni = sorted(unigram.items(), key=operator.itemgetter(1), reverse=True)
sum = 0.0
for i,j in uni:
sum += j
uni = uni[:5030]
for i in range(5030):
unigram[uni[i][0]] = i
#The below chunk of code makes sure all words from table RG65 are included
#instead of just the 5030 most frequently occurring words. It ends up being the most
#frequently occurring 5000 words plus the other 30 words in table RG65 that were not
#already in the top 5030.
#comment the below code chunk out to keep the model using just the most frequent 5030 words
ctr = 0
for x,y in P:
if x != 'serf':
if x != uni[unigram[x]][0]:
uni[5000+ctr] = (x, old[x])
unigram[uni[5000+ctr][0]] = 5000+ctr
ctr += 1
if y != uni[unigram[y]][0]:
uni[5000+ctr] = (y, old[y])
unigram[uni[5000+ctr][0]] = 5000+ctr
ctr += 1
#STEP 3 word-context vector M1 based on bigram counts; modified to count both preceding and following words
M1 = np.zeros(shape=(5030,5030))
for i in range(len(words) - 1):
wi = re.sub('[^A-Za-z]+', '', words[i]).lower()
wi1 = re.sub('[^A-Za-z]+', '', words[i+1]).lower()
if wi != '' and wi1 != '' and wi == uni[unigram[wi]][0] and wi1 == uni[unigram[wi1]][0]:
        M1[unigram[wi], unigram[wi1]] += 1  # wi1 follows wi
        M1[unigram[wi1], unigram[wi]] += 1  # wi precedes wi1, so count the reverse direction too
#STEP 4 PPMI for M1 denoted M1plus
M1plus = np.zeros(shape=(5030,5030))
for i in range(5030):
for j in range(5030):
M1plus[i, j] = max(math.log((M1[i, j] / sum) / ((uni[i][1] / sum) * (uni[j][1] / sum) + 1e-31) + 1e-31, 2.0), 0)
#STEP 5 latent semantic model using SVD. M2_10, M2_50, and M2_100 denote
#truncated dimensions of 10, 50, 100 respectively
A, D, Q = np.linalg.svd(M1plus, full_matrices=False)
M2_10 = A[:, :10]
M2_50 = A[:, :50]
M2_100 = A[:, :100]
#STEP 6 done at beginning
#STEP 7 cosine similiarities for M1 (SM1), M1plus(SM1plus), M2_10 (SM2_10), M2_50 (SM2_50), M2_100 (SM2_100)
#a in front of name denotes matrix has cosine similarity for all pairs of words, later we pick relevant pairs
aSM1 = cosine_similarity(M1)
aSM1plus = cosine_similarity(M1plus)
aSM2_10 = cosine_similarity(M2_10)
aSM2_50 = cosine_similarity(M2_50)
aSM2_100 = cosine_similarity(M2_100)
#pick out the cosine similarity scores for the relevant pairs in P.
#SL only includes scores from S for pairs of words which actually exist in our top 5030 (so we have data)
#since I later forced all words in table RG65 into the top 5030, SL will contain all scores from S, except
#note the word 'serf' does not occur at all in the Brown Corpus, so its pair was omitted from analysis
L = []
SL = []
for i in range(len(P)):
x,y = P[i]
if x != 'serf' and x == uni[unigram[x]][0] and y == uni[unigram[y]][0]:
L.append((x, y))
SL.append(S[i])
SM1 = []
SM1plus = []
SM2_10 = []
SM2_50 = []
SM2_100 = []
for x,y in L:
SM1.append(aSM1[unigram[x], unigram[y]])
SM1plus.append(aSM1plus[unigram[x], unigram[y]])
SM2_10.append(aSM2_10[unigram[x], unigram[y]])
SM2_50.append(aSM2_50[unigram[x], unigram[y]])
SM2_100.append(aSM2_100[unigram[x], unigram[y]])
#STEP 8 Pearson correlation. outputs tuple (Pearson coefficient, 2-tailed p value)
print("Cosine Similarities:")
print("S and SM1: ", pearsonr(SL, SM1))
print("S and SM1+: ", pearsonr(SL, SM1plus))
print("S and SM2_10: ", pearsonr(SL, SM2_10))
print("S and SM2_50: ", pearsonr(SL, SM2_50))
print("S and SM2_100: ", pearsonr(SL, SM2_100))
#Lab 1 extension Step 2 extract vectors for all pairs of words in Table 1 of RG65
from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('word2vec_pretrain_vec/GoogleNews-vectors-negative300.bin', binary=True)
Mw = np.zeros(shape=(130,300))
for index, (i,j) in enumerate(P, start=0):
Mw[index] = model[i]
for index, (i,j) in enumerate(P, start=0):
Mw[index+65] = model[j]
#Step 3 calculate cosine similarities and report Pearson correlation with S
aSMw = cosine_similarity(Mw)
SMw = []
for i in range(len(P)):
if P[i][0] != 'serf':
SMw.append(aSMw[i][i+65])
print("S and SMw: ", pearsonr(SL, SMw))
#Step 4 Load analogy data from file
file = open('word-test.v1.txt', 'r')
text = file.read()
lines = text.split('\n')
words2 = []
for i in lines:
if i != '' and i[0] != '/' and i[0] != ':':
words2.append(i.split())
#Keep only the analogy tuples that have all relevant words in them (i.e. all 4 words in the
#analogy are part of our most common 5030 words from above), so we can use same set on LSA.
#Unfortunately, since the LSA model was built by converting everything to lowercase letters,
#anything with a capital letter such as city and country names will not be included.
rel_words = []
for w in words2:
try:
if w[0] == uni[unigram[w[0]]][0] and w[1] == uni[unigram[w[1]]][0] and w[2] == uni[unigram[w[2]]][0] and w[3] == uni[unigram[w[3]]][0]:
rel_words.append(w)
except KeyError:
pass
#The below code will perform the analogy test for LSA on semantic analogy tuples. We had 90
#semantic relevant analogy tuples left from the original data and they are the first 90 | #instances in rel_words. It counts how many times LSA pick the right word for the analogy.
#(Note: picks the word from the pool of 5030 whose vector is closest in cosine distance to the
#added vectors)
Mw3 = np.zeros(shape=(5031,100)) | random_line_split |
|
serializers.py | but reaches validation already as an int). Per the spec we must accept only
whole integers.
So we subclass the IntegerField serializer and override the initial validation:
if the input data is a str, we return 400 Bad Request.
"""
def to_internal_value(self, data):
if isinstance(data, int):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be an integer")
class TrueCharField(serializers.CharField):
"""
    The stock DRF CharField serializer accepts an int and converts it before
    validation (it arrives there already as a str), so we subclass CharField
    and override the initial validation:
    if the input data is an int, we return 400 Bad Request.
"""
def to_internal_value(self, data):
if isinstance(data, str):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be a string")
class BaseCitizenInfoSerializer(serializers.ModelSerializer):
"""
    Custom base serializer with the shared serialization
    and deserialization logic.
    """
    # Declare fields using the strict custom data-type validators defined above
citizen_id = TrueIntegerField(min_value=0)
town = TrueCharField(max_length=256)
street = TrueCharField(max_length=256)
building = TrueCharField(max_length=256)
apartment = TrueIntegerField(min_value=0)
name = TrueCharField(max_length=256)
relatives = serializers.ListField(child=TrueIntegerField(min_value=0))
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
def run_validation(self, data=empty):
"""Валидация неизвестных полей. Отдаем 400, если есть."""
if data:
unknown = set(data) - set(self.fields)
if unknown:
errors = ["Unknown field: {}".format(f) for f in unknown]
raise serializers.ValidationError({
api_settings.NON_FIELD_ERRORS_KEY: errors,
})
return super(BaseCitizenInfoSerializer, self).run_validation(data)
def validate_birth_date(self, value):
"""
Валидация дня рождения
Проверяем, чтобы дата была не позже чем сегодня | if birth_date > current_date:
raise serializers.ValidationError("Birth_date can't be "
"after current date")
return value
def to_representation(self, instance):
"""
        Adjust how the DB data is rendered:
        m2m representation of relatives and the birth-date format.
"""
citizen_pk = instance.pk
citizen = CitizenInfo.objects.filter(pk=citizen_pk)
relatives_pk = citizen.values_list('relatives', flat=True)
relatives_id_list = []
if None not in relatives_pk:
for relative_pk in relatives_pk:
relative_id = CitizenInfo.objects.get(pk=relative_pk).citizen_id
relatives_id_list.append(relative_id)
            # Sort the ids, in case ordering matters for the output
relatives_id_list.sort()
        # Birth date in the required DD.MM.YYYY format
JSON_birth_date = datetime.strptime(str(instance.birth_date),
'%Y-%m-%d').strftime('%d.%m.%Y')
        # Build the response dict manually
return {
"citizen_id": instance.citizen_id,
"town": instance.town,
"street": instance.street,
"building": instance.building,
"apartment": instance.apartment,
"name": instance.name,
"birth_date": JSON_birth_date,
"gender": instance.gender,
"relatives": relatives_id_list
}
def save_citizens(citizen_data, new_import):
"""Логика полного сохранения гражданина"""
new_citizen = CitizenInfo(import_id=new_import,
citizen_id=citizen_data.get('citizen_id'),
town=citizen_data.get('town'),
street=citizen_data.get('street'),
building=citizen_data.get('building'),
apartment=citizen_data.get('apartment'),
name=citizen_data.get('name'),
birth_date=citizen_data.get('birth_date'),
gender=citizen_data.get('gender'),
)
# К этому моменту мы уверены, что все данные прошли валидацию.
# Сохраняем объект, чтобы было проще искать нужных нам родственников ниже
new_citizen.save()
# Далее достаем сохраненные объекты данного импорта уже из БД
importing_citizens = CitizenInfo.objects.filter(import_id=new_import)
# Поле родственников будем сохранять постепенно, по мере создания граждан
relatives_id_list = citizen_data.get('relatives')
# В рамках одного импорта citizen_id == relative_id
for relative_id in relatives_id_list:
# Если родственник еще не сохранен в БД, то просто продолжаем цикл,
# так как связь симметричная и далее он все-равно попадет в родственники
try:
relative_instance = importing_citizens.get(citizen_id=relative_id)
except:
continue
# Добавляем инстанс каждого родственника по-одному
new_citizen.relatives.add(relative_instance)
class BulkCitizensSerializer(serializers.ListSerializer):
"""
    Object-saving logic for the POST request's create(). All of this exists for the m2m field.
    The data arriving here is already valid for every field except relatives.
    We run custom validation for relatives (checking that all relatives are
    listed by each other symmetrically) and then save relatives in the required form.
"""
def create(self, validated_data):
"""Здесь валидация поля relatives, а так же сохранение объектов"""
# Валидация поля relatives у всех граждан.
relatives_dict = self.context.pop('relatives_dict')
for citizen_data in validated_data:
citizen_id = citizen_data.get('citizen_id')
for relative_id in citizen_data.get('relatives'):
# Гарантировано, что значения уникальные и существуют, но
# на всякий случай проверяю существование.
try:
relatives_dict[relative_id]
except:
raise serializers.ValidationError(
'At least one of the relatives_id does not exist.')
if citizen_id in relatives_dict[relative_id]:
# Экономим время, если нашли симметрию, то удаляем
# текущего гражданина из "родственников" его родственника,
# Что бы не проверять по два раза. Сохранению не помешает.
relatives_dict[relative_id].remove(citizen_id)
# Если находим несовпадение, то сразу отдаем 400 BAD_REQUEST
elif citizen_id not in relatives_dict[relative_id]:
raise serializers.ValidationError(
'At least one of the relatives_id is not matching.')
# Сохраняем валидные объект
# Создаем инстанс текущего импорта, чтобы далее присвоить к
# новым объектам CitizenInfo
new_import = ImportId.objects.create()
import_id = new_import.import_id
# Кладем номер импорта в контекст
self.context["import_id"] = import_id
for citizen_data in validated_data:
save_citizens(citizen_data, new_import)
return ImportId.objects.filter(import_id=import_id)
class PostSerializer(BaseCitizenInfoSerializer):
"""
    Serializer for the POST request. The saving logic lives in BulkCitizensSerializer.
"""
def run_validation(self, data=empty):
"""
        On top of the parent logic, this prepares the shared validation of relatives,
        which later runs in BulkCitizensSerializer over the data collected in the
        global relatives_dict dictionary.
        We add {"citizen id": [ids of that citizen's relatives]} to the global
        dictionary for the subsequent validation.
"""
# Если входных данных нет, то отдаем bad request 400
if not data:
raise serializers.ValidationError("Input data can't be empty")
citizen_id = data['citizen_id']
relatives_id_list = data['relatives']
# Добавляем список id родственников в общий словарь граждан
self.context['relatives_dict'][citizen_id] = relatives_id_list
return super(PostSerializer, self).run_validation(data)
def to_representation(self, instance):
"""
        Override the base representation.
        Return only the import instance.
"""
return instance
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
# Для доступа ко всем ин | """
birth_date = value
current_date = datetime.now().date() | random_line_split |
serializers.py | в виде int приходит). По ТЗ нам нужно принимать только
целые числа.
Поэтому наследуемся от сериализатора IntegerField и переопределяем первичную валидацию.
Если входные данные str, то возвращаем bad request 400.
"""
def to_internal_value(self, data):
if isinstance(data, int):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be an integer")
class TrueCharField(serializers.CharField):
"""
Так как в DRF стандартный сериализатор CharField принимает int и
конвертирует его до валидации (туда он приходит уже в виде str),
то наследуемся от этого класса и переопределяем первичную валидацию.
Если входные данные int, то возвращаем bad request 400.
"""
def to_internal_value(self, data):
if isinstance(data, str):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be a string")
class BaseCitizenInfoSerializer(serializers.ModelSerializer):
"""
Кастомный базовый валидатор с общей логикой сериализации
и десериализации
"""
# Наследуемся от кастомных валидаторов типа данных
citizen_id = TrueIntegerField(min_value=0)
town = TrueCharField(max_length=256)
street = TrueCharField(max_length=256)
building = TrueCharField(max_length=256)
apartment = TrueIntegerField(min_value=0)
name = TrueCharField(max_length=256)
relatives = serializers.ListField(child=TrueIntegerField(min_value=0))
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
def run_validation(self, data=empty):
"""Валидация неизвестных полей. Отдаем 400, если есть."""
if data:
unknown = set(data) - set(self.fields)
if unknown:
errors = ["Unknown field: {}".format(f) for f in unknown]
raise serializers.ValidationError({
api_settings.NON_FIELD_ERRORS_KEY: errors,
})
return super(BaseCitizenInfoSerializer, self).run_validation(data)
def validate_birth_date(self, value):
"""
Валидация дня рождения
Проверяем, чтобы дата была не позже чем сегодня
"""
birth_date = value
current_date = datetime.now().date()
if birth_date > current_date:
raise serializers.ValidationError("Birth_date can't be "
"after current date")
return value
def to_representation(self, instance):
"""
Корректируем отображение данных из БД.
m2m отображение родственников, формат даты дня рождения
"""
citizen_pk = instance.pk
citizen = CitizenInfo.objects.filter(pk=citizen_pk)
relatives_pk = citizen.values_list('relatives', flat=True)
relatives_id_list = []
if None not in relatives_pk:
for relative_pk in relatives_pk:
relative_id = CitizenInfo.objects.get(pk=relative_pk).citizen_id
relatives_id_list.append(relative_id)
# Сортирую по порядку, вдруг важно при отображении
relatives_id_list.sort()
# Нужный формат дня рождения
JSON_birth_date = datetime.strptime(str(instance.birth_date),
'%Y-%m-%d').strftime('%d.%m.%Y')
# Составляем ответ вручную
return {
"citizen_id": instance.citizen_id,
"town": instance.town,
"street": instance.street,
"building": instance.building,
"apartment": instance.apartment,
|
def save_citizens(citizen_data, new_import):
"""Логика полного сохранения гражданина"""
new_citizen = CitizenInfo(import_id=new_import,
citizen_id=citizen_data.get('citizen_id'),
town=citizen_data.get('town'),
street=citizen_data.get('street'),
building=citizen_data.get('building'),
apartment=citizen_data.get('apartment'),
name=citizen_data.get('name'),
birth_date=citizen_data.get('birth_date'),
gender=citizen_data.get('gender'),
)
    # By this point we are sure that all the data has passed validation.
    # Save the object so it is easier to look up the relatives we need below.
    new_citizen.save()
    # Then fetch the already-saved objects of this import from the DB
    importing_citizens = CitizenInfo.objects.filter(import_id=new_import)
    # The relatives field is filled in gradually, as the citizens get created
    relatives_id_list = citizen_data.get('relatives')
    # Within a single import citizen_id == relative_id
    for relative_id in relatives_id_list:
        # If the relative is not saved in the DB yet, just continue the loop:
        # the relation is symmetric, so they will be added as a relative later anyway
        try:
            relative_instance = importing_citizens.get(citizen_id=relative_id)
        except CitizenInfo.DoesNotExist:
            continue
        # Add each relative instance one at a time
        new_citizen.relatives.add(relative_instance)
class BulkCitizensSerializer(serializers.ListSerializer):
"""
    The object-saving logic for the POST request lives in create(). All for the sake of m2m.
    The data arriving here is valid for every field except relatives.
    We run custom validation for relatives (checking that all relatives
    are listed for each other), then save relatives in the required form.
"""
def create(self, validated_data):
"""Здесь валидация поля relatives, а так же сохранение объектов"""
# Валидация поля relatives у всех граждан.
relatives_dict = self.context.pop('relatives_dict')
for citizen_data in validated_data:
citizen_id = citizen_data.get('citizen_id')
for relative_id in citizen_data.get('relatives'):
                # The values are guaranteed to be unique and to exist, but
                # check existence here just in case.
                try:
                    relatives_dict[relative_id]
                except KeyError:
raise serializers.ValidationError(
'At least one of the relatives_id does not exist.')
if citizen_id in relatives_dict[relative_id]:
                    # To save time: once symmetry is found, remove the current
                    # citizen from the "relatives" of their relative so the pair
                    # is not checked twice. This does not interfere with saving.
                    relatives_dict[relative_id].remove(citizen_id)
                # If a mismatch is found, return 400 BAD_REQUEST right away
elif citizen_id not in relatives_dict[relative_id]:
raise serializers.ValidationError(
'At least one of the relatives_id is not matching.')
        # Save the valid objects.
        # Create an instance of the current import so that it can then be
        # assigned to the new CitizenInfo objects
        new_import = ImportId.objects.create()
        import_id = new_import.import_id
        # Put the import number into the context
self.context["import_id"] = import_id
for citizen_data in validated_data:
save_citizens(citizen_data, new_import)
return ImportId.objects.filter(import_id=import_id)
class PostSerializer(BaseCitizenInfoSerializer):
"""
    Serializer for the POST request. The saving logic is in BulkCitizensSerializer
"""
def run_validation(self, data=empty):
"""
        Besides the parent logic, this prepares the shared validation of relatives,
        which will run on the data from the global
        relatives_dict dictionary in the BulkCitizensSerializer serializer.
        We add {"citizen id": [ids of their relatives]} to the global dictionary
        for the subsequent validation.
"""
        # If there is no input data, return bad request 400
        if not data:
            raise serializers.ValidationError("Input data can't be empty")
        citizen_id = data['citizen_id']
        relatives_id_list = data['relatives']
        # Add the list of relative ids to the shared dictionary of citizens
self.context['relatives_dict'][citizen_id] = relatives_id_list
return super(PostSerializer, self).run_validation(data)
def to_representation(self, instance):
"""
        Override the base representation.
        Return only the import instance.
"""
return instance
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
# Для доступа ко | "name": instance.name,
"birth_date": JSON_birth_date,
"gender": instance.gender,
"relatives": relatives_id_list
} | conditional_block |
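The TrueIntegerField / TrueCharField classes in the row above exist to block DRF's silent type coercion (for example "5" being accepted where an int is required). The same idea can be shown without Django or DRF at all; this is only a minimal sketch, and the names StrictInt, StrictStr and ValidationError are illustrative stand-ins rather than anything from the original project (the explicit bool check is also an extra nicety the original code does not do):

# Minimal sketch of "strict" fields that refuse coerced types.
# StrictInt / StrictStr and ValidationError are illustrative names only.
class ValidationError(Exception):
    pass

class StrictInt:
    def to_internal_value(self, data):
        # bool is a subclass of int, so reject it explicitly as well
        if isinstance(data, int) and not isinstance(data, bool):
            return data
        raise ValidationError("Value should be an integer")

class StrictStr:
    def to_internal_value(self, data):
        if isinstance(data, str):
            return data
        raise ValidationError("Value should be a string")

if __name__ == "__main__":
    assert StrictInt().to_internal_value(5) == 5
    assert StrictStr().to_internal_value("Moscow") == "Moscow"
    for field, bad in ((StrictInt(), "5"), (StrictStr(), 5)):
        try:
            field.to_internal_value(bad)
        except ValidationError as exc:
            print(f"rejected {bad!r}: {exc}")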
serializers.py | в виде int приходит). По ТЗ нам нужно принимать только
целые числа.
Поэтому наследуемся от сериализатора IntegerField и переопределяем первичную валидацию.
Если входные данные str, то возвращаем bad request 400.
"""
def to_internal_value(self, data):
if isinstance(data, int):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be an integer")
class TrueCharField(serializers.CharField):
"""
Так как в DRF стандартный сериализатор CharField принимает int и
конвертирует его до валидации (туда он приходит уже в виде str),
то наследуемся от этого класса и переопределяем первичную валидацию.
Если входные данные int, то возвращаем bad request 400.
"""
def to_internal_value(self, data):
if isinstance(data, str):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be a string")
class BaseCitizenInfoSerializer(serializers.ModelSerializer):
"""
Кастомный базовый валидатор с общей логикой сериализации
и десериализации
"""
# Наследуемся от кастомных валидаторов типа данных
citizen_id = TrueIntegerField(min_value=0)
town = TrueCharField(max_length=256)
street = TrueCharField(max_length=256)
building = TrueCharField(max_length=256)
apartment = TrueIntegerField(min_value=0)
name = TrueCharField(max_length=256)
relatives = serializers.ListField(child=TrueIntegerField(min_value=0))
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
def run_validation(self, data=empty):
"""Валидация неизвестных полей. Отдаем 400, если есть."""
if data:
unknown = set(data) - set(self.fields)
if unknown:
errors = ["Unknown field: {}".format(f) for f in unknown]
raise serializers.ValidationError({
api_settings.NON_FIELD_ERRORS_KEY: errors,
})
return super(BaseCitizenInfoSerializer, self).run_validation(data)
def validate_birth_date(self, value):
"""
Валидация дня рождения
Проверяем, чтобы дата была не позже чем сегодня
"""
birth_date = value
current_date = datetime.now().date()
if birth_date > current_date:
raise serializers.ValidationError("Birth_date can't be "
"after current date")
return value
def to_representation(self, instance):
"""
Корректируем отображение данных из БД.
m2m отображение родственников, формат даты дня рождения
"""
citizen_pk = instance.pk
citizen = CitizenInfo.objects.filter(pk=citizen_pk)
relatives_pk = citizen.values_list('relatives', flat=True)
relatives_id_list = []
if None not in relatives_pk:
for relative_pk in relatives_pk:
relative_id = CitizenInfo.objects.get(pk=relative_pk).citizen_id
relatives_id_list.append(relative_id)
# Сортирую по порядку, вдруг важно при отоб | elatives_id_list.sort()
# Нужный формат дня рождения
JSON_birth_date = datetime.strptime(str(instance.birth_date),
'%Y-%m-%d').strftime('%d.%m.%Y')
# Составляем ответ вручную
return {
"citizen_id": instance.citizen_id,
"town": instance.town,
"street": instance.street,
"building": instance.building,
"apartment": instance.apartment,
"name": instance.name,
"birth_date": JSON_birth_date,
"gender": instance.gender,
"relatives": relatives_id_list
}
def save_citizens(citizen_data, new_import):
"""Логика полного сохранения гражданина"""
new_citizen = CitizenInfo(import_id=new_import,
citizen_id=citizen_data.get('citizen_id'),
town=citizen_data.get('town'),
street=citizen_data.get('street'),
building=citizen_data.get('building'),
apartment=citizen_data.get('apartment'),
name=citizen_data.get('name'),
birth_date=citizen_data.get('birth_date'),
gender=citizen_data.get('gender'),
)
# К этому моменту мы уверены, что все данные прошли валидацию.
# Сохраняем объект, чтобы было проще искать нужных нам родственников ниже
new_citizen.save()
# Далее достаем сохраненные объекты данного импорта уже из БД
importing_citizens = CitizenInfo.objects.filter(import_id=new_import)
# Поле родственников будем сохранять постепенно, по мере создания граждан
relatives_id_list = citizen_data.get('relatives')
# В рамках одного импорта citizen_id == relative_id
for relative_id in relatives_id_list:
# Если родственник еще не сохранен в БД, то просто продолжаем цикл,
# так как связь симметричная и далее он все-равно попадет в родственники
try:
relative_instance = importing_citizens.get(citizen_id=relative_id)
except:
continue
# Добавляем инстанс каждого родственника по-одному
new_citizen.relatives.add(relative_instance)
class BulkCitizensSerializer(serializers.ListSerializer):
"""
Логика сохранения объектов для POST запроса в create. Всё ради m2m
Сюда приходят валидные данные по всем полям, кроме relatives.
Делаем для relatives кастомную валидацию (проверяем, что все родственники
прописаны друг у друга). Далее сохраняем в нужном виде relatives.
"""
def create(self, validated_data):
"""Здесь валидация поля relatives, а так же сохранение объектов"""
# Валидация поля relatives у всех граждан.
relatives_dict = self.context.pop('relatives_dict')
for citizen_data in validated_data:
citizen_id = citizen_data.get('citizen_id')
for relative_id in citizen_data.get('relatives'):
# Гарантировано, что значения уникальные и существуют, но
# на всякий случай проверяю существование.
try:
relatives_dict[relative_id]
except:
raise serializers.ValidationError(
'At least one of the relatives_id does not exist.')
if citizen_id in relatives_dict[relative_id]:
# Экономим время, если нашли симметрию, то удаляем
# текущего гражданина из "родственников" его родственника,
# Что бы не проверять по два раза. Сохранению не помешает.
relatives_dict[relative_id].remove(citizen_id)
# Если находим несовпадение, то сразу отдаем 400 BAD_REQUEST
elif citizen_id not in relatives_dict[relative_id]:
raise serializers.ValidationError(
'At least one of the relatives_id is not matching.')
# Сохраняем валидные объект
# Создаем инстанс текущего импорта, чтобы далее присвоить к
# новым объектам CitizenInfo
new_import = ImportId.objects.create()
import_id = new_import.import_id
# Кладем номер импорта в контекст
self.context["import_id"] = import_id
for citizen_data in validated_data:
save_citizens(citizen_data, new_import)
return ImportId.objects.filter(import_id=import_id)
class PostSerializer(BaseCitizenInfoSerializer):
"""
Сериализатор для POST запроса. Логика сохранения в BulkCitizensSerializer
"""
def run_validation(self, data=empty):
"""
Помимо родительской логики здесь подготовка к общей валидации relatives,
которая будет проходить с данными из глобального
словаря relatives_dict, в сериализаторе BulkCitizensSerializer
Добавляем в глобальный словарь {"id гражданина": [id его родственников]}
для последующей валидации.
"""
# Если входных данных нет, то отдаем bad request 400
if not data:
raise serializers.ValidationError("Input data can't be empty")
citizen_id = data['citizen_id']
relatives_id_list = data['relatives']
# Добавляем список id родственников в общий словарь граждан
self.context['relatives_dict'][citizen_id] = relatives_id_list
return super(PostSerializer, self).run_validation(data)
def to_representation(self, instance):
"""
Перезаписываем базовое представление.
Возвращаем только инстанс импорта.
"""
return instance
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
# Для доступа | ражении
r | identifier_name |
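The run_validation override in this row rejects any key that is not a declared serializer field by taking a set difference against self.fields. A standalone sketch of that check follows, assuming a hard-coded field list; reject_unknown and KNOWN_FIELDS are invented names for illustration only:

# Standalone sketch of the unknown-field check used in run_validation above.
KNOWN_FIELDS = {
    "citizen_id", "town", "street", "building",
    "apartment", "name", "birth_date", "gender", "relatives",
}

def reject_unknown(payload: dict) -> None:
    unknown = set(payload) - KNOWN_FIELDS
    if unknown:
        raise ValueError("Unknown field: " + ", ".join(sorted(unknown)))

if __name__ == "__main__":
    reject_unknown({"citizen_id": 1, "town": "Keln"})   # passes silently
    try:
        reject_unknown({"citizen_id": 1, "favourite_color": "red"})
    except ValueError as exc:
        print(exc)  # Unknown field: favourite_color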
serializers.py | в виде int приходит). По ТЗ нам нужно принимать только
целые числа.
Поэтому наследуемся от сериализатора IntegerField и переопределяем первичную валидацию.
Если входные данные str, то возвращаем bad request 400.
"""
def to_internal_value(self, data):
if isinstance(data, int):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be an integer")
class TrueCharField(serializers.CharField):
"""
Так как в DRF стандартный сериализатор CharField принимает int и
конвертирует его до валидации (туда он приходит уже в виде str),
то наследуемся от этого класса и переопределяем первичную валидацию.
Если входные данные int, то возвращаем bad request 400.
"""
def to_internal_value(self, data):
if isinstance(data, str):
return super().to_internal_value(data)
raise serializers.ValidationError("Value should be a string")
class BaseCitizenInfoSerializer(serializers.ModelSerializer):
"""
Кастомный базовый валидатор с общей логикой сериализации
и десериализации
"""
# Наследуемся от кастомных валидаторов типа данных
citizen_id = TrueIntegerField(min_value=0)
town = TrueCharField(max_length=256)
street = TrueCharField(max_length=256)
building = TrueCharField(max_length=256)
apartment = TrueIntegerField(min_value=0)
name = TrueCharField(max_length=256)
relatives = serializers.ListField(child=TrueIntegerField(min_value=0))
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
def run_validation(self, data=empty):
"""Валидация неизвестных полей. Отдаем 400, если есть."""
if data:
unknown = set(data) - set(self.fields)
if unknown:
errors = ["Unknown field: {}".format(f) for f in unknown]
raise serializers.ValidationError({
api_settings.NON_FIELD_ERRORS_KEY: errors,
})
return super(BaseCitizenInfoSerializer, self).run_validation(data)
def validate_birth_date(self, value):
"""
Валидация дня рождения
Проверяем, чтобы дата была не позже чем сегодня
"""
birth_date = value
current_date = datetime.now().date()
if birth_date > current_date:
raise serializers.ValidationError("Birth_date can't be "
"after current date")
return value
def to_representation(self, instance):
"""
Корректируем отображение данных из БД.
m2m отображение родственников, формат даты дня рождения
"""
citizen_pk = instance.pk
citizen = CitizenInfo.objects.filter(pk=citizen_pk)
relatives_pk = citizen.values_list('relatives', flat=True)
relatives_id_list = []
if None not in relatives_pk:
for relative_pk in relatives_pk:
relative_id = CitizenInfo.objects.get(pk=relative_pk).citizen_id
relatives_id_list.append(relative_id)
# Сортирую по порядку, вдруг важно при отображении
relatives_id_list.sort()
# Нужный формат дня рождения
JSON_birth_date = datetime.strptime(str(instance.birth_date),
'%Y-%m-%d').strftime('%d.%m.%Y')
# Составляем ответ вручную
return {
"citizen_id": instance.citizen_id,
"town": instance.town,
"street": instance.street,
"building": instance.building,
"apartment": instance.apartment,
"name": instance.name,
"birth_date": JSON_birth_date,
"gender": instance.gender,
"relatives": relatives_id_list
}
def save_citizens(citizen_data, new_import):
"""Логика полного сохранения гражданина"""
new_citizen = CitizenInfo(import_id=new_import,
citizen_id=citizen_data.get('citizen_id'),
town=citizen_data.get('town'),
street=citizen_data.get('street'),
building=citizen_data.get('building'),
apartment=citizen_data.get('apartment'),
name=citizen_data.get('name'),
birth_date=citizen_data.get('birth_date'),
gender=citizen_data.get('gender'),
)
# К этому моменту мы уверены, что все данные прошли валидацию.
# Сохраняем объект, чтобы было проще искать нужных нам родственников ниже
new_citizen.save()
# Далее достаем сохраненные объекты данного импорта уже из БД
importing_citizens = CitizenInfo.objects.filter(import_id=new_import)
# Поле родственников будем сохранять постепенно, по мере создания граждан
relatives_id_list = citizen_data.get('relatives')
# В рамках одного импорта citizen_id == relative_id
for relative_id in relatives_id_list:
# Если родственник еще не сохранен в БД, то просто продолжаем цикл,
# так как связь симметричная и далее он все-равно попадет в родственники
try:
relative_instance = importing_citizens.get(citizen_id=relative_id)
except:
continue
# Добавляем инстанс каждого родственника по-одному
new_citizen.relatives.add(relative_instance)
class BulkCitizensSerializer(serializers.ListSerializer):
"""
Логика сохранения объектов для POST запроса в create. Всё ради m2m
Сюда приходят валидные данные по всем полям, кроме relatives.
Делаем для relatives кастомную валидацию (проверяем, что все родственники
прописаны друг у друга). Далее сохраняем в нужном виде relatives.
"""
def create(self, validated_data):
"""Здесь валидация поля relatives, а так же сохранение объектов"""
# Валидация поля relatives у всех граждан.
relatives_dict = self.context.pop('relatives_dict')
for citizen_data in validated_data:
citizen_id = citizen_data.get('citizen_id')
for relative_id in citizen_data.get('relatives'):
# Гарантировано, что значения уникальные и существуют, но
# на всякий случай проверяю существование.
try:
relatives_dict[relative_id]
except:
raise serializers.ValidationError(
'At least one of the relatives_id does not exist.')
if citizen_id in relatives_dict[relative_id]:
# Экономим время, если нашли симметрию, то удаляем
# текущего гражданина из "родс | """
Сериализатор для POST запроса. Логика сохранения в BulkCitizensSerializer
"""
def run_validation(self, data=empty):
"""
Помимо родительской логики здесь подготовка к общей валидации relatives,
которая будет проходить с данными из глобального
словаря relatives_dict, в сериализаторе BulkCitizensSerializer
Добавляем в глобальный словарь {"id гражданина": [id его родственников]}
для последующей валидации.
"""
# Если входных данных нет, то отдаем bad request 400
if not data:
raise serializers.ValidationError("Input data can't be empty")
citizen_id = data['citizen_id']
relatives_id_list = data['relatives']
# Добавляем список id родственников в общий словарь граждан
self.context['relatives_dict'][citizen_id] = relatives_id_list
return super(PostSerializer, self).run_validation(data)
def to_representation(self, instance):
"""
Перезаписываем базовое представление.
Возвращаем только инстанс импорта.
"""
return instance
class Meta:
model = CitizenInfo
exclude = ['id', 'import_id', ]
# Для доступа ко всем | твенников" его родственника,
# Что бы не проверять по два раза. Сохранению не помешает.
relatives_dict[relative_id].remove(citizen_id)
# Если находим несовпадение, то сразу отдаем 400 BAD_REQUEST
elif citizen_id not in relatives_dict[relative_id]:
raise serializers.ValidationError(
'At least one of the relatives_id is not matching.')
# Сохраняем валидные объект
# Создаем инстанс текущего импорта, чтобы далее присвоить к
# новым объектам CitizenInfo
new_import = ImportId.objects.create()
import_id = new_import.import_id
# Кладем номер импорта в контекст
self.context["import_id"] = import_id
for citizen_data in validated_data:
save_citizens(citizen_data, new_import)
return ImportId.objects.filter(import_id=import_id)
class PostSerializer(BaseCitizenInfoSerializer): | identifier_body |
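create() in this row validates that the relatives relation is symmetric: if citizen 1 lists 2 as a relative, then 2 must list 1 back, and anything else is a 400. A pure-Python sketch of that invariant check follows; it skips the original's mirror-pruning optimisation for clarity, and check_relatives_symmetry plus the sample data are assumptions, not project code:

def check_relatives_symmetry(relatives):
    """relatives maps citizen_id -> list of relative ids within one import."""
    for citizen_id, rels in relatives.items():
        for relative_id in rels:
            if relative_id not in relatives:
                raise ValueError("At least one of the relatives_id does not exist.")
            if citizen_id not in relatives[relative_id]:
                raise ValueError("At least one of the relatives_id is not matching.")

if __name__ == "__main__":
    check_relatives_symmetry({1: [2], 2: [1], 3: []})   # symmetric: passes silently
    try:
        check_relatives_symmetry({1: [2], 2: []})        # 2 does not list 1 back
    except ValueError as exc:
        print(exc)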
server.go | , reader)
return err
}
func (server *Server) putApp(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
err := server.app.PutApp(runID, r.Body, r.ContentLength)
if errors.Is(err, commands.ErrArchiveFormat) {
return newHTTPError(err, http.StatusBadRequest, err.Error())
}
return err
}
func (server *Server) getCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetCache(runID)
if err != nil {
// Returns 404 if there is no cache
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/gzip")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutCache(runID, r.Body, r.ContentLength)
}
func (server *Server) getOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetOutput(runID)
if err != nil {
// Returns 404 if there is no output
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/octet-stream")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutOutput(runID, r.Body, r.ContentLength)
}
func (server *Server) getExitData(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
exitData, err := server.app.GetExitData(runID)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(exitData)
}
func (server *Server) startRun(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
decoder := json.NewDecoder(r.Body)
var options protocol.StartRunOptions
err := decoder.Decode(&options)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
if options.MaxRunTime == 0 {
options.MaxRunTime = server.defaultMaxRunTime
} else if options.MaxRunTime > server.maxRunTime {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("max_run_time should not be larger than %v", server.maxRunTime))
}
if options.Memory == 0 {
options.Memory = server.defaultMemory
} else if options.Memory > server.maxMemory {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("memory should not be larger than %v", server.maxMemory))
}
env := make(map[string]string)
for _, keyvalue := range options.Env {
env[keyvalue.Name] = keyvalue.Value
}
err = server.app.StartRun(runID, server.runDockerImage, options)
if errors.Is(err, commands.ErrAppNotAvailable) {
err = newHTTPError(err, http.StatusBadRequest, "app needs to be uploaded before starting a run")
} else if errors.Is(err, integrationclient.ErrNotAllowed) {
err = newHTTPError(err, http.StatusUnauthorized, err.Error())
}
return err
}
func (server *Server) getEvents(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
lastID := r.URL.Query().Get("last_id")
if lastID == "" {
lastID = "0"
}
w.Header().Set("Content-Type", "application/ld+json")
flusher, ok := w.(http.Flusher)
if !ok {
return errors.New("couldn't access the flusher")
}
events := server.app.GetEvents(runID, lastID)
enc := json.NewEncoder(w)
for events.More() {
e, err := events.Next()
if err != nil {
return err
}
err = enc.Encode(e)
if err != nil {
return err
}
flusher.Flush()
}
return nil
}
func (server *Server) createEvent(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
// Read json message as is into a string
// TODO: Switch over to json decoder
buf, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
// Check the form of the JSON by interpreting it
var event protocol.Event
err = json.Unmarshal(buf, &event)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
return server.app.CreateEvent(runID, event)
}
func (server *Server) delete(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.DeleteRun(runID)
}
func (server *Server) hello(w http.ResponseWriter, r *http.Request) error {
hello := protocol.Hello{
Message: "Hello from Yinyo!",
MaxRunTime: protocol.DefaultAndMax{
Default: server.defaultMaxRunTime,
Max: server.maxRunTime,
},
Memory: protocol.DefaultAndMax{
Default: server.defaultMemory,
Max: server.maxMemory,
},
Version: server.version,
RunnerImage: server.runDockerImage,
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(hello)
}
// isExternal returns true if the request has arrived via the public internet. This relies
// on the requests from the internet coming in via a load balancer (which sets the
// X-Forwarded-For header) and internal requests not coming via a load balancer
// This is used in measuring network traffic
func isExternal(request *http.Request) bool {
return request.Header.Get("X-Forwarded-For") != ""
}
// Middleware that logs the request uri
func logRequests(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var source string
if isExternal(r) {
source = "external"
} else {
source = "internal"
}
log.Println(source, r.Method, r.RequestURI)
// Call the next handler, which can be another middleware in the chain, or the final handler.
next.ServeHTTP(w, r)
})
}
type readMeasurer struct {
rc io.ReadCloser
BytesRead int64
}
func newReadMeasurer(rc io.ReadCloser) *readMeasurer {
return &readMeasurer{rc: rc}
}
func (r *readMeasurer) Read(p []byte) (n int, err error) {
n, err = r.rc.Read(p)
atomic.AddInt64(&r.BytesRead, int64(n))
return
}
func (r *readMeasurer) Close() error {
return r.rc.Close()
}
func (server *Server) recordTraffic(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
readMeasurer := newReadMeasurer(r.Body)
r.Body = readMeasurer
m := httpsnoop.CaptureMetrics(next, w, r)
if runID != "" && isExternal(r) {
err := server.app.ReportAPINetworkUsage(runID, uint64(readMeasurer.BytesRead), uint64(m.Written))
if err != nil {
// TODO: Will this actually work here
logAndReturnError(err, w)
return
}
}
})
}
// Middleware function, which will be called for each request
// TODO: Refactor checkRunCreated method to return an error
func (server *Server) checkRunCreated(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
created, err := server.app.IsRunCreated(runID)
if err != nil {
log.Println(err) | if !created {
err = newHTTPError(err, http.StatusNotFound, fmt.Sprintf("run %v: not found", runID))
logAndReturnError(err, w)
return
}
next.ServeHTTP(w, r)
})
}
func logAndReturnError(err error, w http.ResponseWriter) {
log.Println(err)
err2, ok := err.(clientError)
if !ok {
// TODO: Factor out common code with other error handling
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(http.StatusInternalServerError)
//nolint:errcheck // ignore error while logging an error
//skipcq: GSC-G104
w.Write([]byte(`{"error":"Internal server error"}`))
return
}
body, err | logAndReturnError(err, w)
return
} | random_line_split |
server.go | (w http.ResponseWriter, r *http.Request) error {
createResult, err := server.app.CreateRun(protocol.CreateRunOptions{APIKey: r.URL.Query().Get("api_key")})
if err != nil {
if errors.Is(err, integrationclient.ErrNotAllowed) {
return newHTTPError(err, http.StatusUnauthorized, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/json")
return json.NewEncoder(w).Encode(createResult)
}
func (server *Server) getApp(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
w.Header().Set("Content-Type", "application/gzip")
reader, err := server.app.GetApp(runID)
if err != nil {
// Returns 404 if there is no app
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putApp(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
err := server.app.PutApp(runID, r.Body, r.ContentLength)
if errors.Is(err, commands.ErrArchiveFormat) {
return newHTTPError(err, http.StatusBadRequest, err.Error())
}
return err
}
func (server *Server) getCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetCache(runID)
if err != nil {
// Returns 404 if there is no cache
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/gzip")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutCache(runID, r.Body, r.ContentLength)
}
func (server *Server) getOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetOutput(runID)
if err != nil {
// Returns 404 if there is no output
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/octet-stream")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutOutput(runID, r.Body, r.ContentLength)
}
func (server *Server) getExitData(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
exitData, err := server.app.GetExitData(runID)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(exitData)
}
func (server *Server) startRun(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
decoder := json.NewDecoder(r.Body)
var options protocol.StartRunOptions
err := decoder.Decode(&options)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
if options.MaxRunTime == 0 {
options.MaxRunTime = server.defaultMaxRunTime
} else if options.MaxRunTime > server.maxRunTime {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("max_run_time should not be larger than %v", server.maxRunTime))
}
if options.Memory == 0 {
options.Memory = server.defaultMemory
} else if options.Memory > server.maxMemory {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("memory should not be larger than %v", server.maxMemory))
}
env := make(map[string]string)
for _, keyvalue := range options.Env {
env[keyvalue.Name] = keyvalue.Value
}
err = server.app.StartRun(runID, server.runDockerImage, options)
if errors.Is(err, commands.ErrAppNotAvailable) {
err = newHTTPError(err, http.StatusBadRequest, "app needs to be uploaded before starting a run")
} else if errors.Is(err, integrationclient.ErrNotAllowed) {
err = newHTTPError(err, http.StatusUnauthorized, err.Error())
}
return err
}
func (server *Server) getEvents(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
lastID := r.URL.Query().Get("last_id")
if lastID == "" {
lastID = "0"
}
w.Header().Set("Content-Type", "application/ld+json")
flusher, ok := w.(http.Flusher)
if !ok {
return errors.New("couldn't access the flusher")
}
events := server.app.GetEvents(runID, lastID)
enc := json.NewEncoder(w)
for events.More() {
e, err := events.Next()
if err != nil {
return err
}
err = enc.Encode(e)
if err != nil {
return err
}
flusher.Flush()
}
return nil
}
func (server *Server) createEvent(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
// Read json message as is into a string
// TODO: Switch over to json decoder
buf, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
// Check the form of the JSON by interpreting it
var event protocol.Event
err = json.Unmarshal(buf, &event)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
return server.app.CreateEvent(runID, event)
}
func (server *Server) delete(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.DeleteRun(runID)
}
func (server *Server) hello(w http.ResponseWriter, r *http.Request) error {
hello := protocol.Hello{
Message: "Hello from Yinyo!",
MaxRunTime: protocol.DefaultAndMax{
Default: server.defaultMaxRunTime,
Max: server.maxRunTime,
},
Memory: protocol.DefaultAndMax{
Default: server.defaultMemory,
Max: server.maxMemory,
},
Version: server.version,
RunnerImage: server.runDockerImage,
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(hello)
}
// isExternal returns true if the request has arrived via the public internet. This relies
// on the requests from the internet coming in via a load balancer (which sets the
// X-Forwarded-For header) and internal requests not coming via a load balancer
// This is used in measuring network traffic
func isExternal(request *http.Request) bool {
return request.Header.Get("X-Forwarded-For") != ""
}
// Middleware that logs the request uri
func logRequests(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var source string
if isExternal(r) {
source = "external"
} else {
source = "internal"
}
log.Println(source, r.Method, r.RequestURI)
// Call the next handler, which can be another middleware in the chain, or the final handler.
next.ServeHTTP(w, r)
})
}
type readMeasurer struct {
rc io.ReadCloser
BytesRead int64
}
func newReadMeasurer(rc io.ReadCloser) *readMeasurer {
return &readMeasurer{rc: rc}
}
func (r *readMeasurer) Read(p []byte) (n int, err error) {
n, err = r.rc.Read(p)
atomic.AddInt64(&r.BytesRead, int64(n))
return
}
func (r *readMeasurer) Close() error {
return r.rc.Close()
}
func (server *Server) recordTraffic(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
readMeasurer := newReadMeasurer(r.Body)
r.Body = readMeasurer
m := httpsnoop.CaptureMetrics(next, w, r)
if runID != "" && isExternal(r) {
err := server.app.ReportAPINetworkUsage(runID, uint64(readMeasurer.BytesRead), uint64(m.Written))
if err != nil {
// TODO: Will this actually work here
logAndReturnError(err, w)
return
}
}
})
}
// Middleware function, which will be called for each request
// TODO: Refactor checkRunCreated method to return an error
func (server *Server) checkRunCreated(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
created | createRun | identifier_name |
|
server.go | , reader)
return err
}
func (server *Server) putApp(w http.ResponseWriter, r *http.Request) error |
func (server *Server) getCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetCache(runID)
if err != nil {
// Returns 404 if there is no cache
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/gzip")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutCache(runID, r.Body, r.ContentLength)
}
func (server *Server) getOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetOutput(runID)
if err != nil {
// Returns 404 if there is no output
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/octet-stream")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutOutput(runID, r.Body, r.ContentLength)
}
func (server *Server) getExitData(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
exitData, err := server.app.GetExitData(runID)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(exitData)
}
func (server *Server) startRun(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
decoder := json.NewDecoder(r.Body)
var options protocol.StartRunOptions
err := decoder.Decode(&options)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
if options.MaxRunTime == 0 {
options.MaxRunTime = server.defaultMaxRunTime
} else if options.MaxRunTime > server.maxRunTime {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("max_run_time should not be larger than %v", server.maxRunTime))
}
if options.Memory == 0 {
options.Memory = server.defaultMemory
} else if options.Memory > server.maxMemory {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("memory should not be larger than %v", server.maxMemory))
}
env := make(map[string]string)
for _, keyvalue := range options.Env {
env[keyvalue.Name] = keyvalue.Value
}
err = server.app.StartRun(runID, server.runDockerImage, options)
if errors.Is(err, commands.ErrAppNotAvailable) {
err = newHTTPError(err, http.StatusBadRequest, "app needs to be uploaded before starting a run")
} else if errors.Is(err, integrationclient.ErrNotAllowed) {
err = newHTTPError(err, http.StatusUnauthorized, err.Error())
}
return err
}
func (server *Server) getEvents(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
lastID := r.URL.Query().Get("last_id")
if lastID == "" {
lastID = "0"
}
w.Header().Set("Content-Type", "application/ld+json")
flusher, ok := w.(http.Flusher)
if !ok {
return errors.New("couldn't access the flusher")
}
events := server.app.GetEvents(runID, lastID)
enc := json.NewEncoder(w)
for events.More() {
e, err := events.Next()
if err != nil {
return err
}
err = enc.Encode(e)
if err != nil {
return err
}
flusher.Flush()
}
return nil
}
func (server *Server) createEvent(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
// Read json message as is into a string
// TODO: Switch over to json decoder
buf, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
// Check the form of the JSON by interpreting it
var event protocol.Event
err = json.Unmarshal(buf, &event)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
return server.app.CreateEvent(runID, event)
}
func (server *Server) delete(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.DeleteRun(runID)
}
func (server *Server) hello(w http.ResponseWriter, r *http.Request) error {
hello := protocol.Hello{
Message: "Hello from Yinyo!",
MaxRunTime: protocol.DefaultAndMax{
Default: server.defaultMaxRunTime,
Max: server.maxRunTime,
},
Memory: protocol.DefaultAndMax{
Default: server.defaultMemory,
Max: server.maxMemory,
},
Version: server.version,
RunnerImage: server.runDockerImage,
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(hello)
}
// isExternal returns true if the request has arrived via the public internet. This relies
// on the requests from the internet coming in via a load balancer (which sets the
// X-Forwarded-For header) and internal requests not coming via a load balancer
// This is used in measuring network traffic
func isExternal(request *http.Request) bool {
return request.Header.Get("X-Forwarded-For") != ""
}
// Middleware that logs the request uri
func logRequests(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var source string
if isExternal(r) {
source = "external"
} else {
source = "internal"
}
log.Println(source, r.Method, r.RequestURI)
// Call the next handler, which can be another middleware in the chain, or the final handler.
next.ServeHTTP(w, r)
})
}
type readMeasurer struct {
rc io.ReadCloser
BytesRead int64
}
func newReadMeasurer(rc io.ReadCloser) *readMeasurer {
return &readMeasurer{rc: rc}
}
func (r *readMeasurer) Read(p []byte) (n int, err error) {
n, err = r.rc.Read(p)
atomic.AddInt64(&r.BytesRead, int64(n))
return
}
func (r *readMeasurer) Close() error {
return r.rc.Close()
}
func (server *Server) recordTraffic(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
readMeasurer := newReadMeasurer(r.Body)
r.Body = readMeasurer
m := httpsnoop.CaptureMetrics(next, w, r)
if runID != "" && isExternal(r) {
err := server.app.ReportAPINetworkUsage(runID, uint64(readMeasurer.BytesRead), uint64(m.Written))
if err != nil {
// TODO: Will this actually work here
logAndReturnError(err, w)
return
}
}
})
}
// Middleware function, which will be called for each request
// TODO: Refactor checkRunCreated method to return an error
func (server *Server) checkRunCreated(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
created, err := server.app.IsRunCreated(runID)
if err != nil {
log.Println(err)
logAndReturnError(err, w)
return
}
if !created {
err = newHTTPError(err, http.StatusNotFound, fmt.Sprintf("run %v: not found", runID))
logAndReturnError(err, w)
return
}
next.ServeHTTP(w, r)
})
}
func logAndReturnError(err error, w http.ResponseWriter) {
log.Println(err)
err2, ok := err.(clientError)
if !ok {
// TODO: Factor out common code with other error handling
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(http.StatusInternalServerError)
//nolint:errcheck // ignore error while logging an error
//skipcq: GSC-G104
w.Write([]byte(`{"error":"Internal server error"}`))
return
}
body | {
runID := mux.Vars(r)["id"]
err := server.app.PutApp(runID, r.Body, r.ContentLength)
if errors.Is(err, commands.ErrArchiveFormat) {
return newHTTPError(err, http.StatusBadRequest, err.Error())
}
return err
} | identifier_body |
server.go | , reader)
return err
}
func (server *Server) putApp(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
err := server.app.PutApp(runID, r.Body, r.ContentLength)
if errors.Is(err, commands.ErrArchiveFormat) {
return newHTTPError(err, http.StatusBadRequest, err.Error())
}
return err
}
func (server *Server) getCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetCache(runID)
if err != nil {
// Returns 404 if there is no cache
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/gzip")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putCache(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutCache(runID, r.Body, r.ContentLength)
}
func (server *Server) getOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
reader, err := server.app.GetOutput(runID)
if err != nil {
// Returns 404 if there is no output
if errors.Is(err, commands.ErrNotFound) {
return newHTTPError(err, http.StatusNotFound, err.Error())
}
return err
}
w.Header().Set("Content-Type", "application/octet-stream")
_, err = io.Copy(w, reader)
return err
}
func (server *Server) putOutput(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.PutOutput(runID, r.Body, r.ContentLength)
}
func (server *Server) getExitData(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
exitData, err := server.app.GetExitData(runID)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(exitData)
}
func (server *Server) startRun(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
decoder := json.NewDecoder(r.Body)
var options protocol.StartRunOptions
err := decoder.Decode(&options)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
if options.MaxRunTime == 0 {
options.MaxRunTime = server.defaultMaxRunTime
} else if options.MaxRunTime > server.maxRunTime {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("max_run_time should not be larger than %v", server.maxRunTime))
}
if options.Memory == 0 {
options.Memory = server.defaultMemory
} else if options.Memory > server.maxMemory {
return newHTTPError(err, http.StatusBadRequest, fmt.Sprintf("memory should not be larger than %v", server.maxMemory))
}
env := make(map[string]string)
for _, keyvalue := range options.Env {
env[keyvalue.Name] = keyvalue.Value
}
err = server.app.StartRun(runID, server.runDockerImage, options)
if errors.Is(err, commands.ErrAppNotAvailable) {
err = newHTTPError(err, http.StatusBadRequest, "app needs to be uploaded before starting a run")
} else if errors.Is(err, integrationclient.ErrNotAllowed) {
err = newHTTPError(err, http.StatusUnauthorized, err.Error())
}
return err
}
func (server *Server) getEvents(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
lastID := r.URL.Query().Get("last_id")
if lastID == "" {
lastID = "0"
}
w.Header().Set("Content-Type", "application/ld+json")
flusher, ok := w.(http.Flusher)
if !ok {
return errors.New("couldn't access the flusher")
}
events := server.app.GetEvents(runID, lastID)
enc := json.NewEncoder(w)
for events.More() {
e, err := events.Next()
if err != nil {
return err
}
err = enc.Encode(e)
if err != nil {
return err
}
flusher.Flush()
}
return nil
}
func (server *Server) createEvent(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
// Read json message as is into a string
// TODO: Switch over to json decoder
buf, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
// Check the form of the JSON by interpreting it
var event protocol.Event
err = json.Unmarshal(buf, &event)
if err != nil {
return newHTTPError(err, http.StatusBadRequest, "JSON in body not correctly formatted")
}
return server.app.CreateEvent(runID, event)
}
func (server *Server) delete(w http.ResponseWriter, r *http.Request) error {
runID := mux.Vars(r)["id"]
return server.app.DeleteRun(runID)
}
func (server *Server) hello(w http.ResponseWriter, r *http.Request) error {
hello := protocol.Hello{
Message: "Hello from Yinyo!",
MaxRunTime: protocol.DefaultAndMax{
Default: server.defaultMaxRunTime,
Max: server.maxRunTime,
},
Memory: protocol.DefaultAndMax{
Default: server.defaultMemory,
Max: server.maxMemory,
},
Version: server.version,
RunnerImage: server.runDockerImage,
}
w.Header().Set("Content-Type", "application/json")
enc := json.NewEncoder(w)
return enc.Encode(hello)
}
// isExternal returns true if the request has arrived via the public internet. This relies
// on the requests from the internet coming in via a load balancer (which sets the
// X-Forwarded-For header) and internal requests not coming via a load balancer
// This is used in measuring network traffic
func isExternal(request *http.Request) bool {
return request.Header.Get("X-Forwarded-For") != ""
}
// Middleware that logs the request uri
func logRequests(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var source string
if isExternal(r) {
source = "external"
} else {
source = "internal"
}
log.Println(source, r.Method, r.RequestURI)
// Call the next handler, which can be another middleware in the chain, or the final handler.
next.ServeHTTP(w, r)
})
}
type readMeasurer struct {
rc io.ReadCloser
BytesRead int64
}
func newReadMeasurer(rc io.ReadCloser) *readMeasurer {
return &readMeasurer{rc: rc}
}
func (r *readMeasurer) Read(p []byte) (n int, err error) {
n, err = r.rc.Read(p)
atomic.AddInt64(&r.BytesRead, int64(n))
return
}
func (r *readMeasurer) Close() error {
return r.rc.Close()
}
func (server *Server) recordTraffic(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
readMeasurer := newReadMeasurer(r.Body)
r.Body = readMeasurer
m := httpsnoop.CaptureMetrics(next, w, r)
if runID != "" && isExternal(r) {
err := server.app.ReportAPINetworkUsage(runID, uint64(readMeasurer.BytesRead), uint64(m.Written))
if err != nil |
}
})
}
// Middleware function, which will be called for each request
// TODO: Refactor checkRunCreated method to return an error
func (server *Server) checkRunCreated(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
runID := mux.Vars(r)["id"]
created, err := server.app.IsRunCreated(runID)
if err != nil {
log.Println(err)
logAndReturnError(err, w)
return
}
if !created {
err = newHTTPError(err, http.StatusNotFound, fmt.Sprintf("run %v: not found", runID))
logAndReturnError(err, w)
return
}
next.ServeHTTP(w, r)
})
}
func logAndReturnError(err error, w http.ResponseWriter) {
log.Println(err)
err2, ok := err.(clientError)
if !ok {
// TODO: Factor out common code with other error handling
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(http.StatusInternalServerError)
//nolint:errcheck // ignore error while logging an error
//skipcq: GSC-G104
w.Write([]byte(`{"error":"Internal server error"}`))
return
}
body | {
// TODO: Will this actually work here
logAndReturnError(err, w)
return
} | conditional_block |
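recordTraffic and readMeasurer in the server.go rows above wrap the request body so that the bytes read (and the bytes written, via httpsnoop) can be reported per run. The wrapping idea itself is tiny; here is a plain-Python sketch of it, with CountingReader as an assumed name and io.BytesIO standing in for a network stream:

import io

# Sketch of the readMeasurer idea: wrap a stream and count the bytes read through it.
class CountingReader:
    def __init__(self, raw):
        self._raw = raw
        self.bytes_read = 0

    def read(self, size=-1):
        chunk = self._raw.read(size)
        self.bytes_read += len(chunk)
        return chunk

    def close(self):
        self._raw.close()

if __name__ == "__main__":
    body = CountingReader(io.BytesIO(b"name=value&other=1"))
    while body.read(4):
        pass
    print(body.bytes_read)  # 18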
saga.ts | import {
call,
cancel,
delay,
fork,
join,
put,
race,
select,
spawn,
takeLeading,
} from 'redux-saga/effects'
import { Task } from '@redux-saga/types'
import { setBackupCompleted } from 'src/account/actions'
import { uploadNameAndPicture } from 'src/account/profileInfo'
import { recoveringFromStoreWipeSelector } from 'src/account/selectors'
import { showError } from 'src/alert/actions'
import { AppEvents, OnboardingEvents } from 'src/analytics/Events'
import ValoraAnalytics from 'src/analytics/ValoraAnalytics'
import { ErrorMessages } from 'src/app/ErrorMessages'
import { countMnemonicWords, storeMnemonic } from 'src/backup/utils'
import { refreshAllBalances } from 'src/home/actions'
import {
Actions,
ImportBackupPhraseAction,
importBackupPhraseFailure,
importBackupPhraseSuccess,
} from 'src/import/actions'
import { navigate, navigateClearingStack } from 'src/navigator/NavigationService'
import { Screens } from 'src/navigator/Screens'
import { fetchTokenBalanceInWeiWithRetry } from 'src/tokens/saga'
import { Currency } from 'src/utils/currencies'
import Logger from 'src/utils/Logger'
import { assignAccountFromPrivateKey, waitWeb3LastBlock } from 'src/web3/saga'
const TAG = 'import/saga'
export const MAX_BALANCE_CHECK_TASKS = 5
export const MNEMONIC_AUTOCORRECT_TIMEOUT = 5000 // ms
export function* importBackupPhraseSaga({ phrase, useEmptyWallet }: ImportBackupPhraseAction) {
Logger.debug(TAG + '@importBackupPhraseSaga', 'Importing backup phrase')
yield call(waitWeb3LastBlock)
try {
const normalizedPhrase = normalizeMnemonic(phrase)
const phraseIsValid = validateMnemonic(normalizedPhrase, bip39)
const invalidWords = phraseIsValid ? [] : invalidMnemonicWords(normalizedPhrase)
if (!phraseIsValid) |
// If the given mnemonic phrase is invalid, spend up to 5 seconds trying to correct it.
// A balance check happens before the phrase is returned, so if the phrase was autocorrected,
// we do not need to check the balance again later in this method.
// If useEmptyWallet is true, skip this step. It only helps find non-empty wallets.
let mnemonic = phraseIsValid ? normalizedPhrase : undefined
let checkedBalance = false
if (!phraseIsValid && !useEmptyWallet) {
try {
const { correctedPhrase, timeout } = yield race({
correctedPhrase: call(attemptBackupPhraseCorrection, normalizedPhrase),
timeout: delay(MNEMONIC_AUTOCORRECT_TIMEOUT),
})
if (correctedPhrase) {
Logger.info(TAG + '@importBackupPhraseSaga', 'Using suggested mnemonic autocorrection')
mnemonic = correctedPhrase
checkedBalance = true
} else {
Logger.info(
TAG + '@importBackupPhraseSaga',
`Backup phrase autocorrection ${timeout ? 'timed out' : 'failed'}`
)
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_failed, {
timeout: timeout !== undefined,
})
}
} catch (error) {
Logger.error(
TAG + '@importBackupPhraseSaga',
`Encountered an error trying to correct a phrase`,
error
)
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_failed, {
timeout: false,
error: error.message,
})
}
}
// If the input phrase was invalid, and the correct phrase could not be found automatically,
// report an error to the user.
if (mnemonic === undefined) {
Logger.error(TAG + '@importBackupPhraseSaga', 'Invalid mnemonic')
if (invalidWords !== undefined && invalidWords.length > 0) {
yield put(
showError(ErrorMessages.INVALID_WORDS_IN_BACKUP_PHRASE, null, {
invalidWords: invalidWords.join(', '),
})
)
} else {
yield put(showError(ErrorMessages.INVALID_BACKUP_PHRASE))
}
yield put(importBackupPhraseFailure())
return
}
const { privateKey } = yield call(
generateKeys,
mnemonic,
undefined,
undefined,
undefined,
bip39
)
if (!privateKey) {
throw new Error('Failed to convert mnemonic to hex')
}
// Check that the provided mnemonic derives an account with at least some balance. If the wallet
// is empty, and useEmptyWallet is not true, display a warning to the user before they continue.
if (!useEmptyWallet && !checkedBalance) {
const backupAccount = privateKeyToAddress(privateKey)
if (!(yield call(walletHasBalance, backupAccount))) {
yield put(importBackupPhraseSuccess())
ValoraAnalytics.track(OnboardingEvents.wallet_import_zero_balance, {
account: backupAccount,
})
navigate(Screens.ImportWallet, { clean: false, showZeroBalanceModal: true })
return
}
}
const account: string | null = yield call(assignAccountFromPrivateKey, privateKey, mnemonic)
if (!account) {
throw new Error('Failed to assign account from private key')
}
// Set key in phone's secure store
yield call(storeMnemonic, mnemonic, account)
// Set backup complete so user isn't prompted to do backup flow
yield put(setBackupCompleted())
yield put(refreshAllBalances())
yield call(uploadNameAndPicture)
const recoveringFromStoreWipe = yield select(recoveringFromStoreWipeSelector)
if (recoveringFromStoreWipe) {
ValoraAnalytics.track(AppEvents.redux_store_recovery_success, { account })
}
ValoraAnalytics.track(OnboardingEvents.wallet_import_success)
navigateClearingStack(Screens.VerificationEducationScreen)
yield put(importBackupPhraseSuccess())
} catch (error) {
Logger.error(TAG + '@importBackupPhraseSaga', 'Error importing backup phrase', error)
yield put(showError(ErrorMessages.IMPORT_BACKUP_FAILED))
yield put(importBackupPhraseFailure())
ValoraAnalytics.track(OnboardingEvents.wallet_import_error, { error: error.message })
}
}
// Uses suggestMnemonicCorrections to generate valid mnemonic phrases that are likely given the
// invalid phrase that the user entered. Checks the balance of any phrase the generator suggests
// before returning it. If the wallet has non-zero balance, then we can be very confident that it's
// the account the user was actually trying to restore. Otherwise, this method does not return any
// suggested correction.
function* attemptBackupPhraseCorrection(mnemonic: string) {
// Counter of how many suggestions have been tried and a list of tasks for ongoing balance checks.
let counter = 0
let tasks: { index: number; suggestion: string; task: Task; done: boolean }[] = []
for (const suggestion of suggestMnemonicCorrections(mnemonic)) {
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_attempt)
Logger.info(
TAG + '@attemptBackupPhraseCorrection',
`Checking account balance on suggestion #${++counter}`
)
const { privateKey } = yield call(
generateKeys,
suggestion,
undefined,
undefined,
undefined,
bip39
)
if (!privateKey) {
Logger.error(TAG + '@attemptBackupPhraseCorrection', 'Failed to convert mnemonic to hex')
continue
}
// Push a new check wallet balance task onto the list of running tasks.
// If our list of tasks is full, wait for at least one to finish.
tasks.push({
index: counter,
suggestion,
task: yield fork(walletHasBalance, privateKeyToAddress(privateKey)),
done: false,
})
if (tasks.length >= MAX_BALANCE_CHECK_TASKS) {
yield race(tasks.map(({ task }) => join(task)))
}
// Check the results of any balance check tasks. Prune any that have finished, and leave those
// that are still running. If any return a positive result, cancel remaining tasks and return.
for (const task of tasks) {
const result = task.task.result()
if (result === undefined) {
continue
}
// Erase the task to mark that it has been checked.
task.done = true
if (result) {
Logger.info(
TAG + '@attemptBackupPhraseCorrection',
`Found correction phrase with balance in attempt ${task.index}`
)
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_success, {
attemptNumber: task.index,
})
// Cancel any remaining tasks.
        yield cancel(tasks.map(({ task }) => task))
return task.suggestion
}
}
tasks = tasks.filter((task) => !task.done)
}
return undefined
}
/**
* Check the CELO, cUSD, and cEUR balances of the given address, returning true if any are greater
* than zero. Returns as soon as a single balance check request comes back positive.
*/
function* walletHasBalance(address: string) {
Logger.debug(TAG + '@walletHasBalance', 'Checking account balance')
let requests = [
yield fork(fetchTokenBalanceInWeiWithRetry, Currency.Euro, address),
yield fork(fetchTokenBalanceInWeiWithRetry, Currency.Dollar | {
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_invalid, {
wordCount: countMnemonicWords(normalizedPhrase),
invalidWordCount: invalidWords?.length,
})
} | conditional_block |
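The saga in this row races the phrase-correction attempt against a 5-second delay (race + delay) and caps the number of in-flight balance checks at MAX_BALANCE_CHECK_TASKS (fork/join). For readers less familiar with redux-saga, here is the same general shape in asyncio; every name in it (attempt_correction, check_balance, the constants and the fake data) is an illustrative assumption and not part of the original code:

import asyncio

# asyncio sketch of two ideas from the saga above: a hard timeout around the
# correction attempt, and a bounded number of concurrent balance checks.
MAX_BALANCE_CHECK_TASKS = 5
AUTOCORRECT_TIMEOUT = 5.0  # seconds

async def check_balance(candidate: str) -> bool:
    await asyncio.sleep(0.01)          # stand-in for a network call
    return candidate.endswith("42")    # pretend this wallet has funds

async def attempt_correction(candidates):
    sem = asyncio.Semaphore(MAX_BALANCE_CHECK_TASKS)

    async def guarded(candidate):
        async with sem:
            return candidate, await check_balance(candidate)

    for task in asyncio.as_completed([guarded(c) for c in candidates]):
        candidate, has_balance = await task
        if has_balance:
            return candidate
    return None

async def main():
    candidates = [f"phrase-{i}" for i in range(40)] + ["phrase-42"]
    try:
        found = await asyncio.wait_for(attempt_correction(candidates), AUTOCORRECT_TIMEOUT)
    except asyncio.TimeoutError:
        found = None
    print(found)

if __name__ == "__main__":
    asyncio.run(main())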
saga.ts | '
import {
call,
cancel,
delay,
fork,
join,
put,
race,
select,
spawn,
takeLeading,
} from 'redux-saga/effects'
import { Task } from '@redux-saga/types'
import { setBackupCompleted } from 'src/account/actions'
import { uploadNameAndPicture } from 'src/account/profileInfo'
import { recoveringFromStoreWipeSelector } from 'src/account/selectors'
import { showError } from 'src/alert/actions'
import { AppEvents, OnboardingEvents } from 'src/analytics/Events'
import ValoraAnalytics from 'src/analytics/ValoraAnalytics'
import { ErrorMessages } from 'src/app/ErrorMessages'
import { countMnemonicWords, storeMnemonic } from 'src/backup/utils'
import { refreshAllBalances } from 'src/home/actions'
import {
Actions,
ImportBackupPhraseAction,
importBackupPhraseFailure,
importBackupPhraseSuccess,
} from 'src/import/actions'
import { navigate, navigateClearingStack } from 'src/navigator/NavigationService'
import { Screens } from 'src/navigator/Screens'
import { fetchTokenBalanceInWeiWithRetry } from 'src/tokens/saga'
import { Currency } from 'src/utils/currencies'
import Logger from 'src/utils/Logger'
import { assignAccountFromPrivateKey, waitWeb3LastBlock } from 'src/web3/saga'
const TAG = 'import/saga'
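// Limits how many balance-check tasks are forked concurrently while trying mnemonic corrections,
// and how long autocorrection may run before the race below times it out.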
export const MAX_BALANCE_CHECK_TASKS = 5
export const MNEMONIC_AUTOCORRECT_TIMEOUT = 5000 // ms
export function* importBackupPhraseSaga({ phrase, useEmptyWallet }: ImportBackupPhraseAction) {
Logger.debug(TAG + '@importBackupPhraseSaga', 'Importing backup phrase')
yield call(waitWeb3LastBlock)
try {
const normalizedPhrase = normalizeMnemonic(phrase)
const phraseIsValid = validateMnemonic(normalizedPhrase, bip39)
const invalidWords = phraseIsValid ? [] : invalidMnemonicWords(normalizedPhrase)
if (!phraseIsValid) {
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_invalid, {
wordCount: countMnemonicWords(normalizedPhrase),
invalidWordCount: invalidWords?.length,
})
}
// If the given mnemonic phrase is invalid, spend up to 5 seconds trying to correct it.
// A balance check happens before the phrase is returned, so if the phrase was autocorrected,
// we do not need to check the balance again later in this method.
// If useEmptyWallet is true, skip this step. It only helps find non-empty wallets.
let mnemonic = phraseIsValid ? normalizedPhrase : undefined
let checkedBalance = false
if (!phraseIsValid && !useEmptyWallet) {
try {
const { correctedPhrase, timeout } = yield race({ | mnemonic = correctedPhrase
checkedBalance = true
} else {
Logger.info(
TAG + '@importBackupPhraseSaga',
`Backup phrase autocorrection ${timeout ? 'timed out' : 'failed'}`
)
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_failed, {
timeout: timeout !== undefined,
})
}
} catch (error) {
Logger.error(
TAG + '@importBackupPhraseSaga',
`Encountered an error trying to correct a phrase`,
error
)
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_failed, {
timeout: false,
error: error.message,
})
}
}
// If the input phrase was invalid, and the correct phrase could not be found automatically,
// report an error to the user.
if (mnemonic === undefined) {
Logger.error(TAG + '@importBackupPhraseSaga', 'Invalid mnemonic')
if (invalidWords !== undefined && invalidWords.length > 0) {
yield put(
showError(ErrorMessages.INVALID_WORDS_IN_BACKUP_PHRASE, null, {
invalidWords: invalidWords.join(', '),
})
)
} else {
yield put(showError(ErrorMessages.INVALID_BACKUP_PHRASE))
}
yield put(importBackupPhraseFailure())
return
}
const { privateKey } = yield call(
generateKeys,
mnemonic,
undefined,
undefined,
undefined,
bip39
)
if (!privateKey) {
throw new Error('Failed to convert mnemonic to hex')
}
// Check that the provided mnemonic derives an account with at least some balance. If the wallet
// is empty, and useEmptyWallet is not true, display a warning to the user before they continue.
if (!useEmptyWallet && !checkedBalance) {
const backupAccount = privateKeyToAddress(privateKey)
if (!(yield call(walletHasBalance, backupAccount))) {
yield put(importBackupPhraseSuccess())
ValoraAnalytics.track(OnboardingEvents.wallet_import_zero_balance, {
account: backupAccount,
})
navigate(Screens.ImportWallet, { clean: false, showZeroBalanceModal: true })
return
}
}
const account: string | null = yield call(assignAccountFromPrivateKey, privateKey, mnemonic)
if (!account) {
throw new Error('Failed to assign account from private key')
}
// Set key in phone's secure store
yield call(storeMnemonic, mnemonic, account)
// Set backup complete so user isn't prompted to do backup flow
yield put(setBackupCompleted())
yield put(refreshAllBalances())
yield call(uploadNameAndPicture)
const recoveringFromStoreWipe = yield select(recoveringFromStoreWipeSelector)
if (recoveringFromStoreWipe) {
ValoraAnalytics.track(AppEvents.redux_store_recovery_success, { account })
}
ValoraAnalytics.track(OnboardingEvents.wallet_import_success)
navigateClearingStack(Screens.VerificationEducationScreen)
yield put(importBackupPhraseSuccess())
} catch (error) {
Logger.error(TAG + '@importBackupPhraseSaga', 'Error importing backup phrase', error)
yield put(showError(ErrorMessages.IMPORT_BACKUP_FAILED))
yield put(importBackupPhraseFailure())
ValoraAnalytics.track(OnboardingEvents.wallet_import_error, { error: error.message })
}
}
// Uses suggestMnemonicCorrections to generate valid mnemonic phrases that are likely given the
// invalid phrase that the user entered. Checks the balance of any phrase the generator suggests
// before returning it. If the wallet has a non-zero balance, then we can be very confident that it is
// the account the user was actually trying to restore. Otherwise, this method does not return any
// suggested correction.
function* attemptBackupPhraseCorrection(mnemonic: string) {
// Counter of how many suggestions have been tried and a list of tasks for ongoing balance checks.
let counter = 0
let tasks: { index: number; suggestion: string; task: Task; done: boolean }[] = []
for (const suggestion of suggestMnemonicCorrections(mnemonic)) {
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_attempt)
Logger.info(
TAG + '@attemptBackupPhraseCorrection',
`Checking account balance on suggestion #${++counter}`
)
const { privateKey } = yield call(
generateKeys,
suggestion,
undefined,
undefined,
undefined,
bip39
)
if (!privateKey) {
Logger.error(TAG + '@attemptBackupPhraseCorrection', 'Failed to convert mnemonic to hex')
continue
}
// Push a new check wallet balance task onto the list of running tasks.
// If our list of tasks is full, wait for at least one to finish.
tasks.push({
index: counter,
suggestion,
task: yield fork(walletHasBalance, privateKeyToAddress(privateKey)),
done: false,
})
if (tasks.length >= MAX_BALANCE_CHECK_TASKS) {
yield race(tasks.map(({ task }) => join(task)))
}
// Check the results of any balance check tasks. Prune any that have finished, and leave those
// that are still running. If any return a positive result, cancel remaining tasks and return.
for (const task of tasks) {
const result = task.task.result()
if (result === undefined) {
continue
}
// Erase the task to mark that it has been checked.
task.done = true
if (result) {
Logger.info(
TAG + '@attemptBackupPhraseCorrection',
`Found correction phrase with balance in attempt ${task.index}`
)
ValoraAnalytics.track(OnboardingEvents.wallet_import_phrase_correction_success, {
attemptNumber: task.index,
})
// Cancel any remaining tasks.
yield cancel(tasks.map(({ task }) => task))
return task.suggestion
}
}
tasks = tasks.filter((task) => !task.done)
}
return undefined
}
/**
* Check the CELO, cUSD, and cEUR balances of the given address, returning true if any are greater
* than zero. Returns as soon as a single balance check request comes back positive.
*/
function* walletHasBalance(address: string) {
Logger.debug(TAG + '@walletHasBalance', 'Checking account balance')
let requests = [
yield fork(fetchTokenBalanceInWeiWithRetry, Currency.Euro, address),
yield fork(fetchTokenBalanceInWeiWithRetry, Currency.Dollar, | correctedPhrase: call(attemptBackupPhraseCorrection, normalizedPhrase),
timeout: delay(MNEMONIC_AUTOCORRECT_TIMEOUT),
})
if (correctedPhrase) {
Logger.info(TAG + '@importBackupPhraseSaga', 'Using suggested mnemonic autocorrection') | random_line_split |
agent.py | = use_frozen_net
if net_path is not None:
self.load_net(net_path, use_frozen_net)
else:
self.net = None
self.net_input_placeholder = None
self.sess = None
def act(self, obz, reward, done):
if obz is not None:
log.debug('steering %r', obz['steering'])
log.debug('throttle %r', obz['throttle'])
obz = self.preprocess_obz(obz)
if self.should_record_recovery_from_random_actions:
action = self.toggle_random_action()
self.action_count += 1
elif self.net is not None:
if obz is None or not obz['cameras']:
y = None
else:
image = obz['cameras'][0]['image']
y = self.get_net_out(image)
action = self.get_next_action(obz, y)
else:
action = Action(has_control=(not self.path_follower_mode))
self.previous_action = action
self.step += 1
if obz and obz['is_game_driving'] == 1 and self.should_record:
self.obz_recording.append(obz)
# utils.save_camera(obz['cameras'][0]['image'], obz['cameras'][0]['depth'],
# os.path.join(self.sess_dir, str(self.total_obz).zfill(10)))
self.recorded_obz_count += 1
else:
log.debug('Not recording frame')
self.maybe_save()
action = action.as_gym()
return action
def | (self, obz, y):
log.debug('getting next action')
if y is None:
log.debug('net out is None')
return self.previous_action or Action()
desired_spin, desired_direction, desired_speed, desired_speed_change, desired_steering, desired_throttle = y[0]
desired_spin = desired_spin * c.SPIN_NORMALIZATION_FACTOR
desired_speed = desired_speed * c.SPEED_NORMALIZATION_FACTOR
desired_speed_change = desired_speed_change * c.SPEED_NORMALIZATION_FACTOR
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_direction %f', desired_direction)
log.debug('desired_speed %f', desired_speed)
log.debug('desired_speed_change %f', desired_speed_change)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_spin %f', desired_spin)
actual_speed = obz['speed']
log.debug('actual_speed %f', actual_speed)
log.debug('desired_speed %f', desired_speed)
target_speed = 9 * 100
log.debug('actual_speed %r' % actual_speed)
# Network overfit on speed, plus it's nice to be able to change it,
# so we just ignore output speed of net
desired_throttle = abs(target_speed / max(actual_speed, 1e-3))
desired_throttle = min(max(desired_throttle, 0.), 1.)
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
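# Smooth steering by blending the previous command with the net's prediction;
# the 0.2/0.5 weights also damp the command magnitude slightly.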
smoothed_steering = 0.2 * self.previous_action.steering + 0.5 * desired_steering
# desired_throttle = desired_throttle * 1.1
action = Action(smoothed_steering, desired_throttle)
return action
def maybe_save(self):
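# Flush buffered observations to a new HDF5 file every FRAMES_PER_HDF5_FILE frames, then clear the buffer.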
if (
self.should_record and self.recorded_obz_count % c.FRAMES_PER_HDF5_FILE == 0 and
self.recorded_obz_count != 0
):
filename = os.path.join(self.sess_dir, '%s.hdf5' %
str(self.recorded_obz_count // c.FRAMES_PER_HDF5_FILE).zfill(10))
save_hdf5(self.obz_recording, filename=filename)
log.info('Flushing output data')
self.obz_recording = []
def toggle_random_action(self):
"""Reduce sampling error by diversifying experience"""
if self.performing_random_actions:
if self.action_count < self.random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to non-random
action = Action(has_control=False)
self.action_count = 0
self.performing_random_actions = False
else:
if self.action_count < self.non_random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to random
steering = np.random.uniform(-0.5, 0.5, 1)[0] # Going too large here gets us stuck
log.debug('random steering %f', steering)
throttle = 0.65 # TODO: Make throttle random to get better variation here
action = Action(steering, throttle)
self.action_count = 0
self.performing_random_actions = True
return action
def load_net(self, net_path, is_frozen=False):
'''
Frozen nets can be generated with something like
`python freeze_graph.py --input_graph="C:\tmp\deepdrive\tensorflow_random_action\train\graph.pbtxt" --input_checkpoint="C:\tmp\deepdrive\tensorflow_random_action\train\model.ckpt-273141" --output_graph="C:\tmp\deepdrive\tensorflow_random_action\frozen_graph.pb" --output_node_names="model/add_2"`
where model/add_2 is the auto-generated name for self.net.p
'''
self.net_input_placeholder = tf.placeholder(tf.float32, (None,) + c.BASELINE_IMAGE_SHAPE)
if is_frozen:
# TODO: Get frozen nets working
# We load the protobuf file from the disk and parse it to retrieve the
# unserialized graph_def
with tf.gfile.GFile(net_path, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# Then, we can use again a convenient built-in function to import a graph_def into the
# current default Graph
with tf.Graph().as_default() as graph:
tf.import_graph_def(
graph_def,
input_map=None,
return_elements=None,
name="prefix",
op_dict=None,
producer_op_list=None
)
self.net = graph
else:
with tf.variable_scope("model") as _vs:
self.net = Net(self.net_input_placeholder, c.NUM_TARGETS, is_training=False)
saver = tf.train.Saver()
saver.restore(self.sess, net_path)
def close(self):
if self.sess is not None:
self.sess.close()
def get_net_out(self, image):
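# Run one forward pass on a single image; frozen graphs expose the output tensor under the imported 'prefix' scope.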
begin = time.time()
if self.use_frozen_net:
out_var = 'prefix/model/add_2'
else:
out_var = self.net.p
net_out = self.sess.run(out_var, feed_dict={
self.net_input_placeholder: image.reshape(1, *image.shape),})
# print(net_out)
end = time.time()
log.debug('inference time %s', end - begin)
return net_out
def preprocess_obz(self, obz):
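# Normalize each camera image: cast to float32 and subtract the configured mean pixel.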
for camera in obz['cameras']:
image = camera['image']
image = image.astype(np.float32)
image -= c.MEAN_PIXEL
camera['image'] = image
return obz
def set_random_action_repeat_count(self):
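# Randomly pick how many random vs. non-random actions the next semirandom sequence will use.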
if self.semirandom_sequence_step == (self.random_action_count + self.non_random_action_count):
self.semirandom_sequence_step = 0
rand = c.RNG.random()
if 0 <= rand < 0.67:
self.random_action_count = 0
self.non_random_action_count = 10
elif 0.67 <= rand < 0.85:
self.random_action_count = 4
self.non_random_action_count = 5
elif 0.85 <= rand < 0.95:
self.random_action_count = 8
self.non_random_action_count = 10
else:
self.random_action_count = 12
self.non_random_action_count = 15
log.debug('random actions at %r, non-random %r', self.random_action_count, self.non_random_action_count)
else:
self.semirandom_sequence_step += 1
def run(experiment, env_id='DeepDrivePreproTensorflow-v0', should_record=False, net_path=None, should_benchmark=True,
run_baseline_agent=False, camera_rigs=None, should_rotate_sim_types=False,
should_record_recovery_from_random_actions=False, render=False, path_follower=False, fps=c.DEFAULT_FPS):
if run_baseline_agent:
net_path = ensure_baseline_weights(net_path)
reward = 0
episode_done = False
max_episodes = 1000
tf_config = tf.ConfigProto(
gpu_options=tf.GPUOptions(
per_process_gpu_memory_fraction=0.8,
# leave room for the game,
# NOTE: debugging Python, e.g. with PyCharm, can cause OOM errors where a normal run will not
allow_growth=True
),
)
sess = tf.Session(config=tf_config)
| get_next_action | identifier_name |
agent.py | = use_frozen_net
if net_path is not None:
self.load_net(net_path, use_frozen_net)
else:
self.net = None
self.net_input_placeholder = None
self.sess = None
def act(self, obz, reward, done):
if obz is not None:
log.debug('steering %r', obz['steering'])
log.debug('throttle %r', obz['throttle'])
obz = self.preprocess_obz(obz)
if self.should_record_recovery_from_random_actions:
action = self.toggle_random_action()
self.action_count += 1
elif self.net is not None:
if obz is None or not obz['cameras']:
y = None
else:
image = obz['cameras'][0]['image']
y = self.get_net_out(image)
action = self.get_next_action(obz, y)
else:
action = Action(has_control=(not self.path_follower_mode))
self.previous_action = action
self.step += 1
if obz and obz['is_game_driving'] == 1 and self.should_record:
self.obz_recording.append(obz)
# utils.save_camera(obz['cameras'][0]['image'], obz['cameras'][0]['depth'],
# os.path.join(self.sess_dir, str(self.total_obz).zfill(10)))
self.recorded_obz_count += 1
else:
log.debug('Not recording frame')
self.maybe_save()
action = action.as_gym()
return action
def get_next_action(self, obz, y):
log.debug('getting next action')
if y is None:
log.debug('net out is None')
return self.previous_action or Action()
desired_spin, desired_direction, desired_speed, desired_speed_change, desired_steering, desired_throttle = y[0]
desired_spin = desired_spin * c.SPIN_NORMALIZATION_FACTOR
desired_speed = desired_speed * c.SPEED_NORMALIZATION_FACTOR
desired_speed_change = desired_speed_change * c.SPEED_NORMALIZATION_FACTOR
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_direction %f', desired_direction)
log.debug('desired_speed %f', desired_speed)
log.debug('desired_speed_change %f', desired_speed_change)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_spin %f', desired_spin)
actual_speed = obz['speed']
log.debug('actual_speed %f', actual_speed)
log.debug('desired_speed %f', desired_speed)
target_speed = 9 * 100
log.debug('actual_speed %r' % actual_speed)
# Network overfit on speed, plus it's nice to be able to change it,
# so we just ignore output speed of net
desired_throttle = abs(target_speed / max(actual_speed, 1e-3))
desired_throttle = min(max(desired_throttle, 0.), 1.)
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
smoothed_steering = 0.2 * self.previous_action.steering + 0.5 * desired_steering
# desired_throttle = desired_throttle * 1.1
action = Action(smoothed_steering, desired_throttle)
return action
def maybe_save(self):
if (
self.should_record and self.recorded_obz_count % c.FRAMES_PER_HDF5_FILE == 0 and
self.recorded_obz_count != 0
):
filename = os.path.join(self.sess_dir, '%s.hdf5' %
str(self.recorded_obz_count // c.FRAMES_PER_HDF5_FILE).zfill(10))
save_hdf5(self.obz_recording, filename=filename)
log.info('Flushing output data')
self.obz_recording = []
def toggle_random_action(self):
"""Reduce sampling error by diversifying experience"""
if self.performing_random_actions:
if self.action_count < self.random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to non-random
action = Action(has_control=False)
self.action_count = 0
self.performing_random_actions = False
else:
if self.action_count < self.non_random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to random
steering = np.random.uniform(-0.5, 0.5, 1)[0] # Going too large here gets us stuck
log.debug('random steering %f', steering)
throttle = 0.65 # TODO: Make throttle random to get better variation here
action = Action(steering, throttle)
self.action_count = 0
self.performing_random_actions = True
return action
def load_net(self, net_path, is_frozen=False):
'''
Frozen nets can be generated with something like
`python freeze_graph.py --input_graph="C:\tmp\deepdrive\tensorflow_random_action\train\graph.pbtxt" --input_checkpoint="C:\tmp\deepdrive\tensorflow_random_action\train\model.ckpt-273141" --output_graph="C:\tmp\deepdrive\tensorflow_random_action\frozen_graph.pb" --output_node_names="model/add_2"`
where model/add_2 is the auto-generated name for self.net.p
'''
self.net_input_placeholder = tf.placeholder(tf.float32, (None,) + c.BASELINE_IMAGE_SHAPE)
if is_frozen:
# TODO: Get frozen nets working
# We load the protobuf file from the disk and parse it to retrieve the
# unserialized graph_def
with tf.gfile.GFile(net_path, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# Then, we can use again a convenient built-in function to import a graph_def into the
# current default Graph
with tf.Graph().as_default() as graph:
tf.import_graph_def(
graph_def,
input_map=None,
return_elements=None,
name="prefix",
op_dict=None,
producer_op_list=None
)
self.net = graph
else:
with tf.variable_scope("model") as _vs:
self.net = Net(self.net_input_placeholder, c.NUM_TARGETS, is_training=False)
saver = tf.train.Saver()
saver.restore(self.sess, net_path)
def close(self):
if self.sess is not None:
self.sess.close()
def get_net_out(self, image):
begin = time.time()
if self.use_frozen_net:
out_var = 'prefix/model/add_2'
else:
out_var = self.net.p
net_out = self.sess.run(out_var, feed_dict={
self.net_input_placeholder: image.reshape(1, *image.shape),})
# print(net_out)
end = time.time()
log.debug('inference time %s', end - begin)
return net_out
def preprocess_obz(self, obz):
for camera in obz['cameras']:
image = camera['image']
image = image.astype(np.float32)
image -= c.MEAN_PIXEL
camera['image'] = image
return obz
def set_random_action_repeat_count(self):
if self.semirandom_sequence_step == (self.random_action_count + self.non_random_action_count):
self.semirandom_sequence_step = 0
rand = c.RNG.random()
if 0 <= rand < 0.67:
|
elif 0.67 <= rand < 0.85:
self.random_action_count = 4
self.non_random_action_count = 5
elif 0.85 <= rand < 0.95:
self.random_action_count = 8
self.non_random_action_count = 10
else:
self.random_action_count = 12
self.non_random_action_count = 15
log.debug('random actions at %r, non-random %r', self.random_action_count, self.non_random_action_count)
else:
self.semirandom_sequence_step += 1
def run(experiment, env_id='DeepDrivePreproTensorflow-v0', should_record=False, net_path=None, should_benchmark=True,
run_baseline_agent=False, camera_rigs=None, should_rotate_sim_types=False,
should_record_recovery_from_random_actions=False, render=False, path_follower=False, fps=c.DEFAULT_FPS):
if run_baseline_agent:
net_path = ensure_baseline_weights(net_path)
reward = 0
episode_done = False
max_episodes = 1000
tf_config = tf.ConfigProto(
gpu_options=tf.GPUOptions(
per_process_gpu_memory_fraction=0.8,
# leave room for the game,
# NOTE: debugging Python, e.g. with PyCharm, can cause OOM errors where a normal run will not
allow_growth=True
),
)
sess = tf.Session(config=tf_config)
if | self.random_action_count = 0
self.non_random_action_count = 10 | conditional_block |
agent.py | import glob
import gym
import tensorflow as tf
import numpy as np
import config as c
import deepdrive
from gym_deepdrive.envs.deepdrive_gym_env import Action
from tensorflow_agent.net import Net
from utils import save_hdf5, download
import logs
log = logs.get_log(__name__)
class Agent(object):
def __init__(self, action_space, tf_session, env, should_record_recovery_from_random_actions=True,
should_record=False, net_path=None, use_frozen_net=False, random_action_count=0,
non_random_action_count=5, path_follower=False, recording_dir=c.RECORDING_DIR):
np.random.seed(c.RNG_SEED)
self.action_space = action_space
self.previous_action = None
self.step = 0
self.env = env
# State for toggling random actions
self.should_record_recovery_from_random_actions = should_record_recovery_from_random_actions
self.random_action_count = random_action_count
self.non_random_action_count = non_random_action_count
self.semirandom_sequence_step = 0
self.action_count = 0
self.recorded_obz_count = 0
self.performing_random_actions = False
self.path_follower_mode = path_follower
self.recording_dir = recording_dir
# Recording state
self.should_record = should_record
self.sess_dir = os.path.join(recording_dir, datetime.now().strftime(c.DIR_DATE_FORMAT))
self.obz_recording = []
if should_record_recovery_from_random_actions:
log.info('Mixing in random actions to increase data diversity (these are not recorded).')
if should_record:
log.info('Recording driving data to %s', self.sess_dir)
# Net
self.sess = tf_session
self.use_frozen_net = use_frozen_net
if net_path is not None:
self.load_net(net_path, use_frozen_net)
else:
self.net = None
self.net_input_placeholder = None
self.sess = None
def act(self, obz, reward, done):
if obz is not None:
log.debug('steering %r', obz['steering'])
log.debug('throttle %r', obz['throttle'])
obz = self.preprocess_obz(obz)
if self.should_record_recovery_from_random_actions:
action = self.toggle_random_action()
self.action_count += 1
elif self.net is not None:
if obz is None or not obz['cameras']:
y = None
else:
image = obz['cameras'][0]['image']
y = self.get_net_out(image)
action = self.get_next_action(obz, y)
else:
action = Action(has_control=(not self.path_follower_mode))
self.previous_action = action
self.step += 1
if obz and obz['is_game_driving'] == 1 and self.should_record:
self.obz_recording.append(obz)
# utils.save_camera(obz['cameras'][0]['image'], obz['cameras'][0]['depth'],
# os.path.join(self.sess_dir, str(self.total_obz).zfill(10)))
self.recorded_obz_count += 1
else:
log.debug('Not recording frame')
self.maybe_save()
action = action.as_gym()
return action
def get_next_action(self, obz, y):
log.debug('getting next action')
if y is None:
log.debug('net out is None')
return self.previous_action or Action()
desired_spin, desired_direction, desired_speed, desired_speed_change, desired_steering, desired_throttle = y[0]
desired_spin = desired_spin * c.SPIN_NORMALIZATION_FACTOR
desired_speed = desired_speed * c.SPEED_NORMALIZATION_FACTOR
desired_speed_change = desired_speed_change * c.SPEED_NORMALIZATION_FACTOR
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_direction %f', desired_direction)
log.debug('desired_speed %f', desired_speed)
log.debug('desired_speed_change %f', desired_speed_change)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_spin %f', desired_spin)
actual_speed = obz['speed']
log.debug('actual_speed %f', actual_speed)
log.debug('desired_speed %f', desired_speed)
target_speed = 9 * 100
log.debug('actual_speed %r' % actual_speed)
# Network overfit on speed, plus it's nice to be able to change it,
# so we just ignore output speed of net
desired_throttle = abs(target_speed / max(actual_speed, 1e-3))
desired_throttle = min(max(desired_throttle, 0.), 1.)
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
smoothed_steering = 0.2 * self.previous_action.steering + 0.5 * desired_steering
# desired_throttle = desired_throttle * 1.1
action = Action(smoothed_steering, desired_throttle)
return action
def maybe_save(self):
if (
self.should_record and self.recorded_obz_count % c.FRAMES_PER_HDF5_FILE == 0 and
self.recorded_obz_count != 0
):
filename = os.path.join(self.sess_dir, '%s.hdf5' %
str(self.recorded_obz_count // c.FRAMES_PER_HDF5_FILE).zfill(10))
save_hdf5(self.obz_recording, filename=filename)
log.info('Flushing output data')
self.obz_recording = []
def toggle_random_action(self):
"""Reduce sampling error by diversifying experience"""
if self.performing_random_actions:
if self.action_count < self.random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to non-random
action = Action(has_control=False)
self.action_count = 0
self.performing_random_actions = False
else:
if self.action_count < self.non_random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to random
steering = np.random.uniform(-0.5, 0.5, 1)[0] # Going too large here gets us stuck
log.debug('random steering %f', steering)
throttle = 0.65 # TODO: Make throttle random to get better variation here
action = Action(steering, throttle)
self.action_count = 0
self.performing_random_actions = True
return action
def load_net(self, net_path, is_frozen=False):
'''
Frozen nets can be generated with something like
`python freeze_graph.py --input_graph="C:\tmp\deepdrive\tensorflow_random_action\train\graph.pbtxt" --input_checkpoint="C:\tmp\deepdrive\tensorflow_random_action\train\model.ckpt-273141" --output_graph="C:\tmp\deepdrive\tensorflow_random_action\frozen_graph.pb" --output_node_names="model/add_2"`
where model/add_2 is the auto-generated name for self.net.p
'''
self.net_input_placeholder = tf.placeholder(tf.float32, (None,) + c.BASELINE_IMAGE_SHAPE)
if is_frozen:
# TODO: Get frozen nets working
# We load the protobuf file from the disk and parse it to retrieve the
# unserialized graph_def
with tf.gfile.GFile(net_path, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# Then, we can use again a convenient built-in function to import a graph_def into the
# current default Graph
with tf.Graph().as_default() as graph:
tf.import_graph_def(
graph_def,
input_map=None,
return_elements=None,
name="prefix",
op_dict=None,
producer_op_list=None
)
self.net = graph
else:
with tf.variable_scope("model") as _vs:
self.net = Net(self.net_input_placeholder, c.NUM_TARGETS, is_training=False)
saver = tf.train.Saver()
saver.restore(self.sess, net_path)
def close(self):
if self.sess is not None:
self.sess.close()
def get_net_out(self, image):
begin = time.time()
if self.use_frozen_net:
out_var = 'prefix/model/add_2'
else:
out_var = self.net.p
net_out = self.sess.run(out_var, feed_dict={
self.net_input_placeholder: image.reshape(1, *image.shape),})
# print(net_out)
end = time.time()
log.debug('inference time %s', end - begin)
return net_out
def preprocess_obz(self, obz):
for camera in obz['cameras']:
image = camera['image']
image = image.astype(np.float32)
image -= c.MEAN_PIXEL
camera['image'] = image
return obz
def set_random_action_repeat | import time
from datetime import datetime
import math | random_line_split |
|
agent.py | z['cameras'][0]['image'], obz['cameras'][0]['depth'],
# os.path.join(self.sess_dir, str(self.total_obz).zfill(10)))
self.recorded_obz_count += 1
else:
log.debug('Not recording frame')
self.maybe_save()
action = action.as_gym()
return action
def get_next_action(self, obz, y):
log.debug('getting next action')
if y is None:
log.debug('net out is None')
return self.previous_action or Action()
desired_spin, desired_direction, desired_speed, desired_speed_change, desired_steering, desired_throttle = y[0]
desired_spin = desired_spin * c.SPIN_NORMALIZATION_FACTOR
desired_speed = desired_speed * c.SPEED_NORMALIZATION_FACTOR
desired_speed_change = desired_speed_change * c.SPEED_NORMALIZATION_FACTOR
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_direction %f', desired_direction)
log.debug('desired_speed %f', desired_speed)
log.debug('desired_speed_change %f', desired_speed_change)
log.debug('desired_throttle %f', desired_throttle)
log.debug('desired_spin %f', desired_spin)
actual_speed = obz['speed']
log.debug('actual_speed %f', actual_speed)
log.debug('desired_speed %f', desired_speed)
target_speed = 9 * 100
log.debug('actual_speed %r' % actual_speed)
# Network overfit on speed, plus it's nice to be able to change it,
# so we just ignore output speed of net
desired_throttle = abs(target_speed / max(actual_speed, 1e-3))
desired_throttle = min(max(desired_throttle, 0.), 1.)
log.debug('desired_steering %f', desired_steering)
log.debug('desired_throttle %f', desired_throttle)
smoothed_steering = 0.2 * self.previous_action.steering + 0.5 * desired_steering
# desired_throttle = desired_throttle * 1.1
action = Action(smoothed_steering, desired_throttle)
return action
def maybe_save(self):
if (
self.should_record and self.recorded_obz_count % c.FRAMES_PER_HDF5_FILE == 0 and
self.recorded_obz_count != 0
):
filename = os.path.join(self.sess_dir, '%s.hdf5' %
str(self.recorded_obz_count // c.FRAMES_PER_HDF5_FILE).zfill(10))
save_hdf5(self.obz_recording, filename=filename)
log.info('Flushing output data')
self.obz_recording = []
def toggle_random_action(self):
"""Reduce sampling error by diversifying experience"""
if self.performing_random_actions:
if self.action_count < self.random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to non-random
action = Action(has_control=False)
self.action_count = 0
self.performing_random_actions = False
else:
if self.action_count < self.non_random_action_count and self.previous_action is not None:
action = self.previous_action
else:
# switch to random
steering = np.random.uniform(-0.5, 0.5, 1)[0] # Going too large here gets us stuck
log.debug('random steering %f', steering)
throttle = 0.65 # TODO: Make throttle random to get better variation here
action = Action(steering, throttle)
self.action_count = 0
self.performing_random_actions = True
return action
def load_net(self, net_path, is_frozen=False):
'''
Frozen nets can be generated with something like
`python freeze_graph.py --input_graph="C:\tmp\deepdrive\tensorflow_random_action\train\graph.pbtxt" --input_checkpoint="C:\tmp\deepdrive\tensorflow_random_action\train\model.ckpt-273141" --output_graph="C:\tmp\deepdrive\tensorflow_random_action\frozen_graph.pb" --output_node_names="model/add_2"`
where model/add_2 is the auto-generated name for self.net.p
'''
self.net_input_placeholder = tf.placeholder(tf.float32, (None,) + c.BASELINE_IMAGE_SHAPE)
if is_frozen:
# TODO: Get frozen nets working
# We load the protobuf file from the disk and parse it to retrieve the
# unserialized graph_def
with tf.gfile.GFile(net_path, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# Then, we can use again a convenient built-in function to import a graph_def into the
# current default Graph
with tf.Graph().as_default() as graph:
tf.import_graph_def(
graph_def,
input_map=None,
return_elements=None,
name="prefix",
op_dict=None,
producer_op_list=None
)
self.net = graph
else:
with tf.variable_scope("model") as _vs:
self.net = Net(self.net_input_placeholder, c.NUM_TARGETS, is_training=False)
saver = tf.train.Saver()
saver.restore(self.sess, net_path)
def close(self):
if self.sess is not None:
self.sess.close()
def get_net_out(self, image):
begin = time.time()
if self.use_frozen_net:
out_var = 'prefix/model/add_2'
else:
out_var = self.net.p
net_out = self.sess.run(out_var, feed_dict={
self.net_input_placeholder: image.reshape(1, *image.shape),})
# print(net_out)
end = time.time()
log.debug('inference time %s', end - begin)
return net_out
def preprocess_obz(self, obz):
for camera in obz['cameras']:
image = camera['image']
image = image.astype(np.float32)
image -= c.MEAN_PIXEL
camera['image'] = image
return obz
def set_random_action_repeat_count(self):
if self.semirandom_sequence_step == (self.random_action_count + self.non_random_action_count):
self.semirandom_sequence_step = 0
rand = c.RNG.random()
if 0 <= rand < 0.67:
self.random_action_count = 0
self.non_random_action_count = 10
elif 0.67 <= rand < 0.85:
self.random_action_count = 4
self.non_random_action_count = 5
elif 0.85 <= rand < 0.95:
self.random_action_count = 8
self.non_random_action_count = 10
else:
self.random_action_count = 12
self.non_random_action_count = 15
log.debug('random actions at %r, non-random %r', self.random_action_count, self.non_random_action_count)
else:
self.semirandom_sequence_step += 1
def run(experiment, env_id='DeepDrivePreproTensorflow-v0', should_record=False, net_path=None, should_benchmark=True,
run_baseline_agent=False, camera_rigs=None, should_rotate_sim_types=False,
should_record_recovery_from_random_actions=False, render=False, path_follower=False, fps=c.DEFAULT_FPS):
if run_baseline_agent:
net_path = ensure_baseline_weights(net_path)
reward = 0
episode_done = False
max_episodes = 1000
tf_config = tf.ConfigProto(
gpu_options=tf.GPUOptions(
per_process_gpu_memory_fraction=0.8,
# leave room for the game,
# NOTE: debugging Python, e.g. with PyCharm, can cause OOM errors where a normal run will not
allow_growth=True
),
)
sess = tf.Session(config=tf_config)
if camera_rigs:
cameras = camera_rigs[0]
else:
cameras = None
if should_record and camera_rigs is not None and len(camera_rigs) >= 1:
should_rotate_camera_rigs = True
else:
should_rotate_camera_rigs = False
if should_rotate_camera_rigs:
randomize_cameras(cameras)
use_sim_start_command_first_lap = c.SIM_START_COMMAND is not None
gym_env = deepdrive.start(experiment, env_id, should_benchmark=should_benchmark, cameras=cameras,
use_sim_start_command=use_sim_start_command_first_lap, render=render,
fps=fps)
dd_env = gym_env.env
# Perform random actions to reduce sampling error in the recorded dataset
agent = Agent(gym_env.action_space, sess, env=gym_env.env,
should_record_recovery_from_random_actions=should_record_recovery_from_random_actions,
should_record=should_record, net_path=net_path, random_action_count=4, non_random_action_count=5,
path_follower=path_follower)
if net_path:
log.info('Running tensorflow agent checkpoint: %s', net_path)
def close():
| gym_env.close()
agent.close() | identifier_body |
|
AddSlotModal.js | import ModalDropdown from 'react-native-modal-dropdown';
import DateTimePickerModal from "react-native-modal-datetime-picker";
import RNFetchBlob from 'rn-fetch-blob';
import Toast from 'react-native-simple-toast';
class AddSlotModal extends React.Component {
state = {
datePickerMode:'',
isDatePickerVisible:false,
startTime:'',
endTime:'',
weekDay: '',
isLoading:false,
errors:[]
};
weekDays = ['Mon', 'Tues','Wed','Thur','Fri', 'Sat', 'Sun'];
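// Format a Date as zero-padded 24-hour HH:MM.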
getCurrentTime = (date) => {
let hours = date.getHours();
let minutes = date.getMinutes();
// let seconds = date.getSeconds();
hours = this.makeTwoDigits(hours);
minutes = this.makeTwoDigits(minutes)
// seconds = makeTwoDigits(seconds)
return `${hours}:${minutes}`;
}
tConvert = (time)=> {
// Check correct time format and split into components
time = time.toString ().match (/^([01]\d|2[0-3])(:)([0-5]\d)(:[0-5]\d)?$/) || [time];
if (time.length > 1) { // If time format correct
time = time.slice (1); // Remove full string match value
time[5] = +time[0] < 12 ? ' AM' : ' PM'; // Set AM/PM
time[0] = +time[0] % 12 || 12; // Adjust hours
}
return time.join (''); // return adjusted time or original string
}
handleConfirm = (date) => {
switch (this.state.datePickerMode)
{
case 'start':
this.setState({ startTime: this.tConvert(this.getCurrentTime(date)) })
break;
case 'end':
this.setState({ endTime: this.tConvert(this.getCurrentTime(date)) })
break;
}
this.hideDatePicker();
};
makeTwoDigits = (time) => {
const timeString = `${time}`;
if (timeString.length === 2) return time
return `0${time}`
}
hideDatePicker = () => {
this.setState({ isDatePickerVisible: false });
};
AddSlot = () =>
{
if(!this.state.isLoading)
{
this.setState({isLoading:true})
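// Upload the selected day and opening/closing hours to the appointment endpoint as multipart form data.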
RNFetchBlob.fetch('POST', 'https://www.tattey.com/tattey_app/appapis/appointment.php', {
Authorization: "Bearer access-token",
Accept: 'application/json',
'Content-Type': 'multipart/form-data',
}, [
{ name:'temp_id', data: this.props.user },
{ name:'start', data: this.state.startTime },
{ name:'end', data: this.state.endTime },
{ name:'day', data: this.state.weekDay },
{ name:"addSlot", data:"true"},
]).then((resp) => {
console.log(resp);
var tempMSG = JSON.parse(resp.data);
if (tempMSG.msg === "success") {
this.setState({error:""});
// this.props.user_func();
var id = tempMSG.insert_id.toString();
console.log(id)
var slots = [...this.props.slots,{day:this.state.weekDay,start:this.state.startTime,end:this.state.endTime,id:id}]
this.props.updateSlots(slots)
this.props.updateSlotsLocal(slots)
Toast.show('Slot Added')
this.props.closeModal()
} else if(tempMSG.msg === "usernameError")
{
this.setState({error:"User Name Not Available"});
}
this.setState({isLoading:false})
}).catch((err) => {
console.log(err)
})
}
}
displayErrors = errors =>
errors.map((error, i) => <Text key={i} style={{fontSize:15,color: 'white'}}>{error.message}</Text>);
isFormValid = () => {
let errors = [];
let error;
if (this.isFormEmpty(this.state)) {
error = { message: "Fill in all fields" };
this.setState({ errors: errors.concat(error) });
return false;
} else {
return true;
}
};
isFormEmpty = ({ startTime, endTime, weekDay }) => {
return (
!startTime.length ||
!weekDay.length ||
!endTime.length
);
};
render() {
return (
<Modal
style={styles.Model}
animationType="slide"
transparent={false}
visible={this.props.isVisible}
onRequestClose={this.props.closeModal} // Used to handle the Android Back Button
backdropOpacity={0}
swipeToClose={true}
// swipeDirection="left" | onBackdropPress={this.props.closeModal}>
<ScrollView style={styles.scrollView}>
<View style={styles.container}>
<View style={styles.sectionHeader}>
<View style={styles.sectionHeading}>
<Text style={styles.sectionHeadingText}>Add Hours</Text>
</View>
<Icon name="x" size={20} color="red" onPress={() =>this.props.closeModal()} style={styles.modalCloseIcon} />
</View>
<View style={{flex:1,flexDirection:'row',justifyContent:'center',marginTop:20}}>
{this.state.errors.length > 0 && (
<View style={{color: "white",
backgroundColor:"#000",
borderColor: "#000",
borderWidth:2,
fontSize:15,
width:300,
justifyContent: "center",
flexDirection:"row",
padding:5, }}>
{this.displayErrors(this.state.errors)}
</View>
)}
</View>
<View style={styles.mainSection}>
<View style={styles.addField}>
<Text style={styles.fieldLabel}>Week Day : {' '}</Text>
<View style={styles.selectContainer}>
<ModalDropdown
options={this.weekDays}
defaultValue="Select"
dropdownStyle={{width:90,}}
textStyle={{fontSize:17,fontWeight: "bold",color:"white"}}
dropdownTextStyle={{fontSize:18}}
style={{fontSize:20}}
onSelect={(data)=>{this.setState({weekDay:this.weekDays[data]})}}
ref={input => this.dropdown = input}
/>
<TouchableOpacity onPress={()=>{this.dropdown.show()}} style={{paddingLeft:10}}>
<Icon name="chevron-down" size={20} color="white"/>
</TouchableOpacity>
</View>
</View>
<View style={styles.addField}>
<Text style={styles.fieldLabel}>Opening Hour: </Text>
<View style={styles.selectContainer}>
<TouchableOpacity onPress={()=>{this.setState({isDatePickerVisible: true,datePickerMode:"start"})}}>
<Text style={{color: 'white',fontSize:18,fontWeight: "bold"}}>{this.state.startTime!=''?(this.state.startTime):('Select')}</Text>
</TouchableOpacity>
</View>
</View>
<View style={styles.addField}>
<Text style={styles.fieldLabel}>Closing Hour: {' '} </Text>
<View style={styles.selectContainer}>
<TouchableOpacity onPress={()=>{this.setState({isDatePickerVisible: true,datePickerMode:"end"})}}>
<Text style={{color: 'white',fontSize:18,fontWeight: "bold"}}>{this.state.endTime!=''?(this.state.endTime):('Select')}</Text>
</TouchableOpacity>
</View>
</View>
<TouchableOpacity onPress={()=>{this.isFormValid()?this.AddSlot():null}}>
<View style={styles.addSlotBtn}>
{this.state.isLoading?(
<ActivityIndicator
style={{flex: 0.2, flexDirection: 'column'}}
size="large"
color="white" />
):(
<Text style={{color: 'white',fontSize:18,fontWeight: "bold"}}>Add </Text>
)}
</View>
</TouchableOpacity>
</View>
</View>
<DateTimePickerModal
isVisible={this.state.isDatePickerVisible}
mode="time"
onConfirm={this.handleConfirm}
onCancel={this.hideDatePicker}
/>
</ScrollView>
</Modal>
);
}
}
const styles = StyleSheet.create({
scrollView:{
backgroundColor: '#fff',
paddingTop: 25
},
container:
{
flex:1,
flexDirection:'column',
},
sectionHeader:
{
flex:1,
flexDirection: 'row'
},
sectionHeading:
{
flex:1,
flexDirection: 'column',
alignItems: 'center'
},
sectionHeadingText:
{
margin:10,
fontSize:18,
fontWeight: "bold"
},
modalCloseIcon:
{
flex:0.1,
flexDirection: 'column',
textAlign:"right",
margin:10
},
mainSection:
{
margin:15
},
addField:
{
flex:1,
flexDirection: 'row',
marginTop:30
},
fieldLabel:
{
fontSize:18,
fontWeight: "bold",
marginRight:20,
marginTop:2
},
selectContainer:
{
backgroundColor:'black | onSwipe={this.props.closeModal} | random_line_split |
AddSlotModal.js | import ModalDropdown from 'react-native-modal-dropdown';
import DateTimePickerModal from "react-native-modal-datetime-picker";
import RNFetchBlob from 'rn-fetch-blob';
import Toast from 'react-native-simple-toast';
class | extends React.Component {
state = {
datePickerMode:'',
isDatePickerVisible:false,
startTime:'',
endTime:'',
weekDay: '',
isLoading:false,
errors:[]
};
weekDays = ['Mon', 'Tues','Wed','Thur','Fri', 'Sat', 'Sun'];
getCurrentTime = (date) => {
let hours = date.getHours();
let minutes = date.getMinutes();
// let seconds = date.getSeconds();
hours = this.makeTwoDigits(hours);
minutes = this.makeTwoDigits(minutes)
// seconds = makeTwoDigits(seconds)
return `${hours}:${minutes}`;
}
tConvert = (time)=> {
// Check correct time format and split into components
time = time.toString ().match (/^([01]\d|2[0-3])(:)([0-5]\d)(:[0-5]\d)?$/) || [time];
if (time.length > 1) { // If time format correct
time = time.slice (1); // Remove full string match value
time[5] = +time[0] < 12 ? ' AM' : ' PM'; // Set AM/PM
time[0] = +time[0] % 12 || 12; // Adjust hours
}
return time.join (''); // return adjusted time or original string
}
handleConfirm = (date) => {
switch (this.state.datePickerMode)
{
case 'start':
this.setState({ startTime: this.tConvert(this.getCurrentTime(date)) })
break;
case 'end':
this.setState({ endTime: this.tConvert(this.getCurrentTime(date)) })
break;
}
this.hideDatePicker();
};
makeTwoDigits = (time) => {
const timeString = `${time}`;
if (timeString.length === 2) return time
return `0${time}`
}
hideDatePicker = () => {
this.setState({ isDatePickerVisible: false });
};
AddSlot = () =>
{
if(!this.state.isLoading)
{
this.setState({isLoading:true})
RNFetchBlob.fetch('POST', 'https://www.tattey.com/tattey_app/appapis/appointment.php', {
Authorization: "Bearer access-token",
Accept: 'application/json',
'Content-Type': 'multipart/form-data',
}, [
{ name:'temp_id', data: this.props.user },
{ name:'start', data: this.state.startTime },
{ name:'end', data: this.state.endTime },
{ name:'day', data: this.state.weekDay },
{ name:"addSlot", data:"true"},
]).then((resp) => {
console.log(resp);
var tempMSG = JSON.parse(resp.data);
if (tempMSG.msg === "success") {
this.setState({error:""});
// this.props.user_func();
var id = tempMSG.insert_id.toString();
console.log(id)
var slots = [...this.props.slots,{day:this.state.weekDay,start:this.state.startTime,end:this.state.endTime,id:id}]
this.props.updateSlots(slots)
this.props.updateSlotsLocal(slots)
Toast.show('Slot Added')
this.props.closeModal()
} else if(tempMSG.msg === "usernameError")
{
this.setState({error:"User Name Not Available"});
}
this.setState({isLoading:false})
}).catch((err) => {
console.log(err)
})
}
}
displayErrors = errors =>
errors.map((error, i) => <Text key={i} style={{fontSize:15,color: 'white'}}>{error.message}</Text>);
isFormValid = () => {
let errors = [];
let error;
if (this.isFormEmpty(this.state)) {
error = { message: "Fill in all fields" };
this.setState({ errors: errors.concat(error) });
return false;
} else {
return true;
}
};
isFormEmpty = ({ startTime, endTime, weekDay }) => {
return (
!startTime.length ||
!weekDay.length ||
!endTime.length
);
};
render() {
return (
<Modal
style={styles.Model}
animationType="slide"
transparent={false}
visible={this.props.isVisible}
onRequestClose={this.props.closeModal} // Used to handle the Android Back Button
backdropOpacity={0}
swipeToClose={true}
// swipeDirection="left"
onSwipe={this.props.closeModal}
onBackdropPress={this.props.closeModal}>
<ScrollView style={styles.scrollView}>
<View style={styles.container}>
<View style={styles.sectionHeader}>
<View style={styles.sectionHeading}>
<Text style={styles.sectionHeadingText}>Add Hours</Text>
</View>
<Icon name="x" size={20} color="red" onPress={() =>this.props.closeModal()} style={styles.modalCloseIcon} />
</View>
<View style={{flex:1,flexDirection:'row',justifyContent:'center',marginTop:20}}>
{this.state.errors.length > 0 && (
<View style={{color: "white",
backgroundColor:"#000",
borderColor: "#000",
borderWidth:2,
fontSize:15,
width:300,
justifyContent: "center",
flexDirection:"row",
padding:5, }}>
{this.displayErrors(this.state.errors)}
</View>
)}
</View>
<View style={styles.mainSection}>
<View style={styles.addField}>
<Text style={styles.fieldLabel}>Week Day : {' '}</Text>
<View style={styles.selectContainer}>
<ModalDropdown
options={this.weekDays}
defaultValue="Select"
dropdownStyle={{width:90,}}
textStyle={{fontSize:17,fontWeight: "bold",color:"white"}}
dropdownTextStyle={{fontSize:18}}
style={{fontSize:20}}
onSelect={(data)=>{this.setState({weekDay:this.weekDays[data]})}}
ref={input => this.dropdown = input}
/>
<TouchableOpacity onPress={()=>{this.dropdown.show()}} style={{paddingLeft:10}}>
<Icon name="chevron-down" size={20} color="white"/>
</TouchableOpacity>
</View>
</View>
<View style={styles.addField}>
<Text style={styles.fieldLabel}>Opening Hour: </Text>
<View style={styles.selectContainer}>
<TouchableOpacity onPress={()=>{this.setState({isDatePickerVisible: true,datePickerMode:"start"})}}>
<Text style={{color: 'white',fontSize:18,fontWeight: "bold"}}>{this.state.startTime!=''?(this.state.startTime):('Select')}</Text>
</TouchableOpacity>
</View>
</View>
<View style={styles.addField}>
<Text style={styles.fieldLabel}>Closing Hour: {' '} </Text>
<View style={styles.selectContainer}>
<TouchableOpacity onPress={()=>{this.setState({isDatePickerVisible: true,datePickerMode:"end"})}}>
<Text style={{color: 'white',fontSize:18,fontWeight: "bold"}}>{this.state.endTime!=''?(this.state.endTime):('Select')}</Text>
</TouchableOpacity>
</View>
</View>
<TouchableOpacity onPress={()=>{this.isFormValid()?this.AddSlot():null}}>
<View style={styles.addSlotBtn}>
{this.state.isLoading?(
<ActivityIndicator
style={{flex: 0.2, flexDirection: 'column'}}
size="large"
color="white" />
):(
<Text style={{color: 'white',fontSize:18,fontWeight: "bold"}}>Add </Text>
)}
</View>
</TouchableOpacity>
</View>
</View>
<DateTimePickerModal
isVisible={this.state.isDatePickerVisible}
mode="time"
onConfirm={this.handleConfirm}
onCancel={this.hideDatePicker}
/>
</ScrollView>
</Modal>
);
}
}
const styles = StyleSheet.create({
scrollView:{
backgroundColor: '#fff',
paddingTop: 25
},
container:
{
flex:1,
flexDirection:'column',
},
sectionHeader:
{
flex:1,
flexDirection: 'row'
},
sectionHeading:
{
flex:1,
flexDirection: 'column',
alignItems: 'center'
},
sectionHeadingText:
{
margin:10,
fontSize:18,
fontWeight: "bold"
},
modalCloseIcon:
{
flex:0.1,
flexDirection: 'column',
textAlign:"right",
margin:10
},
mainSection:
{
margin:15
},
addField:
{
flex:1,
flexDirection: 'row',
marginTop:30
},
fieldLabel:
{
fontSize:18,
fontWeight: "bold",
marginRight:20,
marginTop:2
},
selectContainer:
{
backgroundColor:' | AddSlotModal | identifier_name |
Home.js | // import Storage from '../../tools/storage'
import Storage from '../../libs/storage';
import VTouch from '../Live/Refresh';
import Tracker from '../../tracker';
import { browserHistory, hashHistory} from 'react-router';
const history = window.config.hashHistory ? hashHistory : browserHistory
import './home.css';
class Home extends Component {
constructor(props) {
super(props);
this.state = {
index: 0,
isUpLoading: true,
isDownLoading: true,
firstLoading: true,
freezeNum: 0,
refreshIcon: false,
navSource: [],
mainSource: []
};
this.start = this.end = 0;
this.firstUpKey = this.lastUpKey = this.firstDownKey = this.lastDownKey = null;
this.isUpLoading = this.isDownLoading = true;
this.fromScroll = this.refreshKey = false;
this.handle = 'up';
this.channelKey = '';
this.Touch = null;
/* home start */
this.showCompile = this.showCompile.bind(this);
this.navHandleClick = this.navHandleClick.bind(this);
this.uploadNav = this.uploadNav.bind(this);
/* home end */
/* episode start */
// this.calculate = this.calculate.bind(this);
this.scrollListener = this.scrollListener.bind(this);
/* episode end */
/* pull down start */
this.showRefresh = this.showRefresh.bind(this);
this.pullUp = this.pullUp.bind(this);
/* pull down end */
}
static contextTypes = {
store: React.PropTypes.object
}
componentWillMount() {
// Indicates the first entry; channel refresh key
this.channelKey = 1;
this.refreshKey = true;
// Fetch all navigation tabs
!(Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) || Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) || Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) || Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY) || Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) || Storage.s_get(Storage.KEYS.ME_PAGE_KEY)) && this.context.store.dispatch(clearClassifyList());
this.context.store.dispatch(getClassifyList());
// When coming back to the home page from other pages, clear the list store first
(Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) || Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) || Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY) || Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) || Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) || Storage.s_get(Storage.KEYS.ME_PAGE_KEY)) && this.context.store.dispatch(clearHomeData());
}
componentDidMount() {
//pull-down init
this.Touch = new VTouch(this.refs.homeList, "y", this.showRefresh, this.pullUp)
this.Touch.init()
document.addEventListener('scroll', this.scrollListener)
}
// shouldComponentUpdate(nextProps, nextState) {
// return !Immutable.is(nextProps.getHomeData, this.props.getHomeData)
// }
componentWillReceiveProps(nextProps, nextState) {
// Cache the channels
if (nextProps.classifyList && nextProps.classifyList.length > 0 && this.channelKey === 1) {
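// Split the channels into fixed ones and the user's saved selection, then merge them for the nav bar.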
let freezeChannelList = [];
let noFreezeChannelList = [];
let userShowLists = [];
for (let i in nextProps.classifyList) {
nextProps.classifyList[i].fixed && freezeChannelList.push(nextProps.classifyList[i]);
}
let userSelectedLists = Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL) ? Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL) : [];
for (let i in userSelectedLists) {
!userSelectedLists[i].fixed && noFreezeChannelList.push(userSelectedLists[i])
}
userShowLists = freezeChannelList.concat(noFreezeChannelList);
// On first entry, set the initial channel_id
!Storage.get(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID) && Storage.set(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID, userShowLists[0].id);
if (!Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) && !Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) && !Storage.s_get(Storage.KEYS.ME_PAGE_KEY) && !Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) && !Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) && !Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY)) {
// The play page and the me page set a key when returning; use it to decide whether to clear the store data
//console.log(this.start);
//console.log(this.end);
this.handle = 'up';
this.context.store.dispatch(getHomeList(userShowLists[0].id, this.start, this.end, this.handle))
// this.navHandleClick(userShowLists[0].id, 0)
} else {
this.context.store.dispatch(clearHomeData())
}
// Cache all channels
Storage.set(Storage.KEYS.ALL_CHANNEL, nextProps.classifyList)
// Set up the navigation bar
Storage.set(Storage.KEYS.USER_SELECTED_CHANNEL, userShowLists);
userShowLists.length !== 0 && this.setState({
index: userShowLists[0].id,
freezeNum: freezeChannelList.length,
navSource: userShowLists
})
userShowLists = null;
// Set the channel refresh key; the full channel API is only requested from the server on a page refresh
this.channelKey = 2;
}
// Render triggered by pull-down
if (this.state.refreshIcon) {
this.setState({
refreshIcon: false,
firstLoading: false
})
}
if (nextProps.getCompileStatus !== true && nextProps.getCompileStatus !== this.props.getCompileStatus) {
document.body.scrollTop = Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY)
this.Touch.init();
Storage.s_remove(Storage.KEYS.HOME_SCROLL_KEY);
}
if ( nextProps.getHomeData.data && nextProps.getHomeData.data.data.length > 0) {
this.refreshKey = false;
let newPropsObj = nextProps.getHomeData.data;
let newPropsData = newPropsObj.data;
this.handle === 'up' ? this.isUpLoading = newPropsObj.loadMore : this.isDownLoading = newPropsObj.loadMore
let loadMore = newPropsObj.loadMore;
// Set the cache needed when returning to the home page
// (Number(Storage.get(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID)) !== this.state.index || !Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA)) ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, newPropsData) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length < 40 ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, this.state.mainSource.concat(newPropsData)) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).splice(19, Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length - 20).concat(newPropsData)
// Storage.set(Storage.KEYS.HOME_LOCALS.LOAD_MORE, loadMore);
// Storage.set(Storage.KEYS.HOME_LOCALS.FIRST_KEY, newPropsObj.firstKey);
// Storage.set(Storage.KEYS.HOME_LOCALS.LAST_KEY, newPropsObj.lastKey);
if (!this.fromScroll) {
document.body.scrollTop = 0;
}
// Skip re-rendering when coming back from the overlay
if (this.props.getHomeData.data && nextProps.getHomeData.data.data[0].id === this.props.getHomeData.data.data[0].id) {
return
}
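// Pagination cursors: start/end capture the first loaded window, while the up/down
// first/last keys track how far the list has been extended in each direction.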
this.start = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.firstKey : this.start;
this.end = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.lastKey : this.end;
this.firstUpKey = this.handle === 'up' ? newPropsObj.firstKey : this.firstUpKey;
this.lastUpKey = this.handle === 'up' ? newPropsObj.lastKey : this.lastUpKey;
this.firstDownKey = this.handle === 'down' ? newPropsObj.firstKey : this.firstDownKey;
this.lastDownKey = this.handle === 'down' ? newPropsObj.lastKey : this.lastDownKey;
console.log(this.state.firstLoading);
this.handle === 'down' ? this.setState({
isDownLoading: this.isDownLoading,
firstLoading: false,
mainSource: newPropsData.concat(this.state.mainSource)
}) : this.fromScroll ? this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: this.state.mainSource.concat(newPropsData)
}) : this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false | import { getChannels } from '../../actions/compile';
import Immutable from 'immutable';
import ToolKit from '../../tools/tools' | random_line_split |
|
Home.js | .state.index || !Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA)) ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, newPropsData) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length < 40 ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, this.state.mainSource.concat(newPropsData)) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).splice(19, Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length - 20).concat(newPropsData)
// Storage.set(Storage.KEYS.HOME_LOCALS.LOAD_MORE, loadMore);
// Storage.set(Storage.KEYS.HOME_LOCALS.FIRST_KEY, newPropsObj.firstKey);
// Storage.set(Storage.KEYS.HOME_LOCALS.LAST_KEY, newPropsObj.lastKey);
if (!this.fromScroll) {
document.body.scrollTop = 0;
}
// Skip re-rendering when coming back from the overlay
if (this.props.getHomeData.data && nextProps.getHomeData.data.data[0].id === this.props.getHomeData.data.data[0].id) {
return
}
this.start = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.firstKey : this.start;
this.end = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.lastKey : this.end;
this.firstUpKey = this.handle === 'up' ? newPropsObj.firstKey : this.firstUpKey;
this.lastUpKey = this.handle === 'up' ? newPropsObj.lastKey : this.lastUpKey;
this.firstDownKey = this.handle === 'down' ? newPropsObj.firstKey : this.firstDownKey;
this.lastDownKey = this.handle === 'down' ? newPropsObj.lastKey : this.lastDownKey;
console.log(this.state.firstLoading);
this.handle === 'down' ? this.setState({
isDownLoading: this.isDownLoading,
firstLoading: false,
mainSource: newPropsData.concat(this.state.mainSource)
}) : this.fromScroll ? this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: this.state.mainSource.concat(newPropsData)
}) : this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: newPropsData
})
// if (this.props.getCompileStatus === true) {
// this.isUpLoading = false;
// this.isDownLoading = false;
// }
} else {
this.refreshKey ? this.setState({
isUpLoading: true,
firstLoading: true
}, () => {
removeSessionStHandler()
}) : this.handle === 'down' ? this.setState({
isDownLoading: false
}) : this.setState({
isUpLoading: false,
firstLoading: false
}, () => {
removeSessionStHandler()
})
}
const removeSessionStHandler = () => {
this.isUpLoading = false;
if (Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) || Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) || Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY) || Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) || Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) || Storage.s_get(Storage.KEYS.ME_PAGE_KEY)) {
Storage.s_remove(Storage.KEYS.PLAY_PAGE_KEY)
Storage.s_remove(Storage.KEYS.LIVE_PAGE_KEY)
Storage.s_remove(Storage.KEYS.ME_PAGE_KEY)
Storage.s_remove(Storage.KEYS.SEARCH_PAGE_KEY)
Storage.s_remove(Storage.KEYS.RECORD_PAGE_KEY)
Storage.s_remove(Storage.KEYS.COLLECT_PAGE_KEY)
//Return to the corresponding tab according to the backKey value
let firstChannelID = Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL)[0].id;
this.handle = 'up';
this.start = this.end = 0;
this.firstUpKey = this.lastUpKey = this.firstDownKey = this.lastDownKey = null;
!Storage.s_get(Storage.KEYS.BACK_KEY) ?
this.navHandleClick(Number(firstChannelID), 0) : Number(Storage.s_get(Storage.KEYS.BACK_POSITION_KEY)) === 0 ?
this.context.store.dispatch(getHomeList(Number(Storage.s_get(Storage.KEYS.BACK_KEY)), 0, 0, this.handle)) : this.navHandleClick(Number(Storage.s_get(Storage.KEYS.BACK_KEY)), Number(Storage.s_get(Storage.KEYS.BACK_POSITION_KEY)));
}
}
}
componentWillUnmount() {
this.refreshKey = false;
//On unmount, remove event listeners and bindings
document.removeEventListener('scroll', this.scrollListener)
}
//Passed to the channel component to update the navigation
uploadNav(source) {
this.setState({
navSource: source
})
}
//display refresh icon
showRefresh() {
this.setState({
refreshIcon: true,
firstLoading: false
});
console.log('========show refresh icon===========');
}
//redispatch live data
pullUp() {
console.log('=============document.body.scrollTop==================' + document.body.scrollTop)
//console.log(this.props.getCompileStatus)
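// Presumably a pull gesture that ends within 50px of the top counts as pull-to-refresh:
// dispatch a 'down' request using the saved down cursors (falling back to the initial
// start/end keys) so that newer items get prepended.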
if (document.body.scrollTop <= 50) {
this.handle = 'down';
let firstDownKey = this.firstDownKey ? this.firstDownKey : this.start;
let lastDownKey = this.lastDownKey ? this.lastDownKey : this.end;
this.context.store.dispatch(getHomeList(this.state.index, firstDownKey, lastDownKey, this.handle));
Tracker.track('send', 'event', 'app', 'pull', 'down', 1);
console.log('=========下拉刷新==========');
}
}
navHandleClick(i, index) {
//index is the position of the current element within the nav
let linkageKey;
//Return early when the same channel tab is clicked again
// console.log(Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY));
if (Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY) !== null && this.state.index === Number(i)) {
return;
}
Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY) && this.context.store.dispatch(clearHomeData());
this.fromScroll = false;
this.setState({
index: Number(i),
firstLoading: true,
isUpLoading: true
})
//Set the keys used when navigating back to this page
Storage.s_set(Storage.KEYS.BACK_KEY, Number(i))
Storage.s_set(Storage.KEYS.BACK_POSITION_KEY, Number(index))
let active = document.querySelector('.active');
//Keep the nav styling in sync with the selection
let navMain = this.refs.navMain;
let navHeader = this.refs.navHeader;
let navChildLen = navMain.childNodes.length;
let scrollLeft;
//Handles the case where the selected tab was deleted when jumping back from the channel page to the home page
if (index >= 3) {
let dw = document.body.offsetWidth; //visible width of the page
// let nw = navMain.clientWidth //width of the navMain element
let nol = active.offsetLeft //left position of the current element relative to the layout or to the parent given by offsetParent
let ncw = active.clientWidth //width of the current element
nol = (index !== '') ? navMain.children[Number(index)].offsetLeft : nol;
//relative distance = absolute offset (offsetLeft) - scrollLeft
scrollLeft = Math.abs(nol - (dw - ncw) / 2); //compute the scroll offset
}
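// The block above centers the clicked tab: nol is the tab's absolute left offset, so
// scrolling the header by |nol - (viewportWidth - tabWidth) / 2| leaves the tab roughly
// in the middle of the screen; this is only applied from the 4th tab onward.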
navHeader.scrollLeft = index < 3 ? 0 : scrollLeft;
this.handle = 'up';
this.start = this.end = 0;
this.isUpLoading = this.isDownLoading = true;
this.firstUpKey = this.lastUpKey = this.firstDownKey = this.lastDownKey = null;
this.context.store.dispatch(getHomeList(Number(i), 0, 0, this.handle));
let name = navMain.children[index] && navMain.children[index].innerText || ''
Tracker.track('send', 'event', 'genre', 'click', i + '-' + name , 1);
}
//Controls showing and hiding the channel overlay
showCompile() {
// console.log(document.body.scrollTop);
Storage.s_set(Storage.KEYS.HOME_SCROLL_KEY, document.body.scrollTop)
// sessionStorage.setItem('home_scroll', document.body.scrollTop);
this.Touch.destory();
this.context.store.dispatch(showCompile())
Tracker.track('send', 'event', 'genre', 'edit', 'genre', 1);
}
scrollListener() {
if(document.body.scrollTop + ToolKit.winHeight() > document.body.scrollHeight - 10 && this.isUpLoading === true){
console.log('=========上拉加载==========')
this.fromScroll = true;
this.isUpLoading = false;
this.handle = | 'up';
t | identifier_name |
|
Home.js | (Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID) && Storage.set(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID, userShowLists[0].id);
if (!Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) && !Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) && !Storage.s_get(Storage.KEYS.ME_PAGE_KEY) && !Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) && !Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) && !Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY)) {
//Returning from the play page or the Me page carries a key value, used to decide whether to clear the store data
//console.log(this.start);
//console.log(this.end);
this.handle = 'up';
this.context.store.dispatch(getHomeList(userShowLists[0].id, this.start, this.end, this.handle))
// this.navHandleClick(userShowLists[0].id, 0)
} else {
this.context.store.dispatch(clearHomeData())
}
//Cache all channels
Storage.set(Storage.KEYS.ALL_CHANNEL, nextProps.classifyList)
//Set up the navigation bar
Storage.set(Storage.KEYS.USER_SELECTED_CHANNEL, userShowLists);
userShowLists.length !== 0 && this.setState({
index: userShowLists[0].id,
freezeNum: freezeChannelList.length,
navSource: userShowLists
})
userShowLists = null;
//Set the channel-refresh key; the full channel API is only requested from the server when the page is reloaded
this.channelKey = 2;
}
//Render triggered by pull-down refresh
if (this.state.refreshIcon) {
this.setState({
refreshIcon: false,
firstLoading: false
})
}
if (nextProps.getCompileStatus !== true && nextProps.getCompileStatus !== this.props.getCompileStatus) {
document.body.scrollTop = Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY)
this.Touch.init();
Storage.s_remove(Storage.KEYS.HOME_SCROLL_KEY);
}
if ( nextProps.getHomeData.data && nextProps.getHomeData.data.data.length > 0) {
this.refreshKey = false;
let newPropsObj = nextProps.getHomeData.data;
let newPropsData = newPropsObj.data;
this.handle === 'up' ? this.isUpLoading = newPropsObj.loadMore : this.isDownLoading = newPropsObj.loadMore
let loadMore = newPropsObj.loadMore;
//Set the cache needed for returning to the home page
// (Number(Storage.get(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID)) !== this.state.index || !Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA)) ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, newPropsData) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length < 40 ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, this.state.mainSource.concat(newPropsData)) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).splice(19, Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length - 20).concat(newPropsData)
// Storage.set(Storage.KEYS.HOME_LOCALS.LOAD_MORE, loadMore);
// Storage.set(Storage.KEYS.HOME_LOCALS.FIRST_KEY, newPropsObj.firstKey);
// Storage.set(Storage.KEYS.HOME_LOCALS.LAST_KEY, newPropsObj.lastKey);
if (!this.fromScroll) {
document.body.scrollTop = 0;
}
//Do not re-render when coming back from the overlay
if (this.props.getHomeData.data && nextProps.getHomeData.data.data[0].id === this.props.getHomeData.data.data[0].id) {
return
}
this.start = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.firstKey : this.start;
this.end = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.lastKey : this.end;
this.firstUpKey = this.handle === 'up' ? newPropsObj.firstKey : this.firstUpKey;
this.lastUpKey = this.handle === 'up' ? newPropsObj.lastKey : this.lastUpKey;
this.firstDownKey = this.handle === 'down' ? newPropsObj.firstKey : this.firstDownKey;
this.lastDownKey = this.handle === 'down' ? newPropsObj.lastKey : this.lastDownKey;
console.log(this.state.firstLoading);
this.handle === 'down' ? this.setState({
isDownLoading: this.isDownLoading,
firstLoading: false,
mainSource: newPropsData.concat(this.state.mainSource)
}) : this.fromScroll ? this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: this.state.mainSource.concat(newPropsData)
}) : this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: newPropsData
})
// if (this.props.getCompileStatus === true) {
// this.isUpLoading = false;
// this.isDownLoading = false;
// }
} else {
this.refreshKey ? this.setState({
isUpLoading: true,
firstLoading: true
}, () => {
removeSessionStHandler()
}) : this.handle === 'down' ? this.setState({
isDownLoading: false
}) : this.setState({
isUpLoading: false,
firstLoading: false
}, () => {
removeSessionStHandler()
})
}
const removeSessionStHandler = () => {
this.isUpLoading = false;
if (Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) || Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) || Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY) || Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) || Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) || Storage.s_get(Storage.KEYS.ME_PAGE_KEY)) {
Storage.s_remove(Storage.KEYS.PLAY_PAGE_KEY)
Storage.s_remove(Storage.KEYS.LIVE_PAGE_KEY)
Storage.s_remove(Storage.KEYS.ME_PAGE_KEY)
Storage.s_remove(Storage.KEYS.SEARCH_PAGE_KEY)
Storage.s_remove(Storage.KEYS.RECORD_PAGE_KEY)
Storage.s_remove(Storage.KEYS.COLLECT_PAGE_KEY)
//Return to the corresponding tab according to the backKey value
let firstChannelID = Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL)[0].id;
this.handle = 'up';
this.start = this.end = 0;
this.firstUpKey = this.lastUpKey = this.firstDownKey = this.lastDownKey = null;
!Storage.s_get(Storage.KEYS.BACK_KEY) ?
this.navHandleClick(Number(firstChannelID), 0) : Number(Storage.s_get(Storage.KEYS.BACK_POSITION_KEY)) === 0 ?
this.context.store.dispatch(getHomeList(Number(Storage.s_get(Storage.KEYS.BACK_KEY)), 0, 0, this.handle)) : this.navHandleClick(Number(Storage.s_get(Storage.KEYS.BACK_KEY)), Number(Storage.s_get(Storage.KEYS.BACK_POSITION_KEY)));
}
}
}
componentWillUnmount() {
this.refreshKey = false;
//On unmount, remove event listeners and bindings
document.removeEventListener('scroll', this.scrollListener)
}
//Passed to the channel component to update the navigation
uploadNav(source) {
this.setState({
navSource: source
})
}
//display refresh icon
showRefresh() {
this.setState({
refreshIcon: true,
firstLoading: false
});
console.log('========show refresh icon===========');
}
//redispatch live data
pullUp() {
console.log('=============document.body.scrollTop==================' + document.body.scrollTop)
//console.log(this.props.getCompileStatus)
if (document.body.scrollTop <= 50) {
this.handle = 'down';
let firstDownKey = this.firstDownKey ? this.firstDownKey : this.start;
let lastDownKey = this.lastDownKey ? t | his.lastDownKey : this.end;
this.context.store.dispatch(getHomeList(this.state.index, firstDownKey, lastDownKey, this.handle));
Tracker.track('send', 'event', 'app', 'pull', 'down', 1);
console.log('=========下拉刷新==========');
}
}
navHandleClick(i, index) {
//index is the position of the current element within the nav
let linkageKey;
//Return early when the same channel tab is clicked again
// console.log(Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY));
if (Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY) !== null && this.state.index === Number(i)) {
return;
}
Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY) && this.context.st | identifier_body |
|
Home.js | _PAGE_KEY) || Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) || Storage.s_get(Storage.KEYS.ME_PAGE_KEY)) && this.context.store.dispatch(clearHomeData());
}
componentDidMount() {
//pull-down init
this.Touch = new VTouch(this.refs.homeList, "y", this.showRefresh, this.pullUp)
this.Touch.init()
document.addEventListener('scroll', this.scrollListener)
}
// shouldComponentUpdate(nextProps, nextState) {
// return !Immutable.is(nextProps.getHomeData, this.props.getHomeData)
// }
componentWillReceiveProps(nextProps, nextState) {
//Store the channels
if (nextProps.classifyList && nextProps.classifyList.length > 0 && this.channelKey === 1) {
let freezeChannelList = [];
let noFreezeChannelList = [];
let userShowLists = [];
for (let i in nextProps.classifyList) {
nextProps.classifyList[i].fixed && freezeChannelList.push(nextProps.classifyList[i]);
}
let userSelectedLists = Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL) ? Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL) : [];
for (let i in userSelectedLists) {
!userSelectedLists[i].fixed && noFreezeChannelList.push(userSelectedLists[i])
}
userShowLists = freezeChannelList.concat(noFreezeChannelList);
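// Assumed ordering rule: channels flagged as fixed always come first, followed by the
// user's own (non-fixed) selections restored from local storage.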
//Set the initial channel_id on first entry
!Storage.get(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID) && Storage.set(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID, userShowLists[0].id);
if (!Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) && !Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) && !Storage.s_get(Storage.KEYS.ME_PAGE_KEY) && !Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) && !Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) && !Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY)) {
//Returning from the play page or the Me page carries a key value, used to decide whether to clear the store data
//console.log(this.start);
//console.log(this.end);
this.handle = 'up';
this.context.store.dispatch(getHomeList(userShowLists[0].id, this.start, this.end, this.handle))
// this.navHandleClick(userShowLists[0].id, 0)
} else {
this.context.store.dispatch(clearHomeData())
}
//Cache all channels
Storage.set(Storage.KEYS.ALL_CHANNEL, nextProps.classifyList)
//Set up the navigation bar
Storage.set(Storage.KEYS.USER_SELECTED_CHANNEL, userShowLists);
userShowLists.length !== 0 && this.setState({
index: userShowLists[0].id,
freezeNum: freezeChannelList.length,
navSource: userShowLists
})
userShowLists = null;
//Set the channel-refresh key; the full channel API is only requested from the server when the page is reloaded
this.channelKey = 2;
}
//Render triggered by pull-down refresh
if (this.state.refreshIcon) {
this.setState({
refreshIcon: false,
firstLoading: false
})
}
if (nextProps.getCompileStatus !== true && nextProps.getCompileStatus !== this.props.getCompileStatus) {
document.body.scrollTop | .s_remove(Storage.KEYS.HOME_SCROLL_KEY);
}
if ( nextProps.getHomeData.data && nextProps.getHomeData.data.data.length > 0) {
this.refreshKey = false;
let newPropsObj = nextProps.getHomeData.data;
let newPropsData = newPropsObj.data;
this.handle === 'up' ? this.isUpLoading = newPropsObj.loadMore : this.isDownLoading = newPropsObj.loadMore
let loadMore = newPropsObj.loadMore;
//Set the cache needed for returning to the home page
// (Number(Storage.get(Storage.KEYS.HOME_LOCALS.CURRENT_CHANNEL_ID)) !== this.state.index || !Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA)) ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, newPropsData) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length < 40 ?
// Storage.set(Storage.KEYS.HOME_LOCALS.EPISODE_DATA, this.state.mainSource.concat(newPropsData)) : Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).splice(19, Storage.get(Storage.KEYS.HOME_LOCALS.EPISODE_DATA).length - 20).concat(newPropsData)
// Storage.set(Storage.KEYS.HOME_LOCALS.LOAD_MORE, loadMore);
// Storage.set(Storage.KEYS.HOME_LOCALS.FIRST_KEY, newPropsObj.firstKey);
// Storage.set(Storage.KEYS.HOME_LOCALS.LAST_KEY, newPropsObj.lastKey);
if (!this.fromScroll) {
document.body.scrollTop = 0;
}
//Do not re-render when coming back from the overlay
if (this.props.getHomeData.data && nextProps.getHomeData.data.data[0].id === this.props.getHomeData.data.data[0].id) {
return
}
this.start = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.firstKey : this.start;
this.end = (!this.firstUpKey && !this.lastUpKey && !this.firstDownKey && !this.lastDownKey) ? newPropsObj.lastKey : this.end;
this.firstUpKey = this.handle === 'up' ? newPropsObj.firstKey : this.firstUpKey;
this.lastUpKey = this.handle === 'up' ? newPropsObj.lastKey : this.lastUpKey;
this.firstDownKey = this.handle === 'down' ? newPropsObj.firstKey : this.firstDownKey;
this.lastDownKey = this.handle === 'down' ? newPropsObj.lastKey : this.lastDownKey;
console.log(this.state.firstLoading);
this.handle === 'down' ? this.setState({
isDownLoading: this.isDownLoading,
firstLoading: false,
mainSource: newPropsData.concat(this.state.mainSource)
}) : this.fromScroll ? this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: this.state.mainSource.concat(newPropsData)
}) : this.setState({
isUpLoading: this.isUpLoading,
firstLoading: false,
mainSource: newPropsData
})
// if (this.props.getCompileStatus === true) {
// this.isUpLoading = false;
// this.isDownLoading = false;
// }
} else {
this.refreshKey ? this.setState({
isUpLoading: true,
firstLoading: true
}, () => {
removeSessionStHandler()
}) : this.handle === 'down' ? this.setState({
isDownLoading: false
}) : this.setState({
isUpLoading: false,
firstLoading: false
}, () => {
removeSessionStHandler()
})
}
const removeSessionStHandler = () => {
this.isUpLoading = false;
if (Storage.s_get(Storage.KEYS.SEARCH_PAGE_KEY) || Storage.s_get(Storage.KEYS.RECORD_PAGE_KEY) || Storage.s_get(Storage.KEYS.COLLECT_PAGE_KEY) || Storage.s_get(Storage.KEYS.PLAY_PAGE_KEY) || Storage.s_get(Storage.KEYS.LIVE_PAGE_KEY) || Storage.s_get(Storage.KEYS.ME_PAGE_KEY)) {
Storage.s_remove(Storage.KEYS.PLAY_PAGE_KEY)
Storage.s_remove(Storage.KEYS.LIVE_PAGE_KEY)
Storage.s_remove(Storage.KEYS.ME_PAGE_KEY)
Storage.s_remove(Storage.KEYS.SEARCH_PAGE_KEY)
Storage.s_remove(Storage.KEYS.RECORD_PAGE_KEY)
Storage.s_remove(Storage.KEYS.COLLECT_PAGE_KEY)
//Return to the corresponding tab according to the backKey value
let firstChannelID = Storage.get(Storage.KEYS.USER_SELECTED_CHANNEL)[0].id;
this.handle = 'up';
this.start = this.end = 0;
this.firstUpKey = this.lastUpKey = this.firstDownKey = this.lastDownKey = null;
!Storage.s_get(Storage.KEYS.BACK_KEY) ?
this.navHandleClick(Number(firstChannelID), 0) : Number(Storage.s_get(Storage.KEYS.BACK_POSITION_KEY)) === 0 ?
this.context.store.dispatch(getHomeList(Number(Storage.s_get(Storage.KEYS.BACK_KEY)), 0, 0, this.handle)) : this.navHandleClick(Number(Storage.s_get(Storage.KEYS.BACK_KEY)), Number(Storage.s_get(Storage.KEYS.BACK_POSITION_KEY)));
}
}
}
componentWillUnmount() {
this.refreshKey = false;
//On unmount, remove event listeners and bindings
document.removeEventListener('scroll', this.scrollListener)
}
//Passed to the channel component to update the navigation
uploadNav(source) {
this.setState({
| = Storage.s_get(Storage.KEYS.HOME_SCROLL_KEY)
this.Touch.init();
Storage | conditional_block |
word_module_trials07.py | currentPage_font_count
currentPage_font_count = 0
def my_add_paragraph(text, bold_or_not,change_font, underlined, fontSize=None):
#global document
p = document.add_paragraph()
run = p.add_run(text)
if(bold_or_not):
run.bold=True
if(change_font):
font = run.font
font.size = Pt(fontSize)
if(underlined):
run.font.underline = True
return p
def make_answer_rect(last_row,length_answer):
table = document.add_table(last_row, 2)
table.style = 'Table Grid'
#write the lines
my_final_line=""
my_line = ['_']*105
my_final_line = my_final_line.join(my_line)
for i in range(length_answer):
row = table.rows[i+1].cells
row[0].text = '\n\n'+my_final_line
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
#font.size= Pt(14)
font.color.rgb = RGBColor(220,220,220)#light gray
#font.color.rgb = RGBColor(192,192,192)#darker gray=silver#
a = table.cell(0, 0)
b = table.cell(last_row-1, 1)
A = a.merge(b)
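# Layout trick used above: faint underscore rows are written first as handwriting guides,
# then every cell is merged into a single bordered rectangle that serves as the answer box
# for the essay question.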
#document.add_paragraph(str(length_answer))
def check_end_of_page(which_part):
global currentPage_font_count
max_cnt = 0
if(which_part):#in the part of mcqs
max_cnt = 511#530 #519 = 12*5*2 + 19*11 + 19*10 #this value was computed from the real MS word = 12*5*2 + 20*11 + 19*9
else :#in the part of essay questions
max_cnt = 500#512 #this value was computed from the real MS word = (11 * 24) + (12*5) + (10*5*2) +14*2 (for the header only)+12*5(for word answer)
if(currentPage_font_count >= max_cnt):
myAddPageBreak()
#document.add_paragraph(str(currentPage_font_count))
return True
else:
#document.add_paragraph(str(currentPage_font_count))
return False
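# check_end_of_page works on a rough point budget: callers add an estimate of the vertical
# space (in points) each element will occupy to currentPage_font_count, and max_cnt is the
# page capacity measured against a real MS Word layout (511 for the MCQ part, 500 for the
# essay part). Once the budget is exceeded a page break plus a fresh barcode header is emitted.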
def make_essay_questions(questions,answer):
global currentPage_font_count
i=0
for str1 in questions:
#for the answer:
length_answer = math.floor(answer[i]/30) + 1#math.floor(answer[i]/83) + 1
last_length = length_answer*3+2
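# Sizing sketch (assumed heuristics): the expected answer length in characters is turned into
# guide lines at roughly 30 characters per line, and last_length = 3*lines + 2 table rows gives
# the merged answer box some breathing room. E.g. a 70-character model answer ->
# floor(70/30)+1 = 3 guide lines -> an 11-row box.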
#check if end of page or not
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12 #10 and 10 for the margins around the question
if(check_end_of_page(False)):
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12
#document.add_paragraph('count= '+str(currentPage_font_count))
#for question
paragraph = document.add_paragraph('')
paragraph_format1 = paragraph.paragraph_format
paragraph_format1.space_before = Pt(10)
paragraph_format1.space_before.pt
paragraph_format1.space_after = Pt(10)
paragraph_format1.space_after.pt
#run = paragraph.add_run('Q-'+str(i+1)+':\n'+str1)
run = paragraph.add_run('Question:\n'+str(i+1)+'- '+str1)
run.bold = True
font = run.font
font.name = 'Calibri'
font.size = Pt(12)
#for the answer:
p_answer = my_add_paragraph("Answer:",False,True,False,12)
paragraph_format2 = p_answer.paragraph_format
paragraph_format2.space_before = Pt(0)
paragraph_format2.space_before.pt
paragraph_format2.space_after = Pt(0)
paragraph_format2.space_after.pt
make_answer_rect(last_length,length_answer)
i = i+1
#------------------------------------------- page format -----------------------------------------------
#add the QR code at the heading or the top of the page
insertBarcode()
#create the ticket of name and ID after writing 'Exam'
my_add_paragraph("Cairo University",True,True,False,16)
my_add_paragraph("Faculty of Engineering",True,True,False,16)
my_add_paragraph("Computer department",True,True,False,16)
my_add_paragraph("\n\n\t\t\t\t\tExam\n",True,True,False,22)
#rectangle of the ticket name and ID of student
table_merge = document.add_table(6, 2)
table_merge.style = 'Table Grid'
row = table_merge.rows[1].cells
row[0].text = '\nName: '
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(14)
row = table_merge.rows[3].cells
row[0].text = '\nID: '
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(14)
a = table_merge.cell(0, 0)
b = table_merge.cell(5, 1)
A = a.merge(b)
#add notes
my_add_paragraph("\n\n\t\t\t\tImportant Notes\n",True,True,False,18)
my_add_paragraph("\t1. You should write your quartet name",False,True,False,12)
my_add_paragraph("\t2. For essay questions only answers written in the rectangles will be graded",False,True,False,12)
my_add_paragraph("\t3. Your answers shouldn't exceed the space specified below each question",False,True,False,12)
my_add_paragraph("\t4. For Multiple choice questions, only answers in the table will be graded",False,True,False,12)
#------------------------------------------- MCQ -----------------------------------------------
myAddPageBreak()
#get the list of mcq questions
pp = my_add_paragraph('Multiple Choice Questions:',True,True,True,14)
paragraph_format1 = pp.paragraph_format
paragraph_format1.space_before = Pt(0)
paragraph_format1.space_before.pt
paragraph_format1.space_after = Pt(14)
paragraph_format1.space_after.pt
currentPage_font_count += 14*2
question_header = get_mcq()
question_choices = get_choices()
#max_choices = max_no_of_choices()
#first create the table for the students to put their mcq answers in
#table = document.add_table(len(question_header)+1, max_choices +1)
all_questions = math.ceil(len(question_header)/23)
last_row_written = 1
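# The MCQ answer grid is emitted in chunks of roughly 23 numbered rows (plus a header row) so
# that each chunk fits on one page; all_questions counts how many chunks are still needed and
# last_row_written remembers where the next chunk must resume after a page break.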
while all_questions != 0:
table = document.add_table(1, 2)
table.style = 'Table Grid'
#1. put the header of the table
header_row = table.rows[0].cells
header_row[0].text = str(last_row_written)
header_row[0].width= Inches(0.4)
header_row[1].width= Inches(1.8)
paragraphs = header_row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(20)
paragraphs = header_row[1].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(20)
for x in range(last_row_written-1,len(question_header)-1):
#row_cells[1].text = 'in original loop '+str(all_questions)
if(((x+2)%24) == 0): #means end of table in the current page
myAddPageBreak()
last_row_written += 23
#row_cells[1].text = 'break'+str(last_row_written)+' '+str(all_questions)
break
row_cells = table.add_row().cells
row_cells[0].text = str(x+2)
row_cells[0].width= Inches(0.4)
paragraphs = row_cells[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(20)
row_cells[1].width= Inches(1.8)
paragraphs = row_cells[1].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(20)
table.alignment = WD_TABLE_ALIGNMENT.CENTER
all_questions -= 1
#add another table if the table exceeds the length of a page(make two tables in one page as in bag data exam)
#add a new table in a new page if we had two tables in one page and they already filled the current page
| #write the multiple choice questions
myAddPageBreak()
for x in range(len(question_header)):
| random_line_split |
|
word_module_trials07.py | was able to draw the attention of the salesmen who thought him rich and likely to make heavy purchases. He was shown the superior varieties of suit lengths and sarees. But after casually examining them, he kept moving to the next section, where readymade goods were being sold and further on to another section. By then, the salesmen had begun to doubt his intentions and drew the attention of the manager. The manager asked him what exactly he wanted and he replied that he wanted courteous treatment. He explained that he had come to the same shop in casual dress that morning and drawn little attention. His pride was hurt and he wanted to assert himself. He had come in good dress only to get decent treatment, not for getting any textiles. He left without making any purchase."""
list_questions = ["why did the sales man think that the young man would buy lots of clothes ?" , \
"what did the young man want when the manager asked him ?", "why did the pride of the young man got hurt ?"\
, "what did the sales man do when he doubted about that customer ?" ]
list_answers = [len("because he thought he was a rich man and would make heavy purchases") ,\
len("he only wanted courteous treatment") , len("he had come to the shop in casual dress , and had little attention.") \
, len("he called on his manager.")]
return list_quotes,list_questions,list_answers
def get_quotations():
list_quotes = "there has been a great argument between the two main politcal groups "
list_questions = ["what did gulliver see inside the kin's palace ?",\
"why did the trameksan want to wear high heels on their shoes ?",\
"explain if the king'sson was obedient or disobedient to his father."]
list_answers = [60, 20 , 70]
return list_quotes,list_questions,list_answers
def get_barcode_no():
#we get the following list from DB test ----------------------------------------------
return 11224444
def insertBarcode():
'''
pic_par = document.add_paragraph()
run = pic_par.add_run()
run.add_picture('barcode03.png', width=Inches(1.0))
paragraph_format_pic = pic_par.paragraph_format
paragraph_format_pic.space_before = Pt(0)
paragraph_format_pic.space_before.pt
paragraph_format_pic.space_after = Pt(0)
paragraph_format_pic.space_after.pt
'''
barcode = document.add_paragraph()
paragraph_format_barcode = barcode.paragraph_format
paragraph_format_barcode.space_before = Pt(0)
paragraph_format_barcode.space_before.pt
paragraph_format_barcode.space_after = Pt(10)
paragraph_format_barcode.space_after.pt
barcode_run = barcode.add_run('Code: ')
barcode_run.bold = True
font_bar = barcode_run.font
font_bar.size= Pt(14)
barcode_run2 = barcode.add_run(str(get_barcode_no()))
#barcode_run2 = barcode.add_run(' '+str(get_barcode_no()))
font_bar2 = barcode_run2.font
font_bar2.size= Pt(12)
#insert page number here at the top of the page
global counter
run_page_no = barcode.add_run('\t\t\t\t\t\t\t\t page '+str(counter))
font_page_no = run_page_no.font
font_page_no.size = Pt(12)
counter += 1
insertHR(barcode)
def myAddPageBreak():
document.add_page_break()
insertBarcode()
global currentPage_font_count
currentPage_font_count = 0
def my_add_paragraph(text, bold_or_not,change_font, underlined, fontSize=None):
#global document
p = document.add_paragraph()
run = p.add_run(text)
if(bold_or_not):
run.bold=True
if(change_font):
font = run.font
font.size = Pt(fontSize)
if(underlined):
run.font.underline = True
return p
def make_answer_rect(last_row,length_answer):
table = document.add_table(last_row, 2)
table.style = 'Table Grid'
#write the lines
my_final_line=""
my_line = ['_']*105
my_final_line = my_final_line.join(my_line)
for i in range(length_answer):
row = table.rows[i+1].cells
row[0].text = '\n\n'+my_final_line
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
#font.size= Pt(14)
font.color.rgb = RGBColor(220,220,220)#light gray
#font.color.rgb = RGBColor(192,192,192)#darker gray=silver#
a = table.cell(0, 0)
b = table.cell(last_row-1, 1)
A = a.merge(b)
#document.add_paragraph(str(length_answer))
def check_end_of_page(which_part):
global currentPage_font_count
max_cnt = 0
if(which_part):#in the part of mcqs
max_cnt = 511#530 #519 = 12*5*2 + 19*11 + 19*10 #this value was computed from the real MS word = 12*5*2 + 20*11 + 19*9
else :#in the part of essay questions
max_cnt = 500#512 #this value was computed from the real MS word = (11 * 24) + (12*5) + (10*5*2) +14*2 (for the header only)+12*5(for word answer)
if(currentPage_font_count >= max_cnt):
myAddPageBreak()
#document.add_paragraph(str(currentPage_font_count))
return True
else:
#document.add_paragraph(str(currentPage_font_count))
return False
def make_essay_questions(questions,answer):
global currentPage_font_count
i=0
for str1 in questions:
#for the answer:
length_answer = math.floor(answer[i]/30) + 1#math.floor(answer[i]/83) + 1
last_length = length_answer*3+2
#check if end of page or not
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12 #10 and 10 for the margins around the question
if(check_end_of_page(False)):
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12
#document.add_paragraph('count= '+str(currentPage_font_count))
#for question
paragraph = document.add_paragraph('')
paragraph_format1 = paragraph.paragraph_format
paragraph_format1.space_before = Pt(10)
paragraph_format1.space_before.pt
paragraph_format1.space_after = Pt(10)
paragraph_format1.space_after.pt
#run = paragraph.add_run('Q-'+str(i+1)+':\n'+str1)
run = paragraph.add_run('Question:\n'+str(i+1)+'- '+str1)
run.bold = True
font = run.font
font.name = 'Calibri'
font.size = Pt(12)
#for the answer:
p_answer = my_add_paragraph("Answer:",False,True,False,12)
paragraph_format2 = p_answer.paragraph_format
paragraph_format2.space_before = Pt(0)
paragraph_format2.space_before.pt
paragraph_format2.space_after = Pt(0)
paragraph_format2.space_after.pt
make_answer_rect(last_length,length_answer)
i = i+1
#------------------------------------------- page format -----------------------------------------------
#add the QR code at the heading or the top of the page
insertBarcode()
#create the ticket of name and ID after writing 'Exam'
my_add_paragraph("Cairo University",True,True,False,16)
my_add_paragraph("Faculty of Engineering",True,True,False,16)
my_add_paragraph("Computer department",True,True,False,16)
my_add_paragraph("\n\n\t\t\t\t\tExam\n",True,True,False,22)
#rectangle of the ticket name and ID of student
table_merge = document.add_table(6, 2)
table_merge.style = 'Table Grid'
row = table_merge.rows[1].cells
row[0].text = '\nName: '
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
|
row = table_merge.rows[3].cells
row[0].text = '\nID: '
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
font.size= Pt(14)
a = table_merge.cell(0, 0)
b = table_merge.cell(5, 1)
A = a.merge(b)
#add notes
my_add_paragraph("\n\n\t\t\t\tImportant Notes\n",True,True,False,18)
my_add_paragraph("\t | font = run.font
font.size= Pt(14) | conditional_block |
word_module_trials07.py | ():
#we get the following list from DB test ----------------------------------------------
question_header = ["there is very little .... from the factory, so it's nor bad for the environment", \
"here is your ticket for the museum , the ticket is ....... for two days." , \
"ola spent most of her ..... living on a farm , but she moved to cairo when she was sixteen",\
"it ...... that the population if the world is more than seven billion" ,\
"nour ... father is a surgeon , is my best friend" , \
"i remember things better when i study ....... things such as maps and pictures.",\
" the Qsr-ElNile bridge is not ..... the 6th october bridge "]
return question_header
def get_choices():
#we get the following list from DB test ----------------------------------------------
#split choices using ",,__,,"
question_choices = ["waste,,__,,wave,,__,,wildlife,,__,,weight" , "virtual,,__,,valid,,__,,vinegar,,__,,vapour" , \
"child,,__,,childhood,,__,,character,,__,,family" , "believes,,__,,believed,,__,,is believed,,__,, believes" , \
"whose,,__,,which,,__,,that,,__,,who" , "wirtual,,__,,seeing,,__,,see,,__,,visual" , \
"as long as ,,__,, the long as ,,__,,long as ,,__,, as long"]
return question_choices
def get_reading():
list_quotes = """A well-dressed young man entered a big textile shop one evening. He was able to draw the attention of the salesmen who thought him rich and likely to make heavy purchases. He was shown the superior varieties of suit lengths and sarees. But after casually examining them, he kept moving to the next section, where readymade goods were being sold and further on to another section. By then, the salesmen had begun to doubt his intentions and drew the attention of the manager. The manager asked him what exactly he wanted and he replied that he wanted courteous treatment. He explained that he had come to the same shop in casual dress that morning and drawn little attention. His pride was hurt and he wanted to assert himself. He had come in good dress only to get decent treatment, not for getting any textiles. He left without making any purchase."""
list_questions = ["why did the sales man think that the young man would buy lots of clothes ?" , \
"what did the young man want when the manager asked him ?", "why did the pride of the young man got hurt ?"\
, "what did the sales man do when he doubted about that customer ?" ]
list_answers = [len("because he thought he was a rich man and would make heavy purchases") ,\
len("he only wanted courteous treatment") , len("he had come to the shop in casual dress , and had little attention.") \
, len("he called on his manager.")]
return list_quotes,list_questions,list_answers
def get_quotations():
list_quotes = "there has been a great argument between the two main politcal groups "
list_questions = ["what did gulliver see inside the kin's palace ?",\
"why did the trameksan want to wear high heels on their shoes ?",\
"explain if the king'sson was obedient or disobedient to his father."]
list_answers = [60, 20 , 70]
return list_quotes,list_questions,list_answers
def get_barcode_no():
#we get the following list from DB test ----------------------------------------------
return 11224444
def insertBarcode():
'''
pic_par = document.add_paragraph()
run = pic_par.add_run()
run.add_picture('barcode03.png', width=Inches(1.0))
paragraph_format_pic = pic_par.paragraph_format
paragraph_format_pic.space_before = Pt(0)
paragraph_format_pic.space_before.pt
paragraph_format_pic.space_after = Pt(0)
paragraph_format_pic.space_after.pt
'''
barcode = document.add_paragraph()
paragraph_format_barcode = barcode.paragraph_format
paragraph_format_barcode.space_before = Pt(0)
paragraph_format_barcode.space_before.pt
paragraph_format_barcode.space_after = Pt(10)
paragraph_format_barcode.space_after.pt
barcode_run = barcode.add_run('Code: ')
barcode_run.bold = True
font_bar = barcode_run.font
font_bar.size= Pt(14)
barcode_run2 = barcode.add_run(str(get_barcode_no()))
#barcode_run2 = barcode.add_run(' '+str(get_barcode_no()))
font_bar2 = barcode_run2.font
font_bar2.size= Pt(12)
#insert page number here at the top of the page
global counter
run_page_no = barcode.add_run('\t\t\t\t\t\t\t\t page '+str(counter))
font_page_no = run_page_no.font
font_page_no.size = Pt(12)
counter += 1
insertHR(barcode)
def myAddPageBreak():
document.add_page_break()
insertBarcode()
global currentPage_font_count
currentPage_font_count = 0
def my_add_paragraph(text, bold_or_not,change_font, underlined, fontSize=None):
#global document
p = document.add_paragraph()
run = p.add_run(text)
if(bold_or_not):
run.bold=True
if(change_font):
font = run.font
font.size = Pt(fontSize)
if(underlined):
run.font.underline = True
return p
def make_answer_rect(last_row,length_answer):
table = document.add_table(last_row, 2)
table.style = 'Table Grid'
#write the lines
my_final_line=""
my_line = ['_']*105
my_final_line = my_final_line.join(my_line)
for i in range(length_answer):
row = table.rows[i+1].cells
row[0].text = '\n\n'+my_final_line
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
#font.size= Pt(14)
font.color.rgb = RGBColor(220,220,220)#light gray
#font.color.rgb = RGBColor(192,192,192)#darker gray=silver#
a = table.cell(0, 0)
b = table.cell(last_row-1, 1)
A = a.merge(b)
#document.add_paragraph(str(length_answer))
def check_end_of_page(which_part):
global currentPage_font_count
max_cnt = 0
if(which_part):#in the part of mcqs
max_cnt = 511#530 #519 = 12*5*2 + 19*11 + 19*10 #this value was computed from the real MS word = 12*5*2 + 20*11 + 19*9
else :#in the part of essay questions
max_cnt = 500#512 #this value was computed from the real MS word = (11 * 24) + (12*5) + (10*5*2) +14*2 (for the header only)+12*5(for word answer)
if(currentPage_font_count >= max_cnt):
myAddPageBreak()
#document.add_paragraph(str(currentPage_font_count))
return True
else:
#document.add_paragraph(str(currentPage_font_count))
return False
def make_essay_questions(questions,answer):
global currentPage_font_count
i=0
for str1 in questions:
#for the answer:
length_answer = math.floor(answer[i]/30) + 1#math.floor(answer[i]/83) + 1
last_length = length_answer*3+2
#check if end of page or not
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12 #10 and 10 for the margins around the question
if(check_end_of_page(False)):
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12
#document.add_paragraph('count= '+str(currentPage_font_count))
#for question
paragraph = document.add_paragraph('')
paragraph_format1 = paragraph.paragraph_format
paragraph_format1.space_before = Pt(10)
paragraph_format1.space_before.pt
paragraph_format1.space_after = Pt(10)
paragraph_format1.space_after.pt
#run = paragraph.add_run('Q-'+str(i+1)+':\n'+str1)
run = paragraph.add_run('Question:\n'+str(i+1)+'- '+str1)
run.bold = True
font = run.font
font.name = 'Calibri'
font.size = Pt(12)
#for the answer:
p_answer = my_add_paragraph("Answer:",False,True,False,12)
paragraph_format2 = p_answer.paragraph_format
paragraph_format2.space_before = Pt( | get_mcq | identifier_name |
|
word_module_trials07.py |
def get_choices():
#we get the following list from DB test ----------------------------------------------
#split choices using ",,__,,"
question_choices = ["waste,,__,,wave,,__,,wildlife,,__,,weight" , "virtual,,__,,valid,,__,,vinegar,,__,,vapour" , \
"child,,__,,childhood,,__,,character,,__,,family" , "believes,,__,,believed,,__,,is believed,,__,, believes" , \
"whose,,__,,which,,__,,that,,__,,who" , "wirtual,,__,,seeing,,__,,see,,__,,visual" , \
"as long as ,,__,, the long as ,,__,,long as ,,__,, as long"]
return question_choices
def get_reading():
list_quotes = """A well-dressed young man entered a big textile shop one evening. He was able to draw the attention of the salesmen who thought him rich and likely to make heavy purchases. He was shown the superior varieties of suit lengths and sarees. But after casually examining them, he kept moving to the next section, where readymade goods were being sold and further on to another section. By then, the salesmen had begun to doubt his intentions and drew the attention of the manager. The manager asked him what exactly he wanted and he replied that he wanted courteous treatment. He explained that he had come to the same shop in casual dress that morning and drawn little attention. His pride was hurt and he wanted to assert himself. He had come in good dress only to get decent treatment, not for getting any textiles. He left without making any purchase."""
list_questions = ["why did the sales man think that the young man would buy lots of clothes ?" , \
"what did the young man want when the manager asked him ?", "why did the pride of the young man got hurt ?"\
, "what did the sales man do when he doubted about that customer ?" ]
list_answers = [len("because he thought he was a rich man and would make heavy purchases") ,\
len("he only wanted courteous treatment") , len("he had come to the shop in casual dress , and had little attention.") \
, len("he called on his manager.")]
return list_quotes,list_questions,list_answers
def get_quotations():
list_quotes = "there has been a great argument between the two main politcal groups "
list_questions = ["what did gulliver see inside the kin's palace ?",\
"why did the trameksan want to wear high heels on their shoes ?",\
"explain if the king'sson was obedient or disobedient to his father."]
list_answers = [60, 20 , 70]
return list_quotes,list_questions,list_answers
def get_barcode_no():
#we get the following list from DB test ----------------------------------------------
return 11224444
def insertBarcode():
'''
pic_par = document.add_paragraph()
run = pic_par.add_run()
run.add_picture('barcode03.png', width=Inches(1.0))
paragraph_format_pic = pic_par.paragraph_format
paragraph_format_pic.space_before = Pt(0)
paragraph_format_pic.space_before.pt
paragraph_format_pic.space_after = Pt(0)
paragraph_format_pic.space_after.pt
'''
barcode = document.add_paragraph()
paragraph_format_barcode = barcode.paragraph_format
paragraph_format_barcode.space_before = Pt(0)
paragraph_format_barcode.space_before.pt
paragraph_format_barcode.space_after = Pt(10)
paragraph_format_barcode.space_after.pt
barcode_run = barcode.add_run('Code: ')
barcode_run.bold = True
font_bar = barcode_run.font
font_bar.size= Pt(14)
barcode_run2 = barcode.add_run(str(get_barcode_no()))
#barcode_run2 = barcode.add_run(' '+str(get_barcode_no()))
font_bar2 = barcode_run2.font
font_bar2.size= Pt(12)
#insert page number here at the top of the page
global counter
run_page_no = barcode.add_run('\t\t\t\t\t\t\t\t page '+str(counter))
font_page_no = run_page_no.font
font_page_no.size = Pt(12)
counter += 1
insertHR(barcode)
def myAddPageBreak():
document.add_page_break()
insertBarcode()
global currentPage_font_count
currentPage_font_count = 0
def my_add_paragraph(text, bold_or_not,change_font, underlined, fontSize=None):
#global document
p = document.add_paragraph()
run = p.add_run(text)
if(bold_or_not):
run.bold=True
if(change_font):
font = run.font
font.size = Pt(fontSize)
if(underlined):
run.font.underline = True
return p
def make_answer_rect(last_row,length_answer):
table = document.add_table(last_row, 2)
table.style = 'Table Grid'
#write the lines
my_final_line=""
my_line = ['_']*105
my_final_line = my_final_line.join(my_line)
for i in range(length_answer):
row = table.rows[i+1].cells
row[0].text = '\n\n'+my_final_line
paragraphs = row[0].paragraphs
for paragraph in paragraphs:
for run in paragraph.runs:
font = run.font
#font.size= Pt(14)
font.color.rgb = RGBColor(220,220,220)#light gray
#font.color.rgb = RGBColor(192,192,192)#darker gray=silver#
a = table.cell(0, 0)
b = table.cell(last_row-1, 1)
A = a.merge(b)
#document.add_paragraph(str(length_answer))
def check_end_of_page(which_part):
global currentPage_font_count
max_cnt = 0
if(which_part):#in the part of mcqs
max_cnt = 511#530 #519 = 12*5*2 + 19*11 + 19*10 #this value was computed from the real MS word = 12*5*2 + 20*11 + 19*9
else :#in the part of essay questions
max_cnt = 500#512 #this value was computed from the real MS word = (11 * 24) + (12*5) + (10*5*2) +14*2 (for the header only)+12*5(for word answer)
if(currentPage_font_count >= max_cnt):
myAddPageBreak()
#document.add_paragraph(str(currentPage_font_count))
return True
else:
#document.add_paragraph(str(currentPage_font_count))
return False
def make_essay_questions(questions,answer):
global currentPage_font_count
i=0
for str1 in questions:
#for the answer:
length_answer = math.floor(answer[i]/30) + 1#math.floor(answer[i]/83) + 1
last_length = length_answer*3+2
#check if end of page or not
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12 #10 and 10 for the margins around the question
if(check_end_of_page(False)):
currentPage_font_count += math.ceil(len(str1)/72)*12+12 +last_length*11 + 10*2 + 12
#document.add_paragraph('count= '+str(currentPage_font_count))
#for question
paragraph = document.add_paragraph('')
paragraph_format1 = paragraph.paragraph_format
paragraph_format1.space_before = Pt(10)
paragraph_format1.space_before.pt
paragraph_format1.space_after = Pt(10)
paragraph_format1.space_after.pt
#run = paragraph.add_run('Q-'+str(i+1)+':\n'+str1)
run = paragraph.add_run('Question:\n'+str(i+1)+'- '+str1)
run.bold = True
font = run.font
font.name = 'Calibri'
font.size = Pt(12)
#for the answer:
p_answer = my_add_paragraph("Answer:",False,True,False,12)
paragraph_format2 = p_answer.paragraph_format
paragraph_format2.space_before = Pt(0)
paragraph_format2.space_before.pt
paragraph_format2.space_after = | question_header = ["there is very little .... from the factory, so it's nor bad for the environment", \
"here is your ticket for the museum , the ticket is ....... for two days." , \
"ola spent most of her ..... living on a farm , but she moved to cairo when she was sixteen",\
"it ...... that the population if the world is more than seven billion" ,\
"nour ... father is a surgeon , is my best friend" , \
"i remember things better when i study ....... things such as maps and pictures.",\
" the Qsr-ElNile bridge is not ..... the 6th october bridge "]
return question_header | identifier_body |
|
displayTopology.js | 884567890','9884567890','9884567890','9884567890'],
apstatus :['1','1','0','0','1','1','1','0']
}
*/
var apsdata = newdata;
DATA.count = apsdata.apcount;
DATA.width = (apsdata.apcount == 0?230:apsdata.apcount*230);
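// Layout assumption: each AP card is budgeted 230px of horizontal space, so the inner
// frame grows with the AP count (with a 230px minimum when no AP has registered yet).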
/* Adaptive outer frame */
var $div = $('<div id="ap_page_cover_all"></div>');
$div.css({
'position':'relative',
'width' :'100%',
'overflow':'auto'
});
$con.empty().append($div);
/* Adaptive inner frame */
var $div1 = $('<div id="ap_page_cover"></div>');
$div1.css({
'position':'relative',
'width' :DATA.width+'px',
'min-height':'530px',
'margin' :'0px auto',
'background-color':'#ffffff'
});
$div.append($div1);
/* Draw the inner-frame background */
var $bgcan = $('<canvas width="'+DATA.width+'px'+'" height="530px" id="ap_bgcanvas"></canvas>');
$div1.append($bgcan)
/* Build the AC node box */
var $ac = getACDom();
$div1.append($ac);
/* Get the AP count and info, then build the AP node boxes */
// var apcount = 4;
// var iparr = ['192.168.1.5','192.168.1.5','192.168.1.5','192.168.1.5'];
// var ssidarr = ['ASDASD','34242','哈善良的的','alsdkj++'];
// var ssidarr_5 = ['98679879as','','','llllllll'];
// var channelarr = ['auto(5)','auto(6)','6','8'];
// var channelarr_5 = ['auto(153)','auto','auto','auto(153)'];
var aparr = getAPsDom(apsdata);
$div1.append(aparr);
drawCanvas($bgcan);
/* Bind hover events for the AP cards */
$('.ap_signle_cover_div').mouseenter(function(){
$('.ap_signle_cover_div').css({
'opacity':'0.2',
'z-index':'1'
});
var $t = $(this);
$t.css({
'opacity':'1',
'z-index':'2'
});
$t.find('.ap_inner_table_cover').css({
'overflow':'visible',
// 'box-shadow':'0 0 0px transparent inset',
})
}).mouseleave(function(){
$('.ap_signle_cover_div').css({
'opacity':'1',
'z-index':'1'
});
var $t = $(this);
$t.find('.ap_inner_table_cover').css({
'overflow':'hidden',
// 'box-shadow':'0 0 0px transparent inset',
})
})
}
function getACDom(){
var $ac = $('<div></div>');
$ac.css({
// 'opacity':'0.2',
'position': 'absolute',
'width' :'120px',
'height' :'120px',
'top' :'50px',
'left' :(DATA.width-120)/2 +'px',
'background-color':'#78AFF2',
'border-radius':'50%',
'box-shadow':'0px 0px 6px rgba(0,0,0,0.6)',
'text-align' :'center',
'line-height' :'119px'
});
var $span = $('<span>AC</span>');
$span.css({
'font-size':'48px',
'font-weight':'bold',
'color' :'#ffffff',
})
$ac.append($span)
return $ac;
}
function getAPsDom(apsdata){
var aparr = [];
for(var i = 0;i<apsdata.apcount;i++){
aparr.push(getAPDom(
i+1,
apsdata.iparr[i],
apsdata.ssidarr[i],
apsdata.ssidarr_5[i],
apsdata.channelarr[i],
apsdata.channelarr_5[i],
apsdata.serial[i],
apsdata.apstatus[i],
apsdata.clienCt[i],
apsdata.mac[i]
));
}
if(aparr.length>0){
var leftx = (DATA.width-180*DATA.count)/(DATA.count+1);
aparr.forEach(function(aobj,ai){
var j = Number(ai);
var x = Number(leftx);
aobj.css({
top:'250px',
left:((j+1)*x+180*j)+'px'
});
})
}
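// Spacing sketch: with 180px-wide cards, leftx = (totalWidth - 180*count) / (count + 1)
// is the gap placed before, between and after the cards, so card j sits at
// left = (j+1)*leftx + 180*j and the row comes out evenly distributed.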
/*
if(apsdata.apcount == 4){
aparr[0].css({
top:'250px',
left:'10px'
});
aparr[1].css({
top:'250px',
left:'215px'
});
aparr[2].css({
top:'250px',
left:'420px'
});
aparr[3].css({
top:'250px',
left:'630px'
});
}else if(apsdata.apcount == 3){
aparr[0].css({
top:'250px',
left:'45px'
});
aparr[1].css({
top:'250px',
left:'300px'
});
aparr[2].css({
top:'250px',
left:'565px'
});
}else if(apsdata.apcount == 2){
aparr[0].css({
top:'250px',
left:'100px'
});
aparr[1].css({
top:'250px',
left:'490px'
});
}else if(apsdata.apcount == 1){
aparr[0].css({
top:'250px',
left:'300px'
});
}
if(aparr.length == 0){
var $noap = $('<span>(暂无AP)</span>');
$noap.css({
'position':'absolute',
'font-size':'30px',
'color' :'#dfdfdf',
'font-weight':'bold',
'top' :'253px',
| 'left' :'308px'
})
aparr.push($noap);
}
*/
return aparr;
}
function getAPDom(index,ip,ssid,ssid5,cnl,cnl5,serial,apstatus,clienCt,mac){
var $ap = $('<div class="ap_signle_cover_div"></div>');
$ap.css({
// 'opacity':'0.2',
'transition':'all 0.3s',
'color' :'#000000',
'position': 'absolute',
'width' :'180px',
'min-height' :'220px',
'border-radius':'0px',
'box-shadow':'0px 0px 6px rgba(0,0,0,0.3)',
'font-size':'13px',
// 'font-weight':'bold',
'background-color':(apstatus == '1'?'#F9EE9A':'#e2e2e2')
});
var cnum24 = clienCt.split('/')[0];
var num5 = clienCt.split('/')[1];
var table = '<table>'+
'<tbody>'+
'<tr><td style="width:20px;padding:0 0"> IP地址</td><td> :'+ip+'</td></tr>'+
(ssid == ''?'':('<tr><td style="width:20px;padding:3px 0">SSID(2.4G)</td><td> :'+ssid+'</td></tr>'))+
(ssid == ''?'':('<tr><td style="width:20px;padding:3px 0"> | random_line_split |
|
displayTopology.js | :20px;padding:3px 0">信道(2.4G)</td><td> :'+cnl+'</td></tr>'))+
(ssid == ''?'':('<tr><td style="width:20px;padding:3px 0">用户(2.4G)</td><td> :'+cnum24+'</td></tr>'))+
(ssid5 == ''?'':('<tr><td style="width:20px;padding:3px 0">SSID(5G)</td><td> :'+ssid5+'</td></tr>'))+
(ssid5 == ''?'':('<tr><td style="width:20px;padding:3px 0">信道(5G)</td><td> :'+cnl5+'</td></tr>'))+
(ssid5 == ''?'':('<tr><td style="width:20px;padding:3px 0">用户(5G)</td><td> :'+num5+'</td></tr>'))+
'<tr><td style="width:20px;padding:0 0" colspan="2" > <a class="u-inputLink link-forUser" data-mac="'+mac+'">查看在线用户</a></td></tr>'+
'</tbody>'+
'</table>';
var $table = $(table);
$table.find('.link-forUser').click(function(){
showUserModal($(this))
});
/*
if(ssid !='' &&ssid5 !='' ){
$table.css({
'position':'absolute',
'left' :'24px',
'top' :'39px'
});
}else if((ssid !='' && ssid5 == '') || (ssid5 !='' && ssid == '')){
$table.css({
'position':'absolute',
'left' :'24px',
'top' :'55px'
});
}else{
$table.css({
'position':'absolute',
'left' :'24px',
'top' :'55px'
});
}
*/
$table.css({
'position':'absolute',
'left' :'10px',
'top' :'28px'
});
var $tablecover = $('<div class="ap_inner_table_cover"></div>');
$tablecover.css({
position:'relative',
width:'100%',
height:'225px',
'border-radius':'0px',
'overflow':'hidden',
// 'box-shadow':'0px 0px 18px '+(apstatus == '1'?'#F9EE9A':'#e2e2e2')+' inset'
})
$tablecover.append($table);
var $index = $('<div>AP-'+mac+' '+'<span '+(apstatus == '1'?' style="color:#36B71F">在线':' style="color:#FF0000">离线')+'</span></div>');
$index.css({
'position': 'absolute',
'width':'186px',
'height':'38px',
'border-radius':'6px',
'box-shadow':'0px 0px 6px rgba(0,0,0,0.3)',
'font-size':'15px',
'color':'#333333',
'font-weight':'bold',
'top':'-25px',
'left':'-3px',
'line-height':'42px',
'text-align':'center',
'background-color':(apstatus == '1'?'#F9EE9A':'#e2e2e2')
});
$ap.append($index);
$ap.append($tablecover);
return $ap;
}
function drawCanvas($can){
var c=$can[0];
var ctx=c.getContext("2d");
ctx.strokeStyle="#787878";
var leftx = (DATA.width-180*DATA.count)/(DATA.count+1);
for(var i=0;i<DATA.count;i++){
var x = (i+1)*leftx+180*i+90;
ctx.moveTo(x,225);
ctx.lineTo(DATA.width/2,120);
ctx.stroke();
}
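// Each connector runs from the top-center of an AP card at (x, 225) up to a point just
// below the AC circle at (width/2, 120), mirroring the spacing formula used when the
// cards were positioned.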
/*
if(count == 4){
ctx.moveTo(100,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(300,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(500,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(700,240);
ctx.lineTo(390,120);
ctx.stroke();
}else if(count == 3){
ctx.moveTo(130,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(390,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(670,240);
ctx.lineTo(390,120);
ctx.stroke();
}else if(count == 2){
ctx.moveTo(200,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(590,240);
ctx.lineTo(390,120);
ctx.stroke();
}else if(count == 1){
ctx.moveTo(390,240);
ctx.lineTo(390,120);
ctx.stroke();
}else{
}
*/
}
/* User data modal */
function showUserModal($this){
var mac = $this.attr('data-mac');
$.ajax({
url:'common.asp?optType=aspOutPutWlanCltList&mac='+mac,
type:'GET',
success:function(result){
var doEval = require('Eval');
var codeStr = result,
variableArr = [
'macarrays',
'linkedaps', // associated AP
'linkedSSIDs', // associated SSID
'wlanFre', // frequency band
'signals', // signal
'rates', // rate
'bindwidths', // channel width
'downloads', // download
'downRate', // download rate
'uploads', // upload
'upRate', // upload rate
'time' // online duration
],
result = doEval.doEval(codeStr, variableArr),
isSuccess = result["isSuccessful"];
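// Presumably the ASP endpoint returns a script fragment that assigns the arrays named in
// variableArr; doEval evaluates that fragment and hands the captured values back under
// result["data"], with result["isSuccessful"] flagging whether evaluation worked.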
// Check whether the code string was executed successfully
if (isSuccess) {
var data = result["data"];
var Database = require('Database'),
database = Database.getDatabaseObj(); // reference to the database
// Store it in the global DATA variable so other functions can use it
DATA["userData"] = database;
// 声明字段列表
var fieldArr =[
'ID',
'macarrays',
'linkedaps', /* IP address column */
'linkedSSIDs',
'wlanFre',
'signals',
'rates',
'bindwidths',
'downloads',
'downRate',
'uploads',
'upRate',
'time'
];
var baseData = [];
if(data.macarrays){
data.macarrays.forEach(function(obj,i){
baseData.push([
Number(i)+1,
data.macarrays[i],
data.linkedaps[i],
data.linkedSSIDs[i],
data.wlanFre[i],
data.signals[i],
data.rates[i],
data.bindwidths[i],
data.downloads[i],
data.downRate[i],
data.uploads[i],
data.upRate[i],
data.time[i]
]);
});
}
// Store the data into the data table
database.addTitle(fieldArr);
database.addData(baseData);
makeUserModal(database);
} else {
Tips.showWarning('{parseStrErr}');
}
}
});
function makeUserModal(database){
var modallist = {
id:'userInfo_modal',
title:'用户',
size:'large1',
"btns" : [
/*
{
"type" : 'save',
"clickFunc" : function($this){
}
},
{
"type" : 'reset',
clickFunc : function($this){
}
},
*/
{
"type" : 'close'
}
]
|
};
va | identifier_name |
|
displayTopology.js | 884567890','9884567890','9884567890','9884567890'],
apstatus :['1','1','0','0','1','1','1','0']
}
*/
var apsdata = newdata;
DATA.count = apsdata.apcount;
DATA.width = (apsdata.apcount == 0?230:apsdata.apcount*230);
/* Adaptive outer frame */
var $div = $('<div id="ap_page_cover_all"></div>');
$div.css({
'position':'relative',
'width' :'100%',
'overflow':'auto'
});
$con.empty().append($div);
/* Adaptive inner frame */
var $div1 = $('<div id="ap_page_cover"></div>');
$div1.css({
'position':'relative',
'width' :DATA.width+'px',
'min-height':'530px',
'margin' :'0px auto',
'background-color':'#ffffff'
});
$div.append($div1);
/* Draw the inner wrapper's background canvas */
var $bgcan = $('<canvas width="'+DATA.width+'px'+'" height="530px" id="ap_bgcanvas"></canvas>');
$div1.append($bgcan)
/* Build the AC node box */
var $ac = getACDom();
$div1.append($ac);
/* Get the AP count and info, then build the AP boxes */
// var apcount = 4;
// var iparr = ['192.168.1.5','192.168.1.5','192.168.1.5','192.168.1.5'];
// var ssidarr = ['ASDASD','34242','哈善良的的','alsdkj++'];
// var ssidarr_5 = ['98679879as','','','llllllll'];
// var channelarr = ['auto(5)','auto(6)','6','8'];
// var channelarr_5 = ['auto(153)','auto','auto','auto(153)'];
var aparr = getAPsDom(apsdata);
$div1.append(aparr);
drawCanvas($bgcan);
/* Bind hover (mouseenter/mouseleave) events */
$('.ap_signle_cover_div').mouseenter(function(){
$('.ap_signle_cover_div').css({
'opacity':'0.2',
'z-index':'1'
});
var $t = $(this);
$t.css({
'opacity':'1',
'z-index':'2'
});
$t.find('.ap_inner_table_cover').css({
'overflow':'visible',
// 'box-shadow':'0 0 0px transparent inset',
})
}).mouseleave(function(){
$('.ap_signle_cover_div').css({
'opacity':'1',
'z-index':'1'
});
var $t = $(this);
$t.find('.ap_inner_table_cover').css({
'overflow':'hidden',
// 'box-shadow':'0 0 0px transparent inset',
})
})
}
function getACDom(){
var $ac = $('<div></div>');
$ac.css({
// 'opacity':'0.2',
'position': 'absolute',
'width' :'120px',
'height' :'120px',
'top' :'50px',
'left' :(DATA.width-120)/2 +'px',
'background-color':'#78AFF2',
'border-radius':'50%',
'box-shadow':'0px 0px 6px rgba(0,0,0,0.6)',
'text-align' :'center',
'line-height' :'119px'
});
var $span = $('<span>AC</span>');
$span.css({
'font-size':'48px',
'font-weight':'bold',
'color' :'#ffffff',
})
$ac.append($span)
return $ac;
}
function getAPsDom(apsdata){
var aparr = [];
for(var i = 0;i<apsdata.apcount;i++){
aparr.push(getAPDom(
i+1,
apsdata.iparr[i],
apsdata.ssidarr[i],
apsdata.ssidarr_5[i],
apsdata.channelarr[i],
apsdata.channelarr_5[i],
apsdata.serial[i],
apsdata.apstatus[i],
apsdata.clienCt[i],
apsdata.mac[i]
));
}
if(aparr.length>0){
var leftx = (DATA.width-180*DATA.count)/(DATA.count+1);
aparr.forEach(function(aobj,ai){
var j = Number(ai);
var x = Number(leftx);
aobj.css({
top:'250px',
left:((j+1)*x+180*j)+'px'
});
})
}
/*
if(apsdata.apcount == 4){
aparr[0] | lse if(apsdata.apcount == 3){
aparr[0].css({
top:'250px',
left:'45px'
});
aparr[1].css({
top:'250px',
left:'300px'
});
aparr[2].css({
top:'250px',
left:'565px'
});
}else if(apsdata.apcount == 2){
aparr[0].css({
top:'250px',
left:'100px'
});
aparr[1].css({
top:'250px',
left:'490px'
});
}else if(apsdata.apcount == 1){
aparr[0].css({
top:'250px',
left:'300px'
});
}
if(aparr.length == 0){
var $noap = $('<span>(暂无AP)</span>');
$noap.css({
'position':'absolute',
'font-size':'30px',
'color' :'#dfdfdf',
'font-weight':'bold',
'top' :'253px',
'left' :'308px'
})
aparr.push($noap);
}
*/
return aparr;
}
function getAPDom(index,ip,ssid,ssid5,cnl,cnl5,serial,apstatus,clienCt,mac){
var $ap = $('<div class="ap_signle_cover_div"></div>');
$ap.css({
// 'opacity':'0.2',
'transition':'all 0.3s',
'color' :'#000000',
'position': 'absolute',
'width' :'180px',
'min-height' :'220px',
'border-radius':'0px',
'box-shadow':'0px 0px 6px rgba(0,0,0,0.3)',
'font-size':'13px',
// 'font-weight':'bold',
'background-color':(apstatus == '1'?'#F9EE9A':'#e2e2e2')
});
var cnum24 = clienCt.split('/')[0];
var num5 = clienCt.split('/')[1];
var table = '<table>'+
'<tbody>'+
'<tr><td style="width:20px;padding:0 0"> IP地址</td><td> :'+ip+'</td></tr>'+
(ssid == ''?'':('<tr><td style="width:20px;padding:3px 0">SSID(2.4G)</td><td> :'+ssid+'</td></tr>'))+
(ssid == ''?'':('<tr><td style="width:20px;padding:3px 0 | .css({
top:'250px',
left:'10px'
});
aparr[1].css({
top:'250px',
left:'215px'
});
aparr[2].css({
top:'250px',
left:'420px'
});
aparr[3].css({
top:'250px',
left:'630px'
});
}e | conditional_block |
displayTopology.js | :3px 0">信道(5G)</td><td> :'+cnl5+'</td></tr>'))+
(ssid5 == ''?'':('<tr><td style="width:20px;padding:3px 0">用户(5G)</td><td> :'+num5+'</td></tr>'))+
'<tr><td style="width:20px;padding:0 0" colspan="2" > <a class="u-inputLink link-forUser" data-mac="'+mac+'">查看在线用户</a></td></tr>'+
'</tbody>'+
'</table>';
var $table = $(table);
$table.find('.link-forUser').click(function(){
showUserModal($(this))
});
/*
if(ssid !='' &&ssid5 !='' ){
$table.css({
'position':'absolute',
'left' :'24px',
'top' :'39px'
});
}else if((ssid !='' && ssid5 == '') || (ssid5 !='' && ssid == '')){
$table.css({
'position':'absolute',
'left' :'24px',
'top' :'55px'
});
}else{
$table.css({
'position':'absolute',
'left' :'24px',
'top' :'55px'
});
}
*/
$table.css({
'position':'absolute',
'left' :'10px',
'top' :'28px'
});
var $tablecover = $('<div class="ap_inner_table_cover"></div>');
$tablecover.css({
position:'relative',
width:'100%',
height:'225px',
'border-radius':'0px',
'overflow':'hidden',
// 'box-shadow':'0px 0px 18px '+(apstatus == '1'?'#F9EE9A':'#e2e2e2')+' inset'
})
$tablecover.append($table);
var $index = $('<div>AP-'+mac+' '+'<span '+(apstatus == '1'?' style="color:#36B71F">在线':' style="color:#FF0000">离线')+'</span></div>');
$index.css({
'position': 'absolute',
'width':'186px',
'height':'38px',
'border-radius':'6px',
'box-shadow':'0px 0px 6px rgba(0,0,0,0.3)',
'font-size':'15px',
'color':'#333333',
'font-weight':'bold',
'top':'-25px',
'left':'-3px',
'line-height':'42px',
'text-align':'center',
'background-color':(apstatus == '1'?'#F9EE9A':'#e2e2e2')
});
$ap.append($index);
$ap.append($tablecover);
return $ap;
}
function drawCanvas($can){
var c=$can[0];
var ctx=c.getContext("2d");
ctx.strokeStyle="#787878";
var leftx = (DATA.width-180*DATA.count)/(DATA.count+1);
for(var i=0;i<DATA.count;i++){
var x = (i+1)*leftx+180*i+90;
ctx.moveTo(x,225);
ctx.lineTo(DATA.width/2,120);
ctx.stroke();
}
/*
if(count == 4){
ctx.moveTo(100,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(300,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(500,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(700,240);
ctx.lineTo(390,120);
ctx.stroke();
}else if(count == 3){
ctx.moveTo(130,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(390,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(670,240);
ctx.lineTo(390,120);
ctx.stroke();
}else if(count == 2){
ctx.moveTo(200,240);
ctx.lineTo(390,120);
ctx.stroke();
ctx.moveTo(590,240);
ctx.lineTo(390,120);
ctx.stroke();
}else if(count == 1){
ctx.moveTo(390,240);
ctx.lineTo(390,120);
ctx.stroke();
}else{
}
*/
}
/* User data modal */
function showUserModal($this){
var mac = $this.attr('data-mac');
$.ajax({
url:'common.asp?optType=aspOutPutWlanCltList&mac='+mac,
type:'GET',
success:function(result){
var doEval = require('Eval');
var codeStr = result,
variableArr = [
'macarrays',
'linkedaps', // connected AP
'linkedSSIDs', // connected SSID
'wlanFre', // frequency band
'signals', // signal strength
'rates', // data rate
'bindwidths', // channel width
'downloads', // total download
'downRate', // download rate
'uploads', // total upload
'upRate', // upload rate
'time' // online duration
],
result = doEval.doEval(codeStr, variableArr),
isSuccess = result["isSuccessful"];
// Check whether the code string was evaluated successfully
if (isSuccess) {
var data = result["data"];
var Database = require('Database'),
database = Database.getDatabaseObj(); // reference to the database object
// store it in the global DATA variable so other functions can use it
DATA["userData"] = database;
// declare the field list
var fieldArr =[
'ID',
'macarrays',
'linkedaps', /* IP address column */
'linkedSSIDs',
'wlanFre',
'signals',
'rates',
'bindwidths',
'downloads',
'downRate',
'uploads',
'upRate',
'time'
];
var baseData = [];
if(data.macarrays){
data.macarrays.forEach(function(obj,i){
baseData.push([
Number(i)+1,
data.macarrays[i],
data.linkedaps[i],
data.linkedSSIDs[i],
data.wlanFre[i],
data.signals[i],
data.rates[i],
data.bindwidths[i],
data.downloads[i],
data.downRate[i],
data.uploads[i],
data.upRate[i],
data.time[i]
]);
});
}
// store the rows in the data table
database.addTitle(fieldArr);
database.addData(baseData);
makeUserModal(database);
} else {
Tips.showWarning('{parseStrErr}');
}
}
});
function makeUserModal(database){
var modallist = {
id:'userInfo_modal',
title:'用户',
size:'large1',
"btns" : [
/*
{
"type" : 'save',
"clickFunc" : function($this){
}
},
{
"type" : 'reset',
clickFunc : function($this){
}
},
*/
{
"type" : 'close'
}
]
};
var Modal = | require('Modal');
var modalObj = Modal.getModalObj(modallist);
var TableContainer = require('P_template/common/TableContainer');
var conhtml = TableContainer.getHTML({}),
$tableCon = $(conhtml);
modalObj.insert($tableCon);
var headData = {
"btns" : []
};
// Table configuration data
var tableList = {
"database": database,
// otherFuncAfterRefresh:textClickEvent,
"isSelectAll":false,
"dicArr" : ['common','doEqMgmt','doRFT'],
"titles": {
"ID" : {
| identifier_body |
|
step3-twist.py | init_para_list.append([np.round(a,1),np.round(b,1),theta,A1,A2,np.round(c[0],1),np.round(c[1],1),np.round(c[2],1),'NotYet'])
df_init_params = pd.DataFrame(np.array(init_para_list),columns = ['a','b','theta','A1','A2','cx','cy','cz','status'])
df_init_params.to_csv(init_params_csv,index=False)
get_init_para_csv(auto_dir,monomer_name)
auto_csv_path = os.path.join(auto_dir,'step3-twist.csv')
if not os.path.exists(auto_csv_path):
df_E = pd.DataFrame(columns = ['a','b','theta','A1','A2','cx','cy','cz','E','E_p','E_t','machine_type','status','file_name'])
else:
df_E = pd.read_csv(auto_csv_path)
df_E = df_E[df_E['status']!='InProgress']
df_E.to_csv(auto_csv_path,index=False)
df_init=pd.read_csv(os.path.join(auto_dir,'step3-twist_init_params.csv'))
df_init['status']='NotYet'
df_init.to_csv(os.path.join(auto_dir,'step3-twist_init_params.csv'),index=False)
def main_process(args):
os.chdir(os.path.join(args.auto_dir,'gaussian'))
isOver = False
while not(isOver):
#check
isOver = listen(args)
time.sleep(1)
def listen(args):
auto_dir = args.auto_dir
monomer_name = args.monomer_name
num_nodes = args.num_nodes
isTest = args.isTest
fixed_param_keys = ['A1','A2']
opt_param_keys = ['a','b','theta','cx','cy','cz']
auto_step2_csv = '/home/koyama/Working/interaction/{}/step2-twist/step2-twist.csv'.format(monomer_name)
df_step2 = pd.read_csv(auto_step2_csv)
auto_csv = os.path.join(auto_dir,'step3-twist.csv')
df_E = pd.read_csv(auto_csv)
df_queue = df_E.loc[df_E['status']=='InProgress',['machine_type','file_name','A1','A2','a','b','theta','cx','cy','cz']]
machine_type_list = df_queue['machine_type'].values.tolist()
len_queue = len(df_queue)
maxnum_machine2 = 3#int(num_nodes/2)
for idx,row in zip(df_queue.index,df_queue.values):
machine_type,file_name,A1,A2,a,b,theta,cx,cy,cz = row
log_filepath = os.path.join(*[auto_dir,'gaussian',file_name])
if not(os.path.exists(log_filepath)):#skip if the log file has not been created yet (it may be just about to appear)
continue
E_list=get_E(log_filepath)
if len(E_list)!=5:
continue
else:
len_queue-=1;machine_type_list.remove(machine_type)
Ei0,Eip1,Eip2,Eit1,Eit2=map(float,E_list)
Eit3 = Eit2; Eit4 = Eit1
try:
Ep, Et = df_step2[(df_step2['A1']==A1)&(df_step2['A2']==A2)&(df_step2['theta']==theta)&(df_step2['a']==a)&(df_step2['b']==b)][['E_p','E_t']].values[0]
except IndexError:
inner_params_dict = {"A1":A1,"A2":A2,"a":a,"b":b,"theta":theta,'cx':0,'cy':0,'cz':0}
inner_file_name = exec_gjf(auto_dir, monomer_name, inner_params_dict, machine_type,isInterlayer=False,isTest=isTest)
time.sleep(200)#one calculation finishes in about 1:40
is_inner_over = False
while not(is_inner_over):
time.sleep(30)#one calculation finishes in about 1:40
E_inner_list=get_E(inner_file_name)
is_inner_over = len(E_inner_list)==2
Ep, Et=map(float,E_inner_list)
df_newline = pd.Series({**inner_params_dict,'E':2*Ep+4*Et,'E_p':Ep,'E_t':Et,'machine_type':machine_type,'status':'Done','file_name':inner_file_name})
df_step2=df_step2.append(df_newline,ignore_index=True)
df_step2.to_csv(auto_step2_csv,index=False)
E = 4*Et + 2*Ep + 2*(Ei0 + Eip1+ Eip2 + Eit1 + Eit2 + Eit3 + Eit4)
df_E.loc[idx, ['E_p','E_t','E_i0','E_ip1','E_ip2','E_it1','E_it2','E_it3','E_it4','E','status']] = [Ep,Et,Ei0,Eip1,Eip2,Eit1,Eit2,Eit3,Eit4,E,'Done']
df_E.to_csv(auto_csv,index=False)
break#handle only one finished job per pass; two finishing at once would cause trouble
isAvailable = len_queue < num_nodes
machine2IsFull = machine_type_list.count(2) >= maxnum_machine2
machine_type = 1 if machine2IsFull else 2
if isAvailable:
params_dict = get_params_dict(auto_dir,num_nodes, fixed_param_keys, opt_param_keys, monomer_name)
if len(params_dict)!=0:#if the end is not yet in sight
alreadyCalculated = check_calc_status(auto_dir,params_dict)
if not(alreadyCalculated):
file_name = exec_gjf(auto_dir, monomer_name, {**params_dict}, machine_type,isInterlayer=True,isTest=isTest)
df_newline = pd.Series({**params_dict,'E':0.,'E_p':0.,'E_t':0.,'E_i0':0.,'E_ip1':0.,'E_ip2':0.,'E_it1':0.,'E_it2':0.,'E_it3':0.,'E_it4':0.,'machine_type':machine_type,'status':'InProgress','file_name':file_name})
df_E=df_E.append(df_newline,ignore_index=True)
df_E.to_csv(auto_csv,index=False)
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_init_params_done = filter_df(df_init_params,{'status':'Done'})
isOver = True if len(df_init_params_done)==len(df_init_params) else False
return isOver
def check_calc_status(auto_dir,params_dict):
df_E= pd.read_csv(os.path.join(auto_dir,'step3-twist.csv'))
if len(df_E)==0:
return False
df_E_filtered = filter_df | nit_params.csvとstep3-twist.csvがauto_dirの下にある
"""
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_cur = pd.read_csv(os.path.join(auto_dir, 'step3-twist.csv'))
df_init_params_inprogress = df_init_params[df_init_params['status']=='InProgress']
#initial start-up: fill all the nodes first
if len(df_init_params_inprogress) < num_nodes:
df_init_params_notyet = df_init_params[df_init_params['status']=='NotYet']
for index in df_init_params_notyet.index:
df_init_params = update_value_in_df(df_init_params,index,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
return params_dict
for index in df_init_params.index:
df_init_params = pd.read_csv(init_params_csv)
init_params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
fixed_params_dict = df_init_params.loc[index,fixed_param_keys].to_dict()
isDone, opt_params_dict = get_opt_params_dict(df_cur, init_params_dict,fixed_params_dict, monomer_name)
if isDone:
# update the status in df_init_params
df_init_params = update_value_in_df(df_init_params,index,'status','Done')
if np.max(df_init_params.index) < index+1:
status = 'Done'
else:
status = get_values_from_df(df_init_params,index+1,'status')
df_init_params.to_csv(init_params_csv,index=False)
if status=='NotYet':
opt_params_dict = get_values_from_df(df_init_params,index+1,opt_param_keys)
df_init_params = update_value_in_df(df_init_params,index+1,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
return {**fixed_params_dict | (df_E, params_dict)
df_E_filtered = df_E_filtered.reset_index(drop=True)
try:
status = get_values_from_df(df_E_filtered,0,'status')
return status=='Done'
except KeyError:
return False
def get_params_dict(auto_dir, num_nodes, fixed_param_keys, opt_param_keys, monomer_name):
"""
Preconditions:
step3-twist_i | identifier_body |
step3-twist.py | init_para_list.append([np.round(a,1),np.round(b,1),theta,A1,A2,np.round(c[0],1),np.round(c[1],1),np.round(c[2],1),'NotYet'])
df_init_params = pd.DataFrame(np.array(init_para_list),columns = ['a','b','theta','A1','A2','cx','cy','cz','status'])
df_init_params.to_csv(init_params_csv,index=False)
get_init_para_csv(auto_dir,monomer_name)
auto_csv_path = os.path.join(auto_dir,'step3-twist.csv')
if not os.path.exists(auto_csv_path):
df_E = pd.DataFrame(columns = ['a','b','theta','A1','A2','cx','cy','cz','E','E_p','E_t','machine_type','status','file_name'])
else:
df_E = pd.read_csv(auto_csv_path)
df_E = df_E[df_E['status']!='InProgress']
df_E.to_csv(auto_csv_path,index=False)
df_init=pd.read_csv(os.path.join(auto_dir,'step3-twist_init_params.csv'))
df_init['status']='NotYet'
df_init.to_csv(os.path.join(auto_dir,'step3-twist_init_params.csv'),index=False)
def main_process(args):
os.chdir(os.path.join(args.auto_dir,'gaussian'))
isOver = False
while not(isOver):
#check
isOver = listen(args)
time.sleep(1)
def listen(args):
auto_dir = args.auto_dir
monomer_name = args.monomer_name
num_nodes = args.num_nodes
isTest = args.isTest
fixed_param_keys = ['A1','A2']
opt_param_keys = ['a','b','theta','cx','cy','cz']
auto_step2_csv = '/home/koyama/Working/interaction/{}/step2-twist/step2-twist.csv'.format(monomer_name)
df_step2 = pd.read_csv(auto_step2_csv)
auto_csv = os.path.join(auto_dir,'step3-twist.csv')
df_E = pd.read_csv(auto_csv)
df_queue = df_E.loc[df_E['status']=='InProgress',['machine_type','file_name','A1','A2','a','b','theta','cx','cy','cz']]
machine_type_list = df_queue['machine_type'].values.tolist()
len_queue = len(df_queue)
maxnum_machine2 = 3#int(num_nodes/2)
for idx,row in zip(df_queue.index,df_queue.values):
machine_type,file_name,A1,A2,a,b,theta,cx,cy,cz = row
log_filepath = os.path.join(*[auto_dir,'gaussian',file_name])
if not(os.path.exists(log_filepath)):#skip if the log file has not been created yet (it may be just about to appear)
continue
E_list=get_E(log_filepath)
if len(E_list)!=5:
continue
else:
len_queue-=1;machine_type_list.remove(machine_type)
Ei0,Eip1,Eip2,Eit1,Eit2=map(float,E_list)
Eit3 = Eit2; Eit4 = Eit1
try:
Ep, Et = df_step2[(df_step2['A1']==A1)&(df_step2['A2']==A2)&(df_step2['theta']==theta)&(df_step2['a']==a)&(df_step2['b']==b)][['E_p','E_t']].values[0]
except IndexError:
inner_params_dict = {"A1":A1,"A2":A2,"a":a,"b":b,"theta":theta,'cx':0,'cy':0,'cz':0}
inner_file_name = exec_gjf(auto_dir, monomer_name, inner_params_dict, machine_type,isInterlayer=False,isTest=isTest)
time.sleep(200)#one calculation finishes in about 1:40
is_inner_over = False
while not(is_inner_over):
time.sleep(30)#one calculation finishes in about 1:40
E_inner_list=get_E(inner_file_name)
is_inner_over = len(E_inner_list)==2
Ep, Et=map(float,E_inner_list)
df_newline = pd.Series({**inner_params_dict,'E':2*Ep+4*Et,'E_p':Ep,'E_t':Et,'machine_type':machine_type,'status':'Done','file_name':inner_file_name})
df_step2=df_step2.append(df_newline,ignore_index=True)
df_step2.to_csv(auto_step2_csv,index=False)
E = 4*Et + 2*Ep + 2*(Ei0 + Eip1+ Eip2 + Eit1 + Eit2 + Eit3 + Eit4)
df_E.loc[idx, ['E_p','E_t','E_i0','E_ip1','E_ip2','E_it1','E_it2','E_it3','E_it4','E','status']] = [Ep,Et,Ei0,Eip1,Eip2,Eit1,Eit2,Eit3,Eit4,E,'Done']
df_E.to_csv(auto_csv,index=False)
break#handle only one finished job per pass; two finishing at once would cause trouble
isAvailable = len_queue < num_nodes
machine2IsFull = machine_type_list.count(2) >= maxnum_machine2
machine_type = 1 if machine2IsFull else 2
if isAvailable:
params_dict = get_params_dict(auto_dir,num_nodes, fixed_param_keys, opt_param_keys, monomer_name)
if len(params_dict)!=0:#if the end is not yet in sight
alreadyCalculated = check_calc_status(auto_dir,params_dict)
if not(alreadyCalculated):
file_name = exec_gjf(auto_dir, monomer_name, {**params_dict}, machine_type,isInterlayer=True,isTest=isTest)
df_newline = pd.Series({**params_dict,'E':0.,'E_p':0.,'E_t':0.,'E_i0':0.,'E_ip1':0.,'E_ip2':0.,'E_it1':0.,'E_it2':0.,'E_it3':0.,'E_it4':0.,'machine_type':machine_type,'status':'InProgress','file_name':file_name})
df_E=df_E.append(df_newline,ignore_index=True)
df_E.to_csv(auto_csv,index=False)
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_init_params_done = filter_df(df_init_params,{'status':'Done'})
isOver = True if len(df_init_params_done)==len(df_init_params) else False
return isOver
def check_calc_status(auto_dir,params_dict):
df_E= pd.read_csv(os.path.join(auto_dir,'step3-twist.csv'))
if len(df_E)==0:
return False
df_E_filtered = filter_df(df_E, params_dict)
df_E_filtered = df_E_filtered.reset_index(drop=True)
try:
status = get_values_from_df(df_E_filtered,0,'status')
return status=='Done'
except KeyError:
return False
def get_params_dict(auto_dir, num_nodes, fixed_param_keys, opt_param_keys, monomer_name):
"""
Preconditions:
step3-twist_init_params.csv and step3-twist.csv exist under auto_dir
"""
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_cur = pd.read_csv(os.path.join(auto_dir, 'step3-twist.csv'))
df_init_params_inprogress = df_init_params[df_init_params['status']=='InProgress']
#initial start-up: fill all the nodes first
if len(df_init_params_inprogress) < num_nodes:
df_init_params_notyet = df_init_params[df_init_params['status']=='NotYet']
for index in df_init_params_notyet.index:
df_init_params = update_value_i | param_keys].to_dict()
fixed_params_dict = df_init_params.loc[index,fixed_param_keys].to_dict()
isDone, opt_params_dict = get_opt_params_dict(df_cur, init_params_dict,fixed_params_dict, monomer_name)
if isDone:
# df_init_paramsのstatusをupdate
df_init_params = update_value_in_df(df_init_params,index,'status','Done')
if np.max(df_init_params.index) < index+1:
status = 'Done'
else:
status = get_values_from_df(df_init_params,index+1,'status')
df_init_params.to_csv(init_params_csv,index=False)
if status=='NotYet':
opt_params_dict = get_values_from_df(df_init_params,index+1,opt_param_keys)
df_init_params = update_value_in_df(df_init_params,index+1,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
return {**fixed_params_dict | n_df(df_init_params,index,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
return params_dict
for index in df_init_params.index:
df_init_params = pd.read_csv(init_params_csv)
init_params_dict = df_init_params.loc[index,fixed_param_keys+opt_ | conditional_block |
step3-twist.py | ,Eip2,Eit1,Eit2=map(float,E_list)
Eit3 = Eit2; Eit4 = Eit1
try:
Ep, Et = df_step2[(df_step2['A1']==A1)&(df_step2['A2']==A2)&(df_step2['theta']==theta)&(df_step2['a']==a)&(df_step2['b']==b)][['E_p','E_t']].values[0]
except IndexError:
inner_params_dict = {"A1":A1,"A2":A2,"a":a,"b":b,"theta":theta,'cx':0,'cy':0,'cz':0}
inner_file_name = exec_gjf(auto_dir, monomer_name, inner_params_dict, machine_type,isInterlayer=False,isTest=isTest)
time.sleep(200)#one calculation finishes in about 1:40
is_inner_over = False
while not(is_inner_over):
time.sleep(30)#one calculation finishes in about 1:40
E_inner_list=get_E(inner_file_name)
is_inner_over = len(E_inner_list)==2
Ep, Et=map(float,E_inner_list)
df_newline = pd.Series({**inner_params_dict,'E':2*Ep+4*Et,'E_p':Ep,'E_t':Et,'machine_type':machine_type,'status':'Done','file_name':inner_file_name})
df_step2=df_step2.append(df_newline,ignore_index=True)
df_step2.to_csv(auto_step2_csv,index=False)
E = 4*Et + 2*Ep + 2*(Ei0 + Eip1+ Eip2 + Eit1 + Eit2 + Eit3 + Eit4)
df_E.loc[idx, ['E_p','E_t','E_i0','E_ip1','E_ip2','E_it1','E_it2','E_it3','E_it4','E','status']] = [Ep,Et,Ei0,Eip1,Eip2,Eit1,Eit2,Eit3,Eit4,E,'Done']
df_E.to_csv(auto_csv,index=False)
break#handle only one finished job per pass; two finishing at once would cause trouble
isAvailable = len_queue < num_nodes
machine2IsFull = machine_type_list.count(2) >= maxnum_machine2
machine_type = 1 if machine2IsFull else 2
if isAvailable:
params_dict = get_params_dict(auto_dir,num_nodes, fixed_param_keys, opt_param_keys, monomer_name)
if len(params_dict)!=0:#if the end is not yet in sight
alreadyCalculated = check_calc_status(auto_dir,params_dict)
if not(alreadyCalculated):
file_name = exec_gjf(auto_dir, monomer_name, {**params_dict}, machine_type,isInterlayer=True,isTest=isTest)
df_newline = pd.Series({**params_dict,'E':0.,'E_p':0.,'E_t':0.,'E_i0':0.,'E_ip1':0.,'E_ip2':0.,'E_it1':0.,'E_it2':0.,'E_it3':0.,'E_it4':0.,'machine_type':machine_type,'status':'InProgress','file_name':file_name})
df_E=df_E.append(df_newline,ignore_index=True)
df_E.to_csv(auto_csv,index=False)
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_init_params_done = filter_df(df_init_params,{'status':'Done'})
isOver = True if len(df_init_params_done)==len(df_init_params) else False
return isOver
def check_calc_status(auto_dir,params_dict):
df_E= pd.read_csv(os.path.join(auto_dir,'step3-twist.csv'))
if len(df_E)==0:
return False
df_E_filtered = filter_df(df_E, params_dict)
df_E_filtered = df_E_filtered.reset_index(drop=True)
try:
status = get_values_from_df(df_E_filtered,0,'status')
return status=='Done'
except KeyError:
return False
def get_params_dict(auto_dir, num_nodes, fixed_param_keys, opt_param_keys, monomer_name):
"""
Preconditions:
step3-twist_init_params.csv and step3-twist.csv exist under auto_dir
"""
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_cur = pd.read_csv(os.path.join(auto_dir, 'step3-twist.csv'))
df_init_params_inprogress = df_init_params[df_init_params['status']=='InProgress']
#initial start-up: fill all the nodes first
if len(df_init_params_inprogress) < num_nodes:
df_init_params_notyet = df_init_params[df_init_params['status']=='NotYet']
for index in df_init_params_notyet.index:
df_init_params = update_value_in_df(df_init_params,index,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
return params_dict
for index in df_init_params.index:
df_init_params = pd.read_csv(init_params_csv)
init_params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
fixed_params_dict = df_init_params.loc[index,fixed_param_keys].to_dict()
isDone, opt_params_dict = get_opt_params_dict(df_cur, init_params_dict,fixed_params_dict, monomer_name)
if isDone:
# update the status in df_init_params
df_init_params = update_value_in_df(df_init_params,index,'status','Done')
if np.max(df_init_params.index) < index+1:
status = 'Done'
else:
status = get_values_from_df(df_init_params,index+1,'status')
df_init_params.to_csv(init_params_csv,index=False)
if status=='NotYet':
opt_params_dict = get_values_from_df(df_init_params,index+1,opt_param_keys)
df_init_params = update_value_in_df(df_init_params,index+1,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
return {**fixed_params_dict,**opt_params_dict}
else:
continue
else:
df_inprogress = filter_df(df_cur, {**fixed_params_dict,**opt_params_dict,'status':'InProgress'})
if len(df_inprogress)>=1:
continue
return {**fixed_params_dict,**opt_params_dict}
return {}
def get_opt_params_dict(df_cur, init_params_dict,fixed_params_dict, monomer_name):
df_val = filter_df(df_cur, fixed_params_dict)
a_init_prev = init_params_dict['a']; b_init_prev = init_params_dict['b']; theta_init_prev = init_params_dict['theta']
A1 = init_params_dict['A1']; A2 = init_params_dict['A2']
while True:
E_list=[];heri_list=[]
for a in [a_init_prev-0.1,a_init_prev,a_init_prev+0.1]:
for b in [b_init_prev-0.1,b_init_prev,b_init_prev+0.1]:
a = np.round(a,1);b = np.round(b,1)
for theta in [theta_init_prev-0.5,theta_init_prev,theta_init_prev+0.5]:
df_val_ab = df_val[
(df_val['a']==a)&(df_val['b']==b)&(df_val['theta']==theta)&
(df_val['A1']==A1)&(df_val['A2']==A2)&
(df_val['status']=='Done')
]
if len(df_val_ab)==0:
cx, cy, cz = get_c_vec_vdw(monomer_name,A1,A2,a,b,theta)
cx, cy, cz = np.round(cx,1), np.round(cy,1), np.round(cz,1)
return False,{'a':a,'b':b,'theta':theta, "cx":cx, "cy":cy, "cz":cz }
heri_list.append([a,b,theta]);E_list.append(df_val_ab['E'].values[0])
a_init,b_init,theta_init = heri_list[np.argmin(np.array(E_list))]
if a_init==a_init_prev and b_init==b_init_prev and theta_init==theta_init_prev:
cx, cy, cz = get_c_vec_vdw(monomer_name,A1,A2,a_init,b_init,theta_init)
cx, cy, cz = np.round(cx,1), np.round(cy,1), np.round(cz,1)
return True,{'a':a_init,'b':b_init, 'theta':theta_init, "cx":cx, "cy":cy, "cz":cz }
else:
a_init_prev=a_init;b_init_prev=b_init;theta_init_prev=theta_init
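# Illustrative sketch (not called anywhere in this script): get_opt_params_dict above performs a
# local descent on the 3x3x3 neighbourhood of the current (a, b, theta) -- steps of +-0.1 for the
# cell lengths and +-0.5 for the angle -- requesting any missing energy and moving to the lowest
# neighbour until the centre point is itself the minimum. The helper below only mirrors that
# neighbour enumeration; the names and step sizes are copied from the loop above.
def _enumerate_neighbours(a0, b0, theta0):
    neighbours = []
    for a in [a0 - 0.1, a0, a0 + 0.1]:
        for b in [b0 - 0.1, b0, b0 + 0.1]:
            for theta in [theta0 - 0.5, theta0, theta0 + 0.5]:
                neighbours.append((np.round(a, 1), np.round(b, 1), theta))
    return neighbours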
def get_values_from_df(df,index,key):
return df.loc[index,key]
def update_value_in_df(df,index,key,value):
df.loc[index,key]=value
return df
def filter_df(df, dict_filter):
query = []
for k, v in dict_filter.items():
| if type(v) | identifier_name |
|
step3-twist.py | init_para_list.append([np.round(a,1),np.round(b,1),theta,A1,A2,np.round(c[0],1),np.round(c[1],1),np.round(c[2],1),'NotYet'])
df_init_params = pd.DataFrame(np.array(init_para_list),columns = ['a','b','theta','A1','A2','cx','cy','cz','status'])
df_init_params.to_csv(init_params_csv,index=False)
get_init_para_csv(auto_dir,monomer_name)
auto_csv_path = os.path.join(auto_dir,'step3-twist.csv')
if not os.path.exists(auto_csv_path):
df_E = pd.DataFrame(columns = ['a','b','theta','A1','A2','cx','cy','cz','E','E_p','E_t','machine_type','status','file_name'])
else:
df_E = pd.read_csv(auto_csv_path)
df_E = df_E[df_E['status']!='InProgress']
df_E.to_csv(auto_csv_path,index=False)
df_init=pd.read_csv(os.path.join(auto_dir,'step3-twist_init_params.csv'))
df_init['status']='NotYet'
df_init.to_csv(os.path.join(auto_dir,'step3-twist_init_params.csv'),index=False)
def main_process(args):
os.chdir(os.path.join(args.auto_dir,'gaussian'))
isOver = False
while not(isOver):
#check
isOver = listen(args)
time.sleep(1)
def listen(args):
auto_dir = args.auto_dir
monomer_name = args.monomer_name
num_nodes = args.num_nodes
isTest = args.isTest
fixed_param_keys = ['A1','A2']
opt_param_keys = ['a','b','theta','cx','cy','cz']
auto_step2_csv = '/home/koyama/Working/interaction/{}/step2-twist/step2-twist.csv'.format(monomer_name)
df_step2 = pd.read_csv(auto_step2_csv)
auto_csv = os.path.join(auto_dir,'step3-twist.csv')
df_E = pd.read_csv(auto_csv)
df_queue = df_E.loc[df_E['status']=='InProgress',['machine_type','file_name','A1','A2','a','b','theta','cx','cy','cz']]
machine_type_list = df_queue['machine_type'].values.tolist()
len_queue = len(df_queue)
maxnum_machine2 = 3#int(num_nodes/2)
for idx,row in zip(df_queue.index,df_queue.values):
machine_type,file_name,A1,A2,a,b,theta,cx,cy,cz = row
log_filepath = os.path.join(*[auto_dir,'gaussian',file_name])
if not(os.path.exists(log_filepath)):#skip if the log file has not been created yet (it may be just about to appear)
continue
E_list=get_E(log_filepath)
if len(E_list)!=5:
continue
else:
len_queue-=1;machine_type_list.remove(machine_type)
Ei0,Eip1,Eip2,Eit1,Eit2=map(float,E_list)
Eit3 = Eit2; Eit4 = Eit1
try:
Ep, Et = df_step2[(df_step2['A1']==A1)&(df_step2['A2']==A2)&(df_step2['theta']==theta)&(df_step2['a']==a)&(df_step2['b']==b)][['E_p','E_t']].values[0]
except IndexError:
inner_params_dict = {"A1":A1,"A2":A2,"a":a,"b":b,"theta":theta,'cx':0,'cy':0,'cz':0}
inner_file_name = exec_gjf(auto_dir, monomer_name, inner_params_dict, machine_type,isInterlayer=False,isTest=isTest)
time.sleep(200)#one calculation finishes in about 1:40
is_inner_over = False
while not(is_inner_over):
time.sleep(30)#one calculation finishes in about 1:40
E_inner_list=get_E(inner_file_name)
is_inner_over = len(E_inner_list)==2
Ep, Et=map(float,E_inner_list)
df_newline = pd.Series({**inner_params_dict,'E':2*Ep+4*Et,'E_p':Ep,'E_t':Et,'machine_type':machine_type,'status':'Done','file_name':inner_file_name})
df_step2=df_step2.append(df_newline,ignore_index=True)
df_step2.to_csv(auto_step2_csv,index=False)
E = 4*Et + 2*Ep + 2*(Ei0 + Eip1+ Eip2 + Eit1 + Eit2 + Eit3 + Eit4)
df_E.loc[idx, ['E_p','E_t','E_i0','E_ip1','E_ip2','E_it1','E_it2','E_it3','E_it4','E','status']] = [Ep,Et,Ei0,Eip1,Eip2,Eit1,Eit2,Eit3,Eit4,E,'Done']
df_E.to_csv(auto_csv,index=False)
break#handle only one finished job per pass; two finishing at once would cause trouble
isAvailable = len_queue < num_nodes
machine2IsFull = machine_type_list.count(2) >= maxnum_machine2
machine_type = 1 if machine2IsFull else 2
if isAvailable:
params_dict = get_params_dict(auto_dir,num_nodes, fixed_param_keys, opt_param_keys, monomer_name)
if len(params_dict)!=0:#if the end is not yet in sight
alreadyCalculated = check_calc_status(auto_dir,params_dict)
if not(alreadyCalculated):
file_name = exec_gjf(auto_dir, monomer_name, {**params_dict}, machine_type,isInterlayer=True,isTest=isTest)
df_newline = pd.Series({**params_dict,'E':0.,'E_p':0.,'E_t':0.,'E_i0':0.,'E_ip1':0.,'E_ip2':0.,'E_it1':0.,'E_it2':0.,'E_it3':0.,'E_it4':0.,'machine_type':machine_type,'status':'InProgress','file_name':file_name})
df_E=df_E.append(df_newline,ignore_index=True)
df_E.to_csv(auto_csv,index=False)
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_init_params_done = filter_df(df_init_params,{'status':'Done'})
isOver = True if len(df_init_params_done)==len(df_init_params) else False
return isOver
def check_calc_status(auto_dir,params_dict):
df_E= pd.read_csv(os.path.join(auto_dir,'step3-twist.csv'))
if len(df_E)==0:
return False
df_E_filtered = filter_df(df_E, params_dict)
df_E_filtered = df_E_filtered.reset_index(drop=True)
try:
status = get_values_from_df(df_E_filtered,0,'status')
return status=='Done'
except KeyError:
return False
def get_params_dict(auto_dir, num_nodes, fixed_param_keys, opt_param_keys, monomer_name):
"""
Preconditions:
step3-twist_init_params.csv and step3-twist.csv exist under auto_dir
"""
init_params_csv=os.path.join(auto_dir, 'step3-twist_init_params.csv')
df_init_params = pd.read_csv(init_params_csv)
df_cur = pd.read_csv(os.path.join(auto_dir, 'step3-twist.csv'))
df_init_params_inprogress = df_init_params[df_init_params['status']=='InProgress']
#initial start-up: fill all the nodes first
if len(df_init_params_inprogress) < num_nodes:
df_init_params_notyet = df_init_params[df_init_params['status']=='NotYet']
for index in df_init_params_notyet.index:
| for index in df_init_params.index:
df_init_params = pd.read_csv(init_params_csv)
init_params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
fixed_params_dict = df_init_params.loc[index,fixed_param_keys].to_dict()
isDone, opt_params_dict = get_opt_params_dict(df_cur, init_params_dict,fixed_params_dict, monomer_name)
if isDone:
# update the status in df_init_params
df_init_params = update_value_in_df(df_init_params,index,'status','Done')
if np.max(df_init_params.index) < index+1:
status = 'Done'
else:
status = get_values_from_df(df_init_params,index+1,'status')
df_init_params.to_csv(init_params_csv,index=False)
if status=='NotYet':
opt_params_dict = get_values_from_df(df_init_params,index+1,opt_param_keys)
df_init_params = update_value_in_df(df_init_params,index+1,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
return {**fixed_params_dict | df_init_params = update_value_in_df(df_init_params,index,'status','InProgress')
df_init_params.to_csv(init_params_csv,index=False)
params_dict = df_init_params.loc[index,fixed_param_keys+opt_param_keys].to_dict()
return params_dict
| random_line_split |
lib.rs |
}
impl Default for PickState {
fn default() -> Self {
PickState {
cursor_event_reader: EventReader::default(),
ordered_pick_list: Vec::new(),
topmost_pick: None,
}
}
}
/// Holds the entity associated with a mesh as well as its computed intersection from a pick ray cast
#[derive(Debug, PartialOrd, PartialEq, Copy, Clone)]
pub struct PickIntersection {
entity: Entity,
pick_coord_ndc: Vec3,
}
impl PickIntersection {
fn new(entity: Entity, pick_coord_ndc: Vec3) -> Self {
PickIntersection {
entity,
pick_coord_ndc,
}
}
pub fn get_pick_coord_ndc(&self) -> Vec3 {
self.pick_coord_ndc
}
pub fn get_pick_coord_world(&self, projection_matrix: Mat4, view_matrix: Mat4) -> Vec3 {
let world_pos: Vec4 = (projection_matrix * view_matrix)
.inverse()
.mul_vec4(self.pick_coord_ndc.extend(1.0));
(world_pos / world_pos.w()).truncate().into()
}
}
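// Hypothetical usage sketch (the variable names below are assumptions, not taken from this file):
// an intersection's NDC coordinate can be unprojected back to world space with the same matrices
// pick_mesh uses, where the view matrix is the inverse of the camera's transform:
//
//     let world = intersection.get_pick_coord_world(camera.projection_matrix, transform.value.inverse());
//
// The perspective divide by `w` inside the method is what undoes the projection.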
#[derive(Debug)]
pub struct PickHighlightParams {
hover_color: Color,
selection_color: Color,
}
impl PickHighlightParams {
pub fn set_hover_color(&mut self, color: Color) {
self.hover_color = color;
}
pub fn set_selection_color(&mut self, color: Color) {
self.selection_color = color;
}
}
impl Default for PickHighlightParams {
fn default() -> Self {
PickHighlightParams {
hover_color: Color::rgb(0.3, 0.5, 0.8),
selection_color: Color::rgb(0.3, 0.8, 0.5),
}
}
}
/// Marks an entity as pickable
#[derive(Debug)]
pub struct PickableMesh {
camera_entity: Entity,
bounding_sphere: Option<BoundSphere>,
pick_coord_ndc: Option<Vec3>,
}
impl PickableMesh {
pub fn new(camera_entity: Entity) -> Self {
PickableMesh {
camera_entity,
bounding_sphere: None,
pick_coord_ndc: None,
}
}
pub fn get_pick_coord_ndc(&self) -> Option<Vec3> {
self.pick_coord_ndc
}
}
/// Meshes with `SelectablePickMesh` will have selection state managed
#[derive(Debug)]
pub struct SelectablePickMesh {
selected: bool,
}
impl SelectablePickMesh {
pub fn new() -> Self {
SelectablePickMesh { selected: false }
}
pub fn selected(&self) -> bool {
self.selected
}
}
/// Meshes with `HighlightablePickMesh` will be highlighted when hovered over. If the mesh also has
/// the `SelectablePickMesh` component, it will highlight when selected.
#[derive(Debug)]
pub struct HighlightablePickMesh {
// Stores the initial color of the mesh material prior to selecting/hovering
initial_color: Option<Color>,
}
impl HighlightablePickMesh {
pub fn new() -> Self {
HighlightablePickMesh {
initial_color: None,
}
}
}
/// Defines a bounding sphere with a center point coordinate and a radius, used for picking
#[derive(Debug)]
struct BoundSphere {
mesh_radius: f32,
transformed_radius: Option<f32>,
ndc_def: Option<NdcBoundingCircle>,
}
impl From<&Mesh> for BoundSphere {
fn from(mesh: &Mesh) -> Self {
let mut mesh_radius = 0f32;
if mesh.primitive_topology != PrimitiveTopology::TriangleList {
panic!("Non-TriangleList mesh supplied for bounding sphere generation")
}
let mut vertex_positions = Vec::new();
for attribute in mesh.attributes.iter() {
if attribute.name == VertexAttribute::POSITION {
vertex_positions = match &attribute.values {
VertexAttributeValues::Float3(positions) => positions.clone(),
_ => panic!("Unexpected vertex types in VertexAttribute::POSITION"),
};
}
}
if let Some(indices) = &mesh.indices {
for index in indices.iter() {
mesh_radius =
mesh_radius.max(Vec3::from(vertex_positions[*index as usize]).length());
}
}
BoundSphere {
mesh_radius,
transformed_radius: None,
ndc_def: None,
}
}
}
/// Created from a BoundSphere, this represents a circle that bounds the entity's mesh when the
/// bounding sphere is projected onto the screen. Note this is not as simple as transforming the
/// sphere's origin into ndc and copying the radius. Due to rectilinear projection, the sphere
/// will be projected onto the screen as an ellipse if it is not perfectly centered at 0,0 in ndc.
/// Scale ndc circle based on linear function "abs(x(sec(arctan(tan(b/2)))-1)) + 1" where b = FOV
/// All the trig can be simplified to a coeff "c" abs(x*c+1)
#[derive(Debug)]
struct NdcBoundingCircle {
center: Vec2,
radius: f32,
}
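// Sketch of the scaling described above (illustrative; no such helper exists in this file). Since
// arctan(tan(b/2)) is just b/2 for the field-of-view range of interest, the coefficient collapses
// to c = 1/cos(fov_y/2) - 1, and the projected radius grows roughly linearly with the circle
// centre's NDC offset along an axis:
#[allow(dead_code)]
fn ndc_radius_scale(ndc_offset: f32, fov_y: f32) -> f32 {
    let c = 1.0 / (fov_y / 2.0).cos() - 1.0;
    (ndc_offset * c).abs() + 1.0
}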
/// Given the current selected and hovered meshes and provided materials, update the meshes with the
/// appropriate materials.
fn pick_highlighting(
// Resources
pick_state: Res<PickState>,
mut materials: ResMut<Assets<StandardMaterial>>,
highlight_params: Res<PickHighlightParams>,
// Queries
mut query_picked: Query<(
&mut HighlightablePickMesh,
Changed<PickableMesh>,
&Handle<StandardMaterial>,
Entity,
)>,
mut query_selected: Query<(
&mut HighlightablePickMesh,
Changed<SelectablePickMesh>,
&Handle<StandardMaterial>,
)>,
query_selectables: Query<&SelectablePickMesh>,
) {
// Query selectable entities that have changed
for (mut highlightable, selectable, material_handle) in &mut query_selected.iter() {
let current_color = &mut materials.get_mut(material_handle).unwrap().albedo;
let initial_color = match highlightable.initial_color {
None => {
highlightable.initial_color = Some(*current_color);
*current_color
}
Some(color) => color,
};
if selectable.selected {
*current_color = highlight_params.selection_color;
} else {
*current_color = initial_color;
}
}
// Query highlightable entities that have changed
for (mut highlightable, _pickable, material_handle, entity) in &mut query_picked.iter() {
let current_color = &mut materials.get_mut(material_handle).unwrap().albedo;
let initial_color = match highlightable.initial_color {
None => {
highlightable.initial_color = Some(*current_color);
*current_color
}
Some(color) => color,
};
let mut topmost = false;
if let Some(pick_depth) = pick_state.topmost_pick {
topmost = pick_depth.entity == entity;
}
if topmost {
*current_color = highlight_params.hover_color;
} else {
if let Ok(mut query) = query_selectables.entity(entity) {
if let Some(selectable) = query.get() {
if selectable.selected {
*current_color = highlight_params.selection_color;
} else {
*current_color = initial_color;
}
}
} else {
*current_color = initial_color;
}
}
}
}
/// Given the currently hovered mesh, checks for a user click and if detected, sets the selected
/// field in the entity's component to true.
fn select_mesh(
// Resources
pick_state: Res<PickState>,
mouse_button_inputs: Res<Input<MouseButton>>,
// Queries
mut query: Query<&mut SelectablePickMesh>,
) {
if mouse_button_inputs.just_pressed(MouseButton::Left) {
// Deselect everything
for mut selectable in &mut query.iter() {
selectable.selected = false;
}
if let Some(pick_depth) = pick_state.topmost_pick {
if let Ok(mut top_mesh) = query.get_mut::<SelectablePickMesh>(pick_depth.entity) {
top_mesh.selected = true;
}
}
}
}
/// Casts a ray into the scene from the cursor position, tracking pickable meshes that are hit.
fn pick_mesh(
// Resources
mut pick_state: ResMut<PickState>,
cursor: Res<Events<CursorMoved>>,
meshes: Res<Assets<Mesh>>,
windows: Res<Windows>,
// Queries
mut mesh_query: Query<(&Handle<Mesh>, &Transform, &mut PickableMesh, Entity)>,
mut camera_query: Query<(&Transform, &Camera)>,
) {
// Get the cursor position
let cursor_pos_screen: Vec2 = match pick_state.cursor_event_reader.latest(&cursor) {
Some(cursor_moved) => cursor_moved.position,
None => return,
};
// Get current screen size
let window = windows.get_primary().unwrap();
let screen_size = Vec2::from([window.width as f32, window.height as f32]);
// Normalized device coordinates (NDC) describes cursor position from (-1, -1) to (1, 1)
let cursor_pos_ndc: Vec2 = (cursor_pos_screen / screen_size) * 2.0 - Vec2::from([1.0, 1.0]);
// Get the view transform and projection matrix from the camera
| {
&self.topmost_pick
} | identifier_body |
|
lib.rs | Some(pick_depth) = pick_state.topmost_pick {
if let Ok(mut top_mesh) = query.get_mut::<SelectablePickMesh>(pick_depth.entity) {
top_mesh.selected = true;
}
}
}
}
/// Casts a ray into the scene from the cursor position, tracking pickable meshes that are hit.
fn pick_mesh(
// Resources
mut pick_state: ResMut<PickState>,
cursor: Res<Events<CursorMoved>>,
meshes: Res<Assets<Mesh>>,
windows: Res<Windows>,
// Queries
mut mesh_query: Query<(&Handle<Mesh>, &Transform, &mut PickableMesh, Entity)>,
mut camera_query: Query<(&Transform, &Camera)>,
) {
// Get the cursor position
let cursor_pos_screen: Vec2 = match pick_state.cursor_event_reader.latest(&cursor) {
Some(cursor_moved) => cursor_moved.position,
None => return,
};
// Get current screen size
let window = windows.get_primary().unwrap();
let screen_size = Vec2::from([window.width as f32, window.height as f32]);
// Normalized device coordinates (NDC) describes cursor position from (-1, -1) to (1, 1)
let cursor_pos_ndc: Vec2 = (cursor_pos_screen / screen_size) * 2.0 - Vec2::from([1.0, 1.0]);
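// e.g. for a hypothetical 1280x720 window, a cursor at (640, 360) maps to NDC (0, 0) and a
// cursor at (1280, 720) maps to NDC (1, 1).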
// Get the view transform and projection matrix from the camera
let mut view_matrix = Mat4::zero();
let mut projection_matrix = Mat4::zero();
for (transform, camera) in &mut camera_query.iter() {
view_matrix = transform.value.inverse();
projection_matrix = camera.projection_matrix;
}
// After initial checks completed, clear the pick list
pick_state.ordered_pick_list.clear();
pick_state.topmost_pick = None;
// Iterate through each pickable mesh in the scene
for (mesh_handle, transform, mut pickable, entity) in &mut mesh_query.iter() {
// Use the mesh handle to get a reference to a mesh asset
if let Some(mesh) = meshes.get(mesh_handle) {
if mesh.primitive_topology != PrimitiveTopology::TriangleList {
continue;
}
// The ray cast can hit the same mesh many times, so we need to track which hit is
// closest to the camera, and record that.
let mut hit_depth = f32::MAX;
// We need to transform the mesh vertices' positions from the mesh space to the world
// space using the mesh's transform, move it to the camera's space using the view
// matrix (camera.inverse), and finally, apply the projection matrix. Because column
// matrices are evaluated right to left, we have to order it correctly:
let mesh_to_cam_transform = view_matrix * transform.value;
// Get the vertex positions from the mesh reference resolved from the mesh handle
let vertex_positions: Vec<[f32; 3]> = mesh
.attributes
.iter()
.filter(|attribute| attribute.name == VertexAttribute::POSITION)
.filter_map(|attribute| match &attribute.values {
VertexAttributeValues::Float3(positions) => Some(positions.clone()),
_ => panic!("Unexpected vertex types in VertexAttribute::POSITION"),
})
.last()
.unwrap();
// We have everything set up, now we can jump into the mesh's list of indices and
// check triangles for cursor intersection.
if let Some(indices) = &mesh.indices {
let mut hit_found = false;
// Now that we're in the vector of vertex indices, we want to look at the vertex
// positions for each triangle, so we'll take indices in chunks of three, where each
// chunk of three indices are references to the three vertices of a triangle.
for index in indices.chunks(3) {
// Make sure this chunk has 3 vertices to avoid a panic.
if index.len() == 3 {
// Set up an empty container for triangle vertices
let mut triangle: [Vec3; 3] = [Vec3::zero(), Vec3::zero(), Vec3::zero()];
// We can now grab the position of each vertex in the triangle using the
// indices pointing into the position vector. These positions are relative
// to the coordinate system of the mesh the vertex/triangle belongs to. To
// test if the triangle is being hovered over, we need to convert this to
// NDC (normalized device coordinates)
for i in 0..3 {
// Get the raw vertex position using the index
let mut vertex_pos = Vec3::from(vertex_positions[index[i] as usize]);
// Transform the vertex to world space with the mesh transform, then
// into camera space with the view transform.
vertex_pos = mesh_to_cam_transform.transform_point3(vertex_pos);
// This next part seems to be a bug with glam - it should do the divide
// by w perspective math for us, instead we have to do it manually.
// `glam` PR https://github.com/bitshifter/glam-rs/pull/75/files
let transformed = projection_matrix.mul_vec4(vertex_pos.extend(1.0));
let w_recip = transformed.w().abs().recip();
triangle[i] = Vec3::from(transformed.truncate() * w_recip);
}
if !triangle_behind_cam(triangle) {
if point_in_tri(
&cursor_pos_ndc,
&Vec2::new(triangle[0].x(), triangle[0].y()),
&Vec2::new(triangle[1].x(), triangle[1].y()),
&Vec2::new(triangle[2].x(), triangle[2].y()),
) {
hit_found = true;
if triangle[0].z() < hit_depth {
hit_depth = triangle[0].z();
}
}
}
}
}
// Finished going through the current mesh, update pick states
let pick_coord_ndc = cursor_pos_ndc.extend(hit_depth);
pickable.pick_coord_ndc = Some(pick_coord_ndc);
if hit_found {
pick_state
.ordered_pick_list
.push(PickIntersection::new(entity, pick_coord_ndc));
}
} else {
// If we get here the mesh doesn't have an index list!
panic!(
"No index matrix found in mesh {:?}\n{:?}",
mesh_handle, mesh
);
}
}
}
// Sort the pick list
pick_state
.ordered_pick_list
.sort_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal));
// The pick_state resource we have access to is not sorted, so we need to manually grab the
// lowest value;
if !pick_state.ordered_pick_list.is_empty() {
let mut nearest_index = 0usize;
let mut nearest_depth = f32::MAX;
for (index, pick) in pick_state.ordered_pick_list.iter().enumerate() {
let current_depth = pick.pick_coord_ndc.z();
if current_depth < nearest_depth {
nearest_depth = current_depth;
nearest_index = index;
}
}
pick_state.topmost_pick = Some(pick_state.ordered_pick_list[nearest_index]);
}
}
/// Compute the area of a triangle given 2D vertex coordinates, "/2" removed to save an operation
fn double_tri_area(a: &Vec2, b: &Vec2, c: &Vec2) -> f32 {
f32::abs(a.x() * (b.y() - c.y()) + b.x() * (c.y() - a.y()) + c.x() * (a.y() - b.y()))
}
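// This is the shoelace formula: |x_a(y_b - y_c) + x_b(y_c - y_a) + x_c(y_a - y_b)| equals twice
// the triangle's area; the factor of two cancels in the comparison below, so it is never divided out.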
/// Checks if a point is inside a triangle by comparing the summed areas of the triangles, the point
/// is inside the triangle if the areas are equal. An epsilon is used due to floating point error.
/// Todo: barycentric method
fn point_in_tri(p: &Vec2, a: &Vec2, b: &Vec2, c: &Vec2) -> bool {
let area = double_tri_area(a, b, c);
let pab = double_tri_area(p, a, b);
let pac = double_tri_area(p, a, c);
let pbc = double_tri_area(p, b, c);
let area_tris = pab + pac + pbc;
let epsilon = 0.00001;
let result: bool = f32::abs(area - area_tris) < epsilon;
/*
if result {
println!("Hit: {:.3} {:.3} {:.3},{:.3} {:.3},{:.3} {:.3},{:.3} ", area, area_tris, a.x(), a.y(), b.x(), b.y(), c.x(), c.y());
} else {
println!("No Hit: {:.3} {:.3} {:.3},{:.3} {:.3},{:.3} {:.3},{:.3} ", area, area_tris, a.x(), a.y(), b.x(), b.y(), c.x(), c.y());
}
*/
result
}
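// A sketch of the barycentric method mentioned in the Todo above (illustrative only; this
// function is not wired into the plugin): express p relative to vertex a in the basis of the
// edges ab and ac, then require both weights and their sum to lie in [0, 1]. This avoids the
// epsilon comparison on summed areas.
#[allow(dead_code)]
fn point_in_tri_barycentric(p: &Vec2, a: &Vec2, b: &Vec2, c: &Vec2) -> bool {
    let v0 = *c - *a;
    let v1 = *b - *a;
    let v2 = *p - *a;
    let dot00 = v0.dot(v0);
    let dot01 = v0.dot(v1);
    let dot02 = v0.dot(v2);
    let dot11 = v1.dot(v1);
    let dot12 = v1.dot(v2);
    let denom = dot00 * dot11 - dot01 * dot01;
    if denom.abs() < f32::EPSILON {
        return false; // degenerate (zero-area) triangle
    }
    let inv = 1.0 / denom;
    let u = (dot11 * dot02 - dot01 * dot12) * inv;
    let v = (dot00 * dot12 - dot01 * dot02) * inv;
    u >= 0.0 && v >= 0.0 && (u + v) <= 1.0
}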
/// Checkes if a triangle is visibly pickable in the camera frustum. | fn triangle_behind_cam(triangle: [Vec3; 3]) -> bool { | random_line_split |
|
lib.rs | (&self) -> &Vec<PickIntersection> {
&self.ordered_pick_list
}
pub fn top(&self) -> &Option<PickIntersection> {
&self.topmost_pick
}
}
impl Default for PickState {
fn default() -> Self {
PickState {
cursor_event_reader: EventReader::default(),
ordered_pick_list: Vec::new(),
topmost_pick: None,
}
}
}
/// Holds the entity associated with a mesh as well as its computed intersection from a pick ray cast
#[derive(Debug, PartialOrd, PartialEq, Copy, Clone)]
pub struct PickIntersection {
entity: Entity,
pick_coord_ndc: Vec3,
}
impl PickIntersection {
fn new(entity: Entity, pick_coord_ndc: Vec3) -> Self {
PickIntersection {
entity,
pick_coord_ndc,
}
}
pub fn get_pick_coord_ndc(&self) -> Vec3 {
self.pick_coord_ndc
}
pub fn get_pick_coord_world(&self, projection_matrix: Mat4, view_matrix: Mat4) -> Vec3 {
let world_pos: Vec4 = (projection_matrix * view_matrix)
.inverse()
.mul_vec4(self.pick_coord_ndc.extend(1.0));
(world_pos / world_pos.w()).truncate().into()
}
}
#[derive(Debug)]
pub struct PickHighlightParams {
hover_color: Color,
selection_color: Color,
}
impl PickHighlightParams {
pub fn set_hover_color(&mut self, color: Color) {
self.hover_color = color;
}
pub fn set_selection_color(&mut self, color: Color) {
self.selection_color = color;
}
}
impl Default for PickHighlightParams {
fn default() -> Self {
PickHighlightParams {
hover_color: Color::rgb(0.3, 0.5, 0.8),
selection_color: Color::rgb(0.3, 0.8, 0.5),
}
}
}
/// Marks an entity as pickable
#[derive(Debug)]
pub struct PickableMesh {
camera_entity: Entity,
bounding_sphere: Option<BoundSphere>,
pick_coord_ndc: Option<Vec3>,
}
impl PickableMesh {
pub fn new(camera_entity: Entity) -> Self {
PickableMesh {
camera_entity,
bounding_sphere: None,
pick_coord_ndc: None,
}
}
pub fn get_pick_coord_ndc(&self) -> Option<Vec3> {
self.pick_coord_ndc
}
}
/// Meshes with `SelectablePickMesh` will have selection state managed
#[derive(Debug)]
pub struct SelectablePickMesh {
selected: bool,
}
impl SelectablePickMesh {
pub fn new() -> Self {
SelectablePickMesh { selected: false }
}
pub fn selected(&self) -> bool {
self.selected
}
}
/// Meshes with `HighlightablePickMesh` will be highlighted when hovered over. If the mesh also has
/// the `SelectablePickMesh` component, it will highlight when selected.
#[derive(Debug)]
pub struct HighlightablePickMesh {
// Stores the initial color of the mesh material prior to selecting/hovering
initial_color: Option<Color>,
}
impl HighlightablePickMesh {
pub fn new() -> Self {
HighlightablePickMesh {
initial_color: None,
}
}
}
/// Defines a bounding sphere with a center point coordinate and a radius, used for picking
#[derive(Debug)]
struct BoundSphere {
mesh_radius: f32,
transformed_radius: Option<f32>,
ndc_def: Option<NdcBoundingCircle>,
}
impl From<&Mesh> for BoundSphere {
fn from(mesh: &Mesh) -> Self {
let mut mesh_radius = 0f32;
if mesh.primitive_topology != PrimitiveTopology::TriangleList {
panic!("Non-TriangleList mesh supplied for bounding sphere generation")
}
let mut vertex_positions = Vec::new();
for attribute in mesh.attributes.iter() {
if attribute.name == VertexAttribute::POSITION {
vertex_positions = match &attribute.values {
VertexAttributeValues::Float3(positions) => positions.clone(),
_ => panic!("Unexpected vertex types in VertexAttribute::POSITION"),
};
}
}
if let Some(indices) = &mesh.indices {
for index in indices.iter() {
mesh_radius =
mesh_radius.max(Vec3::from(vertex_positions[*index as usize]).length());
}
}
BoundSphere {
mesh_radius,
transformed_radius: None,
ndc_def: None,
}
}
}
/// Created from a BoundSphere, this represents a circle that bounds the entity's mesh when the
/// bounding sphere is projected onto the screen. Note this is not as simple as transforming the
/// sphere's origin into ndc and copying the radius. Due to rectilinear projection, the sphere
/// will be projected onto the screen as an ellipse if it is not perfectly centered at 0,0 in ndc.
/// Scale ndc circle based on linear function "abs(x(sec(arctan(tan(b/2)))-1)) + 1" where b = FOV
/// All the trig can be simplified to a coeff "c" abs(x*c+1)
#[derive(Debug)]
struct NdcBoundingCircle {
center: Vec2,
radius: f32,
}
/// Given the current selected and hovered meshes and provided materials, update the meshes with the
/// appropriate materials.
fn pick_highlighting(
// Resources
pick_state: Res<PickState>,
mut materials: ResMut<Assets<StandardMaterial>>,
highlight_params: Res<PickHighlightParams>,
// Queries
mut query_picked: Query<(
&mut HighlightablePickMesh,
Changed<PickableMesh>,
&Handle<StandardMaterial>,
Entity,
)>,
mut query_selected: Query<(
&mut HighlightablePickMesh,
Changed<SelectablePickMesh>,
&Handle<StandardMaterial>,
)>,
query_selectables: Query<&SelectablePickMesh>,
) {
// Query selectable entities that have changed
for (mut highlightable, selectable, material_handle) in &mut query_selected.iter() {
let current_color = &mut materials.get_mut(material_handle).unwrap().albedo;
let initial_color = match highlightable.initial_color {
None => {
highlightable.initial_color = Some(*current_color);
*current_color
}
Some(color) => color,
};
if selectable.selected {
*current_color = highlight_params.selection_color;
} else {
*current_color = initial_color;
}
}
// Query highlightable entities that have changed
for (mut highlightable, _pickable, material_handle, entity) in &mut query_picked.iter() {
let current_color = &mut materials.get_mut(material_handle).unwrap().albedo;
let initial_color = match highlightable.initial_color {
None => {
highlightable.initial_color = Some(*current_color);
*current_color
}
Some(color) => color,
};
let mut topmost = false;
if let Some(pick_depth) = pick_state.topmost_pick {
topmost = pick_depth.entity == entity;
}
if topmost {
*current_color = highlight_params.hover_color;
} else {
if let Ok(mut query) = query_selectables.entity(entity) {
if let Some(selectable) = query.get() {
if selectable.selected {
*current_color = highlight_params.selection_color;
} else {
*current_color = initial_color;
}
}
} else {
*current_color = initial_color;
}
}
}
}
/// Given the currently hovered mesh, checks for a user click and if detected, sets the selected
/// field in the entity's component to true.
fn select_mesh(
// Resources
pick_state: Res<PickState>,
mouse_button_inputs: Res<Input<MouseButton>>,
// Queries
mut query: Query<&mut SelectablePickMesh>,
) {
if mouse_button_inputs.just_pressed(MouseButton::Left) {
// Deselect everything
for mut selectable in &mut query.iter() {
selectable.selected = false;
}
if let Some(pick_depth) = pick_state.topmost_pick {
if let Ok(mut top_mesh) = query.get_mut::<SelectablePickMesh>(pick_depth.entity) {
top_mesh.selected = true;
}
}
}
}
/// Casts a ray into the scene from the cursor position, tracking pickable meshes that are hit.
fn pick_mesh(
// Resources
mut pick_state: ResMut<PickState>,
cursor: Res<Events<CursorMoved>>,
meshes: Res<Assets<Mesh>>,
windows: Res<Windows>,
// Queries
mut mesh_query: Query<(&Handle<Mesh>, &Transform, &mut PickableMesh, Entity)>,
mut camera_query: Query<(&Transform, &Camera)>,
) {
// Get the cursor position
let cursor_pos_screen: Vec2 = match pick_state.cursor_event_reader.latest(&cursor) {
Some(cursor_moved) => cursor_moved.position,
None => return,
};
// Get current screen size
let window = windows.get_primary().unwrap();
let screen_size = Vec2::from([window.width as f32, window.height as f32]);
// Normalized device coordinates (NDC) describes cursor position from (-1, -1) to (1, 1)
let cursor_pos_ndc: Vec2 = (cursor_pos_screen / screen_size | list | identifier_name |
|
lib.rs | /// Marks an entity as pickable
#[derive(Debug)]
pub struct PickableMesh {
camera_entity: Entity,
bounding_sphere: Option<BoundSphere>,
pick_coord_ndc: Option<Vec3>,
}
impl PickableMesh {
pub fn new(camera_entity: Entity) -> Self {
PickableMesh {
camera_entity,
bounding_sphere: None,
pick_coord_ndc: None,
}
}
pub fn get_pick_coord_ndc(&self) -> Option<Vec3> {
self.pick_coord_ndc
}
}
/// Meshes with `SelectablePickMesh` will have selection state managed
#[derive(Debug)]
pub struct SelectablePickMesh {
selected: bool,
}
impl SelectablePickMesh {
pub fn new() -> Self {
SelectablePickMesh { selected: false }
}
pub fn selected(&self) -> bool {
self.selected
}
}
/// Meshes with `HighlightablePickMesh` will be highlighted when hovered over. If the mesh also has
/// the `SelectablePickMesh` component, it will highlight when selected.
#[derive(Debug)]
pub struct HighlightablePickMesh {
// Stores the initial color of the mesh material prior to selecting/hovering
initial_color: Option<Color>,
}
impl HighlightablePickMesh {
pub fn new() -> Self {
HighlightablePickMesh {
initial_color: None,
}
}
}
/// Defines a bounding sphere with a center point coordinate and a radius, used for picking
#[derive(Debug)]
struct BoundSphere {
mesh_radius: f32,
transformed_radius: Option<f32>,
ndc_def: Option<NdcBoundingCircle>,
}
impl From<&Mesh> for BoundSphere {
fn from(mesh: &Mesh) -> Self {
let mut mesh_radius = 0f32;
if mesh.primitive_topology != PrimitiveTopology::TriangleList {
panic!("Non-TriangleList mesh supplied for bounding sphere generation")
}
let mut vertex_positions = Vec::new();
for attribute in mesh.attributes.iter() {
if attribute.name == VertexAttribute::POSITION {
vertex_positions = match &attribute.values {
VertexAttributeValues::Float3(positions) => positions.clone(),
_ => panic!("Unexpected vertex types in VertexAttribute::POSITION"),
};
}
}
if let Some(indices) = &mesh.indices {
for index in indices.iter() {
mesh_radius =
mesh_radius.max(Vec3::from(vertex_positions[*index as usize]).length());
}
}
BoundSphere {
mesh_radius,
transformed_radius: None,
ndc_def: None,
}
}
}
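// Editor's note: an added, self-contained illustration (not original code) of the
// bounding-radius rule used by `From<&Mesh> for BoundSphere` above: the radius is the
// greatest distance of any indexed vertex from the mesh-space origin.
fn bounding_radius(vertex_positions: &[[f32; 3]], indices: &[u32]) -> f32 {
    indices
        .iter()
        .map(|&i| Vec3::from(vertex_positions[i as usize]).length())
        .fold(0.0_f32, f32::max)
}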
/// Created from a BoundSphere, this represents a circle that bounds the entity's mesh when the
/// bounding sphere is projected onto the screen. Note this is not as simple as transforming the
/// sphere's origin into ndc and copying the radius. Due to rectilinear projection, the sphere
/// will be projected onto the screen as an ellipse if it is not perfectly centered at 0,0 in ndc.
/// Scale ndc circle based on linear function "abs(x(sec(arctan(tan(b/2)))-1)) + 1" where b = FOV
/// All the trig can be simplified to a single coefficient "c", giving abs(x*c) + 1
#[derive(Debug)]
struct NdcBoundingCircle {
center: Vec2,
radius: f32,
}
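// Editor's note: an added sketch (not original code) of the scaling function described in
// the comment above. It assumes the intended reading is abs(x * c) + 1 with
// c = sec(arctan(tan(fov / 2))) - 1, where `fov` is the camera's field of view in radians
// and `x` is the circle centre's offset from the ndc origin along one axis.
fn ndc_radius_scale(x: f32, fov_radians: f32) -> f32 {
    // sec(t) = 1 / cos(t)
    let c = 1.0 / (fov_radians / 2.0).tan().atan().cos() - 1.0;
    (x * c).abs() + 1.0
}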
/// Given the current selected and hovered meshes and provided materials, update the meshes with the
/// appropriate materials.
fn pick_highlighting(
// Resources
pick_state: Res<PickState>,
mut materials: ResMut<Assets<StandardMaterial>>,
highlight_params: Res<PickHighlightParams>,
// Queries
mut query_picked: Query<(
&mut HighlightablePickMesh,
Changed<PickableMesh>,
&Handle<StandardMaterial>,
Entity,
)>,
mut query_selected: Query<(
&mut HighlightablePickMesh,
Changed<SelectablePickMesh>,
&Handle<StandardMaterial>,
)>,
query_selectables: Query<&SelectablePickMesh>,
) {
// Query selectable entities that have changed
for (mut highlightable, selectable, material_handle) in &mut query_selected.iter() {
let current_color = &mut materials.get_mut(material_handle).unwrap().albedo;
let initial_color = match highlightable.initial_color {
None => {
highlightable.initial_color = Some(*current_color);
*current_color
}
Some(color) => color,
};
if selectable.selected | else {
*current_color = initial_color;
}
}
// Query highlightable entities that have changed
for (mut highlightable, _pickable, material_handle, entity) in &mut query_picked.iter() {
let current_color = &mut materials.get_mut(material_handle).unwrap().albedo;
let initial_color = match highlightable.initial_color {
None => {
highlightable.initial_color = Some(*current_color);
*current_color
}
Some(color) => color,
};
let mut topmost = false;
if let Some(pick_depth) = pick_state.topmost_pick {
topmost = pick_depth.entity == entity;
}
if topmost {
*current_color = highlight_params.hover_color;
} else {
if let Ok(mut query) = query_selectables.entity(entity) {
if let Some(selectable) = query.get() {
if selectable.selected {
*current_color = highlight_params.selection_color;
} else {
*current_color = initial_color;
}
}
} else {
*current_color = initial_color;
}
}
}
}
/// Given the currently hovered mesh, checks for a user click and if detected, sets the selected
/// field in the entity's component to true.
fn select_mesh(
// Resources
pick_state: Res<PickState>,
mouse_button_inputs: Res<Input<MouseButton>>,
// Queries
mut query: Query<&mut SelectablePickMesh>,
) {
if mouse_button_inputs.just_pressed(MouseButton::Left) {
// Deselect everything
for mut selectable in &mut query.iter() {
selectable.selected = false;
}
if let Some(pick_depth) = pick_state.topmost_pick {
if let Ok(mut top_mesh) = query.get_mut::<SelectablePickMesh>(pick_depth.entity) {
top_mesh.selected = true;
}
}
}
}
/// Casts a ray into the scene from the cursor position, tracking pickable meshes that are hit.
fn pick_mesh(
// Resources
mut pick_state: ResMut<PickState>,
cursor: Res<Events<CursorMoved>>,
meshes: Res<Assets<Mesh>>,
windows: Res<Windows>,
// Queries
mut mesh_query: Query<(&Handle<Mesh>, &Transform, &mut PickableMesh, Entity)>,
mut camera_query: Query<(&Transform, &Camera)>,
) {
// Get the cursor position
let cursor_pos_screen: Vec2 = match pick_state.cursor_event_reader.latest(&cursor) {
Some(cursor_moved) => cursor_moved.position,
None => return,
};
// Get current screen size
let window = windows.get_primary().unwrap();
let screen_size = Vec2::from([window.width as f32, window.height as f32]);
    // Normalized device coordinates (NDC) describe the cursor position, from (-1, -1) to (1, 1)
let cursor_pos_ndc: Vec2 = (cursor_pos_screen / screen_size) * 2.0 - Vec2::from([1.0, 1.0]);
// Get the view transform and projection matrix from the camera
let mut view_matrix = Mat4::zero();
let mut projection_matrix = Mat4::zero();
for (transform, camera) in &mut camera_query.iter() {
view_matrix = transform.value.inverse();
projection_matrix = camera.projection_matrix;
}
// After initial checks completed, clear the pick list
pick_state.ordered_pick_list.clear();
pick_state.topmost_pick = None;
// Iterate through each pickable mesh in the scene
for (mesh_handle, transform, mut pickable, entity) in &mut mesh_query.iter() {
// Use the mesh handle to get a reference to a mesh asset
if let Some(mesh) = meshes.get(mesh_handle) {
if mesh.primitive_topology != PrimitiveTopology::TriangleList {
continue;
}
// The ray cast can hit the same mesh many times, so we need to track which hit is
// closest to the camera, and record that.
let mut hit_depth = f32::MAX;
// We need to transform the mesh vertices' positions from the mesh space to the world
// space using the mesh's transform, move it to the camera's space using the view
            // matrix (camera.inverse), and finally, apply the projection matrix. Because column-major
            // matrix products are evaluated right to left, we have to order the multiplication accordingly:
let mesh_to_cam_transform = view_matrix * transform.value;
// Get the vertex positions from the mesh reference resolved from the mesh handle
let vertex_positions: Vec<[f32; 3]> = mesh
.attributes
.iter()
.filter(|attribute| attribute.name == VertexAttribute::POSITION)
.filter_map(|attribute| match &attribute.values {
VertexAttributeValues::Float3(positions) => Some(positions.clone()),
_ => panic!("Unexpected vertex types in VertexAttribute::POSITION"),
})
.last | {
*current_color = highlight_params.selection_color;
} | conditional_block |
lib.rs |
}
/// A per-relay-parent job for the provisioning subsystem.
pub struct ProvisioningJob {
relay_parent: Hash,
receiver: mpsc::Receiver<ProvisionerMessage>,
backed_candidates: Vec<CandidateReceipt>,
signed_bitfields: Vec<SignedAvailabilityBitfield>,
metrics: Metrics,
inherent_after: InherentAfter,
awaiting_inherent: Vec<oneshot::Sender<ProvisionerInherentData>>
}
/// Errors in the provisioner.
#[derive(Debug, Error)]
#[allow(missing_docs)]
pub enum Error {
#[error(transparent)]
Util(#[from] util::Error),
#[error("failed to get availability cores")]
CanceledAvailabilityCores(#[source] oneshot::Canceled),
#[error("failed to get persisted validation data")]
CanceledPersistedValidationData(#[source] oneshot::Canceled),
#[error("failed to get block number")]
CanceledBlockNumber(#[source] oneshot::Canceled),
#[error("failed to get backed candidates")]
CanceledBackedCandidates(#[source] oneshot::Canceled),
#[error(transparent)]
ChainApi(#[from] ChainApiError),
#[error(transparent)]
Runtime(#[from] RuntimeApiError),
#[error("failed to send message to ChainAPI")]
ChainApiMessageSend(#[source] mpsc::SendError),
#[error("failed to send message to CandidateBacking to get backed candidates")]
GetBackedCandidatesSend(#[source] mpsc::SendError),
#[error("failed to send return message with Inherents")]
InherentDataReturnChannel,
#[error("backed candidate does not correspond to selected candidate; check logic in provisioner")]
BackedCandidateOrderingProblem,
}
impl JobTrait for ProvisioningJob {
type ToJob = ProvisionerMessage;
type Error = Error;
type RunArgs = ();
type Metrics = Metrics;
const NAME: &'static str = "ProvisioningJob";
/// Run a job for the parent block indicated
//
// this function is in charge of creating and executing the job's main loop
#[tracing::instrument(skip(span, _run_args, metrics, receiver, sender), fields(subsystem = LOG_TARGET))]
fn run<S: SubsystemSender>(
relay_parent: Hash,
span: Arc<jaeger::Span>,
_run_args: Self::RunArgs,
metrics: Self::Metrics,
receiver: mpsc::Receiver<ProvisionerMessage>,
mut sender: JobSender<S>,
) -> Pin<Box<dyn Future<Output = Result<(), Self::Error>> + Send>> {
async move {
let job = ProvisioningJob::new(
relay_parent,
metrics,
receiver,
);
job.run_loop(sender.subsystem_sender(), PerLeafSpan::new(span, "provisioner")).await
}
.boxed()
}
}
impl ProvisioningJob {
fn new(
relay_parent: Hash,
metrics: Metrics,
receiver: mpsc::Receiver<ProvisionerMessage>,
) -> Self {
Self {
relay_parent,
receiver,
backed_candidates: Vec::new(),
signed_bitfields: Vec::new(),
metrics,
inherent_after: InherentAfter::new_from_now(),
awaiting_inherent: Vec::new(),
}
}
async fn run_loop(
mut self,
sender: &mut impl SubsystemSender,
span: PerLeafSpan,
) -> Result<(), Error> {
use ProvisionerMessage::{
ProvisionableData, RequestInherentData,
};
loop {
futures::select! {
msg = self.receiver.next().fuse() => match msg {
Some(RequestInherentData(_, return_sender)) => {
let _span = span.child("req-inherent-data");
let _timer = self.metrics.time_request_inherent_data();
if self.inherent_after.is_ready() {
self.send_inherent_data(sender, vec![return_sender]).await;
} else {
self.awaiting_inherent.push(return_sender);
}
}
Some(ProvisionableData(_, data)) => {
let span = span.child("provisionable-data");
let _timer = self.metrics.time_provisionable_data();
self.note_provisionable_data(&span, data);
}
None => break,
},
_ = self.inherent_after.ready().fuse() => {
let _span = span.child("send-inherent-data");
let return_senders = std::mem::take(&mut self.awaiting_inherent);
if !return_senders.is_empty() {
self.send_inherent_data(sender, return_senders).await;
}
}
}
}
Ok(())
}
async fn send_inherent_data(
&mut self,
sender: &mut impl SubsystemSender,
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
) {
if let Err(err) = send_inherent_data(
self.relay_parent,
&self.signed_bitfields,
&self.backed_candidates,
return_senders,
sender,
)
.await
{
tracing::warn!(target: LOG_TARGET, err = ?err, "failed to assemble or send inherent data");
self.metrics.on_inherent_data_request(Err(()));
} else {
self.metrics.on_inherent_data_request(Ok(()));
}
}
#[tracing::instrument(level = "trace", skip(self), fields(subsystem = LOG_TARGET))]
fn note_provisionable_data(&mut self, span: &jaeger::Span, provisionable_data: ProvisionableData) {
match provisionable_data {
ProvisionableData::Bitfield(_, signed_bitfield) => {
self.signed_bitfields.push(signed_bitfield)
}
ProvisionableData::BackedCandidate(backed_candidate) => {
let _span = span.child("provisionable-backed")
.with_para_id(backed_candidate.descriptor().para_id);
self.backed_candidates.push(backed_candidate)
}
_ => {}
}
}
}
type CoreAvailability = BitVec<bitvec::order::Lsb0, u8>;
/// The provisioner is the subsystem best suited to choosing which specific
/// backed candidates and availability bitfields should be assembled into the
/// block. To engage this functionality, a
/// `ProvisionerMessage::RequestInherentData` is sent; the response is a set of
/// non-conflicting candidates and the appropriate bitfields. Non-conflicting
/// means that there are never two distinct parachain candidates included for
/// the same parachain and that new parachain candidates cannot be included
/// until the previous one either gets declared available or expired.
///
/// The main complication here is going to be around handling
/// occupied-core-assumptions. We might have candidates that are only
/// includable when some bitfields are included. And we might have candidates
/// that are not includable when certain bitfields are included.
///
/// When we're choosing bitfields to include, the rule should be simple:
/// maximize availability. So basically, include all bitfields. And then
/// choose a coherent set of candidates along with that.
#[tracing::instrument(level = "trace", skip(return_senders, from_job), fields(subsystem = LOG_TARGET))]
async fn send_inherent_data(
relay_parent: Hash,
bitfields: &[SignedAvailabilityBitfield],
candidates: &[CandidateReceipt],
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
from_job: &mut impl SubsystemSender,
) -> Result<(), Error> {
let availability_cores = request_availability_cores(relay_parent, from_job)
.await
.await.map_err(|err| Error::CanceledAvailabilityCores(err))??;
let bitfields = select_availability_bitfields(&availability_cores, bitfields);
let candidates = select_candidates(
&availability_cores,
&bitfields,
candidates,
relay_parent,
from_job,
).await?;
let inherent_data = ProvisionerInherentData {
bitfields,
backed_candidates: candidates,
disputes: Vec::new(), // until disputes are implemented.
};
for return_sender in return_senders {
return_sender.send(inherent_data.clone()).map_err(|_data| Error::InherentDataReturnChannel)?;
}
Ok(())
}
/// In general, we want to pick all the bitfields. However, we have the following constraints:
///
/// - not more than one per validator
/// - each 1 bit must correspond to an occupied core
///
/// If we have too many, an arbitrary selection policy is fine. For purposes of maximizing availability,
/// we pick the one with the greatest number of 1 bits.
///
/// Note: This does not enforce any sorting precondition on the output; the ordering there will be unrelated
/// to the sorting of the input.
#[tracing::instrument(level = " | {
match *self {
InherentAfter::Ready => {
// Make sure we never end the returned future.
// This is required because the `select!` that calls this future will end in a busy loop.
futures::pending!()
},
InherentAfter::Wait(ref mut d) => {
d.await;
*self = InherentAfter::Ready;
},
}
} | identifier_body |
|
lib.rs | is a set of
/// non-conflicting candidates and the appropriate bitfields. Non-conflicting
/// means that there are never two distinct parachain candidates included for
/// the same parachain and that new parachain candidates cannot be included
/// until the previous one either gets declared available or expired.
///
/// The main complication here is going to be around handling
/// occupied-core-assumptions. We might have candidates that are only
/// includable when some bitfields are included. And we might have candidates
/// that are not includable when certain bitfields are included.
///
/// When we're choosing bitfields to include, the rule should be simple:
/// maximize availability. So basically, include all bitfields. And then
/// choose a coherent set of candidates along with that.
#[tracing::instrument(level = "trace", skip(return_senders, from_job), fields(subsystem = LOG_TARGET))]
async fn send_inherent_data(
relay_parent: Hash,
bitfields: &[SignedAvailabilityBitfield],
candidates: &[CandidateReceipt],
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
from_job: &mut impl SubsystemSender,
) -> Result<(), Error> {
let availability_cores = request_availability_cores(relay_parent, from_job)
.await
.await.map_err(|err| Error::CanceledAvailabilityCores(err))??;
let bitfields = select_availability_bitfields(&availability_cores, bitfields);
let candidates = select_candidates(
&availability_cores,
&bitfields,
candidates,
relay_parent,
from_job,
).await?;
let inherent_data = ProvisionerInherentData {
bitfields,
backed_candidates: candidates,
disputes: Vec::new(), // until disputes are implemented.
};
for return_sender in return_senders {
return_sender.send(inherent_data.clone()).map_err(|_data| Error::InherentDataReturnChannel)?;
}
Ok(())
}
/// In general, we want to pick all the bitfields. However, we have the following constraints:
///
/// - not more than one per validator
/// - each 1 bit must correspond to an occupied core
///
/// If we have too many, an arbitrary selection policy is fine. For purposes of maximizing availability,
/// we pick the one with the greatest number of 1 bits.
///
/// Note: This does not enforce any sorting precondition on the output; the ordering there will be unrelated
/// to the sorting of the input.
#[tracing::instrument(level = "trace", fields(subsystem = LOG_TARGET))]
fn select_availability_bitfields(
cores: &[CoreState],
bitfields: &[SignedAvailabilityBitfield],
) -> Vec<SignedAvailabilityBitfield> {
let mut selected: BTreeMap<ValidatorIndex, SignedAvailabilityBitfield> = BTreeMap::new();
'a:
for bitfield in bitfields.iter().cloned() {
if bitfield.payload().0.len() != cores.len() {
continue
}
let is_better = selected.get(&bitfield.validator_index())
.map_or(true, |b| b.payload().0.count_ones() < bitfield.payload().0.count_ones());
if !is_better { continue }
for (idx, _) in cores.iter().enumerate().filter(|v| !v.1.is_occupied()) {
// Bit is set for an unoccupied core - invalid
if *bitfield.payload().0.get(idx).as_deref().unwrap_or(&false) {
continue 'a
}
}
let _ = selected.insert(bitfield.validator_index(), bitfield);
}
selected.into_iter().map(|(_, b)| b).collect()
}
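// Editor's note: an added, simplified model of the selection rule implemented above, using
// plain `Vec<bool>` in place of `SignedAvailabilityBitfield` so it stays self-contained.
// It keeps, per validator, the bitfield with the most set bits and discards bitfields whose
// length does not match the number of cores; the occupied-core validity check is omitted
// for brevity.
fn select_bitfields_model(
    n_cores: usize,
    bitfields: &[(u32, Vec<bool>)], // (validator index, bitfield)
) -> std::collections::BTreeMap<u32, Vec<bool>> {
    let mut selected = std::collections::BTreeMap::<u32, Vec<bool>>::new();
    for (validator, bits) in bitfields.iter().cloned() {
        if bits.len() != n_cores {
            continue;
        }
        let ones = bits.iter().filter(|b| **b).count();
        let is_better = selected
            .get(&validator)
            .map_or(true, |prev| prev.iter().filter(|b| **b).count() < ones);
        if is_better {
            selected.insert(validator, bits);
        }
    }
    selected
}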
/// Determine which cores are free, and then to the degree possible, pick a candidate appropriate to each free core.
#[tracing::instrument(level = "trace", skip(sender), fields(subsystem = LOG_TARGET))]
async fn select_candidates(
availability_cores: &[CoreState],
bitfields: &[SignedAvailabilityBitfield],
candidates: &[CandidateReceipt],
relay_parent: Hash,
sender: &mut impl SubsystemSender,
) -> Result<Vec<BackedCandidate>, Error> {
let block_number = get_block_number_under_construction(relay_parent, sender).await?;
let mut selected_candidates =
Vec::with_capacity(candidates.len().min(availability_cores.len()));
for (core_idx, core) in availability_cores.iter().enumerate() {
let (scheduled_core, assumption) = match core {
CoreState::Scheduled(scheduled_core) => (scheduled_core, OccupiedCoreAssumption::Free),
CoreState::Occupied(occupied_core) => {
if bitfields_indicate_availability(core_idx, bitfields, &occupied_core.availability) {
if let Some(ref scheduled_core) = occupied_core.next_up_on_available {
(scheduled_core, OccupiedCoreAssumption::Included)
} else {
continue;
}
} else {
if occupied_core.time_out_at != block_number {
continue;
}
if let Some(ref scheduled_core) = occupied_core.next_up_on_time_out {
(scheduled_core, OccupiedCoreAssumption::TimedOut)
} else {
continue;
}
}
}
CoreState::Free => continue,
};
let validation_data = match request_persisted_validation_data(
relay_parent,
scheduled_core.para_id,
assumption,
sender,
)
.await
.await.map_err(|err| Error::CanceledPersistedValidationData(err))??
{
Some(v) => v,
None => continue,
};
let computed_validation_data_hash = validation_data.hash();
// we arbitrarily pick the first of the backed candidates which match the appropriate selection criteria
if let Some(candidate) = candidates.iter().find(|backed_candidate| {
let descriptor = &backed_candidate.descriptor;
descriptor.para_id == scheduled_core.para_id
&& descriptor.persisted_validation_data_hash == computed_validation_data_hash
}) {
let candidate_hash = candidate.hash();
tracing::trace!(
target: LOG_TARGET,
"Selecting candidate {}. para_id={} core={}",
candidate_hash,
candidate.descriptor.para_id,
core_idx,
);
selected_candidates.push(candidate_hash);
}
}
// now get the backed candidates corresponding to these candidate receipts
let (tx, rx) = oneshot::channel();
sender.send_message(CandidateBackingMessage::GetBackedCandidates(
relay_parent,
selected_candidates.clone(),
tx,
).into()).await;
let mut candidates = rx.await.map_err(|err| Error::CanceledBackedCandidates(err))?;
// `selected_candidates` is generated in ascending order by core index, and `GetBackedCandidates`
// _should_ preserve that property, but let's just make sure.
//
// We can't easily map from `BackedCandidate` to `core_idx`, but we know that every selected candidate
// maps to either 0 or 1 backed candidate, and the hashes correspond. Therefore, by checking them
// in order, we can ensure that the backed candidates are also in order.
let mut backed_idx = 0;
for selected in selected_candidates {
if selected == candidates.get(backed_idx).ok_or(Error::BackedCandidateOrderingProblem)?.hash() {
backed_idx += 1;
}
}
if candidates.len() != backed_idx {
Err(Error::BackedCandidateOrderingProblem)?;
}
// keep only one candidate with validation code.
let mut with_validation_code = false;
candidates.retain(|c| {
if c.candidate.commitments.new_validation_code.is_some() {
if with_validation_code {
return false
}
with_validation_code = true;
}
true
});
tracing::debug!(
target: LOG_TARGET,
"Selected {} candidates for {} cores",
candidates.len(),
availability_cores.len(),
);
Ok(candidates)
}
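// Editor's note: an added, isolated sketch of the intent of the ordering check above: every
// backed candidate returned by `GetBackedCandidates` must appear in the selected list, in
// the same relative order. Plain `u64` values stand in for candidate hashes, and the real
// code reports failures via `Error::BackedCandidateOrderingProblem` rather than returning
// a bool.
fn backed_preserves_selected_order(selected: &[u64], backed: &[u64]) -> bool {
    let mut backed_idx = 0;
    for s in selected {
        if backed_idx < backed.len() && *s == backed[backed_idx] {
            backed_idx += 1;
        }
    }
    backed_idx == backed.len()
}
// e.g. backed_preserves_selected_order(&[1, 2, 3], &[1, 3]) == true,
//      backed_preserves_selected_order(&[1, 2, 3], &[3, 1]) == false.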
/// Produces a block number one higher than that of the relay parent.
/// In the event of an invalid `relay_parent`, returns `Ok(0)`.
#[tracing::instrument(level = "trace", skip(sender), fields(subsystem = LOG_TARGET))]
async fn get_block_number_under_construction(
relay_parent: Hash,
sender: &mut impl SubsystemSender,
) -> Result<BlockNumber, Error> {
let (tx, rx) = oneshot::channel();
sender
.send_message(ChainApiMessage::BlockNumber(
relay_parent,
tx,
).into())
.await;
match rx.await.map_err(|err| Error::CanceledBlockNumber(err))? {
Ok(Some(n)) => Ok(n + 1),
Ok(None) => Ok(0),
Err(err) => Err(err.into()),
}
}
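// Editor's note (added): a worked example of the 2/3+ rule described in the doc comment
// below. The exact comparison is an assumption here, since the function body is shown
// truncated in this excerpt; the usual supermajority form is `3 * ones >= 2 * total`.
// With 4 validators and 3 of them reporting the core available: 3 * 3 = 9 >= 2 * 4 = 8,
// so the core counts as available; with only 2 of 4 it is 6 >= 8, which fails.
fn supermajority_available(ones: usize, total: usize) -> bool {
    3 * ones >= 2 * total
}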
/// The availability bitfield for a given core is the transpose
/// of a set of signed availability bitfields. It goes like this:
///
/// - construct a transverse slice along `core_idx`
/// - bitwise-or it with the availability slice
/// - count the 1 bits, compare to the total length; true on 2/3+
#[tracing::instrument(level = "trace", fields(subsystem = LOG_TARGET))]
fn bitfields_indicate_availability(
core_idx: usize,
bitfields: &[SignedAvailabilityBitfield],
availability: &CoreAvailability, | ) -> bool {
let mut availability = availability.clone();
let availability_len = availability.len();
| random_line_split |
|
lib.rs | source] oneshot::Canceled),
#[error(transparent)]
ChainApi(#[from] ChainApiError),
#[error(transparent)]
Runtime(#[from] RuntimeApiError),
#[error("failed to send message to ChainAPI")]
ChainApiMessageSend(#[source] mpsc::SendError),
#[error("failed to send message to CandidateBacking to get backed candidates")]
GetBackedCandidatesSend(#[source] mpsc::SendError),
#[error("failed to send return message with Inherents")]
InherentDataReturnChannel,
#[error("backed candidate does not correspond to selected candidate; check logic in provisioner")]
BackedCandidateOrderingProblem,
}
impl JobTrait for ProvisioningJob {
type ToJob = ProvisionerMessage;
type Error = Error;
type RunArgs = ();
type Metrics = Metrics;
const NAME: &'static str = "ProvisioningJob";
/// Run a job for the parent block indicated
//
// this function is in charge of creating and executing the job's main loop
#[tracing::instrument(skip(span, _run_args, metrics, receiver, sender), fields(subsystem = LOG_TARGET))]
fn run<S: SubsystemSender>(
relay_parent: Hash,
span: Arc<jaeger::Span>,
_run_args: Self::RunArgs,
metrics: Self::Metrics,
receiver: mpsc::Receiver<ProvisionerMessage>,
mut sender: JobSender<S>,
) -> Pin<Box<dyn Future<Output = Result<(), Self::Error>> + Send>> {
async move {
let job = ProvisioningJob::new(
relay_parent,
metrics,
receiver,
);
job.run_loop(sender.subsystem_sender(), PerLeafSpan::new(span, "provisioner")).await
}
.boxed()
}
}
impl ProvisioningJob {
fn | (
relay_parent: Hash,
metrics: Metrics,
receiver: mpsc::Receiver<ProvisionerMessage>,
) -> Self {
Self {
relay_parent,
receiver,
backed_candidates: Vec::new(),
signed_bitfields: Vec::new(),
metrics,
inherent_after: InherentAfter::new_from_now(),
awaiting_inherent: Vec::new(),
}
}
async fn run_loop(
mut self,
sender: &mut impl SubsystemSender,
span: PerLeafSpan,
) -> Result<(), Error> {
use ProvisionerMessage::{
ProvisionableData, RequestInherentData,
};
loop {
futures::select! {
msg = self.receiver.next().fuse() => match msg {
Some(RequestInherentData(_, return_sender)) => {
let _span = span.child("req-inherent-data");
let _timer = self.metrics.time_request_inherent_data();
if self.inherent_after.is_ready() {
self.send_inherent_data(sender, vec![return_sender]).await;
} else {
self.awaiting_inherent.push(return_sender);
}
}
Some(ProvisionableData(_, data)) => {
let span = span.child("provisionable-data");
let _timer = self.metrics.time_provisionable_data();
self.note_provisionable_data(&span, data);
}
None => break,
},
_ = self.inherent_after.ready().fuse() => {
let _span = span.child("send-inherent-data");
let return_senders = std::mem::take(&mut self.awaiting_inherent);
if !return_senders.is_empty() {
self.send_inherent_data(sender, return_senders).await;
}
}
}
}
Ok(())
}
async fn send_inherent_data(
&mut self,
sender: &mut impl SubsystemSender,
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
) {
if let Err(err) = send_inherent_data(
self.relay_parent,
&self.signed_bitfields,
&self.backed_candidates,
return_senders,
sender,
)
.await
{
tracing::warn!(target: LOG_TARGET, err = ?err, "failed to assemble or send inherent data");
self.metrics.on_inherent_data_request(Err(()));
} else {
self.metrics.on_inherent_data_request(Ok(()));
}
}
#[tracing::instrument(level = "trace", skip(self), fields(subsystem = LOG_TARGET))]
fn note_provisionable_data(&mut self, span: &jaeger::Span, provisionable_data: ProvisionableData) {
match provisionable_data {
ProvisionableData::Bitfield(_, signed_bitfield) => {
self.signed_bitfields.push(signed_bitfield)
}
ProvisionableData::BackedCandidate(backed_candidate) => {
let _span = span.child("provisionable-backed")
.with_para_id(backed_candidate.descriptor().para_id);
self.backed_candidates.push(backed_candidate)
}
_ => {}
}
}
}
type CoreAvailability = BitVec<bitvec::order::Lsb0, u8>;
/// The provisioner is the subsystem best suited to choosing which specific
/// backed candidates and availability bitfields should be assembled into the
/// block. To engage this functionality, a
/// `ProvisionerMessage::RequestInherentData` is sent; the response is a set of
/// non-conflicting candidates and the appropriate bitfields. Non-conflicting
/// means that there are never two distinct parachain candidates included for
/// the same parachain and that new parachain candidates cannot be included
/// until the previous one either gets declared available or expired.
///
/// The main complication here is going to be around handling
/// occupied-core-assumptions. We might have candidates that are only
/// includable when some bitfields are included. And we might have candidates
/// that are not includable when certain bitfields are included.
///
/// When we're choosing bitfields to include, the rule should be simple:
/// maximize availability. So basically, include all bitfields. And then
/// choose a coherent set of candidates along with that.
#[tracing::instrument(level = "trace", skip(return_senders, from_job), fields(subsystem = LOG_TARGET))]
async fn send_inherent_data(
relay_parent: Hash,
bitfields: &[SignedAvailabilityBitfield],
candidates: &[CandidateReceipt],
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
from_job: &mut impl SubsystemSender,
) -> Result<(), Error> {
let availability_cores = request_availability_cores(relay_parent, from_job)
.await
.await.map_err(|err| Error::CanceledAvailabilityCores(err))??;
let bitfields = select_availability_bitfields(&availability_cores, bitfields);
let candidates = select_candidates(
&availability_cores,
&bitfields,
candidates,
relay_parent,
from_job,
).await?;
let inherent_data = ProvisionerInherentData {
bitfields,
backed_candidates: candidates,
disputes: Vec::new(), // until disputes are implemented.
};
for return_sender in return_senders {
return_sender.send(inherent_data.clone()).map_err(|_data| Error::InherentDataReturnChannel)?;
}
Ok(())
}
/// In general, we want to pick all the bitfields. However, we have the following constraints:
///
/// - not more than one per validator
/// - each 1 bit must correspond to an occupied core
///
/// If we have too many, an arbitrary selection policy is fine. For purposes of maximizing availability,
/// we pick the one with the greatest number of 1 bits.
///
/// Note: This does not enforce any sorting precondition on the output; the ordering there will be unrelated
/// to the sorting of the input.
#[tracing::instrument(level = "trace", fields(subsystem = LOG_TARGET))]
fn select_availability_bitfields(
cores: &[CoreState],
bitfields: &[SignedAvailabilityBitfield],
) -> Vec<SignedAvailabilityBitfield> {
let mut selected: BTreeMap<ValidatorIndex, SignedAvailabilityBitfield> = BTreeMap::new();
'a:
for bitfield in bitfields.iter().cloned() {
if bitfield.payload().0.len() != cores.len() {
continue
}
let is_better = selected.get(&bitfield.validator_index())
.map_or(true, |b| b.payload().0.count_ones() < bitfield.payload().0.count_ones());
if !is_better { continue }
for (idx, _) in cores.iter().enumerate().filter(|v| !v.1.is_occupied()) {
// Bit is set for an unoccupied core - invalid
if *bitfield.payload().0.get(idx).as_deref().unwrap_or(&false) {
continue 'a
}
}
let _ = selected.insert(bitfield.validator_index(), bitfield);
}
selected.into_iter().map(|(_, b)| b).collect()
}
/// Determine which cores are free, and then to the degree possible, pick a candidate appropriate to each free core.
#[tracing::instrument(level = "trace", skip(sender), fields(subsystem = LOG_TARGET))]
async fn select_candidates(
availability_cores: &[CoreState],
bitfields: &[SignedAvailabilityBitfield],
c | new | identifier_name |
lib.rs | ] oneshot::Canceled),
#[error(transparent)]
ChainApi(#[from] ChainApiError),
#[error(transparent)]
Runtime(#[from] RuntimeApiError),
#[error("failed to send message to ChainAPI")]
ChainApiMessageSend(#[source] mpsc::SendError),
#[error("failed to send message to CandidateBacking to get backed candidates")]
GetBackedCandidatesSend(#[source] mpsc::SendError),
#[error("failed to send return message with Inherents")]
InherentDataReturnChannel,
#[error("backed candidate does not correspond to selected candidate; check logic in provisioner")]
BackedCandidateOrderingProblem,
}
impl JobTrait for ProvisioningJob {
type ToJob = ProvisionerMessage;
type Error = Error;
type RunArgs = ();
type Metrics = Metrics;
const NAME: &'static str = "ProvisioningJob";
/// Run a job for the parent block indicated
//
// this function is in charge of creating and executing the job's main loop
#[tracing::instrument(skip(span, _run_args, metrics, receiver, sender), fields(subsystem = LOG_TARGET))]
fn run<S: SubsystemSender>(
relay_parent: Hash,
span: Arc<jaeger::Span>,
_run_args: Self::RunArgs,
metrics: Self::Metrics,
receiver: mpsc::Receiver<ProvisionerMessage>,
mut sender: JobSender<S>,
) -> Pin<Box<dyn Future<Output = Result<(), Self::Error>> + Send>> {
async move {
let job = ProvisioningJob::new(
relay_parent,
metrics,
receiver,
);
job.run_loop(sender.subsystem_sender(), PerLeafSpan::new(span, "provisioner")).await
}
.boxed()
}
}
impl ProvisioningJob {
fn new(
relay_parent: Hash,
metrics: Metrics,
receiver: mpsc::Receiver<ProvisionerMessage>,
) -> Self {
Self {
relay_parent,
receiver,
backed_candidates: Vec::new(),
signed_bitfields: Vec::new(),
metrics,
inherent_after: InherentAfter::new_from_now(),
awaiting_inherent: Vec::new(),
}
}
async fn run_loop(
mut self,
sender: &mut impl SubsystemSender,
span: PerLeafSpan,
) -> Result<(), Error> {
use ProvisionerMessage::{
ProvisionableData, RequestInherentData,
};
loop {
futures::select! {
msg = self.receiver.next().fuse() => match msg {
Some(RequestInherentData(_, return_sender)) => {
let _span = span.child("req-inherent-data");
let _timer = self.metrics.time_request_inherent_data();
if self.inherent_after.is_ready() {
self.send_inherent_data(sender, vec![return_sender]).await;
} else {
self.awaiting_inherent.push(return_sender);
}
}
Some(ProvisionableData(_, data)) => {
let span = span.child("provisionable-data");
let _timer = self.metrics.time_provisionable_data();
self.note_provisionable_data(&span, data);
}
None => break,
},
_ = self.inherent_after.ready().fuse() => {
let _span = span.child("send-inherent-data");
let return_senders = std::mem::take(&mut self.awaiting_inherent);
if !return_senders.is_empty() {
self.send_inherent_data(sender, return_senders).await;
}
}
}
}
Ok(())
}
async fn send_inherent_data(
&mut self,
sender: &mut impl SubsystemSender,
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
) {
if let Err(err) = send_inherent_data(
self.relay_parent,
&self.signed_bitfields,
&self.backed_candidates,
return_senders,
sender,
)
.await
{
tracing::warn!(target: LOG_TARGET, err = ?err, "failed to assemble or send inherent data");
self.metrics.on_inherent_data_request(Err(()));
} else |
}
#[tracing::instrument(level = "trace", skip(self), fields(subsystem = LOG_TARGET))]
fn note_provisionable_data(&mut self, span: &jaeger::Span, provisionable_data: ProvisionableData) {
match provisionable_data {
ProvisionableData::Bitfield(_, signed_bitfield) => {
self.signed_bitfields.push(signed_bitfield)
}
ProvisionableData::BackedCandidate(backed_candidate) => {
let _span = span.child("provisionable-backed")
.with_para_id(backed_candidate.descriptor().para_id);
self.backed_candidates.push(backed_candidate)
}
_ => {}
}
}
}
type CoreAvailability = BitVec<bitvec::order::Lsb0, u8>;
/// The provisioner is the subsystem best suited to choosing which specific
/// backed candidates and availability bitfields should be assembled into the
/// block. To engage this functionality, a
/// `ProvisionerMessage::RequestInherentData` is sent; the response is a set of
/// non-conflicting candidates and the appropriate bitfields. Non-conflicting
/// means that there are never two distinct parachain candidates included for
/// the same parachain and that new parachain candidates cannot be included
/// until the previous one either gets declared available or expired.
///
/// The main complication here is going to be around handling
/// occupied-core-assumptions. We might have candidates that are only
/// includable when some bitfields are included. And we might have candidates
/// that are not includable when certain bitfields are included.
///
/// When we're choosing bitfields to include, the rule should be simple:
/// maximize availability. So basically, include all bitfields. And then
/// choose a coherent set of candidates along with that.
#[tracing::instrument(level = "trace", skip(return_senders, from_job), fields(subsystem = LOG_TARGET))]
async fn send_inherent_data(
relay_parent: Hash,
bitfields: &[SignedAvailabilityBitfield],
candidates: &[CandidateReceipt],
return_senders: Vec<oneshot::Sender<ProvisionerInherentData>>,
from_job: &mut impl SubsystemSender,
) -> Result<(), Error> {
let availability_cores = request_availability_cores(relay_parent, from_job)
.await
.await.map_err(|err| Error::CanceledAvailabilityCores(err))??;
let bitfields = select_availability_bitfields(&availability_cores, bitfields);
let candidates = select_candidates(
&availability_cores,
&bitfields,
candidates,
relay_parent,
from_job,
).await?;
let inherent_data = ProvisionerInherentData {
bitfields,
backed_candidates: candidates,
disputes: Vec::new(), // until disputes are implemented.
};
for return_sender in return_senders {
return_sender.send(inherent_data.clone()).map_err(|_data| Error::InherentDataReturnChannel)?;
}
Ok(())
}
/// In general, we want to pick all the bitfields. However, we have the following constraints:
///
/// - not more than one per validator
/// - each 1 bit must correspond to an occupied core
///
/// If we have too many, an arbitrary selection policy is fine. For purposes of maximizing availability,
/// we pick the one with the greatest number of 1 bits.
///
/// Note: This does not enforce any sorting precondition on the output; the ordering there will be unrelated
/// to the sorting of the input.
#[tracing::instrument(level = "trace", fields(subsystem = LOG_TARGET))]
fn select_availability_bitfields(
cores: &[CoreState],
bitfields: &[SignedAvailabilityBitfield],
) -> Vec<SignedAvailabilityBitfield> {
let mut selected: BTreeMap<ValidatorIndex, SignedAvailabilityBitfield> = BTreeMap::new();
'a:
for bitfield in bitfields.iter().cloned() {
if bitfield.payload().0.len() != cores.len() {
continue
}
let is_better = selected.get(&bitfield.validator_index())
.map_or(true, |b| b.payload().0.count_ones() < bitfield.payload().0.count_ones());
if !is_better { continue }
for (idx, _) in cores.iter().enumerate().filter(|v| !v.1.is_occupied()) {
// Bit is set for an unoccupied core - invalid
if *bitfield.payload().0.get(idx).as_deref().unwrap_or(&false) {
continue 'a
}
}
let _ = selected.insert(bitfield.validator_index(), bitfield);
}
selected.into_iter().map(|(_, b)| b).collect()
}
/// Determine which cores are free, and then to the degree possible, pick a candidate appropriate to each free core.
#[tracing::instrument(level = "trace", skip(sender), fields(subsystem = LOG_TARGET))]
async fn select_candidates(
availability_cores: &[CoreState],
bitfields: &[SignedAvailabilityBitfield],
c | {
self.metrics.on_inherent_data_request(Ok(()));
} | conditional_block |