Authorities released a sketch of a man who is a suspect in a quadruple shooting in Pelzer. (Photo: Contributed)
The Greenville County Sheriff's Office on Thursday released a sketch of a man described as a suspect in a quadruple shooting in Pelzer.
One person died and three others were injured in a flurry of gunfire April 17 in the 200 block of Eastview Road. Authorities have not reported any arrests in connection with the shooting, which may have occurred over a dispute about money in a drug deal, the Sheriff's Office said.
The Sheriff's Office described the suspects as:
A skinny black male weighing about 120 to 130 pounds. He was about 22 or 23 years old and wearing jeans and a black cutoff T-shirt. He was about 5 feet 6 inches tall.
A black male with long dreadlocks that were dyed blond. He weighed about 160 to 170 pounds and wore a bright green and yellow Bob Marley jacket. He was about 5 feet 10 inches tall.
A black male wearing a white hat and white shirt. He was about 6 feet, 1 inch tall and possibly had dreadlocks.
The suspect vehicle was described as a silver Toyota with four doors. The vehicle had tinted windows and no damage or paint blemishes. The mid-to-late 2000s car was driven by a heavyset black woman with short hair, authorities said.
Anyone with information is asked to call Crime Stoppers at 23-CRIME.
|
brg1: a putative murine homologue of the Drosophila brahma gene, a homeotic gene regulator. To identify potential regulators of Hox gene expression in mice, we have screened for genes highly related to brahma (brm), an activator of homeotic gene expression in Drosophila. We have cloned a murine gene, brg1, which, like brm, encodes a member of the DEGH protein family, suggesting that brg1 may be a DNA-dependent ATPase or a helicase. brg1 also contains a bromodomain which may be involved in transactivation. Although the sequences of a number of mammalian genes similar to Drosophila brm have been reported, they are related to brm only within specific portions of the putative helicase region, while brg1 is highly similar to brm throughout and outside of this region. A 5.8-kb brg1 transcript was detected throughout embryogenesis and in numerous adult tissues. RNA in situ hybridization revealed widespread expression of brg1 in embryonic tissues. At later stages of embryogenesis, differences in levels of brg1 expression were seen among different tissues. brg1 expression was highest in the spinal cord, the brain, parts of the peripheral nervous system, and the vertebral column. These expression domains within the spinal cord and vertebral column encompass major regions of Hox gene expression. Within the spinal cord, brain, and retina, mRNA levels were higher in regions consisting of differentiated cells than in regions consisting of undifferentiated, proliferating cells. These patterns of brg1 expression are consistent with a possible role for brg1 in Hox gene regulation as well as in other regulatory pathways. |
The need for mapping personal goals to exercise dosage in community-based exercise programs for people with Parkinson's disease ABSTRACT Purpose: Community-based exercise can support long-term management of Parkinson's disease, although it is not known if personal goals are met in these programs. The objectives of this study were to: examine the goals of community-based exercise programs from the participant and instructor perspectives; establish the extent to which these programs meet self-described exercise outcomes; and explore participant and instructor perspectives on barriers to meeting exercise expectations. Materials and Methods: This study explores the experiences of people with Parkinson's disease participating in a structured exercise program at six community sites. A mixed-methods approach was used, including participant and instructor interviews, assessment of exercise intensity, and mapping of exercise dosage to participant goals. Twenty-four exercise participants provided interview, quality of life, and exercise intensity data. Results: Twenty-one participants exercised for primary management of their Parkinson's disease. None met the exercise dosage necessary to meet this primary objective, although 60% met the exercise dosage required to prevent disuse deconditioning. Participants and instructors did not describe similar goals for the community-based exercise program. Conclusions: Community-based exercise programs could be optimized by better aligning participant goals and exercise intensity. |
package qq.client.view;
import qq.client.tools.ManageQqChat;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.io.ObjectOutputStream;
import qq.client.tools.*;
import qq.common.Message;
import qq.common.MessageType;
public class qqFriendsList extends JFrame implements ActionListener,MouseListener {
//Components for the first card (friends list)
JPanel jphy1,jphy2,jphy3;
JButton jphy_jb1,jphy_jb2,jphy_jb3;
JScrollPane jsp1;
JLabel []jlbs1;
//Components for the second card (strangers)
JPanel jpmsr1,jpmsr2,jpmsr3;
JButton jpmsr_jb1,jpmsr_jb2,jpmsr_jb3;
JScrollPane jsp2;
JLabel []jlbs2;
//Group chat button
JButton jb_qun;
//Card layout for the whole JFrame
CardLayout cl;
String owner;
public qqFriendsList(String ownerId){
this.owner = ownerId;
//Build the first card (friends list)
jphy_jb1 = new JButton("我的好友");
jphy_jb2 = new JButton("陌生人");
jphy_jb2.addActionListener(this); //register listener
jphy_jb3 = new JButton("群聊");
jphy_jb3.addActionListener(this);
jphy1 = new JPanel(new BorderLayout());
//Assume 12 close friends
jphy2 = new JPanel((new GridLayout(12,1,4,4)));
//Initialize the 12 friend labels in jphy2
jlbs1 = new JLabel[12];
for(int i = 0;i < jlbs1.length;i++){
jlbs1[i] = new JLabel(i+1+"",new ImageIcon("image/mine.jpg"),JLabel.LEFT);
jlbs1[i].setEnabled(false);
if(jlbs1[i].getText().equals(ownerId)){
jlbs1[i].setEnabled(true);
}
jlbs1[i].addMouseListener(this);
jphy2.add(jlbs1[i]);
}
jphy3 = new JPanel(new GridLayout(2,1));
//Add the two buttons to jphy3
jphy3.add(jphy_jb2);
jphy3.add(jphy_jb3);
jsp1 = new JScrollPane(jphy2);
//Add all components to the outer panel jphy1
jphy1.add(jphy_jb1,"North");
jphy1.add(jsp1,"Center");
jphy1.add(jphy3,"South");
/* Build the second card (strangers) */
jpmsr_jb1 = new JButton("我的好友");
jpmsr_jb1.addActionListener(this);
jpmsr_jb2 = new JButton("陌生人");
jpmsr_jb3 = new JButton("群聊");
jpmsr_jb3.addActionListener(this);
jpmsr1 = new JPanel(new BorderLayout());
//Assume 50 friends
jpmsr2 = new JPanel((new GridLayout(50,1,4,4)));
//Initialize the 50 labels in jpmsr2
jlbs2 = new JLabel[50];
for(int i = 0;i < jlbs2.length;i++){
jlbs2[i] = new JLabel(i+1+"",new ImageIcon("image/mine.jpg"),JLabel.LEFT);
jlbs2[i].addMouseListener(this);
jpmsr2.add(jlbs2[i]);
}
jpmsr3 = new JPanel(new GridLayout(2,1));
//Add the two buttons to jpmsr3
jpmsr3.add(jpmsr_jb1);
jpmsr3.add(jpmsr_jb2);
jsp2 = new JScrollPane(jpmsr2);
//Add all components to the outer panel jpmsr1
jpmsr1.add(jpmsr3,"North");
jpmsr1.add(jsp2,"Center");
jpmsr1.add(jpmsr_jb3,"South");
/* The jb_qun group chat button */
jb_qun = new JButton("群聊");
//Add the cards to the JFrame
cl = new CardLayout();
this.setLayout(cl);
this.add(jphy1,"1");
this.add(jpmsr1,"2");
this.add(jb_qun,"3");
this.setVisible(true);
this.setSize(200,400);
this.setTitle(ownerId);
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
public void actionPerformed(ActionEvent e){
if(e.getSource()==jphy_jb2){
cl.show(this.getContentPane(),"2");
}
else if(e.getSource()==jpmsr_jb1){
cl.show(this.getContentPane(),"1");
}
else if(e.getSource()==jphy_jb3||e.getSource()==jpmsr_jb3){
//Open a new group chat window
System.out.println("你将进行群聊!");
groupChat gc = new groupChat(this.owner);
//Register the group chat window with its manager class
ManageGroupChat.addGroupChat(this.owner,gc);
}
}
public void mouseClicked(MouseEvent e){
//Handle the user's double-click and get the friend's name
if(e.getClickCount()==2){
//Get the friend's name
String friendName = ((JLabel)e.getSource()).getText();
//System.out.println("你将和" + friendName + "聊天!");
qqChat qc = new qqChat(this.owner,friendName);
//Register the chat window with its manager class
ManageQqChat.addQqChat(this.owner+" "+friendName, qc);
/*
Thread t = new Thread(qc);
t.start();
*/
}
}
public void mouseEntered(MouseEvent e){
JLabel jl = (JLabel)e.getSource();
jl.setForeground(Color.red);
}
public void mouseExited(MouseEvent e){
JLabel jl = (JLabel)e.getSource();
jl.setForeground(Color.black);
}
public void mousePressed(MouseEvent e){
}
public void mouseReleased(MouseEvent e){
}
//Mark newly online friends
public void updateFriendsList(Message m){
//Get the friend-list info that needs updating
String online_friends[] = m.getCon().split(" ");
//Update the status of each entry
for(int i=0;i<online_friends.length;i++){
//Try to update the close-friends list
try {
jlbs1[Integer.parseInt(online_friends[i]) - 1].setEnabled(true);
}catch (Exception e){
e.printStackTrace();
}
//Try to update the friends list (TODO: add the user-id offset)
try {
jlbs2[Integer.parseInt(online_friends[i]) - 1].setEnabled(true);
}catch (Exception e){
e.printStackTrace();
}
}
}
//Grey out the avatar of a friend who has gone offline
public void updateFriendsList2(Message m){
//Get the friend-list info that needs updating
String offline_friend = m.getCon();
//Try to update the close-friends list
try {
jlbs1[Integer.parseInt(offline_friend)-1].setEnabled(false); //remember to subtract 1 here, otherwise the indexing is off
System.out.println(Integer.parseInt(offline_friend) + "头像成功变灰!!!");
}catch (Exception e){
e.printStackTrace();
}
//Try to update the friends list (TODO: add the user-id offset)
try {
jlbs2[Integer.parseInt(offline_friend)-1].setEnabled(false);
}catch (Exception e){
e.printStackTrace();
}
}
//Work to run on exit, via Runtime.addShutdownHook; see: https://blog.csdn.net/qq7342272/article/details/6852734
//Register actions to perform before the application exits, e.g. closing network or database connections
//Here we send the server an offline notification
public void doShutDownWork() {
Runtime run=Runtime.getRuntime();//runtime object for the current Java application
run.addShutdownHook(new Thread(){ //register a JVM shutdown hook
@Override
public void run() {
//Work performed when the program exits
Message ms = new Message();
ms.setSender(owner);
ms.setMesType(MessageType.message_off_line); //when the client exits, notify the server that it is going offline
try{
ObjectOutputStream oos = new ObjectOutputStream(ManageClientConServerThread.getClientConServerThread(
owner).getS().getOutputStream());
oos.writeObject(ms);
}catch(Exception e){
e.printStackTrace();
}
System.out.println(owner + "程序结束调用");
}
});
}
public static void main(String args[]){
//qqFriendsList qqfl = new qqFriendsList();
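//Example usage (hypothetical owner id; in the real application the login window creates this frame):
//new qqFriendsList("1");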
}
}
|
Alpha-1-antitrypsin replacement therapy: will its efficacy ever be proved? The study concerned with the treatment of alpha-1-antitrypsin (AT) deficiency, which appears in this issue of the Journal, requires that we all give serious thought to this controversial and difficult topic. Before dealing directly with the issues raised by the design of this study, however, it is necessary to give some account of the events that led up to it. AT deficiency is a hereditary disorder that can lead to the development of disabling pulmonary emphysema at a relatively early age. AT is the main serum inhibitor of proteolytic enzymes and one of its most important functions is thought to be the inactivation of the potent elastase produced by polymorphonuclear leucocytes. In the severe form of the deficiency (subjects homozygous for Pi type Z) the serum AT concentration may be no more than 10-20% of normal and in these circumstances the pulmonary elastic tissue may be degraded by the unopposed action of the leucocyte elastase. Cigarette smoking can seriously exacerbate this effect. The disability and early mortality that can result from this disorder make it entirely logical to attempt replacement of the missing serum protein fraction and potentially suitable therapy has been available for some years. The current commercial products require intravenous administration and have a half-life of no more than 5 days. In addition, such treatment might have to be administered for many years and perhaps for the lifetime of the patient. There has been extensive debate on how this therapy should be evaluated and in particular whether a randomized controlled trial should be performed. The problems of such a trial were recognized to be substantial; emphysema usually evolves slowly over a period of many years and any trial requiring serial evaluation of lung function would need to continue for a corresponding period of time. The possibility of conducting a randomized controlled trial was discussed by a workshop supported by the USA National Heart, Lung and Blood Institute (NHLBI). Pilot studies had already shown that, in patients with severe deficiency, the serum AT concentration could be raised from a mean of 37 mg·dL-1 to 108 mg·dL-1 by intravenous infusion of 4 g of AT replacement. The members of the Workshop considered that the optimal form of trial would be one carried out in patients with relatively mild disease and that this would require measurement of the decline in forced expiratory volume in one second (FEV1) in treated and untreated groups over a period of some years. At that time, there was no data on the natural rate of decline in lung function in AT deficient patients and the authors, while recognizing the possible errors of this approach, were obliged to base their estimates of sample size on values obtained in cases of non-AT deficient chronic obstructive pulmonary disease (COPD) from previous studies. They came to the conclusion that a total of over 300 patients would be needed to show a 50% reduction in the rate of decline in FEV1, in a study lasting for 5 yrs. In a series of AT deficient cases published a few years later, 60 cases homozygous for Pi type Z, who had been followed up for some years, were identified; their mean rate of decline in FEV1 was 93 mL·yr-1 with a large standard deviation (143 mL·yr-1), as is often found in this type of survey.
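As a rough illustration of how such sample sizes arise (a back-of-the-envelope sketch assuming a two-sided 5% significance level and 80-90% power, not the authors' own calculation): with the standard two-sample approximation n ≈ 2(z(1-α/2) + z(1-β))²σ²/Δ² per group, σ = 143 mL·yr-1 and a targeted difference Δ of 40% of 93 mL·yr-1 (about 37 mL·yr-1), one obtains roughly 230-310 patients per group, the same order of magnitude as the figure the authors went on to quote.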
The authors concluded again that large numbers would be required, 253 in both treated and untreated groups in order to detect a 40% reduction in the rate of decline in FEV1 in a study lasting 3 yrs. It was pointed out that, nevertheless, this would be far preferable to a mortality study where much more severely affected patients would be involved and the numbers required would be even greater. The production of AT replacement therapy on a commercial scale was developed by Cutter Biological, Miles Inc. (subsequently Bayer AG, Leverkusen, Germany) and by the Centre Régional de Transfusion Sanguine (Lille, France); the commercial forms of AT have similar activity to the native protein. Pasteurization was shown to bring about rapid inactivation of retroviruses including human immunodeficiency virus (HIV) and the serum AT levels in deficient subjects could be raised to normal without serious adverse reactions. Alongside the development of commercial AT replacement therapy, a number of studies of the natural history of the disease in several countries appeared in the literature. The average rates of decline in FEV1 were of the same order in the majority of studies but there were wide variations among patients, as demonstrated by the large standard deviations. Further calculations suggested that the required number of patients was perhaps not so great as previously envisaged and could be achieved by a multinational European study; a parallel group study was recommended with patients randomized to treatment and nontreatment groups, ideally on a double-blind basis, the patients having FEV1 ranging 35-75% of the reference value, with |
/**
* Error Based SQLInjection is the easiest way for extracting data and a very dangerous way which
* can lead to serious impacts and can compromise the entire system.
*
* @author [email protected] KSASAN
*/
@VulnerableAppRestController(
descriptionLabel = "SQL_INJECTION_VULNERABILITY",
type = {VulnerabilityType.SQL_INJECTION},
value = "ErrorBasedSQLInjectionVulnerability")
public class ErrorBasedSQLInjectionVulnerability {
private JdbcTemplate applicationJdbcTemplate;
private static final transient Logger LOGGER =
LogManager.getLogger(ErrorBasedSQLInjectionVulnerability.class);
private static final Function<Exception, String> GENERIC_EXCEPTION_RESPONSE_FUNCTION =
(ex) -> "{ \"isCarPresent\": false, \"moreInfo\": " + ex.getMessage() + "}";
static final String CAR_IS_NOT_PRESENT_RESPONSE = "{ \"isCarPresent\": false}";
static final Function<String, String> CAR_IS_PRESENT_RESPONSE =
(carInformation) ->
"{ \"isCarPresent\": true, \"carInformation\":" + carInformation + "}";
public ErrorBasedSQLInjectionVulnerability(
@Qualifier("applicationJdbcTemplate") JdbcTemplate applicationJdbcTemplate) {
this.applicationJdbcTemplate = applicationJdbcTemplate;
}
@AttackVector(
vulnerabilityExposed = VulnerabilitySubType.ERROR_BASED_SQL_INJECTION,
description = "ERROR_SQL_INJECTION_URL_PARAM_APPENDED_DIRECTLY_TO_QUERY",
payload = "ERROR_BASED_SQL_INJECTION_PAYLOAD_LEVEL_1")
@VulnerableAppRequestMapping(
value = LevelConstants.LEVEL_1,
descriptionLabel = "URL_CONTAINING_CAR_ID_PARAMETER",
htmlTemplate = "LEVEL_1/SQLInjection_Level1",
parameterName = Constants.ID,
sampleValues = "1")
public ResponseEntity<String> doesCarInformationExistsLevel1(
@RequestParam Map<String, String> queryParams) {
String id = queryParams.get(Constants.ID);
BodyBuilder bodyBuilder = ResponseEntity.status(HttpStatus.OK);
try {
ResponseEntity<String> response =
applicationJdbcTemplate.query(
"select * from cars where id=" + id,
(rs) -> {
if (rs.next()) {
CarInformation carInformation = new CarInformation();
carInformation.setId(rs.getInt(1));
carInformation.setName(rs.getString(2));
carInformation.setImagePath(rs.getString(3));
try {
return bodyBuilder.body(
CAR_IS_PRESENT_RESPONSE.apply(
JSONSerializationUtils.serialize(
carInformation)));
} catch (JsonProcessingException e) {
LOGGER.error("Following error occurred", e);
return bodyBuilder.body(
GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(e));
}
} else {
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability
.CAR_IS_NOT_PRESENT_RESPONSE);
}
});
return response;
} catch (Exception ex) {
LOGGER.error("Following error occurred", ex);
return bodyBuilder.body(GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(ex));
}
}
@AttackVector(
vulnerabilityExposed = VulnerabilitySubType.ERROR_BASED_SQL_INJECTION,
description =
"ERROR_SQL_INJECTION_URL_PARAM_WRAPPED_WITH_SINGLE_QUOTE_APPENDED_TO_QUERY",
payload = "ERROR_BASED_SQL_INJECTION_PAYLOAD_LEVEL_2")
@VulnerableAppRequestMapping(
value = LevelConstants.LEVEL_2,
descriptionLabel = "URL_CONTAINING_CAR_ID_PARAMETER",
htmlTemplate = "LEVEL_1/SQLInjection_Level1",
parameterName = Constants.ID,
sampleValues = "1")
public ResponseEntity<String> doesCarInformationExistsLevel2(
@RequestParam Map<String, String> queryParams) {
String id = queryParams.get(Constants.ID);
BodyBuilder bodyBuilder = ResponseEntity.status(HttpStatus.OK);
try {
ResponseEntity<String> response =
applicationJdbcTemplate.query(
"select * from cars where id='" + id + "'",
(rs) -> {
if (rs.next()) {
CarInformation carInformation = new CarInformation();
carInformation.setId(rs.getInt(1));
carInformation.setName(rs.getString(2));
carInformation.setImagePath(rs.getString(3));
try {
return bodyBuilder.body(
CAR_IS_PRESENT_RESPONSE.apply(
JSONSerializationUtils.serialize(
carInformation)));
} catch (JsonProcessingException e) {
LOGGER.error("Following error occurred", e);
return bodyBuilder.body(
GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(e));
}
} else {
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability
.CAR_IS_NOT_PRESENT_RESPONSE);
}
});
return response;
} catch (Exception ex) {
LOGGER.error("Following error occurred", ex);
return bodyBuilder.body(GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(ex));
}
}
// https://stackoverflow.com/questions/15537368/how-can-sanitation-that-escapes-single-quotes-be-defeated-by-sql-injection-in-sq
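// Note (illustrative, not from the original project docs): stripping single quotes alone is not a
// reliable defense; as the linked discussion explains, quote escaping/stripping can be defeated in some
// database configurations (e.g. backslash escape sequences or multi-byte character sets), so the
// concatenated, quoted query built below remains injectable.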
@AttackVector(
vulnerabilityExposed = VulnerabilitySubType.ERROR_BASED_SQL_INJECTION,
description =
"ERROR_SQL_INJECTION_URL_PARAM_REMOVES_SINGLE_QUOTE_WRAPPED_WITH_SINGLE_QUOTE_APPENDED_TO_QUERY",
payload = "ERROR_BASED_SQL_INJECTION_PAYLOAD_LEVEL_3")
@VulnerableAppRequestMapping(
value = LevelConstants.LEVEL_3,
descriptionLabel = "URL_CONTAINING_CAR_ID_PARAMETER",
htmlTemplate = "LEVEL_1/SQLInjection_Level1",
parameterName = Constants.ID,
sampleValues = "1")
public ResponseEntity<String> doesCarInformationExistsLevel3(
@RequestParam Map<String, String> queryParams) {
String id = queryParams.get(Constants.ID);
id = id.replaceAll("'", "");
BodyBuilder bodyBuilder = ResponseEntity.status(HttpStatus.OK);
bodyBuilder.body(ErrorBasedSQLInjectionVulnerability.CAR_IS_NOT_PRESENT_RESPONSE);
try {
ResponseEntity<String> response =
applicationJdbcTemplate.query(
"select * from cars where id='" + id + "'",
(rs) -> {
if (rs.next()) {
CarInformation carInformation = new CarInformation();
carInformation.setId(rs.getInt(1));
carInformation.setName(rs.getString(2));
carInformation.setImagePath(rs.getString(3));
try {
return bodyBuilder.body(
CAR_IS_PRESENT_RESPONSE.apply(
JSONSerializationUtils.serialize(
carInformation)));
} catch (JsonProcessingException e) {
LOGGER.error("Following error occurred", e);
return bodyBuilder.body(
GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(e));
}
} else {
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability
.CAR_IS_NOT_PRESENT_RESPONSE);
}
});
return response;
} catch (Exception ex) {
LOGGER.error("Following error occurred", ex);
return bodyBuilder.body(GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(ex));
}
}
// The assumption that merely creating a PreparedStatement object makes the query safe is wrong. You
// need to use a parameterized query properly.
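// For contrast (illustrative): building the statement below as
// conn.prepareStatement("select * from cars where id='" + id + "'") is still injectable,
// whereas binding the value through a placeholder, e.g. prepareStatement("select * from cars where id=?")
// with setString(1, id) as done in LEVEL_5, is not.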
@AttackVector(
vulnerabilityExposed = VulnerabilitySubType.ERROR_BASED_SQL_INJECTION,
description = "ERROR_SQL_INJECTION_URL_PARAM_APPENDED_TO_PARAMETERIZED_QUERY",
payload = "ERROR_BASED_SQL_INJECTION_PAYLOAD_LEVEL_4")
@VulnerableAppRequestMapping(
value = LevelConstants.LEVEL_4,
descriptionLabel = "URL_CONTAINING_CAR_ID_PARAMETER",
htmlTemplate = "LEVEL_1/SQLInjection_Level1",
parameterName = Constants.ID,
sampleValues = "1")
public ResponseEntity<String> doesCarInformationExistsLevel4(
@RequestParam Map<String, String> queryParams) {
final String id = queryParams.get(Constants.ID).replaceAll("'", "");
BodyBuilder bodyBuilder = ResponseEntity.status(HttpStatus.OK);
bodyBuilder.body(ErrorBasedSQLInjectionVulnerability.CAR_IS_NOT_PRESENT_RESPONSE);
try {
ResponseEntity<String> response =
applicationJdbcTemplate.query(
(conn) ->
conn.prepareStatement(
"select * from cars where id='" + id + "'"),
(ps) -> {},
(rs) -> {
if (rs.next()) {
CarInformation carInformation = new CarInformation();
carInformation.setId(rs.getInt(1));
carInformation.setName(rs.getString(2));
carInformation.setImagePath(rs.getString(3));
try {
return bodyBuilder.body(
CAR_IS_PRESENT_RESPONSE.apply(
JSONSerializationUtils.serialize(
carInformation)));
} catch (JsonProcessingException e) {
LOGGER.error("Following error occurred", e);
return bodyBuilder.body(
GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(e));
}
} else {
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability
.CAR_IS_NOT_PRESENT_RESPONSE);
}
});
return response;
} catch (Exception ex) {
LOGGER.error("Following error occurred", ex);
return bodyBuilder.body(GENERIC_EXCEPTION_RESPONSE_FUNCTION.apply(ex));
}
}
@VulnerableAppRequestMapping(
value = LevelConstants.LEVEL_5,
variant = Variant.SECURE,
descriptionLabel = "URL_CONTAINING_CAR_ID_PARAMETER",
htmlTemplate = "LEVEL_1/SQLInjection_Level1",
parameterName = Constants.ID,
sampleValues = "1")
public ResponseEntity<String> doesCarInformationExistsLevel5(
@RequestParam Map<String, String> queryParams) {
final String id = queryParams.get(Constants.ID);
BodyBuilder bodyBuilder = ResponseEntity.status(HttpStatus.OK);
bodyBuilder.body(ErrorBasedSQLInjectionVulnerability.CAR_IS_NOT_PRESENT_RESPONSE);
try {
ResponseEntity<String> responseEntity =
applicationJdbcTemplate.query(
(conn) -> conn.prepareStatement("select * from cars where id=?"),
(prepareStatement) -> {
prepareStatement.setString(1, id);
},
(rs) -> {
CarInformation carInformation = new CarInformation();
if (rs.next()) {
carInformation.setId(rs.getInt(1));
carInformation.setName(rs.getString(2));
carInformation.setImagePath(rs.getString(3));
try {
return bodyBuilder.body(
CAR_IS_PRESENT_RESPONSE.apply(
JSONSerializationUtils.serialize(
carInformation)));
} catch (JsonProcessingException e) {
LOGGER.error("Following error occurred", e);
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability
.CAR_IS_NOT_PRESENT_RESPONSE);
}
} else {
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability
.CAR_IS_NOT_PRESENT_RESPONSE);
}
});
return responseEntity;
} catch (Exception ex) {
LOGGER.error("Following error occurred", ex);
return bodyBuilder.body(
ErrorBasedSQLInjectionVulnerability.CAR_IS_NOT_PRESENT_RESPONSE);
}
}
} |
A Canadian tourist fell to his death while riding a famed zipline in Chiang Mai, Thailand.
According to local media reports, 25-year-old Spencer Charles plunged to his death after the safety locks failed on the zipline operated by the Flight of the Gibbon company.
The accident took place on April 13, moments after the man took off from the safety platform.
An investigation showed that the faulty safety locks caused the cable holding him to disconnect, and he fell 100 metres into a rocky creek, where he was later found dead.
Local police also suspected that the equipment used on the zipline was not able to bear Charles' weight. According to Flight of the Gibbon's website, riders have to be below 125 kg to be able to ride the zipline.
However, Charles reportedly weighed 125 kg and was only held by three cables when safety requirements needed eight cables.
Local authorities have called for the zipline to suspend its services while it undergoes police investigation and safety checks. The Flight of the Gibbon company has also accepted full responsibility for the accident and has offered extra compensation to the victim's family.
The zipline operator markets itself as 'one of the longest single ziplines in Asia', and has the 'highest safety standards' in the region. It is also known to be the only zipline operator to allow riders to be alongside wildlife, like the Gibbon monkeys, hence their name.
This is not the first time that the zipline operator in Chiang Mai has encountered trouble. In 2016, Flight of the Gibbon was closed for safety checks after three Israeli tourists got injured on the ride. |
/* $Id: scoped_resource.hpp 48153 2011-01-01 15:57:50Z mordante $ */
/*
Copyright (C) 2003 - 2011 by <NAME> <<EMAIL>>
Part of the Battle for Wesnoth Project http://www.wesnoth.org/
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY.
See the COPYING file for more details.
*/
/**
* @file
* scoped_resource: class template, functions, helper policies etc.\ for
* resource management.
*/
#ifndef SCOPED_RESOURCE_H_INCLUDED
#define SCOPED_RESOURCE_H_INCLUDED
#include "global.hpp"
#include <cstdio> //for FILE
namespace util
{
/**
* A class template, scoped_resource, designed to implement
* the Resource Acquisition Is Initialization (RAII) approach
* to resource management.
* scoped_resource is designed to be used when a resource
* is initialized at the beginning or middle of a scope,
* and released at the end of the scope.
* The template argument ReleasePolicy is a functor
* which takes an argument of the type of the resource,
* and releases it.
*
* Usage example, for working with files:
*
* @code
* struct close_file { void operator()(int fd) const {close(fd);} };
* ...
* {
* const scoped_resource<int,close_file> file(open("file.txt",O_RDONLY));
* read(file, buf, 1000);
* } // file is automatically closed here
* @endcode
*
* Note that scoped_resource has an explicit constructor,
* and prohibits copy-construction, and thus the initialization syntax,
* rather than the assignment syntax, must be used when initializing.
*
* I.e. using scoped_resource<int,close_file> file = open("file.txt",O_RDONLY);
* in the above example is illegal.
*
*/
template<typename T,typename ReleasePolicy>
class scoped_resource
{
T resource;
//prohibited operations
scoped_resource(const scoped_resource&);
scoped_resource& operator=(const scoped_resource&);
public:
typedef T resource_type;
typedef ReleasePolicy release_type;
/**
* Constructor
*
* @param res This is the resource to be managed
*/
scoped_resource(resource_type res = resource_type())
: resource(res) {}
/**
* The destructor is the main point in this class.
* It takes care of proper deletion of the resource,
* using the provided release policy.
*/
virtual ~scoped_resource()
{
release_type()(resource);
}
/**
* This operator makes sure you can access and use the scoped_resource
* just like you were using the resource itself.
*
* @return the underlying resource
*/
operator resource_type() const { return resource; }
/**
* This function provides explicit access to the resource.
* Its behaviour is identical to operator resource_type()
*
* @return the underlying resource
*/
resource_type get() const { return resource; }
/**
* This function provides convenient direct access to the -> operator
* if the underlying resource is a pointer.
* Only call this function if resource_type is a pointer type.
*/
resource_type operator->() const { return resource; }
void assign(const resource_type& o) {
release_type()(resource);
resource = o;
}
};
/**
* A helper policy for scoped_ptr.
* It will call the delete operator on a pointer, and assign the pointer to 0
*/
struct delete_item {
template<typename T>
void operator()(T*& p) const { delete p; p = 0; }
};
/**
* A helper policy for scoped_array.
* It will call the delete[] operator on a pointer, and assign the pointer to 0
*/
struct delete_array {
template<typename T>
void operator()(T*& p) const { delete [] p; p = 0; }
};
/**
* A class which implements an approximation of
* template<typename T>
* typedef scoped_resource<T*,delete_item> scoped_ptr<T>;
*
* It is a convenient synonym for a common usage of @ref scoped_resource.
* See scoped_resource for more details on how this class behaves.
*
* Usage example:
* @code
* {
* const scoped_ptr<Object> ptr(new Object);
* ...use ptr as you would a normal Object*...
* } // ptr is automatically deleted here
* @endcode
*
* NOTE: use this class only to manage a single object, *never* an array.
* Use scoped_array to manage arrays.
* This distinction is because you may call delete only
* on objects allocated with new,
* delete[] only on objects allocated with new[].
*/
template<typename T>
struct scoped_ptr : public scoped_resource<T*,delete_item>
{
explicit scoped_ptr(T* p) : scoped_resource<T*,delete_item>(p) {}
};
/**
* This class has identical behaviour to @ref scoped_ptr, except it
* manages heap-allocated arrays instead of heap-allocated single objects
*
* Usage example:
* @code
* {
* const scoped_array<char> ptr(new char[n]);
* ...use ptr as you would a normal char*...
* } // ptr is automatically deleted here
* @endcode
*
*/
template<typename T>
struct scoped_array : public scoped_resource<T*,delete_array>
{
explicit scoped_array(T* p) : scoped_resource<T*,delete_array>(p) {}
};
/**
* This class specializes the scoped_resource to implement scoped FILEs.
* Not sure this is the best place to put such a utility, though.
*/
struct close_FILE
{
void operator()(std::FILE* f) const { if(f != NULL) { std::fclose(f); } }
};
typedef scoped_resource<std::FILE*,close_FILE> scoped_FILE;
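/*
 * Example use of scoped_FILE (illustrative; assumes a readable "file.txt" exists):
 * @code
 * {
 *     const scoped_FILE file(std::fopen("file.txt", "r"));
 *     if(file != NULL) {
 *         // ... read from the file ...
 *     }
 * } // the FILE is closed automatically here
 * @endcode
 */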
}
#endif
|
use actix_web::{get, App, HttpServer, Responder};
use crossclip::{Clipboard, SystemClipboard};
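// Minimal HTTP service that returns the current system clipboard contents on GET /.
// Example check once the server is running (address and port taken from main below):
//   curl http://127.0.0.1:9753/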
#[get("/")]
async fn index() -> impl Responder {
let clipboard = match SystemClipboard::new() {
Ok(value) => value,
Err(err) => return err.to_string(),
};
match clipboard.get_string_contents() {
Ok(value) => value,
Err(err) => err.to_string(),
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| App::new().service(index))
.bind("127.0.0.1:9753")?
.run()
.await
}
|
Impact of statins on vascular smooth muscle cells and relevance to atherosclerosis Atherosclerosis is a complex inflammatory disease of the arteries that underlies numerous cardiovascular diseases (CVDs) including myocardial ischaemia and infarction, haemorrhagic stroke and peripheral artery diseases. Elevated low-density lipoprotein (LDL) is a direct risk factor for atherosclerotic diseases. High blood LDL activates endothelial cells (ECs), resulting in surface expression of adhesion molecules on the endothelium. The expression of vascular wall adhesion molecules initiates an event cascade that traps LDL in the subendothelial layer and its subsequent oxidation, followed by infiltration of monocytes and immune cells, differentiation of monocytes into macrophages and foam cells, as well as vascular smooth muscle cell (VSMC) migration and proliferation, and platelet aggregation at the site. The coordinated action of these different cell types results in an inflammatory milieu that promotes the formation of a fatty streak that progresses to an overt atherosclerotic plaque. As the plaque grows in size, it undergoes numerous rupture and repair events resulting in advanced and complex atherosclerotic lesions. During these rupture and repair events, VSMC proliferation and migration play a critical role. Through the secretion of extracellular matrix protein (i.e. collagen) and vascular calcification, VSMCs promote plaque stability, a key determinant of clinical events, such as myocardial infarction and stroke. Since the approval of statins in 1987, they remain the most widely prescribed drug to reduce circulating cholesterol and thereby attenuate atherosclerotic disease progression. Statins primarily work via the inhibition of 3-hydroxy-3-methyl-glutaryl-CoA (HMG-CoA) reductase, the rate-limiting enzyme of de novo cholesterol biosynthesis, and by upregulating the hepatic LDL receptor, which accelerates blood LDL clearance. While statins significantly reduce serum cholesterol, atherosclerotic CVDs remain the leading cause of death worldwide. Therefore, seeking alternate therapeutic strategies, as well as additional therapeutic targets in combination with statins is imperative to reduce the morbidity and mortality associated with atherosclerotic CVD. To this end, understanding the pleiotropic effects of statins, i.e. an effect that is not mediated through the cholesterol reduction, is of great interest and paramount importance. The majority of the pleiotropic effects of statins are mediated through Rho and Rac GTPase family proteins that modulate cell proliferation, migration, inflammation, cell-to-cell junction integrity and adhesion. Numerous pleiotropic effects of statins on atherosclerotic plaques have been described in ECs and immune cells; however, little consensus has been reached on the pleiotropic effects of statins on VSMCs. A recent study published in the Journal of Physiology by Sanyour et al. described the effect of statins on rat aortic primary VSMC using a state-of-the-art atomic force microscopy technique. In this study, rat aortic primary VSMCs were isolated, cultured and treated with 1 μM Fluvastatin, one of the statins. VSMC migration is an integral part of atherosclerotic disease pathology, as VSMCs migrate from medial layer to the intima of advanced lesions to form a fibrous cap that can determine plaque stability and ultimately clinical outcomes.
Although a variety of stimuli are known to modulate VSMC migration in vitro, technical limitations have left us with a lack of understanding of how these modulations are related to in vivo conditions of atherosclerotic plaque progression. Using a gas chromatography/mass spectrometry technique, the investigators found that the Fluvastatin treatment led to an expected reduction in cellular cholesterol in VSMCs resulting in a reduction in VSMC migration on a fibronectin (FN)-coated surface. These findings provide evidence that statin treatment may influence the morphology of the atherosclerotic cap, and in this case, in a deleterious manner. If true, this finding has enormous clinical implications for not only plaque growth, but also stability. Inhibition of cap formation that is due to a reduction in VSMC migration may explain why improvements in atherosclerosis-related morbidity and mortality have not been more robust with the widespread clinical use of statins. In the normal steady state vasculature, VSMCs express smooth muscle contractile protein; however, pro-atherogenic stimuli, such as disturbed blood flow and elevated oxidized LDL, lead to a phenotypic switch of VSMCs, resulting in upregulation of extracellular matrix (ECM) protein FN and type 1 collagen (Col 1). Furthermore, VSMCs overexpress integrins that transduce the pro-inflammatory signals from the ECM proteins and promote the expression of pro-inflammatory genes, leading to the inflammatory and pro-atherogenic vascular remodelling. The current study convincingly demonstrates that Fluvastatin treatment increases the adhesion force of VSMCs to a FN surface via upregulation of integrin α5 protein, resembling a pro-atherogenic phenotype. Thus, this finding suggests that statins may play a biphasic role in atherosclerosis. At the early stages of atherosclerosis, statins may prevent atherosclerotic plaque development by attenuating VSMC migration, detachment and increase in adhesion to matrix proteins. However, at the advanced stages, statins may prevent fibrous cap formation resulting in vulnerable plaque formation. Future studies are required to confirm this finding in the setting of an atherosclerosis model, as well as to perform a time-course study to dissect the impact of statins on VSMCs at different stages of atherosclerotic plaques. Arterial stiffness serves as a predictor of future cardiovascular events and is intricately associated with all stages of atherosclerosis. However, a precise role of |
How would you like to create the best diet plan… for free? You know, the diet plan that will best allow you to lose fat, build muscle or just be healthy.
The diet plan that will not only let you reach those goals quickly and effectively, but also in the most convenient, enjoyable and sustainable way possible.
I’m talking about the diet plan that is tailored specifically to YOUR preferences, YOUR needs, YOUR body, YOUR schedule and YOUR lifestyle.
The kind of diet plan that avoids every unproven gimmick, unnecessary restriction, and pointless diet method in favor of scientifically proven facts, real world results and always doing what’s best for YOU!
Interested? Good, because I’m going to show you how to create that diet plan right now.
Welcome to The Best Diet Plan!
Below is a step-by-step guide to designing the best diet plan possible for your exact dietary needs and preferences, and your exact dietary goal (to lose fat, build muscle, be healthy, etc.). So, if you’re ready to begin, the guide starts now…
Frequently Asked Questions
Who is this guide for?
It’s for anyone who wants to create the diet plan that will work best for their exact goal and fit perfectly with their exact preferences (and do it all for free).
Men, women, young, old, fat, skinny, beginners, advanced… whatever.
Looking to lose fat, build muscle, be healthy, make your diet easier and more enjoyable, improve the way your body looks, feels or performs in any capacity, or any combination thereof.
Whoever you are and whatever your goal is… this guide is for you.
What if I have questions, comments or feedback?
If you have any questions or comments about anything in this guide or you just want to let me know what you thought of it, you can leave a comment right here. |
There is an absolutely astounding amount of money to be made in the stock market today. Just ask stock market investor gurus like Warren Buffett and Peter Lynch.
While investing in the stock market today is easy enough, being successful and profitable at it is another story. That's where Money Morning comes in.
At Money Morning, we present investors with many approaches to investing – not one fit is right for everyone.
Many of you want to hold stocks for years, to build wealth that leads to a healthy, rich retirement. We do that and give you those picks every day.
First, let's look at the best ways to invest in the stock market today for long-term gains. Then we'll get to quicker profits.
In this guide, we'll provide you with the necessary insight into how to be a smart stock market investor today, and some of the stocks you may want to consider buying into today for potentially considerable gains.
Successfully profiting from the stock market today requires the application of some sound principles that take discipline, experience, and analysis to see. Investing in the stock market requires a solid understanding of market fundamentals, the ability to evaluate stocks and their companies, and enough capital to absorb any risks.
We can help you with most of that.
How much money you stand to make from the stock market today is determined by the stocks you buy into. Picking the right stocks is key, and the best ones are typically those that are currently undervalued and come from companies with solid fundamentals and a promising future over both the short- and long-term.
Understand current market conditions and events that shape the overall market. There are plenty of things that affect the stock market today, and it's important to keep tabs on them. World events are monumental to affecting the stock market, no matter if they are positive or negative in nature, or where they happen to be occurring. Occurrences overseas can still have a strong influence on U.S. markets.
For instance, the economic slowdown in China had a ripple effect in the United States, putting plenty of investors in a precarious position this year and last. In the first week of 2016, the Dow Jones plummeted as much as 467 points. The Nasdaq lost 2% on the first trading day of 2016 after stocks in China crashed the night before.
The local economy and political stage also heavily influenced the stock market in 2016. Up until recently, the U.S. economy has been suffering, which had a negative effect on the stock market. As well, the fact that 2016 is a presidential election year has an impact on the stock market today.
History has shown that election years – especially those where the current president is not running for another term – have a tendency to subdue the market. Since 1928, the Standard & Poor's 500 has declined an average of 2.8% in presidential election years that don't involve an incumbent looking to be reelected.
Understanding key factors that influence today's stock market can help you more accurately determine what type of market environment you're currently in, which can heavily influence which way the stock market is headed, and therefore help you more accurately predict stock prices.
Pinpoint industries on an uptrend. One of the first things investors should do prior to investing is to identify a particular industry that is growing. These are the industries that should be focused on by investors using the stock market for long-term growth. Stocks within them will be more likely to follow the respective trend, allowing investors to reap the rewards of increases in stock prices.
Our Chief Investment Strategist Keith Fitz-Gerald follows trends like these in his weekly Total Wealth research service. You can follow along for free, and get his stock picks and updates from these "unstoppable" growth trends. You can also get all of Keith's content on Money Morning here.
Determine the financial fundamentals of the company. While many stocks may experience sudden spikes in price, many times the uptrend in price is not sustained. Unless you're day trading, these may not necessarily be the types of stocks you want to invest in if they're not from a strong industry and the underlying fundamentals of the company aren't substantiated.
Buying stocks from a company with limited financial history or strength can be a risky investment. Instead, it's recommended only to buy into companies with solid financial histories that reinforce any possible gains over the short term. Study the firm's 10-K, which is an annual summary of the company's financial performance that the US Securities and Exchange Commission (SEC) requires.
Find out which direction the company's revenue has been trending in, and look into the background of the CEO. Finding a strong company when investing in the stock market is one of the most important tactics behind buying a stock that will likely realize great gains.
While the U.S. economy endured a lull over the recent past, it's showing signs of strength. The Commerce Department recently revised its Q2 GDP growth estimate to 1.1%, which meets economists' projections. Consumer spending was also revised upward to 4.4% from 4.2%.
A strengthening of the local economy gives investors more spending confidence, and therefore, makes them feel more assured about investing in the stock market today.
On the other hand, the anticipation of a hike in interest rates can have the opposite effect. On Aug. 26, after Federal Reserve Chairperson Janet Yellen made a case for another increase in interest rates by September, stocks were back in the red shortly after. The S&P 500 dipped 0.2%, and the Dow Jones Industrial Average slipped 0.3%.
Interestingly enough, such a possibility of a hike is due to the fact that the U.S. economy and labor market have strengthened enough to warrant it. Yellen said she anticipates moderate growth in the GDP and labor market over the next few years.
Expect the Fed to affect the stock market today and every important Fed day for months to come.
Considering all the factors to consider when it comes to choosing stocks to buy, the following are a few to keep an eye out for and potentially add to your short list of stock picks.
This British pharmaceutical company is responsible for developing, manufacturing, and marketing vaccines and over-the-counter drugs across the globe. The firm provides pharmaceutical and healthcare products in therapeutic areas.
According to the experts at Money Morning, GSK is a stock to watch. The company has a 7-year pact with Alphabet to develop Galvani Bioelectronics. Under the agreement, GSK will control 55% of the firm, and the remainder will go to Verily Life Science, Alphabet's research subsidiary.
The partnership will see Glaxo's expertise in drug discovery and development work with Verily's knowledge of miniaturizing low-power electronics. The early work will center on developing and manufacturing minuscule electronic devices required to test bioelectronic medicine.
Another reason to add Glaxo to the radar is its contribution to the world of immunotherapy. The revolutionary area of medicine seeks to replace traditional medicines that fight diseases like cancer with treatments that boost the body's own immune system. Since 2014, this sector has been growing at an annual rate of 7.1%, and immunotherapies are anticipated to make up 60% of the cancer market by 2021.
GSK stock is trading around $43.34 in mid-September, up 7.4% for the year.
TASER International Inc. is an Arizona-based developer, manufacturer, and distributor of tasers, body cameras, and audio-video equipment. Its CEWs – otherwise known as "stun guns" – transmit electrical pulses into the body to significantly affect sensory and motor functions of the peripheral nervous system. The company offers tasers for both law enforcement and consumers.
As a result of the recent shootings across the country, including in Baton Rouge and Milwaukee, the firm's Axon series of body cameras are garnering plenty of attention. Even before the recent shootings this year, these cameras had already established a loyal client base consisting of 3,500 law enforcement agencies.
Taser is also making money through Evidence.com, a subscription-based website that police and security industry users can use to upload films from body cameras and share them with the world instantaneously. Taser's Axon unit recently announced a partnership with Cradlepoint, a wireless router platform that offers a means for police officers to connect and offload videos from Axon body cameras while still in the field.
Using a social media-type model and applying it to the field of law enforcement has proven to be an innovative and profitable business model.
TASR stock is up 51% this year (as of Sept. 19), trading around $26.10.
Vancouver-based Goldcorp is involved in acquiring, exploring, developing, and operating precious metal areas in North, Central, and South America. Thanks to its efforts on slashing operating costs, Goldcorp has become one of the gold mining industry's most efficient gold producers.
Our Private Briefing investment service editor, Bill Patalon, called Goldcorp "the single best gold play out there" in July 2016. Patalon cited Goldcorp's successful efforts in cutting costs as reason this stock will surge with gold prices. The company is working towards producing 2.8 million to 3.1 million ounces of precious metals in 2016 at a cost of $850 to $935 per ounce.
With the climb of gold prices, Goldcorp's margins will grow and its stock price is expected to spike.
Patalon also pointed to Goldcorp's "impeccable" balance sheet – $2.7 billion in debt and more than $3.2 billion in available liquidity.
Goldcorp stock is up 36% this year as of Sept. 19, trading around $15.69.
As promised, let's take a closer look at trading/making moves for short-term market gains.
America's #1 Trader, Tom Gentile, informs readers of these types of market moves and profit opportunities in his Power Profit Trades. For example, when back-to-school time hit the retail sector in late August 2016, Tom told readers about four stocks headed for a bump. While not the best long-term picks, their short-term run up was perfect for Tom's style of trading.
Tom's quick gains are another way to play the stock market today – you can read more on his profit plays here.
Also check out Michael Lewitt's "super crash" preparation and profit guide. You see, Lewitt believes the stock market is headed for a crash bigger than what we saw in 2008. Long-term investors can get hurt if they try to time the market. But anyone with available capital can profit when the stock market tanks, if they're positioned correctly.
And that's what Lewitt helps you do in his Sure Money research service. Every week he points out profit opportunities that often come from stocks that are falling. He told investors buying puts on Tesla Motors Inc. (Nasdaq: TSLA) would pay off – and in less than two months they climbed 22%. Get all of Lewitt's profit alerts here.
Dropping capital when investing in the stock market today can be done rather quickly and easily.
However, increasing the odds that the investment is a sound one requires some homework and due diligence.
The key is to find stocks in a fundamentally strong sector and a company that has been thoroughly researched.
Successfully investing in stocks is all about maximizing performance while lowering risk in order to strengthen your investment portfolio.
Follow Money Morning on Facebook and Twitter.
What are the best ways to invest in the stock market today for long-term gains?
One of the best ways to invest in the stock market today for long-term gains is to invest in tech companies. The top tech companies today are building self-driving cars, expanding the capabilities of virtual reality, and developing massive cloud-computing platforms. Investing in companies leading the way in this type of new technology can provide market-beating gains and long-term profit. These companies include: Facebook Inc. (Nasdaq: FB), Amazon.com Inc. (Nasdaq: AMZN), Apple Inc. (Nasdaq: AAPL), and Alphabet Inc. (Nasdaq: GOOGL).
What’s the best investing advice for the stock market today?
The best investing advice for the stock market today is to focus on unstoppable trends. Money Morning Chief Investment Strategist Keith Fitz-Gerald says knowing how to beat the market means knowing where money will flow. And it’s flowing into these six industries: energy, demographics, medicine, war/terrorism, scarcity/allocation, and technology.
Should I invest in the stock market today?
Investing in the stock market today is the best way to grow your money over time. From 1950 to 2009, if you adjust for inflation and account for dividends, the average annual return was 7%. The younger you are, the more valuable your portfolio can become over time. You aren’t going to become a millionaire overnight, but buying the stocks of strong companies and holding them can help you grow your nest egg and feel comfortable with your financial situation over time. For example, Apple Inc. (Nasdaq: AAPL) stock has climbed over 22,000% since 1980. Since 1986, Microsoft Corp. (Nasdaq: MSFT) has climbed over 56,000%. |
import {
BlobServiceClient,
StorageSharedKeyCredential,
} from '@azure/storage-blob';
const containerUrl = `https://${process.env.AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/bmgf-docs`;
const account = process.env.AZURE_STORAGE_ACCOUNT || '';
const accountKey = process.env.AZURE_STORAGE_KEY || '';
export const getReportList = async (): Promise<string[]> => {
const sharedKeyCredential = new StorageSharedKeyCredential(
account,
accountKey,
);
const blobServiceClient = new BlobServiceClient(
`https://${account}.blob.core.windows.net`,
sharedKeyCredential,
);
const containerName = 'bmgf-docs';
const containerClient = blobServiceClient.getContainerClient(containerName);
const blobs = containerClient.listBlobsFlat();
const reportNames = [];
for await (const blob of blobs) {
if (blob.name.endsWith('.html')) {
reportNames.push(blob.name);
}
}
return reportNames;
};
export const getReportUrls = async (
reportName: string,
fileNames: string[],
): Promise<any> => {
const fileName = fileNames.find((fileName) =>
fileName.startsWith(reportName),
);
return {
reportPdfUrl: `${containerUrl}/${fileName?.replace('.html', '.pdf')}`,
reportHtmlUrl: `${containerUrl}/${fileName}`,
reportAssetsUrl: `${containerUrl}/${reportName}/assets/`,
};
};
export const getReportHtmlContent = async (
reportUrl: string,
): Promise<string> => {
return fetch(`${reportUrl}`, {
method: 'GET',
headers: {
'Content-Type': 'text/html',
},
}).then((response) => (response.ok ? response.text() : ''));
};
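// Usage sketch (illustrative; assumes AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY are set and the
// 'bmgf-docs' container exists; the report name below is hypothetical):
//   const names = await getReportList();
//   const { reportHtmlUrl } = await getReportUrls('example-report', names);
//   const html = await getReportHtmlContent(reportHtmlUrl);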
|
Deficient Pain Modulation in Patients with Chronic Hemiplegic Shoulder Pain Hemiplegic shoulder pain (HSP) following stroke significantly affects the individual's function and quality of life. The mechanism of HSP is not clearly understood; hence, it is unclear why HSP resolves spontaneously or following routine care in some patients, while in others it becomes persistent. The aim was therefore to study whether HSP is associated with deficient pain modulation. |
import {MigrationInterface, QueryRunner} from "typeorm";
export class AddHookName1645867235515 implements MigrationInterface {
name = 'AddHookName1645867235515'
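// Adds a required "name" column to the "hook" table; down() reverses this by dropping the column.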
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE "hook" ADD "name" character varying NOT NULL`);
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE "hook" DROP COLUMN "name"`);
}
}
|
package org.usfirst.frc.team2339.Barracuda.swervemath;
import org.usfirst.frc.team2339.Barracuda.smartdashboard.SendablePosition;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
public class SwerveWheel {
/**
* Store rectangular (x and y) coordinates
* Can represent a position or a vector
*
* @author emiller
*
*/
public static class RectangularCoordinates {
public double x;
public double y;
public RectangularCoordinates(double x, double y) {
this.x = x;
this.y = y;
}
public RectangularCoordinates subtract(RectangularCoordinates p0) {
return new RectangularCoordinates(x - p0.x, y - p0.y);
}
public double magnitude() {
return Math.sqrt(x * x + y * y);
}
}
/**
* Store robot motion as strafe, front-back, and rotation.
* strafe is sideways velocity with -1.0 = max motor speed left and 1.0 = max motor speed right.
* frontBack is forward velocity with -1.0 = max motor speed backwards and 1.0 = max motor speed forward.
* rotate is clockwise rotational velocity. -1.0 = max motor speed counter-clockwise. 1.0 = max motor speed clockwise.
*
* @author emiller
*
*/
public static class RobotMotion {
public double strafe;
public double frontBack;
public double rotate;
public RobotMotion(double strafe, double frontBack, double rotate) {
this.strafe = strafe;
this.frontBack = frontBack;
this.rotate = rotate;
}
}
/**
* Store a velocity as speed and angle
* @author emiller
*
*/
public static class VelocityPolar {
public double speed = 0;
public double angle = 0;
public VelocityPolar(double speed, double angle) {
this.speed = speed;
this.angle = angle;
}
}
/**
* Class to store angle and flip together
* @author emiller
*
*/
public static class AngleFlip {
private double angle;
private boolean flip;
public AngleFlip() {
setAngle(0);
setFlip(false);
}
public AngleFlip(double angle) {
this.setAngle(angle);
setFlip(false);
}
public AngleFlip(double angle, boolean flip) {
this.setAngle(angle);
this.setFlip(flip);
}
/**
* @return the angle
*/
public double getAngle() {
return angle;
}
/**
* @param angle the angle to set
*/
public void setAngle(double angle) {
this.angle = angle;
}
/**
* @return the flip
*/
public boolean isFlip() {
return flip;
}
/**
* @param flip the flip to set
*/
public void setFlip(boolean flip) {
this.flip = flip;
}
};
public static double getRadialAngle(RectangularCoordinates wheelPosition) {
return Math.toDegrees(Math.atan2(wheelPosition.x, wheelPosition.y));
}
public static double getPerpendicularAngle(RectangularCoordinates wheelPosition) {
return Math.toDegrees(Math.atan2(-wheelPosition.y, wheelPosition.x));
}
/**
* Calculate wheel velocity vector given wheel position and pivot location.
* Robot motion is expressed with strafe, forward-back, and rotational velocities.
* Wheel speeds are normalized to the range [0, 1.0]. Angles are normalized to the range [-180, 180).
* @see https://docs.google.com/presentation/d/1J_BajlhCQ236HaSxthEFL2PxywlneCuLNn276MWmdiY/edit?usp=sharing
*
* @param wheelNumber Wheel number (for putting info on Smartdashboard)
* @param wheelPosition Position of wheel.
* x is left-right, with right positive. y is front-back with front positive.
* @param pivot Position of pivot.
* @param maxWheelRadius distance of the furthest wheel on the robot from the pivot.
* @param robotMotion desired motion of robot express by strafe, frontBack, and rotation around a pivot point.
* @return wheel polar velocity (speed and angle)
*/
public static VelocityPolar calculateWheelVelocity(
int wheelNumber,
RectangularCoordinates wheelPosition,
RectangularCoordinates pivot,
double maxWheelRadius,
RobotMotion robotMotion) {
/*SmartDashboard.putData("Wheel " + wheelNumber + " position ",
new SendablePosition(wheelPosition.x, wheelPosition.y));*/
SmartDashboard.putNumber("Wheel " + wheelNumber + " position y ", wheelPosition.y);
RectangularCoordinates wheelRelativePosition = wheelPosition.subtract(pivot);
/*SmartDashboard.putData("Wheel " + wheelNumber + " rel posit ",
new SendablePosition(wheelRelativePosition.x, wheelRelativePosition.y));*/
SmartDashboard.putNumber("Wheel " + wheelNumber + " rel posit y ", wheelRelativePosition.y);
double rotateSpeed = robotMotion.rotate / maxWheelRadius;
RectangularCoordinates wheelVectorRobotCoord = new RectangularCoordinates(
robotMotion.strafe - rotateSpeed * wheelRelativePosition.y,
robotMotion.frontBack + rotateSpeed * wheelRelativePosition.x);
double wheelSpeed = Math.hypot(wheelVectorRobotCoord.x, wheelVectorRobotCoord.y);
// Clockwise
double wheelAngle = Math.toDegrees(Math.atan2(-wheelVectorRobotCoord.x, wheelVectorRobotCoord.y));
// Counter clockwise
// double wheelAngle = Math.toDegrees(Math.atan2(-wheelVectorRobotCoord.x, wheelVectorRobotCoord.y));
return new VelocityPolar(wheelSpeed, wheelAngle);
}
public static VelocityPolar calculateWheelVelocity(
RectangularCoordinates wheelPosition,
RectangularCoordinates pivot,
double maxWheelRadius,
RobotMotion robotMotion) {
return calculateWheelVelocity(0,
wheelPosition,
pivot,
maxWheelRadius,
robotMotion);
}
public static VelocityPolar[] calculateRectangularWheelVelocities(double length, double width,
RobotMotion robotMotion, RectangularCoordinates pivot) {
VelocityPolar rawVelocities[] = new VelocityPolar[4];
RectangularCoordinates wheelPosition = new RectangularCoordinates(0, 0);
for (int iiWheel = 0; iiWheel < 4; iiWheel++) {
switch(iiWheel) {
case 0:
default:
wheelPosition.x = width/2;
wheelPosition.y = length/2;
break;
case 1:
wheelPosition.x = -width/2;
wheelPosition.y = length/2;
break;
case 2:
wheelPosition.x = -width/2;
wheelPosition.y = -length/2;
break;
case 3:
wheelPosition.x = width/2;
wheelPosition.y = -length/2;
break;
}
rawVelocities[iiWheel] = calculateWheelVelocity(wheelPosition, pivot,
Math.hypot(width/2, length/2), robotMotion);
}
normalize(rawVelocities);
return rawVelocities;
}
public static VelocityPolar[] calculateRectangularWheelVelocities(double length, double width,
RobotMotion robotMotion) {
return calculateRectangularWheelVelocities(length, width, robotMotion, new RectangularCoordinates(0, 0));
}
/**
* Normalizes an angle in degrees to (-180, 180].
* @param theta Angle to normalize
* @return Normalized angle
*/
public static double normalizeAngle(double theta) {
while (theta > 180) {
theta -= 360;
}
while (theta < -180) {
theta += 360;
}
return theta;
}
/**
* Compute angle needed to turn and whether or not flip is needed
* @param currentAngle
* @param targetAngle
* @return new angle with flip
*/
public static AngleFlip computeTurnAngle(double currentAngle, double targetAngle) {
AngleFlip turnAngle = new AngleFlip(targetAngle - currentAngle, false);
if (Math.abs(turnAngle.getAngle()) > 90) {
turnAngle.setAngle(normalizeAngle(turnAngle.getAngle() + 180));
turnAngle.setFlip(true);
}
return turnAngle;
}
/**
* Compute change angle to get from current to target angle.
* @param currentAngle Current angle
* @param targetAngle New angle to change to
* @return change angle
*/
public static double computeChangeAngle(double currentAngle, double targetAngle) {
return computeTurnAngle(currentAngle, targetAngle).getAngle();
}
/**
* Scale drive speed based on how far wheel needs to turn
* @param turnAngle Angle wheel needs to turn (with flip value)
* @return speed scale factor in range [0, 1]
*/
public static double driveScale(AngleFlip turnAngle) {
double scale = 0;
if (Math.abs(turnAngle.getAngle()) <= 90) {
/*
* Eric comment: I don't like the discontinuous nature of this scaling.
* Possible improvements:
* 1) Scale any angle < 90.
*/
scale = Math.cos(Math.toRadians(turnAngle.getAngle()));
} else {
scale = 0;
}
// Override above speed scaling.
scale = 1;
if (turnAngle.isFlip()) {
scale = -scale;
}
return scale;
}
public static void normalize(VelocityPolar velocities[]) {
double maxSpeed = 0;
for (int iiWheel = 0; iiWheel < velocities.length; iiWheel++) {
if (Math.abs(velocities[iiWheel].speed) > maxSpeed) {
maxSpeed = velocities[iiWheel].speed;
}
}
if (maxSpeed > 1.0) {
for (int iiWheel = 0; iiWheel < velocities.length; iiWheel++) {
velocities[iiWheel].speed /= maxSpeed;
}
}
}
/**
* Calculate wheel velocity change (delta) based on current data.
* @param rawVelocity Raw wheel change data
* @return wheel change data (delta) based on current wheel values
*/
public static VelocityPolar calculateDeltaWheelData(VelocityPolar currentVelocity, VelocityPolar rawVelocity) {
VelocityPolar deltaVelocity = new VelocityPolar(0, 0);
// Compute turn angle from encoder value (pidGet) and raw target value
AngleFlip turnAngle = computeTurnAngle(currentVelocity.angle, rawVelocity.angle);
double targetAngle = normalizeAngle(currentVelocity.angle + turnAngle.getAngle());
deltaVelocity.angle = targetAngle;
deltaVelocity.speed = driveScale(turnAngle) * rawVelocity.speed;
return deltaVelocity;
}
}
|
package company.nearbuy;
public class MaximumDifference {
public static void main(String[] args) {
int[] inputArr = new int[] {1, 2, 90, 10, 110};
int arrLen = inputArr.length;
int maxDiffInArr = inputArr[1] - inputArr[0];
int minArrEle = inputArr[0];
for(int i = 1; i < arrLen; i++) {
if(inputArr[i] - minArrEle > maxDiffInArr) {
maxDiffInArr = inputArr[i] - minArrEle;
}
if(inputArr[i] < minArrEle) {
minArrEle = inputArr[i];
}
}
if(maxDiffInArr <= 0) {
System.out.println(-1);
}else {
System.out.println(maxDiffInArr);
}
}
}
|
Banking inefficiencies and red tape are costing Australia’s small and medium-sized businesses and the economy billions of dollars every year, according to newly published research from Eftpos payments provider, Tyro.
Tyro chief executive Jost Stollmann says the research reveals that banking red tape is “robbing” more than 880,000 of Australia’s two million small and medium-sized businesses of four weeks’ productive work time a year, at a cost to the national economy of almost $7 billion annually.
“This equates to an extra 20 working days a year – or the entire annual holidays of the average employee,” Stollmann says.
According to Stollmann, 44% of Australian SMEs — or 880,000 businesses — spend more than three hours every week checking, entering, paying and reconciling data, costing each business an average of $7800 a year.
• 400,000, or 20%, of SME owner/operators don’t use any form of accounting software.
“Large companies, with more than 200 employees, make up only 0.3% of businesses operating in Australia,” Stollmann says.
“By comparison, small and medium-sized businesses are the creative and innovative heart of the Australian economy, generating more jobs than any other sector.
“But SMEs are drowning under the burden of inefficient online business banking processes that are robbing them of three hours a week, or 20 days a year.”
According to Stollmann, the research findings explain why a “staggering 700,000 SMEs are unhappy with their business bank’s performance”.
And, despite SMEs playing a critical role in the Australian economy, Stollmann says their contribution to GDP has slowed since 2012; he suggests the priority is to establish what the major “pain points” were for the industry and to help SMEs drive future growth.
“Australia should be looking at action to improve SME productivity, in recognition of the changing terms of trade and resources decline. We should help SMEs operate more efficiently,” he says. |
Apart from an epic lineup waiting to get into the installation Irish Pub known as the Candahar, there was nothing but happy, shining faces in the early part of the night at Nuit Blanche in Olympic Plaza.
No one really knew how many people to expect for a free, all-night festival featuring interactive art and a pair of town criers perched atop a man in the moon, but there were thousands milling around at 10pm. There were food trucks, a relatively quiet beer garden, and the evening’s curator, Wayne Baerwaldt of the Illingsworth-Kerr Gallery, who wore western wear to what he says is the first Nuit Blanche of a five year plan.
Cloud, Caitlin Brown’s interactive sculpture made of 6,000 lightbulbs, was clearly a big hit with the crowd, which stood underneath it – it’s perched seven feet in the air – pulling cords and perhaps trying to eyeball the lightbulbs they donated when Brown launched a lightbulb drive throughout the city late last month.
Cloud wasn’t the only art happening in the core, either. Earlier in the day, right across the street from the Plaza, artist Derek Besant gave an artist’s talk and tour of his new show Road to Fifteen Restless Nights, which is on display at the Museum of Contemporary Art Calgary. The show features 15 large-scale printed photos created during a trip Besant took across the Trans-Canada Highway, which will hit home for anyone who ever has to make the epic drive from Calgary to Winnipeg, among many other epic drives. Besant’s show also features sound and text. See it before it takes Paris by storm when it’s presented at the Canadian Cultural Centre, opening September 26 – just in time for Nuit Blanche Paris.
See a gallery of Herald photographer and reader photos. |
#define GCC_BACK_COMPAT 1
#define REDI_PSTREAMS_POPEN_USES_BIDIRECTIONAL_PIPE 1
#include "pstream_compat.h"
#include <unistd.h>
int main()
{
{
char c;
redi::ipstream who("whoami");
if (!(who >> c))
return 1;
redi::opstream cat("cat");
if (!(cat << c))
return 2;
while (who >> c)
cat << c;
cat << std::endl;
}
// check for zombies while this process is sleeping
sleep(10);
return 0;
}
|
import React, {useEffect} from 'react';
import {BordersPropsType} from "../../helpers/parseBordersProps";
import {Header} from "./Header";
import {useDispatch, useSelector} from "react-redux";
import {logoutTC, requestAuthUserDataTC} from "../../redux/auth-reducer";
import {RootStateType} from "../../redux/store-redux";
type HeaderContainerPropsType = {
borders: BordersPropsType
}
function HeaderContainer(props: HeaderContainerPropsType) {
const {userID, isAuth, login: userLogin, isAuthDataFetching} = useSelector((state: RootStateType) => state.auth)
const dispatch = useDispatch()
const logoutFn = () => {
// Returning a <Redirect/> element from an event handler has no effect, so the
// handler only dispatches the logout thunk; navigation should be driven by the
// updated auth state elsewhere in the app.
dispatch(logoutTC())
}
useEffect(() => {
if (!userID) dispatch(requestAuthUserDataTC())
}, [userID, dispatch])
return (
<Header borders={props.borders}
isAuth={isAuth}
userLogin={userLogin}
isAuthDataFetching={isAuthDataFetching}
logout={logoutFn}
/>
)
}
export default HeaderContainer |
// Repository: BlackGlory/Gloria, file: enhanced-notification/src/enhanced-notification.ts
'use strict'
export { BasicNotification } from './BasicNotification'
export { ImageNotification } from './ImageNotification'
export { VideoNotification } from './VideoNotification'
export { ListNotification } from './ListNotification'
export { ProgressNotification } from './ProgressNotification'
export { Notification, NotificationOptions } from './Notification'
|
import { DisplayObject } from "pixi.js";
import Game from "../Game";
import EntityPhysics from "./EntityPhysics";
import GameEventHandler from "./GameEventHandler";
import IOEventHandler from "./IOEventHandler";
export interface GameSprite extends DisplayObject, WithOwner {
layerName?: string;
}
/**
* A thing that responds to game events.
*/
export default interface Entity
extends GameEventHandler,
EntityPhysics,
IOEventHandler {
/** The game this entity belongs to. This should only be set by the Game. */
game: Game | undefined;
id?: string;
/** Children that get added/destroyed along with this entity */
readonly children?: Entity[];
/** Entity that has this entity as a child */
parent?: Entity;
/** Tags to find entities by */
readonly tags?: ReadonlyArray<string>;
/** If true, this entity doesn't get cleaned up when the scene is cleared */
readonly persistenceLevel: number;
/** True if this entity will stop updating when the game is paused. */
readonly pausable: boolean;
/** Called to remove this entity from the game */
destroy(): void;
sprite?: GameSprite;
sprites?: GameSprite[];
}
export interface WithOwner {
owner?: Entity;
}
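// A minimal sketch (not from the original codebase) built only on the fields declared
// above: collect an entity together with all of its descendants, e.g. so a caller can
// destroy an entire subtree in one pass.
export function collectSubtree(root: Entity, out: Entity[] = []): Entity[] {
out.push(root);
for (const child of root.children ?? []) {
collectSubtree(child, out);
}
return out;
}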
|
/* Repository: esoteric-programmer/elvm */
#include <stdio.h>
#include <stdlib.h>
int cmp_int(const void* a, const void* b) {
return *(int*)a - *(int*)b;
}
typedef struct {
int x, y;
} S;
int cmp_s(const void* a, const void* b) {
S* sa = (S*)a;
S* sb = (S*)b;
return sa->x * sa->y - sb->x * sb->y;
}
void test1() {
puts("test1");
int data[] = {
3, 5, 7, 1, 4, 2
};
qsort(data, 6, sizeof(int), cmp_int);
for (int i = 0; i < 6; i++) {
printf("%d\n", data[i]);
}
}
void test2() {
puts("test2");
int data[] = {
3, 5, 7, 5, 1, 4, 2
};
qsort(data, 7, sizeof(int), cmp_int);
for (int i = 0; i < 7; i++) {
printf("%d\n", data[i]);
}
}
void test3() {
puts("test3");
S data[] = {
{ 3, 3 },
{ 5, 2 },
{ 7, 1 },
{ 5, 4 },
{ 1, 11 },
{ 4, 3 },
{ 2, 1 }
};
qsort(data, 7, sizeof(S), cmp_s);
for (int i = 0; i < 7; i++) {
printf("%d = %d * %d\n", data[i].x * data[i].y, data[i].x, data[i].y);
}
}
int main() {
test1();
test2();
test3();
}
|
/* File: tracer/include/colours.h */
#ifndef COLOURS_H
#define COLOURS_H
#include <stdint.h>
typedef double Component;
typedef struct Colour {
Component red;
Component green;
Component blue;
Component alpha;
} Colour;
Colour colour(Component red, Component green, Component blue);
Colour colour_add(Colour c1, Colour c2);
Colour colour_filter(Colour primary, Colour filter);
Colour colour_scale(Colour c, double scaler);
Colour colour_normalise(Colour c);
Colour colour_ceil(Colour c);
double colour_brightness(Colour c);
#endif
|
Users need to go to m.bing.com in iPhone's web browser to search for apps or they can alternatively download the Bing app for mobile. Users can either conduct general search, or search for apps only.
Microsoft has updated the mobile version of its Bing search engine for iOS devices, which means Apple users will now be able to search for the right apps for their needs. Finding the right apps has always been a problem for the smartphone generation, and the updated Bing experience for iOS intends to address just that. There are close to 500,000 apps in the App Store now, so it is easy for the right apps to get lost in the crowd. The announcement was made by Microsoft in a blog post. Users will be able to search either by the name of an app or by category, such as news apps or top iPhone apps. The blog post says, "Auto app discovery is a unique feature created by Bing. The Bing search engine will surface apps in the context of normal web queries. For example, Thor 3D, Facebook and Hotels in Seattle are some of the queries for which Bing automatically finds the right apps." The Bing team also adds, "If an App is not installed on your phone, when you click on the download link Bing takes you to download the app from the iTunes App Store. If the App is already installed and the developer has enabled the launch functionality, then it will launch automatically." To check the apps, users need to go to m.bing.com in the iPhone's web browser or they can alternatively download the Bing app for mobile. There are two ways to search: a general search, which will include app results as well, or an app-only search. Apps from developers who support the Bing launch functionality can be launched from within the web app itself, and there are 50 such apps in the market now. The rest of the apps are expected to add this functionality with time.
#include <fstream>
#include <iostream>
#include <sstream>
#include <vector>
#include <assert.h>
using namespace std;
#include "tinyxml2.h"
using namespace tinyxml2;
#include "platform.h"
namespace mingine {
extern char stringBuilderBuffer[MAX_STRING];
class MapData
{
public:
~MapData() { delete[] walkabilityGrid; }
int tileSize{};
int width{};
int height{};
const char* tileSetPath{ nullptr };
const char* outTableName{ nullptr };
vector<vector<int>> tiles{};
bool* walkabilityGrid{ nullptr };
int mapLength() const { return width * height; }
};
void XMLCheckResult(int result)
{
if (result != XML_SUCCESS)
{
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "XML Error: %i", result);
log(&stringBuilderBuffer[0]);
showErrorBox(&stringBuilderBuffer[0]);
}
}
int readIntAttribute(XMLElement* element, const char* intName)
{
const char * szAttributeText = element->Attribute(intName);
stringstream strValue;
strValue << szAttributeText;
int value;
strValue >> value;
return value;
}
void writeStringField(const char* outTableName, const char* valueName, const char* value, string& outString)
{
outString += string(outTableName) + "." + valueName + " = \"" + value + "\"\n";
}
void writeIntField(const char* outTableName, const char* valueName, int value, string& outString)
{
outString += string(outTableName) + "." + valueName + " = " + to_string(value) + "\n";
}
void writeLuaMapScript(const MapData& mapData, string& outScript)
{
outScript = outScript.append(mapData.outTableName).append(" = {}\n");
writeStringField(mapData.outTableName, "tileAtlas", mapData.tileSetPath, outScript);
writeIntField(mapData.outTableName, "tileSize", mapData.tileSize, outScript);
writeIntField(mapData.outTableName, "width", mapData.width, outScript);
writeIntField(mapData.outTableName, "height", mapData.height, outScript);
outScript.append(mapData.outTableName).append(".tiles = {}\n");
for (size_t layerIndex = 0; layerIndex < mapData.tiles.size(); ++layerIndex)
{
outScript += string(mapData.outTableName) + ".tiles[" + to_string(layerIndex + 1) + "] = {";
for (const int& i : mapData.tiles[layerIndex])
{
outScript += to_string(i) + ",";
}
// erase last comma
outScript.resize(outScript.size() - 1);
outScript += "}\n"; // end of tiles
}
outScript += string(mapData.outTableName) + ".walkabilityGrid = {";
for (int i = 0; i < mapData.mapLength(); ++i)
{
outScript += (mapData.walkabilityGrid[i] ? "1," : "0,");
}
// erase last comma
outScript.resize(outScript.size() - 1);
outScript += "}\n"; // end of tiles
}
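// Illustrative output of writeLuaMapScript for a hypothetical 2x2, single-layer map
// exported into a table named "demo" (the tile indices and atlas path are made up):
//
//   demo = {}
//   demo.tileAtlas = "assets/tiles.png"
//   demo.tileSize = 16
//   demo.width = 2
//   demo.height = 2
//   demo.tiles = {}
//   demo.tiles[1] = {1,2,3,4}
//   demo.walkabilityGrid = {1,1,1,1}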
int parseTmx(const char* tmxFile, const char* topPathToMatch, const char* outTableName, string& outScript)
{
XMLDocument xmlDoc;
XMLError result = xmlDoc.LoadFile(tmxFile);
XMLCheckResult(result);
MapData mapData;
mapData.outTableName = outTableName;
XMLElement* pMapElement = xmlDoc.RootElement();
if (pMapElement == nullptr) return XML_ERROR_FILE_READ_ERROR;
// parse tilesets
XMLElement* pTilesetElement = pMapElement->FirstChildElement("tileset");
if (pTilesetElement == nullptr) return XML_ERROR_PARSING_ELEMENT;
while (pTilesetElement != nullptr)
{
const char * name = pTilesetElement->Attribute("name");
if (name == nullptr) return XML_ERROR_PARSING_ATTRIBUTE;
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "Parsing tileset %s", name);
log(stringBuilderBuffer);
int tileWidth = readIntAttribute(pTilesetElement, "tilewidth");
int tileHeight = readIntAttribute(pTilesetElement, "tileheight");
assert(tileWidth == tileHeight);
mapData.tileSize = tileWidth;
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "Tile Width: %i Tile Height %i", tileWidth, tileHeight);
log(stringBuilderBuffer);
XMLElement* pImageElement = pTilesetElement->FirstChildElement("image");
const char* source = pImageElement->Attribute("source");
string s = string(source);
int index = s.find(topPathToMatch);
mapData.tileSetPath = &source[index];
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "Tile atlas source: %s", mapData.tileSetPath);
log(stringBuilderBuffer);
pTilesetElement = pTilesetElement->NextSiblingElement("tileset");
}
// parse layers
XMLElement* pLayerElement = pMapElement->FirstChildElement("layer");
if (pLayerElement == nullptr) return XML_ERROR_PARSING_ELEMENT;
while (pLayerElement != nullptr)
{
const char * name = pLayerElement->Attribute("name");
if (name == nullptr) return XML_ERROR_PARSING_ATTRIBUTE;
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "Parsing layer %s", name);
log(stringBuilderBuffer);
mapData.width = readIntAttribute(pLayerElement, "width");
mapData.height = readIntAttribute(pLayerElement, "height");
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "Map width: %i Map height: %i", mapData.width, mapData.height);
log(stringBuilderBuffer);
XMLElement* pDataElement = pLayerElement->FirstChildElement("data");
stringstream ss(pDataElement->GetText());
int numLayers = mapData.tiles.size();
vector<int> v;
mapData.tiles.push_back(v);
int i;
while (ss >> i)
{
mapData.tiles[numLayers].push_back(i);
if (ss.peek() == ',')
{
ss.ignore();
}
}
pLayerElement = pLayerElement->NextSiblingElement("layer");
}
// parse object groups
XMLElement* pObjectGroupElement = pMapElement->FirstChildElement("objectgroup");
if (pObjectGroupElement == nullptr) return XML_ERROR_PARSING_ELEMENT;
mapData.walkabilityGrid = new bool[mapData.mapLength()];
memset(mapData.walkabilityGrid, 1, mapData.mapLength());
while (pObjectGroupElement != nullptr)
{
const char * name = pObjectGroupElement->Attribute("name");
if (name == nullptr) return XML_ERROR_PARSING_ATTRIBUTE;
snprintf(stringBuilderBuffer, sizeof(stringBuilderBuffer), "Parsing object group %s", name);
log(stringBuilderBuffer);
if (strcmp(name, "NoWalk") == 0)
{
XMLElement* pListElement = pObjectGroupElement->FirstChildElement("object");
while (pListElement != nullptr)
{
int x = readIntAttribute(pListElement, "x");
int y = readIntAttribute(pListElement, "y");
int w = readIntAttribute(pListElement, "width");
int h = readIntAttribute(pListElement, "height");
int row = y / mapData.tileSize;
int col = x / mapData.tileSize;
int colliderWidth = w / mapData.tileSize;
int colliderHeight = h / mapData.tileSize;
for (int r = row; r < row + colliderHeight; ++r)
{
for (int c = col; c < col + colliderWidth; ++c)
{
mapData.walkabilityGrid[c + r * mapData.width] = false;
}
}
pListElement = pListElement->NextSiblingElement("object");
}
}
pObjectGroupElement = pObjectGroupElement->NextSiblingElement("objectgroup");
}
// write file
log("Encoding tmx as lua script...");
writeLuaMapScript(mapData, outScript);
return XML_SUCCESS;
}
void readTmx(const char* tmxFile, const char* topPathToMatch, const char* outTableName, string& outScript)
{
XMLCheckResult(parseTmx(tmxFile, topPathToMatch, outTableName, outScript));
}
} // end of namespace mingine |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import re
import os
import pickle
import time
from collections import defaultdict
from pprint import pformat
from names import FULL_TO_SHORT
POWERS = ["AUSTRIA", "ENGLAND", "FRANCE", "GERMANY", "ITALY", "RUSSIA", "TURKEY"]
IGNORE_TESTS = ["6.B.11."]
OVERRIDE_OWNERS = {"6.A.6.": {"LON": "ENGLAND"}, "6.B.13.": {"BUL/SC": "RUSSIA"}}
OVERRIDE_ORDERS = {
"6.B.13.": {"RUSSIA": {"BUL/SC": "F BUL/SC - CON"}},
"6.D.34.": {"ITALY": {"PRU": "A PRU S A LVN - PRU"}},
}
class IgnoreTest(Exception):
pass
def parse_test(html):
test_name = html.lstrip("<h4>").strip().split(" ", 1)[0]
if test_name in IGNORE_TESTS:
raise IgnoreTest()
test_description = html.lstrip("<h4>").split("</h4>", 1)[0]
splits = re.compile("</?pre>").split(html)
assert len(splits) == 3, "Bad: " + test_name
orders = parse_orders(splits[1], override_orders=OVERRIDE_ORDERS.get(test_name, {}))
result_str = splits[2].strip().replace("\n", " ")
return test_name, test_description, orders, result_str
def parse_orders(s, override_orders={}):
r = defaultdict(list)
current_power = None
for line in s.strip().upper().split("\n"):
line = line.strip()
if len(line) == 0:
continue
for p in POWERS:
if line.startswith(p):
current_power = p
break
else:
if any(line.startswith(pre) for pre in ["A ", "F ", "BUILD ", "DISBAND ", "REMOVE "]):
# parse line
for full, short in FULL_TO_SHORT.items():
if full in line:
line = line.replace(full, short)
for c in ["NC", "EC", "SC", "WC"]:
line = line.replace(f"({c})", f"/{c}")
for a, b in [
("SUPPORTS", "S"),
("CONVOYS", "C"),
("BUILD", "B"),
("DISBAND", "D"),
("REMOVE", "D"),
("HOLD", "H"),
("VIA CONVOY", "VIA"),
]:
line = line.replace(a, b)
# lines are sometimes written "B F STP/NC" instead of "F STP/NC B"
for pre in ["B ", "D "]:
if line.startswith(pre):
line = line[2:] + " " + pre[0]
line = override_orders.get(current_power, {}).pop(line.split()[1], line)
r[current_power].append(line)
else:
raise Exception("Bad line: " + line)
# handle remaining override orders
for power, d in override_orders.items():
for order in d.values():
r[power].append(order)
return dict(r)
def parse_tests_from_html(html_path):
with open(html_path, "r") as f:
html = f.read()
splits = re.compile(r"""a name="[6]\.[A-Z]\.[0-9]+">""").split(html)
test_strs = splits[1:-1]
parsed_tests = []
for s in test_strs:
try:
parsed_tests.append(parse_test(s))
except IgnoreTest:
continue
except AssertionError:
continue
return parsed_tests
if __name__ == "__main__":
# read input cache
CACHE_FILE = "parse_datc_cache.pkl"
if os.path.isfile(CACHE_FILE):
with open(CACHE_FILE, "rb") as f:
user_inputs = pickle.load(f)
else:
user_inputs = {}
# read html file and extract orders
parsed_tests = parse_tests_from_html("datc.html")
stats_t, stats_count = 0, 0
try:
for i, (test_name, test_description, orders, result_str) in enumerate(parsed_tests):
if test_name in user_inputs:
continue
print(chr(27) + "[2J") # clear screen
print(f"{i} / {len(test_strs)}\tavg={stats_t/(stats_count+.0001)}")
print(s, pformat(orders), result_str, sep="\n\n")
t_start = time.time()
user_input = input(
"\nWhich locs' moves should succeed? [NO(NE), ALL, !LOC, LOC H, LOC D, SKIP]: "
)
t_elapsed = time.time() - t_start
user_inputs[test_name] = user_input
# some stats keeping
stats_t += t_elapsed
stats_count += 1
finally:
with open(CACHE_FILE, "wb") as f:
pickle.dump(user_inputs, f)
|
Contributing Factors to Online Banking Adoption Among Customers in Kampala, Uganda The advancement of technology has greatly impacted the nature of businesses pushing most from the traditional brick and wall to a virtual market. The Covid-19 strains in the past year have also immensely cut of human physical contact and spanned the conventional monetary exchange to a cashless platform. However, surveys still register that several Ugandans access bank services at the banking hall instead of using the internet that has been termed as more convenient, easy to use and widely accessible. This study therefore aimed at providing a deeper understanding of factors that contribute to the adoption of online banking in Kampala-Uganda. More specifically, the study explores perceived risk, perceived ease of use, perceived usefulness mediated by customer satisfaction as the contributing factors to online banking. into The study adopted the Technology Adoption Model (TAM) to develop the conceptual framework used. A cross sectional research design was employed to achieve the study aims and objectives together with a quantitative approach that used a survey questionnaire to collect data from 300 bank customers from Standard chartered, Centenary and Stanbic banks. The results revealed that perceived ease of use and perceived usefulness positively affected adoption of mobile banking and therefore should guide online adoption applications. However, the increase in perceived risks was found to not necessarily lead to a decline in the levels of adoption. This study therefore adds to the scant literature in the area of online banking especially in emerging countries like Uganda. 2020). The banking industry of the 21st century is facing a complex and ever changing environment due to the constant innovation and developments in the technological marketplace. (Ogare, 2013;Tseng & Wei, 2020). In Qatar for instance, the stable economy, Islamic values and principles (Anouze & Alamro, 2019) together with the considerable investments potential has catalysed the need and value of E-banking. Therefore, web-based banking continues to evolve and stand as a competitive advantage used to attract and retain customers in the financial sectors of developed economies. What is left now, is to ensure service quality in the process such as efficiency, reliability, security and privacy (Khatoon, Zhengliang, & Hussain, 2020). This has a positive influence on customers purchasing intentions which is important in the survival of the business. One thing for certain in the MENA region is the level of involvement by the central banks and other regulatory bodies as they use traditional conservatism to protect family owned businesses from cyber-crime and encourage banking the unbanked. This has led to a broader level of financial inclusion (Global payments, 2020). Anyona conducted a research in the CBD of Nairobi Kenya and concluded that most of the respondents strongly agreed that they used mobile banking systems to check on their account details and mini statements and make payments towards government and public utilities. However, when it came to the actual transfer of funds from one account to another or making loan and credit card payments, majority of the respondents disagreed. A similar study conducted in Sudan (Nancy, Siddig, & Abdel, 2014) showed that bank customers are more likely to adopt m-banking services if they find them easy to use with minimal efforts and yet the bank can provide protection for their confidential information. 
Introduced to the Ugandan market as early as 1997, internet banking which forms part of the financial inclusion has been researched to result into poverty alleviation, increased commercial bank sales which all result into economic growth (Alliance for Financial Inclusion, 2019). After the automation of banking services in 1993,, Standard chartered bank launched the first ATM in 1997 and other banks followed suit. Cerudeb (now Centenary Bank) followed in the subsequent year and participated in the Bankom interconnection electronic system that was introduced in 2004 to encourage and support the use of IT facilities within Ugandan banks. The Ugandan banking sector witnessed major turmoil between the late 1990s and early 2000s when several banks exited the market such as Greenland, Teefe, Gold Trust and Uganda Cooperative Banks. The remaining banks seized the market though growing and adapting to e-banking at rates of 30% and 49% below projected levels. In regards to Vision 2020, Bank of Uganda (BOU) became a member of the Alliance for Financial Inclusion (AFI) in 2011 ultimately raising financial service usage from 52% in 2013 to 58% in 2018. Emphasis has since been placed on digital financial services, an ecosystem that comprises mobile Network operators (MNO), commercial banks and other financial institutions, Bank of Uganda, and technology operators. Currently, MNO leads this market at 56% and has moved into partnerships with BOU supervised financial institutions to ensure safety of customer funds when they are being transferred electronically (Alliance for Financial Inclusion, 2019). Despite efforts to promote online banking through the financial inclusion strategy, several customers are still not comfortable with the internet, have little faith in cashless transactions and have fear that they may lose their money in phishing scams and online frauds. Many customers are seen resistant to the use of E-banking services; hence, the branch services are much necessary too. The threats posed to the customer include identity theft, the risk to private information and sensitive data, and the danger of loss of money due to internet scammers (Sardana & Singhania, 2018). 1.1 Statement of the problem E-banking has set new heights and drastically influenced consumer behaviour. However, despite a superficially good adoption level, the degree of usage is still low among the customers. Finscope survey 2018 discovered that 73% of Ugandan Adults accessed their bank account via the bank branches while only 2% used the internet for similar services. Of these adults, 52% used bank accounts for saving purposes, 40% to pay bills and 11% for credit facilities. Critical to enhance the adoption of digital financial services are consumer data protection, empowerment of users and instilled confidence through awareness and literacy. Again, according to Capgemini, payment volumes are predicted to continue at elevated levels due to the dampened global economy and increased reliance on non-cash transactions. The report also predicts the Compound Annual Growth Rate (CAGR) for non-cash transactions between 2019 and 2023 to be at 12% having risen from a near surge of 14% in 2018/2019. There has been increased demand for the internet and a drastic change of consumer behaviour. The year 2019 registered increased non-cash transactions worldwide championed by Asia Pacific that recorded 31.1% growth. 
This growth might be attributed to increased use of smartphones, mobile and QR payment systems, digital wallet adoption among others. Bank of Uganda acknowledges an increase in the number of transactions for EFTs and RTGS from six million in 2010 to approximately eight million transactions in 2017 and from Ugx 5 billion in 2010 to Ugx 300 billion in 2017 of electronic value. With online banking, banks could be able to reach unbanked populations, enhance access of financial services to low income, reach geographically distant people and offer social and financial services but adoption to these services is still debatable due to perceived risks and costs. However, the adoption rates of online banking are still lower than would be expected. The rate at which people have adapted to mobile money is in fact higher than mobile banking yet the latter offers more advantages such as access to larger loans, interest on savings and access to lengthy statements. Low adoption of internet banking can be evidenced from: 1) customer service desks at the banks have continuously registered complaints about slow web pages that make the transactions slow ending up in error reports. These lead to customer dissatisfaction and low rates of adoption as there will not be registered repeated usage. 2) Banking halls are still packed with queues of people requiring services that could be carried out online. The Monitor newspaper, (2020, January, 11) as detailed by Mr Robert Nyamu the Deloitte's director forensic and litigation support, fraudsters have more recently targeted RTGS systems, EFTs, and point of sale EFTs. This has created fear in the customers who would rather line up in the banking halls than lose huge amounts of money to fraudsters. This could be blamed on the increased bank fraud due toperceived failure to establish high controls that match the increased innovation in the banking sector. Customers might still be hesitant to use online banking because of perceived risks such as loss of money that may end up in wrong transaction accounts. They prefer that if any, such mistakes be made by the bank and its employees so that correction will be made and the customers retain their finances. Max Patrico (2020r 5) reported that Stanbic bank had been hit by online bank fraud where bank accounts falsely instructed the bank to wire money to about 2000 mobile money accounts registered with Airtel and MTN. Owing to the above, this study sought to identify and examine the most contributing factors to online banking adoption rates among individual bank customers in Kampala, Uganda. The study delimits itself to the relevance of perceived usefulness (PU), perceived ease of use (PEOU), perceived risk (PR) and customer satisfaction (CS) in regard to the adoption of online banking technology in 3 out of 26 commercial banks (Standard Chartered Bank, Centenary Bank, and Stanbic Bank) with branches in Kampala. The study specifically intended to: i) To evaluate the factors that affect adoption of internet banking among customers in Kampala, Uganda. 
ii) To analyse the relationship between adoption of internet banking and its contributing factors iii) To establish the role of customer satisfaction in relation to the adoption of internet banking in Uganda iv) To establish the mediating effects of customer satisfaction in a relationship of the determining factors of online banking adoption This research makes a contribution by adding knowledge value to the theory and limited literature available about the banking sector in Uganda and sub-Saharan Africa in general. Specifically, the study offers solutions to the bankers and several other financial institutions in Uganda on how to enhance online banking adoption amidst rapid customer behavioural changes towards brick and mortar service delivery and therefore require rapid technological changes to cope with the market transition. The rest of the paper presents the literature review, the study methodology, the discussion of findings of the study and the conclusions and recommendations. Roger under the diffusion of innovation theory contends that culture determines the spread of technology and innovation, and it is based on the fact that several people possess different qualities and may therefore accept or reject innovation. This means that customer adoption frequency can be modelled into a classical bell-shaped distribution curve (Achieng & Ingari, 2015). Innovation here is the extent to which individuals adopt to the system compared to others (Rogers, 1983;Achieng & Ingari, 2015) with a five-category classification of innovators, early adopters, early majority, late majority and laggards. In this order, innovators that form 2.5 percent will always be the most versatile, inquisitive and creative compared to the rest of the categories as they are willing to take on more risks and are more daring. Ideally, such adopters have the ability to observe, have relative advantage, ability to try out complex issues. Theoretical Framework Additionally, Davis, Bagozzi, and Warshaw provide the Technology Acceptance Model (TAM). This model suggests perceived usefulness and perceived ease of use as the most important factors that readily explain consumer behaviour towards adoption. Davis explains that intention to use determines actual behaviour which influences attitude and the perceived usefulness. These in turn are based on perceived ease of use. TAM argues that external variables that support ease of use and advantages of adoption are very vital to technology because other factors being constant, an easy to use system offers wide benefits to the user. The extended model of the technology acceptance model added variables such as risk, trust and cost to determine the consistency of the research results (). In fact, Makanyeza urges banks to concentrate on TAM when designing new mobile bank services as this model determines consumers' intention to adopt mobile banking services in the same way that Lin, Wang and Hung, does. This study therefore makes more contribution towards the Technology Acceptance Model and diffusion of innovation theory. Source: Extant literature review 2. Literature review 2.1 Perceived risk (PR) and online banking adoption Risk perceptions are by far the most influential barrier towards adoption of internet banking as customers perceive online banking to be riskier than conventional banking. 
Perceived risk refers to a "spirit cost associated with uncertainty in the future that directly affects consumers' intention to purchase" (Wei, Wang, Zhu, Xue, & Chen, 2018;Zhang & Yu, 2020;Zhang, Lu, & Kizildag, 2018). Bauer who introduced perceived risk to consumer behavioural analysis noted that perceived risk is brought about by lack of product information. While critiquing the works of Tan and Teo and Brown et al. who modelled perceived risk as a single construct, Lee expanded on the variable to encompass 5 facets. These facets include performance risks, security/privacy risks, time/convenience risks, social risks and financial risks. Performance risks are losses that can be experienced due to deficiencies or malfunctions of mobile servers, expected to reduce customers' willingness to use internet banking. Security risks on the other hand consider potential losses arising from fraud or compromise of the security system of an online banker by a hacker. The other looks at any inconvenience or loss of time arising from the difficulty or failure to navigate the site or even receive a payment requested, difficulty in finding the right website application to use uploads and downloads on a bank page. The social or psychological risk would occur if the social circles of a customer like friends, family and work-group disapprove of the use of online banking, bringing negative decisions made by the customer to the adoption of online banking. Lastly, financial risks may arise out of distrust for the system functions, fear of being overcharged, fear that the purchased product may not fulfil the purpose for which they were bought and the general internet insecurity. These risks represent potential monetary losses owing to transaction errors or bank account misuse. All these constructs impact negatively on the adoption of online banking. Davis define perceived ease of use as the level at which a person believes that little effort is needed to be able to use a particular system with a high degree of certainty so as to improve their work performance. This certainly drives consumers towards the use of new technology based applications to run their financial transactions. Moreover, according to Roy and Sinha perceived ease of use can considerably influence the adoption of online banking. It influences the attitude towards the system which in turn influences the behavioural intention of the customer which directly affects the actual use (Lin, Wang, & Hung, 2020). This therefore implies that banking institutions might increase online banking adoption rates among their customers if they design systems that are easy to operate (Achieng & Ingari, 2015). David describes perceived usefulness to be confidence initiated by an individual that its adoption of an activity will have a positive impact and would as a result increase their performance. Many customers have therefore appreciated the ease of use, reliability, abundance of information and cost reductions which are important to the success of internet banking (Cai, Yang, & Cude, 2008). Usefulness is determined by the following ISSN 2222-1697 (Paper) ISSN 2222-2847 (Online) Vol.12, No.10, 2021 55 characteristics; relative advantage, website quality, knowledge and support. With relative advantage, online bankers are more likely to access a wide variety and amount of information, 24/7 accessibility compared to nonusers (). Moreover, they will be saving money and time. 
Website quality conversely supports the banks by offering traditional services and highlighting new opportunities that arise from the internet being a new channel to conduct business (Swaid & Wigand, 2007). Banks are therefore encouraged to design their websites putting emphasis on screen designs, navigation patterns strategically to enhance usability and usefulness (Floh & Treblamarer, 2006) as customers might be impatient with delays. Lastly, considering knowledge and support, the lack of knowledge about financial services may negatively affect individuals' decisions as well as consumer usage. Consumers need the type and amount of information as availed by customer service personnel in the banking halls. But even most importantly, the customer service personnel should have adequate information as regards how the bank website works, how customers can register for online services and what perquisites they may be taking if they sign up. A number of studies corroborate the positive effect of perceived usefulness on customers' adoption of online banking (;Yitbarek & Zaleke, 2013;Al-Smadi, 2012). Customer satisfaction (CS) and adoption of online banking as a mediator Customer satisfaction denotes a person's feeling of pleasure or disappointment, which resulted from comparing a product's perceived performance or outcome against his/her expectations (Asiyanbi & Ishola, 2018). Asiyanbi & Ishola's study highlight among others service quality, security and privacy as highly vital to customer satisfaction since they have a positive effect and determine customers' purchasing intentions and relationship with the bank. While customers look for satisfaction, the usually opt for portability versus reliability when choosing a platform to access internet services (Al-Khalaf & Choe, 2020) even when it can be reasoned that reliability possesses the utmost essential features of internet banking. Customer satisfaction and quality are very parallel in the banking sector most especially because banking is in the service industry (Pooya, Khorasani & Ghouzdhi, 2020;Tseng & Wei, 2020). Accordingly, quality is measured by the consumer who might appreciate quality of service beyond quality of information or quality of the system. Research methodology A cross sectional research design was employed in this study to achieve the study aims and objectives. Quantitative approach was employed through the use of a survey questionnaire. Quantitative indicators and hypotheses were identified and structured into a questionnaire. The self-administered questionnaire consisted of two sections. The first captured demographic information about respondents that included personal data and experiences they have had with online banking. The second section used a five-point likert scale to survey respondents' perceptions towards the dependent variables. The study was carried out in Kampala, the capital city of Uganda specifically with customers of Centenary bank -Mapeera House, Standard Chartered Bank Nile Avenue and Stanbic Bank Crested Towers which are all around the Central Business District. The target population consisted of customers of the headquarters of the three banks since these were the biggest branches with the most numbers of customers. All banks offer both in store and online services and respondents were account holders with personal, business or corporate accounts and either Savings, current or fixed accounts. 
This study used simple random strategy and purposive sampling method to identify the customers as they are the key subjects of study in online banking. The respondents' knowledge and experiences were important to this study as they are key in online banking adoption. Krejcie and Morgan sample size determination table was used to get a sample of 310 customers out of which only 300 responded to the study questionnaires representing 96.77% response rate. Primary data such as demographics, motivations, attitudes and lifestyle characteristics were collected using questionnaires with guidance from Anyola, Achieng & Ingari, Karma, Ibrahim & Ali and Kabir. To increase the quality of primary data, questionnaire respondents were guaranteed confidentiality and anonymity and given a brief before they started filling out questionnaires. The questions were also precise and concise making them easy to understand and respond to. This led to increased response rates and ensured quality of data collected. Once survey questionnaires were collected, they were checked for completeness, accuracy and review to screen out incomplete, redundant and ambiguous responses. The remaining information was then transformed into research terms. A Microsoft excel sheet was then drawn and a thorough collection of responses presented through tables, percentages, standard deviation and mean. These provided a summary of answers from the questionnaire and answered the research questions. The excel sheet data was then imported into and analysed using a statistical package for social scientists (SPSS) version 23. Content validity index (CVI) to measure the validity of the instrument given to 10 experts to give their expert view on whether the questionnaire would capture the intended data. All responses gave a CVI of over 0.8. The aggregate Cronbach alpha test computation was done for reliability of questions for the study from a pre-test on 30 respondents before the actual survey. A Cronbach alpha coefficient of 0.770 was achieved for the 26 items which was a satisfactory level of internal consistency of the scale. Descriptive statistics were computed and thereafter correlation and regression tests were conducted for the study. The study model specification went as follows for the inferential tests: LOA = 0 + 1X1 + 2X2 + 3X3 + Where; LOA = Level of Adoption of online banking 0 = constant X1 = Perceived Risks (PR) X2 = Perceived Ease of Use (PEOU) X3 = Perceived Usefulness (PU) X4 = Customer Satisfaction (CS) = Error margin 4. Findings 310 questionnaires were distributed to customers of Centenary, Stanbic and standard chartered banks. A response rate of 96.77% was registered as 300 questionnaires were properly filled and returned. These were the basis of this study analyses that follow. The findings in table 4.1 first, on gender indicate that majority respondents were female (58%) with 42% male and both genders were fairly represented. Second, majority of respondents on were aged between 31 and 40 years (41.67%) followed by 32.67% who were between the ages 18 and 30 years. The rest of the respondents were between the age of 41 and 50 years representing 15.33% and the last category of respondents of 50 years and above represented 10.33%. Therefore, there was fair representation as the views of the entire population regardless of age was reflected though most of the commercial banks seem to serve more youthful and middle-aged customers. 
Demographic characteristics Regarding the Level of Education and where the respondents get their banking services from, table 4.2 below provides a clear description. 4.2 indicate that most of the respondents had a first degree as the highest level of education (58.7%). The second largest number was those with master's degree or above as a qualification represented by 27% followed by diploma holders (12%) and lastly certificate holders represented by 2.3%. Therefore, the level of education was diverse and relatively represented. This could also attest to the level at which the questionnaire questions and hence the research questions were understood and answered to the best of knowledge of the respondents. Averagely, respondents showed they get their banking services from any of the three banks under study. Most successful questionnaires returned were those from Standard Chartered bank at a rate of 34.7% followed by Stanbic at 34.3% then Centenary at 31.0%. All banks were relatively well represented in the survey as respondents are their actual customers. Table 4.3 below shows the type of account the held by respondents and how often they use e-banking with their bankers. With the banking diversity, majority of respondents (91%) hold personal accounts and a few (7%) have business accounts and the least (2%) run corporate accounts with the banks under study. This shows that most people with bank accounts are actually keeping their personal funds and therefore any mishap or credit that may be borne by the account is also borne by the individual. When asked how often they use online banking, the majority of the respondents (33.3%) indicated they rarely used it, others never (25.7%), sometimes (25.7%), usually (10.0%) and very few often (5.3%). This shows that the number of respondents using internet banking often is a very small number out of the entire population. 4.1.1 Benefits of using online banking hence adoption. To establish the benefits associated with online banking, a likert scale was engaged to enable respondents attach the level at which they value these benefits. The findings in table 4.4 below show majority of respondents to strongly agree to using online banking to check bank details. The mean response was 4.23 and standard deviation 1.068 indicating that the responses were varied. Most respondents also agreed to using their online accounts to check on mini statements at a mean of 4.16 and a standard deviation of 0.927 to show that responses were not varied. Several respondents use online accounts to check for foreign exchange rates as 54.3% strongly agreed to the statement. The question recorded a mean of 4.34 and a standard deviation of 0.910 to show that responses were not varied. When asked if online banking was beneficial for paying utilities and bills, majority strongly agreed at a 52.3% frequency, 4.34 mean and 0.865 standard deviation to show that responses were not varied. As regards using online platform to transfer funds, most respondents (49%) strongly disagreed to using the platform for these purposes. The mean was 1.92 and responses varied at a standard deviation of 1.148. For the purposes of credit card and loan payments, respondents strongly disagreed (27.7%) to benefit from online-banking this way. The mean was 3.00 and responses varied quite a bit as the standard deviation was 1.549. 
Most respondents (46.7%) strongly agreed that online bank services provided real time services with a mean of 4.18 and standard deviation of 1.026 to show that responses varied among the respondents. 55.7% of the respondents strongly agreed that internet banking provided 24 hour services 7 days a week throughout the 365 days in a year. Therefore, internet banking was constantly available. The mean was 4.28 and standard deviation 1.016. Testing variables. 4.2.1 Perceived risk of online banking The study enquired about the perceived risk of online banking among the adults who have a traditional bank account with centenary bank, standard chartered bank and Stanbic bank around Kampala with responses rated on a five-point Likert scale ranging from strongly disagree (SD) to strongly agree (SA) and the results are presented in table 4.5 below. Most of the respondents (46.7%) strongly agreed with the statement that the online banking was expensive to use. The mean for the statement was 3.80 and standard deviation was 1.43 implying a high variation in responses. Majority of the respondents 252(84%) strongly disagreed with the statement that mobile banking platform was a secure place through which to send sensitive information indicating a perceived risk in online banking. The mean response was 1.39, indicating that most of the respondents did not agree with the statement, while the standard deviation of 0.939 indicated a minimal variance in responses. Furthermore, approximately 46% of the respondents agreed with the statement that they find online banking to be convenient and saves time. The mean response to the statement was 3.74 showing that majority of the respondents agreed with that particular statement. The standard deviation was 1.060. About 74.3% of the respondents also agreed that online banking is risky to use, directly increasing the perceived risk in this mode of banking. The mean response for the statement was 4.56, meaning that majority of the respondents strongly agreed with the information. The standard deviation was 0.921 showing less variation in responses. When asked if they were satisfied with the verification process of online users, the respondents (59.0%) strongly disagreed. The mean for this statement was 1.88 as several responses disagreed with the statement and standard deviation 1.208 because responses slightly varied. 4.2.2 Perceived Ease of Use of online banking The second independent variable tested was perceived ease of use of online banking among customers. The responses were evaluated on the same Likert scale and the results presented in table 4.6 below as follows; The majority, 70.0% of the respondents strongly agreed with only 2.0% strongly disagreeing that they can access their bank account on internet devices such as phones, tablets and computers. The mean for the statement is 4.45 while the standard deviation was 0.019 showing that responses were not varied. Additionally, 49.3% of the respondents strongly disagreed with the idea that internet banking is complicated and difficult to use. Only 9.0% thought it was. The mean for this variable was 2.18 and standard deviation 1.394. The study also established that 55.7% of the respondents strongly agreed with the statement that interaction with online banking services did not require a lot of mental effort. Their mean was 4.06 while the standard deviation was 1.248. 
The picture was similar for flexibility: 55.7% of respondents found online banking procedures flexible to interact with, with a mean of 4.12 and a standard deviation of 1.207. In response to the statement 'learning to operate online banking is easy and quick', most respondents (48.7%) strongly agreed and only 2.0% strongly disagreed; responses varied, with a standard deviation of 1.098. As to whether the bank websites and apps are appealing, clear and easy to understand, the majority of respondents (62.6%) agreed, while 15% were not sure and the rest disagreed, with a mean of 3.71 and a standard deviation of 1.322.

4.2.3 Perceived usefulness of online banking

Table 4.7 below shows the results for perceived usefulness of online banking among bank customers as rated on the Likert scale. When asked whether they thought mobile banking helped them accomplish tasks faster, the majority of respondents (45.3%) strongly agreed, with a mean of 4.01 and a standard deviation of 1.185. Similarly, the results indicate that the majority (49.7%) of the respondents strongly agreed that online banking makes it easier to carry out tasks, with a mean of 4.09 and a standard deviation of 1.160. Several respondents (81%) agreed that using online payment improved their self-esteem and prestige and was therefore useful in terms of social influence, with a mean of 4.21 and a standard deviation of 1.057; this implies that online payment is seen as useful, albeit with some variation in responses. Similarly, 81.7% of respondents agreed that mobile banking made them look trendy and savvy among peers, increasing their social appearance and capital, with a mean of 4.22 and a standard deviation of 1.050, showing slight variation in responses. Mobile banking was agreed to offer a wider range of information than the traditional banking-hall system, as most respondents (82.4%) agreed overall with this statement, at a mean of 3.98 and a standard deviation of 0.920, showing that results were not varied. As to whether online banking is advantageous overall, 58.0% of respondents strongly agreed and 33.7% agreed, with a mean of 4.43 and a standard deviation of 0.833; responses were therefore not greatly varied. All these responses reflect the usefulness of online banking to customers.

4.2.4 Results of customer satisfaction with mobile/online banking

The results in Table 4.8 below show that the study sought to establish the effect of demographics among bank account holders, rated on the five-point Likert scale. Most respondents (49%) disagreed that they preferred internet banking, with a mean of 2.62 and varied responses at a standard deviation of 1.244. This could be attributed to the fact that the majority of respondents (51.7%) strongly agreed that they would be criticised if they lost money due to errors within online banking, at a mean of 4.31 with responses that did not vary much (standard deviation 0.919). Overall, the majority of the respondents (79.4%) agreed that age does not affect their ability to operate online banking, at a mean of 3.8 and a standard deviation of 1.029, showing that responses were slightly varied. When asked about education level, most respondents (65.7%) disagreed overall that education level affects their capacity to use online banking, at a mean of 2.25 with varied responses (standard deviation 1.189). Most respondents (63%) also agreed overall that they will continue using mobile banking services, with varied responses at a mean of 3.61 and a standard deviation of 1.256. They also agreed that they would recommend online banking, with 56.4% of responses agreeing overall, at a mean of 3.54 and a standard deviation of 1.122.
These responses show that bank customers are satisfied with online banking.

Table: correlation matrix of the study variables (N = 300; ** correlation is significant at the 0.01 level, 2-tailed).

Inferential statistics

The direction and significance of the relationships between variables were established by correlation analysis. The results showed that adoption of mobile banking is positively correlated with the three independent variables. Adoption is positively and significantly associated with PU (r = 0.341, p < 0.001). With PEOU, adoption of mobile banking is positively correlated (r = 0.104, p = 0.071), though the relationship is not significant. Adoption is also positively and significantly correlated with PR (r = 0.279, p < 0.001). Therefore, any improvement in the independent factors implies an improvement in the level of adoption and vice versa. The study also indicates that PU and PEOU are positively and significantly correlated with each other (r = 0.355, p = 0.001), and PU and PR are likewise positively and significantly correlated (r = 0.104, p = 0.002). PEOU and PR are positively associated, though the relationship is not significant (r = 0.007, p = 0.909). ANOVA was used to establish the fitness of the model and the significance of the relationship between adoption of online banking and the independent variables. The findings show a significant relationship between the dependent and independent variables, given a significance level of 0.001, which is below the p-value threshold of 0.05. The model was therefore a reasonable fit, showing a significant association between adoption of online banking and the selected independent variables.

4.3.2 Regression analysis

Regression analysis was undertaken to identify the nature of the relationships between the study variables. The standardised coefficients represent the change in the adoption level of online banking due to a one-unit change in each independent variable, other factors held constant, as can be seen in Table 4.12 below. The results show that PU is positively and significantly related to adoption of online banking among the banks' customers (β = 0.302, p = 0.001); a unit increase in PU translates into a 0.302 increase in adoption, so PU positively affects adoption of online banking. Relatedly, PR positively and significantly influences online banking adoption (β = 0.225, p = 0.001), meaning that a unit increase in PR leads to a 0.225 increase in adoption levels, other factors in the model held constant. However, PEOU did not show any tangible or significant relationship with online banking adoption (β = -0.004, p = 0.940). The fitted model LOA = β0 + β1PU + β2PEOU + β3PR + ε has a constant of 1.741, which is the adoption level when all the independent variables take the value zero.

4.3.3 Customer satisfaction as a mediating factor

The mediating effect of satisfaction was first tested against perceived usefulness (PU) and the results are presented in Table 4.14.

Table 4.14: Satisfaction mediating perceived usefulness (PU) and adoption of online banking. Y = ADOPTION, X = PU, M = SATISFY. Source: Primary data.

The results in Table 4.14 reveal that customer satisfaction is a significant predictor of adoption as a result of perceived usefulness (R = 0.367, p < 0.001), with R² = 13.4% of the change in online banking adoption levels explained. The β for satisfaction is 0.132. The effect of satisfaction on adoption is further defined by the indirect effect of 0.009, which, added to the direct effect of 0.293, gives a total effect of 0.302 (PROCESS procedure matrix in the appendix).
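To make this decomposition concrete, the outline below sketches how a simple mediation of this kind can be computed. It is an illustrative sketch only, assuming a data frame with hypothetical columns PU, SATISFY and ADOPTION (averaged Likert scores); the estimates reported above were obtained with the PROCESS procedure.

import pandas as pd
import statsmodels.api as sm

def simple_mediation(df: pd.DataFrame, x: str, m: str, y: str) -> dict:
    # Path a: effect of the independent variable X on the mediator M
    a = sm.OLS(df[m], sm.add_constant(df[[x]])).fit().params[x]
    # Paths b and c': effect of M and X on Y estimated in the same model
    both = sm.OLS(df[y], sm.add_constant(df[[x, m]])).fit()
    b, c_prime = both.params[m], both.params[x]
    # Path c: total effect of X on Y
    c = sm.OLS(df[y], sm.add_constant(df[[x]])).fit().params[x]
    return {"direct": c_prime, "indirect": a * b, "total": c}

# Example: simple_mediation(data, "PU", "SATISFY", "ADOPTION") returns the
# direct, indirect and total effects; the indirect plus direct effects should
# approximately reproduce the 0.293 + 0.009 = 0.302 decomposition above.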
The finding indicates that there is partial mediation by customer satisfaction between perceived usefulness and the level of adoption.

Satisfaction mediating PEOU and online banking adoption

The results in Table 4.15 reveal that customer satisfaction partially mediates the relationship between perceived ease of use and adoption of online banking. Customer satisfaction is a significant predictor of adoption as a result of perceived ease of use (R = 0.189, p = 0.0045), with R² = 3.57% of the change in adoption levels explained. The β for satisfaction is 0.1512. The effect of satisfaction on adoption is further defined by the indirect effect of 0.0061, which, added to the direct effect of 0.092, gives a total effect of 0.098 (PROCESS procedure matrix in the appendix). The finding indicates that there is partial mediation by customer satisfaction between perceived ease of use and the level of adoption.

Satisfaction mediating PR and adoption of online banking

Table 4.16: Satisfaction mediating PR. Y = ADOPTION, X = PR, M = SATISFY. Source: Primary data.

The test results show that customer satisfaction plays a mediating role between perceived risk and adoption of internet banking (R = 0.309, p = 0.001), with R² = 9.57% of the change in adoption levels explained. The β for satisfaction is 0.1284. The total effect of the change is 0.366 and the direct effect 0.347, hence an indirect effect of 0.018. Therefore, customer satisfaction mediates the effect of PR on adoption of internet banking.

Discussions, conclusions and recommendations

According to the mean scores, the findings show that most of the respondents agreed with the statements about perceived risks. In this context, perceived risks negatively affect adoption, as respondents believe the details of their bank accounts are sensitive pieces of information online and deem the platform very risky. Regarding perceived usefulness, most respondents agreed that online banking is advantageous overall, as it makes it easier to accomplish tasks and generally improves the personal outlook of users. Most respondents also agreed that little mental effort is required to operate internet banking. Perceived risk (PR) encompasses security and privacy, which consumers take very seriously. In this study, the overall mean for PR was 2.658, indicating that most of the respondents found e-banking to be very risky, and the standard deviation of 1.067 showed that these responses were varied. The correlation coefficient between PR and adoption levels was r = 0.295, meaning that an increase in PR may not necessarily lead to a decline in adoption levels for this sample population; the p-value of 0.001 makes the relationship statistically very significant. The perception of risk is fuelled by the degree of inconsistency between customer judgement and anticipated outcome on the one hand and the actual behaviour and actual outcome delivered by the technology on the other. This, coupled with losses incurred as a consequence of error, fraud or failed systems, negatively affects the level at which an individual will adopt internet or online banking. The study surveyed whether the e-banking platform is secure for sending sensitive information, whether it is convenient and saves time, and whether it is expensive and may lead to loss of money in case of errors. It also asked whether users are comfortable with, and agree to, the verification process on the online account. The empirical evidence suggests that e-banking users are not confident in the service and are not willing to take any risks.
The online banking environment inherently carries several risks, such as the lack of compensation or of binding contracts in case of errors. For perceived ease of use, the average mean was 3.77 and the standard deviation 1.048, and the correlation coefficient was r = 0.104. These results show that the majority of respondents agreed that online banking is easy to use. The correlation coefficient, however, reveals a statistically non-significant relationship between the two variables, indicating that perceived ease of use may not affect the level of adoption of e-banking. This is in line with previous studies such as Lema, Al-Jabari, and Bidarra. It is, however, in contrast with several other researchers who found ease of use to be directly and significantly influential in determining adoption of online banking, such as Anouze & Alamro, Achieng & Ingari, and Alkhowaiter. ICT companies have in recent years striven to create cheaper, easier-to-use and more user-friendly software, drawing on artificial intelligence, machine learning and robotics, and these developments have greatly affected consumers' attitudes towards adopting online activities. TAM states that perceived ease of use directly and positively affects attitudes towards the acceptance of technology and in the same way predicts the perceived usefulness of that technology, hence increasing adoption; this is contrary to the study results. For perceived usefulness, the correlation coefficient was r = 0.341 with a p-value of 0.001, indicating a positive and significant relationship; this was strengthened by the regression results. Customers' attitudes towards e-banking services are directly influenced by their perception of the usefulness of the technology. When customers perceive the technology to be time-saving, convenient and more advantageous than traditional banking systems, their attitude towards its adoption will be positive. These results are in line with Davis's TAM, which states that perceived usefulness (PU) positively affects attitudes towards acceptance of a given technology. In this study, PU was found to be more important than PEOU in influencing behavioural intention towards online banking adoption, in line with previous studies (see Teka, 2020; Al-Smadi, 2012; Anyona, 2018). Considering the average mean of 4.15, it can be inferred that the majority of respondents find e-banking advantageous overall and therefore useful, supporting the adoption of online banking by customers. The results reported that satisfaction has a positive and significant impact on the level of adoption of online banking: (R = 0.367, p = 0.000, β = 0.131) for PU, (R = 0.189, p = 0.0045, β = 0.1512) for PEOU, and (R = 0.309, p = 0.0000, β = 0.1284) for PR. Therefore, an increase in the satisfaction level tends to increase adoption of online banking services. Wandi et al., in their study on the impact of product and service quality on awareness and customer satisfaction in Islamic banks, report a positive and significant effect of quality on customer satisfaction. Relatedly, a meta-analysis of 46 studies by Alkhowaiter observes that behavioural intentions affect customer satisfaction. Studies such as those of Anouze & Alamro and Al-Khalaf & Choe also provide consistent results supporting the premise that customer satisfaction is strongly linked to adoption of online banking. The study analysed factors that affect online banking among customers of commercial banks in Kampala, Uganda.
The study adopted the Technology Acceptance Model (TAM) to produce its results and concludes as follows. Customers are willing to adopt online banking if they perceive it to be easy to use: for instance, if it can easily be accessed on multiple internet-enabled devices, if it does not require a lot of mental effort to operate, and if it is easy to learn and flexible to use. Customers are also willing to adopt, and to continue usage, if they perceive online banking to be useful in their life: for example, if it makes it easier or faster for them to accomplish tasks, provides a wider range of information than they would get from a desk in the banking hall, and/or offers them increased prestige and self-image or ranks them as trendy and savvy among their colleagues and peers. However, bank customers may become sceptical and abandon internet banking if the perceived risks increase or outweigh the advantages: for example, where it is costly to access online banking, where sharing sensitive information is involved, or where the risk of losing money to errors, fraud and cons is increased. Users also decline online banking when they perceive that their age or educational background may hinder their efforts, especially because the internet is perceived as a Generation Y and Generation Z platform; Generation X and above would rather go to the traditional bank than 'take a risk' on online banks. Customer satisfaction is also very important in mediating the relationships between perceived usefulness, perceived ease of use and perceived risks on the one hand and online banking adoption on the other. This study therefore concludes that bank customers in Kampala, Uganda, adopt online banking mainly for reasons associated with its ease of use and its usefulness to those who adopt it. Online banking requires a good fit between the technology application and its users to encourage adoption. This study therefore recommends that financial institutions design applications that are simple, precise and concise, to aid understanding and reduce the amount of mental effort required to interact with them. Financial systems and services should be flexible, easily accessible and easy to learn in order to encourage adoption. The study also suggests that an online system should offer a wide range of information in a few clicks and improve customers' prestige and self-image, while remaining accessible to all regardless of level of education or age. Financial institutions should therefore invest resources in propositions that increase online banking along the lines of the above recommendations. Banks should likewise invest in, and focus more on, ways to profitably reduce the perceived risks that customers harbour. They should introduce more stringent security features such as one-time passwords (OTP), biometrics, two-step verification and the like, to boost customer confidence in the verification and security aspects of online banking. Communities should also be sensitised through campaigns, promotions and persuasive advertising about the security systems put in place and the many advantages of internet banking, in order to increase usage. This study, however, was limited to perceived risks, perceived ease of use and perceived usefulness, and to how their relationships with the adoption of online banking are mediated, considering only customers of three banks (Stanbic Bank, Standard Chartered Bank and Centenary Bank) within Kampala. A study of all financial institutions in the country might give a different picture of the online banking model.
However, several other variables were left out and considered beyond the scope of this particular study. The study also analysed online banking in general, particularly e-banking channels as a whole; a more specific study of each element of internet banking might have revealed a different set of challenges or explanations, for instance why a user may regard EFTs as riskier than SWIFT transfers. Further research could also be carried out on additional factors affecting adoption, such as attitude, customer behaviour and effort expectancy, in order to explore more online banking adoption models.
Distinction and Base Change An irreducible smooth representation of a $p$-adic group $G$ is said to be distinguished with respect to a subgroup $H$ if it admits a non-trivial $H$-invariant linear form. When $H$ is the fixed group of an involution on $G$ it is suggested by the works of Herv\'e Jacquet from the nineties that distinction can be characterized in terms of the principle of functoriality. If the involution is the Galois involution then a recent conjecture of Dipendra Prasad predicts a formula for the dimension of the space of invariant linear forms which once again involves base change. We will describe the proof of this conjecture (in the generic case) for $SL(n)$ which is joint work with Dipendra Prasad. Then we describe one more newly discovered connection between distinction and base change which is that base change information appears in the constant of proportionality between two natural invariant linear forms on a distinguished representation. This latter result is for discrete series for $GL(n)$ and is joint with Nadir Matringe. This paper is a report on the author's talk in the International Colloquium on Arithmetic Geometry held in January 2020 at TIFR Mumbai. Introduction The so called relative Langlands program has got a major boost in the last decade thanks to the fundamental works of Gan-Gross-Prasad and Sakellaridis-Venkatesh . Distinguished representations are the basic objects of study in the relative Langlands program and they are studied both locally and globally. In the local setting, given a p-adic group G, a subgroup H of G, and a character of H, an irreducible admissible representation of G is said to be (H, )-distinguished if Hom H (, ) = {0}. In the global setting, where H < G are adelic groups, and is a cuspidal representation of G, the interest is not in any non-trivial (H, )-invariant linear form, instead distinction is defined in terms of the non-vanishing of a specific (H, )-invariant linear form which is the (H, )-period integral. When is the trivial character of H, it is often omitted from the definition and we say that the representation is H-distinguished. In situations such as in , local distinction is connected to the sign of certain local root numbers and global distinction is connected to the special values of certain automorphic L-functions. In situations such as in , it is expected that distinction can often be characterized in terms of the principle of functoriality. The notion of a distinguished representation, in both its local and global avatars, goes back to the pioneering work of Harder-Langlands-Rapoport about the Tate conjecture on algebraic cycles in the context of Hilbert-Blumenthal surfaces . In this work they are led to study distinction for the pair (R F/Q GL, GL), where F is a real quadratic number field. Several of the connections that distinction has 2020 Mathematics Subject Classification. 11F70, 22E50. 1 with other objects of interest are already there in . In particular, it is proved that a cuspidal representation of GL 2 (A F ) of trivial central character is distinguished precisely when its Asai L-function has a pole at s = 1 and such a representation is characterized as a base change lift of a cuspidal representation of GL 2 (A Q ) with non-trivial central character. Roughly around the same time, the work of Jacquet-Lai introduced the relative trace formula to investigate distinction . 
In the nineties, a series of papers of Jacquet and his collaborators investigated distinction for a number of symmetric pairs such as where E/F is a quadratic extension of p-adic fields (or number fields) and U(n, E/F) denotes a unitary group defined with respect to E/F. It was proposed that distinction for a symmetric pair (G, H) should have a simple characterization in terms of the principle of functoriality (for instance, see ). An instance of this proposal is that an irreducible admissible generic representation of GL n (E) is distinguished with respect to U(n, E/F) if and only if it is a base change lift from GL n (F). This is now known by the works of Jacquet and Feigon-Lapid-Offen (and ). Similarly, an irreducible admissible generic representation of GL n (E) is (GL n (F), n−1 E/F )-distinguished if and only if it is a base change lift from U(n, E/F), where E/F is the quadratic character of F associated to E/F. This suggestion is sometimes referred to as the Flicker-Rallis conjecture (it is the local version of the global conjecture stated in This paper is an informal exposition of two results of the author which bring to light the close relationship between distinction and base change; one of these is joint work with Dipendra Prasad and the other is joint work with Nadir Matringe . The first is Theorem 4.1 and the second is Theorem 5.1 of this paper. Theorem 4.1 proves a conjecture of Dipendra Prasad for G = SL(n). We start by recalling a few aspects of Prasad's conjecture in Section 2 and then summarize a number of facts connecting distinction for (GL n (E), GL n (F)) and base change from U(n, E/F) to GL n (E) (cf. ) in Section 3.2. In Section 4, we take up Theorem 4.1, and we discuss Theorem 5.1 in Section 5. Prasad's Conjecture Prasad's conjecture is for p-adic fields and for a Galois pair . As in Section 1, E/F is a quadratic extension of p-adic fields. Let G be a connected reductive group defined over F. Let G = G(E) and H = G(F). The Galois involution acts on G and hence on representations of G. For an irreducible admissible representation of G its Galois conjugate is denoted by. The contragredient representation of is denoted by ∨. If is such that ∨ ∼ =, we say that is conjugate self-dual. There are two objects associated to the data (G, E/F) which appear in the formulation of the conjecture. One of these is the opposition group denoted by G op constructed in which in particular has the property that it is isomorphic to G over E. The other is a character of H of order ≤ 2, denoted by G, constructed in . The conjecture has three parts. The first part asserts that an irreducible admissible representation of G which is (H, G )-distinguished is conjugate self-dual and moreover its L-packet arises as the base change lift of an L-packet of G op (F). The two examples considered in Section 1 correspond to Thus, we have already seen that the first part of Prasad's conjecture is true in the case of both these examples for generic representations. In fact the assertion on conjugate self-duality is known to be true for all irreducible admissible representations. For G = GL(n), this is , and for G = U(n, E/F), this is . The second and third parts of the conjecture are for generic representations so assume that G is quasi-split over F. Fix a Borel subgroup B of G and let N be its unipotent radical. The second part of the conjecture probes for distinction inside a generic L-packet and proposes a simple recipe to detect distinction. 
It says that an irreducible admissible representation which is generic for a non-degenerate character of N(E)/N(F) is (H, G )-distinguished provided its L-packet is a base change lift of an L-packet of G op (F). The third and perhaps the most important part of the conjecture is the multiplicity formula. We do not state it as it involves more technical terms and refer to for the precise statement. It suffices to say that a key ingredient in the conjectural formula for the multiplicity -which is dim C Hom H (, G ) -is the cardinality of the fiber of the base change map from L-packets of G op (F) to L-packets of G. Thus the connection between distinction and base change is indeed quite deep. For G = GL(n), the corresponding Galois pair is a Gelfand pair ; i.e., dim C Hom H (, G ) = 1 for every irreducible admissible representation of G which is (H, G )-distinguished. This fits well with the conjectural multiplicity formula which in this case equals the cardinality of the fiber of the base change map from U(n, E/F) to GL n (E) as this map is known to be injective (cf. ). For G = U(n, E/F), the multiplicity can be more than 1. For instance, it is easy to see that the principal series representation Ps(, −1 ) of GL 2 (E) with = is both GL 2 (F)-distinguished and (GL 2 (F), E/F )-distinguished and that the corresponding invariant linear functionals are U(2, E/F)-invariant. It can be seen that dim C Hom H (, G ) = 2 in this case. More generally, if is an irreducible admissible generic representation of GL n (E) then it is parabolically induced from a number of essentially square-integrable representations i of GL n i (E), 1 ≤ i ≤ t, n = n 1 + + n t, say Suppose and r many of these i 's are Galois invariant. The conjectural multiplicity formula would then predict We refer to for more details. By , the right hand side is known to be a lower bound with equality known to hold if the Galois invariant i 's are distinct. Recently, R. Beuzart-Plessis has proved this multiplicity formula in general . 3. Base change from U(n, E/F) to GL n (E) In Section 2, we have briefly mentioned the connection between distinction for the Galois pair (GL n (E), GL n (F)) and base change from a unitary group. We make it more precise in this section. We closely follow . In order to discuss base change we need to introduce the notion of a Langlands parameter of a group G defined over a p-adic field k. It is an admissible homomorphism from the Weil-Deligne group of k it sends W k to semisimple elements and its restriction to SL 2 (C) is algebraic. Two Langlands parameters are equivalent if they are conjugate by G ∨. We denote by (G) (or sometimes by (G(k))) the set of equivalence classes of Langlands parameters of G. The groups of interest to us are GL(n) and the quasi-split unitary group defined by where the Weil group W F of F acts by projection to Gal(E/F), and w ∈ W F W E acts as the automorphism g → J t g −1 J −1. The base change map from U(n) to GL n (E) at the level of Langlands parameters is a map from Now the Flicker-Rallis conjecture (see where it is stated in the global context) is the assertion that, for an irreducible admissible generic representation of GL n (E) with Langlands parameter, Remark 1. What we described above is what is called the stable base change map from U(n) to GL n (E). There is also an unstable base change map with respect to an extension of E/F to E. 
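For use below (notably in Theorem 3.1), we recall what parity means for a conjugate self-dual parameter; the following is a minimal sketch in the standard formulation of Gan-Gross-Prasad, with one common choice of normalisation. Fix $s \in W_F \setminus W_E$ and, for a parameter $\sigma$ of $GL_n(E)$ acting on $M = \mathbb{C}^n$, set $\sigma^{s}(\tau) = \sigma(s\tau s^{-1})$. The parameter $\sigma$ is conjugate self-dual when $\sigma^{\vee} \cong \sigma^{s}$; equivalently, when there is a non-degenerate bilinear form $B$ on $M$ with $B(\sigma(\tau)x, \sigma^{s}(\tau)y) = B(x,y)$ for all $\tau$. It is conjugate self-dual of parity $\varepsilon = \pm 1$ if, in addition, $B(y,x) = \varepsilon\, B(\sigma(s^{2})x, y)$ for all $x, y \in M$; parity $+1$ parameters are the conjugate-orthogonal ones and parity $-1$ parameters the conjugate-symplectic ones.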
A representation of GL n (E) is in the image of the stable base change map if and only if ⊗ is in the image of the unstable base change map with respect to. As mentioned in Section 2, this is now proved thanks to and . We end this section by stating these two results and by paraphrasing the Flicker-Rallis conjecture in the language of Prasad's conjecture. To this end, we introduce the notion of parity of a conjugate self-dual Langlands parameter for GL n (E). So let. Thus the Flicker-Rallis conjecture follows from combining Theorem 3.1 and Theorem 3.2. Theorem 3.1 (Matringe). An irreducible admissible generic representation of GL n (E) is GL n (F)-distinguished if and only if its Langlands parameter is conjugate self-dual of parity +1. Now we state all these together in the language of Prasad's conjecture. (SL n (E), SL n (F)) Prasad's conjecture for SL(n) is proved in (see also for some of the early works in this direction). In this section, following , we summarise the key steps involved in its proof. The opposition group for G = SL(n) over E/F is G op = SU(n, E/F) and Prasad's character G is the trivial character. Thus, we are interested in the space of SL n (F)invariant linear forms on an irreducible admissible generic representation of SL n (E) and also in the base change map from SU(n) to SL n (E). Base change for SU(n) fits into the commutative diagram where p F and p E are the natural projections induced by the homomorphism The key observation is that the maps p F, p E are surjective and BC is injective. The surjectivity of p E follows from Tate's theorem according to which To proceed, let ∈ (SU(n)). Let ∈ p −1 F (). Now observe that The above observation leads us to make the following crucial definitions (see also ). Two members of BC((U(n))) are weakly (resp. strongly) equivalent if they differ by a character of E (resp. E /F ). Strong and weak equivalence classes are similarly defined in the set of (GL n (F), n−1 E/F )-distinguished representations. It follows that the cardinality of is the number of strong equivalence classes in the weak equivalence class of BC( ). We now state the main theorem of which is the exact analogue of Theorem 3.3 for SL(n) with an additional condition to probe for distinction inside an L-packet . Part of Theorem 4.1 follows from the commutative diagram above for base change from SU(n) together with the fact that ∈ | SL n (E) for some irreducible admissible generic representation of GL n (E) which can be taken to be (GL n (F), The key ingredient in proving is Theorem 4.2 below, which follows from combining a number of results. Firstly, we have a result due to Flicker by which for an irreducible admissible unitary generic representation of GL n (E) and for a nondegenerate character of N(E)/N(F), the integral N(F)\P(F) W(p)dp is absolutely convergent for W in the -Whittaker model W (, ) of, where P(F) denotes the mirabolic subgroup of GL n (F) . Also, by , the linear form defined on W (, ) by such an integral is nontrivial. If is non-unitary but generic and also GL n (F)-distinguished then it is shown in that the integral N(F)\P(F) W(p)| det p| s−1 dp (which is convergent for Re(s) large and admitting a meromorphic continuation to C by the Rankin-Selberg theory of the Asai L-function and in particular by ) is holomorphic at s = 1. We denote this regularized integral by * N(F)\P(F) W(p)dp which is not identically zero as a linear form on W (, ) (see, for instance, ). 
Secondly, we have a result due to Youngbin Ok by which Hom P(F) (, 1) = Hom GL n (F) (, 1) for any irreducible admissible GL n (F)-distinguished representation (see and ). We remark here that Ok's result in the tempered case is proved independently in . Theorem 4.2. The unique, up to multiplication by scalars, GL n (F)-invariant linear form on a GL n (F)-distinguished irreducible admissible generic representation of GL n (E) is given on its -Whittaker model W (, ) by where is a non-degenerate character of N(E)/N(F). In order to prove the multiplicity formula dim C Hom SL n (F) (, 1) = |pBC −1 ( )|, choose an irreducible admissible generic representation of GL n (E) containing the representation which is (GL n (F), E/F )-distinguished and such that BC( ) =. The right hand side is the number of strong equivalence classes in the weak equivalence class of (inside Image(BC)). The left hand side can be shown to be the number of strong equivalence classes in the weak equivalence class of (among the (GL n (F), n−1 E/F )-distinguished irreducible generic representations of GL n (E)). Remark 2. A consequence of Theorem 4.1 is that an SL n (F)-distinguished irreducible admissible generic representation of SL n (E) is conjugate self-dual. This follows from the uniqueness of non-degenerate Whittaker models for GL n (E). Indeed, for an irreducible -generic representation of SL n (E) which is SL n (F)-distinguished let be an irreducible admissible generic representation of GL n (E) such that appears in its restriction to SL n (E). By multiplicity one of Whittaker functionals on we see that is the unique -generic representation in its L-packet. A character of N(E)/N(F) has the property that −1 = and therefore it follows that both ∨ and are generic with respect to the same character of N(E) and thus they are isomorphic. Remark 4. The assertions in Remark 2 are true also over finite fields; i.e., for the pair (SL n (F q 2 ), SL n (F q )). This is . A new connection between distinction and base change In Section 2, we saw Prasad's conjecture which related distinction and base change in a precise way and in particular the cardinality of the fiber of the base change from the opposition group G op (F) goes into the conjectural formula for the dimension of the space of (G(F), G )-invariant forms on an irreducible admissible generic representation of G(E). In Section 4, we saw the main ideas behind the proof of Prasad's conjecture for SL(n) in . In this section, we present a result, joint with Nadir Matringe, which illustrates the connection between distinction and base change in yet another way which is that base change information appears in the constant of proportionality between two natural invariant linear forms on a distinguished representation. The result is for the pair (GL n (E), GL n (F)) and for GL n (F)-distinguished discrete series representations of GL n (E) and it is contained in . We give an informal introduction to the result and its proof. The main points to keep in mind are: The pair (GL n (E), GL n (F)) is of multiplicity one . There are two natural GL n (F)-invariant forms on a GL n (F)-distinguished discrete series representation of GL n (E). One due to Flicker , denoted by ℓ, which we saw in Theorem 4.2, and the other due to Kable, say . The distinguishing linear forms and ℓ differ by a constant by Flicker's multiplicity one result. Flicker-Rallis conjecture, recalled in Theorem 3.3, according to which distinction for (GL n (E), GL( n (F)) is related to base change from U(n, E/F). 
Our result evaluates the proportionality constant in above and it involves the formal degrees of the base changed and base changing representations. We state the result towards the end of this section (cf. Theorem 5.1). Thus the two inputs for the statement of Theorem 5.1 are base change and formal degree. We have introduced base change from U(n) to GL n (E) in Section 3.2. Let us now recall the definition of the formal degree of a discrete series representation. If is a discrete series representation of a p-adic group, there exists d () ∈ R >0 such that This d () is the formal degree of, which of course depends on the choice of the Haar measure. Remark 5. Recall the orthogonality relations for matrix coefficients for a finite group. A matrix coefficient of a (unitary) representation is a function on G given by Thus, formal degree, for infinite dimensional representations, plays the role of the dimension of a finite dimensional representation. The Hiraga-Ichino-Ikeda conjecture gives a formula for the formal degree of a discrete series representation of a p-adic group in terms of its adjoint gamma function evaluated at s = 0 . This conjecture is proved for GL(n) in itself (cf. ) and it is proved for U(n, E/F) by Beuzart-Plessis . With respect to the specific choice of Haar measure as in , we have: For a discrete series representation of GL n (E), its formal degree is given by where the gamma factor is the Rankin-Selberg gamma factor. For a discrete series representation of U(n, E/F), its formal degree is given by (see also ) where the gamma factor is the twisted Asai gamma factor. on the -Whittaker model W (, ) of an irreducible unitary generic representation of GL n (E) (cf. Theorem 4.2). As mentioned earlier, this linear form is first considered by Flicker who also proved the absolute convergence of the above integral . The form ℓ is always non-zero and clearly P(F)-invariant. It is GL n (F)-invariant precisely when is GL n (F)-distinguished (see Section 4). Now consider on W (, ) which is obviously GL n (F)-invariant but this integral is convergent only when is a discrete series representation . So assume is a discrete series representation. The form is non-zero precisely when is GL n (F)distinguished . Now is the following. Theorem 5.1 (Anandavardhanan-Matringe). Let be a discrete series representation of GL n (E) which is GL n (F)-distinguished. Let be the (discrete series) representation of U(n, E/F) that base changes to (stably or unstably depending on the parity of n). Let d() (resp. d()) denote the formal degree of. Then, where c is a positive constant that does not depend on the representations and. For the proof of Theorem 5.1, we refer to . However, we indicate the key ingredients in its proof in a sort of informal way. The starting point of the proof of Theorem 5.1 is the functional equation for the Asai L-function defined via the Rankin-Selberg integral method. This is due to and . Thus, if Z(s, W, ) = N n (E)\GL n (E) W(g)((0,..., 0, 1)g)| det g| s dg for W ∈ W (, ) and for a Schwartz-Bruhat function on E n, we have where W ∈ W ( ∨, −1 ) is given by W(J t g −1 ) and is the Fourier transform of. Now by the proof of , the right hand side of is connected to the linear form. On the other hand, by , the left hand side of is connected to the linear form ℓ. This step is subtle and involves a new functional equation which in turn requires knowledge of the sign of the local root number which is a particular case of (cf. ). 
This analysis finally leads to the relation where c is a measure theoretic positive constant which does not depend on the representations and. Remark 7. In , we were led to the formulation of Theorem 5.1 by a result over finite fields showing that, for G = GL(n) or G = U(n, q), the Bessel function B, is a test vector for the natural G(F q )-invariant linear form on an irreducible generic representation of G(F q 2 ) which is a base change from G op (F q ). Here, is a nondegenerate character of N(F q 2 )/N(F q ), where N is the unipotent radical of a (fixed) Borel subgroup of G. In fact, by , where is the irreducible generic representation of G op (F q ) that base changes to. This identity for finite fields is generalized to all irreducible generic uniform representations for any connected quasi-split reductive group by Chang Yang . It will be of interest to look for a p-adic analogue as in Theorem 5.1 for G = GL(n). thank C. S. Rajan, my initial work with him, again during my postdoc period, has had a major impact on several of the results presented in this paper. Thanks are due to Rajat Tandon for introducing me to the beautiful area of representation theory of p-adic groups and in particular to distinguished representations. Finally I thank the organizers for the invitation to speak at the International Colloquium. |
/*******************************************************************************
**entry point for the mex call
**nlhs - number of outputs
**plhs - pointer to array of outputs
**nrhs - number of inputs
**prhs - pointer to array of inputs
*******************************************************************************/
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
const mxArray *keyVector = prhs[0];
const mxArray *inputVector = prhs[1];
mwSize lenKeys = mxGetNumberOfElements(keyVector);
mwSize lenInputVector = mxGetNumberOfElements(inputVector);
struct keyID *keyMap = NULL;
struct keyID *curr = NULL;
struct keyID *mapMemPool = (struct keyID*)mxCalloc(lenKeys,sizeof(struct keyID));
struct keyID **memPtr = &mapMemPool;
char * currKey = NULL;
const mxArray* cellP;
mwSize keyLen = 0;
unsigned int cidx = 0;
for (mwSize i = 0; i < lenKeys;++i)
{
currKey = NULL; /* reset so a key from a previous iteration is never reused when a cell is empty */
cellP = mxGetCell(keyVector,i);
if (cellP != NULL)
{
currKey = mxArrayToString(cellP);
}
if (currKey)
{
keyLen = strlen(currKey);
HASH_FIND_STR(keyMap, currKey, curr);
if (curr==NULL)
{
curr = mapMemPool+cidx;
curr->name = currKey;
curr->id = cidx++;
HASH_ADD_KEYPTR( hh, keyMap, curr->name, strlen(curr->name), curr );
}
else
{
mexPrintf("Warn: duplicate key %s\n",currKey);
mxFree(currKey); /* duplicate key strings are not stored in the map, so release the copy */
}
}
}
plhs[0] = mxCreateDoubleMatrix(lenInputVector,1,mxREAL);
double* idxVector = (double *)mxGetPr(plhs[0]);
double nan = mxGetNaN();
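/* inputs that cannot be matched against a key are flagged with NaN in the output index vector */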
char* currOut;
for (mwSize i = 0;i<lenInputVector;++i)
{
currOut = NULL; /* reset in case mxGetCell returns NULL for an empty cell */
curr = NULL; /* reset so a stale match from the previous iteration is not reused */
cellP = mxGetCell(inputVector,i);
if (cellP != NULL)
{
currOut = mxArrayToString(cellP);
}
if (currOut)
{
HASH_FIND_STR( keyMap, currOut, curr);
}
if (curr)
{
idxVector[i] = curr->id+1;
}
else
{
mexPrintf("Warn: key %s not found\n", currOut ? currOut : "(empty cell)");
idxVector[i] = nan;
}
if (currOut)
{
mxFree(currOut); /* release the string allocated by mxArrayToString */
}
}
return;
} |
Oral care practices in non-mechanically ventilated intensive care unit patients: An integrative review. AIMS AND OBJECTIVES To explore current oral care practices in nonmechanically ventilated ICU patients. BACKGROUND Oral hygiene is an important aspect of nursing care in hospitalised populations. Oral care is a disease preventive and cost-effective measure for patients, particularly in ICU patients. Numerous studies support the value of oral care practices in mechanically ventilated ICU patients. Due to evidence supporting the benefits of oral care in nonmechanically ventilated patients, it would be beneficial to examine the literature for oral care practices in this population. METHODOLOGY Literature searches of the following databases were performed: CINAHL Plus, MEDLINE, PsychInfo, Academic Search Premier, Cochrane Database of Systematic Reviews, and Web of Science. Three peer-reviewed articles were included in the review after inclusion criteria were applied. Findings were appraised, organised conceptually and synthesised using Torraco (2016b) as a guiding framework. Evidence was appraised using the Johns Hopkins Nursing Evidence-based Practice Rating Scale. PRISMA reporting guidelines were followed, when applicable. RESULTS Findings support the existing gap in the literature of oral hygiene practices in nonmechanically ventilated ICU patients. Themes included the type of oral care products used, frequencies of oral care, documented oral care practices and personnel that performed the care. STUDY IMPLICATIONS This integrative review identified an important gap in the literature for oral care practices in nonmechanically ventilated ICU patient populations. Further research on current oral care practices and development of evidence-based guidelines for this population are recommended. RELEVANCE TO CLINICAL PRACTICE Nurses should provide oral care to all hospitalised patients and follow oral care guidelines specific to their population, if available. |
/*
* Copyright 2013-2020 Real Logic Limited.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package uk.co.real_logic.sbe.generation.rust;
import org.agrona.generation.OutputManager;
import java.io.IOException;
import java.io.Writer;
import static java.lang.String.format;
import static uk.co.real_logic.sbe.generation.rust.RustGenerator.DATA_LIFETIME;
import static uk.co.real_logic.sbe.generation.rust.RustUtil.INDENT;
import static uk.co.real_logic.sbe.generation.rust.RustUtil.indent;
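/**
 * The two flavours of generated Rust codec, decoder and encoder. Each constant
 * knows which scratch struct it wraps and how to emit the direct read/write
 * methods for fixed-size fields, so the surrounding generation logic (wrap
 * methods, "done" coder types and the message header coder) can be shared.
 */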
enum RustCodecType
{
Decoder
{
String scratchProperty()
{
return RustGenerator.SCRATCH_DECODER_PROPERTY;
}
String scratchType()
{
return RustGenerator.SCRATCH_DECODER_TYPE;
}
void appendDirectCodeMethods(
final Appendable appendable,
final String methodName,
final String representationType,
final String nextCoderType,
final int numBytes,
final int trailingBytes) throws IOException
{
indent(appendable, 1, "pub fn %s(mut self) -> CodecResult<(&%s %s, %s)> {\n",
methodName, DATA_LIFETIME, representationType, RustGenerator.withLifetime(nextCoderType));
indent(appendable, 2, "let v = self.%s.read_type::<%s>(%s)?;\n",
RustCodecType.Decoder.scratchProperty(), representationType, numBytes);
if (trailingBytes > 0)
{
indent(appendable, 2, "self.%s.skip_bytes(%s)?;\n",
RustCodecType.Decoder.scratchProperty(), trailingBytes);
}
indent(appendable, 2, "Ok((v, %s::wrap(self.%s)))\n",
nextCoderType, RustCodecType.Decoder.scratchProperty());
indent(appendable).append("}\n");
}
String gerund()
{
return "decoding";
}
},
Encoder
{
String scratchProperty()
{
return RustGenerator.SCRATCH_ENCODER_PROPERTY;
}
String scratchType()
{
return RustGenerator.SCRATCH_ENCODER_TYPE;
}
void appendDirectCodeMethods(
final Appendable appendable,
final String methodName,
final String representationType,
final String nextCoderType,
final int numBytes,
final int trailingBytes) throws IOException
{
indent(appendable, 1, "\n/// Create a mutable struct reference overlaid atop the data buffer\n");
indent(appendable, 1, "/// such that changes to the struct directly edit the buffer. \n");
indent(appendable, 1, "/// Note that the initial content of the struct's fields may be garbage.\n");
indent(appendable, 1, "pub fn %s(mut self) -> CodecResult<(&%s mut %s, %s)> {\n",
methodName, DATA_LIFETIME, representationType, RustGenerator.withLifetime(nextCoderType));
if (trailingBytes > 0)
{
indent(appendable, 2, "// add trailing bytes to extend the end position of the scratch buffer\n");
}
indent(appendable, 2, "let v = self.%s.writable_overlay::<%s>(%s+%s)?;\n",
RustCodecType.Encoder.scratchProperty(), representationType, numBytes, trailingBytes);
indent(appendable, 2, "Ok((v, %s::wrap(self.%s)))\n",
nextCoderType, RustCodecType.Encoder.scratchProperty());
indent(appendable).append("}\n\n");
indent(appendable, 1, "/// Copy the bytes of a value into the data buffer\n");
indent(appendable).append(String.format("pub fn %s_copy(mut self, t: &%s) -> CodecResult<%s> {\n",
methodName, representationType, RustGenerator.withLifetime(nextCoderType)));
indent(appendable, 2)
.append(format("self.%s.write_type::<%s>(t, %s)?;\n",
RustCodecType.Encoder.scratchProperty(), representationType, numBytes));
if (trailingBytes > 0)
{
indent(appendable, 2, "// fixed message length > sum of field lengths\n");
indent(appendable, 2, "self.%s.skip_bytes(%s)?;\n",
RustCodecType.Decoder.scratchProperty(), trailingBytes);
}
indent(appendable, 2).append(format("Ok(%s::wrap(self.%s))\n",
nextCoderType, RustCodecType.Encoder.scratchProperty()));
indent(appendable).append("}\n");
}
String gerund()
{
return "encoding";
}
};
void appendScratchWrappingStruct(final Appendable appendable, final String structName)
throws IOException
{
appendable.append(String.format("pub struct %s <%s> {\n", structName, DATA_LIFETIME))
.append(INDENT).append(String.format("%s: %s <%s>,%n", scratchProperty(), scratchType(), DATA_LIFETIME))
.append("}\n");
}
abstract String scratchProperty();
abstract String scratchType();
abstract void appendDirectCodeMethods(
Appendable appendable,
String methodName,
String representationType,
String nextCoderType,
int numBytes,
int trailingBytes) throws IOException;
abstract String gerund();
String generateDoneCoderType(
final OutputManager outputManager,
final String messageTypeName)
throws IOException
{
final String doneTypeName = messageTypeName + name() + "Done";
try (Writer writer = outputManager.createOutput(doneTypeName))
{
appendScratchWrappingStruct(writer, doneTypeName);
RustGenerator.appendImplWithLifetimeHeader(writer, doneTypeName);
indent(writer, 1, "/// Returns the number of bytes %s\n", this == Encoder ? "encoded" : "decoded");
indent(writer, 1, "pub fn unwrap(self) -> usize {\n");
indent(writer, 2, "self.%s.pos\n", scratchProperty());
indent(writer, 1, "}\n");
appendWrapMethod(writer, doneTypeName);
writer.append("}\n");
}
return doneTypeName;
}
void appendWrapMethod(final Appendable appendable, final String structName)
throws IOException
{
appendable.append("\n").append(INDENT).append(String.format(
"pub fn wrap(%s: %s) -> %s {%n", scratchProperty(), RustGenerator.withLifetime(scratchType()),
RustGenerator.withLifetime(structName)));
indent(appendable, 2, "%s { %s: %s }\n",
structName, scratchProperty(), scratchProperty());
indent(appendable).append("}\n");
}
String generateMessageHeaderCoder(
final String messageTypeName,
final OutputManager outputManager,
final String topType,
final int headerSize) throws IOException
{
final String messageHeaderRepresentation = "MessageHeader";
final String headerCoderType = messageTypeName + messageHeaderRepresentation + name();
try (Writer writer = outputManager.createOutput(headerCoderType))
{
appendScratchWrappingStruct(writer, headerCoderType);
RustGenerator.appendImplWithLifetimeHeader(writer, headerCoderType);
appendWrapMethod(writer, headerCoderType);
appendDirectCodeMethods(writer, "header", messageHeaderRepresentation, topType, headerSize, 0);
writer.append("}\n");
}
return headerCoderType;
}
}
|
/**
* Created by Tommy Ettinger on 6/20/2019.
*/
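// Assumed imports, not present in this fragment: the tests rely on JUnit 4's
// @Test annotation, while the Noise class is expected to come from the
// enclosing library's package (that import is omitted because the package is
// not shown here).
import org.junit.Test;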
public class NoiseTests {
public static final int TRIAL_COUNT = 500000000;
@Test
public void testRange2D()
{
Noise noise = new Noise(543212345, 1f, Noise.SIMPLEX);
long state = 12345678901L;
float x, y, result, xLo = 0, yLo = 0, xHi = 0, yHi = 0;
float min = 0.5f, max = 0.5f;
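// Sample pseudo-random coordinates whose magnitudes span a wide range (a small
// signed integer divided by a value close to 0..1), remember the arguments that
// produced the extreme outputs, then refine around them below to tighten the
// empirical range and derive a normalising multiplier.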
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singleSimplex((int)state, x, y);
if (result == (min = Math.min(min, result))) {
xLo = x;
yLo = y;
}
if (result == (max = Math.max(max, result))) {
xHi = x;
yHi = y;
}
}
System.out.println("Preliminary 2D min=" + min + ",max=" + max + ",multiplier=" + (1f / Math.max(-min, max)));
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
min = Math.min(min, noise.singleSimplex((int)state, xLo + g, yLo + h));
max = Math.max(max, noise.singleSimplex((int)state, xHi + g, yHi + h));
}
}
System.out.println("Better 2D min=" + min + ",max=" + max + ",multiplier=" + (1f / Math.max(-min, max)));
}
@Test
public void testRange3D()
{
Noise noise = new Noise(543212345, 1f, Noise.SIMPLEX);
long state = 12345678901L;
float x, y, z, result, xLo = 0, yLo = 0, zLo = 0, xHi = 0, yHi = 0, zHi = 0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singleSimplex((int)state, x, y, z);
if (result == (min = Math.min(min, result))) {
xLo = x;
yLo = y;
zLo = z;
}
if (result == (max = Math.max(max, result))) {
xHi = x;
yHi = y;
zHi = z;
}
}
System.out.println("Preliminary 3D min=" + min + ",max=" + max + ",multiplier=" + (1f / Math.max(-min, max)));
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-3f) {
min = Math.min(min, noise.singleSimplex((int)state, xLo + g, yLo + h, zLo + i));
max = Math.max(max, noise.singleSimplex((int)state, xHi + g, yHi + h, zHi + i));
}
}
}
System.out.println("Better 3D min=" + min + ",max=" + max + ",multiplier=" + (1f / Math.max(-min, max)));
}
@Test
public void testRange4D() {
Noise noise = new Noise(543212345, 1f, Noise.SIMPLEX);
long state = 12345678901L;
float x, y, z, w, result, xLo = 0, yLo = 0, zLo = 0, wLo = 0, xHi = 0, yHi = 0, zHi = 0, wHi = 0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
w = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singleSimplex((int)state, x, y, z, w);
if (result == (min = Math.min(min, result))) {
xLo = x;
yLo = y;
zLo = z;
wLo = w;
}
if (result == (max = Math.max(max, result))) {
xHi = x;
yHi = y;
zHi = z;
wHi = w;
}
}
System.out.println("Preliminary 4D min=" + min + ",max=" + max + ",multiplier=" + (1f / Math.max(-min, max)));
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-3f) {
for (float j = -0.5f; j <= 0.5f; j += 0x1p-3f) {
min = Math.min(min, noise.singleSimplex((int)state, xLo + g, yLo + h, zLo + i, wLo + j));
max = Math.max(max, noise.singleSimplex((int)state, xHi + g, yHi + h, zHi + i, wHi + j));
}
}
}
}
System.out.println("Better 4D min=" + min + ",max=" + max + ",multiplier=" + (1f / Math.max(-min, max)));
}
@Test
public void testRange5D()
{
Noise noise = new Noise(543212345, 1f, Noise.SIMPLEX);
long state = 12345678901L;
float x, y, z, w, u, result, xLo=0, yLo=0, zLo=0, wLo=0, uLo=0, xHi=0, yHi=0, zHi=0, wHi=0, uHi=0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
w = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
u = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singleSimplex((int)state, x, y, z, w, u);
if(result == (min = Math.min(min, result)))
{
xLo = x; yLo = y; zLo = z; wLo = w; uLo = u;
}
if(result == (max = Math.max(max, result)))
{
xHi = x; yHi = y; zHi = z; wHi = w; uHi = u;
}
}
System.out.println("Preliminary 5D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
for (float e = -0.5f; e <= 0.5f; e += 0x1p-3f) {
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-3f) {
for (float j = -0.5f; j <= 0.5f; j += 0x1p-3f) {
min = Math.min(min, noise.singleSimplex((int)state, xLo + g, yLo + h, zLo + i, wLo + j, uLo + e));
max = Math.max(max, noise.singleSimplex((int)state, xHi + g, yHi + h, zHi + i, wHi + j, uHi + e));
}
}
}
}
}
System.out.println("Better 5D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRange6D()
{
Noise noise = new Noise(543212345, 1f, Noise.SIMPLEX);
long state = 12345678901L;
float x, y, z, w, u, v, result, xLo=0, yLo=0, zLo=0, wLo=0, uLo=0, vLo=0, xHi=0, yHi=0, zHi=0, wHi=0, uHi=0, vHi=0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
w = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
u = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
v = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singleSimplex((int)state, x, y, z, w, u, v);
if(result == (min = Math.min(min, result)))
{
xLo = x; yLo = y; zLo = z; wLo = w; uLo = u; vLo = v;
}
if(result == (max = Math.max(max, result)))
{
xHi = x; yHi = y; zHi = z; wHi = w; uHi = u; vHi = v;
}
}
System.out.println("Preliminary 6D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
for (float e = -0.5f; e <= 0.5f; e += 0x1p-3f) {
for (float f = -0.5f; f <= 0.5f; f += 0x1p-3f) {
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-3f) {
for (float j = -0.5f; j <= 0.5f; j += 0x1p-3f) {
min = Math.min(min, noise.singleSimplex((int)state, xLo + g, yLo + h, zLo + i, wLo + j, uLo + e, vLo + f));
max = Math.max(max, noise.singleSimplex((int)state, xHi + g, yHi + h, zHi + i, wHi + j, uHi + e, vHi + f));
}
}
}
}
}
}
System.out.println("Better 6D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeBillow2D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 2, 2f, 0.5f);
noise.setFractalType(Noise.BILLOW);
long state = 12345678L;
float x, y, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("Billow 2D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeFBM3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 3, 2f, 0.5f);
noise.setFractalType(Noise.FBM);
long state = 12345678L;
float x, y, z, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, z, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("FBM 3D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeFBM4D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 1, 2f, 0.5f);
noise.setFractalType(Noise.FBM);
long state = 12345678L;
float x, y, z, w, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (8.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-21f));
y = (state >> 58) / (8.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-21f));
z = (state >> 58) / (8.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-21f));
w = (state >> 58) / (8.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-21f));
result = noise.getNoiseWithSeed(x, y, z, w, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if (result > 1f)
higher++;
if (result < -1f)
lower++;
}
System.out.println("FBM 4D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeBillow3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 2, 2f, 0.5f);
noise.setFractalType(Noise.BILLOW);
long state = 12345678L;
float x, y, z, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, z, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("Billow 3D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeBillowInverse2D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 2, 0.5f, 2f);
noise.setFractalType(Noise.BILLOW);
long state = 12345678L;
float x, y, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("BillowInverse 2D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeBillowInverse3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 2, 0.5f, 2f);
noise.setFractalType(Noise.BILLOW);
long state = 12345678L;
float x, y, z, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, z, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("BillowInverse 3D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeRidged2D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 3, 2f, 0.5f);
noise.setFractalType(Noise.RIDGED_MULTI);
long state = 12345678901L;
float x, y, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("Ridged 2D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeRidged3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 3, 2f, 0.5f);
noise.setFractalType(Noise.RIDGED_MULTI);
long state = 12345678901L;
float x, y, z, result;
float min, max;
min = max = noise.getNoiseWithSeed(0, 0, 0, (int)state);
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, z, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("Ridged 3D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeRidgedInverse2D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 2, 0.5f, 2f);
noise.setFractalType(Noise.RIDGED_MULTI);
long state = 12345678L;
float x, y, result;
float min = 0.5f, max = 0.5f;
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("RidgedInverse 2D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangeRidgedInverse3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 3, 0.5f, 2f);
noise.setFractalType(Noise.RIDGED_MULTI);
long state = 12345678L;
float x, y, z, result;
float min, max;
min = max = noise.getNoiseWithSeed(0, 0, 0, (int)state);
int higher = 0, lower = 0;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
result = noise.getNoiseWithSeed(x, y, z, (int)state);
min = Math.min(min, result);
max = Math.max(max, result);
if(result > 1f)
higher++;
if(result < -1f)
lower++;
}
System.out.println("RidgedInverse 3D min="+min+",max="+max+",tooHighCount="+higher+",tooLowCount="+lower+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testAverageSimplexFBM3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.SIMPLEX_FRACTAL, 4, 2f, 0.5f);
noise.setFractalType(Noise.FBM);
long state = 12345678L;
float x, y, z;
BigInt big = new BigInt(0);
for (int i = 0; i < 0x10000; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
big.add(Math.round(4096 * noise.getNoiseWithSeed(x, y, z, (int)state)));
}
big.div(65536);
System.out.println("Simplex FBM 3D average(in range [-4096,4096])="+big.toString());
}
@Test
public void testAveragePerlinFBM3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.PERLIN_FRACTAL, 4, 2f, 0.5f);
noise.setFractalType(Noise.FBM);
long state = 12345678L;
float x, y, z;
BigInt big = new BigInt(0);
for (int i = 0; i < 0x10000; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
big.add(Math.round(4096 * noise.getNoiseWithSeed(x, y, z, (int)state)));
}
big.div(65536);
System.out.println("Perlin FBM 3D average(in range [-4096,4096])="+big.toString());
}
@Test
public void testAverageCubicFBM3D()
{
Noise noise = new Noise(543212345, 3.14159265f, Noise.CUBIC_FRACTAL, 4, 2f, 0.5f);
noise.setFractalType(Noise.FBM);
long state = 12345678L;
float x, y, z;
BigInt big = new BigInt(0);
for (int i = 0; i < 0x10000; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L & 0xffffffL) * 0x1p-24f));
big.add(Math.round(4096 * noise.getNoiseWithSeed(x, y, z, (int)state)));
}
big.div(65536);
System.out.println("Cubic FBM 3D average(in range [-4096,4096])="+big.toString());
}
@Test
public void testRangePerlin4D()
{
Noise noise = new Noise(543212345, 1f, Noise.PERLIN);
long state = 12345678901L;
float x, y, z, w, result, xLo=0, yLo=0, zLo=0, wLo=0, xHi=0, yHi=0, zHi=0, wHi=0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
w = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singlePerlin((int)state, x, y, z, w);
if(result == (min = Math.min(min, result)))
{
xLo = x; yLo = y; zLo = z; wLo = w;
}
if(result == (max = Math.max(max, result)))
{
xHi = x; yHi = y; zHi = z; wHi = w;
}
}
System.out.println("Preliminary 4D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
for (float g = -0.5f; g <= 0.5f; g += 0x1p-6f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-6f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-6f) {
for (float j = -0.5f; j <= 0.5f; j += 0x1p-6f) {
min = Math.min(min, noise.singlePerlin((int)state, xLo + g, yLo + h, zLo + i, wLo + j));
max = Math.max(max, noise.singlePerlin((int)state, xHi + g, yHi + h, zHi + i, wHi + j));
}
}
}
}
System.out.println("Better 4D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangePerlin5D()
{
Noise noise = new Noise(543212345, 1f, Noise.PERLIN);
long state = 12345678901L;
float x, y, z, w, u, result, xLo=0, yLo=0, zLo=0, wLo=0, uLo=0, xHi=0, yHi=0, zHi=0, wHi=0, uHi=0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
w = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
u = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singlePerlin((int)state, x, y, z, w, u);
if(result == (min = Math.min(min, result)))
{
xLo = x; yLo = y; zLo = z; wLo = w; uLo = u;
}
if(result == (max = Math.max(max, result)))
{
xHi = x; yHi = y; zHi = z; wHi = w; uHi = u;
}
}
System.out.println("Preliminary 5D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
for (float e = -0.5f; e <= 0.5f; e += 0x1p-3f) {
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-3f) {
for (float j = -0.5f; j <= 0.5f; j += 0x1p-3f) {
min = Math.min(min, noise.singlePerlin((int)state, xLo + g, yLo + h, zLo + i, wLo + j, uLo + e));
max = Math.max(max, noise.singlePerlin((int)state, xHi + g, yHi + h, zHi + i, wHi + j, uHi + e));
}
}
}
}
}
System.out.println("Better 5D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
}
@Test
public void testRangePerlin6D()
{
Noise noise = new Noise(543212345, 1f, Noise.PERLIN);
long state = 12345678901L;
float x, y, z, w, u, v, result, xLo=0, yLo=0, zLo=0, wLo=0, uLo=0, vLo=0, xHi=0, yHi=0, zHi=0, wHi=0, uHi=0, vHi=0;
float min = 0.5f, max = 0.5f;
for (int i = 0; i < TRIAL_COUNT; i++) {
x = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
y = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
z = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
w = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
u = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
v = (state >> 58) / (1.001f - (((state = (state << 29 | state >>> 35) * 0xAC564B05L) * 0x818102004182A025L >>> 40) * 0x1p-24f));
result = noise.singlePerlin((int)state, x, y, z, w, u, v);
if(result == (min = Math.min(min, result)))
{
xLo = x; yLo = y; zLo = z; wLo = w; uLo = u; vLo = v;
}
if(result == (max = Math.max(max, result)))
{
xHi = x; yHi = y; zHi = z; wHi = w; uHi = u; vHi = v;
}
}
System.out.println("Preliminary 6D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
for (float e = -0.5f; e <= 0.5f; e += 0x1p-3f) {
for (float f = -0.5f; f <= 0.5f; f += 0x1p-3f) {
for (float g = -0.5f; g <= 0.5f; g += 0x1p-3f) {
for (float h = -0.5f; h <= 0.5f; h += 0x1p-3f) {
for (float i = -0.5f; i <= 0.5f; i += 0x1p-3f) {
for (float j = -0.5f; j <= 0.5f; j += 0x1p-3f) {
min = Math.min(min, noise.singlePerlin((int)state, xLo + g, yLo + h, zLo + i, wLo + j, uLo + e, vLo + f));
max = Math.max(max, noise.singlePerlin((int)state, xHi + g, yHi + h, zHi + i, wHi + j, uHi + e, vHi + f));
}
}
}
}
}
}
System.out.println("Better 6D min="+min+",max="+max+",multiplier="+(1f/Math.max(-min, max)));
}
} |
Seventy percent more companies reported spam and malicious infections arrived via social networks in 2009 vs. 2008. By the end of last year, 72% of companies expressed concern that their employees' use of popular social sites could result in a security breach. And 60% of companies now consider Facebook to be the riskiest social network out there.
Those findings, released Monday, come from a survey of 500 companies worldwide conducted by security firm Sophos. They help quantify the rising tide of spam and malicious infections proliferating on Facebook, Twitter, MySpace, Bebo and other such social networks.
As the planet's largest social network, Facebook might naturally be expected to emerge as the No. 1 target of cybercriminals, says Graham Cluley, a senior analyst at Sophos. But he says Facebook has exacerbated matters by asking its members to embrace a new, more granular privacy setting. Cluley demonstrates in this video how the new setting, in effect, authorizes Facebook to expose more of its member-generated content to everyone on the Internet.
Facebook's new privacy setting gives the company leeway to submit more content to Google, Microsoft Bing and Yahoo Search so the search services can incorporate more Facebook content into real-time search results, much as they've begun doing with Twitter microblog postings, says Cluley.
However, the wider release of Facebook members' data "inevitably means more information will be made available to cybercriminals who want to target you or your company for an attack," says Cluley.
Facebook continues to defend its new privacy setting as flexible and easy to change. But privacy advocates continue to criticize the move. And last week the Office of the Privacy Commissioner of Canada launched an investigation into a citizen's complaint about the new settings.
Meanwhile, Sophos' new survey includes extensive analysis about how Facebook, Twitter and other social networks have become like a candy store for data thieves. The fast-morphing Koobface social network worm is a case in point:
Most notably, the notorious Koobface worm family became more diverse and sophisticated in 2009. The sophistication of Koobface is such that it is capable of registering a Facebook account, activating the account by confirming an email sent to a Gmail address, befriending random strangers on the site, joining random Facebook groups, and posting messages on the walls of Facebook friends (often claiming to link to sexy videos laced with malware). Furthermore, it includes code to avoid drawing attention to itself by restricting how many new Facebook friends it makes each day. Koobface's attack vectors broadened, targeting a wide range of sites other than the one that gave it its name (i.e., Facebook). Social networking sites, including MySpace and Bebo, were added to the worm's arsenal in 2008; Tagged and Friendster joined the roster in early 2009; and most recently the code was extended to include Twitter in a growing battery of attacks. It is likely we will see more malware following in the footsteps of Koobface, creating Web 2.0 botnets with the intention of stealing data, displaying fake anti-virus alerts and generating income for hacking gangs.
By Byron Acohido |
Abstract 1122000165: Endovascular Intervention for a Thrombotic Adverse Event During Andexanet Alfa Infusion. Introduction: Andexanet alfa is the only specific reversal agent for factor Xa inhibitors and received FDA approval in 2018. Here we report an early infusion adverse event, with an unfavorable outcome, in a patient with acute intraventricular hemorrhage (IVH) who received andexanet alfa. Methods: A 73-year-old male presented to our emergency department (ED) after he developed sudden onset of severe headache without other associated neurological symptoms. An outpatient brain MRI showed IVH, which remained stable in size (2.4 cm3) on a follow-up head CT performed in our ED. CT angiogram showed a 60% stenosis of the left supraclinoid internal carotid artery. The patient was taking apixaban 5 mg twice daily for atrial fibrillation (last dose 5.5 hours prior to presentation). Results: The anticoagulation was reversed with andexanet alfa, a 400 mg bolus given at 18:30, followed by a 480 mg infusion over 2 hours started at 19:00 (12 hours from the last apixaban dose). At 19:00, he developed left middle cerebral artery (MCA) ischemic stroke symptoms (global aphasia) that resolved with head-of-the-bed flattening. CT perfusion demonstrated a left ICA territory mismatch (342 ml) and a 76 ml core. Shortly after CT perfusion, the patient developed a persistent complete left MCA stroke syndrome with an NIH Stroke Scale (NIHSS) score of 23. The decision was made to perform an emergent cerebral angiogram, which demonstrated a large, fresh thrombus in the left cervical ICA. Thrombectomy was successful with a TICI score of 2B. The patient's neurological status initially improved. However, despite this intervention, the patient developed a large territory infarct. As his neurologic status remained poor, the family withdrew care and the patient died. Conclusions: ANNEXA-A and ANNEXA-R were parallel trials of andexanet alfa for factor Xa inhibitor reversal that demonstrated a transient increase in prothrombotic factors after andexanet alfa infusion. Neither of these phase 3 trials nor the previous phase 2 trials reported a clinical thrombotic event very early during the infusion. The ANNEXA-4 trial (phase 3) enrolled subjects with active major bleeding on a factor Xa inhibitor, and 10% developed a thrombotic event during the 30-day follow-up period. 41% of the thrombotic complications were acute ischemic stroke (AIS), 35% (5 patients) experienced an AIS in the first six days post-administration, and the earliest reported thrombotic event occurred on day 1 post-infusion. Our case report illustrates an early cerebrovascular thrombotic event with a dismal outcome despite timely and effective mechanical reperfusion therapy, which could be due to vessel re-obstruction in the setting of a hypercoagulable state. We aim to make vascular neurologists, neurointensivists and neurosurgeons aware of this possible occurrence when reversing patients with factor Xa-related intracranial hemorrhages.
package router
import (
"encoding/json"
"errors"
"net/http"
"github.com/gofiber/fiber/v2"
"github.com/naiba/solitudes"
"github.com/naiba/solitudes/pkg/translator"
"golang.org/x/crypto/bcrypt"
)
func settings(c *fiber.Ctx) error {
	// Render can fail (e.g. a missing template); propagate that error instead of discarding it.
	return c.Status(http.StatusOK).Render("admin/settings", injectSiteData(c, fiber.Map{
		"title": c.Locals(solitudes.CtxTranslator).(*translator.Translator).T("site_settings"),
	}))
}
type settingsRequest struct {
SiteTitle string `json:"site_title,omitempty"`
SiteDesc string `json:"site_desc,omitempty"`
WxpusherAppToken string `json:"wxpusher_app_token,omitempty"`
WxpusherUID string `json:"wxpusher_uid,omitempty"`
MailServer string `json:"mail_server,omitempty"`
MailPort int `json:"mail_port,omitempty"`
MailUser string `json:"mail_user,omitempty"`
MailPassword string `json:"mail_password,omitempty"`
MailSSL bool `json:"mail_ssl,omitempty"`
Akismet string `json:"akismet,omitempty"`
SiteDomain string `json:"site_domain,omitempty"`
SiteKeywords string `json:"site_keywords,omitempty"`
SiteHeaderMenus string `json:"site_header_menus,omitempty"`
SiteFooterMenus string `json:"site_footer_menus,omitempty"`
SiteTheme string `json:"site_theme,omitempty"`
SiteHomeTopContent string `json:"site_home_top_content,omitempty"`
SiteHomeBottomContent string `json:"site_home_bottom_content,omitempty"`
Email string `json:"email,omitempty" validate:"email"`
Nickname string `json:"nickname,omitempty" validate:"trim"`
OldPassword string `json:"old_password,omitempty" validate:"trim"`
NewPassword string `json:"new_password,omitempty" validate:"trim"`
}
func settingsHandler(c *fiber.Ctx) (err error) {
	// Persist the updated configuration when the handler returns, without masking an earlier error.
	defer func() {
		if saveErr := solitudes.System.Config.Save(); saveErr != nil && err == nil {
			err = saveErr
		}
	}()
	var sr settingsRequest
	if err = c.BodyParser(&sr); err != nil {
		return err
	}
solitudes.System.Config.Site.SpaceName = sr.SiteTitle
solitudes.System.Config.Site.SpaceDesc = sr.SiteDesc
solitudes.System.Config.WxpusherAppToken = sr.WxpusherAppToken
solitudes.System.Config.WxpusherUID = sr.WxpusherUID
solitudes.System.Config.Email.Host = sr.MailServer
solitudes.System.Config.Email.Port = sr.MailPort
solitudes.System.Config.Email.User = sr.MailUser
solitudes.System.Config.Email.Pass = sr.MailPassword
solitudes.System.Config.Email.SSL = sr.MailSSL
solitudes.System.Config.Akismet = sr.Akismet
solitudes.System.Config.Site.Domain = sr.SiteDomain
solitudes.System.Config.Site.SpaceKeywords = sr.SiteKeywords
solitudes.System.Config.User.Nickname = sr.Nickname
solitudes.System.Config.User.Email = sr.Email
err = json.Unmarshal([]byte(sr.SiteHeaderMenus), &solitudes.System.Config.Site.HeaderMenus)
if err != nil {
return err
}
err = json.Unmarshal([]byte(sr.SiteFooterMenus), &solitudes.System.Config.Site.FooterMenus)
if err != nil {
return err
}
solitudes.System.Config.Site.Theme = sr.SiteTheme
solitudes.System.Config.Site.HomeTopContent = sr.SiteHomeTopContent
solitudes.System.Config.Site.HomeBottomContent = sr.SiteHomeBottomContent
if len(sr.OldPassword) > 0 && len(sr.NewPassword) > 0 {
if bcrypt.CompareHashAndPassword([]byte(solitudes.System.Config.User.Password), []byte(sr.OldPassword)) != nil {
return errors.New("invalid email or password")
}
		// A cost below bcrypt.MinCost silently falls back to bcrypt.DefaultCost; request it explicitly.
		b, err := bcrypt.GenerateFromPassword([]byte(sr.NewPassword), bcrypt.DefaultCost)
if err != nil {
return err
}
solitudes.System.Config.User.Password = string(b)
}
return nil
}
|
/*
*
* * Copyright 2020 New Relic Corporation. All rights reserved.
* * SPDX-License-Identifier: Apache-2.0
*
*/
package redis.clients.jedis.commands;
import com.newrelic.api.agent.weaver.SkipIfPresent;
// only exists in jedis 3.0+
@SkipIfPresent(originalName = "redis.clients.jedis.commands.ProtocolCommand")
public class Skip_ProtocolCommand {
}
|
Alissa Evans’ experience with stress stems primarily from her inability to definitively choose a major, a recently received D that taints her otherwise mediocre GPA and her complete and utter confusion regarding the abstract concept commonly referred to as her “future.” In the midst of a mid-college crisis, the Daily Bruin columnist decided to try a different stress-relieving activity every other week of winter quarter and chronicled her quest for mental homeostasis in Stress Less.
An average student’s daily academic routine likely does not include painting.
However, a study published in Art Therapy, the official journal of the American Art Therapy Association, found that 45 minutes of creative activity, regardless of talent or artistic experience, significantly reduced the body's production of the stress hormone cortisol. Over break, the mere idea of winter quarter threatened to shatter my increasingly fragile academic resolve, and so blind trust in science prompted me to try canvas painting as a means of stress relief.
Stress has been a dear friend of mine since the hormone-ridden days of early high school. While juggling a part-time job at a cafe, year-round soccer, an excessive number of Advanced Placement classes and a practically nonexistent social life, meltdowns were commonplace and a constant fear of failure was the norm. Unfortunately, my stress levels only increased in college, during which I found that a single test can alter the entirety of my future.
My artistic career began in a first grade art class where I consumed more Elmer’s glue than the U.S. school system should have allowed. As I grew older, I moved on to a more sophisticated palate by eating paint, and then to actually painting, which became a hobby. While far from an art connoisseur, I try to channel the minuscule fraction of artistic ability inherited from my great-grandfather, an experienced painter, into decorations for my room and the occasional cheap but meaningful gift.
The aim of my experiment, however, was not to create a flawless piece radiating professional artistry, but simply to use each stroke of the paintbrush to ease my overly stressed brain. And once I accepted that objective, it worked.
I practically crawled into my dorm room after a seemingly endless first day of class, baggy grey sweatpants pulled up past my belly button and hair in a wildly tangled knot on top of my head. I rubbed my dark under-eye circles – a reminder that I haven’t slept in four days – before transforming my cluttered desk into a makeshift painting studio.
As the soothing melody of Busta Rhymes’ “Calm Down” pervaded the room, I tried to transition my thoughts from distributed artificial intelligence to art and aesthetics.
My stress resurfaced when I was suddenly confronted by the overwhelming reality that I had no idea what to paint. While I would normally resort to plagiarizing someone else’s idea on Pinterest, it felt important to shift the emphasis from what I was painting to the act of painting itself. I brainstormed for about 30 minutes, came up with an idea – albeit novice and unoriginal – and I painted it. Simple.
With a mindset less focused on the quality of the art, it became notably easier to get lost in the long brush strokes, the vibrant colors, the texture of the paint and the bumpy surface of the tightly stretched canvas.
When a slip of the hand smeared the carefully blended colors and nearly convinced me to throw the painting in the trash and drop out of school, the gentle voice of Bob Ross reminded me that there are no mistakes in art, only happy accidents. And although I might expect Bob Ross to rethink his assertion after witnessing my messy artistic process, which includes painting over mistakes every five minutes, I made a conscious effort to be content with any and all outcomes.
By concentrating my energy on something other than nominal scales and 19th-century philosophy, I was able to activate more creative mental processes and give the technical side of my brain a much-needed rest. I would be lying if I said the multitude of assignments I had to do never crossed my mind. But the act of painting, of creating, made the inevitable cram session at 3 a.m. less daunting, and even encouraged more effective and concentrated studying.
The finished product could be marketed as a loose interpretation of Vincent van Gogh circa 1890, if van Gogh was blind and talentless. And while the painting confirmed that I should never pursue a career in art, it served its purpose of redirecting my focus and resetting my brain. I can honestly say the positive impact that the two-hour painting session had on my mental state was well worth the minor setback in studying.
That being said, the stress-relieving effects, although palpable while painting, did not extend much beyond that single day. In order for painting, or any other creative activity, to serve as an effective, long-term stress reducer, small blocks of time should be dedicated to it as often as possible. I painted again three days after the first session, and although I had significantly less time to devote, the familiar calming effect was ever-present.
Painting might feel unpleasant to those who believe they lack the necessary skills or the creative inspiration. However, once I was able to let go of my inhibitions and simply enjoy the process, painting acted as a powerful stress reliever. |
import math
import torch
from .optimizer import Optimizer
class SparseAdam(Optimizer):
r"""Implements lazy version of Adam algorithm suitable for sparse tensors.
In this variant, only moments that show up in the gradient get updated, and
only those portions of the gradient get applied to the parameters.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
.. _Adam\: A Method for Stochastic Optimization:
https://arxiv.org/abs/1412.6980
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
if not 0.0 < lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 < eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps)
super(SparseAdam, self).__init__(params, defaults)
@torch.no_grad()
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
with torch.enable_grad():
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad
if not grad.is_sparse:
raise RuntimeError('SparseAdam does not support dense gradients, please consider Adam instead')
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
state['step'] += 1
grad = grad.coalesce() # the update is non-linear so indices must be unique
grad_indices = grad._indices()
grad_values = grad._values()
size = grad.size()
def make_sparse(values):
constructor = grad.new
if grad_indices.dim() == 0 or values.dim() == 0:
return constructor().resize_as_(grad)
return constructor(grad_indices, values, size)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
# Decay the first and second moment running average coefficient
# old <- b * old + (1 - b) * new
# <==> old += (1 - b) * (new - old)
old_exp_avg_values = exp_avg.sparse_mask(grad)._values()
exp_avg_update_values = grad_values.sub(old_exp_avg_values).mul_(1 - beta1)
exp_avg.add_(make_sparse(exp_avg_update_values))
old_exp_avg_sq_values = exp_avg_sq.sparse_mask(grad)._values()
exp_avg_sq_update_values = grad_values.pow(2).sub_(old_exp_avg_sq_values).mul_(1 - beta2)
exp_avg_sq.add_(make_sparse(exp_avg_sq_update_values))
# Dense addition again is intended, avoiding another sparse_mask
numer = exp_avg_update_values.add_(old_exp_avg_values)
exp_avg_sq_update_values.add_(old_exp_avg_sq_values)
denom = exp_avg_sq_update_values.sqrt_().add_(group['eps'])
del exp_avg_update_values, exp_avg_sq_update_values
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
p.add_(make_sparse(-step_size * numer.div_(denom)))
return loss
|
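# A minimal usage sketch, assuming this module is importable as part of its package (it relies
# on the relative import of Optimizer above); the layer sizes and the toy objective below are
# illustrative only. SparseAdam only accepts parameters that receive sparse gradients, which in
# practice means embedding tables built with sparse=True.
if __name__ == "__main__":
    import torch.nn as nn

    embedding = nn.Embedding(num_embeddings=10000, embedding_dim=64, sparse=True)
    optimizer = SparseAdam(embedding.parameters(), lr=1e-3)

    indices = torch.randint(0, 10000, (32,))  # a batch of token ids
    loss = embedding(indices).pow(2).mean()   # toy objective; the embedding weight gets a sparse grad
    loss.backward()
    optimizer.step()                          # lazy update: only the rows indexed above are touched
    optimizer.zero_grad()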
Human mitochondrial DNA diversity in an archaeological site in al-Andalus: genetic impact of migrations from North Africa in medieval Spain. Mitochondrial DNA sequences and restriction fragment polymorphisms were retrieved from three Islamic 12th-13th century samples of 71 bones and teeth (with >85% efficiency) from Madinat Baguh (today called Priego de Cordoba, Spain). Compared with 108 saliva samples from the present population of the same area, the medieval samples show a higher proportion of sub-Saharan African lineages that can only partially be attributed to the historic Muslim occupation. In fact, the unique sharing of transition 16175, in L1b lineages, with Europeans, instead of Africans, suggests a more ancient arrival to Europe from Africa. The present day Priego sample is more similar to the current south Iberian population than to the medieval sample from the same area. The increased gene flow in modern times could be the main cause of this difference. |
#include <stdio.h>
#include <stdlib.h>
// Stack node
typedef struct stack_node
{
int data;
struct stack_node *next;
} StackNode;
// Linked-list stack
typedef struct stack_list
{
int count;
StackNode *top;
} StackList;
// Create a stack
StackList *CreateStackList()
{
StackList *stack;
stack = (StackList *)malloc(sizeof(StackList));
if (!stack)
{
printf("分配内存空间失败");
exit(0);
}
stack->count = 0;
stack->top = NULL;
return stack;
}
// Push a value onto the stack
int Push(StackList *stack, int value)
{
if (!stack)
{
printf("栈未创建");
exit(0);
}
StackNode *node;
node = (StackNode *)malloc(sizeof(StackNode));
if (!node)
{
printf("分配内存空间失败");
exit(0);
}
node->data = value;
node->next = stack->top;
stack->top = node;
stack->count++;
return 0;
}
// Pop the top element off the stack
int Pop(StackList *stack)
{
if (!stack)
{
printf("栈未创建");
exit(0);
}
if (stack->count == 0)
{
printf("栈为空");
exit(0);
}
StackNode *temp;
temp = stack->top;
stack->top = temp->next;
stack->count--;
free(temp);
return 0;
}
// Print the stack from top to bottom
void Display(StackList *stack)
{
if (!stack)
{
printf("栈未创建");
exit(0);
}
if (stack->count == 0)
{
printf("栈为空");
exit(0);
}
StackNode *temp;
temp = stack->top;
while (temp->next != NULL)
{
printf("stack.data=%d\n", temp->data);
temp = temp->next;
}
printf("stack.data=%d\n", temp->data);
}
int main(void)
{
StackList *stack = CreateStackList();
Push(stack, 1);
Push(stack, 2);
Push(stack, 3);
Display(stack);
Pop(stack);
Display(stack);
    return 0;
}
import React from 'react'
import styled from '@emotion/styled'
import Slider from 'react-slick'
import { getGroupConfig } from '../groupConfigs'
import { useObservable } from 'micro-observables'
import { Button } from './Button'
export const ImagePlayer = () => {
const groupConfig = getGroupConfig('image')!
const { store } = groupConfig
const requests = useObservable(store.requests)
return (
<Container>
<Slider
lazyLoad="ondemand"
speed={0}
nextArrow={<Button>Next</Button>}
prevArrow={<Button>Prev</Button>}
>
{requests.map((request, idx) => (
<Image key={idx} src={request.url} />
))}
</Slider>
</Container>
)
}
const Container = styled.div`
width: 100%;
height: 280px;
color: white;
overflow: hidden;
.slick-slider {
.slick-slide {
height: 280px;
> div {
height: 100%;
}
}
.slick-arrow {
position: absolute;
bottom: 0;
margin: 16px;
z-index: 2;
&.slick-prev {
left: 0;
}
&.slick-next {
right: 0;
}
}
}
`
const Image = styled.div<{ src: string }>`
width: 100%;
height: 100%;
background-image: url(${props => props.src});
background-size: contain;
background-repeat: no-repeat;
background-position: center;
`
|
A proof-of-concept study of the "Employment and Arthritis: Making It Work" program. OBJECTIVE Work disability is a common outcome of inflammatory arthritis (IA), yet few services address employment. We conducted a proof-of-concept study of the "Employment and Arthritis: Making It Work" self-management program aimed at preventing work disability and maintaining at-work productivity in employed people with IA. METHODS The program was developed using the precede-proceed model and self-management concepts. Program goals included modifying risk factors for work disability and enhancing self-management of work problems due to IA, as identified in initial focus groups. The program included a self-learning manual, 5 group sessions, and individual visits with an occupational therapist for an ergonomic assessment and a vocational rehabilitation counselor. It was pilot tested in 2 groups (n = 19) and evaluated over 12 months of followup. RESULTS Participants consisted of 19 employed women with IA. Process evaluation demonstrated feasibility and excellent attendance and use of the self-learning manual. By 1 year, 80% reported increased confidence in requesting job accommodations, 74% had requested an accommodation, and 71% of requested accommodations were implemented. The occupational therapist and vocational rehabilitation counselor visits resulted in recommendations for change in 100% and 74% of participants, respectively, with implementation of some recommended changes in 89% and 63%, respectively. Improvements were observed in self-confidence in managing problems at work, fatigue interference with work, measures of limitations, and at-work productivity. CONCLUSION We developed a novel intervention to prevent work disability in patients with IA, combining self-management group sessions and professional assessments aimed at job retention, which resulted in people making changes to adapt their work to their arthritis, and improved fatigue, self-efficacy, and at-work productivity. |
import { render, cleanup } from '@testing-library/react';
import Footer from '../components/footer/Footer';
beforeEach(cleanup);
describe('<Footer/>', () => {
it('renders footer component', () => {
render(<Footer />);
});
});
|
/**
* Only stores fake statistical measures to test if the
* {@link DefaultThresholdCalculator} uses them properly.
*
* @author Luiz Fernando Oliveira Corte Real
*/
public final class MockSignalStatistics implements SignalStatistics {
private double average;
private double standardDeviation;
private double thirdMoment;
private boolean calledReset = false;
@Override
public double getAverage() {
return this.average;
}
@Override
public double getStandardDeviation() {
return this.standardDeviation;
}
@Override
public double getThirdMoment() {
return this.thirdMoment;
}
@Override
public void update(double value) {
// Does nothing. I don't care about the signal
}
@Override
public void reset() {
this.calledReset = true;
}
/**
* Configures the value that {@link #getAverage()} should return
*
* @param value
* The value for {@link #getAverage()} to return
*/
public void returnsAverage(double value) {
this.average = value;
}
/**
* Configures the value that {@link #getStandardDeviation()} should return
*
* @param value
* The value for {@link #getStandardDeviation()} to return
*/
public void returnsStandardDeviation(double value) {
this.standardDeviation = value;
}
/**
* Configures the value that {@link #getThirdMoment()} should return
*
* @param value
* The value for {@link #getThirdMoment()} to return
*/
public void returnsThirdMoment(double value) {
this.thirdMoment = value;
}
/**
* @return true if the {@link #reset()} method was called at least once
* after the initialization of this object
*/
public boolean resetWasCalled() {
return this.calledReset;
}
} |
export = HtmlMinimizerPlugin;
/** @typedef {import("schema-utils/declarations/validate").Schema} Schema */
/** @typedef {import("webpack").Compiler} Compiler */
/** @typedef {import("webpack").Compilation} Compilation */
/** @typedef {import("webpack").WebpackError} WebpackError */
/** @typedef {import("webpack").Asset} Asset */
/** @typedef {import("jest-worker").Worker} JestWorker */
/** @typedef {import("./utils.js").HtmlMinifierTerserOptions} HtmlMinifierTerserOptions */
/** @typedef {RegExp | string} Rule */
/** @typedef {Rule[] | Rule} Rules */
/**
* @typedef {Object} MinimizedResult
* @property {string} code
* @property {Array<unknown>} [errors]
* @property {Array<unknown>} [warnings]
*/
/**
* @typedef {{ [file: string]: string }} Input
*/
/**
* @typedef {{ [key: string]: any }} CustomOptions
*/
/**
* @template T
* @typedef {T extends infer U ? U : CustomOptions} InferDefaultType
*/
/**
* @template T
* @typedef {InferDefaultType<T> | undefined} MinimizerOptions
*/
/**
* @template T
* @callback MinimizerImplementation
* @param {Input} input
* @param {MinimizerOptions<T>} [minimizerOptions]
* @returns {Promise<MinimizedResult>}
*/
/**
* @template T
* @typedef {Object} Minimizer
* @property {MinimizerImplementation<T>} implementation
* @property {MinimizerOptions<T> | undefined} [options]
*/
/**
* @template T
* @typedef {Object} InternalOptions
* @property {string} name
* @property {string} input
* @property {T extends any[] ? { [P in keyof T]: Minimizer<T[P]>; } : Minimizer<T>} minimizer
*/
/**
* @typedef InternalResult
* @property {string} code
* @property {Array<any>} warnings
* @property {Array<any>} errors
*/
/**
* @template T
* @typedef {JestWorker & { transform: (options: string) => InternalResult, minify: (options: InternalOptions<T>) => InternalResult }} MinimizerWorker
*/
/**
* @typedef {undefined | boolean | number} Parallel
*/
/**
* @typedef {Object} BasePluginOptions
* @property {Rules} [test]
* @property {Rules} [include]
* @property {Rules} [exclude]
* @property {Parallel} [parallel]
*/
/**
* @template T
* @typedef {BasePluginOptions & { minimizer: T extends any[] ? { [P in keyof T]: Minimizer<T[P]> } : Minimizer<T> }} InternalPluginOptions
*/
/**
* @template T
* @typedef {T extends HtmlMinifierTerserOptions
* ? { minify?: MinimizerImplementation<T> | undefined, minimizerOptions?: MinimizerOptions<T> | undefined }
* : T extends any[]
* ? { minify: { [P in keyof T]: MinimizerImplementation<T[P]>; }, minimizerOptions?: { [P in keyof T]?: MinimizerOptions<T[P]> | undefined; } | undefined }
* : { minify: MinimizerImplementation<T>, minimizerOptions?: MinimizerOptions<T> | undefined }} DefinedDefaultMinimizerAndOptions
*/
/**
* @template [T=HtmlMinifierTerserOptions]
*/
declare class HtmlMinimizerPlugin<T = import("html-minifier-terser").Options> {
/**
* @private
* @param {any} warning
* @param {string} file
* @returns {Error}
*/
private static buildWarning;
/**
* @private
* @param {any} error
* @param {string} file
* @returns {Error}
*/
private static buildError;
/**
* @private
* @param {Parallel} parallel
* @returns {number}
*/
private static getAvailableNumberOfCores;
/**
* @param {BasePluginOptions & DefinedDefaultMinimizerAndOptions<T>} [options]
*/
constructor(
options?:
| (BasePluginOptions & DefinedDefaultMinimizerAndOptions<T>)
| undefined
);
/**
* @private
* @type {InternalPluginOptions<T>}
*/
private options;
/**
* @private
* @param {Compiler} compiler
* @param {Compilation} compilation
* @param {Record<string, import("webpack").sources.Source>} assets
* @param {{availableNumberOfCores: number}} optimizeOptions
* @returns {Promise<void>}
*/
private optimize;
/**
* @param {Compiler} compiler
* @returns {void}
*/
apply(compiler: Compiler): void;
}
declare namespace HtmlMinimizerPlugin {
export {
htmlMinifierTerser,
Schema,
Compiler,
Compilation,
WebpackError,
Asset,
JestWorker,
HtmlMinifierTerserOptions,
Rule,
Rules,
MinimizedResult,
Input,
CustomOptions,
InferDefaultType,
MinimizerOptions,
MinimizerImplementation,
Minimizer,
InternalOptions,
InternalResult,
MinimizerWorker,
Parallel,
BasePluginOptions,
InternalPluginOptions,
DefinedDefaultMinimizerAndOptions,
};
}
type Compiler = import("webpack").Compiler;
type BasePluginOptions = {
test?: Rules | undefined;
include?: Rules | undefined;
exclude?: Rules | undefined;
parallel?: Parallel;
};
type DefinedDefaultMinimizerAndOptions<T> =
T extends import("html-minifier-terser").Options
? {
minify?: MinimizerImplementation<T> | undefined;
minimizerOptions?: MinimizerOptions<T> | undefined;
}
: T extends any[]
? {
minify: { [P in keyof T]: MinimizerImplementation<T[P]> };
minimizerOptions?:
| { [P_1 in keyof T]?: MinimizerOptions<T[P_1]> }
| undefined;
}
: {
minify: MinimizerImplementation<T>;
minimizerOptions?: MinimizerOptions<T> | undefined;
};
import { htmlMinifierTerser } from "./utils";
type Schema = import("schema-utils/declarations/validate").Schema;
type Compilation = import("webpack").Compilation;
type WebpackError = import("webpack").WebpackError;
type Asset = import("webpack").Asset;
type JestWorker = import("jest-worker").Worker;
type HtmlMinifierTerserOptions = import("./utils.js").HtmlMinifierTerserOptions;
type Rule = RegExp | string;
type Rules = Rule[] | Rule;
type MinimizedResult = {
code: string;
errors?: unknown[] | undefined;
warnings?: unknown[] | undefined;
};
type Input = {
[file: string]: string;
};
type CustomOptions = {
[key: string]: any;
};
type InferDefaultType<T> = T extends infer U ? U : CustomOptions;
type MinimizerOptions<T> = InferDefaultType<T> | undefined;
type MinimizerImplementation<T> = (
input: Input,
minimizerOptions?: MinimizerOptions<T>
) => Promise<MinimizedResult>;
type Minimizer<T> = {
implementation: MinimizerImplementation<T>;
options?: MinimizerOptions<T> | undefined;
};
type InternalOptions<T> = {
name: string;
input: string;
minimizer: T extends any[]
? { [P in keyof T]: Minimizer<T[P]> }
: Minimizer<T>;
};
type InternalResult = {
code: string;
warnings: Array<any>;
errors: Array<any>;
};
type MinimizerWorker<T> = Worker & {
transform: (options: string) => InternalResult;
minify: (options: InternalOptions<T>) => InternalResult;
};
type Parallel = undefined | boolean | number;
type InternalPluginOptions<T> = BasePluginOptions & {
minimizer: T extends any[]
? { [P in keyof T]: Minimizer<T[P]> }
: Minimizer<T>;
};
import { minify } from "./minify";
import { Worker } from "jest-worker";
|
The holidays are upon us. Thanksgiving gave way to Black Friday. Black Friday set the stage for Christmas. And we slackers out there---those of us who don't have the moxie or the guts to get up that early and go stand in line in the cold---still have our shopping ahead of us.
For some, this means deciding what to buy out of a bevy of choices. For shoppers and families looking to introduce a new video game console to their home, the choices are more varied than ever.
You could go with a last-gen offering like the PS3 or the Xbox 360. I give reasons why you should buy a PS3 over an Xbox 360 here, and reasons why you should buy an Xbox 360 over a PS3 here.
But today, I'm going to argue in favor of Nintendo's latest and greatest (and worst-named) video game console: the Wii U. This small box of gaming goodness packs a much bigger punch than it did last holiday season, and it's the best new-gen gaming machine on the market today in terms of pure gaming value.
Here are six reasons why.
1. It's the cheapest of the three.
Of the three new-gen consoles, the Wii U is by far the cheapest. For $299.99 you can snag a 32GB console with Nintendo Land and Super Mario 3D World.
That's a great deal. Even with the Xbox One Assassin's Creed Unity bundle at just $349.99 right now, the Wii U is cheaper and comes with two games instead of one. Update: The Xbox One bundle actually comes with two games also, as it includes Assassin's Creed IV as well. Meanwhile, the PS4 Lego Batman/LittleBigPlanet 3 bundle is going for $399.99.
2. You don't have to pay a monthly subscription to access content.
Of course, the Wii U is also cheaper thanks to Nintendo not charging for its online services. No app or online multiplayer is hidden behind a monthly subscription.
Compare this to Xbox Live Gold ($59.99/year) or PlayStation Plus ($49.99/year) and the pennies start to add up. Granted both those services do offer nice perks, but the Wii U's online is the only one of the three that is 100% free.
3. The best and the most exclusives.
Nevermind all that. You pay for what you get, right? Well the cheapest console also happens to have the best exclusives of the three---at least at the moment. Granted, the Wii U came out a year before its rivals, but that first year was hardly its best in terms of video games.
Now the Wii U boasts Super Mario 3D World, Pikmin 3, The Wonderful 101, Donkey Kong Country: Tropical Freeze, a remastered Legend of Zelda: Wind Waker, Mario Kart 8, Bayonetta 2 and Super Smash Bros. for Wii U. And that's just off the top of my head. Upcoming exclusives include the new Xenoblade game, a brand new Legend of Zelda, Splatoon and the Shin Megami Tensei/Fire Emblem crossover, just to name a few. Neither the Xbox One nor the PS4 can boast so many video games you can't play anywhere else.
Better yet, almost all of these games are simply excellent. Mario Kart 8 is my favorite in the series. Tropical Freeze is a hugely challenging platformer. Bayonetta 2 may be the best action game released in years. And Super Smash Bros. for Wii U is a fantastic brawler.
4. It's fantastic for couch co-op and competitive play.
While games like Mario Kart 8 and Super Smash Bros. can be played online, they're also really terrific to play with friends in person. Up to four players can play Mario Kart 8 and up to eight players can play the new Smash. Donkey Kong: Tropical Freeze has two-player co-op, and Super Mario 3D World has up to four.
I don't spend nearly as much time playing games with friends in person on any other system, mainly because those other systems are targeted largely at online play. There are a handful of great co-op games on Xbox One and PS4---Diablo III springs to mind, and Call of Duty can be fun---but the Wii U dominates this type of old-fashioned, in-person play.
5. You can play all your old Wii games.
The Wii U isn't limited to just Wii U games, either. For Wii owners, or for those of us who missed out on the Wii altogether, the Wii U is fully backwards compatible with Wii games. This is a nice perk, and one that Microsoft and Sony left out of their new-gen systems (for perfectly understandable reasons, but still...)
This means you can brush up on the various first and second-party exclusives you may have missed, including entries in the Metroid series, Mario Kart Wii, Super Mario Galaxy, a couple excellent Zelda titles, and more. At least a handful of terrific not-to-be-missed RPGs for the Wii are also available on Wii U, including Xenoblade and The Last Story.
(As I wrote this post I tracked down a copy of Fire Emblem: Radiant Dawn to play on my Wii U, actually.)
6. It's great for families.
This obviously won't apply to everyone, but if you have kids it's hard to beat the Wii U. The touchscreen gamepad makes navigating the system a breeze for kids, many of whom are already accustomed to touchscreen interfaces. The games aren't all kid-friendly (I'm looking at you Bayonetta) but many of them are.
This isn't to say they're easy games. Few games are as challenging as Donkey Kong: Tropical Freeze these days. But titles like Super Mario 3D World and Mario Kart 8 are wonderful for kids, and of course there's third-party stuff like Skylanders available also.
These are games you can play with your kids, which is even better. While my family dips into other consoles as well, we spend far and away the most hours playing Nintendo games together. As they get older, my kids can graduate from Mario and move on to tougher games like Donkey Kong and the upcoming Star Fox game which, one presumes, will be mind-bogglingly hard.
~
There are drawbacks to the machine, of course. You won't find graphics quite as high-def as the competition (though you'll soon appreciate the fine-tuning and care that go into Nintendo titles). The 32GB of hard drive space is extremely limiting compared to the 500GB on Xbox One and PS4, though an external hard drive fixes that problem easily enough (at extra cost). And there is a dearth of third-party content, unfortunately, as major publishers find stronger sales on other consoles. Call of Duty and Assassin's Creed fans would do better elsewhere.
Still, the Wii U is a terrific deal with terrific content this holiday season, especially for families and fans of the sort of weird creativity that goes into so many Nintendo games.
Do you have a Wii U at home already? If so, what are your thoughts on Nintendo's current home console? |
Champneys, one of the country’s oldest health spas, is put under the microscope in this new ITV documentary as it undergoes a challenging period of change.
Ben Fogle returns to Britain’s biggest natural harbour, Poole. By land, sea and air, Ben will explore all aspects of the place where he grew up.
Alan Titchmarsh and the team are heading west this week to make over a garden for the Woods family in Bideford in Devon.
In Diamond Geezers and Gold Dealers, cameras have been given exclusive, behind the scenes access to some of the distinctive characters who inhabit Hatton Garden, London.
Davina McCall and Nicky Campbell present a brand new series of the Bafta award winning series Long Lost Family, which traces and reunites families who have been apart for most of their lives.
Welcome to Norland College, the quintessentially British 120-year-old childcare training college in Bath which turns its students into elite 21st Century Mary Poppins-style nannies.
The team help Karan Waller who’d like a new garden to be a way of saying thank you to her family for all the support they’ve given her over the last few years as she battles a brain tumour.
Cruise Director Sam has stiff competition from her power hungry Deputy, Dan. In a desperate attempt to become Cruise Director himself, he decides to create a Game Show to win the applause of his boss - but will his gamble pay off? |
"""
utils.py
========
GSadjust utility functions
--------------------------------------------------------------------------------
This software is preliminary, provisional, and is subject to revision. It is
being provided to meet the need for timely best science. The software has not
received final approval by the U.S. Geological Survey (USGS). No warranty,
expressed or implied, is made by the USGS or the U.S. Government as to the
functionality of the software and related material nor shall the fact of release
constitute any such warranty. The software is provided on the condition that
neither the USGS nor the U.S. Government shall be held liable for any damages
resulting from the authorized or unauthorized use of the software.
"""
def index_or_none(l, i):
    """Return the index of item i in list l, or None if i is not in l.

    Example: index_or_none(['a', 'b', 'c'], 'b') returns 1.
    """
    if i not in l:
        return None
    return l.index(i)
def init_cal_coeff_dict(obstreemodel):
"""
Initiate dict for storing meter calibration coefficients.
Parameters
----------
obstreemodel : ObsTreeModel
Returns
-------
dict
key: Meter (str), value: float
"""
try:
meter_list = {}
for i in range(obstreemodel.invisibleRootItem().rowCount()):
survey = obstreemodel.invisibleRootItem().child(i)
for ii in range(survey.rowCount()):
loop = survey.child(ii)
if loop.meter not in meter_list:
meter_list[loop.meter] = 1.000
return meter_list
except Exception:
return None
def init_station_coords_dict(obstreemodel):
"""
Stores a single set of coordinates for each station with the obsTreeModel
object. The coordinates of the last
Station in the Survey > Loop > Station hierarchy will be used.
"""
station_coords = dict()
for i in range(obstreemodel.invisibleRootItem().rowCount()):
survey = obstreemodel.invisibleRootItem().child(i)
for ii in range(survey.rowCount()):
loop = survey.child(ii)
for iii in range(loop.rowCount()):
station = loop.child(iii)
try:
station_coords[station.station_name] = (
station.long[0],
station.lat[0],
station.elev[0],
)
except Exception:
station_coords[station.station_name] = (0, 0, 0)
return station_coords
|
<reponame>OutlierVentures/BuyCoPoc
import express = require("express");
import userModel = require('../models/userModel');
import configModel = require('../models/configModel');
import serviceFactory = require('../services/serviceFactory');
import contractService = require('../services/contractService');
import fs = require('fs');
import web3plus = require('../node_modules/web3plus/lib/web3plus');
import contractInterfaces = require('../contracts/contractInterfaces');
import tools = require('../lib/tools');
import _ = require('underscore');
import Q = require('q');
import { Promise } from 'q';
/**
* Controller for migration between versions.
*/
export class MigrationController {
config: configModel.IApplicationConfig;
constructor() {
this.config = serviceFactory.getConfiguration();
}
update = (req: express.Request, res: express.Response) => {
var promises = new Array<Q.Promise<string>>();
// Ensure proposal registry contract
var registryCode: string;
try {
registryCode = web3plus.web3.eth.getCode(this.config.ethereum.contracts.proposalRegistry);
}
catch (ex) {
}
if (!registryCode || registryCode == "0x") {
promises.push(this.deployRegistry());
} else {
var deferShowRegistry = Q.defer<any>();
promises.push(deferShowRegistry.promise);
serviceFactory.getContractService()
.then(cs => {
if (cs.checkContractsVersion())
deferShowRegistry.resolve({ "address": this.config.ethereum.contracts.proposalRegistry });
else {
console.log("Version mismatch. Deploying new registry.");
return this.deployRegistry();
}
})
.then(newRegistryAddress => {
if(!deferShowRegistry.promise.isFulfilled())
deferShowRegistry.resolve({ "address": newRegistryAddress });
})
.catch(err => {
// An error occurred, just return it.
deferShowRegistry.reject(err);
});
}
Q.all(promises)
.then(function (results) {
res.status(200).json({
"status": "Ok",
"message": "Everything is up to date",
"results": results[0],
});
})
.catch(function (err) {
res.status(500).json(
{
"status": "Error",
"error": err
});
});
}
private deployRegistry(): Q.Promise<string> {
return Promise<string>((resolve, reject) => {
web3plus.deployContractFromFile("ProposalRegistry.sol", "ProposalRegistry", true, function (deployErr, deployRes) {
if (deployErr) {
reject(deployErr);
return;
}
// Return the contract address so it can be added to the configuration file.
// COULD DO: write config file here, or provide result values in a
// format that can be easily incorporated in the config file.
// ... or use a/the namereg contract...
console.log("MigrationController.update", "ProposalRegistry deployed at " + deployRes.address);
resolve(deployRes.address);
});
});
}
/**
* Seed the smart contracts with some test data.
*/
seedTestData = (req: express.Request, res: express.Response) => {
// Ensure some proposals
// Load the registry contract.
var contractService: contractService.ContractService;
var proposalContract: contractInterfaces.IProposalContract;
serviceFactory.getContractService()
.then(cs=> {
contractService = cs;
var deferAddTestData = Q.defer<string>();
var promises = new Array<Q.Promise<string>>();
promises.push(deferAddTestData.promise);
//var proposals = JSON.parse(fs.readFileSync('../client/data/proposals.json', 'utf8'));
// proposals.forEach(proposal
contractService.registryContract.addProposal("iPhone 6S", "Electronics", "Mobile phone",
15000, "2016-02-01", "2016-04-01", { gas: 2500000 });
contractService.registryContract.addProposal("OnePlus X", "Electronics", "Mobile phone",
10000, "2016-03-10", "2016-05-01", { gas: 2500000 });
contractService.registryContract.addProposal("Canon EOS 5D Mark III", "Electronics", "Camera",
40000, "2016-04-01", "2016-05-01", { gas: 2500000 });
contractService.registryContract.addProposal("Ethiopia Adado Coop", "Food and drink", "Coffee",
4, "2016-03-01", "2016-05-01", { gas: 2500000 });
return contractService.registryContract.addProposal("FTO Guatemala Huehuetenango", "Food and drink", "Coffee",
4, "2016-03-02", "2016-05-02", { gas: 2500000 });
}, err=> {
res.status(500).json({
"error_location": "loading registry",
"error": err,
});
return null;
})
.then(web3plus.promiseCommital)
.then(function addBackers(tx) {
// TODO: get the proposal by... generated ID? Now this always gets the first
// proposal, even if there are multiple.
// Use transaction hash.
var newProposalAddress = contractService.registryContract.proposals(1);
return contractService.getProposalContractAt(newProposalAddress);
})
.then(pc => {
proposalContract = pc;
proposalContract.back(15, tools.newGuid(true), { gas: 2500000 });
proposalContract.back(20, tools.newGuid(true), { gas: 2500000 });
proposalContract.back(35, tools.newGuid(true), { gas: 2500000 });
proposalContract.back(45, tools.newGuid(true), { gas: 2500000 });
return proposalContract.back(55, tools.newGuid(true), { gas: 2500000 });
})
.then(web3plus.promiseCommital)
.then(function finish(tx) {
res.status(200).json({
"status": "Ok",
"message": "Test data added",
"results": tx,
});
}, err => {
res.status(500).json({
"error_location": "adding backers",
"error": err,
});
return null;
});
}
}
|
Human immune response to cationized proteins. I. Characterization of the in vitro response to cationized diphtheria toxoid. Cationization of proteins, i.e., increasing net positive charge by the substitution of carboxyl groups with positively charged residues, has been reported to enhance protein immunogenicity in animal model systems. In the present study, we have investigated the effect of cationization on the in vitro cell-mediated immune response of human mononuclear cells to diphtheria toxoid. A series of cationized DT preparations were generated by covalent modification with ethylenediamine, with pIs ranging from 4.6 to > 9.3, and tested for their ability to induce proliferation of normal human peripheral blood mononuclear cells. Cationized DT (cDT) was found to induce an antigen-specific, augmented proliferative response, relative to native antigen, which was directly proportional to the degree of cationization. Further characterization of the response to cDT demonstrated that proliferative responses could be detected considerably earlier, and typically at much lower antigen concentrations, than the response to native DT; the response was dependent on HLA-DR; production of a number of cytokines, sp. IL-1 beta, IL-2, and IFN-gamma, was also elevated in cDT-stimulated cultures; and the enhanced proliferative response to cDT could be attributed to CD4+ helper T cells. These results demonstrate that cationization of proteins enhances the ability to generate a cell-mediated immune response in humans and suggest that cationization may have utility in the design of more effective carrier proteins for human vaccines. |
#include "2DGraph.h"
#ifdef PIXEL_COUNTER
unsigned long PixelCount;
#endif // #ifdef PIXEL_COUNTER
namespace Graphics
{
namespace Graphics2D
{
// struct FRONTBUFFER
void FRONTBUFFER::Initialize(unsigned short Maxx, unsigned short Maxy)
{
(*this).Maxx=Maxx;
(*this).Maxy=Maxy;
unsigned long Prod=(unsigned long)Maxx*(unsigned long)Maxy*3;
(*this).MaxGran=(unsigned short)(Prod>>16)+1;
(*this).LastGranLen=Prod&0x0000ffff;
if(!LastGranLen)
{
MaxGran--;
LastGranLen=0x00010000;
}
}
// class GRAPH
void GRAPH::Initialize(unsigned short Mode)
{
Close();
switch(Mode)
{
case(0x10f): // 320x200, 24-bit
BackBuffer.Initialize(320, 200, &SURFACE::_tol0x10f);
FrontBuffer.Initialize(320, 200);
break;
#ifdef EXTEND_MODES
case(0x0): // Custom mode
BackBuffer.Initialize(1600, 1200, &SURFACE::_st_tol);
break;
case(0x112): // 640x480, 24-bit
BackBuffer.Initialize(640, 480, &SURFACE::_tol0x112);
FrontBuffer.Initialize(640, 480);
break;
case(0x115): // 800x600, 24-bit
BackBuffer.Initialize(800, 600, &SURFACE::_tol0x115);
FrontBuffer.Initialize(800, 600);
break;
case(0x118): // 1024x768, 24-bit
BackBuffer.Initialize(1024, 768, &SURFACE::_tol0x118);
FrontBuffer.Initialize(1024, 768);
break;
case(0x11a): // 1280x1024, 24-bit (Not Supported)
BackBuffer.Initialize(1280, 1024, &SURFACE::_tol0x11a);
FrontBuffer.Initialize(1280, 1024);
break;
case(0x11f): // 1600x1200, 24-bit (Not Supported)
BackBuffer.Initialize(1600, 1200, &SURFACE::_tol0x11f);
FrontBuffer.Initialize(1600, 1200);
break;
#endif // #ifdef EXTEND_MODES
default:
Log.Message("Unsuported Graphics Mode.");
exit(1);
break;
}
BackBuffer.Clear(0x00);
Target=&BackBuffer;
#ifdef EXTEND_MODES
if(Mode)
#endif // #ifdef EXTEND_MODES
{
// Try 0x62 for GeForce cards
SetGraphicsMode(Mode);
Log.Message("Entered graphics mode.");
}
}
void GRAPH::Close()
{
if(BackBuffer.Data!=NULL)
{
BackBuffer.Close();
Log.Message("Closed Graph.");
}
}
void GRAPH::SetGraphicsMode(unsigned short Mode)
{
if(_GR_SetGraphicsMode(Mode)!=0x4f)
{
Log.Message("Unable to set graphics mode.");
exit(1);
}
(*this).Mode=Mode;
Log.Message("Switched to graphics mode ", (long)Mode);
}
void GRAPH::WaitVerticalRetrace()
{
while((inp(0x03da)&0x08));
while(!(inp(0x03da)&0x08));
}
void GRAPH::CopyBackBuffer()
{
unsigned long Addr=(unsigned long)BackBuffer.Data;
uint8 i;
FrontBuffer.MaxGran--;
for(i=0;i<FrontBuffer.MaxGran;i++)
{
_GR_SelectGranule(i);
memcpy(FrontBuffer.Data, (void *)Addr, 65536);
Addr+=65536;
}
_GR_SelectGranule(FrontBuffer.MaxGran);
FrontBuffer.MaxGran++;
memcpy(FrontBuffer.Data, (void *)Addr, FrontBuffer.LastGranLen);
}
void GRAPH::ClearScreen(uint8 Byte)
{
uint8 i;
FrontBuffer.MaxGran--;
for(i=0;i<FrontBuffer.MaxGran;i++)
{
_GR_SelectGranule(i);
memset(FrontBuffer.Data, Byte, 65536);
}
_GR_SelectGranule(FrontBuffer.MaxGran);
FrontBuffer.MaxGran++;
memset(FrontBuffer.Data, Byte, FrontBuffer.LastGranLen);
}
GRAPH Graph;
// struct LINE
void LINE::Draw(RGB Color)
{
short i, dx, dy, sdx, sdy, dxabs, dyabs, x, y, px, py;
dx=b.x-a.x;
dy=b.y-a.y;
dxabs=abs(dx);
dyabs=abs(dy);
sdx=Sgn(dx);
sdy=Sgn(dy);
x=dyabs>>1;
y=dxabs>>1;
px=a.x;
py=a.y;
Graph.Target->PutPixelCheck(px, py, Color);
if (dxabs>=dyabs)
for(i=0;i<dxabs;i++)
{
y+=dyabs;
if (y>=dxabs)
{
y-=dxabs;
py+=sdy;
}
px+=sdx;
Graph.Target->PutPixelCheck(px, py, Color);
}
else
for(i=0;i<dyabs;i++)
{
x+=dxabs;
if (x>=dyabs)
{
x-=dyabs;
px+=sdx;
}
py+=sdy;
Graph.Target->PutPixelCheck(px, py, Color);
}
}
} // namespace Graphics2D
} // namespace Graphics
|
This Friday, August 4, Bandcamp will donate all of its sales proceeds from the day to the Transgender Law Center, the Bandcamp Daily Staff announced. (Bandcamp makes 15 percent from all digital sales and 10 percent from all merchandise sales on the site.) In a statement, the staff write, “We support our LGBT+ users and staff, and we stand against any person or group that would see them further marginalized.” The benefit comes in the aftermath of Donald Trump’s abrupt declaration that he intends to ban transgender troops from serving in the military. (Military officials have since said that transgender people can serve in the military until further notice.) In addition to the fundraiser, the Bandcamp Daily Staff have recommended a number of records by trans and non-conforming artists, including ANOHNI, Aye Nako, and Mykki Blanco. Find those here.
The Transgender Law Center—a non-profit organization that seeks to “change law, policy, and attitudes so that all people can live safely, authentically, and free from discrimination regardless of their gender identity or expression”—issued its own statement against Trump’s announcement, affirming that they “will not waver in [their] continued organizing and legal resistance to his agenda.”
Earlier this year, Bandcamp held a similar fundraiser when they donated all their proceeds from February 3 to the ACLU; they sold an estimated $1 million worth of music that day.
Read “How to Get Involved in Politics Right Now: Take These Musicians’ Leads” on the Pitch. |
The design of program logic This paper contributes to the understanding of program structure in terms of stability and reliability in a quantitative sense. Distinctions are made between program structure and control structure. The characteristics of a good system are identified qualitatively, and it is apparent that all of the desirable characteristics concern system stability. Stability is defined in terms of resistance to the amplification of changes made to the system. A quantitative analysis is made to measure the quality of program structure. The techniques used here are the method of connectivity matrix and the method of random Markovian process. Program structures are defined as abstract relationships between subportions of a program, and the content of the program or its subportions is left undefined. Using these techniques, a collection of program structures is measured and ranked, and the preference of structures based on the stability criteria is shown. Two case studies, based on models of abstract structures, are presented here to show how the above techniques can be used to pick an appropriate system structure.
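The connectivity-matrix technique mentioned in the abstract can be illustrated with a small sketch. The following is a hypothetical toy example, not the paper's actual metric: a program structure is encoded as a dependency matrix, and stability is approximated by how many modules a change to one module can eventually force to change. The matrix convention, the ripple measure, and all names are assumptions made for illustration.

# Toy illustration of a connectivity-matrix stability measure.
# Assumed convention: adj[i][j] = 1 means a change in module j can force a change in module i.

def reachable_from(adj, start):
    """Set of modules that a change in `start` can eventually affect."""
    affected, frontier = set(), [start]
    while frontier:
        j = frontier.pop()
        for i, row in enumerate(adj):
            if row[j] and i != start and i not in affected:
                affected.add(i)
                frontier.append(i)
    return affected

def average_ripple(adj):
    """Average number of modules affected by a change to a single module; lower suggests a more stable structure."""
    n = len(adj)
    return sum(len(reachable_from(adj, j)) for j in range(n)) / n

# Two structures with the same modules and the same number of edges.
chain = [[0, 0, 0],
         [1, 0, 0],
         [0, 1, 0]]   # module 0 feeds 1, which feeds 2: changes can ripple twice
star = [[0, 0, 0],
        [1, 0, 0],
        [1, 0, 0]]    # module 0 feeds 1 and 2 directly: no chained propagation

print(average_ripple(chain))  # 1.0
print(average_ripple(star))   # about 0.67: the flatter structure amplifies changes less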
Anxious to get a glimpse of the new Droid 2? The phone has arrived in the PCMag Labs, and we're ready to unbox.
An inexpensive printer that covers all bases is the best way to go for most students returning to school. Any of these MFPs should do the trick.
Bottom Line: The Mobile Edge EVO Backpack is great for carrying around your laptop and other accessories you'll need.
In a new YouTube video, Dell's Kevin Andrew shows the Dell Streak running Android 2.1 and discusses an upgrade to Froyo.
We went out and tried the famous death grip on various phones; the result was as expected: it affects more than just the iPhone 4.
Samsung Captivate Launches; AT&T's Top Android Phone?
In June, AT&T launched the iPhone 4. Now, the carrier has followed that up with perhaps its most impressive Android phone to date, the Samsung Captivate.
payLo by Virgin Mobile, via Sprint's prepaid portfolio, was announced today as the latest prepaid wireless service to join the fold.
The greatly anticipated Motorola Droid X is officially available on the Verizon network, beginning today.
Opera is bringing its premiere Web browsing experience to the Android platform with the launch of Opera Mini 5.1.
At InfoComm 2010, Samsung announced the SP-H03, a relatively bright and high-resolution pico projector.
We've compiled the best ways for all you soccer (football) fanatics to watch, listen or track all of the FIFA World Cup 2010 action, whether you're watching online, on your TV, using your phone, or something else.
Highlights from PCMag's tests of 3G and 4G mobile networks in 18 cities across the USA. |
<filename>Chapter5-If Statement/OrdinalNumbers.py
##### 5-11. Ordinal Numbers: Ordinal numbers indicate their position in a list, such as 1st or 2nd .
# Most ordinal numbers end in th, except 1, 2, and 3 .
# Store the numbers 1 through 9 in a list .
# Loop through the list .
# Use an if-elif-else chain inside the loop to print the proper ordinal end- ing for each number .
#Your output should read "1st 2nd 3rd 4th 5th 6th 7th 8th 9th", and each result should be on a separate line .
ordinal_numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9]
for numb in ordinal_numbers:
    if numb == 1:
        print(str(numb) + "st")
    elif numb == 2:
        print(str(numb) + "nd")
    elif numb == 3:
        print(str(numb) + "rd")
    else:
        print(str(numb) + "th")
The present invention relates to a piezoelectric/electrostrictive material made of a porcelain obtained by firing, for example, to a piezoelectric/electrostrictive material used as an actuator or a sensor both assembled as an electromechanical transducer for positioning in precision machine tool or length control of optical path in optical instrument or in valve for flow rate control, etc. More particularly, the present invention relates to a piezoelectric/electrostrictive material suitably used in a very small sensor or a highly integrated very small actuator both used in an element for measurement of liquid property or very small weight.
As piezoelectric/electrostrictive materials, there have been known Pb(Zr,Ti)O3 (hereinafter referred to as PZT), BaTiO3, etc. They are in use in actuators, filters, various sensors, etc. PZT type piezoelectric/electrostrictive materials have been used mainly because they are superior in overall piezoelectric properties.
Pb contained in PZT, etc. is stabilized and essentially generates no problem caused by decomposition or the like. However, there are cases that a Pb-free material is required depending upon its application. Further, since Pb-containing porcelains such as PZT, PLZT [(Pb,La)(Zr,Ti)O3] and the like give rise to vaporization of small amount of Pb in high-temperature firing, they have had, particularly when used in applications requiring a thin or thick film, a problem that they hardly show stable properties owing to the compositional change during firing.
Meanwhile, BaTiO3 contains no Pb and offers a promising material for such a need. BaTiO3 viewed as a piezoelectric/electrostrictive material, however, is inferior in piezoelectric/electrostrictive properties to a PZT type material, and has seldom been used as an actuator or as a sensor.
The present invention has been made in view of the above-mentioned problems of the prior art and aims at providing a BaTiO3-based piezoelectric/electrostrictive material which is superior in piezoelectric/electrostrictive properties to conventional products and which can be suitably used in an actuator or a sensor, and a process for producing such a piezoelectric/electrostrictive material.
A piezoelectric/electrostrictive material, when used in an actuator, is required to show a large displacement to a voltage applied. A study by the present inventor, made on the piezoelectric/electrostrictive properties of BaTiO3-based porcelain indicated that by controlling the fine structure of BaTiO3-based porcelain, particularly the distribution of the crystal grain constituting the BaTiO3-based porcelain, a piezoelectric/electrostrictive material showing a large displacement can be obtained. This finding has led to the completion of the present invention.
According to the present invention, there is provided a piezoelectric/electrostrictive material made of a BaTiO3-based porcelain composed mainly of BaTiO3 and containing CuO and Nb2O5, characterized in that 85% or more of the crystal grains constituting the porcelain are grains having particle diameters of 10 μm or less and the maximum particle diameter of the grains is in a range of 5 to 25 μm.
In the piezoelectric/electrostrictive material of the present invention, at least part of the Ba may be substituted with Sr. Also in the present invention, the Ba/Ti ratio or the (Ba+Sr)/Ti ratio is preferably in a range of 1.001 to 1.01 because such a ratio can easily prevent the growth of abnormal grains occurring during the firing for porcelain formation and can easily control the particle diameters of the crystal grains constituting the porcelain.
According to the present invention, there is also provided a process for producing a piezoelectric/electrostrictive material made of a BaTiO3-based porcelain composed mainly of BaTiO3 and containing CuO and Nb2O5, characterized by weighing individual raw materials so as to give a predetermined composition, mixing and grinding them, calcinating the resulting mixed powder in the air at 850 to 950xc2x0 C., then grinding the resulting calcinated material until the ground material comes to have a specific surface area of 7 m2/g or less, and molding and firing the ground material.
The piezoelectric/electrostrictive material according to the present invention is described in more detail below. The piezoelectric/electrostrictive material according to the present invention is made of a BaTiO3-based porcelain composed mainly of BaTiO3 and containing CuO, Nb2O5, etc.
A specific composition of the porcelain of the present invention may be such wherein BaTiO3 is the main component and part of the Ba, for example, 0.1 to 10 mole %, may be substituted with Sr. Also, the porcelain of the present invention may inevitably contain Zr, Si, Al, etc. in an amount of 0.5% by weight or less based on the total weight. Further, in the BaTiO3-based porcelain of the present invention, the A/B ratio, which is the (Ba+Sr)/Ti ratio, is preferably larger than 1, more preferably in a range of 1.001 to 1.01. Also, to the present porcelain are preferably added Nb2O5 and CuO each in an amount of 0.05 to 0.5% by weight, more preferably each in an amount of 0.1 to 0.3% by weight based on the porcelain components excluding these components. Further, to the porcelain of the present invention may be added rare earth metals and/or transition metals other than the above components, in a total amount of 0.5% by weight or less in terms of their metal oxides. Incidentally, the forms of the components added are ordinarily oxides, carbonates or sulfates thereof.
The individual crystal grains constituting the porcelain of the present invention have crystal lattices of perovskite structure. The porcelain of the present invention is characterized in that the particle diameter distribution of the crystal grains constituting the porcelain is controlled as predetermined; specifically, 85% or more of the crystal grains are constituted by grains having particle diameters of 10 μm or less and the maximum particle diameter of the grains is in a range of 5 to 25 μm. In a preferred particle diameter distribution of the crystal grains, 90% to less than 100% of the crystal grains have particle diameters of 10 μm or less and the maximum particle diameter of the grains is in a range of 10 to 25 μm.
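As a purely illustrative aid, and not part of the patent text, the claimed particle-diameter distribution can be expressed as a simple check over a list of measured grain diameters. The thresholds come from the claim above; the function name, the input format, and the sample data are assumptions.

def satisfies_claimed_distribution(diameters_um):
    """True if at least 85% of grains are 10 micrometres or less in diameter
    and the maximum grain diameter lies between 5 and 25 micrometres."""
    if not diameters_um:
        return False
    share_small = sum(d <= 10.0 for d in diameters_um) / len(diameters_um)
    return share_small >= 0.85 and 5.0 <= max(diameters_um) <= 25.0

print(satisfies_claimed_distribution([2, 3, 4, 6, 8, 9, 9, 10, 10, 14]))    # True: 90% small grains, max diameter 14
print(satisfies_claimed_distribution([2, 4, 6, 8, 11, 12, 13, 15, 18, 22]))  # False: only 40% of grains are 10 or less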
The action mechanism for why a porcelain having the above particle diameter distribution shows superior piezoelectric/electrostrictive properties, is not clear. However, from the results shown in Examples described later, it is clear that a porcelain constituted by crystal grains having a particle diameter distribution in the above mentioned range is superior in piezoelectric/electrostrictive properties to a porcelain having a particle diameter distribution in other range.
Next, description is made on the process for producing a piezoelectric/electrostrictive material according to the present invention.
First, raw materials (oxides, hydroxides and carbonates of metal elements) are weighed so as to give a composition within the range of the present invention and are mixed using a mixer such as a ball mill or the like. In this mixing, it is preferred to allow the primary particles of each raw material after mixing to have particle diameters of 1 μm or less, in order to allow the porcelain obtained to have a particle diameter distribution specified in the present invention.
Then, the resulting mixed powder is calcinated in the air at 850 to 950° C. to obtain a calcinated material. An appropriate calcination temperature is 850 to 950° C. With a calcination temperature above 950° C., the resulting sintered material is nonuniform and, with a calcination temperature below 850° C., an unreacted phase remains in the resulting sintered material, making it impossible to obtain a dense porcelain.
Next, the calcinated material obtained is ground using a grinder such as a ball mill or the like until the ground material comes to have a specific surface area of preferably 7 m2/g or less, more preferably 5 m2/g or less. The ground material is molded by a monoaxial press and then by a hydrostatic press to obtain a molded material of desired shape. The molded material is fired at 1,100 to 1,250° C. to obtain a sintered material. The most appropriate firing temperature is 1,150 to 1,200° C.
In the above-mentioned production process, it is important to control the Ba/Ti ratio of BaTiO3 [the (Ba+Sr)/Ti ratio when part of the Ba has been substituted with Sr] depending upon the kinds and amounts of the components (e.g. CuO and Nb2O5) added to the main component BaTiO3. The Ba/Ti ratio [or the (Ba+Sr)/Ti ratio] is appropriately controlled so that an intended crystal grain diameter distribution can be obtained depending upon the amounts and forms (e.g. salt or metal) of the components added, the firing temperature, etc.
The sintered material (porcelain) obtained by firing is subjected to a polarization treatment and then allowed to stand for 24 hours or more, whereby the resulting material has a high strain property. The piezoelectric/electrostrictive material according to the present invention is superior in displacement property; therefore, it is useful as a general electromechanical transducer and is suitably used in an actuator, a sensor, etc. |
// Imports restored for compilation; TryCatchBlock and IllegalPatternError are assumed to live in the same package.
import org.jetbrains.annotations.NotNull;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.ListIterator;
import java.util.Map;
import java.util.Set;

/**
* Class holding every statement extracted from the original bytecode and some method useful to process and rearrange
* them. Can be seen as a single method represented by a collection of statements.
*
* @author D.Pizzolotto
*/
public class ExtractedBytecode {
/**
* Flag used to indicate where should be put the code checking indicating if an exception has been caught
*/
public static final String POSTPROCESS_IS_CATCHED = "$$_EXCEPTION_CHECK";
/**
* Flag used to indicate where should be inserted the 'exception cleanup' code
*/
public static final String POSTPROCESS_EXCEPTION_CLEAR = "$$_EXCEPTION_CLEAR";
/**
* List containing every bytecode opcode transformed in C. Every entry of this list correspond to an opcode ready
* to be written in the C file
*/
public List<String> statements;
/**
* List containing every label found in the method, in order
*/
public List<String> labels; //ordered labels
/**
* List containing every try-catch block, represented with an ad-hoc structure
*/
public List<TryCatchBlock> tryCatchBlocks; //these are needed after gathering every label
/**
* Set containing the list of labels (as they can be found in labels) used by the C code
*/
public Set<String> usedLabels;
/**
* A set containing the list of every catch block that can be found anywhere in the method code. This is used to
* know which block should be undefined after every basic block
*/
public Set<String> catchedStatements;
/**
* Maximum size of the opcode stack
*/
public int maxStack;
/**
* Maximum size of the variables array
*/
public int maxLVar;
/**
* True if the method is static
*/
public boolean isStatic;
/**
* The type of return used in the function (the UPPERCASE letter preceding the RETURN in the opcode name)
*/
public char returnType;
/**
* Initialize this class
*
* @param staticMethod true if the method is flagged as static
*/
public ExtractedBytecode(boolean staticMethod) {
statements = new ArrayList<>();
tryCatchBlocks = new ArrayList<>();
labels = new ArrayList<>();
usedLabels = new HashSet<>();
catchedStatements = new HashSet<>();
this.isStatic = staticMethod;
this.returnType = 'V';
}
/**
* After collecting the extent of every try-catch block, and thus after the method reading is finished, this
* method is used to compute the length of every try-catch block. The length of these blocks can be used to know
* which is the innermost block in case they are nested (pro tip: the innermost is the shortest one)
*/
private void computeTryCatchLength() {
for (TryCatchBlock current : tryCatchBlocks) {
current.startIndex = labels.indexOf(current.start);
current.endIndex = labels.indexOf(current.end);
if (current.startIndex < 0 || current.endIndex < 0) {
throw new IllegalPatternError("Inconsistent try-catch blocks");
} else {
current.length = current.endIndex - current.startIndex;
}
}
}
/**
* Returns the C preprocessor defines used to signal that every catch has been exited. Note that this class
* defines catch blocks at the beginning of every basic block and undefines them at the end of it. This method
* generates the string that undefines them.
*
* @return The string representing a list of statements undefining every catch block preprocessor directive
*/
@NotNull
private String generateExitCatchs() {
//generate the string undefining every catch statements
StringBuilder sb = new StringBuilder();
for (String s : this.catchedStatements) {
sb.append("#undef CATCH_");
sb.append(s.replaceAll("/", "_"));
sb.append("\n");
}
return sb.toString();
}
/**
* Given a list of TryCatchBlock and a list of labels, returns the map (Label,(CatchStmt,TryCatchBlock)) where for
* each label pairs (catched exception, trycatch block) are recorded. Only the innermost try-catch for each label is
* saved
*
* @return The reorganized list of try-catch blocks (check above description)
*/
private Map<String, Map<String, TryCatchBlock>> reorganizeTryCatchs() {
//calculate the length of every try-catch block
this.computeTryCatchLength();
//<Label<Catch stmt,TryCatchBlock>>
Map<String, Map<String, TryCatchBlock>> defines = new HashMap<>();
//add try-catchs to every basic block
for (TryCatchBlock current : tryCatchBlocks) { //for every try-catch
for (int i = current.startIndex; i < current.endIndex; i++) { //for every label affected by the try-catch
Map<String, TryCatchBlock> map = defines.get(labels.get(i));
if (map == null) { //this is the first try catch of that label
map = new HashMap<>();
map.put(current.catched, current);
defines.put(labels.get(i), map);
} else { //another try-catch already exists
TryCatchBlock mapped = map.get(current.catched);
if (mapped == null) { //the try-catch was catching another exception
map.put(current.catched, current); //add the current catch
} else { //nested catch, need to keep the shortest one
if (mapped.length > current.length) { //mine is the shortest, the other one is removed
map.remove(current.catched);
map.put(current.catched, current);
}
/* else
the other one is the shortest, do nothing
*/
}
}
}
}
return defines;
}
/**
* Flatten the result of a reorganizeTryCatch() in order to get a pair (Label,String) where for each string the
* prepared `#ifdef catchedexception goto handle are used`
*
* @param reorgTryCatchRes The result of a reorganizeTryCatch() call
* @return The Map<Label,String> defined in the method description
*/
private Map<String, String> flattenTryCatchs(@NotNull Map<String, Map<String, TryCatchBlock>> reorgTryCatchRes) {
//now flatten the <Label<Catch stmt, TryCatchBlock>> into a <Label,Catch_stmt> by appending the TryCatchBlock
// handle
HashMap<String, String> retval = new HashMap<>();
for (Map.Entry<String, Map<String, TryCatchBlock>> pair : reorgTryCatchRes.entrySet()) {
StringBuilder catchstring = new StringBuilder();
for (Map.Entry<String, TryCatchBlock> inner : (pair.getValue()).entrySet()) {
catchstring.append("#define CATCH_");
catchstring.append(inner.getKey().replaceAll("/", "_"));
catchstring.append(" LABEL_");
catchstring.append(inner.getValue().handle);
catchstring.append("\n");
}
retval.put((pair.getKey()), catchstring.toString());
}
return retval;
}
/**
* Removes every unnecessary label, add and reorganize try-catch blocks for the method of this class. This MUST
* be called after the entire method has been visited
*/
public void postprocess() {
String exitCatchBlock = generateExitCatchs();
Map<String, Map<String, TryCatchBlock>> tryCatches = reorganizeTryCatchs(); //<label,catched stmt,
// trycatchblock>
Map<String, String> enterCatchBlock = flattenTryCatchs(tryCatches); //<label,#define1...#define2...>
ListIterator<String> it = statements.listIterator();
String labelpure = "";
while (it.hasNext()) {
String value = it.next();
if (value.length() > 6 && value.substring(0, 6).equals("LABEL_")) {
String label = value.substring(0, value.length() - 3);
labelpure = value.substring(6, value.length() - 3);
String catchme;
if (!usedLabels.contains(label)) {
it.remove();
}
it.add(exitCatchBlock);
if (enterCatchBlock.containsKey(labelpure)) {
catchme = enterCatchBlock.get(labelpure);
it.add(catchme);
}
} else if (value.equals(POSTPROCESS_IS_CATCHED) || value.equals(POSTPROCESS_EXCEPTION_CLEAR)) {
//need to add a dynamic type checking for the user-defined exceptions
//add also the ExceptionClear() block. For exceptions generated in the JVM and catched in the JNI
boolean clear = value.equals(POSTPROCESS_EXCEPTION_CLEAR);
it.remove();
// last label used, since I'm not right after a label ----------v
Map<String, TryCatchBlock> currentLabelCatch = tryCatches.get(labelpure);
if (currentLabelCatch == null) { //no catchblock for the current basic block, so throw the exception
it.add("_ThrowBack(child,env,_stack,&_index);\nRETURN_EXCEPTION;\n");
} else { //inside a catchblock, so if(raised exception instance of catched exception) goto catch, else
// throw
//flatten into array
List<TryCatchBlock> list = new ArrayList<>(currentLabelCatch.values());
//reorder array otherwise I could break inheritance (catching in the wrong block)
list.sort(Comparator.comparingInt(block -> (block.order)));
for (TryCatchBlock catched : list) {
//no need to if-elif-else since every if is broken by a goto
it.add("if(_ExceptionInstanceOf(child,env,_stack,\"" + catched.catched + "\")){\n");
if (clear) {
it.add("(*env)->ExceptionClear(env);\n");
}
it.add("goto LABEL_" + catched.handle + ";\n}\n");
}
it.add("_ThrowBack(child,env,_stack,&_index);\nRETURN_EXCEPTION;\n");
}
}
}
}
} |
New, secretly obtained photos show that elephants snatched from the wild in Zimbabwe months ago and airlifted recently to China are malnourished, sunken-looking, and scarred by wounds.
“These calves look really horrible,” says Joyce Poole, co-founder of ElephantVoices, a Kenya-based research and advocacy organization. Poole reviewed the photos, which were sent exclusively to National Geographic.
“I have seen at least 23 elephants,” wrote Chunmei Hu, a project manager with Nature University, a Beijing-based environmental NGO.
Hu says she took the photos on Monday at the Qingyuan Chimelong quarantine facility, in Guandong Province. “Most of the elephants have been hurt.”
Last month, conservationists alerted National Geographic that Chinese crews were in Hwange National Park—where tens of elephants had been held since November 2014—readying them for transport to China.
China’s purchase of the elephants from Zimbabwe is sanctioned under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
CITES General Secretary John Scanlon released a statement confirming the transfer of 24 elephants to China earlier this month. The statement said the elephants are destined for the massive Chimelong Safari Park, in Guangdong.
The export has been decried by animal welfare advocates and conservation organizations around the globe. Opponents say it’s cruel to subject elephants—known for their emotional depth, cooperative nature, and great intelligence—to the trauma of separation from their kind and confinement in prison-like zoos and safari parks.
Zimbabwe announced earlier this year that it would be selling more elephants abroad.
Bullhooks or Infighting?
Poole says the elephants—many as young as four—have protruding cheekbones, lackluster skin, a mottled complexion—which signifies poor condition—and abrasions.
She speculates that the wounds may have been inflicted by people, or by infighting among the elephants, or during their journey from Zimbabwe to China. Or indeed by a combination of all three.
Many of the injuries “are consistent with bullhook wounds,” Poole says, which are sometimes used in transporting and disciplining elephants. (Bullhooks are poker-like, metal instruments traditionally used to “train” elephants.)
“The calves are covered with so many smaller and larger wounds that no matter what they were caused by, the owners and/or handlers must be held accountable,” she says.
Attempts to reach the Qingyuan Entry-Exit Inspection and Quarantine Bureau, which oversaw the creation of the Chimelong facility, have not been successful. National Geographic also asked Meng Xianlin, Executive Director-General of the CITES Management Authority of China, to comment. No response was received by the time of publication.
Scott Blais, the CEO of the Global Sanctuary for Elephants, a Tennessee-based organization that aims to create a network of refuges for captive elephants, also reviewed the photographs. Blais once worked in the captive elephant industry.
Blais doesn’t think bullhooks are largely to blame for the wounds: “Many are in areas that a hook wouldn’t typically be used, and there are abrasions that are atypical for hook injuries.”
Rather, he believes, the wounds—some of which are deep and weeks old—are from infighting.
Blais says that what lies ahead for these elephants is many years of unnaturally aggressive behavior. “They’ve already started to lose empathy for one another," he says, "which is a core element of their normal state of being.” |
<reponame>Edimartin/edk-source
#include "Triangle2D.h"
/*
Library Triangle2D - Draw a 2D Triangle in EDK Game Engine
Copyright 2013 <NAME> (<EMAIL>)
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#ifdef printMessages
#warning " Inside Triangle2D.cpp"
#endif
edk::shape::Triangle2D::Triangle2D()
{
this->polygonColor.a=1.f;
//create a new polygon with 3 vertex
edk::shape::Polygon2D::createPolygon(3u);
}
edk::shape::Triangle2D::~Triangle2D()
{
//delete the polygon
edk::shape::Polygon2D::deletePolygon();
}
//createPolygon
bool edk::shape::Triangle2D::createPolygon(){
//return true
return true;
}
//Virtual Functions
bool edk::shape::Triangle2D::createPolygon(edk::uint32 vertexCount){
//set the function to do nothing
if(vertexCount)
return true;
return false;
}
void edk::shape::Triangle2D::deletePolygon(){
//set the function to do nothing
}
//change the vertex order so that the polygon is counterclockwise
bool edk::shape::Triangle2D::calculateCounterClockwise(){
//
if(this->getVertexCount()==3u){
if(this->vertexs[0u] &&
this->vertexs[1u] &&
this->vertexs[2u]
){
if(!this->isCounterclockwise()){
edk::shape::Vertex2D* temp = this->vertexs[1u];
this->vertexs.set(1u,this->vertexs[2u]);
this->vertexs.set(2u,temp);
}
return true;
}
}
return false;
}
//print the triangle
void edk::shape::Triangle2D::print(){
//
printf("\nTriangle");
edk::shape::Polygon2D::print();
}
//Draw the triangle
void edk::shape::Triangle2D::draw(){
//draw the polygon
edk::GU::guPushMatrix();
edk::GU::guTranslate2f32(this->translate);
edk::GU::guRotateZf32(this->angle);
edk::GU::guScale2f32(this->scale);
edk::GU::guBegin(GU_TRIANGLES);
this->drawVertexs();
edk::GU::guEnd();
edk::GU::guPopMatrix();
}
void edk::shape::Triangle2D::drawWire(){
//draw the polygon
edk::GU::guPushMatrix();
edk::GU::guTranslate2f32(this->translate);
edk::GU::guRotateZf32(this->angle);
edk::GU::guScale2f32(this->scale);
edk::GU::guBegin(GU_LINES);
this->drawVertexs();
edk::GU::guEnd();
edk::GU::guPopMatrix();
}
|
Turbulence Measurements with Dual-Doppler Scanning Lidars Velocity-component variances can be directly computed from lidar measurements using information of the second-order statistics within the lidar probe volume. Specifically, by using the Doppler radial velocity spectrum, one can estimate the unfiltered radial velocity variance. This information is not always available in current lidar campaigns. The velocity-component variances can also be indirectly computed from the reconstructed velocities but they are biased compared to those computed from, e.g., sonic anemometers. Here we show, for the first time, how to estimate such biases for a multi-lidar system and we demonstrate, also for the first time, their dependence on the turbulence characteristics and the lidar beam scanning geometry relative to the wind direction. For a dual-Doppler lidar system, we also show that the indirect method has an advantage compared to the direct one for commonly-used scanning configurations due to the singularity of the system. We demonstrate that our estimates of the radial velocity and velocity-component biases are accurate by analysis of measurements performed over a flat site using a dual-Doppler lidar system, where both lidars stared over a volume close to a sonic anemometer at a height of 100 m. We also show that mapping these biases over a spatial domain helps to plan meteorological campaigns, where multi-lidar systems can potentially be used. Particularly, such maps help the multi-point mapping of wind resources and conditions, which improve the tools needed for wind turbine siting. |
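To make the geometry argument above concrete, the following is a minimal sketch, not the authors' code, of how a dual-Doppler system recovers the two horizontal velocity components from two radial (line-of-sight) velocities, and why nearly parallel beams make the reconstruction ill-conditioned. The zero-elevation assumption, the azimuth convention, the sample numbers, and all names are illustrative assumptions.

import numpy as np

def reconstruct_uv(radial_velocities, azimuths_deg):
    """Solve v_r = u*sin(az) + v*cos(az) for the horizontal components (u, v),
    assuming horizontal beams (zero elevation) and azimuths measured from north.
    Returns (u, v) and the condition number of the geometry matrix as a rough
    indicator of how close the system is to singular."""
    az = np.radians(azimuths_deg)
    A = np.column_stack((np.sin(az), np.cos(az)))   # 2x2 geometry matrix, one row per beam
    uv = np.linalg.solve(A, np.asarray(radial_velocities, dtype=float))
    return uv, np.linalg.cond(A)

# Well-separated beams: stable reconstruction.
uv, cond = reconstruct_uv([6.0, 9.5], azimuths_deg=[30.0, 120.0])
print(uv, cond)

# Nearly parallel beams: the geometry matrix is close to singular, the condition
# number blows up, and measurement noise is strongly amplified in (u, v).
uv, cond = reconstruct_uv([6.0, 6.1], azimuths_deg=[30.0, 33.0])
print(uv, cond)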
/**
* Return true if the given colour is recognised as the path to follow.
*/
private boolean isPath (Color col)
{
        // Placeholder: every colour is currently accepted as the path; a real
        // implementation would compare col against the calibrated path colour.
        return true;
} |
<gh_stars>1-10
package towerapi
type Users struct {
Count int `json:"count"`
Next string `json:"next"`
Previous string `json:"previous"`
Results []User `json:"results"`
}
type User struct {
Auth []interface{} `json:"auth"`
Created string `json:"created"`
Email string `json:"email"`
ExternalAccount string `json:"external_account"`
FirstName string `json:"first_name"`
ID int `json:"id"`
IsSuperuser bool `json:"is_superuser"`
IsSystemAuditor bool `json:"is_system_auditor"`
LastName string `json:"last_name"`
LdapDn string `json:"ldap_dn"`
Related struct {
AccessList string `json:"access_list"`
ActivityStream string `json:"activity_stream"`
AdminOfOrganizations string `json:"admin_of_organizations"`
Credentials string `json:"credentials"`
Organizations string `json:"organizations"`
Projects string `json:"projects"`
Roles string `json:"roles"`
Teams string `json:"teams"`
} `json:"related"`
Type string `json:"type"`
URL string `json:"url"`
Username string `json:"username"`
}
|
<filename>pr/src/main/java/project/model/Relationship.java
package project.model;
public class Relationship {
private String id;
private String usernameOrId1;
private String usernameOrId2;
public Relationship(){};
public Relationship(String i, String id1, String id2) {
id = i;
usernameOrId1 = id1;
usernameOrId2 = id2;
}
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getUsernameOrId1() {
return usernameOrId1;
}
public void setUsernameOrId1(String usernameOrId1) {
this.usernameOrId1 = usernameOrId1;
}
public String getUsernameOrId2() {
return usernameOrId2;
}
public void setUsernameOrId2(String usernameOrId2) {
this.usernameOrId2 = usernameOrId2;
}
}
|
Performance evaluation of wireless LANs in the indoor environment The results of a performance evaluation of a wireless LAN (WLLAN) in an indoor environment are presented. The LAN operates in the ISM bands using spread spectrum technology. Hardware specific parameters such as bit error rate vs. signal to noise ratio (BER vs. SNR), maximum transmission rate, and platform software overhead are measured experimentally for a single node. These empirical data, in combination with manufacturer specifications, are then used as a basis for deriving a network simulation model. A ray trace algorithm is used to obtain the indoor channel characteristics for point to point transmissions within a test room. The network simulation uses the measured BER vs. SNR, node and hardware specifications, and the ray trace channel characteristics to model the behavior of a multiple node network in a wireless environment. |
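As a rough illustration, not the authors' implementation, of how an empirically measured BER-versus-SNR curve can drive a packet-level network simulation, one can interpolate the measured curve and draw packet success from it. The sample curve values, the packet length, and the function names below are assumptions.

import random

# Hypothetical measured BER-vs-SNR points (SNR in dB); real values would come
# from single-node measurements like those described in the abstract.
measured_snr_db = [2, 4, 6, 8, 10, 12]
measured_ber    = [1e-2, 3e-3, 8e-4, 1e-4, 2e-5, 1e-6]

def interpolate_ber(snr_db):
    """Piecewise-linear interpolation of the measured BER curve, clamped at the ends."""
    if snr_db <= measured_snr_db[0]:
        return measured_ber[0]
    if snr_db >= measured_snr_db[-1]:
        return measured_ber[-1]
    for (s0, b0), (s1, b1) in zip(zip(measured_snr_db, measured_ber),
                                  zip(measured_snr_db[1:], measured_ber[1:])):
        if s0 <= snr_db <= s1:
            t = (snr_db - s0) / (s1 - s0)
            return b0 + t * (b1 - b0)

def packet_survives(snr_db, packet_bits, rng=random):
    """Bernoulli draw: the packet is lost if any bit is in error,
    assuming independent bit errors at the interpolated BER."""
    p_ok = (1.0 - interpolate_ber(snr_db)) ** packet_bits
    return rng.random() < p_ok

print(interpolate_ber(7.0))
print(sum(packet_survives(7.0, 8 * 1500) for _ in range(1000)) / 1000)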
import Application from '../../src/HttpApplication/Application'
import { File } from 'atma-io'
UTest({
'should load commonjs module'(done) {
Application.clean().create({
configs: null,
config: {
env: {
server: {
scripts: {
npm: [
'appcfg'
]
}
}
}
}
})
.done(app => {
var scripts = app.config.env.server.scripts.npm;
has_(scripts, [
'/node_modules/appcfg/lib/config.js'
]);
is_(app.lib.config, 'Function');
done();
})
},
'should load commonjs module with alias'(done) {
Application.create({
configs: null,
config: {
env: {
server: {
scripts: {
npm: [
'appcfg::Foo'
]
}
}
}
}
})
.done(app => {
var scripts = app.config.env.server.scripts.npm;
has_(scripts, [
'/node_modules/appcfg/lib/config.js::Foo'
]);
is_(app.lib.Foo, 'Function');
done();
})
},
'should support array as `main` property': {
$before() {
this.path = {
package: /body-parser\/package\.json$/
};
this.package = class extends File {
exists = () => true
content = <any>{
main: [
'foo.js',
'baz.js',
'quux.css'
]
}
};
this.file = class extends File {
exists = () => true
content = 'Foo'
};
File.getFactory().registerHandler(
this.path.package, this.package
);
},
$after() {
File.getFactory().unregisterHandler(this.path.package, null);
},
'javascripts and styles'(done) {
Application.create({
configs: null,
config: {
env: {
server: {
scripts: {
npm: [
'body-parser'
]
}
},
client: {
styles: {
npm: [
'body-parser'
]
}
}
}
}
})
.done(app => {
var scripts = app.config.env.server.scripts.npm,
styles = app.config.env.client.styles.npm
has_(scripts, [
'/node_modules/body-parser/foo.js',
'/node_modules/body-parser/baz.js',
]);
has_(styles, [
'/node_modules/body-parser/quux.css'
]);
done();
})
}
}
}) |
Real time supervisory control for hybrid power system Efficient maximum power point tracking is achieved by designing a real-time control for a hybrid power system. The maximum power point (MPP) is extracted from the renewable sources, i.e., solar and wind energy. The MPP depends on irradiance and temperature for solar energy and on the wind speed ratio for wind energy. The characteristic output power is determined for solar and wind energy. The MPPT is obtained by using an MPPT algorithm for the input source. In this paper the perturbation and observation method is implemented. The operation of the proposed system is explained. The control circuit for the hybrid power system is realized by using a microcontroller or an FPGA (field-programmable gate array).
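Since the abstract names the perturbation and observation (P&O) method, here is a generic sketch of that algorithm; it is not taken from the paper, and the toy power curve, step size, and variable names are assumptions made for illustration.

def perturb_and_observe(measure_power, v_start, v_step=0.5, iterations=50):
    """Generic P&O MPPT loop: perturb the operating voltage and keep moving in
    the direction that increased the measured power, reversing otherwise."""
    v = v_start
    direction = +1
    p_prev = measure_power(v)
    for _ in range(iterations):
        v += direction * v_step
        p = measure_power(v)
        if p < p_prev:          # power dropped: we stepped past the MPP, so reverse
            direction = -direction
        p_prev = p
    return v, p_prev

def toy_curve(v):
    """Toy single-peak power curve with its maximum near 17 V, standing in for a
    PV array or wind-turbine characteristic at fixed conditions."""
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 120.0)

v_mpp, p_mpp = perturb_and_observe(toy_curve, v_start=10.0)
print(round(v_mpp, 1), round(p_mpp, 1))  # settles and oscillates around the maximum power point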
<filename>regtests/class/simple.py
"""simple class"""
class A:
def __init__(self):
self.x = 5
def main():
a = A()
TestError(a.x == 5)
|
from time import sleep

import docker

# Assumed client setup: the original snippet relies on a docker_client defined elsewhere.
docker_client = docker.from_env()


def status_wait(msg, check, cont_name, loop_cnt=20, t_sleep=5):
print(f'\n\t{msg}', end='', flush=True)
for r in range(loop_cnt):
cont_status = docker_client.containers.get(cont_name).status
cont_attrs = docker_client.containers.get(cont_name).attrs
if check == 'running' and cont_status == 'running':
return check
if check == 'healthy' and cont_attrs['State']['Health']['Status'] == 'healthy':
print()
return check
sleep(t_sleep)
print('.', end='', flush=True)
print()
cont_status = docker_client.containers.get(cont_name).status
cont_attrs = docker_client.containers.get(cont_name).attrs
return cont_status if check == 'running' else cont_attrs['State']['Health']['Status'] |
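A hypothetical call site for the helper above might look like the following; the container name, message, and error handling are illustrative assumptions rather than part of the original script.

# Wait up to loop_cnt * t_sleep seconds for a (hypothetical) database container to report healthy.
final_state = status_wait('Waiting for postgres to become healthy', 'healthy',
                          'my-postgres', loop_cnt=20, t_sleep=5)
if final_state != 'healthy':
    raise RuntimeError(f'Container never became healthy; last state: {final_state}')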
Avulsion fracture of the sublime tubercle of the ulna: a newly recognized injury in the throwing athlete. OBJECTIVE The purpose of this report is to describe the imaging features in three cases of avulsion injury of the sublime tubercle of the ulna that occurred in throwing athletes. CONCLUSION Avulsion fracture of the sublime tubercle of the ulna is a potential cause of chronic medial elbow pain in the throwing athlete. This entity is best evaluated with a combination of plain radiographs and coronal MR images, particularly gradient-echo images that show the continuity of the avulsed fragment with the ulnar collateral ligament. |
/**
* Function implementation.
*
* @author BaseX Team 2005-17, BSD License
* @author Christian Gruen
*/
public final class ArrayRemove extends ArrayFn {
@Override
public Item item(final QueryContext qc, final InputInfo ii) throws QueryException {
Array array = toArray(exprs[0], qc);
// collect positions, sort and remove duplicates
final LongList list = new LongList();
final Iter pos = exprs[1].iter(qc);
for(Item it; (it = pos.next()) != null;) {
list.add(checkPos(array, toLong(it), false));
}
list.sort().distinct();
// delete entries backwards
for(int i = list.size() - 1; i >= 0; i--) array = array.remove(list.get(i));
return array;
}
} |
<filename>packages/util/src/polyfill/setPrototypeOf.ts
/* eslint-disable @typescript-eslint/unbound-method */
// Copyright 2017-2020 @polkadot/util authors & contributors
// This software may be modified and distributed under the terms
// of the Apache-2.0 license. See the LICENSE file for details.
// React Native does not have Object.setPrototypeOf
if (!Object.setPrototypeOf) {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
Object.setPrototypeOf = function (obj: any, proto: object | null): void {
// eslint-disable-next-line no-proto
obj.__proto__ = proto;
return obj;
};
}
|
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import {Profesor} from './profesor';
import {Proyecto} from '../proyecto/proyecto';
@Injectable()
export class ProfesorDataServerService {
listaProfesores: Array<Profesor>;
listaProyectos: Array<Proyecto>;
constructor(private http: HttpClient)
{
}
cargarProyectos()
{
this.http.get('http://localhost:49475/Proyecto/obtenerProyectos').subscribe(data=>
{
this.listaProyectos= data as Array<Proyecto>;
});
}
cargarProfesores()
{
this.http.get('http://localhost:49475/Profesor/obtenerProfesores').subscribe(data=>
{
this.listaProfesores= data as Array<Profesor>;
});
}
guardarProfesor(profesor: Profesor)
{
const body={nombre:profesor.nombre, cedula: profesor.cedula, correo: profesor.correo, facultad: profesor.facultad, proyectoId:profesor.proyectoId};
this.http.post('http://localhost:49475/Profesor/guardarProfesor', body)
.subscribe();
return "Se guardo a: "+profesor.nombre;
}
}
|
<reponame>arendtio/math-queue
package mathqueue_test
import (
"math"
"testing"
"github.com/arendtio/mathqueue"
)
func check(t *testing.T, target int, result int) {
if target != result {
t.Fatal("Result:", result, "does not match target:", target)
}
}
func TestMedian(t *testing.T) {
mq := mathqueue.NewMathQueue()
mq.Enqueue(10)
check(t, 10, mq.Median())
mq.Enqueue(50)
check(t, 30, mq.Median())
mq.Enqueue(20)
check(t, 20, mq.Median())
mq.Enqueue(80)
check(t, 35, mq.Median())
mq.Dequeue() // 10
check(t, 50, mq.Median())
mq.Dequeue() // 50
check(t, 50, mq.Median())
mq.Dequeue() // 20
check(t, 80, mq.Median())
}
func TestPercentile(t *testing.T) {
mq := mathqueue.NewMathQueue()
// from 0 to 100 (101 elements)
for i := 0; i < 101; i++ {
mq.Enqueue(i)
}
check(t, 99, mq.Percentile(99))
}
func TestQuantile(t *testing.T) {
mq := mathqueue.NewMathQueue()
mq.Enqueue(10)
check(t, 10, mq.Quantile(1, 3))
mq.Enqueue(30)
check(t, 10, mq.Quantile(1, 3))
mq.Enqueue(80)
check(t, 20, mq.Quantile(1, 3))
mq.Enqueue(40)
check(t, 30, mq.Quantile(1, 3))
}
func TestMax(t *testing.T) {
mq := mathqueue.NewMathQueue()
mq.Enqueue(30)
check(t, 30, mq.Max())
mq.Enqueue(-40)
check(t, 30, mq.Max())
mq.Enqueue(80)
check(t, 80, mq.Max())
mq.Enqueue(50)
check(t, 80, mq.Max())
mq.Dequeue() // 30
check(t, 80, mq.Max())
mq.Dequeue() // -40
check(t, 80, mq.Max())
mq.Dequeue() // 80
check(t, 50, mq.Max())
}
func TestMin(t *testing.T) {
mq := mathqueue.NewMathQueue()
mq.Enqueue(30)
check(t, 30, mq.Min())
mq.Enqueue(10)
check(t, 10, mq.Min())
mq.Enqueue(-40)
check(t, -40, mq.Min())
mq.Enqueue(80)
check(t, -40, mq.Min())
mq.Dequeue()
check(t, -40, mq.Min())
mq.Dequeue()
check(t, -40, mq.Min())
mq.Dequeue()
check(t, 80, mq.Min())
}
func TestLimits(t *testing.T) {
mq := mathqueue.NewMathQueue()
mq.Enqueue(math.MaxInt64)
mq.Enqueue(1)
if mq.Sum() != -9223372036854775808 {
t.Fatal("Limits sohuld be tested more thoroughly")
}
}
|
The Federal Deposit Insurance Corp. took a look into the foreclosure operations at the largest mortgage servicers and found significant breakdowns at almost every stage of the process. However, these issues are largely isolated to the servicers that hold the largest share of the mortgage finance business. "To date, FDIC reviews of state nonmember banks have not identified instances of 'robo-signing' or other serious deficiencies in mortgage servicing operations," said the FDIC in an email alert Wednesday. "Nevertheless, any bank involved in residential mortgage servicing can benefit from understanding the issues identified in the interagency review." The FDIC looked into the 14 mortgage servicers that signed consent orders with their regulators in April. The settlements include requirements to fix holes in the process, provide more loss-mitigation efforts and even left the door open for fines. These large companies came under investigation from the agencies and the 50 state attorneys general when news surfaced of faulty documentation processes that are still being corrected. "Foreclosure governance processes of the servicers were underdeveloped and insufficient to manage and control operational, compliance, legal, and reputational risk associated with an increasing volume of foreclosures," the FDIC said in its report. Agents interviewed servicer employees and reviewed roughly 2,800 foreclosure files in both judicial and nonjudicial states. The findings match the scope of the problem the Office of the Comptroller of the Currency and the Federal Reserve found in their investigation. According to the study, the thin policies and procedures at these companies buckled under the record levels of delinquent loans. Monitoring, quality control and audit reviews failed to detect few of the corners cut. The reputational and legal risks of those practices were rarely communicated to the board of directors or senior management, according to the FDIC. Similar breakdowns occurred at third-party vendors hired by these servicers to handle documentation services. The Mortgage Electronic Registration Systems did not invest enough resources, staff or training to properly handle the caseloads, and Lender Processing Services (LPS) failed to establish enough internal controls or risk management to catch documentation issues, according to the FDIC. Both MERS and LPS are entangled in a variety of lawsuits and face sanctions across the country for allegedly mishandling mortgage documents during foreclosure. "In addition LPS executed and recorded numerous affidavits, assignments of mortgages, and other mortgage-related documents that contained inaccurate information or were not properly notarized or based on personal knowledge," the FDIC said. The FDIC reviewed practices at some of the smaller state nonmember banks under its umbrella. These companies collectively service less than 4% of mortgages in the U.S. The study did not find the same problems at the larger firms – not enough to warrant formal enforcement actions, the FDIC said. "Community banks fared far better than larger institutions in terms of delinquency rates on residential mortgage loans and have undertaken far fewer foreclosures," the FDIC said. "Nevertheless, community banks should be aware of the lessons learned from the horizontal review when assessing their servicing practices." Write to Jon Prior. Follow him on Twitter @JonAPrior. |
<reponame>mrpiggy97/golang-leetcode
package main
import (
"fmt"
"strings"
"unicode"
"github.com/mrpiggy97/golang-leetcode/arithmetic"
"github.com/mrpiggy97/golang-leetcode/stringMethods"
)
func main() {
var initialString = "fabian#@!-_"
var Sentence string = "this-is-a-sentence"
var separatedSentence []string = strings.Split(Sentence, "-")
var transformThis string = "christopher"
var firstNumber int = 18
var secondNumber int = 12
initialString = stringMethods.ReverseString(initialString)
fmt.Printf("%v %v\n", initialString, strings.ToUpper(initialString))
fmt.Printf("%v\n", separatedSentence)
fmt.Printf("%v\n", transformThis)
arithmetic.DivisorFind(firstNumber)
arithmetic.DivisorFind(secondNumber)
arithmetic.DivisorFind(4500000000000000000)
var newS string = stringMethods.CreateNewString("NyffsGeyylB")
fmt.Printf("%v\n", newS)
var otherString string = stringMethods.RepeatString(4, "a")
fmt.Printf("%v\n", otherString)
fmt.Printf("%v\n", arithmetic.MultiplyAllMembers([]int{1, 2, 3, 4, 5}))
fmt.Printf("%v\n", arithmetic.DoubleTheAge(36, 7))
fmt.Printf("%v\n", stringMethods.CompareEndOfString("", ""))
fmt.Printf("%v\n", stringMethods.ChangeToCamelCase("the-stealth-warrior"))
fmt.Printf("%v\n", arithmetic.Race(720, 850, 70))
fmt.Printf("%v\n", stringMethods.CheckIfUpperCase("CCsMO LA MADRE"))
fmt.Printf("%v\n", arithmetic.SumMembers(195))
fmt.Printf("%v\n", stringMethods.DuplicateEncode("fAbian"))
var names []string = []string{"fabian,", "jesus,", "rivas,"}
var appendThis []string = []string{"chris,", "agrippa,", "augustus,"}
var funcs []string = []string{"{", "}"}
for _, member := range appendThis {
names = stringMethods.InsertAtPosition(1, names, member)
funcs = stringMethods.InsertAtPosition(1, funcs, member)
fmt.Printf("%v\n", names)
fmt.Printf("%v\n", funcs)
}
var name string = "fabian Is the bOSSS"
fmt.Printf("%v\n", stringMethods.GetConvertedString(name, make(chan string, 1)))
var check []byte = []byte("")
fmt.Println(len(check))
for _, byteVal := range check {
fmt.Println(unicode.IsLetter(rune(byteVal)))
fmt.Println(unicode.IsDigit(rune(byteVal)))
}
}
|
Preadipocyte Factor-1 Is Associated with Marrow Adiposity and Bone Mineral Density in Women with Anorexia Nervosa Context: Despite having low visceral and sc fat depots, women with anorexia nervosa (AN) have elevated marrow fat mass, which is inversely associated with bone mineral density (BMD). Adipocytes and osteoblasts differentiate from a common progenitor cell, the human mesenchymal stem cell. Therefore, understanding factors that regulate this differentiation process may provide in-sight into bone loss in AN. Objective: The objective of the study was to investigate the relationship between preadipocyte factor-1 (Pref-1), a member of the epidermal growth factor-like family of proteins and regulator of adipocyte and osteoblast differentiation, and fat depots and BMD in AN. Design: This was a cross-sectional study. Setting: The study was conducted at a clinical research center. Patients: Patients included 20 women with AN (26.8 (cid:1) 1.5 yr) and 10 normal-weight controls (29.2 (cid:1) 1.7 yr). Interventions: There were no interventions. Main Outcomes Measure: Pref-1, leptin, IGF-I, IGF binding protein (IGF-BP)-2 and estradiol levels were measured. BMD A norexia nervosa (AN) is a primary psychiatric disorder, characterized by extreme self-imposed starvation, affecting 0.5-1% of college-aged women in the United States. There are many significant medical complications and comorbidities associated with the disease, and bone loss is among the most common. An estimated 50% of women with AN have osteopenia, with an additional 35% having evidence of osteoporosis. Low bone mass in adolescents and adults occurs in the setting of low sc and visceral fat depots, and loss of nutritionally dependent factors are important in the pathogenesis of bone loss. Recent advances in mesenchymal stem cell differentiation have demonstrated a possible inverse relationship between osteoblast differentiation and adipocyte differentiation. Because osteoblasts and adipocytes originate from a common progenitor, the human mesenchymal stem cell (hMSC), understanding the factors that potentially regulate the differentiation process of hMSCs into bone and fat may be of great importance in understanding clinical states of low bone mass. We have previously shown that although peripheral and visceral fat stores are low in AN, bone marrow adiposity is increased and inversely associated with bone mineral density (BMD). The clinical significance of bone marrow adiposity has been shown in a number of studies. For example, Schellinger et al. demonstrated that individuals with radiographic evidence of vertebral bone weakness, including wedging of vertebrae or vertebral body compression fractures, had higher percentages of vertebral marrow fat content compared with those without such findings. Little is known about the hormonal determinants of marrow fat. We therefore investigated preadipocyte factor-1 (Pref-1), an important factor in mesenchymal stem cell differentiation. We also investigated other hormonal mediators of low bone mass in AN including leptin, an adipokine linked to bone mass and decreased in AN, IGF-I, IGF binding protein (IGF-BP)-2 and estradiol. Pref-1 is a member of the epidermal growth factor (EGF)like-family of proteins and is expressed in several progenitor cell types including hMSCs and preadipocytes. 
Pref-1 is present on the extracellular membrane and is cleaved before it is released into the extracellular space to exert suppressive effects in an autocrine and paracrine manner on adipocyte and osteoblast differentiation. Interestingly, Pref-1 circulates in relatively high concentrations, although it is unclear how the circulating levels relate to target tissues. IGF-I is a nutritionally dependent hormone that is known to stimulate bone formation through effects on osteoblastic function. IGF-BP2 is one of the six IGF-BPs that binds IGF-I in the circulation and has been previously shown to be abnormally elevated in adult women with AN and has been shown to be inversely associated with markers of bone formation in AN. Although estrogen therapy has not been shown to increase BMD in women with AN, estrogen therapy in postmenopausal women has been shown to decrease marrow adipocyte volume and prevent increases in marrow adipocyte number. Leptin has been shown to increase trabecular bone volume and trabecular number in ovariectomized rats when administered peripherally. In humans with hypothalamic amenorrhea, a state characterized by low leptin levels, treatment with leptin has been shown to increase markers of bone formation including osteocalcin and bone-specific alkaline phosphatase. Given the phenotypic nature of AN with very little peripheral adipose tissue, low leptin, and reduced bone formation, we hypothesized that Pref-1, leptin, IGF-I, IGF-BP2 and estradiol would be associated with marrow adiposity in AN. Subjects Thirty women were studied: 20 women with AN (aged 19 -41 yr) and 10 normal-weight controls of comparable age (aged 25-42 yr). The 20 women with AN were recruited through referrals from local eating disorder providers and on-line advertisements, and the 10 normal-weight controls were recruited through on-line advertisements. Subjects met Diagnostic and Statistical Manual of Mental Disorders, fourth edition, weight and psychiatric criteria for AN. None of the subjects had received estrogen within 3 months of the study. All control subjects had a normal body mass index (BMI), a history of regular menstrual cycles and were receiving no medications known to affect bone mass. Control subjects did not have a past or present history of an eating disorder. Subjects with abnormal TSH, elevated FSH, chronic diseases known to affect BMD (other than AN), or diabetes mellitus were excluded from participation. All subjects were examined and blood was drawn for laboratory studies at a single study visit at our Clinical Translational Science Center. Height was measured as the average of three readings on a single stadiometer, and subjects were weighed on an electronic scale while wearing a hospital gown. BMI was calculated using the formula . The study was approved by the Partners Institutional Review Board and complied with the Health Insurance Portability and Accountability Act guidelines. Written informed consent was obtained from all subjects. The clinical characteristics and magnetic resonance imaging and dual-energy x-ray absorptiometry (DXA) data of nine subjects with AN and all of the control subjects have been previously reported. 
Radiological imaging All control subjects and a subset of 10 women with AN underwent 1 H-magnetic resonance spectroscopy of bone marrow of the L4 vertebral body, the proximal femoral epiphysis, metaphysis, and diaphysis to determine lipid content using a 3.0T magnetic resonance imaging system (Siemens Trio, Siemens Medical Systems, Erlangen, Germany); fitting of the 1 H-magnetic resonance spectroscopy data were performed using LC-Model software (version 6.1-4A; Stephen Provencher, Oakville, Ontario, Canada) as previously described. A single axial magnetic resonance imaging slice through the abdomen at the level of L4 and a single slice through the midthigh were obtained (Siemens Trio, 3T; Siemens Medical Systems) to determine abdominal sc adipose tissue (SAT), visceral adipose tissue (VAT), and total adipose tissue (TAT) as well as SAT of the thigh. All subjects (20 with AN and 10 healthy controls) underwent DXA to measure BMD of the anteroposterior (AP) lumbar spine (L1-L4), lateral spine (L2-L4), distal radius, total hip, femoral neck, and total body and body composition including fat mass (kilograms), lean mass (kilograms), and percent body fat using a Discovery A densitometer (Hologic Inc., Waltham, MA). CVs of DXA have been reported as less than 1% for bone, 1.1% for lean body mass, and 2.7% for fat mass. Statistical analysis Statistical analysis was performed using JMP software (SAS Institute, Carry, NC). The means and SEM measurements were calculated for AN and the control group, and the means were compared using the Student's t test. Correlations are for the group as a whole unless otherwise noted. Where data were not normally distributed, we either performed a transformation to approximate a normal distribution or used nonparametric tests. Log transformations were performed for leptin, abdominal SAT, VAT, and TAT. Clinical characteristics Clinical characteristics of the study subjects are presented in Table 1. Subjects with AN had lower weight, BMI, percent ideal body weight and percent body fat compared with controls. Hormonal parameters associated with body composition Percent body fat as measured by DXA, VAT and SAT were significantly lower in AN compared with the controls (Table 1). TAT as well as SAT of the thigh were also sig- nificantly lower in AN compared with controls (Table 1). There was an inverse correlation between Pref-1 and percent body fat (Fig. 2) Hormonal parameters associated with bone marrow fat content There was a positive correlation between Pref-1 and marrow fat of the proximal femoral metaphysis (R 0.50, P 0.01) (Fig. 3A). There was a negative correlation between leptin and marrow fat of L4 (Spearman's rho 0.45, P 0.05) (Fig. 3B). There was a significant inverse association between IGF-I and marrow fat of L4 in AN (R 0.70, P 0.02) and a positive correlation between IGF-I and L4 marrow adiposity in the controls (R 0.76, P 0.01). There was a significant positive association between IGF-BP2 levels and marrow adiposity of L4 in the group as a whole (R 0.46, P 0.04). In healthy controls there was a significant inverse association between IGF-BP2 levels and marrow adiposity of L4 (R 0.74, P 0.01). There were no associations between estradiol levels and bone marrow fat content. Hormonal parameters associated with BMD Subjects with AN had lower BMD of the total hip, AP spine, lateral spine, and total body compared with the controls (Table 1). There was an inverse correlation between Pref-1 and BMD of the AP spine (R 0.54, P 0.003) (Fig. 
4A) and lateral spine (R 0.44, P 0.02) (Fig. 4B). Significant correlations were not found between Pref-1 and BMD of the hip, distal radius, or total body. Leptin was positively correlated with BMD of the AP spine (Spearman's rho 0.38, P 0.04) (Fig. 4C) and hip (Spearman's rho 0.42, P 0.03) (Fig. 4D). Significant correlations were not found between leptin and BMD of the lateral spine, distal radius, or total body. In AN, Pref-1 was inversely associated with BMD of the AP spine (R 0.51, P 0.02), and leptin was positively correlated with hip BMD (R 0.50, P 0.02). Estradiol levels were positively associated with BMD of the lumbar spine (Spearman's rho: 0.39, P 0.04). IGF-I was positively correlated with BMD of the AP spine (R 0.46, P 0.048) and BMD of the hip (R 0.49, P 0.03) in the group as a whole and with hip BMD in AN (R 0.65, P 0.04). IGF-BP2 was inversely associated with BMD of the lateral spine (R 0.48, P 0.04) and hip (R 0.59, P 0.008) in the group as a whole and with BMD of the hip in AN (R 0.64, P 0.048). Discussion We have shown that women with AN have elevated levels of Pref-1. In addition, our data support the role of Pref-1 as a regulator of adipocyte and osteoblast differentiation. We have also shown that IGF-I is negatively associated with L4 marrow adiposity in AN in contrast to IGF-BP2, which is positively associated with L4 marrow fat. Our data, demonstrating an inverse association between leptin and L4 marrow adiposity, support the hypothesis that the role of marrow fat is distinct from that of sc and visceral fat depots. AN is a psychiatric disorder characterized by extreme low body weight and is associated with multiple medical comorbidities including significant bone loss. Fracture risk is also significant in this population. A prospective study of 27 women with AN demonstrated a 7-fold increased risk of nonvertebral fracture during a mean of 2 yr of follow-up. A retrospective population-based study demonstrated a 3-fold increased risk of fracture many years after the initial diagnosis of AN, with the long-term cumulative incidence of any fracture being 57%. Thus, understanding the mechanisms of bone loss and the factors that regulate low bone mass in this population is of particular importance. We have recently shown that women with AN have increased marrow fat content in the lumbar spine, femoral metaphysis, and diaphysis and that there is an inverse relationship between marrow adiposity and BMD at multiple skeletal sites. The clinical importance of bone marrow fat content has been demonstrated in a number of studies. Schellinger et al. demonstrated that subjects with morphological evidence of bone weakness, such as Schmorl's nodes, end plate depression, wedging of vertebrae, and/or compression fractures had elevated levels of vertebral marrow fat content. A recent study also demonstrated that marrow adiposity, unlike visceral and sc fat, is not associated with increased risk of cardiovascular disease, suggesting that the genesis of marrow fat is clinically distinct from that of visceral and sc fat. Several studies have demonstrated an inverse relationship between marrow adiposity and BMD. Yet it is provocative that in AN, in which marrow adipose depots are elevated, visceral and sc fat depots are very low. This suggests that the role of marrow fat in the human is distinct from that of sc and visceral depots and may include the local regulation of bone formation. Osteoblasts and adipocytes are derived from a common progenitor, hMSCs. 
Many factors have been shown to affect differentiation of the hMSC into either osteoblasts or adipocytes. Peroxisome proliferator-activated receptor-␥ agonists and glucocorticoids have been shown to induce adi-pocyte differentiation, whereas in vitro studies have demonstrated that estrogen administration leads to osteoblastogenesis with concomitant inhibition of adipogenesis. Yet it is still unclear whether this process of differentiation is a switch process or independently regulated because there are states, such as puberty, during which both marrow adipocyte differentiation and osteoblast differentiation are increased, arguing against a mutually exclusive switch process. New evidence is also emerging that independent preosteoblast and preadipocyte populations of mesenchymal stem cells may exist, providing a possible explanation for states of concomitant osteogenesis and adipogenesis. Pref-1, a member of the EGF-like family of proteins, has also been shown to be an important regulator of adipocyte and osteoblast differentiation. Pref-1 is highly expressed in osteoblastic cell lines, preadipocytes, and hMSCs and is a negative regulator of adipocyte and osteoblast differentiation. Osteoblast-specific Pref-1 overexpression in a mouse model results in significantly low-bodyweight mice and significantly reduced BMD. Moreover, Pref-1's role as a regulator of energy stores has been further elucidated with the finding that overexpression of Pref-1 in a mouse model leads to lower adipose tissue mass than wild-type mice but increased insulin resistance. Therefore, it appears that Pref-1, an in vitro inhibitor of both osteoblast and adipocyte differentiation, may also be an important in vivo regulator of several metabolic processes. Leptin, which is a major regulator of appetite, has been shown to be decreased in AN, most likely as a result of reduced total body fat. With respect to the skeleton, leptin acts centrally to enhance sympathetic tone and reduce bone formation, although leptin may also have direct effects on distinct skeletal sites. Our findings of a positive association between leptin and BMD in the low leptin state of AN are consistent with findings that leptin, when provided sc to women with low leptin levels, stimulates markers of bone formation and with observational studies that demonstrate a positive association between leptin levels and bone mass in postmenopausal women. Our finding that leptin is inversely associated with marrow fat is also consistent with in vitro studies in which leptin blocks hMSC differentiation into adipocytes and stimulates osteogenesis. Whereas we found an inverse association between leptin and marrow fat at an axial site, we did not find an association between leptin and a peripheral site of marrow fat. One explanation for this may be that the sample size of our study population was simply too small to detect this difference. Another possible explanation is that leptin acts differentially at different marrow fat sites. Hamrick et al. demonstrated that when compared with wild-type mice, leptindeficient ob/ob mice had increased marrow adipocyte number in the femur, whereas in the vertebra there was decreased marrow adipocyte number. Therefore, it is possible that leptin is differentially associated with the various marrow depots in the human as well. 
IGF-I is known to be a stimulator of osteoblastogenesis and is known to be low in states of nutritional deprivation, such as AN, whereas IGF-BP2, a binding protein that binds to IGF-I in the circulation, has been shown to be elevated in adults with AN. We have shown that IGF-I is positively associated with BMD of the spine and hip and is inversely associated with L4 marrow adiposity in AN and that IGF-BP2 is positively associated with marrow adiposity. Interestingly, IGF-I was positively associated with L4 marrow adiposity in healthy controls, suggesting that IGF-I may have differential actions, depending on an individual's nutritional state and/or the hormonal milieu. There are several limitations to our study. First, this was a cross-sectional study, and therefore, we cannot determine causation based on these data; therefore, all of the relationships demonstrated in this study are purely associational and cannot imply causation. Second, the source of Pref-1 is not clear from our studies. It is possible that Pref-1, which is found in high levels in preadipocytes, is cleaved during the maturation process and released into the circulation, explaining the elevated levels of circulating Pref-1 observed in anorexia nervosa. Yet Pref-1 is synthesized in several tissues, including the liver, and therefore, we cannot exclude the possibility that this EGF-like protein from other tissue sources may contribute to the suppressed bone formation reported in AN. Third, any conclusions about leptin's effects on bone have to be tempered by the complex nature of its relationship to hypothalamic processing and sympathetic signaling. Fourth, because only a single estradiol level was measured, definitive conclusions regarding the relationship between estradiol and marrow adiposity cannot be made. Given the exploratory nature of our study, further studies, involving larger study populations, will be needed in further understanding the association between Pref-1, leptin, and marrow fat. In conclusion, our data demonstrate that women with AN have significantly higher levels of Pref-1, an important negative regulator of adipocyte and osteoblast differentiation. We have shown that Pref-1 is associated with marrow adiposity and low bone mass. Further understanding of the role of Pref-1 as a possible mediator of adipocyte and osteoblast differentiation may be of significant clinical importance. |
import { Switch } from '@chakra-ui/react'
import { SwitchProps } from './Switch'
/**
* Switches are the preferred way to adjust settings.
*
* The option that the switch controls, as well as the state it’s in, should be made clear from the
* corresponding inline label.
*/
export const SimpleSwitch = ({ label, ...other }: SwitchProps) => {
return <Switch aria-label={label} {...other} data-testid="SimpleSwitch" />
}
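// Hypothetical usage sketch (not part of the original file); `isChecked` and `onChange`
// are assumed to be standard Chakra UI Switch props forwarded through {...other}:
//
//   <SimpleSwitch label="Enable notifications" isChecked={enabled} onChange={toggle} />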
|
Ethnicity in Interaction: The State-of-the-Art This paper is a work-in-progress on the nature of ethnicity as viewed from an interactional sociolinguistic point of view. Given the goal of our main research, which concentrates on the ethnical bias of literary characters in general, and dramatic genre in particular, we focus our attention on ethnic identities as visible through face-to-face interaction. As the corpus of our main research (G.B. Shaw's plays), a dialogic corpus of texts, belongs to the dramatic genre, it is an ideal field for micropragmatic analysis. It is known from the sociolinguistic literature (Wardhaugh, Trudgill, Romaine, etc.) that language is the primary and most overt marker of ethnic identity, therefore it is not to be discussed here. Other, more covert markers of ethnicity will be addressed, like conversational strategies as consequences of speech acts, markers of power and solidarity, politeness and impoliteness, face, role, turn-taking issues, gender stereotypes. This study offers a theoretical summary which would be applied in later text-based analyses. |
Hatton Munro & Partners are delighted to present to the market this stunning four bedroom detached property which occupies an enviable plot at the end of a cul de sac. This property is very well presented throughout, has been fully modernised to a high standard and really does need to be viewed to appreciate all that it has to offer. Entry to the property is via a reception hallway with stairs rising to the first floor and providing access to the downstairs office/study, sitting room and to the 29'1" x 12'0" kitchen dining room. The 20'3" x 11'11" sitting room is to the front of the property and includes a fantastic wall mounted fire. A set of glazed doors connects the sitting room with the dining room and modern fitted kitchen. Bi-folding doors open from the dining room to an enclosed 22ft square decked patio area. To the first floor are four bedrooms, a master ensuite shower room and a family bathroom that has a stand-alone bath. Outside, there are gardens, off road parking for up to five vehicles and a garage.
Reception Hallway: Double glazed panels sit either side of the entrance door. Covered radiator. Stairs rising to the first floor. Tiled flooring. Access from the entrance hall to the home office/study, sitting room and to the kitchen dining room.
Home Office/Study (8'10" x 6'11" (extd 10'4") (2.69m x 2.11m (extd 3.15m))): "L"-shaped room with a double glazed bay window to the front. Radiator. Under stairs storage cupboard.
Kitchen (13'0" x 12'0" (3.96m x 3.66m)): Double glazed window to the rear. This modern quality kitchen was installed in 2017 and comprises a one-and-a-half stainless steel sink drainer unit, a range of modern wall and floor mounted units, and work surfaces with cupboards and drawers below. Tiled splash backs. Built-in stainless steel gas hob and extractor with a separate eye-level stainless steel double oven. Integrated washing machine. Space and plumbing for an "American style" fridge freezer. Inset spotlighting. Tiled flooring.
First Floor: Landing with a double glazed window to the side. Access from this landing to the loft space as well as to all four bedrooms and a family bathroom. Covered radiator. Built-in storage cupboard.
Ensuite Shower Room (8'6" x 4'2" (2.59m x 1.27m)): Double glazed window to the side. Stainless steel towel radiator. This modern en suite comprises a white low level W.C., wash hand basin and a double shower enclosure with a mixer shower. Inset spotlighting. Part tiled walls. Tiled flooring.
Bedroom Two (11'9" x 11'1" (3.58m x 3.38m)): Double glazed window to the rear. Radiator.
Bedroom Three (9'2" x 8'4" (2.79m x 2.54m)): Double glazed window to the rear. Radiator.
Bedroom Four (9'1" x 8'4" (2.77m x 2.54m)): Double glazed window to the rear. Radiator. This room has been fitted out as a dressing room with a range of modern fitted wardrobes.
Family Bathroom (7'8" x 7'2" (2.34m x 2.18m)): Double glazed window to the front. Radiator with an added towel heater over. This modern bathroom suite comprises a white low level W.C., wash hand basin and a stand-alone bath. Inset spotlighting.
Outside Front: This detached property sits at the end of a cul de sac and occupies an impressive enclosed plot with off road parking and raised flower beds.
Parking And Garage: A driveway provides off road parking for up to five vehicles and leads to a single garage. The garage has power, lighting and houses the wall mounted gas boiler, plus there is useful storage within the eaves.
Outside Rear: Enclosed and private, not overlooked gardens to the rear. 22ft square wood decked patio area. The remainder of the garden is made up of Indian stone patio areas plus raised flower beds. Gated access to both sides.
Rear Elevation Photo / Floor Plans: These particulars, whilst believed to be accurate, are set out as a general outline only for guidance and do not constitute any part of an offer or contract. Intending purchasers should not rely on them as statements of representation of fact, but must satisfy themselves by inspection or otherwise as to their accuracy. No person in this firm's employment has the authority to make or give any representation or warranty in respect of the property. No appliances have been tested and buyers are advised to check prior to entering into a binding contract. |
// SPDX-License-Identifier: MIT
package com.mercedesbenz.sechub.integrationtest;
import static org.junit.jupiter.api.Assertions.*;
import java.net.URL;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import com.mercedesbenz.sechub.integrationtest.SecurityTestHelper.TestTargetType;
import com.mercedesbenz.sechub.integrationtest.api.OnlyForRegularTestExecution;
@OnlyForRegularTestExecution
class SecurityTestHelperTest {
private SecurityTestHelper helperToTest;
@BeforeEach
void beforeEach() throws Exception {
helperToTest = new SecurityTestHelper(TestTargetType.SECHUB_SERVER, new URL("https://localhost"));
}
@ParameterizedTest
@CsvSource({ "DHE-DSS-AES256-SHA, SHA", "SEED-SHA, SHA", "DHE-DSS-AES256-SHA256, SHA256", ", ", "DES-CBC-MD5, MD5" })
void getMac_finds_expected_ciphers(String cipher, String expectedMAC) {
/* prepare */
CipherCheck check = new CipherCheck();
check.cipher = cipher;
/* execute */
String result = helperToTest.getMac(check);
/* test */
assertEquals(expectedMAC, result, "MAC not as expected for cipher:" + cipher);
}
}
|
Can older fungal sequence data be useful? Abstract This chapter focuses primarily on the use and evaluation of fungal nucleotide sequences to obtain information that can be used to elucidate relationships between strains and specimens. The importance of the reexamination and evaluation of older material in order to identify new species derived from existing taxa that had previously been considered under wider concepts or acknowledged as species complexes is also discussed. |
Q:
Why does everyone hate the undead?
After death, some might turn into undead by different means of magic. When brought back to life as an undead, gaining sheer new strength and magic powers is to be expected, while things like the sense of hunger and pain are lost; however, they can still starve to death or be killed, and memories are retained.
Some people are genetically set to turn automatically into undead after death, but the rest of the population can be brought back to life only by magical events.
Since all undead retain their memories, most of them have no dangerous intents (they don't eat people); actually, it's not rare to find undead who want to go back to their families.
But almost every single living person just wants to straight-out eliminate all undead from existence.
Why would people want to exterminate the undead, and what do they have to gain from it?
Corpses stop decaying after being brought back to life.
A:
Undead are harbingers of bad luck; people who spend time around them seem to constantly contract mysterious diseases.
Undead are an affront to nature, the gods demand souls in the afterlife which is impossible if the Undead don't go.
Undead are really conservative, being essentially from the previous generation. This causes a large amount of political conflict between them and their more progressive descendants.
"Undead are taking our jobs!" how can living workers possibly compete when the undead can do what they do but with potentially decades more experience, no fatigue and possibly even magic powers. A lot of people here in britain hate immigrants who are simply harder working on average than the british, this is that times twenty or so.
Plain simple xenophobia, sadly it's been highly common throughout history to fear the other purely for irrational and subjective reasons this is likely to be a massive issue with people who are literally walking corpses.
Jealousy, if I thought someone had some secret to eternal life and wasn't sharing it with me, i'd get very annoyed. If they told me "You just weren't born right" I'd get even more annoyed.
All of the above. In real life prejudices are built up over many years of systematic change brought about for a variety of reasons. There is no "one reason" for example this can be seen for all the myraid reasons people give for homophobia "God doesn't like it" to "It's not natural" to "I've met a homosexual and they sullied my opinion of the entire group." to "Everyone else hates them, why can't I?"
A:
Instinct
There's a principle which animators, game developers and some robotics people refer to as the Uncanny Valley. Things which don't closely resemble people, and don't move much like people, are not disturbing to view. And as an animation or robot gets closer to looking and moving like a person, it gets more appealing and sympathetic.
But, when something gets very close to being human, but is not quite there, it becomes viscerally horrifying to many people. Dolls can strike many people as creepy. Waxy, jerky-motioned manikins (or zombies!) are the stuff of horror movies.
There's speculation as to why people tend to react this way, but it's a commonly observed tendency. |
# assumes module-level imports: sys, math, random, numpy as np
def sample(self, global_step):
    if self.record_size < self.learn_start:
        sys.stderr.write('Record size less than learn start! Sample failed\n')
        return False, False, False

    # pick the pre-computed rank distribution matching how full the replay buffer is
    dist_index = math.floor(self.record_size / self.size * self.partition_num)
    partition_max = dist_index * self.partition_num
    distribution = self.distributions[dist_index]
    rank_list = []
    print('whole list:', distribution['strata_ends'])

    # stratified sampling: draw one rank from each stratum of the distribution
    for n in range(1, self.batch_size + 1):
        index = random.randint(max(distribution['strata_ends'][n], 1),
                               distribution['strata_ends'][n + 1])
        rank_list.append(index)

    # anneal beta towards 1 and compute normalized importance-sampling weights
    beta = min(self.beta_zero + (global_step - self.learn_start - 1) * self.beta_grad, 1)
    alpha_pow = [distribution['pdf'][v - 1] for v in rank_list]
    w = np.power(np.array(alpha_pow) * partition_max, -beta)
    w_max = max(w)
    w = np.divide(w, w_max)

    # map sampled priority ranks to experience ids and fetch the stored transitions
    rank_e_id = self.priority_queue.priority_to_experience(rank_list)
    experience = self.retrieve(rank_e_id)
    return experience, w, rank_e_id |
Role of the inflammasome, gasdermin D, and pyroptosis in nonalcoholic fatty liver disease Pyroptosis is a type of programmed cell death mediated by a multiprotein complex called the inflammasome through the proinflammatory activity of gasdermin D. This study aimed to recognize the final biological product that leads to pore formation in the cell membrane, lysis, proinflammatory cytokine release, and the establishment of an immune response. An exhaustive search engine investigation shows that an elevated immune response can induce a sustained inflammation that directly links this mechanism to nonalcoholic fatty liver disease and its progression to nonalcoholic steatohepatitis. Clinical studies and systematic reviews suggest that gasdermin D is a critical molecule between the immune response and the disease manifestation, which could be considered a therapeutic target for highly prevalent diseases characterized by perpetuated inflammatory processes. Both basic and clinical research show evidence on the expression and regulation of the inflammasome-gasdermin D-pyroptosis trinomial in the progression of nonalcoholic fatty liver disease to nonalcoholic steatohepatitis. |
Q:
Implications of using "Teutschland" in lieu of "Deutschland"
Someone mentioned recently that Teutschland was an archaic name for Deutschland and could be used to (ironically) highlight certain aspects of the country's culture, history, etc. A cursory search in Google didn't turn up anything substantial. The German Wikipedia article for Teutschland simply redirects to the main article for Deutschland, but the topic/word is not discussed in the article.
What would a speaker want to imply or express by choosing to use Teutschland as opposed to Deutschland?
A:
"Teutschland" is in fact an old spelling of "Deutschland". The designation "deutsch" originates from the Old High German word "diutisc", which meant "belonging to the people". In short, the meaning was to differentiate speakers of Germanic languages like Franconian or Gothic from their neighbors who spoke Romance languages. Over the centuries and in different areas, several different spellings were used, some beginning with "deu-", some with "teu-", some with "doi-", some with "toi-" and so on.
Today, the correct spelling is "Deutschland". Other spellings are sometimes used in historical contexts or in yearning for supposedly better "olden times": "When William II was Emperor, such a thing wouldn't have happened!" Those "olden times" do not necessarily refer to the Third Reich or the German Empire, but can refer to about any period in German history.
Mostly though, other spellings of "Deutschland" are in my experience used today to mock somebody who supposedly yearns for those "olden times" or has backwater or far right views. Say for example, in a story there's a character who can't keep up with the changes in modern society, and doesn't intend to. Such a character could be saying things like "That's not the way things are done in Doitschland!!!".
A:
Teutschland is a variant of Deutschland used in older styles of writing before spelling began to be standardized around 1850. For example in this text from 1745 you can see that the name is used beside Deutschland and other names: https://de.wikisource.org/wiki/Zedler:Teutschland
Unfortunately I'm not really sure about the etymology of the word, but I think it stems from late Middle High German Tiutschland, which spawned these different variants. (I can't find any English sources for this, but a German one is here: https://de.wiktionary.org/wiki/Deutschland)
Today the word is mostly used in an academic-historical context and means that you are talking about Germany from the Middle Ages until around 1800, because it was the usual way of writing it at that particular time.
In a modern context you could use the word to describe something that is wrong in Germany and seems out of time and archaic or to mock someone that wants to return to these old times. However it's really rarely used because most people won't understand the reference or even know the word.
A:
"Teutsch" is used to suggest that german nationalism has gone overboard with someone. This is not particularly recent, either - Kurt Tucholsky wrote (in 1923,) about post-WW I Germany:
Da steht eine ganze Nation. Sie ist krachen gegangen, weil sie teutsch
war, statt deutsch zu sein – und statt sich zur Abkehr zu wenden,
glaubt sie, es liege daran, daß sie noch nicht teutsch genug war.
which roughly translates as:
There stands a whole nation. It went down the drain because it was
"teutsch" instead of "deutsch" - and instead of turning back it
believed that this (sc. the lost war) was because it hadn't been "teutsch"
enough. |
def printed_length(text: str) -> int:
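    """Length of ``text`` once the custom ReST-like roles are rendered.

    A hedged reading of the code below (not an authoritative description):
    for each ``:yaml...:`` role the markup characters (role name plus the two
    backticks) are subtracted, and for ``:yamlparam:`` the leading document
    name before the colon is dropped as well.
    """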
length = len(text)
for key in (":yamlparam:", ":yamlkey:", ":yamltype:", ":yamlcomment:"):
i_start = text.find(key)
while i_start >= 0:
i_key_stop = i_start + len(key) + 1
i_stop = text.find("`", i_key_stop) + 1
length -= len(key) + 2
if key == ":yamlparam:":
content = text[i_key_stop : (i_stop - 1)]
docname, paramname = content.split(":")
length -= len(docname) + 1
i_start = text.find(key, i_stop)
return length |
Deciphering the binding mode of dinitramine herbicide to ct-DNA, a thermodynamic discussion Dinitramine is a herbicide that has been used to control annual grasses and broadleaf weeds in cotton and soybeans in Iran. In this study, the electrochemical behavior of dinitramine was studied by cyclic voltammetry (CV) and differential pulse voltammetry (DPV) methods. The interaction of dinitramine with ct-DNA was evaluated by CV, competitive fluorescence, UV-Vis spectroscopy, FT-IR spectroscopy, and viscosity titration. In addition, the thermodynamic parameters of the DIN-DNA complex were calculated by spectrophotometric titration. The values of ΔHbin., ΔSbin., and ΔGbin. (T = 290.65 K) of the DIN-DNA complex were +39.25 kJ mol−1, +215.71 J mol−1 K−1, and −23.45 kJ mol−1, respectively. These data revealed that the endothermic binding has its origin in the hydrophobic interactions. Also, the high positive ΔSbin. was explained according to the DIN structure that was optimized by quantum mechanical calculations. However, all data showed that the major groove binding between DIN and ct-DNA is more predominant than other binding modes. |
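A quick consistency check on the thermodynamics reported in the abstract above (added here; the per-kelvin unit of the entropy is an assumption that the arithmetic supports): the three values satisfy the Gibbs relation at the stated temperature,

\[
\Delta G_{\mathrm{bin.}} = \Delta H_{\mathrm{bin.}} - T\,\Delta S_{\mathrm{bin.}}
= 39.25\ \mathrm{kJ\,mol^{-1}} - 290.65\ \mathrm{K} \times 0.21571\ \mathrm{kJ\,mol^{-1}\,K^{-1}}
\approx -23.45\ \mathrm{kJ\,mol^{-1}},
\]

i.e. the binding is endothermic but entropy-driven, consistent with the hydrophobic interactions described above.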
<gh_stars>1-10
/*
* Copyright (c) 2020. This code created and belongs to Atlas render manager project.
* Owner and project architect: <NAME> | <EMAIL> | https://github.com/DanilAndreev
* Project: atlas-core
* File last modified: 11/12/20, 5:25 PM
* All rights reserved.
*/
import Controller from "../core/Controller";
import {Context} from "koa";
import Organization from "../entities/typeorm/Organization";
import {
IncludeUserIdsInBodyValidator,
OrganizationEditValidator,
OrganizationRegisterValidator
} from "../validators/OrganizationRequestValidators";
import RolesController from "./RolesController";
import User from "../entities/typeorm/User";
import RequestError from "../errors/RequestError";
import {getConnection, getRepository, In} from "typeorm";
import {IncludeUsernameInQueryValidator} from "../validators/UserRequestValidators";
import {findOneOrganizationByRequestParams} from "../middlewares/organizationRequestMiddlewares";
import {canManageUsers} from "../middlewares/withRoleAccessMiddleware";
import Role from "../entities/typeorm/Role";
import {UserPermissions, UserWithPermissions} from "../interfaces/UserWithPermissions";
import getUserPermissionLevelById from "../utils/organizations/getUserPermissionLevelById";
import HTTPController from "../decorators/HTTPController";
import NestedController from "../decorators/NestedController";
import Route from "../decorators/Route";
import RouteValidation from "../decorators/RouteValidation";
import RouteMiddleware from "../decorators/RouteMiddleware";
/**
* @function
* addUsersToOrg - for each userId provided in userIds, finds and adds user to organization.
* @param userIds - array of user ids.
* @param org - organization, where users will be added.
 * @param defaultRole - role which will be applied to each added user in the organization. It is org.defaultRole by default.
* @author <NAME>
*/
const addUsersToOrg = async (userIds: number[], org: Organization, defaultRole = org.defaultRole): Promise<void> => {
// TODO: make all in transaction
const errors = [];
const users = await User.find({
where: {
id: In(userIds)
},
relations: ["roles"]
});
for (const userId of userIds) {
const addUser = users.find(user => user.id === userId);
if (!addUser) {
throw new RequestError(404, "User not exist.", {errors: {notExist: userId}});
}
// if user already in org
if (org.users.find(user => user.id === addUser.id)) {
errors.push({present: addUser.id});
} else {
addUser.roles.push(defaultRole);
org.users.push(addUser);
await addUser.save();
}
}
await org.save();
if (errors.length) {
throw new RequestError(409, "Some users are already in organization.", {errors});
}
};
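// Hypothetical usage sketch (not part of the original file): inside a route handler,
// assuming `org` was loaded with its "users" and "defaultRole" relations,
//
//   await addUsersToOrg([3, 7], org);            // users 3 and 7 get org.defaultRole
//   await addUsersToOrg([9], org, auditorRole);  // user 9 gets an explicitly passed role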
/**
 * OrganizationsController - controller for /organizations routes.
* @class
* @author <NAME>
*/
@HTTPController("/organizations")
@NestedController(RolesController)
export default class OrganizationsController extends Controller {
/**
* Route __[GET]__ ___/organizations___ - get information about all organizations in the system.
* @method
* @author <NAME>
*/
@Route("GET", "/")
public async getOrganizations(ctx: Context): Promise<void> {
const orgs = await getRepository(Organization)
.createQueryBuilder("org")
.leftJoin("org.ownerUser", "ownerUser")
.select(["org", "ownerUser.id", "ownerUser.username"])
.getMany();
ctx.body = orgs;
}
/**
* Route __[POST]__ ___/organizations___ - register new organization.
* @method
* @author <NAME>
*/
@Route("POST", "/")
@RouteValidation(OrganizationRegisterValidator)
public async addOrganization(ctx: Context): Promise<void> {
// TODO: re-edit all with transactions.
if (await Organization.findOne({name: ctx.request.body.name})) {
throw new RequestError(409, "Organization with this name already exists.", {errors: {organization: 409}});
}
const authUser: User = await User.findOne(ctx.state.user.id);
if (!authUser) {
throw new RequestError(401, "Unauthorized.");
}
const organization = new Organization();
organization.ownerUser = authUser;
organization.name = ctx.request.body.name;
organization.description = ctx.request.body.description;
organization.users = [authUser];
const savedOrg = await organization.save();
const defaultRole = new Role();
if (ctx.request.body.defaultRole) {
const defaultRoleData = ctx.request.body.defaultRole;
defaultRole.name = defaultRoleData.name;
defaultRole.description = defaultRoleData.description || "Default user role.";
defaultRole.color = defaultRoleData.color || "#090";
defaultRole.permissionLevel = defaultRoleData.permissionLevel;
defaultRole.canManageUsers = defaultRoleData.canManageUsers;
defaultRole.canManageRoles = defaultRoleData.canManageRoles;
defaultRole.canCreateJobs = defaultRoleData.canCreateJobs;
defaultRole.canDeleteJobs = defaultRoleData.canDeleteJobs;
defaultRole.canEditJobs = defaultRoleData.canEditJobs;
defaultRole.canManagePlugins = defaultRoleData.canManagePlugins;
defaultRole.canManageTeams = defaultRoleData.canManageTeams;
} else {
defaultRole.name = "user";
defaultRole.description = "Default user role.";
defaultRole.color = "#090";
defaultRole.permissionLevel = 0;
defaultRole.canManageUsers = false;
defaultRole.canManageRoles = false;
defaultRole.canCreateJobs = true;
defaultRole.canDeleteJobs = false;
defaultRole.canEditJobs = false;
defaultRole.canManagePlugins = true;
defaultRole.canManageTeams = true;
}
defaultRole.organization = savedOrg;
await defaultRole.save();
await getConnection()
.createQueryBuilder()
.relation(Organization, "defaultRole")
.of(savedOrg)
.set(defaultRole);
// add roles from body
if (ctx.request.body.roles) {
for (const roleData of ctx.request.body.roles) {
const roleNamesSet = new Set(ctx.request.body.roles.map(role => role.name));
if (roleData.name === defaultRole.name || roleNamesSet.size !== ctx.request.body.roles.length) {
throw new RequestError(409, "Conflicting role names.", {errors: {roles: 409}});
}
const role = new Role();
role.name = roleData.name;
role.description = roleData.description;
role.color = roleData.color || "black"; // TODO random color
role.permissionLevel = roleData.permissionLevel;
role.organization = savedOrg;
role.canManageUsers = roleData.canManageUsers;
role.canManageRoles = roleData.canManageRoles;
role.canCreateJobs = roleData.canCreateJobs;
role.canDeleteJobs = roleData.canDeleteJobs;
role.canEditJobs = roleData.canEditJobs;
role.canManagePlugins = roleData.canManagePlugins;
role.canManageTeams = roleData.canManageTeams;
role.canEditAudit = roleData.canEditAudit;
await role.save();
}
}
// add users from body
if (ctx.request.body.userIds) {
await addUsersToOrg(ctx.request.body.userIds, savedOrg, defaultRole);
}
ctx.body = {
success: true,
organizationId: savedOrg.id
};
}
/**
* Route __[GET]__ ___/organizations/:organization_id___ - get information about organization.
* @method
* @author <NAME>
*/
@Route("GET", "/:organization_id")
public async getOrganizationById(ctx: Context): Promise<void> {
const org = await getRepository(Organization)
.createQueryBuilder("org")
.where("org.id = :id", {id: ctx.params.organization_id})
.leftJoin("org.ownerUser", "ownerUser")
.leftJoin("org.users", "user")
.leftJoin("org.roles", "role")
.leftJoin("org.defaultRole", "defaultRole")
.select([
"org",
"ownerUser.id", "ownerUser.username",
"user.id", "user.username",
"role.id", "role.name", "role.color", "defaultRole.id", "defaultRole.name",
])
.getOne();
if (!org) {
throw new RequestError(404, "Not found.");
}
ctx.body = org;
}
/**
* Route __[POST]__ ___/organizations/:organization_id___ - edit information about organization.
* @method
* @author <NAME>
*/
@Route("POST", "/:organization_id")
@RouteValidation(OrganizationEditValidator)
public async editOrganizationById(ctx: Context): Promise<void> {
        const org = await Organization.findOne(ctx.params.organization_id, {relations: ["ownerUser"]});
if (!org) {
throw new RequestError(404, "Organization not found.");
}
if (ctx.state.user.id !== org.ownerUser.id) {
throw new RequestError(403, "You are not owning this organization.");
}
if (ctx.request.body.name && ctx.request.body.name !== org.name) {
if (await Organization.findOne({name: ctx.request.body.name})) {
throw new RequestError(409, "Organization with this name already exists.",
{errors: {name: "exists"}});
} else {
org.name = ctx.request.body.name;
}
}
await org.save();
ctx.body = {success: true};
}
/**
* Route __[DELETE]__ ___/organizations/:organization_id___ - delete organization by id.
* @method
* @author <NAME>
*/
@Route("DELETE", "/:organization_id")
public async deleteOrganizationById(ctx: Context): Promise<void> {
const org = await Organization.findOne(ctx.params.organization_id, {relations: ["ownerUser"]});
if (!org) {
throw new RequestError(404, "Organization not found.");
}
if (ctx.state.user.id !== org.ownerUser.id) {
throw new RequestError(403, "Forbidden.");
}
ctx.body = await Organization.delete(org.id);
}
/**
* Route __[GET]__ ___/organizations/:organization_id/users___ - get all organization users.
* @method
* @author <NAME>
*/
@Route("GET", "/:organization_id/users")
public async getOrganizationUsers(ctx: Context): Promise<void> {
const org = await getRepository(Organization)
.createQueryBuilder("org")
.where("org.id = :id", {id: ctx.params.organization_id})
.leftJoin("org.users", "user")
.leftJoin("user.roles", "userRoles", "userRoles.organization = org.id")
.orderBy({"userRoles.permissionLevel": "DESC"})
.select([
"org", "user.id", "user.username", "userRoles"
])
.getOne();
if (!org) {
throw new RequestError(404, "Organization not found.");
}
ctx.body = org.users;
}
/**
* Route __[POST]__ ___/organizations/:organization_id/users___ - add users to organization.
* @method
* @author <NAME>
*/
@Route("POST", "/:organization_id/users")
@RouteValidation(IncludeUserIdsInBodyValidator)
@RouteMiddleware(findOneOrganizationByRequestParams({relations: ["users", "ownerUser", "defaultRole"]}))
@RouteMiddleware(canManageUsers)
public async addOrganizationUsers(ctx: Context): Promise<void> {
const org = ctx.state.organization;
await addUsersToOrg(ctx.request.body.userIds, org);
ctx.body = {success: true};
}
/**
* Route __[DELETE]__ ___/organizations/:organization_id/users___ - delete users from organization.
* @method
* @author <NAME>
*/
@Route("DELETE", "/:organization_id/users")
@RouteValidation(IncludeUserIdsInBodyValidator)
@RouteMiddleware(findOneOrganizationByRequestParams({relations: ["users", "ownerUser"]}))
@RouteMiddleware(canManageUsers)
public async deleteOrganizationUsers(ctx: Context): Promise<void> {
const org = ctx.state.organization;
const errors = [];
const users = await User.find({
where: {
id: In(ctx.request.body.userIds)
},
relations: ["roles", "roles.organization"]
});
ctx.request.body.userIds.forEach(userId => {
if (!users.find(user => user.id === userId)) {
throw new RequestError(404, "User not exist.", {errors: {notExist: userId}});
}
});
const usersToDelete = [];
for (const deleteUser of users) {
// TODO: CHECK PERMISSION LEVEL OF USER TO DELETE.
// TODO: ownerUser cannot be deleted.
// if user not in org
if (!org.users.find(usr => usr.id === deleteUser.id)) {
errors.push({missing: deleteUser});
} else {
deleteUser.roles = deleteUser.roles.filter((role) => role.organization.id !== org.id);
usersToDelete.push(deleteUser.id);
await deleteUser.save();
}
}
// [1, 2, 3] 3 - not in org
// delete [1, 2], throw error
// TODO: if all users removed - remove organization ???
org.users = org.users.filter(usr => !usersToDelete.includes(usr.id));
await org.save();
if (errors.length) {
throw new RequestError(409, "Some users are not in organization.", {errors});
}
ctx.body = {success: true};
}
/**
* Route __[GET]__ ___/organizations/:organization_id/availableUsers___ - get users that are not in organization.
* @method
* @author <NAME>
*/
@Route("GET", "/:organization_id/availableUsers")
@RouteValidation(IncludeUsernameInQueryValidator)
public async getAvailableUsers(ctx: Context): Promise<void> {
const org = await Organization.findOne(ctx.params.organization_id, {relations: ["users", "ownerUser"]});
if (!org) {
throw new RequestError(404, "Organization not found.");
}
const users = await getRepository(User)
.createQueryBuilder("user")
//.innerJoin("user.organizations", "orgs", "orgs.id == :orgId", {orgId: +org.id})
//.where(":orgId != orgs.id")
.where("user.id NOT IN (:...userIds)", {userIds: org.users.map(u => u.id)})
.andWhere("user.username like :username",
{username: `${ctx.request.query.username ?? ""}%`})
.select([
"user.id", "user.username", "user.email", "user.deleted", "user.createdAt", "user.updatedAt",
])
.getMany();
ctx.body = users;
}
/**
* Route __[GET]__ ___/organizations/:organization_id/users/:user_id___ - get user in context of organization.
* @method
* @author <NAME>
*/
@Route("GET", "/:organization_id/users/:user_id")
public async getOrgUserById(ctx: Context): Promise<void> {
const org = await Organization.findOne(ctx.params.organization_id, {relations: ["users", "ownerUser"]});
if (!org) {
throw new RequestError(404, "Organization not found.");
}
const user: UserWithPermissions = await getRepository(User)
.createQueryBuilder("user")
.leftJoin("user.organizations", "userOrg", "userOrg.id = :orgId",
{orgId: org.id})
.where({id: ctx.params.user_id})
.andWhere("userOrg.id = :orgId", {orgId: org.id})
.leftJoin("user.roles", "userRoles", "userRoles.organization = :orgId",
{orgId: org.id})
.orderBy({"userRoles.permissionLevel": "DESC"})
.select([
"user.id", "user.username", "user.email", "user.deleted", "user.createdAt", "user.updatedAt",
"userRoles"
])
.getOne();
if (!user) {
throw new RequestError(404, "User not found in this organization",
{errors: {user: 404}});
}
user.permissions = user.roles.reduce((perms, role) => ({
canManageUsers: (role.canManageUsers && perms.canManageUsers < role.permissionLevel) ? role.permissionLevel : perms.canManageUsers,
canManageRoles: (role.canManageRoles && perms.canManageRoles < role.permissionLevel) ? role.permissionLevel : perms.canManageRoles,
canCreateJobs: (role.canCreateJobs && perms.canCreateJobs < role.permissionLevel) ? role.permissionLevel : perms.canCreateJobs,
canDeleteJobs: (role.canDeleteJobs && perms.canDeleteJobs < role.permissionLevel) ? role.permissionLevel : perms.canDeleteJobs,
canEditJobs: (role.canEditJobs && perms.canEditJobs < role.permissionLevel) ? role.permissionLevel : perms.canEditJobs,
canManagePlugins: (role.canManagePlugins && perms.canManagePlugins < role.permissionLevel) ? role.permissionLevel : perms.canManagePlugins,
canManageTeams: (role.canManageTeams && perms.canManageTeams < role.permissionLevel) ? role.permissionLevel : perms.canManageTeams,
canEditAudit: (role.canEditAudit && perms.canEditAudit < role.permissionLevel) ? role.permissionLevel : perms.canEditAudit
}),
{
canManageUsers: -1,
canManageRoles: -1,
canCreateJobs: -1,
canDeleteJobs: -1,
canEditJobs: -1,
canManagePlugins: -1,
canManageTeams: -1,
canEditAudit: -1
}
);
ctx.body = user;
}
/**
* Route __[GET]__ ___/organizations/:organization_id/users/:user_id/permissions___ - get user permissions in organization.
* @method
* @author <NAME>
*/
@Route("GET", "/:organization_id/users/:user_id/permissions")
@RouteMiddleware(findOneOrganizationByRequestParams({relations: ["users", "ownerUser"]}))
public async getUserPermissions(ctx: Context): Promise<void> {
const org = ctx.state.organization;
const user = await getRepository(User)
.createQueryBuilder("user")
.leftJoin("user.organizations", "userOrg", "userOrg.id = :orgId",
{orgId: org.id})
.where({id: ctx.params.user_id})
.andWhere("userOrg.id = :orgId", {orgId: org.id})
.leftJoin("user.roles", "userRoles", "userRoles.organization = :orgId",
{orgId: org.id})
.orderBy({"userRoles.permissionLevel": "DESC"})
.select([
"user.id", "user.username", "user.email", "user.deleted", "user.createdAt", "user.updatedAt",
"userRoles"
])
.getOne();
if (!user) {
throw new RequestError(404, "User not found in this organization",
{errors: {user: 404}});
}
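// Same reduction as in getOrgUserById: per capability, keep the highest permissionLevel of a granting role (-1 = not granted).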
const permissions: UserPermissions = user.roles.reduce((perms, role) => ({
canManageUsers: (role.canManageUsers && perms.canManageUsers < role.permissionLevel) ? role.permissionLevel : perms.canManageUsers,
canManageRoles: (role.canManageRoles && perms.canManageRoles < role.permissionLevel) ? role.permissionLevel : perms.canManageRoles,
canCreateJobs: (role.canCreateJobs && perms.canCreateJobs < role.permissionLevel) ? role.permissionLevel : perms.canCreateJobs,
canDeleteJobs: (role.canDeleteJobs && perms.canDeleteJobs < role.permissionLevel) ? role.permissionLevel : perms.canDeleteJobs,
canEditJobs: (role.canEditJobs && perms.canEditJobs < role.permissionLevel) ? role.permissionLevel : perms.canEditJobs,
canManagePlugins: (role.canManagePlugins && perms.canManagePlugins < role.permissionLevel) ? role.permissionLevel : perms.canManagePlugins,
canManageTeams: (role.canManageTeams && perms.canManageTeams < role.permissionLevel) ? role.permissionLevel : perms.canManageTeams,
canEditAudit: (role.canEditAudit && perms.canEditAudit < role.permissionLevel) ? role.permissionLevel : perms.canEditAudit
}),
{
canManageUsers: -1,
canManageRoles: -1,
canCreateJobs: -1,
canDeleteJobs: -1,
canEditJobs: -1,
canManagePlugins: -1,
canManageTeams: -1,
canEditAudit: -1
}
);
ctx.body = permissions;
}
/**
* Route __[GET]__ ___/organizations/:organization_id/users/:user_id/permissionLevel___ - get the user's permission level in the organization.
* @method
* @author <NAME>
*/
@Route("GET", "/:organization_id/users/:user_id/permissionLevel")
public async getUserPermissionLevel(ctx: Context): Promise<void> {
// const user = User.findOne(ctx.params.user_id);
const permissionLevel = await getUserPermissionLevelById(ctx.params.user_id, ctx.params.organization_id);
if (permissionLevel === undefined) {
throw new RequestError(404, "User not found in this organization.");
}
ctx.body = permissionLevel;
}
} |
# Repository: RicardoTaverna/django_twilio_whatsapp_notificator
# Generated by Django 4.0 on 2021-12-10 05:37
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('user', '0003_appointment_phone'),
]
operations = [
migrations.RemoveField(
model_name='appointment',
name='phone',
),
]
|
/*
* Copyright 2014-2020 Real Logic Limited.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.aeron.driver;
import io.aeron.driver.media.SendChannelEndpoint;
import io.aeron.driver.media.UdpChannel;
import org.agrona.concurrent.status.AtomicCounter;
/**
* Supplier of channel endpoints which extend {@link SendChannelEndpoint} to add specialised behaviour for the sender.
*/
@FunctionalInterface
public interface SendChannelEndpointSupplier
{
/**
* A new instance of a specialised {@link SendChannelEndpoint}.
*
* @param udpChannel on which the sender will send.
* @param statusIndicator for the channel.
* @param context for the configuration of the driver.
* @return a new instance of a specialised {@link SendChannelEndpoint}.
*/
SendChannelEndpoint newInstance(UdpChannel udpChannel, AtomicCounter statusIndicator, MediaDriver.Context context);
}
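/*
 * Illustrative sketch, not part of the Aeron sources: a specialised supplier can be a small class
 * (or a lambda) handed to the driver configuration. Wiring it via
 * new MediaDriver.Context().sendChannelEndpointSupplier(...) and the public SendChannelEndpoint
 * constructor used below are assumptions for illustration; consult the MediaDriver.Context API.
 */
class LoggingSendChannelEndpointSupplier implements SendChannelEndpointSupplier
{
    public SendChannelEndpoint newInstance(
        final UdpChannel udpChannel, final AtomicCounter statusIndicator, final MediaDriver.Context context)
    {
        // Log the channel being opened, then defer to the standard endpoint behaviour.
        System.out.println("creating send channel endpoint for " + udpChannel);
        return new SendChannelEndpoint(udpChannel, statusIndicator, context);
    }
}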
|
Snow day? They just call this Monday.
With the District shut down and most of Washington burying itself under the covers, hoping for this falling white stuff to just leave us the hell alone, a few members of Congress are happily going about their work. Although votes have been canceled for the day, a few senators from snowier climates are in the office and are not above a little polite ribbing of their colleagues who decided to take the day off. Sen. Heidi Heitkamp, D-N.D., began it this morning with a shot of the Capitol.
Heitkamp's staff is working through the snow day, although the senator herself is still traveling back from North Dakota, where she attended an event Saturday in negative 52 degree weather, according to a spokesperson.
Sen. Patrick Leahy, D-Vt., was even sassier, braving the elements on his office balcony sans winter coat.
Sen. Lisa Murkowski, R-Alaska, upped the social-media game by posting a Vine of the falling snow from her office balcony.
" 'Snow day?' You mean 'Week day at the office,' yes?" one of Murkowski's staffers said in an email. Sen. Chris Murphy, D-Conn., and his staff were in the office promptly at 7:30 a.m. The senator even took a little time out of his morning to head up to the MSNBC studios in Northwest Washington for an appearance on Morning Joe before snapping this snow shot.
Sen. Dan Coats, R-Ind., had hoped to join them, hopping on an early flight back to Washington, but was burned by some overconfidence in the District's ability to respond to a major snowstorm.
This is the third time this year that the federal government has shut down due to snow. Congress is scheduled to get back to work officially on Tuesday, though the Senate has delayed votes until Wednesday.
Updated Tuesday at 11 a.m.
Correction: The original version of this story mis-identified Sen. Heidi Heitkamp as a Republican. |
A year in the life of a village where Serbs and Albanians live side by side.
In February 2008, Kosovo's ethnic Albanian government declared independence from Serbia.
By December, some 50 countries recognised Kosovo but Serbia says this is something it will never accept.
Following the return in late 2007 of a handful of Serb families who fled during the 1999 war, the village of Berkovo is one of the few places in Kosovo where ethnic Serbs and Albanians live side by side as neighbours.
In Kosovo: A Year of Fear and Hope, Al Jazeera's Barnaby Phillips follows two families in Berkovo - one Albanian and one Serb - throughout this historic year.
The aim: to see what the prospects are of two communities living in peace in a land that has been divided for so long by hatred and war.
Click here to read Barnaby Phillips' article on the making of Kosovo: A Year of Fear and Hope. |
/** @module @airtable/blocks/ui: ViewPicker */ /** */
import PropTypes from 'prop-types';
import * as React from 'react';
import View from '../models/view';
import Table from '../models/table';
import { ViewType } from '../types/view';
import { SharedSelectBaseProps } from './select';
/**
* Props shared between the {@link ViewPicker} and {@link ViewPickerSynced} components.
*/
export interface SharedViewPickerProps extends SharedSelectBaseProps {
/** The parent table model to select views from. If `null` or `undefined`, the picker won't render. */
table?: Table | null;
/** An array indicating which view types can be selected. */
allowedTypes?: Array<ViewType>;
/** If set to `true`, the user can unset the selected view. */
shouldAllowPickingNone?: boolean;
/** The placeholder text when no view is selected. Defaults to `'Pick a view...'` */
placeholder?: string;
/** A function to be called when the selected view changes. */
onChange?: (viewModel: View | null) => void;
}
export declare const sharedViewPickerPropTypes: {
onMouseEnter: PropTypes.Requireable<(...args: any[]) => any>;
onMouseLeave: PropTypes.Requireable<(...args: any[]) => any>;
onClick: PropTypes.Requireable<(...args: any[]) => any>;
hasOnClick: PropTypes.Requireable<boolean>;
size: PropTypes.Validator<any>;
autoFocus: PropTypes.Requireable<boolean>;
disabled: PropTypes.Requireable<boolean>;
id: PropTypes.Requireable<string>;
name: PropTypes.Requireable<string>;
tabIndex: PropTypes.Requireable<number>;
className: PropTypes.Requireable<string>;
style: PropTypes.Requireable<object>;
'aria-label': PropTypes.Requireable<string>;
'aria-labelledby': PropTypes.Requireable<string>;
'aria-describedby': PropTypes.Requireable<string>;
table: PropTypes.Requireable<Table>;
allowedTypes: PropTypes.Requireable<ViewType[]>;
shouldAllowPickingNone: PropTypes.Requireable<boolean>;
placeholder: PropTypes.Requireable<string>;
onChange: PropTypes.Requireable<(...args: any[]) => any>;
};
/**
* Props for the {@link ViewPicker} component. Also accepts:
* * {@link SelectStyleProps}
*
* @docsPath UI/components/ViewPicker
*/
interface ViewPickerProps extends SharedViewPickerProps {
/** The selected view model. */
view?: View | null;
}
declare const ForwardedRefViewPicker: React.ForwardRefExoticComponent<ViewPickerProps & React.RefAttributes<HTMLSelectElement>>;
export default ForwardedRefViewPicker;
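// Usage sketch (illustrative only, not part of the generated typings). Assuming a `table` model
// obtained elsewhere in the block (e.g. from useBase()) and React state for the selection:
//
//   const [view, setView] = useState<View | null>(null);
//   <ViewPicker table={table} view={view} shouldAllowPickingNone={true} onChange={setView} />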
//# sourceMappingURL=view_picker.d.ts.map |
# Repository: willist/urllib2_openers
import os
import shutil
import urllib2
import unittest
from u2bs import CacheHandler, ThrottlingProcessor
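# These tests exercise the custom urllib2 handlers from u2bs: a first request to a URL should come
# back without the marker header, while a repeated request should carry the handler's header
# (x-fs-cache for a cache hit, x-throttling for a throttled call).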
class Tests(unittest.TestCase):
def setUp(self):
# Clearing cache
if os.path.exists(".urllib2cache"):
shutil.rmtree(".urllib2cache")
# Clearing throttling timeouts
t = ThrottlingProcessor()
t.lastRequestTime.clear()
self.path = 'https://www.python.org/'
self.cache_header = 'x-fs-cache'
self.throttling_header = 'x-throttling'
def testCache(self):
opener = urllib2.build_opener(CacheHandler(".urllib2cache"))
resp = opener.open(self.path)
self.assert_(self.cache_header not in resp.info())
resp = opener.open(self.path)
self.assert_(self.cache_header in resp.info())
def testThrottle(self):
opener = urllib2.build_opener(ThrottlingProcessor(5))
resp = opener.open(self.path)
self.assert_(self.throttling_header not in resp.info())
resp = opener.open(self.path)
self.assert_(self.throttling_header in resp.info())
def testCombined(self):
opener = urllib2.build_opener(
CacheHandler(".urllib2cache"), ThrottlingProcessor(1))
resp = opener.open(self.path)
self.assert_(self.cache_header not in resp.info())
self.assert_(self.throttling_header not in resp.info())
resp = opener.open(self.path)
self.assert_(self.cache_header in resp.info())
self.assert_(self.throttling_header not in resp.info())
|
New Rule: If America can't get its act together, it must lose the bald eagle as our symbol and replace it with the YouTube video of the puppy that can't get up. As long as we're pathetic, we might as well act like it's cute. I don't care about the president's birth certificate, I do want to know what happened to "Yes we can." Can we get out of Iraq? No. Afghanistan? No. Fix health care? No. Close Gitmo? No. Cap-and-trade carbon emissions? No. The Obamas have been in Washington for ten months and it seems like the only thing they've gotten is a dog.
Well, I hate to be a nudge, but why has America become a nation that can't make anything bad end, like wars, farm subsidies, our oil addiction, the drug war, useless weapons programs - oh, and there's still 60,000 troops in Germany - and can't make anything good start, like health care reform, immigration reform, rebuilding infrastructure. Even when we address something, the plan can never start until years down the road. Congress's climate change bill mandates a 17% cut in greenhouse gas emissions... by 2020! Fellas, slow down, where's the fire? Oh yeah, it's where I live, engulfing the entire western part of the United States!
We might pass new mileage standards, but even if we do, they wouldn't start until 2016. In that year, our cars of the future will glide along at a breathtaking 35 miles-per-gallon. My goodness, is that even humanly possible? Cars that get 35 miles-per-gallon in just six years? Get your head out of the clouds, you socialist dreamer! "What do we want!? A small improvement! When do we want it!? 2016!"
When it's something for us personally, like a laxative, it has to start working now. My TV remote has a button on it now called "On Demand". You get your ass on my TV screen right now, Jon Cryer, and make me laugh. Now! But when it's something for the survival of the species as a whole, we phase that in slowly.
Folks, we don't need more efficient cars. We need something to replace cars. That's what's wrong with these piddly, too-little-too-late half-measures that pass for "reform" these days. They're not reform, they're just putting off actually solving anything to a later day, when we might by some miracle have, a) leaders with balls, and b) a general populace who can think again. Barack Obama has said, "If we were starting from scratch, then a single-payer system would probably make sense." So let's start from scratch.
Even if they pass the shitty Max Baucus health care bill, it doesn't kick in for 4 years, during which time 175,000 people will die because they're not covered, and about three million will go bankrupt from hospital bills. We have a pretty good idea of the Republican plan for the next three years: Don't let Obama do anything. What kills me is that that's the Democrats' plan, too.
We weren't always like this. Inert. In 1965, Lyndon Johnson signed Medicare into law and 11 months later seniors were receiving benefits. During World War II, virtually overnight FDR had auto companies making tanks and planes only. In one eight year period, America went from JFK's ridiculous dream of landing a man on the moon, to actually landing a man on the moon.
This generation has had eight years to build something at Ground Zero. An office building, a museum, an outlet mall, I don't care anymore. I'm tempted to say that, symbolically, all America can do lately is keep digging a hole, but Ground Zero doesn't represent a hole. It is a hole. America: Home of the Freedom Pit. Ironically, it's spitting distance from Wall Street, where they knock down buildings a different way - through foreclosure.
That's the ultimate sign of our lethargy: millions thrown out of their homes, tossed out of work, lost their life savings, retirements postponed - and they just take it. 30% interest on credit cards? It's a good thing the Supreme Court legalized sodomy a few years ago.
Why can't we get off our back? Is it something in the food? Actually, yes. I found out something interesting researching last week's editorial on how we should be taxing the unhealthy things Americans put into their bodies, like sodas and junk foods and gerbils. Did you know that we eat the same high-fat, high-carb, sugar-laden shit that's served in prisons and in religious cults to keep the subjects in a zombie-like state of lethargic compliance? Why haven't Americans arisen en masse to demand a strong public option? Because "The Bachelor" is on. We're tired and our brain stems hurt from washing down French fries with McDonald's orange drink.
The research is in: high-fat diets make you lazy and stupid. Rats on an American diet weren't motivated to navigate their maze and once in the maze they made more mistakes. And, instead of exercising on their wheel, they just used it to hang clothes on. Of course we can't ban assault rifles - we're the first generation too lazy to make its own coffee. We're the generation that invented the soft chocolate chip cookie: like a cookie, only not so exhausting to chew. I ask you, if the food we're eating in America isn't making us stupid, how come the people in Carl's Jr. ads never think to put a napkin over their pants? |
Land and Class in Kenya. Christopher Leo. Toronto: University of Toronto Press, 1984, pp. xii, 224.
Unlike Mazzeo, Meyns identifies complementarities as well as contradictions: "diversification as an important element of national self-reliant development strategy can strengthen collective self-reliance by increasing complementarities between developing countries. As competition between developing countries is largely determined by their dependence on the world economic system, reducing dependence by diversification will also reduce such competition". The analytical, conceptual and historical review by the editor situates the comparative African cases, including his own on lessons from the defunct East African Community, and is a match for his cautious conclusion: African inequalities, inter- and intra-state, may yet undermine continental self-reliance. Mazzeo's concern for elite interests points to the possibility of more materialist analysis, which Mytelka and Meyns advance somewhat. The collective tone of the volume is revisionist regionalism, from the legalism of Woodie and political preoccupations of Gordenker and Mathews to the orthodox (West African) case studies of Asante and Fredland. In a spirit of relevant revisionism, transcending federalist-functionalist and subregional-continental debates, Mazzeo advances his own dialectical reformulation based on the chequered history of African regionalism and the redefinition of development itself: "regional cooperation among developing countries, to start with, should be an instrument of nation-building and satisfaction of basic needs". The new international division of labour makes the reconceptualization and revitalization of regionalism in Africa an imperative; the crisis merely reinforces the urgency of implementation. |
package mage.cards.i;
import mage.MageInt;
import mage.abilities.Ability;
import mage.abilities.Mode;
import mage.abilities.common.EntersBattlefieldTriggeredAbility;
import mage.abilities.effects.common.GainLifeEffect;
import mage.abilities.effects.common.continuous.BoostTargetEffect;
import mage.cards.CardImpl;
import mage.cards.CardSetInfo;
import mage.constants.CardType;
import mage.constants.SubType;
import mage.target.common.TargetCreaturePermanent;
import java.util.UUID;
/**
* @author TheElk801
*/
public final class InspiringBard extends CardImpl {
public InspiringBard(UUID ownerId, CardSetInfo setInfo) {
super(ownerId, setInfo, new CardType[]{CardType.CREATURE}, "{3}{G}");
this.subtype.add(SubType.ELF);
this.subtype.add(SubType.BARD);
this.power = new MageInt(3);
this.toughness = new MageInt(3);
// When Inspiring Bard enters the battlefield, choose one —
// • Bardic Inspiration — Target creature gets +2/+2 until end of turn.
Ability ability = new EntersBattlefieldTriggeredAbility(new BoostTargetEffect(2, 2));
ability.addTarget(new TargetCreaturePermanent());
ability.getModes().getMode().withFlavorWord("Bardic Inspiration");
// • Song of Rest — You gain 3 life.
ability.addMode(new Mode(
new GainLifeEffect(3)
).withFlavorWord("Song of Rest"));
this.addAbility(ability);
}
private InspiringBard(final InspiringBard card) {
super(card);
}
@Override
public InspiringBard copy() {
return new InspiringBard(this);
}
}
|
Texas is getting lots of attention from the Republican Presidential candidates. Newt Gingrich and Herman Cain will debate this weekend in The Woodlands (a suburb just north of Houston) with a focus on our nation’s economic and fiscal issues. As an accountant, I am happy to see that candidates are taking a closer look at our country’s finances. Cain has laid out his 9-9-9 plan, which offers an alternative to our current tax system. Even though I think there are some things in his plan that need to be revisited, it puts economics in perspective for the average American – something no other candidate had done before.
The national economic disease is also apparent here in Houston at the local level: out-of-control spending, tax hikes, and a deficit that leaves taxpayers soured on career politicians and their big-interest lobbyists. The time has come for candidates to be “politically correct” and have a candid conversation with voters about the real economic situation facing our city and country today as a result of the current administration’s mismanagement. At the local level, I took matters into my own hands and threw my hat in the ring for Houston’s City Council At-Large Position #2. At a time when many Americans are unemployed or underemployed and tightening their financial belts, I find it appalling that our elected officials continue to mismanage tax dollars and raise taxes on the backs of small businesses and taxpayers.
I am fighting against wasteful government spending and programs, like Prop 1 and beautification projects, that only profit special interests. To put it in perspective, answer this question: If you are living paycheck to paycheck and in debt, do you pay for decorations for your home, or do you pay your mortgage and utilities? I want to ensure that Houstonians’ tax dollars are allocated correctly. The direction of our city is mirroring that of our country and is no longer consistent with our founding values.
I look forward to hearing what all of the Republican Presidential candidates have to say about the urgent financial situation we find ourselves in, and how we will navigate ourselves out of it. As a mother and homeowner, I am glad to see that this debate format is organized to really discuss the issues in depth. I hope to see more debates like this with all the Republican candidates. In a separate posting, I will share some simple economic tips that can help you get out of debt and/or save money.
Gingrich is my favorite candidate. He should be everyone’s favorite since he is the best man for the job, but what can you do? You can vote for your favorite candidate for the Republican nomination for president at http://www.nationalsponsor.com. Great content there as well, videos, articles, the like. |