Turkey knockdown in successive flocks. Turkey knockdown was diagnosed in three of five flocks of hen turkeys on a single farm within a 12-mo period. The age of birds in the affected flocks ranged from 6 wk 2 days to 7 wk 4 days. The attack rate ranged from 0.02% to 0.30%, with a case fatality rate in affected birds ranging from 0 to 74%. The diagnosis was made on the basis of clinical signs and histopathologic lesions associated with knockdown. The feed in all flocks contained bacitracin methylene disalicylate and monensin (Coban). Affected birds were recumbent, demonstrated paresis, and were unable to vocalize. Postmortem examination revealed few significant lesions, although pallor of the adductor muscles and petechiation in the adductor and gastrocnemius muscles were noted. Birds that had been recumbent for extended periods were severely dehydrated. Consistent microscopic lesions included degeneration, necrosis, and regeneration of the adductor, gastrocnemius, and abdominal muscles. No lesion in cardiac tissue was noted. Results of our investigation indicated that changes in water consumption, vitamin E status, and brooder-to-finisher movement correlated with the occurrence of knockdown. Turkey knockdown was defined in 1993 as any condition identified in a turkey flock that has affected the neuromuscular system to a degree that a turkey is unable to walk or stand. This definition was later modified to "...neuromuscular or skeletal systems to a degree that a turkey is unable to walk or stand properly." Knockdown may be associated with numerous feed, management, or disease factors, alone or in combination. Dosage of monensin, feed restriction/gorging, water restriction, heat stress, copper, mycotoxins, sodium chloride in feed, and sulfa drugs have all been suggested as contributing factors; however, laboratory studies have not succeeded in duplicating the condition. 
This report presents observations from a single farm at which three of five hen flocks in a single year experienced knockdown. When a flock was reported as affected, a detailed investigation was initiated within 3 hr. The fifth flock was followed on a twice weekly basis from 0 to 8 wk of age to determine if initiating events were evident, but knockdown did not occur. |
An explicit solution to a class of constrained optimal control problems Optimal control problems that admit closed-form solutions are rare, so numerical methods have to be used to obtain an approximate solution in most cases. This paper derives an explicit solution to a class of constrained optimal control problems that arises in the investigation of nonlinear L<sub>2</sub>-gain properties. The optimal control problem is a nonlinear problem in ℝ<sub>2</sub> even for linear systems in ℝ, due to the presence of an L<sub>2</sub>-norm constraint on the input signals. The solution to the optimal control problem considered is obtained via a connection to the parameterized linear ℌ<sub>∞</sub>-control solutions in ℝ, which are explicitly solvable. |
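The abstract's problem class can be put in general terms. The following formulation is a hedged reconstruction, not taken from the paper (the exact dynamics f, g, h, weights, and dimensions are not given in the abstract); it uses the standard form of an L<sub>2</sub>-gain analysis problem with an L<sub>2</sub>-norm constraint on the input:

```latex
% Generic constrained optimal control problem of the kind described:
% maximize the output energy over disturbance inputs of bounded L2 norm.
% f, g, h and the bound R are placeholders, not data from the paper.
\[
  J(R) \;=\; \sup_{\substack{w \in L_2 \\ \|w\|_{L_2} \le R}}
             \int_0^\infty \|z(t)\|^2 \, dt,
  \qquad
  \text{s.t.}\quad \dot{x} = f(x) + g(x)\,w, \quad z = h(x), \quad x(0) = 0.
\]
```

Under this convention, a system has L<sub>2</sub>-gain at most γ when J(R) ≤ γ²R² for every R ≥ 0, which is why the norm bound on the input signals enters the optimal control problem as an explicit constraint.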
If you go to the famous Western Wall in Jerusalem, which is actually the western retaining wall of Herod’s Temple Mount and Judaism’s holiest prayer site, and then turn around, you will see at the other side of the plaza an area of less than half an acre that has recently been excavated. Large-scale archaeological excavations were conducted here between 2005 and 2010 on behalf of the Israel Antiquities Authority, initiated and underwritten by the Western Wall Heritage Foundation and directed by the authors of this article. What we have found sheds light on important transitional phases in Jerusalem’s history, but it also raises fascinating new questions. |
/**
* Main entry point.
*
* @param args the parameters
*/
public static void main(String[] args) {
    try {
        luisAuthoringKey = System.getenv("AZURE_LUIS_AUTHORING_KEY");
        LUISAuthoringClient authoringClient = LUISAuthoringManager
                .authenticate(com.microsoft.azure.cognitiveservices.language.luis.authoring.EndpointAPI.US_WEST, luisAuthoringKey);
        LuisRuntimeAPI runtimeClient = LuisRuntimeManager
                .authenticate(com.microsoft.azure.cognitiveservices.language.luis.runtime.EndpointAPI.US_WEST, luisAuthoringKey);
        runLuisAuthoringSample(authoringClient);
        runLuisRuntimeSample(runtimeClient);
    } catch (Exception e) {
        System.out.println(e.getMessage());
        e.printStackTrace();
    }
} |
#ifndef UNIQID_H
#define UNIQID_H
#define INVALID_UNIQ_ID 0xffffffff
DWORD
DeleteRecord(
IN ULONG UniqId
);
#endif
|
<filename>src/repos/dailyDataRepo.py
import datetime as dt
from typing import List, Tuple
import psycopg2
from src.typeDefs.dailyDataRow import IDailyDataRow
class DailyDataRepo():
    def __init__(self, dbHost: str, dbname: str, uname: str, dbPass: str) -> None:
        """constructor method

        Args:
            dbHost (str): database host
            dbname (str): database name
            uname (str): database username
            dbPass (str): database password
        """
        self.dbHost = dbHost
        self.dbname = dbname
        self.uname = uname
        self.dbPass = dbPass

    def insertRows(self, dataRows: List[IDailyDataRow]) -> bool:
        dbConn = None
        dbCur = None
        isInsertSuccess = True
        try:
            # get the connection object
            dbConn = psycopg2.connect(host=self.dbHost, dbname=self.dbname,
                                      user=self.uname, password=self.dbPass)
            # get cursor for raw data table
            dbCur = dbConn.cursor()
            # create sql for insertion
            dataInsertionTuples: List[Tuple] = [
                (x["genId"], x["dataType"],
                 dt.datetime.strftime(x["targetDt"], "%Y-%m-%d"),
                 x["dataVal"], 0) for x in dataRows]
            dataText = ','.join(dbCur.mogrify('(%s,%s,%s,%s,%s)', row).decode("utf-8")
                                for row in dataInsertionTuples)
            sqlTxt = ('INSERT INTO public.gens_daily_data('
                      'g_id, data_type, data_date, data_val, rev) '
                      'VALUES {0} on conflict (g_id, data_type, data_date, rev) '
                      'do update set data_val = excluded.data_val').format(dataText)
            # execute the sql to perform insertion
            dbCur.execute(sqlTxt)
            # commit the changes
            dbConn.commit()
        except Exception as e:
            isInsertSuccess = False
            print('Error while bulk insertion of generator daily data into db')
            print(e)
        finally:
            # close cursor and connection to avoid leaks
            if dbCur is not None:
                dbCur.close()
            if dbConn is not None:
                dbConn.close()
        return isInsertSuccess
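The `on conflict ... do update` (upsert) statement built above can be demonstrated end to end with the standard library's `sqlite3`, which supports the same clause (SQLite 3.24+). The table mirrors `gens_daily_data`, but the schema and values here are illustrative stand-ins, not the production PostgreSQL setup:

```python
# Minimal sketch of the upsert pattern used by insertRows, via sqlite3.
# Requires SQLite >= 3.24 for ON CONFLICT ... DO UPDATE.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# composite primary key matching the conflict target
cur.execute(
    "CREATE TABLE gens_daily_data ("
    "g_id INTEGER, data_type TEXT, data_date TEXT, data_val REAL, rev INTEGER, "
    "PRIMARY KEY (g_id, data_type, data_date, rev))"
)
rows = [(1, "energy", "2020-01-01", 100.0, 0),
        (1, "energy", "2020-01-01", 250.0, 0)]  # second row conflicts
for row in rows:
    cur.execute(
        "INSERT INTO gens_daily_data (g_id, data_type, data_date, data_val, rev) "
        "VALUES (?, ?, ?, ?, ?) "
        "ON CONFLICT (g_id, data_type, data_date, rev) "
        "DO UPDATE SET data_val = excluded.data_val",
        row,
    )
conn.commit()
cur.execute("SELECT data_val FROM gens_daily_data")
print(cur.fetchall())  # → [(250.0,)]  (the conflict updated the value in place)
```

The same conflict-target/`excluded` mechanics apply in PostgreSQL, which is why only the latest `data_val` survives for a given key.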
|
When you examine the Presidential Debate held last Friday, and when armchair pundits (including myself) discuss the performances of John McCain and Barack Obama, the main factor that inevitably arises is expectation. Everyone expected Obama to perform better than McCain when it came to the economy. And by most accounts, he did. But while certainly a win for the Obama campaign, he merely met expectations.
When the debate topic turned to foreign affairs, McCain was considered the strong candidate. But, in my opinion, McCain did not out-perform Obama. Obama held his own and the second half of the debate was a draw. But the fact that McCain did not deliver a knockout means Obama exceeded expectations. In that regard, Obama cemented the debate outcome as a victory for him.
Managing expectations is an important part of the debate process, and of leadership in general. If you promise too much and deliver only 80% of it, then you will be perceived as a failure. But if you promise something more reasonable, it is easier to be seen as succeeding or surpassing your promise. And this is true of an employee's output, a project's outcome, or a company's revenue. If you expect too much, you inevitably will be disappointed. This is true in business (the AOL Time Warner merger), tech (the iPhone 3G), or art (Star Wars: Episode I).
Of course, managing expectations can sometimes lead to incongruence. This Thursday, the Vice-Presidential debate will inevitably be judged a draw. Why? Sarah Palin only has to sound halfway decent to avoid being labeled a failure; the various interviews with her have driven expectations that low. Then again, Joe Biden may deliver one of his gaffes and simply hand the debate over to Palin. But since Biden putting his foot in his mouth has become somewhat expected, it may just be a wash. We will have to wait and see.
Either way, I expect an interesting debate. |
<filename>compiler/src/core/Object.cpp
//
// Another CPU Language - ACPUL - a{};
//
// THIS SOFTWARE IS PROVIDED BY THE FREEBSD PROJECT ``AS IS'' AND ANY EXPRESS OR
// IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
// SHALL THE FREEBSD PROJECT OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
// INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
// OF SUCH DAMAGE.
//
// Made by d08ble, thanks for watching.
//
#include <iostream>
#include "Object.h"
#include "Name.h"
#include "CoreData.h"
#include "ErrorNumbers.h"
#include "Utils.h"
using namespace acpul;
extern CoreData *acpulCoreData;
static int _uid_global;
Object::Object()
: _followObject(NULL)
, _codeBlock(NULL)
, _parent(NULL)
, _blockUpdated(false)
, _link(NULL)
, _params(NULL)
, _ident(L"<NULLIDENT>")
{
_block.setObject(this);
_uid = ++_uid_global;
}
Object::~Object()
{
releaseParams();
}
void Object::addExpressionNode(stree &tree, stree::iterator node)
{
_block.addExpression(tree, node);
_blockUpdated = true;
}
Object *Object::queryObject2(const Name *name, int i)
{
Object *o = this;
Object *found = NULL;
for (; i < name->count(); i++)
{
const wchar_t *ident = (*name)[i];
found = o->objectForIdent(ident);
if (!found)
{
Object *o1 = NULL;
if (o->isFollowing())
{
o1 = o->followObject()->queryObject2(name, i);
}
if (!o1 && o->isLink())
{
o1 = o->link()->queryObject2(name, i);
}
return o1;
}
o = found;
}
return found;
}
Object *Object::queryObject(const Name *name)
{
Object *o = this;
Object *found = NULL;
for (int i = 0; i < name->count(); i++)
{
const wchar_t *ident = (*name)[i];
found = o->objectForIdent(ident);
if (!found)
return NULL;
o = found;
}
return found;
}
Object *Object::queryObjectWithParents(const Name *name)
{
#if 0
if (name.isParam)
{
return queryObject(name);
}
#endif
Object *object = NULL;
for (Object *o = this; o != NULL; o = o->parent())
{
object = o->queryObject2(name, 0);
if (object)
break;
}
return object;
}
void Object::mergeObjects(Object *target)
{
omap mergeMap;
mergeObjectsToMap(mergeMap);
printf("mergeObjects replace: map {\n");
omap &objects = target->objects();
omap::iterator i;
for (i = mergeMap.begin(); i != mergeMap.end(); i++)
{
// replace object
printf("%S\n", i->first);
objects[i->first] = i->second;
}
printf("mergeObjects replace: map }\n");
}
void Object::mergeObjectsToMap(omap &map)
{
// copy objects to object
omap::iterator i;
for (i = _objects.begin(); i != _objects.end(); i++)
{
// skip if present
if (map[i->first])
continue;
map[i->first] = i->second;
}
if (_followObject && _block.isFollowing())
{
printf("Follow %p %S\n", _followObject, _followObject->ident());
_followObject->dumpObjects();
_followObject->mergeObjectsToMap(map);
}
if (isLink())
{
printf("Link %p %S\n", _link, _link->ident());
_link->dumpObjects();
_link->mergeObjectsToMap(map);
}
}
bool Object::isLink()
{
// update block
if (_blockUpdated)
{
_blockUpdated = false;
_link = _block.getLinkObject();
}
return !!_link;
}
bool Object::isFollowing()
{
// update block -- maybe bug
//if (_blockUpdated)
//{
// _blockUpdated = false;
//}
return _followObject && _block.isFollowing();
}
bool Object::getExpressionValueAsNumber(float &v)
{
if (_block.isLink())
{
Object *link = _block.getLinkObject();
if (link)
{
return link->getExpressionValueAsNumber(v);
}
else
{
error(L"Link object is undefined %p", this);
return false;
}
}
return _block.getNumber(v);
}
Name *Object::getExpressionValueAsName()
{
if (_block.isLink())
{
Object *link = _block.getLinkObject(); // link will be null for undefined object
if (link)
{
return link->getExpressionValueAsName();
}
else
{
return acpulCoreData->nameForIdent(_ident);
}
}
error(L"Not link object %p for name query", this);
return NULL;
}
/* -- old code
bool Object::getExpressionValueAsIdent(const wchar_t &*v)
{
if (_block.isLink())
{
Object *link = _block.getLinkObject(); // link will be null for undefined object
if (link)
{
return link->getExpressionValueAsIdent(v);
}
else
{
v = _ident;
return true;
}
}
error(L"Not link object %p for name query", this);
return false; //_block.getName(v);
}*/
void Object::setObjectForIdent(const wchar_t *ident, Object *object)
{
_objects[ident] = object;
}
Object *Object::objectForIdent(const wchar_t *ident)
{
//
// PARAMS FIRST
//
if (_params)
{
omap::iterator j = _params->find(ident);
if (j != _params->end())
return j->second;
}
//
// OBJECTS SECOND
//
omap::iterator i = _objects.find(ident);
return (i == _objects.end()) ? NULL : i->second;
}
void Object::follow(Object *object)
{
object->parent()->setObjectForIdent(object->ident(), this);
_followObject = object;
setIdent(object->ident());
setParent(object->parent());
}
void Object::releaseParams()
{
if (_params)
{
delete _params;
_params = NULL;
}
}
omap &Object::saveParams()
{
omap *params = _params;
_params = new omap;
return *params;
}
void Object::restoreParams(omap &params)
{
releaseParams();
_params = &params;
}
omap *Object::setParams(omap *params)
{
omap *params1 = _params;
_params = params;
return params1;
}
void Object::unsetParams(omap *params)
{
_params = params;
}
void Object::error(const wchar_t *s, ...)
{
va_list ap;
va_start(ap, s);
acpulCoreData->error(EN_OBJ_ERR, s, ap);
va_end(ap);
}
|
<reponame>BSGZ123/nxtframework
package com.newxton.nxtframework.service;
import com.newxton.nxtframework.entity.NxtShoppingCartProduct;
import java.util.List;
/**
 * Service interface for the (NxtShoppingCartProduct) table
 *
 * @author makejava
 * @since 2020-11-14 21:45:47
 */
public interface NxtShoppingCartProductService {

    /**
     * Query the object list by shoppingCartId and productId
     *
     * @param shoppingCartId shopping cart primary key
     * @param productId product primary key
     * @return object list
     */
    List<NxtShoppingCartProduct> queryByShoppingCartIdProductId(Long shoppingCartId, Long productId);

    /**
     * Query a single row by ID
     *
     * @param id primary key
     * @return entity object
     */
    NxtShoppingCartProduct queryById(Long id);

    /**
     * Query multiple rows
     *
     * @param offset query start position
     * @param limit number of rows to fetch
     * @return object list
     */
    List<NxtShoppingCartProduct> queryAllByLimit(int offset, int limit);

    /**
     * Query all selected products in the specified shopping cart
     *
     * @param shoppingCartId shopping cart primary key
     * @return object list
     */
    List<NxtShoppingCartProduct> queryAllSelectedProductByShoppingCartId(Long shoppingCartId);

    /**
     * Query all products in the specified shopping cart
     *
     * @param shoppingCartId shopping cart primary key
     * @return object list
     */
    List<NxtShoppingCartProduct> queryAllProductByShoppingCartId(Long shoppingCartId);

    /**
     * Insert a row
     *
     * @param nxtShoppingCartProduct entity object
     * @return entity object
     */
    NxtShoppingCartProduct insert(NxtShoppingCartProduct nxtShoppingCartProduct);

    /**
     * Update a row
     *
     * @param nxtShoppingCartProduct entity object
     * @return entity object
     */
    NxtShoppingCartProduct update(NxtShoppingCartProduct nxtShoppingCartProduct);

    /**
     * Delete a row by primary key
     *
     * @param id primary key
     * @return whether the deletion succeeded
     */
    boolean deleteById(Long id);
} |
<reponame>jokasimr/redis
import type { Connection } from "./connection.ts";
import { EOFError } from "./errors.ts";
import {
Deferred,
deferred,
} from "./vendor/https/deno.land/std/async/deferred.ts";
import { sendCommand } from "./protocol/mod.ts";
import type { RedisReply, RedisValue } from "./protocol/mod.ts";
export interface CommandExecutor {
readonly connection: Connection;
exec(
command: string,
...args: RedisValue[]
): Promise<RedisReply>;
}
export class MuxExecutor implements CommandExecutor {
constructor(readonly connection: Connection) {}
private queue: {
command: string;
args: RedisValue[];
d: Deferred<RedisReply>;
}[] = [];
exec(
command: string,
...args: RedisValue[]
): Promise<RedisReply> {
const d = deferred<RedisReply>();
this.queue.push({ command, args, d });
if (this.queue.length === 1) {
this.dequeue();
}
return d;
}
private dequeue(): void {
const [e] = this.queue;
if (!e) return;
sendCommand(
this.connection.writer,
this.connection.reader,
e.command,
...e.args,
)
.then(e.d.resolve)
.catch(async (error) => {
if (
this.connection.maxRetryCount > 0 &&
// `BadResource` is thrown when an attempt is made to write to a closed connection.
// Make sure the connection wasn't explicitly closed by the user before trying to reconnect.
((error instanceof Deno.errors.BadResource &&
!this.connection.isClosed) ||
error instanceof Deno.errors.BrokenPipe ||
error instanceof Deno.errors.ConnectionAborted ||
error instanceof Deno.errors.ConnectionRefused ||
error instanceof Deno.errors.ConnectionReset ||
error instanceof EOFError)
) {
await this.connection.reconnect();
} else e.d.reject(error);
})
.finally(() => {
this.queue.shift();
this.dequeue();
});
}
}
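The queue-drain pattern in `MuxExecutor` (push a command, and only the first pusher kicks off the drain that serializes everything through one connection) can be isolated into a short runnable sketch. This is a Python/asyncio analogue, not the Deno code: `fake_send` is a hypothetical stand-in for `sendCommand`, and `MiniMux` keeps only the serialization logic, with no reconnection handling:

```python
# Sketch of the single-connection command multiplexer: concurrent callers
# each get a future; a single drain loop resolves them strictly in order.
import asyncio

class MiniMux:
    def __init__(self):
        self.queue = []  # pending (arg, future) pairs, head is in flight

    def exec(self, arg):
        fut = asyncio.get_running_loop().create_future()
        self.queue.append((arg, fut))
        if len(self.queue) == 1:
            # only the first enqueuer starts the drain, like dequeue()
            asyncio.ensure_future(self._dequeue())
        return fut

    async def _dequeue(self):
        while self.queue:
            arg, fut = self.queue[0]
            try:
                fut.set_result(await fake_send(arg))
            finally:
                self.queue.pop(0)  # advance to the next queued command

async def fake_send(arg):
    await asyncio.sleep(0)  # stand-in for the protocol round trip
    return "reply:" + arg

async def main():
    mux = MiniMux()
    replies = await asyncio.gather(mux.exec("a"), mux.exec("b"))
    print(",".join(replies))  # → reply:a,reply:b

asyncio.run(main())
```

The key property, as in the TypeScript original, is that `exec` never sends directly; it only enqueues, so two commands issued concurrently can never interleave on the wire.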
|
Maria Aragon is only 10 years old, but will be performing tonight at the Air Canada Centre with Lady Gaga!
The Winnipeg native’s cover video was posted by Lady Gaga on her Twitter, and it now has more than 17 million views!
Since then Maria has performed her version of “Born This Way” for Ellen DeGeneres and Good Morning America.
But her big performance will happen tonight onstage singing with Lady Gaga herself. |
/*
Copyright 2012-present Appium Committers
<p>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
<p>
http://www.apache.org/licenses/LICENSE-2.0
<p>
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package io.appium.settings;
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.app.Service;
import android.content.Intent;
import android.graphics.BitmapFactory;
import android.os.Build;
import android.os.IBinder;
import android.support.annotation.RequiresApi;
import android.support.v4.app.NotificationCompat;
public class ForegroundService extends Service {
public static final String ACTION_START = "start";
public static final String ACTION_STOP = "stop";
private static final String CHANNEL_ID = "main_channel";
private static final String CHANNEL_NAME = "Appium Settings";
private static final String CHANNEL_DESCRIPTION = "Keep this service running, " +
"so Appium for Android can properly interact with several system APIs";
@Override
public IBinder onBind(Intent intent) {
return null;
}
@RequiresApi(Build.VERSION_CODES.O)
private void createChannel() {
NotificationManager mNotificationManager = (NotificationManager) this.getSystemService(NOTIFICATION_SERVICE);
if (mNotificationManager == null) {
return;
}
NotificationChannel mChannel = new NotificationChannel(CHANNEL_ID, CHANNEL_NAME, NotificationManager.IMPORTANCE_DEFAULT);
mChannel.setDescription(CHANNEL_DESCRIPTION);
mChannel.setShowBadge(true);
mChannel.setLockscreenVisibility(Notification.VISIBILITY_PUBLIC);
mNotificationManager.createNotificationChannel(mChannel);
}
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
if (intent != null && intent.getAction() != null) {
switch (intent.getAction()) {
case ACTION_START:
startForegroundService();
break;
case ACTION_STOP:
stopForegroundService();
break;
}
}
return super.onStartCommand(intent, flags, startId);
}
private void startForegroundService() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
createChannel();
}
NotificationCompat.BigTextStyle bigTextStyle = new NotificationCompat.BigTextStyle();
bigTextStyle.setBigContentTitle(CHANNEL_NAME);
bigTextStyle.bigText(CHANNEL_DESCRIPTION);
Notification notification = new NotificationCompat.Builder(this, CHANNEL_ID)
.setStyle(bigTextStyle)
.setWhen(System.currentTimeMillis())
.setSmallIcon(R.drawable.ic_launcher)
.setLargeIcon(BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher))
.build();
startForeground(1, notification);
}
private void stopForegroundService() {
stopForeground(true);
stopSelf();
}
}
|
<reponame>ryota-murakami/private-exp
import React from 'react'
import TestRenderer from '../../lib/TestRenderer'
import ArrowButton from './ArrowButton'
test('should render ArrowButton', () => {
const { container } = TestRenderer(<ArrowButton direction="left" />)
expect(container).toBeInTheDocument()
})
|
# Copyright 2018 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from neutron_lib import constants as lib_const
from neutron_lib import context
from oslo_utils import uuidutils
from neutron.agent.l3 import agent as l3_agent
from neutron.agent.l3.extensions import port_forwarding as pf
from neutron.agent.l3 import l3_agent_extension_api as l3_ext_api
from neutron.agent.l3 import router_info as l3router
from neutron.agent.linux import iptables_manager
from neutron.api.rpc.callbacks.consumer import registry
from neutron.api.rpc.callbacks import resources
from neutron.api.rpc.handlers import resources_rpc
from neutron.common import constants
from neutron.objects import port_forwarding as pf_obj
from neutron.objects import router
from neutron.tests import base
from neutron.tests.unit.agent.l3 import test_agent
_uuid = uuidutils.generate_uuid
TEST_FIP = '10.100.2.45'
BINARY_NAME = iptables_manager.get_binary_name()
DEFAULT_RULE = ('PREROUTING', '-j %s-fip-pf' % BINARY_NAME)
DEFAULT_CHAIN = 'fip-pf'
HOSTNAME = 'testhost'
class PortForwardingExtensionBaseTestCase(
        test_agent.BasicRouterOperationsFramework):

    def setUp(self):
        super(PortForwardingExtensionBaseTestCase, self).setUp()
        self.fip_pf_ext = pf.PortForwardingAgentExtension()
        self.context = context.get_admin_context()
        self.connection = mock.Mock()
        self.floatingip2 = router.FloatingIP(context=None, id=_uuid(),
                                             floating_ip_address='172.24.6.12',
                                             floating_network_id=_uuid(),
                                             router_id=_uuid(),
                                             status='ACTIVE')
        self.portforwarding1 = pf_obj.PortForwarding(
            context=None, id=_uuid(), floatingip_id=self.floatingip2.id,
            external_port=1111, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='1.1.1.1', internal_port=11111,
            floating_ip_address=self.floatingip2.floating_ip_address,
            router_id=self.floatingip2.router_id)
        self.agent = l3_agent.L3NATAgent(HOSTNAME, self.conf)
        self.ex_gw_port = {'id': _uuid()}
        self.fip = {'id': _uuid(),
                    'floating_ip_address': TEST_FIP,
                    'fixed_ip_address': '192.168.0.1',
                    'floating_network_id': _uuid(),
                    'port_id': _uuid(),
                    'host': HOSTNAME}
        self.router = {'id': self.floatingip2.router_id,
                       'gw_port': self.ex_gw_port,
                       'ha': False,
                       'distributed': False,
                       lib_const.FLOATINGIP_KEY: [self.fip]}
        self.router_info = l3router.RouterInfo(
            self.agent, self.floatingip2.router_id, self.router,
            **self.ri_kwargs)
        self.centralized_port_forwarding_fip_set = set(
            [str(self.floatingip2.floating_ip_address) + '/32'])
        self.pf_managed_fips = [self.floatingip2.id]
        self.router_info.ex_gw_port = self.ex_gw_port
        self.router_info.fip_managed_by_port_forwardings = self.pf_managed_fips
        self.agent.router_info[self.router['id']] = self.router_info
        self.get_router_info = mock.patch(
            'neutron.agent.l3.l3_agent_extension_api.'
            'L3AgentExtensionAPI.get_router_info').start()
        self.get_router_info.return_value = self.router_info
        self.agent_api = l3_ext_api.L3AgentExtensionAPI(None)
        self.fip_pf_ext.consume_api(self.agent_api)
        self.port_forwardings = [self.portforwarding1]


class FipPortForwardingExtensionInitializeTestCase(
        PortForwardingExtensionBaseTestCase):

    @mock.patch.object(registry, 'register')
    @mock.patch.object(resources_rpc, 'ResourcesPushRpcCallback')
    def test_initialize_subscribed_to_rpc(self, rpc_mock, subscribe_mock):
        call_to_patch = 'neutron_lib.rpc.Connection'
        with mock.patch(call_to_patch,
                        return_value=self.connection) as create_connection:
            self.fip_pf_ext.initialize(
                self.connection, lib_const.L3_AGENT_MODE)
            create_connection.assert_has_calls([mock.call()])
            self.connection.create_consumer.assert_has_calls(
                [mock.call(
                    resources_rpc.resource_type_versioned_topic(
                        resources.PORTFORWARDING),
                    [rpc_mock()],
                    fanout=True)]
            )
            subscribe_mock.assert_called_with(
                mock.ANY, resources.PORTFORWARDING)


class FipPortForwardingExtensionTestCase(PortForwardingExtensionBaseTestCase):

    def setUp(self):
        super(FipPortForwardingExtensionTestCase, self).setUp()
        self.fip_pf_ext.initialize(
            self.connection, lib_const.L3_AGENT_MODE)
        self._set_bulk_pull_mock()

    def _set_bulk_pull_mock(self):

        def _bulk_pull_mock(context, resource_type, filter_kwargs=None):
            if 'floatingip_id' in filter_kwargs:
                result = []
                for pfobj in self.port_forwardings:
                    if pfobj.floatingip_id in filter_kwargs['floatingip_id']:
                        result.append(pfobj)
                return result
            return self.port_forwardings

        self.bulk_pull = mock.patch(
            'neutron.api.rpc.handlers.resources_rpc.'
            'ResourcesPullRpcApi.bulk_pull').start()
        self.bulk_pull.side_effect = _bulk_pull_mock

    def _get_chainrule_tag_from_pf_obj(self, target_obj):
        rule_tag = 'fip_portforwarding-' + target_obj.id
        chain_name = (
            'pf-' + target_obj.id)[:constants.MAX_IPTABLES_CHAIN_LEN_WRAP]
        chain_rule = (chain_name,
                      '-d %s/32 -p %s -m %s --dport %s '
                      '-j DNAT --to-destination %s:%s' % (
                          target_obj.floating_ip_address,
                          target_obj.protocol,
                          target_obj.protocol,
                          target_obj.external_port,
                          target_obj.internal_ip_address,
                          target_obj.internal_port))
        return chain_name, chain_rule, rule_tag

    def _assert_called_iptables_process(self, mock_add_chain,
                                        mock_add_rule, mock_add_fip,
                                        mock_send_fip_status, target_obj=None):
        if target_obj:
            obj = target_obj
        else:
            obj = self.portforwarding1
        (chain_name,
         chain_rule, rule_tag) = self._get_chainrule_tag_from_pf_obj(obj)
        mock_add_chain.assert_has_calls([mock.call('fip-pf'),
                                         mock.call(chain_name)])
        mock_add_rule.assert_has_calls(
            [mock.call(DEFAULT_RULE[0], DEFAULT_RULE[1]),
             mock.call(DEFAULT_CHAIN, ('-j %s-' % BINARY_NAME) + chain_name,
                       tag=rule_tag),
             mock.call(chain_name, chain_rule[1], tag=rule_tag)])
        mock_add_fip.assert_called_once_with(
            {'floating_ip_address': str(obj.floating_ip_address)},
            mock.ANY, mock.ANY)
        fip_status = {
            obj.floatingip_id:
                lib_const.FLOATINGIP_STATUS_ACTIVE}
        mock_send_fip_status.assert_called_once_with(mock.ANY, fip_status)

    @mock.patch.object(pf.PortForwardingAgentExtension,
                       '_sending_port_forwarding_fip_status')
    @mock.patch.object(iptables_manager.IptablesTable, 'add_rule')
    @mock.patch.object(iptables_manager.IptablesTable, 'add_chain')
    @mock.patch.object(l3router.RouterInfo, 'add_floating_ip')
    def test_add_update_router(self, mock_add_fip,
                               mock_add_chain, mock_add_rule,
                               mock_send_fip_status):
        # simulate the router add and already there is a port forwarding
        # resource association.
        mock_add_fip.return_value = lib_const.FLOATINGIP_STATUS_ACTIVE
        self.fip_pf_ext.add_router(self.context, self.router)
        self._assert_called_iptables_process(
            mock_add_chain, mock_add_rule, mock_add_fip, mock_send_fip_status,
            target_obj=self.portforwarding1)

        # Then we create another port forwarding with the same fip
        mock_add_fip.reset_mock()
        mock_send_fip_status.reset_mock()
        mock_add_chain.reset_mock()
        mock_add_rule.reset_mock()
        test_portforwarding = pf_obj.PortForwarding(
            context=None, id=_uuid(), floatingip_id=self.floatingip2.id,
            external_port=2222, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='2.2.2.2', internal_port=22222,
            floating_ip_address=self.floatingip2.floating_ip_address,
            router_id=self.floatingip2.router_id)
        self.pf_managed_fips.append(self.floatingip2.id)
        self.port_forwardings.append(test_portforwarding)
        self.fip_pf_ext.update_router(self.context, self.router)
        self._assert_called_iptables_process(
            mock_add_chain, mock_add_rule, mock_add_fip, mock_send_fip_status,
            target_obj=test_portforwarding)

    @mock.patch.object(iptables_manager.IptablesTable, 'add_rule')
    @mock.patch.object(iptables_manager.IptablesTable, 'add_chain')
    @mock.patch('neutron.agent.linux.ip_lib.IPDevice')
    @mock.patch.object(iptables_manager.IptablesTable, 'remove_chain')
    def test_add_update_router_port_forwarding_change(
            self, mock_remove_chain, mock_ip_device, mock_add_chain,
            mock_add_rule):
        self.fip_pf_ext.add_router(self.context, self.router)
        update_portforwarding = pf_obj.PortForwarding(
            context=None, id=self.portforwarding1.id,
            floatingip_id=self.portforwarding1.floatingip_id,
            external_port=2222, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='2.2.2.2', internal_port=22222,
            floating_ip_address=self.portforwarding1.floating_ip_address,
            router_id=self.portforwarding1.router_id)
        self.port_forwardings = [update_portforwarding]
        mock_delete = mock.Mock()
        mock_ip_device.return_value = mock_delete
        self.fip_pf_ext.update_router(self.context, self.router)
        current_chain = ('pf-' + self.portforwarding1.id)[
            :constants.MAX_IPTABLES_CHAIN_LEN_WRAP]
        mock_remove_chain.assert_called_once_with(current_chain)
        mock_delete.delete_socket_conntrack_state.assert_called_once_with(
            str(self.portforwarding1.floating_ip_address),
            self.portforwarding1.external_port,
            protocol=self.portforwarding1.protocol)
        (chain_name,
         chain_rule, rule_tag) = self._get_chainrule_tag_from_pf_obj(
            update_portforwarding)
        mock_add_chain.assert_has_calls([mock.call('fip-pf'),
                                         mock.call(chain_name)])
        mock_add_rule.assert_has_calls(
            [mock.call(DEFAULT_RULE[0], DEFAULT_RULE[1]),
             mock.call(DEFAULT_CHAIN, ('-j %s-' % BINARY_NAME) + chain_name,
                       tag=rule_tag),
             mock.call(chain_name, chain_rule[1], tag=rule_tag)])

    @mock.patch.object(pf.PortForwardingAgentExtension,
                       '_sending_port_forwarding_fip_status')
    @mock.patch('neutron.agent.linux.ip_lib.IPDevice')
    @mock.patch.object(iptables_manager.IptablesTable, 'remove_chain')
    def test_add_update_router_port_forwarding_remove(
            self, mock_remove_chain, mock_ip_device,
            mock_send_fip_status):
        self.fip_pf_ext.add_router(self.context, self.router)
        mock_send_fip_status.reset_mock()
        self.port_forwardings = []
        mock_device = mock.Mock()
        mock_ip_device.return_value = mock_device
        self.fip_pf_ext.update_router(self.context, self.router)
        current_chain = ('pf-' + self.portforwarding1.id)[
            :constants.MAX_IPTABLES_CHAIN_LEN_WRAP]
        mock_remove_chain.assert_called_once_with(current_chain)
        mock_device.delete_socket_conntrack_state.assert_called_once_with(
            str(self.portforwarding1.floating_ip_address),
            self.portforwarding1.external_port,
            protocol=self.portforwarding1.protocol)
        mock_device.delete_addr_and_conntrack_state.assert_called_once_with(
            str(self.portforwarding1.floating_ip_address))
        fip_status = {
            self.portforwarding1.floatingip_id:
                lib_const.FLOATINGIP_STATUS_DOWN}
        mock_send_fip_status.assert_called_once_with(mock.ANY, fip_status)


class RouterFipPortForwardingMappingTestCase(base.BaseTestCase):

    def setUp(self):
        super(RouterFipPortForwardingMappingTestCase, self).setUp()
        self.mapping = pf.RouterFipPortForwardingMapping()
        self.router1 = _uuid()
        self.router2 = _uuid()
        self.floatingip1 = _uuid()
        self.floatingip2 = _uuid()
        self.floatingip3 = _uuid()
        self.portforwarding1 = pf_obj.PortForwarding(
            context=None, id=_uuid(), floatingip_id=self.floatingip1,
            external_port=1111, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='1.1.1.1', internal_port=11111,
            floating_ip_address='192.168.3.11',
            router_id=self.router1)
        self.portforwarding2 = pf_obj.PortForwarding(
            context=None, id=_uuid(), floatingip_id=self.floatingip1,
            external_port=1112, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='1.1.1.2', internal_port=11112,
            floating_ip_address='192.168.3.11',
            router_id=self.router1)
        self.portforwarding3 = pf_obj.PortForwarding(
            context=None, id=_uuid(), floatingip_id=self.floatingip2,
            external_port=1113, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='1.1.1.3', internal_port=11113,
            floating_ip_address='172.16.17.32',
            router_id=self.router1)
        self.portforwarding4 = pf_obj.PortForwarding(
            context=None, id=_uuid(), floatingip_id=self.floatingip3,
            external_port=2222, protocol='tcp', internal_port_id=_uuid(),
            internal_ip_address='2.2.2.2', internal_port=22222,
            floating_ip_address='172.16.58.3',
            router_id=self.router2)
        self.portforwardings_dict = {
            self.portforwarding1.id: self.portforwarding1,
            self.portforwarding2.id: self.portforwarding2,
            self.portforwarding3.id: self.portforwarding3,
            self.portforwarding4.id: self.portforwarding4}

    def _set_pf(self):
        self.mapping.set_port_forwardings(self.portforwardings_dict.values())

    def test_set_port_forwardings(self):
        self._set_pf()
        pf_ids = self.portforwardings_dict.keys()
        for pf_id, obj in self.mapping.managed_port_forwardings.items():
            self.assertIn(pf_id, pf_ids)
            self.assertEqual(obj, self.portforwardings_dict[pf_id])
        self.assertEqual(
            len(pf_ids), len(self.mapping.managed_port_forwardings.keys()))
        fip_pf_set = {
            self.floatingip1: set(
                [self.portforwarding1.id, self.portforwarding2.id]),
            self.floatingip2: set([self.portforwarding3.id]),
            self.floatingip3: set([self.portforwarding4.id])
        }
        for fip_id, pf_set in self.mapping.fip_port_forwarding.items():
            self.assertIn(
                fip_id, [self.floatingip1, self.floatingip2, self.floatingip3])
            self.assertEqual(0, len(pf_set - fip_pf_set[fip_id]))
        self.assertEqual(
            len([self.floatingip1, self.floatingip2, self.floatingip3]),
            len(self.mapping.fip_port_forwarding))
        router_fip = {
            self.router1: set([self.floatingip1, self.floatingip2]),
            self.router2: set([self.floatingip3])
        }
        for router_id, fip_set in self.mapping.router_fip_mapping.items():
            self.assertIn(router_id, [self.router1, self.router2])
            self.assertEqual(0, len(fip_set - router_fip[router_id]))
        self.assertEqual(
            len([self.router1, self.router2]),
            len(self.mapping.router_fip_mapping.keys()))

    def test_update_port_forwarding(self):
self._set_pf()
new_pf1 = pf_obj.PortForwarding(
context=None, id=self.portforwarding2.id,
floatingip_id=self.floatingip1,
external_port=11122, protocol='tcp',
internal_port_id=self.portforwarding2.internal_port_id,
internal_ip_address='192.168.3.11', internal_port=11122,
floating_ip_address='192.168.3.11',
router_id=self.router1)
self.mapping.update_port_forwardings([new_pf1])
self.assertEqual(
new_pf1,
self.mapping.managed_port_forwardings[self.portforwarding2.id])
def test_del_port_forwardings(self):
self._set_pf()
del_pfs = [self.portforwarding3, self.portforwarding2,
self.portforwarding4]
self.mapping.del_port_forwardings(del_pfs)
self.assertEqual(
[self.portforwarding1.id],
list(self.mapping.managed_port_forwardings.keys()))
self.assertEqual({self.floatingip1: set([self.portforwarding1.id])},
self.mapping.fip_port_forwarding)
self.assertEqual({self.router1: set([self.floatingip1])},
self.mapping.router_fip_mapping)
def test_clear_by_fip(self):
self._set_pf()
self.mapping.clear_by_fip(self.floatingip1, self.router1)
router_fip = {
self.router1: set([self.floatingip2]),
self.router2: set([self.floatingip3])
}
for router_id, fip_set in self.mapping.router_fip_mapping.items():
self.assertIn(router_id, [self.router1, self.router2])
self.assertEqual(0, len(fip_set - router_fip[router_id]))
fip_pf_set = {
self.floatingip2: set([self.portforwarding3.id]),
self.floatingip3: set([self.portforwarding4.id])
}
for fip_id, pf_set in self.mapping.fip_port_forwarding.items():
self.assertIn(
fip_id, [self.floatingip2, self.floatingip3])
self.assertEqual(0, len(pf_set - fip_pf_set[fip_id]))
self.assertEqual(
len([self.floatingip2, self.floatingip3]),
len(self.mapping.fip_port_forwarding))
pfs_dict = {self.portforwarding3.id: self.portforwarding3,
self.portforwarding4.id: self.portforwarding4}
for pf_id, obj in self.mapping.managed_port_forwardings.items():
self.assertIn(pf_id,
[self.portforwarding3.id, self.portforwarding4.id])
self.assertEqual(obj, pfs_dict[pf_id])
self.assertEqual(
len([self.portforwarding3.id, self.portforwarding4.id]),
len(self.mapping.managed_port_forwardings.keys()))
|
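The tests above assert on three internal dictionaries of `RouterFipPortForwardingMapping` (`managed_port_forwardings`, `fip_port_forwarding`, `router_fip_mapping`). The following is a minimal, self-contained sketch of that bookkeeping, written here for illustration only; it is not Neutron's actual implementation, and the dict-based "port forwarding" records are stand-ins for the real objects.

```python
from collections import defaultdict


class PortForwardingMappingSketch:
    """Tracks port forwardings by id, by floating IP, and by router."""

    def __init__(self):
        self.managed_port_forwardings = {}            # pf_id -> pf record
        self.fip_port_forwarding = defaultdict(set)   # fip_id -> {pf_id}
        self.router_fip_mapping = defaultdict(set)    # router_id -> {fip_id}

    def set_port_forwardings(self, pfs):
        for pf in pfs:
            self.managed_port_forwardings[pf['id']] = pf
            self.fip_port_forwarding[pf['floatingip_id']].add(pf['id'])
            self.router_fip_mapping[pf['router_id']].add(pf['floatingip_id'])

    def del_port_forwardings(self, pfs):
        for pf in pfs:
            self.managed_port_forwardings.pop(pf['id'], None)
            fip_set = self.fip_port_forwarding[pf['floatingip_id']]
            fip_set.discard(pf['id'])
            if not fip_set:
                # Last forwarding for this floating IP: drop the FIP, and
                # drop the router entry too if it has no FIPs left.
                del self.fip_port_forwarding[pf['floatingip_id']]
                router_fips = self.router_fip_mapping[pf['router_id']]
                router_fips.discard(pf['floatingip_id'])
                if not router_fips:
                    del self.router_fip_mapping[pf['router_id']]
```

Deleting one of two forwardings on the same floating IP leaves the FIP and router entries in place, which is exactly the shape `test_del_port_forwardings` checks for.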
Are fertility differentials by education converging in the United States? "According to the theory of demographic transition, fertility differentials by education tend to become strongly negative in the early stages of transition, because family limitation tends to catch on first among the more educated. As the transition proceeds, contraceptive use diffuses to the less educated, and fertility differentials by education eventually tend to reconverge. The question addressed here is: Do fertility differentials by education disappear or become positive in advanced industrial societies? Evidence presented in this paper indicates that in the United States they do not. As late as 1990, the latest year that we consider, fertility differentials by education were still strongly negative." (SUMMARY IN ITA AND FRE) |
def generate_scoring_board():
    # board_possibility_counter is assumed to return a 10x10 grid of
    # per-cell possibility counts, with its maximum at the centre [4][4].
    scoring_board = board_possibility_counter(BOARD)
    max_possibilities = scoring_board[4][4]
    # Invert the counts so the cells with the most possibilities
    # receive the lowest scores.
    for row in range(10):
        for col in range(10):
            scoring_board[row][col] = max_possibilities - scoring_board[row][col]
    return scoring_board |
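Since `BOARD` and `board_possibility_counter` are not shown, here is a self-contained sketch of the same inversion step. The stub counter and the `_stub`/`_demo` names are invented for illustration; the stub simply produces a 10x10 count grid whose maximum falls at the centre cell `[4][4]`, as the original function assumes.

```python
def board_possibility_counter_stub(_board):
    # Hypothetical stand-in: counts peak in the middle of the 10x10 grid.
    return [[min(r + 1, 10 - r) * min(c + 1, 10 - c) for c in range(10)]
            for r in range(10)]


BOARD_STUB = [[0] * 10 for _ in range(10)]


def generate_scoring_board_demo():
    scoring_board = board_possibility_counter_stub(BOARD_STUB)
    max_possibilities = scoring_board[4][4]
    # Same inversion as the original: high-count cells score lowest.
    for row in range(10):
        for col in range(10):
            scoring_board[row][col] = max_possibilities - scoring_board[row][col]
    return scoring_board
```

After inversion the centre cell scores 0 and the corners score highest, so picking the minimum-score cell targets the most likely location.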
Business licenses enable owners to operate legally within certain geographic areas. Before doing business with a specific person or company, you may want to ensure that the business is in compliance with the respective laws. Business licenses can take federal, state or local form. Therefore, you should know the type of license you want to check on before you begin your search. Even if the federal or state government does not require a business license, most businesses need a city or county license, according to the Score website.
Contact the state licensing board to check on a state license. Many states require business owners to apply for a state license via the secretary of state. For example, you would contact the Georgia secretary of state to verify a Georgia business license. The secretary of state can direct you to the appropriate department if it didn’t issue the license. For example, the Washington secretary of state website leads you to the Washington State Department of Licensing website, where you can check on a business or professional license.
Ask your local municipality to verify a local business license. Contact your city hall to check on a city license, and contact your county courthouse to verify a county license. For example, you would contact the City of Boise to check on a child care city license in Boise, Idaho; and you would contact the Gwinnett County government to check on a home-based business license in Gwinnett County, Georgia. The state licensing board website may have information on how to look up a local license.
Contact the respective federal agency to verify a federal business license. A few businesses, such as ammunition, firearms and explosives dealing and alcohol manufacturing, require a federal business license. For example, you would contact the Bureau of Alcohol, Tobacco, Firearms and Explosives to check on a firearm license.
Business licenses are issued according to the type of business or profession and geographic location. Therefore, you must check with the appropriate department to know if a license is required.
Have the necessary information to perform the check. For example, to conduct a business license search by person in New Jersey, you would need the individual’s first and last name, profession and city.
For special licenses, such as a contractor license or alcohol beverage license, contact the issuing institution. For example, you would contact the California Department of Consumer Affairs, Contractors State License Board, to verify a California contractor license. To verify a Wisconsin liquor license, you would contact the municipality where business is conducted.
Do not assume that a company isn’t licensed if your search doesn’t reveal licensing information. This can happen if the license is registered under a different name. Therefore, try to have the appropriate information when conducting the search.
Ferguson, Grace. "How to Check on a Business License." Small Business - Chron.com, http://smallbusiness.chron.com/check-business-license-2974.html. Accessed 21 April 2019.
|
Spinal cord decompression: Is country of surgery a predictor of outcome? Dear Sir, We read the important paper of Shamim et al. about the question as to whether patients with spinal cord injury (SCI) benefit from spinal stabilization. We believe that the decision to perform spine surgery on patients with SCI should not be made only on the basis of duration of hospital stay, economic issues, and neurological outcome. However, we would emphasize the apparent advantage of non-operative management of SCI patients in developing countries. In Zahedan, a city located in a poor socioeconomic province of Iran, we managed 108 patients with SCI during a 12-year period from 1994 to 2005. Of these patients, 50 were followed for more than 12 months. Assessment of the outcome of these patients not only confirmed the superiority of non-surgical management in patients with complete SCI in terms of cost and duration of hospitalization but also, surprisingly, showed that the neurological outcome of patients with incomplete SCI in the non-surgical group was not different from that of the surgical group. Length of stay in the surgical group of SCI patients was 11.1 ± 5.46 days, significantly longer than the 5.8 ± 0.96 days in non-surgical patients (P = 0.017). All groups of patients with incomplete SCI, including those treated non-operatively, those who had early operation, and those who underwent late surgery, showed significant and similar improvement compared to the preoperative examination (P = 0.02), with no difference among the three groups. Our results differ from those of the meta-analysis of La Rosa et al., which concluded that neurological improvement was greater after early decompression in incomplete SCI patients than after late decompression or non-surgical management. In this meta-analysis, 26 studies were evaluated, all of which had been performed in developed countries, with no study from developing countries. 
The results of this meta-analysis are also different from those of the study performed by Shamim et al., which may indicate different outcomes of spinal cord decompression in developed and developing countries. Despite the limitations of the study by Shamim et al., such as a heterogeneous cohort of patients, inconsistent prednisolone prescription, late decompression in a considerable number of patients, different surgical procedures, and lack of post-operative neurological assessment of patients, it can be hypothesized that the country where surgery is performed (developing vs. developed countries) may have an effect on the outcome of SCI patients. Thus, results of some reports on favorable outcomes of patients undergoing spinal decompression/stabilization from developed countries should be interpreted carefully if they are to be used in developing countries, since many pre-, intra-, and post-operative factors may contribute to the outcome of these patients. Further studies from developing countries should be performed to provide better guidance for spine surgeons in these countries to decide whether an SCI patient is likely to benefit from spinal decompression/stabilization or not. 
Commentary We read with interest the letter to the editor titled, "Spinal cord decompression: Is country of surgery a predictor of outcome?" The authors hail from a poor socioeconomic province of Iran and report their results on managing a large number of SCI patients with a mean follow-up exceeding 12 months. First and foremost, we would like to commend the authors for the tremendous service they are providing in a resource-stricken setup. What impressed us even more is that despite their limitations, they continue to audit and critically analyze their outcomes, proving that resource deprivation is not an excuse for a lack of scientific approach to patient management. The authors share their results of managing complete SCI patients with and without surgery, validating our own results, and then go on to share results comparing neurological outcome between incomplete SCI patients with or without surgical intervention. Here, they mention that their results differ from a meta-analysis done by La Rosa et al., published nearly 7 years earlier, and point out that none of the studies in the meta-analysis were from developing countries. Although in our own practice we tend to agree with the recommendations of La Rosa et al. and other studies on incomplete SCI published more recently, we certainly agree with the authors that not all studies done in developed countries can be directly applied to developing countries. 
Especially in conditions where clear-cut evidence does not exist supporting one treatment modality, such as that for surgical intervention in complete SCI, one must choose the management option best suited to one's own circumstances. To propose that the country of surgery may affect outcome would not be a fair statement. Outcomes depend on a whole lot more than just the country and, even within countries, developed or developing, outcomes vary greatly from center to center. This is especially true for more complex specialties like neurosurgery, and hence the argument for developing regional referral centers for such specialties. We believe that proper referral centers with specialized care even in developing countries can produce equivalent results. Citing our own example, despite working in a resource-restricted country, we have shared our results for various surgical procedures and shown that our results do not differ markedly from the available literature.[1,4] In the absence of specialized centers, or when one is forced to provide advanced care despite limitations, such as during disasters, the results are bound to be inferior and, to our mind, should not be compared with the set standards. One must realize that provision of care under these circumstances is out of necessity. It is bound to have limitations, and where each surgeon wants to provide the best care to his/her patient and continues to strive for it, it is perhaps unfair to compare his/her outcomes with surgeons working in controlled environments, be it in a developing country with resource limitations or a developed one with a limitless abundance of resources. |
The next time you decide to call a person from the North-East a 'Chinki', you could end up cooling your heels behind bars for the next five years.
Growing incidents of racial discrimination and verbal abuse against citizens from the North-East have forced the Ministry of Home Affairs (MHA) to send a letter to all the states and Union Territories, asking them to book offenders guilty of atrocity against people from the region under the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act since a significant number of persons from the North-East belong to the Scheduled Tribes.
Under the law, an offender can end up spending five years in jail, and the accused could be denied anticipatory bail as well. And if the police fail to act on a complaint, the officer concerned could be imprisoned for a term of not less than six months, extendable to a year.
The Act, considered draconian by many because of its harsh provisions, was put to use vigorously in Uttar Pradesh during Mayawati's rule to check atrocities against people from the lower castes.
According to the National Crime Records Bureau (NCRB), 6,272 persons were booked under the Act in Uttar Pradesh in 2010. But the implementation of the Act has been rare when it comes to major cities. Just 16 persons were booked under the Act in Delhi in 2010.
In the letter, MHA joint secretary (centre-state) S. Suresh Kumar has admitted that people from the North-East do face abuse in major cities and feel insecure.
"A sizeable number of persons belonging to the North-Eastern states are residing in metropolitan cities and in major urban areas of the country for education and employment. It is reported that people originating from these North-Eastern states are facing discrimination as they are addressed with derogatory adjectives or face discrimination in the form of targeted attacks, assault, molestation and other atrocities," the letter, a copy of which is in possession of Mail Today, states.
"This has caused considerable anguish and distress in the minds of people from the North-East. Hence, it is of utmost importance that this feeling of insecurity and negativity in the minds of the people should be assuaged by an adequate and pro-active response that would not only reassure them but also prove that the government would not tolerate discrimination in any form," it adds.
The MHA estimates that most North-East persons in major cities belong to the ST category. According to a provision under Section 3 of the SC/ST Act, an offence will be committed if any member of the SC/ST category is "deliberately insulted and humiliated in public view."
Not just the victim, but anyone who knows that an offence has been committed under the Act can lodge a complaint. The police are empowered under the Act to arrest the offender without any warrant and launch an investigation.
The MHA letter says that if a complaint is received from any citizen hailing from the North-East but no follow-up action is taken, then a "serious view" should be taken against the police officer concerned and also the officer in charge of the police station.
"If the complainant from the North-East is a member of the Scheduled Tribes, then the provision of Section 4 of the SC/ST Act should be invoked," the letter states. |
/**
* Checks the operation of existsAdjLabel() method.
*/
@Test
public void testExistsAdjLabel() {
testAddAdjLabel();
assertThat(distrPceStore.existsAdjLabel(link1), is(true));
assertThat(distrPceStore.existsAdjLabel(link2), is(true));
} |
import numpy as np
import tensorflow as tf  # tf.python_io below is the TensorFlow 1.x API


def convert_to_tfrecord(features_array, labels_array, filename):
features_dtype = features_array.dtype
labels_dtype = labels_array.dtype
if features_dtype != np.float32 and features_dtype != np.float64:
raise ValueError("Features must be float32 / float64. Found %s." %
features_dtype)
if labels_dtype != np.int32 and labels_dtype != np.int64:
raise ValueError("Labels must be int32 / int64. Found %s." % labels_dtype)
assert features_array.shape[0] == labels_array.shape[0], (
"Features shape != Labels shape. %d != %d" % (features_array.shape[0],
labels_array.shape[0]))
with tf.python_io.TFRecordWriter(filename) as writer:
for i in range(features_array.shape[0]):
serialized = _create_serialized_example(features_array[i, :],
labels_array[i])
writer.write(serialized) |
// packages/predictions/src/Providers/Utils.ts
/**
* Changes object keys to camel case. If optional parameter `keys` is given, then we extract only the
* keys specified in `keys`.
*/
export function makeCamelCase(obj: object, keys?: string[]) {
if (!obj) return undefined;
const newObj = {};
const keysToRename = keys ? keys : Object.keys(obj);
keysToRename.forEach(key => {
if (obj.hasOwnProperty(key)) {
// change the key to camelcase.
const camelCaseKey = key.charAt(0).toLowerCase() + key.substr(1);
Object.assign(newObj, { [camelCaseKey]: obj[key] });
}
});
return newObj;
}
/**
* Given an array of object, call makeCamelCase(...) on each option.
*/
export function makeCamelCaseArray(objArr: object[], keys?: string[]) {
if (!objArr) return undefined;
return objArr.map(obj => makeCamelCase(obj, keys));
}
/**
* Converts blob to array buffer
*/
export function blobToArrayBuffer(blob: Blob): Promise<ArrayBuffer> {
return new Promise((res, rej) => {
const reader = new FileReader();
reader.onload = _event => {
res(reader.result as ArrayBuffer);
};
reader.onerror = err => {
rej(err);
};
try {
reader.readAsArrayBuffer(blob);
} catch (err) {
rej(err); // in case user gives invalid type
}
});
}
|
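The key-renaming behaviour of `makeCamelCase` above can be exercised with a small standalone sketch. The demo function re-implements the same logic so the snippet runs on its own; the demo name and sample data are invented for illustration.

```typescript
// Standalone copy of the camel-casing logic, for illustration only.
function makeCamelCaseDemo(
  obj: Record<string, unknown>,
  keys?: string[]
): Record<string, unknown> | undefined {
  if (!obj) return undefined;
  const out: Record<string, unknown> = {};
  for (const key of keys ?? Object.keys(obj)) {
    if (Object.prototype.hasOwnProperty.call(obj, key)) {
      // Lower-case the first character of each selected key.
      out[key.charAt(0).toLowerCase() + key.slice(1)] = obj[key];
    }
  }
  return out;
}

const all = makeCamelCaseDemo({ Name: 'Jane', AgeYears: 30 });
// → { name: 'Jane', ageYears: 30 }
const some = makeCamelCaseDemo({ Name: 'Jane', AgeYears: 30 }, ['Name']);
// → { name: 'Jane' }  (only the listed keys are extracted)
```

Passing `keys` both filters and renames, which is why the second call drops `AgeYears` entirely.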
# joschout/tilde: mai_version/probeersel/problog_tests/ProbLog_as_Python_datatructures.py
from problog.program import SimpleProgram
from problog.logic import Constant,Var,Term,AnnotatedDisjunction
from problog import get_evaluatable
coin,heads,tails,win,query =\
Term('coin'),Term('heads'),Term('tails'),Term('win'),Term('query')
C = Var('C')
p = SimpleProgram()
p += coin(Constant('c1'))
p += coin(Constant('c2'))
p += AnnotatedDisjunction([heads(C,p=0.4), tails(C,p=0.6)], coin(C))
p += (win << heads(C))
p += query(win)
print(get_evaluatable().create_from(p).evaluate())  # P(win) = 1 - 0.6**2 = 0.64 |
Dhammic Technology Acceptance Model (DTAM) Upadana, or a condition of attachment in Buddhism, has been widely acknowledged among Buddhists as the root cause of suffering because it underlies human intention and action toward a perceived phenomenon. This article investigates whether the condition of attachment in Buddhism could contribute as an external variable in the Technology Acceptance Model. Participants were 498 students from a university in Southern Thailand. They responded to a 21-item self-report questionnaire on the six variables (perceived usefulness, perceived ease of use, attitude toward use, intention to use, attachment, and actual usage) of a proposed Dhammic Technology Acceptance Model. Results using structural equation modeling revealed that the Dhammic Technology Acceptance Model has a good model fit and that attachment has direct and indirect effects on the actual use of Facebook among students. Implications for theory and further research are discussed. |
Evangelicalism in the Church of England c.1790-c.1900: A Miscellany This collection of four (largely) nineteenth-century texts continues the Church of England Record Society's commitment to publish shorter pieces that, of themselves, might struggle to see the light of day in our time. The book is therefore appropriately called a miscellany. Nevertheless, these writings by notable evangelicals of their day are by no means disconnected in ethos or to be underestimated in significance. Taken together, they provide a window not only into the inner world of evangelicalism in the Church of England during their period, but also into the atmosphere of the Church itself and its place in the nation. The whole volume is handsomely presented by its editors and publishers, and each piece is itself expertly introduced and carefully edited. The first, edited by Anne Scott, is a collection of Hannah More's correspondence (1799-1802) over her intentions to establish a Sunday School in Blagdon, Somerset. Though successful at first, these plans were finally thwarted by the curate, Thomas Bere, despite his initial invitation to More to begin such a work. Mostly More's letters to William Wilberforce, they show something of the struggles that evangelical Anglicans experienced in the face of those in ecclesiastical and political circles who accused them of sowing the seeds of infidelity to the established Church and, in the light of revolutionary movements abroad, to the nation itself. The second, edited by Mark Smith, is Henry Ryder's Charge to the clergy of Gloucester in 1815, shortly after being made bishop of the diocese. Although clear in evangelical conviction, steeped in scripture and, at times, passionate in style, his tone is irenic and inclusive, charting a reasonable course through matters of controversy and presenting an expression of evangelicalism likely to command wide respect. 
Ryder's wise pastoral advice and penetrating spiritual challenge to his clergy is always attractively framed by his own personal humility. |
/*-------------------------------------------------------------------------
*
* nodeCtescan.h
*
*
*
* Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
* src/include/executor/nodeCtescan.h
*
*-------------------------------------------------------------------------
*/
#ifndef NODECTESCAN_H
#define NODECTESCAN_H
#include "nodes/execnodes.h"
extern CteScanState *ExecInitCteScan(CteScan *node, EState *estate, int eflags);
extern TupleTableSlot *ExecCteScan(CteScanState *node);
extern void ExecEndCteScan(CteScanState *node);
extern void ExecReScanCteScan(CteScanState *node);
#endif /* NODECTESCAN_H */
|
The Diversity of Recreational Knowledge in Sulalatus Salatin: Reflection of Intellectuality of Melaka Sultanate The culture and the arts of the palace during the Malay sultanate of Melaka were very exclusive compared to those of the common people. The rulers of the kingdom had to demonstrate strength and greatness in all things in order to qualify themselves as rulers; great rulers would then be able to expand their influence in the surrounding area. Therefore, the objective of this study is to identify and discuss the recreational knowledge in Sulalatus Salatin as a reflection of the intellect of the Malay rulers of Melaka. The method used is a text analysis to obtain the data of the study. The discussion found that various recreational practices were conducted by the Melaka Malay rulers in the texts. Both relaxed and rugged types of recreation are among those discussed in this article, demonstrating the king's power in various skills as well as highlighting the integrity of the government. The results of this study are also expected to benefit the community by making known the leisure activities carried out by the Melaka Malay rulers in the text of Sulalatus Salatin. |
/**
 * Handles public and private keys and general address management,
 * as well as wallet encryption.
 *
 * @author Amir Eslampanah
 */
public class Wallet {

    BigFastList<AddressKeyPair> addressKeyPairs = new BigFastList<AddressKeyPair>();
    /**
     * Creates an empty wallet.
     */
    public Wallet() {
    }
/**
* @param addressKeyPairs
*/
    public Wallet(BigFastList<AddressKeyPair> addressKeyPairs) {
        this.addressKeyPairs = addressKeyPairs;
    }
/**
* @return the addressKeyPairs
*/
public BigFastList<AddressKeyPair> getAddressKeyPairs() {
return this.addressKeyPairs;
}
/**
* @param addressKeyPairs
* the addressKeyPairs to set
*/
public void setAddressKeyPairs(
BigFastList<AddressKeyPair> addressKeyPairs) {
this.addressKeyPairs = addressKeyPairs;
}
} |
Mark Karpeles, president of Mt. Gox bitcoin exchange, speaks during a new conference in Tokyo on Feb. 28. (Photo: Jiji Press, AFP/Getty Images)
Mt. Gox, the Tokyo-based Bitcoin exchange that collapsed earlier this year after $425 million worth of its customers' digital currency disappeared, lost its bid to reorganize and will likely be liquidated, a Japanese bankruptcy court administrator said in a notice Wednesday.
The court will also investigate Mt. Gox CEO Mark Karpeles' liability in the collapse of the business, attorney Nobuaki Kobayashi, the court-appointed administrator, said Wednesday in a notice posted on the Bitcoin exchange's website.
Karpeles on Feb. 28 asked a Tokyo bankruptcy court for protection from its creditors while it restructured and reorganized.
Mt. Gox claimed in February that the digital wallets that stored the Bitcoin had been hacked and 850,000 Bitcoin were lost. The company later said it discovered 200,000 Bitcoin in an unused wallet, reducing the lost Bitcoin to 650,000.
A week later, Flexcoin, another Bitcoin exchange, shut down, claiming hackers "robbed" the company of all 896 of its bitcoins valued at $600,000.
The court dismissed Mt. Gox's rehabilitation application Wednesday as too "difficult for the company to carry out," Kobayashi said.
Karpeles "has lost his authority to administer the company's assets," the attorney said.
Once the bankruptcy proceedings begin, court administrators and other experts will examine the company's assets, investigate the disappearance of the Bitcoin and distribute the company's remaining assets among its creditors, the attorney said.
"I will strive to fairly and equitably administer the company's assets," Kobayashi wrote in the notice. He said he would work with the U.S. bankruptcy court, where the company has also filed for relief. A creditors' meeting has not yet been set, he said.
Although the court has not entered its final bankruptcy ruling, Kobayashi said once the bankruptcy proceedings start, "it will be unlikely that the company can restart the exchange."
Karpeles, in a note posted Wednesday on the Mt. Gox website, said the company had "no prospects for the restart of the business."
The dismissal of the company's application for rehabilitation "created great inconvenience and concerns to our creditors, for which we apologize," the note said.
|
The present invention relates to a system for the assessment of accuracy of N.C. (Numerically Controlled) machine tools. The accuracy of a workpiece produced by N.C. machine tools is influenced by many factors such as deviations caused by inaccurate geometry and errors caused by vibrations, load, handling, maintenance and environmental effect. With contouring operations, the characteristics of the feed drives and control systems also contribute to workpiece accuracy.
To assess the N.C. machine tool accuracy, both direct and indirect test methods can be undertaken, the direct approach necessitating the machining of a component or test piece followed by the measurement and evaluation of its geometry. This direct approach is generally confined to relatively small components whereas the indirect approach using some form of instrumentation in conjunction with artefacts is particularly useful for the assessment and evaluation of the geometry over the operating volume of the machine.
The present invention uses the indirect method for checking specified tolerances and sources of N.C. machine tool errors and also can be used as an aid for diagnosis of the machine tool's accuracy. Namely, the invention relates to a system for the assessment of contouring accuracy of N.C. machine tools by using a computer aided kinematic transducer link system and to a method for analysis and evaluation of different sources of machine tool errors. |
Maritime Territorial Disputes in East Asia: A Comparative Analysis of the South China Sea and the East China Sea This article systematically compares maritime territorial disputes in the East and South China Seas. It draws on the bargaining model of war and hegemonic stability theory to track the record of conflicts and shifts in the relative power balances of the claimants, leading to the conclusion that certainty and stability have improved in the South China Sea, with the converse happening in the East China Sea. To enrich the models, this article also considers social factors (constructivism) and arrives at the same conclusion. This calls for a differentiated methodological approach if we are to devise strategies to mediate and resolve these disputes. |
#include <stdio.h>
#include <string.h>

/* strrev() is non-standard (MSVC-only), so reverse the string in place portably. */
static void reverse_string(char *s) {
    for (size_t i = 0, j = strlen(s); i < j; ) {
        char tmp = s[i]; s[i++] = s[--j]; s[j] = tmp;
    }
}

int main(void) {
    char s1[20] = "Hello";
    printf("s1=%s\n", s1);
    reverse_string(s1);  /* portable replacement for strrev(s1) */
    printf("s1=%s\n", s1);
    return 0;
}
Application of a Backpropagation Artificial Neural Network in Predicting Plasma Concentration and Pharmacokinetic Parameters of Oral Single-Dose Rosuvastatin in Healthy Subjects A backpropagation artificial neural network (BPANN) model was established for the prediction of the plasma concentration and pharmacokinetic parameters of rosuvastatin (RVST) in healthy subjects. The data (demographic characteristics and results of clinical laboratory tests) were collected from 4 bioequivalence studies using reference 10-mg RVST calcium tablets. After the data were cleaned using extreme gradient boosting, 13 important factors were extracted to construct the BPANN model. The model was fully validated, and mean impact values (MIVs) were calculated. The model was used to predict the plasma concentration and pharmacokinetic parameters of oral single-dose RVST in healthy subjects under fasting and fed conditions. The predicted and measured values were compared in order to evaluate the accuracy of prediction. The constructed model performed well in validation. The top 3 factors ranked by MIV related to RVST concentration are fasting/fed status, time, and creatinine clearance. The time-concentration profiles of the measured and predicted data agreed well. There were no significant differences (P > .05) in the area under the concentration-time curve from 0 to the last measurable concentration (AUC0-t) and extrapolated to infinity (AUC0-∞), half-time of elimination, peak concentration, and time to peak concentration between the measured data and the data predicted by the BPANN. The BPANN model has accurate prediction ability and can be used to predict RVST concentration and pharmacokinetic parameters in healthy subjects.
He has been suspended (twice) for marijuana-related offenses, and he has been injured. But he also has been, at times, the best running back and one of the most dynamic offensive threats in the NFL. So should the Pittsburgh Steelers commit long term to Le'Veon Bell and buck the trend of going low with RB contracts? ESPN’s AFC North reporters provide their opinions.
Jamison Hensley, ESPN’s Baltimore Ravens reporter: Yes, although the Steelers have a legitimate reason not to do so. Pittsburgh has watched its past top running backs like Willie Parker and Rashard Mendenhall have a shelf life of only five to six years, but Bell is that special exception who deserves that commitment. Since being drafted by the Steelers in 2013, Bell has led the NFL with an average of 128.7 yards from scrimmage per game. He can impact a game, whether it’s patiently waiting for a hole to open up in the running game or breaking a big play off a short pass. Other dual threats like LeSean McCoy and DeMarco Murray ranked in the top five in total yards last season despite getting closer to age 30. Bell, who is 25, should be considered an offensive cornerstone like Antonio Brown.
Pat McManamon, ESPN’s Cleveland Browns reporter: Absolutely. It's true that running backs are not prized the way they once were, but not many running backs can do what Bell does. He's so important to Ben Roethlisberger in the passing game that he's almost a third or fourth wideout any time he's on the field. He runs well but is a major threat in the passing game. This does not mean the Steelers should stop looking for another quality back, though. Depth would help ensure that Bell does not wear out. As long as the Steelers have Roethlisberger they should have Bell as well. Without him, the offense is not as dangerous.
Katherine Terrell, ESPN’s Cincinnati Bengals reporter: Many might advise against committing significant money to a running back in what has become a passing era. The Steelers should make an exception for Bell. It’s true that a team can easily find another running back elsewhere. However, a special running back like Bell is rare. The team could go years without finding another back of his caliber. Bell makes the Steelers' offense tick, and his presence is one of the reasons they’re able to find so much success. He was able to shoulder a lot of the load last season when the Steelers were dealing with injuries at other positions. Teams might be wary of investing in running backs these days because the physical nature of the position often can lead to sudden decline without warning. But considering Bell is only 25, the Steelers could feel good about making a long-term commitment while he’s still in the prime of his career. They’d be wise to keep him around. |
def create_site(self, site_create_props):
    """Queue a "CreateSite" service operation; the request is only sent
    when the client context is executed, so the returned SpoOperation
    is a pending stub until then."""
    result = SpoOperation(self.context)
    qry = ServiceOperationQuery(self, "CreateSite", None, site_create_props,
                                "siteCreationProperties", result)
    self.context.add_query(qry)
    return result
#pragma once
#include <memory>
#include <functional>
#include <string>
#include "level.h"
#include "audio/ambienceplayer.h"
#include "ui/hudmanager.h"
#include "graphics/worldrenderer.h"
namespace TankGame
{
class GameManager
{
public:
explicit GameManager(class IMainRenderer& renderer);
void OnResize(GLsizei width, GLsizei height);
void Update(const class UpdateInfo& updateInfo);
void DrawUI();
void SetLevel(Level&& level, bool testing = false);
void ExitLevel();
void Pause();
inline bool IsPaused() const
{ return m_hudManager.IsPaused(); }
inline bool IsTesting() const
{ return m_isTesting; }
inline Level* GetLevel()
{ return m_level.get(); }
inline const Level* GetLevel() const
{ return m_level.get(); }
inline const WorldRenderer& GetRenderer() const
{ return m_renderer; }
inline WorldRenderer& GetRenderer()
{ return m_renderer; }
inline void LevelComplete(std::string nextLevelName)
{ m_hudManager.ShowLevelComplete(nextLevelName); }
inline void ShowNoAmmoText()
{ m_hudManager.ShowNoAmmoText(); }
inline void SetQuitCallback(std::function<void()> quitCallback)
{ m_quitCallback = std::move(quitCallback); }
inline void SetShowGlobalHealthBar(bool showGlobalHealthBar)
{ m_hudManager.SetShowGlobalHealthBar(showGlobalHealthBar); }
inline void SetGlobalHealthBarPercentage(float percentage)
{ m_hudManager.SetGlobalHealthBarPercentage(percentage); }
private:
class IMainRenderer& m_mainRenderer;
std::function<void()> m_quitCallback;
float m_interactButtonOpacity = 0;
glm::vec2 m_interactButtonPos;
bool m_isTesting = false;
std::unique_ptr<Level> m_level;
HUDManager m_hudManager;
AmbiencePlayer m_ambiencePlayer;
WorldRenderer m_renderer;
};
}
|
If you feel like a big Ulta sale just ended, that’s because it did. But we’re absolutely not complaining about the major deals during Ulta Spring Haul 2019, happening right now. It features deals on basically everything: skincare, makeup, hair products and a lot more. It’s the perfect time to stock up on summer essentials you’ve been saving for. There’s nothing like a change of season and the first day of warm weather to make you crave brighter makeup, a more laid-back hairstyle, and glowing skin. Don’t worry—this sale has got you covered.
The Spring Haul event runs from today through April 20, and there are new deals every week at up to 50 percent off. Some of our favorite brands and items are available at amazing prices, including Makeup Revolution eyeshadow, St. Tropez self-tanner and Juvia’s Place palettes. And these aren’t old, forgotten-about products. Many of them are actually best sellers. There are about 40 brands on sale each week, with a few items discounted from each brand. There’s something for everyone, whether you love a full face beat or prefer a more natural vibe. Plus, Mother’s Day is just around the corner, so don’t forget to grab a few items for Mom too. Ahead, a few of our must-haves from the sale.
Fans swear by this tea-infused tinted moisturizer for its skin-calming benefits.
This best-selling palette features nine pigmented shadows in gorgeous packaging.
Restore softness to your dry hair after a long beach day with this hydrating mask.
While the Spring Haul is both online and in Ulta stores, if there’s something specific you have your eye on, make sure you check Ulta’s website first to see if it’s an online exclusive. And you should probably hurry—we expect these deals to go fast. |
Proton Transport in Hierarchical-Structured Nafion Membranes: An NMR Study. It is known that hierarchical structure plays a key role in many unique material properties, such as the self-cleaning effect of lotus leaves and the antifogging property of the compound eyes of mosquitoes. This study reports a series of highly ordered mesoporous Nafion membranes with unique hierarchical structural features at the nanometer scale. Using NMR, we show for the first time that, at low relative humidity (RH), the proton in the ionic domains migrates via a surface diffusion mechanism and exhibits approximately 2 orders of magnitude faster transport than that in the nanopores, whereas the nanopores act as a reservoir, retaining water and thereby maintaining conductivity at higher temperatures and lower humidities. Creating hierarchical nanoscale structures is thus a feasible and promising strategy to develop PEMs that would enable efficient electrochemical performance in devices such as fuel cells, even in the absence of high humidity and at elevated temperatures.
Strongly pressed by the task of population control. Today, the world population grows at an annual rate of over 80 million. The activities of "the Day of the 5 Billion" sponsored by the United Nations sound an alarm to the world: strictly controlling human reproduction is an urgent task. China, being a developing country, knows only too well the difficulties that over-rapid population growth brings upon economic and social development. Population control is a pressing task. China would like to make the nation prosperous by quadrupling the per capita gross national product (GNP) to US$800 or US$1000, thus raising the people's living standard to a well-off level by the end of the century. The GNP would be quadrupled again to US$4000 per capita, with the standard of living raised accordingly, by the mid-21st century. In order to realize these strategic goals, China must strive to hold its total population to about 1.2 billion at the turn of the century, leaving a better population structure for the next century. At present, China has a population of 1.057 billion and is faced with a new baby boom. It is hoped that, under the leadership of the Party's Central Committee and the State Council, governments at all levels and the people of all nationalities will do a better job in population control by fulfilling the population plan for this year so as to lay a good foundation for enforcing the plan during the 7th 5-year plan. Meanwhile, China will continue to make new contributions to the stabilization of the world's population together with the UNFPA and other international bodies and friendly countries that support China's population control policy.
Nanomachining, by definition, involves mechanically removing nanometer-scaled volumes of material from, for example, a photolithography mask, a semiconductor substrate/wafer, or any surface on which scanning probe microscopy (SPM) can be performed. For the purposes of this discussion, “substrate” will refer to any object upon which nanomachining may be performed.
Examples of photolithography masks include: standard photomasks (193 nm wavelength, with or without immersion), next generation lithography mask (imprint, directed self-assembly, etc.), extreme ultraviolet lithography photomasks (EUV or EUVL), and any other viable or useful mask technology. Examples of other surfaces which are considered substrates are membranes, pellicle films, micro-electronic/nano-electronic mechanical systems MEMS/NEMS. Use of the terms, “mask” or, “substrate” in the present disclosure include the above examples, although it will be appreciated by one skilled in the art that other photomasks or surfaces may also be applicable.
Nanomachining in the related art may be performed by applying forces to a surface of a substrate with a tip (e.g., a diamond cutting bit) that is positioned on a cantilever arm of an atomic force microscope (AFM). More specifically, the tip may first be inserted into the surface of the substrate, and then the tip may be dragged through the substrate in a plane that is parallel to the surface (i.e., the xy-plane). This results in displacement and/or removal of material from the substrate as the tip is dragged along.
As a result of this nanomachining, debris (which includes anything foreign to the substrate surface) is generated on the substrate. More specifically, small particles may form during the nanomachining process as material is removed from the substrate. These particles, in some instances, remain on the substrate once the nanomachining process is complete. Such particles are often found, for example, in trenches and/or cavities present on the substrate.
In order to remove debris, particles or anything foreign to the substrate, particularly in high-aspect photolithography mask structures and electronic circuitry; wet cleaning techniques have been used. More specifically, the use of chemicals in a liquid state and/or agitation of the overall mask or circuitry may be employed. However, both chemical methods and agitation methods such as, for example, megasonic agitation, can adversely alter or destroy both high-aspect ratio structures and mask optical proximity correction features (i.e., features that are generally so small that these features do not image, but rather form diffraction patterns that are used beneficially by mask designers to form patterns).
In order to better understand why high-aspect shapes and structures are particularly susceptible to being destroyed by chemicals and agitation; one has to recall that such shapes and structures, by definition, include large amounts of surface area and are therefore very thermodynamically unstable. As such, these shapes and structures are highly susceptible to delamination and/or other forms of destruction when chemical and/or mechanical energy is applied.
It is important to note that in imprint lithography and EUV (or EUVL) that use of a pellicle to keep particles off the lithographic surface being copied is currently not feasible. Technologies that cannot use pellicles are generally more susceptible to failure by particle contamination which blocks the ability to transfer the pattern to the wafer. Pellicles are in development for EUV masks, but as prior experience with DUV pellicle masks indicates, the use of a pellicle only mitigates (but does not entirely prevent) critical particle and other contaminates from falling on the surface and any subsequent exposure to the high-energy photons will tend to fix these particles to the mask surface with a greater degree of adhesion. In addition, these technologies may be implemented with smaller feature sizes (1 to 300 nm), making them more susceptible to damage during standard wet clean practices which may typically be used. In the specific case of EUV or EUVL, the technology may require the substrate be in a vacuum environment during use and likely during storage awaiting use. In order to use standard wet clean technologies, this vacuum would have to be broken which could easily lead to further particle contamination.
Other currently available methods for removing debris from a substrate make use of cryogenic cleaning systems and techniques. For example, the substrate containing the high-aspect shapes and/or structures may be effectively “sandblasted” using carbon dioxide particles instead of sand.
However, even cryogenic cleaning systems and processes in the related art are also known to adversely alter or destroy high-aspect features. In addition, cryogenic cleaning processes affect a relatively large area of a substrate (e.g., treated areas may be approximately 10 millimeters across or more in order to clean debris with dimensions on the order of nanometers). As a result, areas of the substrate that may not need to have debris removed therefrom are nonetheless exposed to the cryogenic cleaning process and to the potential structure-destroying energies associated therewith. It is noted that there are numerous physical differences between the nano and micro regimes; for the purposes here, the focus will be on the differences related to nanoparticle cleaning processes. There are many similarities between nano- and macro-scale cleaning processes, but there are also many critical differences. For the purposes of this disclosure, the common definition of the nanoscale is of use: this defines a size range of 1 to 100 nm. This is a generalized range, since many of the processes reviewed here may occur below this range (into atomic scales) and may be able to affect particles larger than this range (into the micro regime).
Some physical differences between macro- and nano-scale particle cleaning processes involve transport-related properties, including: surface area, mean free path, thermal transport, and field effects. The first two in this list are more relevant to the thermo-mechanical-chemical behavior of particles, while the last is more concerned with particle interactions with electromagnetic fields. Thermal transport phenomena intersect both of these regimes, being both part of the thermo-mechanical physical chemistry around particles and an interaction of particles with electromagnetic fields in the infrared wavelength regime. To functionally demonstrate some of these differences, a thought experiment is posited: a nanoparticle trapped at the bottom of a high-aspect line and space structure (70 nm deep and 40 nm wide, AR ≈ 1.75). In order to clean this particle with macroscale processes, the energy required to remove the particle is approximately the same as the energy required to damage features or patterns on the substrate, thereby making it impossible to clean the high-aspect line and space structure without damage. For macro-scale cleaning processes (aqueous, surfactant, sonic agitation, etc.), at the energy level where the nanoparticle is removed, the surrounding feature or pattern is also damaged. If one has the technical capability to manipulate nano-sharp (or nanoscale) structures accurately within nano-distances of the nanoparticle, then one may apply the cleaning energy to the nanoparticle only. For nanoscale cleaning processes, the energy required to remove the nanoparticle is applied only to the nanoparticle and not to the surrounding features or patterns on the substrate.
First, looking at the surface area properties of particles, there are mathematical scaling differences which become obvious as a theoretical particle (modelled here as a perfect sphere) approaches the nanoscale regime. The bulk properties of materials are gauged by the volume of material, while the surface is gauged by the external area. For a hypothetical particle, volume scales with the cube (3rd power) of the diameter while surface area scales with the square, so the surface-area-to-volume ratio increases inversely with diameter as the particle shrinks. This difference means that material properties which dominate the behavior of a particle at macro, and even micro, scale diameters become negligible in the nano regime (and smaller). Examples of these properties include the mass and inertial properties of the particle, which is a critical consideration for some cleaning techniques such as sonic agitation or laser shock.
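The scaling argument above can be checked numerically for an idealized spherical particle (a sketch added for illustration; the function name and the micro/nano diameters are chosen here, not taken from the disclosure):

```python
import math

def sphere_ratios(d):
    """Surface area, volume, and surface-to-volume ratio of a sphere of diameter d (m)."""
    r = d / 2.0
    area = 4.0 * math.pi * r**2            # scales with d^2
    volume = (4.0 / 3.0) * math.pi * r**3  # scales with d^3
    return area, volume, area / volume     # ratio = 6/d, grows as the particle shrinks

# Shrinking a 1 um particle to 1 nm raises its surface-to-volume ratio ~1000x,
# which is why surface (adhesion) effects dominate inertia at the nanoscale.
_, _, ratio_micro = sphere_ratios(1e-6)
_, _, ratio_nano = sphere_ratios(1e-9)
print(ratio_nano / ratio_micro)
```

The same 1/d trend holds for any fixed shape, so the conclusion does not depend on the spherical idealization.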
The next transport property examined here is the mean free path. For macro to micro regimes, fluids (in liquid, gaseous, and mixed states) can be accurately modelled as continuum flow. When considering surfaces, such as the surface of an AFM tip and a nanoparticle, that are separated by gaps on the nanoscale or smaller, these fluids cannot be treated as a continuum. This means that fluids do not move according to classical flow models, but can be more accurately related to the ballistic atomic motion of a rarefied gas or even a vacuum. For an average atom or molecule (approximately 0.3 nm in diameter) in a gas at standard temperature and pressure, the calculated mean free path (i.e., the distance a molecule will on average travel in a straight line before impacting another atom or molecule) is approximately 94 nm, which is a large distance for an AFM scanning probe. Since liquids are much denser than gases, they have much smaller mean free paths, but it must be noted that the mean free path for any fluid cannot be less than the atom or molecule's diameter. Comparing the assumed atom or molecule diameter of 0.3 nm given above with the typical tip-to-surface mean separation during non-contact scanning mode, which can be as small as 1 nm, shows that, except for the densest fluids, the fluid environment between an AFM tip apex and the surface being scanned will behave in a range of fluid properties from rarefied gas to near-vacuum. These observations are crucial to demonstrating that thermo-fluid processes behave in fundamentally different ways when scaled from the macro to the nano scale. This affects the mechanisms and kinetics of various process aspects such as chemical reactions, removal of products such as loose particles to the environment, charging or charge neutralization, and the transport of heat or thermal energy.
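As a hedged numerical sketch (the hard-sphere kinetic-theory formula and the 273.15 K / 101325 Pa conditions are assumptions added here), the quoted ~94 nm mean free path follows from λ = k_B·T / (√2·π·d²·p):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(d, temperature, pressure):
    """Hard-sphere kinetic-theory mean free path (m) for molecules of diameter d (m)."""
    return K_B * temperature / (math.sqrt(2.0) * math.pi * d**2 * pressure)

# A 0.3 nm molecule at standard temperature and pressure gives roughly 93 nm,
# consistent with the ~94 nm figure quoted in the text -- far larger than
# the ~1 nm tip-to-surface gap in non-contact AFM scanning.
mfp = mean_free_path(0.3e-9, 273.15, 101325.0)
print(f"mean free path: {mfp * 1e9:.1f} nm")
```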
The known thermal transport differences from macro and nano to sub-nano scales have been found by studies using scanning thermal probe microscopy. One early difference seen is that the transport rate of thermal energy can be an order of magnitude less across nanoscale distances than at the macro scale. This is why scanning thermal probe microscopy can work with a nano probe heated to a temperature difference of sometimes hundreds of degrees with respect to the surface it is scanning in non-contact mode, with tip-to-surface separations as small as the nanometer or Angstrom scale. The reasons for this lower thermal transport are implied in the prior section about the mean free path in fluids. One form of thermal transport, however, is enhanced: blackbody radiation. It has been experimentally shown that the Planck limit for blackbody spectral radiance at a given temperature can be exceeded at nanoscale distances. Thus, not only does the magnitude of thermal transport decrease, but the primary type of transport changes, from conduction/convection to blackbody radiation, in keeping with the rarefied-to-vacuum fluid behavior.
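For a reference point (an illustrative sketch, not from the disclosure; the temperatures are arbitrary), the classical far-field ceiling that near-field radiative transfer can exceed is given by the Stefan-Boltzmann law:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def farfield_radiative_flux(t_hot, t_cold):
    """Classical (Planck-limited) net radiative flux between two ideal black surfaces, W/m^2."""
    return SIGMA * (t_hot**4 - t_cold**4)

# A probe heated a few hundred degrees above a room-temperature surface:
flux = farfield_radiative_flux(600.0, 300.0)
print(f"far-field limit: {flux:.0f} W/m^2")
# At nano-gaps, measured transfer can exceed this classical ceiling.
```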
Differences in the interactions of fields (an electromagnetic field is the primary intended example here, due to its longer wavelengths compared to other possible examples) could, for the purposes of this discussion, be further sub-classified as wavelength-related effects and other quantum effects (in particular tunneling). At nanoscales, the behavior of electromagnetic fields between a source (envisioned here as the apex of an AFM tip, whether as the primary source or as a modification of a relatively far-field source) and a surface will not be subject to the wavelength-dependent diffraction limits to resolution that far-field sources experience. This behavior, commonly referred to as near-field optics, has been used with great success in scanning probe technologies such as near-field scanning optical microscopy (NSOM). Beyond applications in metrology, near-field behavior can affect the electromagnetic interaction of all nanoscale-sized objects spaced nano-distances from each other. The next near-field behavior mentioned is quantum tunneling, where a particle, in particular an electron, can be transported across a barrier it could not classically penetrate. This phenomenon allows for energy transport by a means not seen at macro scales, and is used in scanning tunneling microscopy (STM) and some solid-state electronic devices. Finally, there are more esoteric quantum effects often seen with (but not limited to) electromagnetic fields at nanoscales, such as proximity excitation and sensing of plasmonic resonances; however, it will be appreciated by one skilled in the art that the current discussion gives a sufficient demonstration of the fundamental differences between macro- and nano-scale physical processes.
In the following, the term "surface energy" may be used to refer to the thermodynamic properties of surfaces which are available to perform work (in this case, the work of adhesion of debris to the surfaces of the substrate and the tip, respectively). One way to classically calculate this is the Gibbs free energy, which is given as:

G(p, T) = U + pV − TS

where: U = internal energy; p = pressure; V = volume; T = temperature; and S = entropy.

Since the current practice does not vary pressure, volume, and temperature (although this does not need to be the case, since these parameters could equally be manipulated to obtain the desired effects), they will not be discussed in detail. Thus, the only terms being manipulated in the equation above will be internal energy and entropy, as driving mechanisms in the methods discussed below. Entropy, since it is intended that the probe tip surface will be cleaner (i.e., no debris or unintended surface contaminates) than the substrate being cleaned, is naturally a thermodynamic driving mechanism to preferentially contaminate the tip surface over the substrate (and then, subsequently, the cleaner pallet of soft material). The internal energy is manipulated between the pallet, tip, debris, and substrate surfaces through the thermophysical properties characterized by their respective surface energies.
One way to relate the differential surface energy to the Gibbs free energy is to look at theoretical developments for the creep properties of engineering materials at high temperatures (i.e., a significant fraction of their melting point temperature) for a cylinder of radius r and length l, under uniaxial tension P:

dG = −P·dl + γ·dA

where: γ = surface energy density [J/m²]; and A = surface area [m²].

The observation that the stress and extrinsic surface energy of an object are factors in its Gibbs free energy induces one to believe these factors (in addition to the surface energy density γ) could also be manipulated to perform reversible preferential adhesion of the debris to the tip (with respect to the substrate) and then, subsequently, to the soft pallet. Means to do this include applied stress (whether externally or internally applied) and temperature. It should be noted that the driving process is intended always to result in a series of surface interactions with a net ΔG < 0 in order to provide a differential surface energy gradient that preferentially decontaminates the substrate and subsequently preferentially contaminates the soft pallet. This could be considered analogous to a ball preferentially rolling down an incline to a lower energy state (except that, here, the incline in thermodynamic surface energy also includes the overall disorder in the whole system, or entropy). FIG. 6 shows one possible set of surface interactions where the method described here could provide a downhill thermodynamic Gibbs free energy gradient to selectively remove a contaminate and selectively deposit it on a soft patch. This sequence is one of the theoretical mechanisms thought to be responsible for the current practice aspects using low surface energy fluorocarbon materials with medium- to low-surface-energy tip materials such as diamond.
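The "downhill" ΔG < 0 criterion can be sketched from the surface-energy terms alone. All numeric values below are hypothetical, chosen only so that the substrate > tip > pallet ordering of surface energy densities holds, as the passage describes for diamond tips and soft fluorocarbon pallets:

```python
# Hypothetical surface energy densities, J/m^2; only their ordering matters.
GAMMA = {"substrate": 1.00, "tip": 0.12, "pallet": 0.02}

def delta_g_transfer(gamma_src, gamma_dst, contact_area):
    """Surface-energy change when debris detaches from one surface and
    adheres to another over the same contact area (p, V, T held fixed)."""
    return (gamma_dst - gamma_src) * contact_area

area = 1e-16  # ~10 nm x 10 nm contact patch, m^2
step1 = delta_g_transfer(GAMMA["substrate"], GAMMA["tip"], area)  # substrate -> tip
step2 = delta_g_transfer(GAMMA["tip"], GAMMA["pallet"], area)     # tip -> pallet
print(step1 < 0 and step2 < 0)  # each step is thermodynamically downhill
```

Any monotonically decreasing sequence of surface energies gives the same downhill result, which is the gradient the passage describes.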
//
// alt_clk_is_enabled() returns whether the specified clock is enabled or not.
//
ALT_STATUS_CODE alt_clk_is_enabled(ALT_CLK_t clk)
{
ALT_STATUS_CODE status = ALT_E_BAD_ARG;
switch (clk)
{
case ALT_CLK_MAIN_PLL:
case ALT_CLK_PERIPHERAL_PLL:
case ALT_CLK_SDRAM_PLL:
status = (alt_clk_pll_is_bypassed(clk) != ALT_E_TRUE);
break;
case ALT_CLK_MAIN_PLL_C0:
case ALT_CLK_MAIN_PLL_C1:
case ALT_CLK_MAIN_PLL_C2:
case ALT_CLK_MAIN_PLL_C3:
case ALT_CLK_MAIN_PLL_C4:
case ALT_CLK_MAIN_PLL_C5:
case ALT_CLK_MPU:
case ALT_CLK_MPU_L2_RAM:
case ALT_CLK_MPU_PERIPH:
case ALT_CLK_L3_MAIN:
case ALT_CLK_L3_SP:
case ALT_CLK_DBG_BASE:
case ALT_CLK_MAIN_QSPI:
case ALT_CLK_MAIN_NAND_SDMMC:
case ALT_CLK_PERIPHERAL_PLL_C0:
case ALT_CLK_PERIPHERAL_PLL_C1:
case ALT_CLK_PERIPHERAL_PLL_C2:
case ALT_CLK_PERIPHERAL_PLL_C3:
case ALT_CLK_PERIPHERAL_PLL_C4:
case ALT_CLK_PERIPHERAL_PLL_C5:
case ALT_CLK_SDRAM_PLL_C0:
case ALT_CLK_SDRAM_PLL_C1:
case ALT_CLK_SDRAM_PLL_C2:
case ALT_CLK_SDRAM_PLL_C5:
status = ALT_E_BAD_ARG;
break;
case ALT_CLK_L4_MAIN:
status = (ALT_CLKMGR_MAINPLL_EN_L4MAINCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_L3_MP:
status = (ALT_CLKMGR_MAINPLL_EN_L3MPCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_L4_MP:
status = (ALT_CLKMGR_MAINPLL_EN_L4MPCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_L4_SP:
status = (ALT_CLKMGR_MAINPLL_EN_L4SPCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DBG_AT:
status = (ALT_CLKMGR_MAINPLL_EN_DBGATCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DBG:
status = (ALT_CLKMGR_MAINPLL_EN_DBGCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DBG_TRACE:
status = (ALT_CLKMGR_MAINPLL_EN_DBGTRACECLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DBG_TIMER:
status = (ALT_CLKMGR_MAINPLL_EN_DBGTMRCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_CFG:
status = (ALT_CLKMGR_MAINPLL_EN_CFGCLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_H2F_USER0:
status = (ALT_CLKMGR_MAINPLL_EN_S2FUSER0CLK_GET(alt_read_word(ALT_CLKMGR_MAINPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_EMAC0:
status = (ALT_CLKMGR_PERPLL_EN_EMAC0CLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_EMAC1:
status = (ALT_CLKMGR_PERPLL_EN_EMAC1CLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_USB_MP:
status = (ALT_CLKMGR_PERPLL_EN_USBCLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_SPI_M:
status = (ALT_CLKMGR_PERPLL_EN_SPIMCLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_CAN0:
status = (ALT_CLKMGR_PERPLL_EN_CAN0CLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_CAN1:
status = (ALT_CLKMGR_PERPLL_EN_CAN1CLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_GPIO_DB:
status = (ALT_CLKMGR_PERPLL_EN_GPIOCLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_H2F_USER1:
status = (ALT_CLKMGR_PERPLL_EN_S2FUSER1CLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_SDMMC:
status = (ALT_CLKMGR_PERPLL_EN_SDMMCCLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_NAND_X:
status = (ALT_CLKMGR_PERPLL_EN_NANDXCLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_NAND:
status = (ALT_CLKMGR_PERPLL_EN_NANDCLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_QSPI:
status = (ALT_CLKMGR_PERPLL_EN_QSPICLK_GET(alt_read_word(ALT_CLKMGR_PERPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DDR_DQS:
status = (ALT_CLKMGR_SDRPLL_EN_DDRDQSCLK_GET(alt_read_word(ALT_CLKMGR_SDRPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DDR_2X_DQS:
status = (ALT_CLKMGR_SDRPLL_EN_DDR2XDQSCLK_GET(alt_read_word(ALT_CLKMGR_SDRPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_DDR_DQ:
status = (ALT_CLKMGR_SDRPLL_EN_DDRDQCLK_GET(alt_read_word(ALT_CLKMGR_SDRPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
case ALT_CLK_H2F_USER2:
status = (ALT_CLKMGR_SDRPLL_EN_S2FUSER2CLK_GET(alt_read_word(ALT_CLKMGR_SDRPLL_EN_ADDR)))
? ALT_E_TRUE : ALT_E_FALSE;
break;
default:
status = ALT_E_BAD_ARG;
break;
}
return status;
} |
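The pattern above — one enable-bit read per clock gate, with unknown clocks rejected as bad arguments — can be sketched in miniature. This is a hedged illustration: the register names, bit positions, and contents below are invented for the example and are not the real Clock Manager register map.

```python
# Mock register file (stands in for alt_read_word() on memory-mapped I/O).
REGISTERS = {
    "MAINPLL_EN": 0b101,  # bit 0 = L4_MAIN, bit 2 = L4_MP (illustrative)
    "PERPLL_EN": 0b000,   # bit 1 = EMAC1 (illustrative)
}

# Clock name -> (enable register, bit position), playing the role of the
# per-case *_GET macros in the switch statement above.
CLOCK_ENABLE_BITS = {
    "L4_MAIN": ("MAINPLL_EN", 0),
    "L4_MP": ("MAINPLL_EN", 2),
    "EMAC1": ("PERPLL_EN", 1),
}

def clock_is_enabled(clock):
    """Return True/False for a known clock; raise ValueError for an unknown
    one (the analogue of ALT_E_TRUE / ALT_E_FALSE / ALT_E_BAD_ARG)."""
    try:
        reg, bit = CLOCK_ENABLE_BITS[clock]
    except KeyError:
        raise ValueError("bad clock argument: %s" % clock)
    return bool((REGISTERS[reg] >> bit) & 1)
```

A table-driven lookup like this is one common alternative to a long switch when the per-clock logic is identical apart from the register field being read.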
Microsatellites reveal substantial among-population genetic differentiation and strong inbreeding in the relict fern Dryopteris aemula. BACKGROUND AND AIMS A previous study detected no allozyme diversity in Iberian populations of the buckler-fern Dryopteris aemula. The use of a more sensitive marker, such as microsatellites, was thus needed to reveal the genetic diversity, breeding system and spatial genetic structure of this species in natural populations. METHODS Eight microsatellite loci for D. aemula were developed and their cross-amplification with other ferns was tested. Five polymorphic loci were used to characterize the amount and distribution of genetic diversity of D. aemula in three populations from the Iberian Peninsula and one population from the Azores. KEY RESULTS Most microsatellite markers developed were transferable to taxa close to D. aemula. Overall genetic variation was low (H(T) = 0.447), but was higher in the Azorean population than in the Iberian populations of this species. Among-population genetic differentiation was high (F(ST) = 0.520). All loci strongly departed from Hardy-Weinberg equilibrium. In the population where genetic structure was studied, no spatial autocorrelation was found in any distance class. CONCLUSIONS The higher genetic diversity observed in the Azorean population studied suggested a possible refugium in this region from which mainland Europe has been recolonized after the Pleistocene glaciations. High among-population genetic differentiation indicated restricted gene flow (i.e. lack of spore exchange) across the highly fragmented area occupied by D. aemula. The deviations from Hardy-Weinberg equilibrium reflected strong inbreeding in D. aemula, a trait rarely observed in homosporous ferns. The absence of spatial genetic structure indicated effective spore dispersal over short distances. Additionally, the cross-amplification of some D. aemula microsatellites makes them suitable for use in other Dryopteris taxa. |
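As a rough illustration of the differentiation statistic reported above, F_ST for a single biallelic locus can be computed as (H_T − H_S)/H_T, where H_S is the mean within-population expected heterozygosity and H_T the total expected heterozygosity. The allele frequencies below are made up for the sketch and are not the study's data.

```python
def fst(pop_freqs):
    """Wright's F_ST for one biallelic locus.

    pop_freqs: frequency of allele A in each population.
    """
    # mean within-population expected heterozygosity (2pq per population)
    h_s = sum(2 * p * (1 - p) for p in pop_freqs) / len(pop_freqs)
    # total expected heterozygosity from the pooled allele frequency
    p_bar = sum(pop_freqs) / len(pop_freqs)
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t
```

Two strongly diverged populations (e.g. allele frequencies 0.9 and 0.1) give F_ST = 0.64, in the same range as the high among-population differentiation the abstract reports; identical populations give 0.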
Wherein Amy’s Baking Co. Takes Patton Oswalt’s Twitter Bio Seriously
(Ed.: As of Thurs. 5/16, the Twitter account is now described as a parody. But let’s take a moment to appreciate how hard it is to tell with these people.)
Teh internets drama du jour comes courtesy of Amy’s Baking Co., a dining establishment in Scottsdale, AZ. They’ve earned one-star reviews on Yelp, and had their buns handed to them on a recent episode of Kitchen Nightmares.
LAist has a fantastic rundown of the owners themselves coming unhinged via social media today, and their ongoing salute to the caps-lock key is dominating the /r/drama subreddit to the point where they’re now threatening to sue Reddit. (So no, they don’t seem to understand what Reddit is.)
Sure, that would all be entertaining on its own. But wait, there’s more! Within the nuggets of full-blown stupid that Amy’s Baking Co. has provided, there’s one so succulent it almost deserves a place on their dusty, unread menu.
Earlier this evening, Patton Oswalt tweeted that he was heading to the restaurant. And then this happened. Keep in mind that Patton’s Twitter bio reads “Mr. Oswalt is a former wedding deejay from Northern Virginia”.
Yeah, cut that out, Patton! You’re just JEALOUS that nobody’s ever heard of you, so obviously you’re trying to ride their coattails. It must suck to not be a millionaire like the owners of Amy’s. Stupid loser wedding dj.
Don’t you just want to wrap yourself in those tweets and let them soak up your tears of joy as you drift off into dreamland?
(Some folks are questioning whether the account is authentic — and Amy’s is claiming all their social media accounts were hacked. Time will tell, but it’s gotta be hard to parody a voice that’s already so far off the deep end.)
If “Amy” and her husband are reading – and let’s not kid ourselves, they totally are – may we recommend sucking on one of these selections for dessert? |
import utils
from kivymd.uix.screen import MDScreen
from kivymd.uix.picker import MDDatePicker
from kivy.uix.button import Button
from kivy.app import App
from kivy.uix.image import Image
from kivymd.uix.card import MDCard
from kivy.uix.behaviors import ToggleButtonBehavior
from kivymd.uix.tooltip import MDTooltip
from kivy.graphics import Color, Line
from kivy.properties import ObjectProperty
from kivymd.uix.behaviors import (
RectangularRippleBehavior,
BackgroundColorBehavior,
FakeRectangularElevationBehavior,
)
utils.load_kv("ImagesPl.kv")
class MyButton(ToggleButtonBehavior, Image, BackgroundColorBehavior,
               FakeRectangularElevationBehavior):
    # Kivy colour channels are floats in the 0-1 range, not 0-255.
    md_bg_color = [0.75, 0.75, 0.75, 1]

    def __init__(self, **kwargs):
        super(MyButton, self).__init__(**kwargs)
        self.source = '1.png'
        self.elevation = 12

    def on_state(self, widget, value):
        # Draw a green border while toggled down, a grey one otherwise.
        if value == 'down':
            self.source = '1.png'
            with self.canvas:
                Color(0, 0.5, 0)
                Line(width=3, rectangle=(self.x, self.y, self.width, self.height))
        else:
            self.source = '1.png'
            with self.canvas:
                Color(0.75, 0.75, 0.75)
                Line(width=3, rectangle=(self.x, self.y, self.width, self.height))
class ImagesPl(MDScreen):
    dob = None
    layout_content = ObjectProperty(None)

    def __init__(self, **kwargs):
        # Forward **kwargs so Kivy properties passed at construction still work.
        super(ImagesPl, self).__init__(**kwargs)
        if self.layout_content:
            self.layout_content.bind(minimum_height=self.layout_content.setter('height'))
class ButtonILike(Button):
pass |
// Copyright(c) 2017 POLYGONTEK
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "Precompiled.h"
#include "Render/Mesh.h"
#include "Asset/Asset.h"
#include "Asset/Resource.h"
#include "Asset/GuidMapper.h"
BE_NAMESPACE_BEGIN
OBJECT_DECLARATION("Mesh", MeshResource, Resource)
BEGIN_EVENTS(MeshResource)
END_EVENTS
void MeshResource::RegisterProperties() {
}
MeshResource::MeshResource() {
mesh = nullptr;
}
MeshResource::~MeshResource() {
if (mesh) {
meshManager.ReleaseMesh(mesh);
}
}
Mesh *MeshResource::GetMesh() {
if (mesh) {
return mesh;
}
const Str meshPath = resourceGuidMapper.Get(asset->GetGuid());
mesh = meshManager.GetMesh(meshPath);
return mesh;
}
void MeshResource::Rename(const Str &newName) {
const Str meshPath = resourceGuidMapper.Get(asset->GetGuid());
Mesh *existingMesh = meshManager.FindMesh(meshPath);
if (existingMesh) {
meshManager.RenameMesh(existingMesh, newName);
}
}
bool MeshResource::Reload() {
const Str meshPath = resourceGuidMapper.Get(asset->GetGuid());
Mesh *existingMesh = meshManager.FindMesh(meshPath);
if (existingMesh) {
existingMesh->Reload();
return true;
}
return false;
}
bool MeshResource::Save() {
return false;
}
BE_NAMESPACE_END
|
package util.evolution.crossover;
import data.Individual;
public interface Crossover {
Individual crossover(Individual first, Individual second);
}
|
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);
int n = scanner.nextInt();
String In1[] = new String[n];
int In2[] = new int[n];
String N1b[] = new String[n];
int N2b[] = new int[n];
String N1s[] = new String[n];
int N2s[] = new int[n];
int count = 0;
int minj = 0;
int l, i, j = 0;
for (int k = 0; k <= n - 1; k++) {
String x = scanner.next();
String y = x.substring(0, 1);
String z1 = x.substring(1, 2);
int z = Integer.parseInt(z1);
In1[k] = y;
In2[k] = z;
N1b[k] = y;
N2b[k] = z;
N1s[k] = y;
N2s[k] = z;
        } // up to this point: the elements have been loaded into the arrays
        // bubble sort
for (i = 0; i < n - 1; i++) {
for (j = n - 1; j > i; j--) {
if (N2b[j] < N2b[j-1]) {
int a = N2b[j - 1];
N2b[j - 1] = N2b[j];
N2b[j] = a;
String b = N1b[j - 1];
N1b[j - 1] = N1b[j];
N1b[j] = b;
}
}
}
for (int k = 0; k < n - 1; k++) {
System.out.print (N1b[k] + N2b[k] + " ");
}
System.out.println (N1b[n - 1] + N2b[n - 1]);
System.out.println("Stable");
        // selection sort
for (i = 0; i <= n - 1; i++) {
minj = i;
for (j = i; j <= n - 1; j++) {
if (N2s[j] < N2s[minj]) {
minj = j;
}
}
int a = N2s[i];
N2s[i] = N2s[minj];
N2s[minj] = a;
String b = N1s[i];
N1s[i] = N1s[minj];
N1s[minj] = b;
}
for (int k = 0; k < n - 1; k++) {
System.out.print (N1s[k] + N2s[k] + " ");
}
System.out.println (N1s[n - 1] + N2s[n - 1]);
for (l = 0;l <= n - 1;l++) {
            if (!N1b[l].equals(N1s[l])) { // compare String contents with equals(), not ==
System.out.println("Not stable");
break;
}
if (l == n - 1) {
System.out.println("Stable");
}
}
}
}
|
Adsorption and fluctuations of giant liposomes studied by electrochemical impedance measurements. The present study describes a novel approach based on electrochemical impedance measurements to follow the adsorption of giant liposomes on protein-coated solid surfaces with a time resolution in the order of seconds. The technical key features are circular gold-film electrodes as small as a few hundred micrometers in diameter and measurements of the electrode capacitance using AC signals in the kilohertz regime. Using Monte Carlo simulations, we were able to support the experiments and extract the rate constant of liposome adsorption. Besides monitoring the adsorption of liposomes on protein-coated surfaces, we also applied this technique to study shape fluctuations of the adsorbed vesicles and compared the corresponding power spectra with those recorded for hard particles and living animal cells. |
/*
* Copyright 2000-2017 JetBrains s.r.o.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.intellij.openapi.fileEditor.impl.text;
import com.intellij.openapi.application.ApplicationManager;
import com.intellij.openapi.application.ModalityState;
import com.intellij.openapi.editor.Document;
import com.intellij.openapi.editor.Editor;
import com.intellij.openapi.fileEditor.FileEditorManager;
import com.intellij.openapi.fileEditor.FileEditorStateLevel;
import com.intellij.openapi.progress.util.ProgressIndicatorBase;
import com.intellij.openapi.progress.util.ProgressIndicatorUtils;
import com.intellij.openapi.project.Project;
import com.intellij.openapi.util.EmptyRunnable;
import com.intellij.openapi.util.Key;
import com.intellij.openapi.util.Ref;
import com.intellij.openapi.wm.IdeFocusManager;
import com.intellij.psi.PsiDocumentManager;
import com.intellij.ui.EditorNotifications;
import com.intellij.util.concurrency.AppExecutorUtil;
import org.jetbrains.annotations.NotNull;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;
public class AsyncEditorLoader {
private static final ExecutorService ourExecutor = AppExecutorUtil.createBoundedApplicationPoolExecutor("AsyncEditorLoader pool", 2);
private static final Key<AsyncEditorLoader> ASYNC_LOADER = Key.create("ASYNC_LOADER");
private static final int SYNCHRONOUS_LOADING_WAITING_TIME_MS = 200;
@NotNull private final Editor myEditor;
@NotNull private final Project myProject;
@NotNull private final TextEditorImpl myTextEditor;
@NotNull private final TextEditorComponent myEditorComponent;
@NotNull private final TextEditorProvider myProvider;
private final List<Runnable> myDelayedActions = new ArrayList<>();
private TextEditorState myDelayedState;
private final CompletableFuture<?> myLoadingFinished = new CompletableFuture<>();
AsyncEditorLoader(@NotNull TextEditorImpl textEditor, @NotNull TextEditorComponent component, @NotNull TextEditorProvider provider) {
myProvider = provider;
myTextEditor = textEditor;
myProject = textEditor.myProject;
myEditorComponent = component;
myEditor = textEditor.getEditor();
myEditor.putUserData(ASYNC_LOADER, this);
myEditorComponent.getContentPanel().setVisible(false);
}
@NotNull
Future<?> start() {
ApplicationManager.getApplication().assertIsDispatchThread();
Future<Runnable> continuationFuture = scheduleLoading();
boolean showProgress = true;
if (worthWaiting()) {
/*
* Possible alternatives:
* 1. show "Loading" from the beginning, then it'll be always noticeable at least in fade-out phase
* 2. show a gray screen for some time and then "Loading" if it's still loading; it'll produce quick background blinking for all editors
* 3. show non-highlighted and unfolded editor as "Loading" background and allow it to relayout at the end of loading phase
* 4. freeze EDT a bit and hope that for small editors it'll suffice and for big ones show "Loading" after that.
* This strategy seems to produce minimal blinking annoyance.
*/
Runnable continuation = resultInTimeOrNull(continuationFuture, SYNCHRONOUS_LOADING_WAITING_TIME_MS);
if (continuation != null) {
showProgress = false;
loadingFinished(continuation);
}
}
if (showProgress) myEditorComponent.startLoading();
return myLoadingFinished;
}
private Future<Runnable> scheduleLoading() {
PsiDocumentManager psiDocumentManager = PsiDocumentManager.getInstance(myProject);
Document document = myEditor.getDocument();
return ourExecutor.submit(() -> {
AtomicLong docStamp = new AtomicLong();
Ref<Runnable> ref = new Ref<>();
try {
while (!myEditorComponent.isDisposed()) {
ProgressIndicatorUtils.runWithWriteActionPriority(() -> psiDocumentManager.commitAndRunReadAction(() -> {
docStamp.set(document.getModificationStamp());
ref.set(myProject.isDisposed() ? EmptyRunnable.INSTANCE : myTextEditor.loadEditorInBackground());
}), new ProgressIndicatorBase());
Runnable continuation = ref.get();
if (continuation != null) {
psiDocumentManager.performLaterWhenAllCommitted(() -> {
if (docStamp.get() == document.getModificationStamp()) loadingFinished(continuation);
else if (!myEditorComponent.isDisposed()) scheduleLoading();
}, ModalityState.any());
return continuation;
}
ProgressIndicatorUtils.yieldToPendingWriteActions();
}
}
finally {
if (ref.isNull()) invokeLater(() -> loadingFinished(null));
}
return null;
});
}
private static void invokeLater(Runnable runnable) {
ApplicationManager.getApplication().invokeLater(runnable, ModalityState.any());
}
private boolean worthWaiting() {
// cannot perform commitAndRunReadAction in parallel to EDT waiting
return !PsiDocumentManager.getInstance(myProject).hasUncommitedDocuments() &&
!ApplicationManager.getApplication().isWriteAccessAllowed();
}
private static <T> T resultInTimeOrNull(Future<T> future, long timeMs) {
try {
return future.get(timeMs, TimeUnit.MILLISECONDS);
}
catch (InterruptedException | TimeoutException ignored) {}
catch (ExecutionException e) {
throw new RuntimeException(e);
}
return null;
}
private void loadingFinished(Runnable continuation) {
if (myLoadingFinished.isDone()) return;
myLoadingFinished.complete(null);
myEditor.putUserData(ASYNC_LOADER, null);
if (myEditorComponent.isDisposed()) return;
if (continuation != null) {
continuation.run();
}
if (myEditorComponent.isLoading()) {
myEditorComponent.stopLoading();
}
myEditorComponent.getContentPanel().setVisible(true);
if (myDelayedState != null) {
TextEditorState state = new TextEditorState();
state.setFoldingState(myDelayedState.getFoldingState());
myProvider.setStateImpl(myProject, myEditor, state);
myDelayedState = null;
}
for (Runnable runnable : myDelayedActions) {
myEditor.getScrollingModel().disableAnimation();
runnable.run();
}
myEditor.getScrollingModel().enableAnimation();
if (FileEditorManager.getInstance(myProject).getSelectedTextEditor() == myEditor) {
IdeFocusManager.getInstance(myProject).requestFocusInProject(myTextEditor.getPreferredFocusedComponent(), myProject);
}
EditorNotifications.getInstance(myProject).updateNotifications(myTextEditor.myFile);
}
public static void performWhenLoaded(@NotNull Editor editor, @NotNull Runnable runnable) {
ApplicationManager.getApplication().assertIsDispatchThread();
AsyncEditorLoader loader = editor.getUserData(ASYNC_LOADER);
if (loader == null) {
runnable.run();
}
else {
loader.myDelayedActions.add(runnable);
}
}
@NotNull
TextEditorState getEditorState(@NotNull FileEditorStateLevel level) {
ApplicationManager.getApplication().assertIsDispatchThread();
TextEditorState state = myProvider.getStateImpl(myProject, myEditor, level);
if (!myLoadingFinished.isDone() && myDelayedState != null) {
state.setDelayedFoldState(myDelayedState::getFoldingState);
}
return state;
}
void setEditorState(@NotNull final TextEditorState state) {
ApplicationManager.getApplication().assertIsDispatchThread();
if (!myLoadingFinished.isDone()) {
myDelayedState = state;
}
myProvider.setStateImpl(myProject, myEditor, state);
}
public static boolean isEditorLoaded(@NotNull Editor editor) {
return editor.getUserData(ASYNC_LOADER) == null;
}
} |
Reducing Preprocessing Overhead Times in a Reconfigurable Accelerator of Finite Difference Applications Hardware accelerators integrated with general-purpose processors (GPPs) are increasingly employed to achieve lower power consumption and higher processing speed. However, due to the impact of the memory-wall problem, this kind of acceleration does not always achieve the demanded performance. To resolve this issue, a Large-Scale Reconfigurable Data-Path (LSRDP) has been proposed which is able to reduce the required memory bandwidth. Since the LSRDP consists of a large number of Processing Elements (PEs), it can potentially achieve very high performance. To take advantage of the LSRDP architecture, a proper implementation of the target application is an essential requirement. In this paper, three ways of implementing applications are introduced: a primitive implementation, a software-optimized version, and a software-optimized version with additional memory-access-controller hardware to decrease the amount of redundant data transfer. Our experimental results reveal an execution time about 100 times smaller on the LSRDP compared with a GPP when the proposed optimization ideas are applied.
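To make the bandwidth argument concrete, here is a hedged back-of-the-envelope sketch; the counting model and sizes are illustrative, not taken from the paper. It shows one explicit 1-D finite-difference step, plus a word-traffic estimate comparing a GPP-style run (every intermediate grid written back to memory) with an LSRDP-style fused pipeline that keeps intermediates inside the PE array.

```python
def heat_step(u, alpha=0.25):
    # one explicit 3-point finite-difference step (boundary values held fixed)
    return [u[0]] + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

def memory_traffic(n, steps, fused):
    # words moved to/from main memory for `steps` time steps on an n-point grid:
    # unfused, every step reads and writes the whole grid; fused, only the
    # initial read and the final write touch memory.
    return 2 * n if fused else 2 * n * steps
```

Under this toy model, fusing 100 time steps cuts the memory traffic by a factor of 100 — the same order of magnitude as the speedup quoted above, though the real accelerator's accounting is of course more involved.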
from django.core.mail import EmailMessage
import os
def update():
    print("Starting update notification")
    # category directory names on disk (spelling kept as-is)
    categories = [
        "banana",
        "bulb",
        "calculator",
        "carrot",
        "clock",
        "crecent",
        "diamond",
        "icecream",
        "strawberry",
        "t-shirt",
    ]
    # number of accumulated images
    amount = 0
    base_dir = "/home/new_jenkins/jenkins/workspace/k-test/backend/media/dataset/success/"
    for category in categories:
        if os.path.isdir(base_dir + category):
            amount += len(os.listdir(base_dir + category))
    if amount > 100:
        email = EmailMessage(
            '[PINGO] Notice: accumulated successfully classified images',  # subject
            'There are currently ' + str(amount) + ' images. Please move the images to the GPU server and test them.',
            to=['<EMAIL>'],  # recipient list
        )
        email.send()
|
Young Americans may not buy many print newspapers, but they do enjoy a "print-like" reading experience.
A joint survey of news consumers from the Pew Research Center and The Economist found that 60% of Americans under the age of 40 prefer a traditional, print-like news reading experience on tablets, free of interactive components like audio and video. Those older than 40 expressed similar preferences.
The same appears to be true for consumers of lifestyle magazines. At Mashable's Media Summit late last month, Hearst President David Carey said readers favor a conventional reading experience on tablets like the iPad.
"We had to find out whether people wanted something all-new and interactive, or if they just wanted the magazine in mobile mode," Carey recounted on stage. "The industry overshot the interactivity early on. What we discovered is that most people just want the product itself," he said.
But still, a large contingent of readers in the survey — approximately four in 10 — expressed a preference for interactive news-reading experiences. That creates a difficult proposition for publishers who want to serve both sets of readers.
The study found that men and more highly educated adults tend to consume more news on mobile: 43% of male tablet owners and 41% of male smartphone owners read news daily on their devices, compared to 32% of female tablet owners and 30% of female smartphone owners, respectively. Men also tend to check for news more often and are more likely to engage with long-form news articles and videos.
Beyond mobile news consumption habits, the study also uncovered some promising data for display advertisers. Younger tablet users are far more likely than their older counterparts to touch ads while reading news on their tablets: A quarter of 18 to 29-year-olds say they sometimes tap on ads, compared to 12% of 30 to 49-year-olds and 7% of 50 to 64-year-olds.
The findings were based on responses from 9,513 U.S. adults, 4,638 of whom are mobile device owners. |
Improving Adherence to Clinical Pathways Through Natural Language Processing on Electronic Medical Records This paper presents a pioneering and practical experience in the development and implementation of a clinical decision support system (CDSS) based on natural language processing (NLP) and artificial intelligence (AI) techniques. Our CDSS notifies primary care physicians in real time about recommendations regarding the healthcare process. This is, to the best of our knowledge, the first real-time CDSS implemented in the Spanish National Health System. We achieved adherence rate improvements in eight out of 18 practices. Moreover, the provider's feedback was very positive, describing the solution as fast, useful, and unintrusive. Our CDSS reduced clinical variability and revealed the usefulness of NLP and AI techniques for the evaluation and improvement of health care. |
Scales SOFA and qSOFA as prognosis of mortality in patients diagnosed with sepsis from a Peruvian clinic Introduction: Sepsis is a clinical condition that seriously threatens the body's balance and is still a significant cause of death. Therefore, clinical management is aimed at timely classification and the implementation of emergency measures based on scoring systems for detection that help reduce complications in patients. That is the importance of using SOFA (Sequential Organ Failure Assessment) and qSOFA (quick SOFA) in the different services for hospitalized patients. Objective: To evaluate the usefulness of the SOFA and qSOFA scales as predictors of mortality in patients with sepsis hospitalized in the intensive care unit (ICU) of the Good Hope Clinic from January to December 2015. Materials and methods: Retrospective study of adult patients hospitalized in the ICU/NICU with a diagnosis of sepsis. Epidemiological, clinical, and laboratory data were collected to apply the SOFA and qSOFA scales. We described and analysed the variables studied and compared the scoring systems using ROC curves. Results: The main infectious focus was respiratory (41.5%). The proportion of patients who died was 28.3%. The variables serum creatinine and lactate were statistically significant, with OR = 11.67 (95% CI 2.58-52.85, p<0.001) and OR = 5.78 (95% CI 1.45-23.03, p = 0.009), respectively. The AUC for SOFA was 0.698, p = 0.026, 95% CI (0.54-0.85), which was statistically significant. A cutoff point of 7.5 was found, with a sensitivity of 46.7% and a specificity of 86.8%. qSOFA did not show a statistically significant association. Conclusions: The SOFA scale predicted the probability of death in patients with sepsis admitted to the ICU/NICU.
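The cutoff analysis above can be reproduced mechanically: pick a threshold on the score and count true/false positives and negatives against the observed outcome. The scores and outcomes below are toy values, not the study's data; they only show how a 7.5 cutoff translates into a sensitivity/specificity pair.

```python
def sens_spec(scores, died, cutoff):
    """Sensitivity and specificity of `score > cutoff` as a death predictor."""
    tp = sum(s > cutoff and d for s, d in zip(scores, died))       # predicted+, died
    fn = sum(s <= cutoff and d for s, d in zip(scores, died))      # predicted-, died
    tn = sum(s <= cutoff and not d for s, d in zip(scores, died))  # predicted-, survived
    fp = sum(s > cutoff and not d for s, d in zip(scores, died))   # predicted+, survived
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 − specificity is exactly how the ROC curve (and its AUC) reported in the abstract is built.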
#ifndef SimG4CMS_TkSimHitPrinter_H
#define SimG4CMS_TkSimHitPrinter_H
#include "DataFormats/GeometryVector/interface/LocalPoint.h"
#include "DataFormats/GeometryVector/interface/LocalVector.h"
#include <string>
#include <fstream>
class G4Track;
class TkSimHitPrinter
{
public:
  TkSimHitPrinter(std::string);
  ~TkSimHitPrinter();
  void startNewSimHit(std::string, std::string, int, int, int);
  void printLocal(Local3DPoint, Local3DPoint) const;
  void printGlobal(Local3DPoint, Local3DPoint) const;
  void printHitData(float, float) const;
  void printMomentumOfTrack(float, std::string, int sign) const;
  int getPropagationSign(Local3DPoint, Local3DPoint);
  void printGlobalMomentum(float, float, float) const;
private:
int eventno;
static std::ofstream * theFile;
};
#endif
|
"Safe dangerous" 17-story Schlitterbahn waterslide Verrückt is billed as world’s tallest.
Police in suburban Kansas City were trying to figure out how a 10-year-old boy could have died on a "safe dangerous" 17-story amusement park waterslide that is billed as the world’s tallest.
A spokesperson for the Schlitterbahn Water Park in Kansas City, Kan., said the boy died Sunday on the Verrückt.
The park was closed pending "a full investigation," spokesperson Pam Renteria said in a statement posted on the park's website.
In a statement, the boy's family identified him as Caleb Schwab. He was the son of Kansas Rep. Scott Schwab and his wife, Michele.
The Kansas City Star reported that park officials did not immediately say whether the boy fell from the ride.
The Star reported that riders on the Verrückt are supposed to be at least 54 inches tall and 14 years old. It was not immediately clear why a 10-year-old was on the slide.
Kansas City Police Chief Terry Zeigler said the death was being treated as an accident.
Verrückt, a German word for "insane" or "crazy," drops riders at 65 mph from a height of almost 169 feet, taller than both Niagara Falls and the Statue of Liberty. Riders sit in a three-person raft and are secured with straps across the waist and shoulders. Its marketing materials include the slogan "R U Insane?"
The ride opened in July 2014.
At the time of its opening, USA TODAY said the ride would "thrill or terrify" riders, who drop the equivalent of two football fields in 18 seconds.
Schlitterbahn Waterparks & Resorts co-owner Jeff Henry, who created Verrückt with senior designer John Schooley, said he was among the first to ride the waterslide and said, "I'm still recovering mentally. It's like jumping off the Empire State Building. It's the scariest thing I've done."
Velcro seat belts lash riders to the raft, and netting encloses the chute to retain the raft in the unlikely event it goes flying, USA TODAY reported. During early testing, rafts did just that.
"We had many issues on the engineering side," said Henry, who owns 60 patents for innovations such as land-based water surfing and uphill water coasters.
After the Guinness Book of World Records certified Verrückt as the world's tallest water slide, Henry tore down half of it to make corrections, delaying the planned opening and costing an additional $1 million, USA TODAY reported. Testing was conducted after dark to avoid media helicopters that had been buzzing the park after hours.
Henry called the ride, "dangerous, but it's a safe dangerous now." He said Schlitterbahn "is a family water park, but this isn't a family ride. It's for the thrill seekers of the world, people into extreme adventure."
Just last week, USA TODAY named it one of the best waterpark rides in the nation.
The park will remain closed Monday, a spokesperson said. |
/*
* Copyright 2016 Smart Society Services B.V.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
* except in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*/
package org.opensmartgridplatform.simulator.protocol.dlms.cosem;
import org.openmuc.jdlms.AttributeAccessMode;
import org.openmuc.jdlms.CosemAttribute;
import org.openmuc.jdlms.CosemClass;
import org.openmuc.jdlms.CosemInterfaceObject;
import org.openmuc.jdlms.datatypes.DataObject;
import org.openmuc.jdlms.datatypes.DataObject.Type;
@CosemClass(id = 1)
public class AdministrativeInOut extends CosemInterfaceObject {
@CosemAttribute(id = 2, type = Type.ENUMERATE, accessMode = AttributeAccessMode.READ_AND_WRITE)
public DataObject value;
public AdministrativeInOut(final AdministrativeStatusType administrativeStatusType) {
super("0.1.94.31.0.255");
this.value = DataObject.newEnumerateData(administrativeStatusType.value());
}
}
|
A depotentiating effect of prednisolone, benactyzine, reserpine, propranolol and large doses of serotonin on the development of the intoxication was shown to occur in combination with inhibition of biogenic amine accumulation in various tissues of cats with botulinic paresis. The same phenomenon was observed in intoxicated animals maintained without pharmacological treatment.
// RemoveVSMB removes a VSMB share from a utility VM. Each VSMB share is ref-counted
// and only actually removed when the ref-count drops to zero.
func (uvm *UtilityVM) RemoveVSMB(ctx context.Context, hostPath string, readOnly bool) error {
if uvm.operatingSystem != "windows" {
return errNotSupported
}
uvm.m.Lock()
defer uvm.m.Unlock()
st, err := os.Stat(hostPath)
if err != nil {
return err
}
m := uvm.vsmbDirShares
if !st.IsDir() {
m = uvm.vsmbFileShares
hostPath = filepath.Dir(hostPath)
}
hostPath = filepath.Clean(hostPath)
shareKey := getVSMBShareKey(hostPath, readOnly)
share, err := uvm.findVSMBShare(ctx, m, shareKey)
if err != nil {
return fmt.Errorf("%s is not present as a VSMB share in %s, cannot remove", hostPath, uvm.id)
}
share.refCount--
if share.refCount > 0 {
return nil
}
modification := &hcsschema.ModifySettingRequest{
RequestType: requesttype.Remove,
Settings: hcsschema.VirtualSmbShare{Name: share.name},
ResourcePath: vSmbShareResourcePath,
}
if err := uvm.modify(ctx, modification); err != nil {
return fmt.Errorf("failed to remove vsmb share %s from %s: %+v: %s", hostPath, uvm.id, modification, err)
}
delete(m, shareKey)
return nil
} |
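The ref-counting contract above — decrement on every call, issue the actual Remove modification only when the count reaches zero — can be mirrored in a few lines. This is a hedged sketch of the pattern, not part of the hcsshim API.

```python
class ShareTable:
    """Ref-counted share registry; removal only takes effect at count zero."""

    def __init__(self):
        self.shares = {}   # share key -> reference count
        self.removed = []  # stands in for the HCS modify(Remove) calls

    def add(self, key):
        self.shares[key] = self.shares.get(key, 0) + 1

    def remove(self, key):
        if key not in self.shares:
            raise KeyError("%s is not present as a share, cannot remove" % key)
        self.shares[key] -= 1
        if self.shares[key] > 0:
            return  # other users remain; keep the share mounted
        self.removed.append(key)  # the real removal happens only here
        del self.shares[key]
```

The same idea — pairing every Add with a Remove and deferring the expensive teardown to the last reference — is what lets multiple containers safely reuse one mounted VSMB share.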
Qualitative Evaluation of the Possible Application of Collagen Fibres: Composite Materials with Mineral Fillers as Insoles for Healthy Footwear The research presented in this paper focuses on determination of the physical and mechanical properties of composites made on the basis of a natural polymer, i.e., crushed collagen fibres, which are a waste product of the leather industry. Mineral supplements such as dolomite, bentonite and kaolin were used as the filling of the composite produced. Their application was dictated by the aim of broadening the range of applications, and we strove to optimise the composition of the composite in terms of the physical and mechanical properties determined in static tensile tests. An analysis of water absorption capacity was also carried out in order to select the optimal compositions with the best properties. The test results indicate extensive application possibilities for the composites produced, one of which is in footwear insoles, whose quality is an important element determining the hygienic qualities of shoes due to the high density of sweat glands in the plantar part of the foot. This research indicates the possibility of individually tailoring the properties of composites of collagen fibres and mineral supplements in terms of their application, taking into account the medical aspect.
1. Field of the Invention:
The present invention relates to a process for the manufacture of electrodes in an integrated circuit, and more particularly to the formation of such electrodes in a charge coupled device.
The invention makes it possible to obtain electrodes or gates that are separated by the smallest feasible distance.
2. Description of the Prior Art:
With charge coupled devices (CCDs), it is common practice to define the electrodes in polycrystalline silicon (polysilicon) layers. After one such layer has been deposited on an insulating substrate, an etching operation is carried out through a pattern mask defining the electrode configuration. A constant problem in implementing this technique is to obtain a configuration having the required dimensions. In the present state of the art, it can be considered that while the dimensions are respected in the photolithography defining the pattern mask, a lateral retraction occurs during etching. The extent of this retraction varies with the etching method used. In this respect, a distinction can be drawn between isotropic etching, in which silicon is removed in the same manner in all directions, and anisotropic etching, in which directional properties can be conferred on the etching by enhancing its action through the depth of the layer. But whichever method is used, lateral retraction remains inevitable. This problem is compounded by the fact that present circuits call for a growing number of electrode levels, leading to increasingly complex mask patterns for the different layers, unavoidable increases in etching periods, and consequently to a certain amount of over-etching of some layers, with relatively large shrinkages. To a certain extent, it is possible to compensate in advance for over-etching by adjusting the mask pattern. However, this kind of compensation is limited by the resolution of the photolithographic process and ceases to be applicable when dealing with very tight configurations, especially those of a periodic nature.
The present invention solves this problem and provides for very closely spaced electrodes, in particular configurations in which the inter-electrode spacings can be smaller than those obtainable at photolithographic resolution. |
The Effect of Job Involvement and Work Stress on Turnover Intention with Organizational Commitment as an Intervening Variable at PT. Perkebunan Minanga Ogan The purpose of this study was to determine the effect of job involvement and work stress on turnover intention, with organizational commitment as an intervening variable, at PT. Perkebunan Minanga Ogan. The research takes a quantitative approach. Sampling used probability sampling, specifically simple random sampling; the sample consisted of 200 contract employees of PT. Perkebunan Minanga Ogan. Relationships between the variables were analysed by path analysis using the structural equation modelling (SEM) method and the AMOS program. Hypothesis testing yielded the following results: job involvement has a positive but insignificant effect on organizational commitment; work stress has a negative effect on organizational commitment; organizational commitment has a positive and significant effect on turnover intention; job involvement has a negative effect on turnover intention; and work stress has a positive and significant effect on turnover intention. Through organizational commitment as an intervening variable, job involvement has a positive and significant effect on turnover intention, while work stress has a negative and significant effect on turnover intention. |
/*
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
*
* Copyright (c) 1997-2012 Oracle and/or its affiliates. All rights reserved.
*
* The contents of this file are subject to the terms of either the GNU
* General Public License Version 2 only ("GPL") or the Common Development
* and Distribution License("CDDL") (collectively, the "License"). You
* may not use this file except in compliance with the License. You can
* obtain a copy of the License at
* https://glassfish.dev.java.net/public/CDDL+GPL_1_1.html
* or packager/legal/LICENSE.txt. See the License for the specific
* language governing permissions and limitations under the License.
*
* When distributing the software, include this License Header Notice in each
* file and include the License file at packager/legal/LICENSE.txt.
*
* GPL Classpath Exception:
* Oracle designates this particular file as subject to the "Classpath"
* exception as provided by Oracle in the GPL Version 2 section of the License
* file that accompanied this code.
*
* Modifications:
* If applicable, add the following below the License Header, with the fields
* enclosed by brackets [] replaced by your own identifying information:
* "Portions Copyright [year] [name of copyright owner]"
*
* Contributor(s):
* If you wish your version of this file to be governed by only the CDDL or
* only the GPL Version 2, indicate your decision by adding "[Contributor]
* elects to include this software in this distribution under the [CDDL or GPL
* Version 2] license." If you don't indicate a single choice of license, a
* recipient has the option to distribute your version of this file under
* either the CDDL, the GPL Version 2 or to extend the choice of license to
* its licensees as provided above. However, if you add GPL Version 2 code
* and therefore, elected the GPL Version 2 license, then the option applies
* only if the new code is made subject to such option by the copyright
* holder.
*/
package org.glassfish.admin.rest.cli;
import com.sun.enterprise.config.serverbeans.AuthRealm;
import com.sun.enterprise.config.serverbeans.Config;
import com.sun.enterprise.config.serverbeans.Domain;
import com.sun.enterprise.config.serverbeans.SecurityService;
import com.sun.enterprise.security.auth.login.LoginContextDriver;
import java.util.List;
import java.util.Map;
import java.util.HashMap;
import java.util.ArrayList;
import java.util.Properties;
import org.glassfish.hk2.api.ServiceLocator;
import org.glassfish.internal.api.Globals;
import com.sun.enterprise.security.auth.realm.RealmsManager;
import com.sun.enterprise.security.auth.realm.Realm;
import com.sun.enterprise.security.auth.realm.User;
import java.util.Enumeration;
import org.jvnet.hk2.config.types.Property;
/**
* AMX Realms implementation.
* Note that realms don't load until {@link #loadRealms} is called.
* @author <NAME>
*/
public class SecurityUtil {
private static final String DAS_CONFIG = "server-config";
private static final String ADMIN_REALM = "admin-realm";
private static final String FILE_REALM_CLASSNAME = "com.sun.enterprise.security.auth.realm.file.FileRealm";
private Domain domain;
public SecurityUtil(Domain domain) {
this.domain = domain;
_loadRealms();
}
public RealmsManager getRealmsManager() {
RealmsManager mgr = Globals.getDefaultHabitat().getService(RealmsManager.class);
return mgr;
}
private SecurityService getSecurityService() {
Config config = domain.getConfigs().getConfig().get(0);
return config.getSecurityService();
}
private void _loadRealms() {
List<AuthRealm> authRealmConfigs = getSecurityService().getAuthRealm();
List<String> goodRealms = new ArrayList<String>();
for (AuthRealm authRealm : authRealmConfigs) {
List<Property> propConfigs = authRealm.getProperty();
Properties props = new Properties();
for (Property p : propConfigs) {
String value = p.getValue();
props.setProperty(p.getName(), value);
}
try {
Realm.instantiate(authRealm.getName(), authRealm.getClassname(), props);
goodRealms.add(authRealm.getName());
} catch (Exception e) {
e.printStackTrace();
}
}
if (!goodRealms.isEmpty()) {
//not used String goodRealm = goodRealms.iterator().next();
try {
String defaultRealm = getSecurityService().getDefaultRealm();
/*Realm r = */Realm.getInstance(defaultRealm);
Realm.setDefaultRealm(defaultRealm);
} catch (Exception e) {
Realm.setDefaultRealm(goodRealms.iterator().next());
e.printStackTrace();
}
}
}
private String[] _getRealmNames() {
Enumeration<String> es = getRealmsManager().getRealmNames();
List<String> l = new ArrayList<String>();
while (es.hasMoreElements()) {
l.add(es.nextElement());
}
return (String[])l.toArray(new String[l.size()]);
}
public String[] getRealmNames() {
try {
return _getRealmNames();
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
public String[] getPredefinedAuthRealmClassNames() {
List<String> items = getRealmsManager().getPredefinedAuthRealmClassNames();
return (String[])items.toArray(new String[items.size()]);
}
public String getDefaultRealmName() {
return getRealmsManager().getDefaultRealmName();
}
public void setDefaultRealmName(String realmName) {
getRealmsManager().setDefaultRealmName(realmName);
}
private Realm getRealm(String realmName) {
Realm realm = getRealmsManager().getFromLoadedRealms(realmName);
if (realm == null) {
throw new IllegalArgumentException("No such realm: " + realmName);
}
return realm;
}
public void addUser(
String realmName,
String user,
String password,
String[] groupList) {
checkSupportsUserManagement(realmName);
try {
Realm realm = getRealm(realmName);
realm.addUser(user, password.toCharArray(), groupList);
realm.persist();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public void updateUser(
String realmName,
String existingUser,
String newUser,
String password,
String[] groupList) {
checkSupportsUserManagement(realmName);
try {
Realm realm = getRealm(realmName);
realm.updateUser(existingUser, newUser, password.toCharArray(), groupList);
realm.persist();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public void removeUser(String realmName, String user) {
checkSupportsUserManagement(realmName);
try {
Realm realm = getRealm(realmName);
realm.removeUser(user);
realm.persist();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public boolean supportsUserManagement(String realmName) {
return getRealm(realmName).supportsUserManagement();
}
private void checkSupportsUserManagement(String realmName) {
if (!supportsUserManagement(realmName)) {
throw new IllegalStateException("Realm " + realmName + " does not support user management");
}
}
public String[] getUserNames(String realmName) {
try {
Enumeration<String> es = getRealm(realmName).getUserNames();
List<String> l = new ArrayList<String>();
while (es.hasMoreElements()) {
l.add(es.nextElement());
}
return (String[])l.toArray(new String[l.size()]);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public String[] getGroupNames(String realmName) {
try {
Enumeration<String> es = getRealm(realmName).getGroupNames();
List<String> l = new ArrayList<String>();
while (es.hasMoreElements()) {
l.add(es.nextElement());
}
return (String[])l.toArray(new String[l.size()]);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public String[] getGroupNames(String realmName, String user) {
try {
Enumeration<String> es = getRealm(realmName).getGroupNames(user);
List<String> l = new ArrayList<String>();
while (es.hasMoreElements()) {
l.add(es.nextElement());
}
return (String[])l.toArray(new String[l.size()]);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public Map<String, Object> getUserAttributes(String realmName, String username) {
try {
User user = getRealm(realmName).getUser(username);
Map<String, Object> m = new HashMap<String, Object>();
Enumeration e = user.getAttributeNames();
List<String> attrNames = new ArrayList<String>();
while (e.hasMoreElements()) {
attrNames.add((String)e.nextElement());
}
for (String attrName : attrNames) {
m.put(attrName, user.getAttribute(attrName));
}
return m;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public String getAnonymousUser(ServiceLocator habitat) {
String user = null;
// find the ADMIN_REALM
AuthRealm adminFileAuthRealm = null;
for (AuthRealm auth : domain.getConfigNamed(DAS_CONFIG).getSecurityService().getAuthRealm()) {
if (auth.getName().equals(ADMIN_REALM)) {
adminFileAuthRealm = auth;
break;
}
}
if (adminFileAuthRealm == null) {
// There must always be an admin realm
throw new IllegalStateException("Cannot find admin realm");
}
// Get FileRealm class name
String fileRealmClassName = adminFileAuthRealm.getClassname();
if (fileRealmClassName != null && !fileRealmClassName.equals(FILE_REALM_CLASSNAME)) {
// This condition can arise if admin-realm is not a File realm. Then the API to extract
// the anonymous user should be integrated for the logic below this line of code. for now,
// we treat this as an error and instead of throwing exception return false;
return null;
}
List<Property> props = adminFileAuthRealm.getProperty();
Property keyfileProp = null;
for (Property prop : props) {
if ("file".equals(prop.getName())) {
keyfileProp = prop;
}
}
if (keyfileProp == null) {
throw new IllegalStateException("Cannot find property 'file'");
}
String keyFile = keyfileProp.getValue();
if (keyFile == null) {
throw new IllegalStateException("Cannot find key file");
}
String[] usernames = getUserNames(adminFileAuthRealm.getName());
if (usernames.length == 1) {
try {
habitat.getService(com.sun.enterprise.security.SecurityLifecycle.class);
LoginContextDriver.login(usernames[0], new char[0], ADMIN_REALM);
user = usernames[0];
} catch (Exception e) {
// Login as the sole admin user failed; fall through and return null.
}
}
return user;
}
}
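The same Enumeration-to-array drain appears four times above, in `_getRealmNames`, `getUserNames`, and both `getGroupNames` overloads. As a sketch of how it could be consolidated (the `EnumerationUtil.toStringArray` helper is illustrative only, not part of the original class):

```java
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

class EnumerationUtil {
    // Drains an Enumeration<String> into a String[] in encounter order.
    static String[] toStringArray(Enumeration<String> es) {
        List<String> l = new ArrayList<String>();
        while (es.hasMoreElements()) {
            l.add(es.nextElement());
        }
        return l.toArray(new String[l.size()]);
    }
}
```

Each call site then reduces to a one-liner such as `return EnumerationUtil.toStringArray(getRealm(realmName).getUserNames());` inside its existing try/catch.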
|
/*
* Copyright (c) 2012-2020, <NAME>. All Rights Reserved.
*
* This file is part of DDogleg (http://ddogleg.org).
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.ddogleg.clustering;
import org.ddogleg.clustering.kmeans.TestStandardKMeans;
import org.ddogleg.clustering.misc.ListAccessor;
import org.junit.jupiter.api.Test;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import static org.junit.jupiter.api.Assertions.*;
/**
* @author <NAME>
*/
public abstract class GenericClusterChecks_F64 {
protected Random rand = new Random(234);
/**
* If hint is true then the first 3 elements are good initial seeds for clustering
*/
public abstract ComputeClusters<double[]> createClustersAlg(boolean seedHint, int dof);
/**
* Selects the best cluster using internal model. Inspired by a bug
*/
protected abstract int selectBestCluster( ComputeClusters<double[]> alg, double[] p );
/**
* Very simple and obvious clustering problem
*/
@Test void simpleCluster() {
List<double[]> points = new ArrayList<>();
ListAccessor<double[]> accessor = new ListAccessor<>(points,
(src,dst)->System.arraycopy(src,0,dst,0,1), double[].class);
for (int i = 0; i < 20; i++) {
points.add( new double[]{ i});
points.add( new double[]{100+i});
points.add( new double[]{200+i});
}
ComputeClusters<double[]> alg = createClustersAlg(true,1);
alg.initialize(243234);
alg.process(accessor,3);
AssignCluster<double[]> assignments = alg.getAssignment();
// test assignment
int cluster0 = assignments.assign(points.get(0));
int cluster1 = assignments.assign(points.get(1));
int cluster2 = assignments.assign(points.get(2));
// make sure the clusters are unique
assertTrue(cluster0!=cluster1);
assertTrue(cluster0!=cluster2);
assertTrue(cluster1!=cluster2);
// see if it correctly assigns the inputs
int index = 0;
for (int i = 0; i < 20; i++) {
assertEquals(cluster0,assignments.assign(points.get(index++)),1e-8);
assertEquals(cluster1,assignments.assign(points.get(index++)),1e-8);
assertEquals(cluster2,assignments.assign(points.get(index++)),1e-8);
}
}
@Test void computeDistance() {
int DOF = 5;
List<double[]> points = TestStandardKMeans.createPoints(DOF,200,true);
ListAccessor<double[]> accessor = new ListAccessor<>(points,
(src,dst)->System.arraycopy(src,0,dst,0,src.length), double[].class);
ComputeClusters<double[]> alg = createClustersAlg(false,5);
alg.initialize(243234);
alg.process(accessor,3);
double first = alg.getDistanceMeasure();
alg.process(accessor,10);
double second = alg.getDistanceMeasure();
// it's actually difficult to come up with meaningful tests for distance which don't make
// assumptions about the algorithm. So there's only these really simple tests
assertTrue(first!=second);
assertFalse(Double.isNaN(first));
assertFalse(Double.isNaN(second));
assertFalse(Double.isInfinite(first));
assertFalse(Double.isInfinite(second));
}
/**
* Make sure the assigner matches the best assignment.
*/
@Test void consistentAssignments() {
ComputeClusters<double[]> alg = createClustersAlg(false,1);
for (int trial = 0; trial < 5; trial++) {
List<double[]> points = new ArrayList<>();
ListAccessor<double[]> accessor = new ListAccessor<>(points,
(src,dst)->System.arraycopy(src,0,dst,0,1), double[].class);
for (int i = 0; i < 100; i++) {
points.add( new double[]{rand.nextDouble()});
}
alg.initialize(243234);
alg.process(accessor, 7);
AssignCluster<double[]> assigner = alg.getAssignment();
for (int pointIdx = 0; pointIdx < points.size(); pointIdx++) {
double[] p = points.get(pointIdx);
assertEquals(selectBestCluster(alg,p), assigner.assign(p));
}
}
}
}
|
#include "Clear.h"
bool Clear::Exec(const RE::SCRIPT_PARAMETER*, RE::SCRIPT_FUNCTION::ScriptData*, RE::TESObjectREFR*, RE::TESObjectREFR*, RE::Script*, RE::ScriptLocals*, double&, std::uint32_t&)
{
auto task = SKSE::GetTaskInterface();
task->AddUITask([]() {
auto ui = RE::UI::GetSingleton();
auto console = ui ? ui->GetMenu<RE::Console>() : nullptr;
auto view = console ? console->uiMovie : nullptr;
if (view) {
view->Invoke("Console.ClearHistory", nullptr, nullptr, 0);
}
});
return true;
}
void Clear::Register()
{
auto info = RE::SCRIPT_FUNCTION::LocateConsoleCommand("DumpNiUpdates"); // unused
if (info) {
info->functionName = LONG_NAME;
info->shortName = SHORT_NAME;
info->helpString = HelpStr();
info->referenceFunction = false;
info->params = nullptr;
info->numParams = 0;
info->executeFunction = &Exec;
info->conditionFunction = nullptr;
logger::info(FMT_STRING("Registered console command: {} ({})"), LONG_NAME, SHORT_NAME);
} else {
logger::error(FMT_STRING("Failed to register console command: {} ({})"), LONG_NAME, SHORT_NAME);
}
}
const char* Clear::HelpStr()
{
static const std::string help = []() {
std::string str;
str += "\"Clear\"";
return str;
}();
return help.c_str();
}
|
/*
* Copyright (c) Contributors to the Open 3D Engine Project. For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Driller/Driller.h>
#include <AzCore/RTTI/RTTI.h>
#include <AzCore/EBus/EBus.h>
namespace AzFramework
{
/**
* Descriptors for drillers available on the target machine.
*/
struct DrillerInfo final
{
AZ_RTTI(DrillerInfo, "{197AC318-B65C-4B36-A109-BD25422BF7D0}");
AZ::u32 m_id;
AZStd::string m_groupName;
AZStd::string m_name;
AZStd::string m_description;
};
typedef AZStd::vector<DrillerInfo> DrillerInfoListType;
typedef AZStd::vector<AZ::u32> DrillerListType;
/**
* Driller clients interested in receiving notification events from the
* console should implement this interface.
*/
class DrillerConsoleEvents
: public AZ::EBusTraits
{
public:
//////////////////////////////////////////////////////////////////////////
// EBusTraits overrides
typedef AZ::OSStdAllocator AllocatorType;
//////////////////////////////////////////////////////////////////////////
virtual ~DrillerConsoleEvents() {}
// A list of available drillers has been received from the target machine.
virtual void OnDrillersEnumerated(const DrillerInfoListType& availableDrillers) = 0;
virtual void OnDrillerSessionStarted(AZ::u64 sessionId) = 0;
virtual void OnDrillerSessionStopped(AZ::u64 sessionId) = 0;
};
typedef AZ::EBus<DrillerConsoleEvents> DrillerConsoleEventBus;
/**
* Commands can be sent to the driller through this interface.
*/
class DrillerConsoleCommands
: public AZ::EBusTraits
{
public:
//////////////////////////////////////////////////////////////////////////
// EBusTraits overrides
typedef AZ::OSStdAllocator AllocatorType;
// there's only one driller console instance allowed
static const AZ::EBusHandlerPolicy HandlerPolicy = AZ::EBusHandlerPolicy::Single;
static const AZ::EBusAddressPolicy AddressPolicy = AZ::EBusAddressPolicy::Single;
//////////////////////////////////////////////////////////////////////////
virtual ~DrillerConsoleCommands() {}
// Request an enumeration of available drillers from the target machine
virtual void EnumerateAvailableDrillers() = 0;
// Start a drilling session. This function is normally called internally by DrillerRemoteSession
virtual void StartDrillerSession(const AZ::Debug::DrillerManager::DrillerListType& requestedDrillers, AZ::u64 sessionId) = 0;
// Stop a drilling session. This function is normally called internally by DrillerRemoteSession
virtual void StopDrillerSession(AZ::u64 sessionId) = 0;
};
typedef AZ::EBus<DrillerConsoleCommands> DrillerConsoleCommandBus;
} // namespace AzFramework
|
DeJ Loaf ain’t no joke. On Wednesday (Apr. 6), the Detroit spitter unleashed her new mixtape, All Jokes Aside, appearing naked on its cover.
Ahead of the release, the Detroit rapper shared a teaser for the tape. That preview turned out to be a look into the mixtape's intro "Vibes," in which she talks about taking care of business.
All Jokes Aside is the follow-up to July’s EP #AndSeeThatsTheThing.
Stream DeJ’s new mixtape below. |
Conditions for Ultrashort Pulse Decomposition in Multi-Cascade Protective Devices Based on Meander Lines With an Asymmetric Cross-Section This paper presents the conditions for ultrashort pulse (USP) decomposition in multi-cascade structures based on a meander line turn with an asymmetric cross-section. First, a detailed analysis of a meander line turn with broad-side coupling is presented, in which the even-mode pulse propagates faster than the odd-mode pulse. Then the conditions for an arbitrary number of cascades (N) are obtained and verified. The possibility of USP attenuation is shown: up to 9.3 times in the structure with N=2, up to 21.11 times with N=3, up to 84.9 times with N=4, and up to 213.9 times with N=5. It was revealed that successively increasing N using turns with the same cross-section parameters, while preserving the conditions for USP decomposition, requires the length of the first cascade to be increased by 8 times relative to the length of the previous cascade. In practice, the propagation velocity of the odd mode may exceed that of the even mode; therefore, we also summarize the conditions for an arbitrary cross-section. |
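The stated scaling can be made concrete: if each successive increase in N requires the first cascade to be 8 times as long as the previous cascade, the first-cascade lengths form a geometric progression with ratio 8. A minimal sketch (the 8x ratio is taken from the text above; the base length of one unit is an arbitrary illustration):

```java
class CascadeLengths {
    // Required length of the first cascade for structures with 1..n cascades,
    // assuming each step up in N multiplies the required length by 8.
    static double[] lengths(double base, int n) {
        double[] out = new double[n];
        out[0] = base;
        for (int i = 1; i < n; i++) {
            out[i] = out[i - 1] * 8.0;
        }
        return out;
    }
}
```

For a base length of 1, the progression is 1, 8, 64, 512, 4096, which suggests why structures with large N quickly become physically long.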
package com.hjq.demo.modelViewer;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;
import androidx.annotation.Nullable;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
/*
* Copyright 2017 <NAME>. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
public class ModelRenderer implements GLSurfaceView.Renderer {
private static final float MODEL_BOUND_SIZE = 50f;
private static final float Z_NEAR = 2f;
private static final float Z_FAR = MODEL_BOUND_SIZE * 10;
@Nullable private Model model;
private final Light light = new Light(new float[] {0.0f, 0.0f, MODEL_BOUND_SIZE * 10, 1.0f});
private final Floor floor = new Floor();
private final float[] projectionMatrix = new float[16];
private final float[] viewMatrix = new float[16];
private float rotateAngleX;
private float rotateAngleY;
private float translateX;
private float translateY;
private float translateZ;
public ModelRenderer(@Nullable Model model) {
this.model = model;
}
public void translate(float dx, float dy, float dz) {
final float translateScaleFactor = MODEL_BOUND_SIZE / 200f;
translateX += dx * translateScaleFactor;
translateY += dy * translateScaleFactor;
if (dz != 0f) {
translateZ /= dz;
}
updateViewMatrix();
}
public void rotate(float aX, float aY) {
final float rotateScaleFactor = 0.5f;
rotateAngleX -= aX * rotateScaleFactor;
rotateAngleY += aY * rotateScaleFactor;
updateViewMatrix();
}
private void updateViewMatrix() {
Matrix.setLookAtM(viewMatrix, 0, 0, 0, translateZ, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
Matrix.translateM(viewMatrix, 0, -translateX, -translateY, 0f);
Matrix.rotateM(viewMatrix, 0, rotateAngleX, 1f, 0f, 0f);
Matrix.rotateM(viewMatrix, 0, rotateAngleY, 0f, 1f, 0f);
}
@Override
public void onDrawFrame(GL10 unused) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
floor.draw(viewMatrix, projectionMatrix, light);
if (model != null) {
model.draw(viewMatrix, projectionMatrix, light);
}
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, Z_NEAR, Z_FAR);
// initialize the view matrix
rotateAngleX = 0;
rotateAngleY = 0;
translateX = 0f;
translateY = 0f;
translateZ = -MODEL_BOUND_SIZE * 1.5f;
updateViewMatrix();
// Set light matrix before doing any other transforms on the view matrix
light.applyViewMatrix(viewMatrix);
// By default, rotate the model towards the user a bit
rotateAngleX = -15.0f;
rotateAngleY = 15.0f;
updateViewMatrix();
}
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
GLES20.glClearColor(0.2f, 0.2f, 0.2f, 1f);
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
//GLES20.glEnable(GLES20.GL_BLEND);
//GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
floor.init(MODEL_BOUND_SIZE);
if (model != null) {
model.init(MODEL_BOUND_SIZE);
floor.setOffsetY(model.getFloorOffset());
}
}
}
|
Telomere length, molecular cytogenetic findings, and immunophenotypic features in previously untreated patients with B-chronic lymphocytic leukemia. Telomere length was evaluated by the terminal restriction fragment method in 66 previously untreated patients with B-chronic lymphocytic leukemia (B-CLL) to ascertain whether telomere shortening was associated with genomic aberrations, immunoglobulin variable heavy chain (IgVH) mutational status, CD38 and ZAP-70 expression, and telomerase activity. Chromosomal aberrations were present in the peripheral blood cells of 73% of patients (48/66); no difference in telomere length was found between patients with good and intermediate prognosis according to cytogenetics. An association between telomere length and IgVH mutational status, ZAP-70 expression, and CD38 expression was demonstrated: significantly shorter telomeres were detected in patients with unmutated IgVH status (p=0.01), ZAP-70 positivity (p=0.01), and CD38 positivity (p=0.05). Telomerase activity was positive in 11 of 21 patients examined, and a correlation between telomere length and telomerase activity was found (p=0.05). Telomere length and telomerase activity, in combination with other prognostic parameters, complete the risk profile of B-CLL patients and might support the decision on an optimal treatment strategy. Keywords: B-chronic lymphocytic leukemia, telomere length, telomerase activity, chromosomal aberrations, prognosis. |
We pitted a few top phones against each other in a "voice off" over voice commands.
Contributor Jennifer Jolly puts three top smartphones through the paces to see how well they do with the latest voice command technology.
Most of us have talked out loud to our smartphones before, but hoped no one was actually listening. (I can think of a few choice phrases I've uttered when calls drop or batteries die.) But when you tell your phone to do something – such as find directions to the nearest Peet's Coffee, name the song playing on the radio, or post an update to Facebook – and it does it right away, it feels like you've hit the jackpot on a smartphone sweepstakes.
For today's smartphone smackdown, we pitted a few top phones against each other in a "voice off" over voice commands. For a month, I carried around three smartphones – delivering a diatribe of demands to test S Voice on the GS4, Voice Command on the Moto X and Siri on the new iPhone 5S (iOS 7). Why three instead of two? Without getting too into the weeds here, voice recognition varies dramatically even among Androids, because some phones have their own take on it. While the Moto X and Siri deliver similar experiences overall, S Voice lags behind.
JUST WHAT ARE VOICE COMMANDS?
If you haven't used voice commands recently – or taken some time to learn it on your device – it may be a bit of a mystery as to what you can actually tell your phone to do for you. When used right, you can dictate emails or text messages, make calls, open apps, play music, and search for information. Android and iPhone can use a lot of similar commands and your choice of platform isn't likely to make or break your voice command experience – though as I mention above – your choice of phone might.
If this were an Android-only speech smackdown, the Moto X would knock out the GS4 fairly early on. When used alone, the GS4 is fine, but in the side-by-side tests, it often took twice as long to respond, and it was particularly tough to use if there was any outside or ambient noise. An airplane, background music, even the slightest sound would throw it off.
But Siri and Android Voice Command on the Moto X were often tied in how quickly they responded and in what the responses were. Everything from the weather outside (what's the temperature outside right now) to baseball scores (who's winning the World Series), and even random fact-check web searches (what's the largest wave ever surfed or how do you peel garlic), often yielded similar results. You can actually see this demonstrated several different ways in the video we did to go with this column.
BUT ANDROID PHONE AGAINST IPHONE?
The biggest surprise when pitting Android against iPhone, is that all three phones were much better than I thought they would be – and much more closely matched than even a few months ago. For instance, the Android phones will now tell you jokes when you ask, which used to be a Siri-only trick. And while Siri might be the only mobile personal assistant with a real-ish name, Android Voice Command is starting to develop a personality too. That's a surprisingly nice touch when you spend a lot of time trying to communicate with an object.
But in the stuff that really matters, Siri does the best job at talking to you like a normal person versus a robot. "She" does a really solid job dictating emails, reading and sending text messages, posting social media updates, and doing it all with punctuation. She's also my go-to gal when I want to schedule an appointment, check my calendar, make a dinner reservation, or find a child-friendly movie playing nearby. And she – has a great new "he" voice option (settings -> general -> voice gender).
When I need reliable directions, Android Voice Command and S Voice win – by a mile – though Siri's not as bad as she used to be. (A year ago in Florida, my iPhone took me to a swamp in the middle of nowhere instead of to the airport hotel I was trying to find in the middle of the night. Funny, and scary.) Android Voice Command also delivers the best detailed web searches, which should come as no surprise since it's Google.
The hands-down winner for truly hands-free is … the Androids again, because you can open them with your voice, by saying "Okay Google Now," or "Hi Galaxy" for the GS4. With Siri, you have to touch and hold the home button. This is an especially important feature when you're driving. Another great trick: Voice Command lets you ask your phone what song is playing, and it tells you. With Siri, you have to use an app for that.
"set reminder for tonight to log video for battery script edit and finish writing segments for tomorrow."
Instead, she repeated – out loud on a speaker phone with my family in the car — well, let's just say it was way off, and very unsuitable in a family setting.
Coke vs. Pepsi. Republicans vs. Democrats. Darth Vader vs. Obi-Wan. In the world of notable rivalries, "winning" often comes down to personal opinion. For this smartphone smackdown, finding a clear champion has proven especially challenging because the very latest versions of voice recognition on both the iPhone and the Moto-X are really just that good. Part of what makes them so good is that I now have a better understanding of how to work them, so be sure to follow the prompts on your phone to learn more about what voice commands can do for you.
I hate doing this, but I'm going to have to declare it a tie. Or maybe, a better way to think about it is that the only clear winner – is you. As our phones get smarter – there's a good chance we'll not only be talking to them more – but soon, they'll actually be listening.
Jennifer Jolly is an Emmy award-winning consumer tech contributor and host of USA TODAY's digital video show TECH NOW. E-mail her [email protected]. Follow her on Twitter: @JenniferJolly. |
Therapeutic Efficacy of Subgingivally Delivered Doxycycline Hyclate as an Adjunct to Non-surgical Treatment of Chronic Periodontitis ABSTRACT Objectives Locally used doxycycline has been shown to concentrate in crevicular fluid and demonstrates a wide spectrum of activity against the periodontal pathogens. The aim of the present clinical study was to evaluate the efficacy of doxycycline hyclate 10% as an adjunct to scaling and root planing in the treatment of chronic periodontitis. Material and Methods Sixty systemically healthy chronic periodontitis patients were included in the study. A randomized clinical trial was performed over a 6-month period. The test group was treated by scaling and root planing followed by local delivery of doxycycline hyclate 10%, while the control group was treated by scaling and root planing along with a placebo. Results A significantly greater (P < 0.001) reduction in mean probing pocket depth was demonstrated in the test group (3.03 ± 0.92 mm) compared with the control group (2.3 ± 0.65 mm). When the differences in clinical attachment level gain for the test group (2.0 ± 0.64 mm) versus the control group (1.13 ± 1.07 mm) were analyzed by Student's unpaired t-test, the test group showed a statistically greater clinical attachment level gain (0.87 ± 0.22 mm, P < 0.001). Conclusions From the analysis of the results it can be concluded that the use of doxycycline hyclate 10% as an adjunct to scaling and root planing provides more favourable and statistically significant (P < 0.001) reductions in probing pocket depth and gains in clinical attachment level compared to scaling and root planing alone. INTRODUCTION The essential goal of periodontal therapy is the successful management of bacterial pathogens to the extent that destruction of the periodontium is arrested. A number of different non-surgical and surgical therapies have been successful in achieving this goal. The primary non-surgical approach involves mechanical scaling and root planing (SRP).
The beneficial effects of SRP arise from a reduction in the microbial burden in the periodontal pocket, or a shift towards a less pathogenic microflora. However, the efficacy of SRP may be compromised at tooth sites with deep periodontal pockets. Furthermore, the long-term success of SRP may be affected by remaining bacterial virulence factors and ineffective personal plaque control. Several antimicrobial agents have been administered systemically as adjuncts to mechanical treatment of periodontal disease. However, the efficacy of these agents has been limited by systemic side effects, an inability to reach the site of action in adequate concentrations, and an inability to maintain adequate drug levels for a sufficient time. Controlled local delivery systems for antimicrobial agents were subsequently developed that succeeded in maintaining higher concentrations of drug in the periodontal pocket for longer periods than systemic delivery. Many well-controlled trials that evaluated the efficacy of various locally delivered antimicrobial agents have demonstrated a better clinical response with doxycycline when used as an adjunct to SRP. Locally used doxycycline has been shown to concentrate in crevicular fluid, successfully eliminate Actinobacillus actinomycetemcomitans, and demonstrate a wide spectrum of activity against other suspected periodontal pathogens. Large multicenter human clinical trials have shown favourable responses following the use of doxycycline hyclate when delivered subgingivally to the human periodontal pocket in a biodegradable controlled-release delivery system. However, there is a paucity of literature on the efficacy of doxycycline hyclate in the south Asian population.
Therefore, the aim of the present clinical study was to evaluate the efficacy of doxycycline hyclate 10% (Atridox ®, CollaGenex Pharmaceuticals, Inc., Newtown, PA, USA) as an adjunct to scaling and root planing in providing additional benefits compared to scaling and root planing with placebo in the treatment of chronic periodontitis. Study population and design Sixty systemically healthy subjects, 25 male and 35 female (aged 30 to 45 years, mean age = 36.8 ± 4.87 years), were selected for the study. The subjects had to comply with the following criteria: (i) positive for the diagnosis of generalized chronic periodontitis, as assessed on the basis of clinical attachment level (CAL), measured using a William's graduated periodontal probe (Hu-Friedy, Chicago, IL, USA); (ii) good general health; (iii) no hypersensitivity to doxycycline; (iv) no use of any antibiotic or anti-inflammatory drugs within the 6 months preceding the beginning of the study. Pregnant or nursing females were excluded from the study. The subjects were not allowed to take any antibiotics, anti-inflammatory drugs, or chlorhexidine-based mouth rinses during the entire period of the study. After proper examination and diagnosis, all the patients received oral hygiene instructions. Six weeks following the initial therapy, plaque index (PI) and periodontal bleeding index (PBI) were recorded to evaluate the oral hygiene status and patient compliance with oral hygiene instructions. One persistent pocket of 5 - 7 mm probing depth surrounding a molar/premolar with a clinically healthy crown was chosen as the experimental site in each patient. An attempt was made to select sites/teeth identical in terms of severity of the periodontal disease. A comparable number of sites from the upper and lower dental arches was selected.
Patients were divided into two groups: a test group (treated by SRP followed by local delivery of doxycycline hyclate 10%) and a control group (treated by SRP followed by local delivery of a gel containing glycerine as placebo). Clinical measurements were recorded at 6 weeks following the initial visits (baseline) and again at 6 months. Full-mouth supragingival plaque was assessed by using the PI. Gingival inflammation was assessed by the PBI. The probing measurements recorded for assessment of the results were probing pocket depth (PPD), CAL and gingival recession (REC). These measurements were recorded at 6 sites of each experimental tooth; for later calculations, the mean of the 6 sites was taken into consideration. All the probing measurements were made with a William's graduated periodontal probe (Hu-Friedy, Chicago, IL, USA). The purpose and design of this clinical trial were explained to the patients and informed consent was obtained. Clinical Procedures After recording the PI and the PBI, probing measurements were carried out. PPD and CAL were measured as the distance from the bottom of the pocket to the most apical portion of the gingival margin and the cemento-enamel junction, respectively. The same operator (V.D.), who was blinded to the treatment groups, collected the clinical data. SRP was done using Hoe scalers and standard Gracey curettes (Hu-Friedy, Chicago, IL, USA) under local anaesthesia for the test and control group patients. In the test quadrants, the periodontal pockets were treated with 10% doxycycline hyclate in a bioabsorbable vehicle, which was supplied as an Atrigel ® (CollaGenex Pharmaceuticals, Inc., Newtown, PA, USA) delivery system consisting of two syringes. Syringe A contained the polymer poly-DL-lactide (PLA), dissolved in the biocompatible carrier N-methyl-2-pyrrolidone (NMP). Syringe B contained 10% doxycycline hyclate, which was equivalent to 50 mg of doxycycline.
These two syringes were coupled together just prior to use and the contents were mixed for 100 cycles. Once mixed, the doxycycline hyclate 10% was allowed to set at room temperature for 15 minutes and then was mixed for another 10 cycles before use. A 23-gauge blunt cannula was attached to the delivery syringe and the gel was expressed into the periodontal pocket. Any overflow of material was gently packed into the pocket with the moist tip of a curette in order to speed up coagulation of the polymer. The control sites were treated by SRP followed by local delivery of a gel containing glycerine as placebo. The patients were recalled at 6 months and the clinical measurements recorded at the baseline were repeated. The means and standard deviations of PPD, CAL, REC, PI and PBI at the baseline and at the 6 months examination were calculated for both groups. The Student's t-test was used to compare the data from the baseline to those at 6 months within each treatment group (paired) and between treatment groups (unpaired). The level of significance was set at P < 0.05. All statistical analysis was carried out with the aid of statistical software (SPSS version 12.0, SPSS Inc., Chicago, IL, USA). RESULTS Thirty periodontal pockets (test sites) were treated by SRP combined with subgingival doxycycline hyclate 10% therapy, while thirty control sites were treated by SRP combined with placebo, in a total of 60 chronic periodontitis patients. During the course of the study, healing was uneventful. No periodontal pocket sites showed an adverse reaction to doxycycline hyclate 10%. None of the selected patients dropped out before the termination of the study. At the baseline, PPD was 5.83 ± 0.53 mm in the test group and 5.70 ± 0.65 mm in the control group. Similarly, mean CAL was 6.50 ± 0.50 mm in the test group and 6.53 ± 0.68 mm in the control group. The mean REC was 0.66 ± 0.60 mm in the test group and 0.83 ± 0.64 mm in the control group.
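The between-group comparison can be reproduced from the reported summary statistics alone; a sketch, assuming the classical pooled-variance (Student's) form of the unpaired t-test with n = 30 per group:

```python
import math

def unpaired_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Student's unpaired t statistic (pooled variance) from group summary statistics."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2  # (t statistic, degrees of freedom)

# PPD reduction at 6 months: test 3.03 +/- 0.92 mm vs. control 2.30 +/- 0.65 mm
t, df = unpaired_t_from_stats(3.03, 0.92, 30, 2.30, 0.65, 30)
print(round(t, 2), df)  # t = 3.55 on 58 df, beyond the two-tailed 0.001 critical value (~3.47)
```

This is consistent with the significance reported for the test-versus-control difference in PPD reduction.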
At the baseline, no statistically significant differences in any of the investigated parameters were observed between the test and control groups, indicating that the randomization process was effective (Table 1). The mean PPD reduction was 3.03 ± 0.92 mm in the test group and 2.30 ± 0.65 mm in the control group at the 6 months examination (Table 1). Student's paired t-test indicated that both groups showed a significant mean PPD reduction at the 6 months examination compared to the baseline (P < 0.001) (Table 1). However, a significantly greater (P = 0.001) reduction in mean PPD was demonstrated in the test group, amounting to an additional 0.73 ± 0.20 mm of PPD reduction (Table 1). Considering moderate pockets (5 mm), a mean PPD reduction of 2.42 ± 0.53 mm for the test group and 2.00 ± 0.00 mm for the control group was observed at the 6 months examination. The differences between the groups were not statistically significant (P = 0.38). PPD reduction for patients with deep pockets (6 - 7 mm) differed significantly (P = 0.006) between the groups at the 6 months follow-up (Table 2). The mean CAL gain was 2.0 ± 0.64 mm in the test group and 1.13 ± 1.05 mm in the control group (Table 1). The observed differences between the baseline and 6 months were found to be statistically significant (P < 0.001) in both groups. However, a significantly greater (P = 0.001) gain in mean CAL was demonstrated in the test group (Table 1). No statistically significant difference (P = 0.12) was observed between groups for moderate pockets (5 mm) at the 6 months examination (Table 2). A mean CAL gain of 1.42 ± 0.53 mm for the test group and 0.50 ± 1.37 mm for the control group was observed. For the patients with deep pockets (6 - 7 mm), a statistically significant difference (P = 0.000) was observed between groups in CAL gain at the 6 months examination (2.17 ± 0.57 mm and 1.29 ± 0.95 mm for the test and control groups, respectively) (Table 2).
The mean increase in REC was 1.03 ± 1.15 mm in the test group and 1.16 ± 1.05 mm in the control group at the 6 months examination. A statistically significant increase in REC was found in both groups (P < 0.001) (Table 1). However, no statistically significant difference in the increase in REC was found between the test and control groups (P = 0.643). The test group presented a greater proportion of sites with PPD reduction of 3 mm and 4 - 5 mm (70% and 20%, respectively) than the control group (53.3% and 0%, respectively), while the control group presented a greater proportion of sites with PPD reduction of 2 mm (46.7%) than the test group (10%) (Figure 1). A greater proportion of sites presenting a CAL gain of 3 mm was observed in the test group (80%) compared to the control group (30%), while the control group presented a greater proportion of sites with CAL gain of 1 mm and 2 mm (35% each) than the test group (3.3% and 16.6%, respectively) (Figure 2). DISCUSSION This clinical trial was performed to examine the clinical efficacy of subgingivally delivered controlled drug therapy with doxycycline hyclate 10% (Atridox ®, CollaGenex Pharmaceuticals, Inc., Newtown, PA, USA) in conjunction with non-surgical treatment of periodontal pockets affected by chronic periodontitis. During the active treatment, doxycycline hyclate 10% was well tolerated, with no side effects observed in any of the patients. At the baseline, the investigated parameters at the sites treated with SRP + doxycycline hyclate 10% were similar to those of SRP + placebo. The adjunctive use of locally delivered antibiotics can provide additional benefit when compared to conventional therapy. The present study demonstrated significant PPD reductions in both groups.
Similarly, at the 6 months examination a significant gain of CAL was observed in the test (2.0 ± 0.64 mm) and control (1.13 ± 1.07 mm) groups compared to the baseline, with a significantly higher (P < 0.001) CAL gain of 0.87 ± 0.22 mm in the test group compared to the control group. The PPD reductions observed were apparently higher than in other studies with a similar experimental design. Recently, Gupta et al. reported a reduction of 2.73 ± 1.33 mm in PPD at 3 months following the administration of doxycycline hyclate 10% as an adjunct to mechanical therapy in the treatment of chronic periodontitis. The authors also reported a gain of 1.73 ± 0.90 mm in CAL after the 3 months. Similarly, Garrett et al. compared the findings of two multicenter studies of locally delivered doxycycline hyclate 10% as a monotherapy with SRP and showed similar results for both PPD reduction (1.3 mm and 0.9 mm, respectively) and CAL gain (0.8 mm and 0.7 mm, respectively). The results obtained in the present study were somewhat better in comparison with previous studies. The possible reasons were the exclusion of smokers from the study and stringent plaque control by patients throughout the follow-up period. Furthermore, it is well established that the organized structure of the biofilm can block proper diffusion or even inactivate pharmacological agents subgingivally. Thus, prior biofilm removal could favour greater efficacy of the antibiotic against subgingival pathogens, explaining the differences favouring the adjunctive therapy found in the present study. Considering moderate and deep pockets, PPD reductions of 2.42 ± 0.53 mm and 3.21 ± 0.95 mm, respectively, and CAL gains of 1.42 ± 0.53 mm and 2.17 ± 0.57 mm, respectively, were observed at the 6 months examination, both favouring the test group.
However, considering deep sites, the present study showed significant differences in PPD reduction (0.71 ± 0.22 mm, P = 0.006) and CAL gain (0.88 ± 0.23 mm, P = 0.000) favouring the test group. Indeed, residual subgingival deposits are frequently observed after conventional instrumentation in deep sites. The use of local drug delivery is beneficial as it provides a concentrated amount of drug in the pockets, which is of greater significance when the pockets are deeper and subgingival mechanical instrumentation alone cannot be relied upon. In addition, the proportion of sites showing an attachment gain of 3 mm in the present study was significantly greater for treatment with doxycycline hyclate 10% + SRP (80%) than for the control group (30%). Similarly, the proportion of sites showing a PPD reduction of 3 mm was significantly greater for treatment with doxycycline hyclate 10% (70%) than for the control group (53.3%). A greater proportion of sites showing PPD reduction and CAL gain could represent an advantage in maintenance therapy, limiting the need for surgical procedures to fewer non-responding sites and eventually extending the period between recall visits. The local delivery system may have provided bactericidal levels of doxycycline hyclate 10% in the periodontal pocket, which resulted in a reduction of the subgingival bacterial bioburden and led to the positive clinical outcomes. The sustained levels achievable with locally delivered doxycycline provided bactericidal concentrations (i.e., MIC 90s) for the vast majority of microorganisms present in the biofilm and may have been able to adequately penetrate the depth of the subgingival biofilm. An additional benefit of the local drug delivery system is that it is a relatively simple procedure to perform; moreover, patient acceptance is better as compared to invasive procedures such as surgical treatment.
CONCLUSIONS From the analysis of the results it can be concluded that the use of doxycycline hyclate 10% as an adjunct to scaling and root planing provides more favourable and statistically significant reductions in periodontal probing depth and gains in clinical attachment level compared to scaling and root planing alone. The use of doxycycline hyclate 10% as an adjunct to scaling and root planing may provide additional benefits to periodontal treatment, especially for the treatment of deep pockets. The present study had a few limitations as well. The small sample size limited the statistical analysis of the results, and a long-term analysis is needed to determine the stability of the results. Most importantly, microbial culture tests would have more comprehensively demonstrated the beneficial effects of doxycycline hyclate 10%. Further studies are needed to determine which types of patients and lesions will benefit most from the incorporation of subgingivally delivered controlled-release doxycycline as an adjunct to non-surgical periodontal therapy. ACKNOWLEDGMENTS AND DISCLOSURE STATEMENTS The authors report no conflicts of interest related to this study.
GPS positioning in a multipath environment We address the problem of GPS signal delay estimation in a multipath environment under a low-complexity constraint. After recalling the usual early-late estimator and its bias in a multipath propagation context, we study the maximum-likelihood estimator (MLE) based on a signal model including the parametric contribution of reflected components. This results in an efficient algorithm using the existing architecture, which is also very simple and cheap to implement. Simulations show that the results of the proposed algorithm, in a multipath environment, are similar to those of the early-late estimator in a single-path environment. The performance is further characterized, for both MLEs (based on single-path and multipath propagation), in terms of bias and standard deviation. The expressions of the corresponding Cramer-Rao (CR) bounds are derived in both cases to show the good performance of the estimators when unbiased. |
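As a toy illustration (not the paper's algorithm) of why multipath biases the early-late estimator: with an ideal triangular code autocorrelation, adding a single delayed in-phase reflection shifts the zero-crossing of the early-minus-late discriminator away from the true delay. The amplitudes and delays below are made up; delays are in chips.

```python
def corr(tau, chip=1.0):
    """Ideal triangular code autocorrelation (delay in chips)."""
    return max(0.0, 1.0 - abs(tau) / chip)

def discriminator(t, paths, spacing=0.5):
    """Early-minus-late discriminator for a sum of (amplitude, delay) paths."""
    composite = lambda tau: sum(a * corr(tau - d) for a, d in paths)
    return composite(t - spacing / 2) - composite(t + spacing / 2)

def tracked_delay(paths, spacing=0.5, lo=-0.4, hi=0.9):
    """Locate the discriminator zero-crossing by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if discriminator(lo, paths, spacing) * discriminator(mid, paths, spacing) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

direct_only = [(1.0, 0.0)]
with_multipath = [(1.0, 0.0), (0.5, 0.3)]  # in-phase reflection, 0.3 chips late
print(tracked_delay(direct_only))     # ~0.0: unbiased with a single path
print(tracked_delay(with_multipath))  # ~0.1 chips: bias pulled toward the reflection
```

The multipath-aware MLE studied in the paper removes this bias by estimating the reflected components' parameters jointly with the delay.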
Facebook announced Wednesday that it plans to block representation of white nationalism and separatism on its platform, according to Reuters.
The new policy comes on the heels of the massacre of 50 people that was live-streamed from mosques in Christchurch, New Zealand, in early March. Facebook said the ban on any “praise, support and representation” of the beliefs will take effect next week. The policy also applies to Instagram.
Civil rights groups have criticized the company and other social media platforms such as Twitter and YouTube for failing to “confront extremism” prior to the incident.
Facebook already blocked white supremacy under its rules on “hateful” content, but stated last year that “white nationalist or separatist content” was separate from that term, according to Motherboard. Facebook said the beliefs weren’t “explicitly racist” and considered them a representation of people’s identity.
Civil rights groups criticized the platform’s statement, saying there is no difference.
“Over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups,” the company said in a statement.
New Zealand Prime Minister Jacinda Ardern has already stated that platforms should be held accountable for users’ actions online. She said that these forms of extremism should have already been prohibited under the company’s hate-speech rules.
“Having said that, I’m pleased to see that they are including it, and that they have taken that step, but I still think that there is a conversation to be had with the international community about whether or not enough has been done,” she said on Thursday at a media conference.
“There are lessons to be learnt here in Christchurch and we don’t want anyone to have to learn those lessons over again,” she added. |
World Vasectomy Day
World Vasectomy Day (WVD) is an annual event to raise global awareness of vasectomy as a male-oriented solution to prevent unintended pregnancies. The goal is for doctors all over the world to perform vasectomies, connected to the event via Skype and social media platforms.
History
WVD was founded in 2012 by the American film-maker Jonathan Stack while he was working on a documentary about the decision to have a vasectomy. The underlying goal was to involve men in family planning decisions and educate them about vasectomy as a simple way of taking responsibility for birth control.
In 2013, prolific vasectomist Dr Douglas Stein was scheduled to perform vasectomies in front of an audience at the Royal Institution of Australia (RiAus) to launch the inaugural World Vasectomy Day with the world’s first live-streamed vasectomy. Dr Stein answered questions from a live audience in Australia and an international online audience.
The event's third edition was hosted by Indonesia in 2015, while the fourth was centered in Kenya and featured a vasectomy broadcast live on Facebook.
On 17 November 2017, at its fifth anniversary, more than 1,200 vasectomists in more than 50 countries joined the event, making it the largest male-focused family planning event in history. That year's headquarters were located in Mexico. |
/**
* Copyright 2016 SPeCS.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
*/
package org.specs.MatlabToC.Functions.MatisseInternalFunctions;
import org.specs.CIR.FunctionInstance.FunctionInstance;
import org.specs.CIR.FunctionInstance.FunctionType;
import org.specs.CIR.FunctionInstance.FunctionTypeBuilder;
import org.specs.CIR.FunctionInstance.InstanceProvider;
import org.specs.CIR.FunctionInstance.ProviderData;
import org.specs.CIR.FunctionInstance.InstanceBuilder.AInstanceBuilder;
import org.specs.CIR.FunctionInstance.Instances.InlineCode;
import org.specs.CIR.FunctionInstance.Instances.InlinedInstance;
import org.specs.CIR.Tree.PrecedenceLevel;
import org.specs.CIRTypes.Types.String.StringType;
import org.specs.MatlabToC.InstanceProviders.MatlabInstanceProviderHelper;
import org.specs.MatlabToC.Utilities.MatisseChecker;
import org.specs.MatlabToC.jOptions.MatlabToCKeys;
import org.suikasoft.jOptions.Interfaces.DataStore;
public class MatisseSignature extends AInstanceBuilder {
private static final MatisseChecker CHECKER = new MatisseChecker()
.numOfInputs(0)
.numOfOutputsAtMost(1);
public MatisseSignature(ProviderData data) {
super(data);
}
public static InstanceProvider getProvider() {
return new MatlabInstanceProviderHelper(MatisseSignature.CHECKER, data -> new MatisseSignature(data).create());
}
@Override
public FunctionInstance create() {
DataStore setup = getData().getSettings();
StringBuilder signatureBuilder = new StringBuilder("MATISSE, ");
if (setup.get(MatlabToCKeys.USE_PASS_SYSTEM)) {
signatureBuilder.append("with pass system, solver=");
signatureBuilder.append(setup.get(MatlabToCKeys.ENABLE_Z3) ? "z3" : "dummy");
} else {
signatureBuilder.append("classic");
}
String signature = signatureBuilder.toString();
StringType stringType = StringType.create(signature, getNumerics().newChar().getBits(), true);
FunctionType functionType = FunctionTypeBuilder
.newInline()
.returning(stringType)
.build();
InlineCode code = tokens -> {
return "\"" + signature + "\"";
};
InlinedInstance instance = new InlinedInstance(
functionType,
"$MATISSE_signature",
code);
instance.setCallPrecedenceLevel(PrecedenceLevel.Atom);
return instance;
}
}
|
from logging import Formatter, getLogger, StreamHandler, \
CRITICAL, DEBUG, ERROR, INFO, WARNING
from os import getenv
AVAILABLE_LEVELS = {
'DEBUG': DEBUG,
'INFO': INFO,
'WARNING': WARNING,
'ERROR': ERROR,
# CRITICAL should be used just to suppress logging messages during test cases
'CRITICAL': CRITICAL
}
DEFAULT_FORMAT = ('[%(asctime)s] %(levelname)-s in %(name)s:%(lineno)s '
'%(funcName)s() | %(message)s')
def get_logging_level_from_env_variable(logging_level_env_variable):
"""Returns the logging level from the passed environment variable name, if it is valid.
If it is not valid, then return a default level."""
logging_level = getenv(logging_level_env_variable, CRITICAL)
# if the logging level already exists, then return it,
if logging_level in AVAILABLE_LEVELS:
return AVAILABLE_LEVELS[logging_level]
# else, return the default logging level
# CRITICAL is default level in order to suppress logging messages during test cases
return CRITICAL
def create_logger(logger_name, level=INFO, format=DEFAULT_FORMAT):
"""Creates a logger object."""
# create logger
logger = getLogger(logger_name)
logger.setLevel(level)
# "avoid to emit the same record multiple times"
# Source: https://docs.python.org/3/library/logging.html#logging.Logger.propagate
logger.propagate = False
# create stream handler, set its level and set a new format to it
handler = StreamHandler()
handler.setLevel(level)
handler.setFormatter(Formatter(format))
# add the new handler to logger
logger.addHandler(handler)
return logger
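A minimal usage sketch of the helper above (the logger name is illustrative; the function body is inlined here so the snippet runs stand-alone):

```python
from logging import INFO, Formatter, StreamHandler, getLogger

FORMAT = ('[%(asctime)s] %(levelname)-s in %(name)s:%(lineno)s '
          '%(funcName)s() | %(message)s')

def create_logger(logger_name, level=INFO, format=FORMAT):
    # same construction as the module above, inlined for a stand-alone sketch
    logger = getLogger(logger_name)
    logger.setLevel(level)
    logger.propagate = False
    handler = StreamHandler()
    handler.setLevel(level)
    handler.setFormatter(Formatter(format))
    logger.addHandler(handler)
    return logger

log = create_logger("my_app.worker")
log.info("processed %d records", 42)  # emitted to stderr in the format above

# caveat: getLogger() caches loggers by name, so calling create_logger twice
# with the same name attaches a second handler and duplicates every message
assert create_logger("my_app.worker") is log
assert len(log.handlers) == 2
```

Because `getLogger` caches by name, callers that may invoke `create_logger` repeatedly might guard the `addHandler` call with `if not logger.handlers:`.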
|
/**
* Delete or purge an instance depending on the support of the owning repository.
*
* @param userId calling user
* @param guid unique identifier of the instance
* @param typeGUID unique identifier of the type
* @param typeName name of the type
* @param methodName name of calling method
* @throws PropertyServerException unable to delete or purge the instance - probably because of a mismatch between
* the type and the instance guid
*/
void deleteEntity(String userId,
String guid,
String typeGUID,
String typeName,
String methodName) throws PropertyServerException
{
OMRSMetadataCollection metadataCollection = errorHandler.validateRepositoryConnector(methodName);
Throwable deleteException;
try
{
metadataCollection.deleteEntity(userId,
typeGUID,
typeName,
guid);
log.debug("Entity soft-deleted: " + guid);
return;
}
catch (FunctionNotSupportedException exception)
{
log.debug("Soft-delete not supported: " + serverName);
try
{
metadataCollection.purgeEntity(userId,
typeGUID,
typeName,
guid);
log.debug("Entity purged: " + guid);
return;
}
catch (Throwable error)
{
log.debug("Entity purge failed: " + error.getMessage());
deleteException = error;
}
}
catch (Throwable error)
{
log.debug("Entity soft-delete failed: " + error.getMessage());
deleteException = error;
}
GovernanceProgramErrorCode errorCode = GovernanceProgramErrorCode.INSTANCE_NOT_DELETED;
String errorMessage = errorCode.getErrorMessageId() + errorCode.getFormattedErrorMessage(guid);
throw new PropertyServerException(errorCode.getHTTPErrorCode(),
this.getClass().getName(),
methodName,
errorMessage,
errorCode.getSystemAction(),
errorCode.getUserAction(),
deleteException);
} |
In a new series of features today we’re highlighting the top 10 ‘cheap eats’ in Hemel Hempstead.
These are the best places to eat on a budget in Hemel Hempstead according to reviews by locals posted and compiled on Trip Advisor.
Just hit the gallery icon in the image or the link above to get started.
Is your favourite in the top 10 photo gallery, or do you know of another eatery that deserves to be featured? Have your say on our Facebook page.
Rankings were correct at the time of publication. |
def write_ensemble_pdb(cgmodel, ensemble_directory=None):
    """Write 'cgmodel' to the next available 'cg<N>.pdb' file in 'ensemble_directory'."""
    if ensemble_directory is None:
        ensemble_directory = get_ensemble_directory(cgmodel)
    pdb_list = get_pdb_list(ensemble_directory)
    # scan for the first index whose file name is not already in the ensemble
    index = 1
    pdb_file_name = str(ensemble_directory + "/cg" + str(index) + ".pdb")
    while pdb_file_name in pdb_list:
        index = index + 1
        pdb_file_name = str(ensemble_directory + "/cg" + str(index) + ".pdb")
    write_pdbfile_without_topology(cgmodel, pdb_file_name)
    return |
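The unique-filename scan can be exercised in isolation; a stand-alone sketch (the helper name and directory are illustrative, not part of the module above), incrementing the index before recomputing the candidate name:

```python
def next_free_pdb_name(existing_files, ensemble_directory="ens"):
    """Return the first 'cg<N>.pdb' path not already present in the ensemble."""
    index = 1
    name = ensemble_directory + "/cg" + str(index) + ".pdb"
    while name in existing_files:
        index += 1
        name = ensemble_directory + "/cg" + str(index) + ".pdb"
    return name

print(next_free_pdb_name({"ens/cg1.pdb", "ens/cg2.pdb"}))  # ens/cg3.pdb
print(next_free_pdb_name(set()))                           # ens/cg1.pdb
```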
/**
* Base test case for testing management operations.
*
* @author Richard Achmatowicz (c) 2011 Red Hat Inc.
*/
public class OperationTestCaseBase extends AbstractSubsystemTest {
static final String SUBSYSTEM_XML_FILE = "subsystem-jgroups-test.xml" ;
public OperationTestCaseBase() {
super(JGroupsExtension.SUBSYSTEM_NAME, new JGroupsExtension());
}
protected static ModelNode getCompositeOperation(ModelNode[] operations) {
// build the composite operation
ModelNode compositeOp = new ModelNode() ;
compositeOp.get(OP).set(COMPOSITE);
compositeOp.get(OP_ADDR).setEmptyList();
// the operations to be performed
for (ModelNode operation : operations) {
compositeOp.get(STEPS).add(operation);
}
return compositeOp ;
}
protected static ModelNode getSubsystemAddOperation() {
// create the address of the subsystem
PathAddress subsystemAddress = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME));
ModelNode addOp = new ModelNode() ;
addOp.get(OP).set(ADD);
addOp.get(OP_ADDR).set(subsystemAddress.toModelNode());
// required attributes
addOp.get(ModelKeys.DEFAULT_STACK).set("maximal2");
return addOp ;
}
protected static ModelNode getSubsystemReadOperation(String name) {
// create the address of the subsystem
PathAddress subsystemAddress = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME));
ModelNode readOp = new ModelNode() ;
readOp.get(OP).set(READ_ATTRIBUTE_OPERATION);
readOp.get(OP_ADDR).set(subsystemAddress.toModelNode());
// required attributes
readOp.get(NAME).set(name);
return readOp ;
}
protected static ModelNode getSubsystemWriteOperation(String name, String value) {
// create the address of the subsystem
PathAddress subsystemAddress = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME));
ModelNode writeOp = new ModelNode() ;
writeOp.get(OP).set(WRITE_ATTRIBUTE_OPERATION);
writeOp.get(OP_ADDR).set(subsystemAddress.toModelNode());
// required attributes
writeOp.get(NAME).set(name);
writeOp.get(VALUE).set(value);
return writeOp ;
}
protected static ModelNode getProtocolStackAddOperation(String stackName) {
// create the address of the stack
PathAddress stackAddr = getProtocolStackAddress(stackName);
ModelNode addOp = new ModelNode() ;
addOp.get(OP).set(ADD);
addOp.get(OP_ADDR).set(stackAddr.toModelNode());
// required attributes
// addOp.get(DEFAULT_CACHE).set("default");
return addOp ;
}
protected static ModelNode getProtocolStackAddOperationWithParameters(String stackName) {
ModelNode addOp = getProtocolStackAddOperation(stackName);
// add optional TRANSPORT attribute
ModelNode transport = new ModelNode();
transport.get(ModelKeys.TYPE).set("UDP");
addOp.get(ModelKeys.TRANSPORT).set(transport);
// add optional PROTOCOLS attribute
ModelNode protocolsList = new ModelNode();
ModelNode mping = new ModelNode() ;
mping.get(ModelKeys.TYPE).set("MPING");
protocolsList.add(mping);
ModelNode flush = new ModelNode() ;
flush.get(ModelKeys.TYPE).set("pbcast.FLUSH");
protocolsList.add(flush);
addOp.get(ModelKeys.PROTOCOLS).set(protocolsList);
return addOp ;
}
protected static ModelNode getProtocolStackRemoveOperation(String stackName) {
// create the address of the stack
PathAddress stackAddr = getProtocolStackAddress(stackName);
ModelNode removeOp = new ModelNode() ;
removeOp.get(OP).set(REMOVE);
removeOp.get(OP_ADDR).set(stackAddr.toModelNode());
return removeOp ;
}
protected static ModelNode getTransportAddOperation(String stackName, String protocolType) {
// create the address of the transport
PathAddress transportAddr = getTransportAddress(stackName);
ModelNode addOp = new ModelNode() ;
addOp.get(OP).set(ADD);
addOp.get(OP_ADDR).set(transportAddr.toModelNode());
// required attributes
addOp.get(ModelKeys.TYPE).set(protocolType);
return addOp ;
}
protected static ModelNode getTransportAddOperationWithProperties(String stackName, String protocolType) {
ModelNode addOp = getTransportAddOperation(stackName, protocolType);
// add optional PROPERTIES attribute
ModelNode propertyList = new ModelNode();
ModelNode propA = new ModelNode();
propA.add("A","a");
propertyList.add(propA);
ModelNode propB = new ModelNode();
propB.add("B","b");
propertyList.add(propB);
addOp.get(ModelKeys.PROPERTIES).set(propertyList);
return addOp ;
}
protected static ModelNode getTransportRemoveOperation(String stackName, String protocolType) {
// create the address of the transport
PathAddress transportAddr = getTransportAddress(stackName);
ModelNode removeOp = new ModelNode() ;
removeOp.get(OP).set(REMOVE);
removeOp.get(OP_ADDR).set(transportAddr.toModelNode());
return removeOp ;
}
protected static ModelNode getTransportReadOperation(String stackName, String name) {
// create the address of the transport
PathAddress transportAddress = getTransportAddress(stackName);
ModelNode readOp = new ModelNode() ;
readOp.get(OP).set(READ_ATTRIBUTE_OPERATION);
readOp.get(OP_ADDR).set(transportAddress.toModelNode());
// required attributes
readOp.get(NAME).set(name);
return readOp ;
}
protected static ModelNode getTransportWriteOperation(String stackName, String name, String value) {
// create the address of the transport
PathAddress transportAddress = getTransportAddress(stackName);
ModelNode writeOp = new ModelNode() ;
writeOp.get(OP).set(WRITE_ATTRIBUTE_OPERATION);
writeOp.get(OP_ADDR).set(transportAddress.toModelNode());
// required attributes
writeOp.get(NAME).set(name);
writeOp.get(VALUE).set(value);
return writeOp ;
}
protected static ModelNode getTransportPropertyReadOperation(String stackName, String propertyName) {
// create the address of the transport property
PathAddress transportPropertyAddress = getTransportPropertyAddress(stackName, propertyName);
ModelNode readOp = new ModelNode() ;
readOp.get(OP).set(READ_ATTRIBUTE_OPERATION);
readOp.get(OP_ADDR).set(transportPropertyAddress.toModelNode());
// required attributes
readOp.get(NAME).set(ModelKeys.VALUE);
return readOp ;
}
protected static ModelNode getTransportPropertyWriteOperation(String stackName, String propertyName, String propertyValue) {
// create the address of the transport property
PathAddress transportPropertyAddress = getTransportPropertyAddress(stackName, propertyName);
ModelNode writeOp = new ModelNode() ;
writeOp.get(OP).set(WRITE_ATTRIBUTE_OPERATION);
writeOp.get(OP_ADDR).set(transportPropertyAddress.toModelNode());
// required attributes
writeOp.get(NAME).set(ModelKeys.VALUE);
writeOp.get(VALUE).set(propertyValue);
return writeOp ;
}
protected static ModelNode getProtocolAddOperation(String stackName, String protocolType) {
// create the address of the cache
PathAddress stackAddr = getProtocolStackAddress(stackName);
ModelNode addOp = new ModelNode() ;
addOp.get(OP).set("add-protocol");
addOp.get(OP_ADDR).set(stackAddr.toModelNode());
// required attributes
addOp.get(ModelKeys.TYPE).set(protocolType);
return addOp ;
}
protected static ModelNode getProtocolAddOperationWithProperties(String stackName, String protocolType) {
ModelNode addOp = getProtocolAddOperation(stackName, protocolType);
// add optional PROPERTIES attribute
ModelNode propertyList = new ModelNode();
ModelNode propA = new ModelNode();
propA.add("A","a");
propertyList.add(propA);
ModelNode propB = new ModelNode();
propB.add("B","b");
propertyList.add(propB);
addOp.get(ModelKeys.PROPERTIES).set(propertyList);
return addOp ;
}
protected static ModelNode getProtocolReadOperation(String stackName, String protocolName, String name) {
// create the address of the protocol
PathAddress protocolAddress = getProtocolAddress(stackName, protocolName);
ModelNode readOp = new ModelNode() ;
readOp.get(OP).set(READ_ATTRIBUTE_OPERATION);
readOp.get(OP_ADDR).set(protocolAddress.toModelNode());
// required attributes
readOp.get(NAME).set(name);
return readOp ;
}
protected static ModelNode getProtocolWriteOperation(String stackName, String protocolName, String name, String value) {
// create the address of the protocol
PathAddress protocolAddress = getProtocolAddress(stackName, protocolName);
ModelNode writeOp = new ModelNode() ;
writeOp.get(OP).set(WRITE_ATTRIBUTE_OPERATION);
writeOp.get(OP_ADDR).set(protocolAddress.toModelNode());
// required attributes
writeOp.get(NAME).set(name);
writeOp.get(VALUE).set(value);
return writeOp ;
}
protected static ModelNode getProtocolPropertyReadOperation(String stackName, String protocolName, String propertyName) {
// create the address of the protocol property
PathAddress transportPropertyAddress = getProtocolPropertyAddress(stackName, protocolName, propertyName);
ModelNode readOp = new ModelNode() ;
readOp.get(OP).set(READ_ATTRIBUTE_OPERATION);
readOp.get(OP_ADDR).set(transportPropertyAddress.toModelNode());
// required attributes
readOp.get(NAME).set(ModelKeys.VALUE);
return readOp ;
}
protected static ModelNode getProtocolPropertyWriteOperation(String stackName, String protocolName, String propertyName, String propertyValue) {
// create the address of the protocol property
PathAddress transportPropertyAddress = getProtocolPropertyAddress(stackName, protocolName, propertyName);
ModelNode writeOp = new ModelNode() ;
writeOp.get(OP).set(WRITE_ATTRIBUTE_OPERATION);
writeOp.get(OP_ADDR).set(transportPropertyAddress.toModelNode());
// required attributes
writeOp.get(NAME).set(ModelKeys.VALUE);
writeOp.get(VALUE).set(propertyValue);
return writeOp ;
}
protected static ModelNode getProtocolRemoveOperation(String stackName, String protocolType) {
// create the address of the stack
PathAddress stackAddr = getProtocolStackAddress(stackName);
ModelNode removeOp = new ModelNode() ;
removeOp.get(OP).set("remove-protocol");
removeOp.get(OP_ADDR).set(stackAddr.toModelNode());
// required attributes
removeOp.get(ModelKeys.TYPE).set(protocolType);
return removeOp ;
}
protected static PathAddress getProtocolStackAddress(String stackName) {
// create the address of the stack
PathAddress stackAddr = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME),
PathElement.pathElement("stack",stackName));
return stackAddr ;
}
protected static PathAddress getTransportAddress(String stackName) {
// create the address of the transport
PathAddress protocolAddr = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME),
PathElement.pathElement("stack",stackName),
PathElement.pathElement("transport", "TRANSPORT"));
return protocolAddr ;
}
protected static PathAddress getTransportPropertyAddress(String stackName, String propertyName) {
// create the address of the transport property
PathAddress protocolAddr = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME),
PathElement.pathElement("stack",stackName),
PathElement.pathElement("transport", "TRANSPORT"),
PathElement.pathElement("property", propertyName));
return protocolAddr ;
}
protected static PathAddress getProtocolAddress(String stackName, String protocolType) {
// create the address of the protocol
PathAddress protocolAddr = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME),
PathElement.pathElement("stack",stackName),
PathElement.pathElement("protocol", protocolType));
return protocolAddr ;
}
protected static PathAddress getProtocolPropertyAddress(String stackName, String protocolType, String propertyName) {
// create the address of the protocol property
PathAddress protocolAddr = PathAddress.pathAddress(
PathElement.pathElement(SUBSYSTEM, JGroupsExtension.SUBSYSTEM_NAME),
PathElement.pathElement("stack",stackName),
PathElement.pathElement("protocol", protocolType),
PathElement.pathElement("property", propertyName));
return protocolAddr ;
}
protected String getSubsystemXml() throws IOException {
return getSubsystemXml(SUBSYSTEM_XML_FILE) ;
}
protected String getSubsystemXml(String xml_file) throws IOException {
URL url = Thread.currentThread().getContextClassLoader().getResource(xml_file);
if (url == null) {
throw new IllegalStateException(JGroupsMessages.MESSAGES.notFound(xml_file));
}
try {
BufferedReader reader = new BufferedReader(new FileReader(new File(url.toURI())));
StringWriter writer = new StringWriter();
try {
String line = reader.readLine();
while (line != null) {
writer.write(line);
// preserve line breaks so adjacent XML tokens are not concatenated
writer.write('\n');
line = reader.readLine();
}
} finally {
reader.close();
}
return writer.toString();
} catch (URISyntaxException e) {
throw new IllegalStateException(e);
}
}
}
import React from 'react'
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome'
import { faTimes } from '@fortawesome/free-solid-svg-icons'
import { Background, Close, Container } from './styles'
interface Props {
children: React.ReactNode
isOpen: boolean
onClose(isModalOpen: boolean): void
}
const Modal = ({ children, isOpen, onClose }: Props) => {
return (
<Background visible={isOpen}>
<Container visible={isOpen}>
<Close onClick={() => onClose(false)}>
<FontAwesomeIcon icon={faTimes} />
</Close>
{ children }
</Container>
</Background>
)
}
export default Modal
LOS ANGELES/SAN FRANCISCO (Reuters) - Shares of Apple Inc slid more than 4 percent on Tuesday after a poor review for its iPhone 4 from an influential consumer guide underpinned mounting complaints about the hot-selling device’s reception and spurred speculation about a product recall.
A customer looks at an iPhone 4 at the Apple Store 5th Avenue in New York, in this June 24, 2010 file photo. REUTERS/Eric Thayer/Files
Analysts thought a recall unlikely but said the world’s most valuable tech company needs to move quickly to avert longer-term damage to its widely respected brand, which allows it to charge a premium for products like the iPad and iPod. The stock should recover on Wednesday, some said.
Consumer Reports said on Monday it could not recommend the iPhone 4 — which sold 1.7 million units worldwide in its first three days — after its tests confirmed concerns about signal loss when the device is held in a certain way.
That report spurred widespread discussion on Tuesday, including on popular tech site Cnet and multiple blogs, about the possibility of an iPhone 4 recall: an unheard-of event for a company lauded by investors and tech aficionados for its marketing savvy and product quality.
Apple, which has called the iPhone 4’s June debut its most successful product launch ever, has not responded to the widely watched nonprofit organization’s report or to the recall talk.
The company has said all cellphones suffer some signal loss when cradled in different ways, and suggested that a software glitch might have misled users by overstating signal strength.
Hudson Square Research analyst Daniel Ernst pointed to speculation that Consumer Reports’ article might induce a recall. But some spotted a buying opportunity as Apple’s stock bounced off Tuesday’s low.
“There’s nothing to recall. There’s people lining up in droves to buy this phone,” said Gleacher & Co analyst Brian Marshall. He said he could not replicate the antenna problem on the iPhone 4.
Apple shares dipped below their 50-day moving average price of $256.26, sliding as much as 4.2 percent to $246.43. They pared losses to trade 2 percent lower at $252.19 Tuesday afternoon, as the Nasdaq gained 1.7 percent.
Shares of Research in Motion, which makes the rival BlackBerry, climbed 3.4 percent to $55.62. Google Inc, whose Android operating system for smartphones is gaining traction on multiple devices, also rose, by more than 3 percent to $490.35, along with tech stocks overall.
Analysts said Apple needs to take quick action to avert any lasting damage to its reputation for quality products — an image honed by iconic gadgets such as the iPod and iPad — though they did not see sales being hurt for now.
“They need to provide an actual fix — not a bumper fix — so that the product performs as it should,” said Ashok Kumar at Rodman & Renshaw. “Apple should have taken a higher road when addressing the design flaw, instead of taking the hard-line stance that they did.”
“This is not a Toyota problem, but it is a problem that Apple needs to address head-on,” he said, referring to the Japanese automaker’s global recalls of more than 10 million vehicles since late last year.
NEED TO TAKE ACTION
JP Morgan warned that reports of wireless reception problems on the smartphone, which competes with Motorola Inc’s Droid and Palm Inc’s Pre, may eventually affect demand.
“Consumer Reports is a well-respected product reviewer, and the report should turn up the heat on Apple,” analyst Mark Moskowitz said in a client note. “Concerns around iPhone 4 reception do not appear to be impacting demand, but we think there are risks when a well-respected product rating agency such as Consumer Reports issues an unfavorable report.
“We continue to expect a fix from Apple, whether the solution is software- or hardware-related.”
Consumer Reports, which publishes guides on everything from cars to TVs, said in its Monday report that it had tested other phones — including the iPhone 3GS and Pre — and found none had the signal-loss problems of Apple’s latest iPhone.
It added that AT&T Inc, the exclusive mobile phone carrier for the iPhone 4 and whose network is often blamed for reception problems, was not necessarily the main culprit.
The report was the latest blow to the iPhone 4, which has been plagued by complaints about poor reception. Many of the complaints involve a wraparound antenna whose signal strength is said to be affected if the device is touched in a certain way.
Apple has been sued by iPhone customers in at least three complaints related to antenna problems.
Heavy options trading activity on Tuesday ahead of Friday’s July contract expiration suggested investors were bracing for possible Apple news before the weekend, analysts say.
“Apple shares are down on concerns about a possible defect with the new iPhone 4. Worries that iPhone 4 might be a lemon are weighing on their shares,” said Frederic Ruffy, options strategist at Web information site WhatsTrading.com.
Possible uses of the binary icosahedral group in grand unified theories

There are exactly three finite subgroups of SU(2) that act irreducibly in the spin 1 representation, namely the binary tetrahedral, binary octahedral and binary icosahedral groups. In previous papers I have shown how the binary tetrahedral group gives rise to all the necessary ingredients for a non-relativistic model of quantum mechanics and elementary particles, and how a modification of the binary octahedral group extends this to the ingredients of a relativistic model. Here I investigate the possibility that the binary icosahedral group might be related in a similar way to grand unified theories such as the Georgi--Glashow model, the Pati--Salam model, various $E_8$ models and perhaps even M-theory. This analysis suggests a possible way to combine the best parts of all these models into a new model that goes further than any of them individually. The key point is to separate the Dirac spinor into two separate concepts, one of which is Lorentz-covariant, and the other Lorentz-invariant. This produces a model which does not have a CPT-symmetry, and therefore supports an asymmetry between particles and antiparticles. The Lagrangian has four terms, and there is a gauge group GL(4,R) which permits the terms to be separated in an arbitrary way, so that the model is generally covariant. Quantum gravity in this model turns out to be described by quite a different gauge group, namely SO, acting on a 5-dimensional space that behaves like a spin 2 representation after suitable symmetry-breaking. I then use this proposed model to make a few predictions and postdictions, and compare the results with experiment.

1. The group and its representations

1.1. Introduction. In a series of papers I have considered a number of options for using a finite group in place of SU(2) as the basis for a potential finite model of quantum mechanics.
The basic idea is to study the group algebra, which is closely related to, but not identical to, a Hilbert algebra with the group elements as a basis. So far, the most promising option is the group GL(2,3) of all invertible 2×2 matrices over the field of order 3, which consists of the numbers −1, 0, 1 subject to the rule 1 + 1 = −1. However, the quarks are not immediately evident in this model, so it is not necessarily the best of the available options. There is just one further possibility that I have not explored in detail, namely the group SL(2,5) of all 2×2 matrices of determinant 1 over the field of order 5, which consists of the numbers −2, −1, 0, 1, 2 with 1 + 2 = −2 and so on. This group is also known as the binary icosahedral group 2I, and is a double cover of the icosahedral group I ≅ Alt(5), the alternating group on five letters. Since it has a 5-dimensional irreducible representation, it maps into SU(5) and may therefore be related in some way to the Georgi--Glashow model. The group also has two unitary representations in two dimensions, and one in four dimensions, so that it embeds in the gauge group of the Pati--Salam model. A further reason for thinking that this particular group might be useful is the fact that the 'fermionic' part of the group algebra has a natural structure as a 15-dimensional space of quaternions. Each of these quaternions can be expressed in 3 independent ways as a complex 2-space, which gives us a total of 45 Weyl spinors, which is generally recognised as the exact number required for the elementary fermions in the standard model of particle physics. Of course, this triple-counting is not adopted in the standard model, so it is not obvious that this process will actually work in practice. But it is worth a try, since this is the only one of the binary polyhedral groups that has enough spinors to provide a realistic prospect of modelling quarks in full.
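The identification of 2I with SL(2,5), and its double covering of Alt(5), can be checked by brute force. The following sketch (illustrative, not from the paper) enumerates all 2×2 matrices over the field of order 5 with determinant 1, then divides out the scalar matrices ±I:

```python
from itertools import product

def sl2_5():
    """All 2x2 matrices over GF(5) with determinant 1, stored as tuples (a, b, c, d)."""
    return [(a, b, c, d)
            for a, b, c, d in product(range(5), repeat=4)
            if (a * d - b * c) % 5 == 1]

group = sl2_5()
print(len(group))  # 120 = order of the binary icosahedral group 2I

# The centre consists of the scalar matrices a*I with a^2 = 1 mod 5, i.e. +I and -I.
center = [(a, b, c, d) for (a, b, c, d) in group if b == 0 and c == 0 and a == d]
print(len(group) // len(center))  # 60 = |Alt(5)|, the icosahedral group
```

This confirms the counting used throughout: |SL(2,5)| = 5·(5² − 1) = 120, and the quotient by the centre {±I} has order 60.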
If this does not work, then it more or less rules out the possibility of a discrete model of quantum mechanics based on a finite analogue of SU. There has been some interest in using the group 2I in physics, where its relationship to E 8, also described in, may be relevant for the development of potential E 8 models. There is however a widely held view that such a model is impossible, since it does not appear to have enough spinors. For further background on this specific group see. The first of these references will be referred to frequently. For more general background on group theory and representation theory see. 1.3. Faithful representations. All the faithful representations are pseudo-real, which means they can be written as quaternionic matrices of half the size. In particular, there are two representations as 1 1 quaternion matrices, which are described in detail in. That source also gives some interesting 2 2 quaternion matrices for 2a + 2b. In terms of the generators i and w, we can take the following: In this last case there is a set of five involutions that generate a group Q 8 D 8 normalized by SL and denoted in by the following notation: By using the Pauli matrices in the standard way, the representations can be written as complex matrices as follows: For what it is worth, these matrices satisfy the same relations as the Dirac matrices 5, 0, i 1, i 2, i 3 respectively. The representation 4b can also be written as 2 2 quaternion matrices, as is done implicitly in, case O 1, and it is also possible to calculate some from the information in, or directly from the definition 4b = S 3 (2a) ∼ = S 3 (2b). The following matrices are written with respect to a basis which seems to make this representation about as clean and tidy as it is possible to get: where = 1 + and = 3 − 1. 
The group also contains the element u → (−1 + j + k)/2 0 0 1 of order 3, which might perhaps be useful for describing a weak interaction acting on a left-handed spinor only (see Section 2.4). The group of diagonal matrices then has order 12, and acts on the first coordinate as an irreducible subgroup of SU, but on the second coordinate as a subgroup of U. In particular, there is no element corresponding to u that acts on the 'right-handed' part of the spinor. Again, we may translate to 4 4 complex matrices if we wish: The final one of the faithful representations can be written as 3 3 quaternion matrices in various ways, such as 1.4. Faithless representations. The faithless representations are all real. The representations in dimension 3 can be written as follows: Notice that these matrices are very similar to those given above for 6, which makes clear the following tensor product structures: The representation 4a can be expressed as the quaternionic tensor product of 2a and 2b. If one just uses the complex tensor product, one obtains a complex representation which obscures the underlying real structure. If we take 2a in the form of left-multiplications by the above quaternions, and 2b as right-multiplications, then we obtain the following matrices: The other faithless representation 5 is most easily written as a deleted permutation representation on 6 points: 1 + 5 : i →, w →. Another way to construct 5 is as a monomial representation, which can be obtained from the permutation representation 1+4a by introducing some complex cube roots of unity, and = 2, into the action: 1 + 4a : i →, w →,, w → (2, 2,2) (4,4, 4). In fact, the representations 3a + 3b and 6 can also be obtained as monomial representations, by inserting scalars into the permutation representation 1 + 5 so that 3a + 3b : i → (5, −5)(6, −6), w →. w →. This version of 6 does not exhibit either of the tensor product structures. 
Indeed, it is a well-known hard problem in computational representation theory to find a tensor product decomposition for a representation that is known to have one. Subgroups and subalgebras 2.1. The structure of the group algebra. The faithless part of the group algebra is a direct sum of real matrix algebras, one for each irreducible representation: The faithful part, on the other hand, is a direct sum of quaternion matrix algebras: Since the gauge groups of the standard model and most Grand Unified Theories (GUTs) are compact, it is worth separating out the compact subgroups of the group algebra into the faithless part and the faithful part noting the isomorphisms Restricting from quaternions to complex numbers we therefore obtain which after identification of the four scalar factors U becomes isomorphic to, but not necessarily equal to, the standard model gauge group. Moreover, the finite group connects these various Lie groups together, and therefore reduces the apparent symmetries. In particular, the following well-known twoto-one maps are compatible with the finite group action: In addition, there are embeddings that are also compatible with the finite group action. It is very important to realise, however, that these maps between compact groups do not extend to maps between the non-compact groups. In particular, we note that Sp embeds in SL(2, H), whereas SO embeds in SL(5, R). While Sp is just a double cover of SO, there is no non-trivial homomorphism at all between SL(2, H) ∼ = Spin and SL(5, R). Of equal importance is the complex equivalent, restricting from 5 dimensions to 3, where again there is no non-trivial homomorphism between SL(2, C) ∼ = Spin and SL (3, R). It is therefore of vital importance to distinguish correctly between the groups SO and SL(3, R), and to interpret them appropriately in both relativistic and quantum physics. Spinors and isospinors. First we need a 4-dimensional faithful representation to hold the Dirac spinor. 
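The Wedderburn decomposition of the group algebra quoted above can be sanity-checked by a dimension count: the real group algebra of 2I is 120-dimensional, and the real (faithless) and quaternionic (faithful) blocks each account for exactly 60 dimensions. A minimal sketch, assuming only the representation dimensions listed in the text:

```python
# Faithless (real) irreducibles 1, 3a, 3b, 4a, 5 give real matrix algebras.
faithless = [1, 3, 3, 4, 5]
# Faithful (pseudo-real) irreducibles 2a, 2b, 4b, 6 give quaternionic matrix
# algebras of half the size: H, H, M2(H), M3(H); each quaternion is 4 real dims.
faithful_quaternionic = [1, 1, 2, 3]

dim_faithless = sum(n * n for n in faithless)                  # 1+9+9+16+25 = 60
dim_faithful = sum(4 * n * n for n in faithful_quaternionic)   # 4*(1+1+4+9) = 60
print(dim_faithless, dim_faithful, dim_faithless + dim_faithful)  # 60 60 120
```

The total of 120 matches the group order, as it must for a semisimple group algebra over the reals.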
The two obvious choices are 2a + 2b and 4b. If we want an action of Spin ∼ = SL(2, C), then we have to use 4b. But if we want an invariant splitting into left-handed and right-handed spinors then we have to use 2a + 2b. In order to implement the standard model, therefore, we may need to consider both of them, and mix them together in some appropriate way. The Dirac algebra must then appear as some mixture of the following versions of the square of the spinor: The first looks to be the closest to the standard model, but does not have an action of the Lorentz group on the spinor. The second does have an action of the Lorentz group, but incorporates some extra symmetries that are not in the standard model. The last looks even stranger, as it contains no scalar representation at all. In both cases, however, the extra symmetries arise from converting the permutation representation 1+ 4a into the monomial representation 5. This introduces a hidden triplet symmetry, either once or twice. One triplet symmetry might be useful for incorporating the colour symmetry of the strong force into the Dirac algebra, while the other may be useful for extending the standard model to deal with three generations simultaneously. The first two suggested analogues of the Dirac algebra split into symmetric and antisymmetric parts as follows: Hence it is only in the antisymmetric part of the algebra that we see any difference between them. This difference occurs in the top two degrees of the Dirac algebra, that is in the terms 5 and 5. It therefore does not affect the implementation of quantum electrodynamics (QED), but only affects its unification with the weak interaction. The third suggestion splits into left-handed and right-handed components as Since this is an invariant splitting, it must be a splitting that defines the weak and/or strong force, so that 2a and 2b are technically isospin representations rather than spin representations. 
Let us define 2a to be left-handed and 2b to be righthanded, as our notation already suggests. To obtain the Dirac algebra as usually described we must incorporate both 2a ⊗ 4b and 4b ⊗ 4b, each of which contains a copy of 3a + 5. Then these two copies of 3a + 5 must be effectively identified with complex multiples of each other in order to implement the formalism of electroweak mixing. At the same time, the symmetry of 5 must be broken to 1 + 4 in some way. If we want a compact SU as a gauge group for the strong force, then the only available place for it is in Sp, but the action of SU necessarily breaks the symmetry of SU L SO R and SU R SO L. It therefore breaks the symmetry of the weak gauge group SU L, and of the generation symmetry group SU R, if that is how we choose to interpret these groups. Both these examples of symmetry-breaking are features of the standard model, which is a promising sign that this group algebra contains the features we need. Indeed, there is no intrinsic relationship between the two tensor product structures, and therefore a wide range of possible interpretations. 2.3. Vector and adjoint representations. The adjoint representations of symplectic groups arise from the symmetric squares of the defining representations, so that they are associated to the representations The 'vector' representations on the other hand arise from anti-symmetric squares, in the forms In particular, notice the breaking of symmetry from SO to SO, in the form of a restriction from Spin ∼ = SU to Spin ∼ = Sp. This allows us to associate the adjoint representation of SU with the representation which will play an important role later on in this paper. But now we have a great deal more symmetry-breaking to contend with. In particular, there is a change of signature from compact SU to SL(2, H), which generalises the Lorentz group SL(2, C). The compact part of this group is now represented in 3a + 3b + 4a, while the boosts lie in 5. 
The Lorentz group therefore mixes either 3a or 3b with a 3-dimensional subspace of 5. This can be achieved by breaking the symmetry group from SL to SL, for which all three of these 3-dimensional representations are equivalent, and can therefore be mixed in arbitrary proportions. Maximal subgroups. For the purposes of studying symmetry-breaking, we need to study subgroups of the binary icosahedral group. There are three types of maximal subgroups, obtained from the icosahedron or its dual dodecahedron by fixing either one of the five cubes inscribed in the dodecahedron, one of the six pairs of opposite faces of the dodecahedron, or vertices of the icosahedron, one of the ten pairs of opposite vertices of the dodecahedron, or faces of the icosahedron. In terms of the 2 2 matrices over the field of order 5, we may define Then i, j, k, v generate a subgroup of the first type, j, t one of the second type, and k, w one of the third type. In terms of quaternions, in the 2a representation we can take The first subgroup is a copy of the binary tetrahedral group, 2T = 2Alt or SL. The other two are binary dihedral groups, sometimes called dicyclic groups, 2D 10 and 2D 6 = 2Sym respectively. When treating them individually, we do not need to distinguish the 5 copies of 2T, the 6 copies of 2D 10 or the 10 copies of 2D 6. But when we come to consider them together, it will matter which copy we take. The most important point is that a copy of 2T and a copy of 2D 6 can intersect in either a subgroup of order 6 or a subgroup of order 4. This fact turns out to be important for the properties of electro-weak unification, and especially for the distinction between leptons and quarks. All three of these maximal subgroups split the 4b 'Dirac spinor' representation into two halves, but the properties of the two halves are quite different in the three cases. 
The character tables of the three groups are given by In the first two cases, the Dirac spinor splits as the sum of the last two 2-dimensional representations: in the first case these representations are complex conjugates of each other, analogous to the Weyl spinors in the standard model. In the second case, they are equivalent to the restrictions of 2a and 2b. But in the third case the Dirac spinor splits as 1 + 1 + 2, so that there is a splitting into 1 + 1 and 2 which cuts across the complex conjugation symmetry. It seems likely, therefore, that this last case will be useful for explaining why and how the left-handed and right-handed Weyl spinors behave so differently in the standard model. In terms of the quaternionic notation, there are two significant copies of the complex numbers, defined by k and i + j, and it is the interplay between these two that we need to investigate closely. In particular, k defines a complex structure on the whole of the Dirac spinor, while i + j defines a complex structure only on half the Dirac spinor. Unlike the standard model complex structures defined by i and i(1 − γ5), however, these two complex structures anti-commute with each other. It is this anti-commutation that allows the finite model to describe three generations rather than just one.

3. Modular representations

3.1. Reduction modulo 3. The interplay between 1 + 4a and 5, as well as between 2a + 2b and 4b, suggested by the above discussion can be studied in terms of the 3-modular representation theory, along the lines taken in. This is because the differences appear only in the column headed w, where the elements of order 3 occur. The 3-modular Brauer character table is as follows: That is, we have just deleted the column headed w and the rows 5 and 4b. I have also changed the font so as to distinguish between ordinary representations and modular representations.
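The anti-commutation claimed above for the two complex structures is a one-line quaternion computation. The sketch below (pure Python, not from the paper) checks that multiplication by k and by the normalised u = (i + j)/√2 each square to −1, and that they anti-commute:

```python
import math

def qmul(p, q):
    # quaternion product of 4-tuples (a, b, c, d) = a + bi + cj + dk
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def close(p, q):
    return all(abs(x - y) < 1e-12 for x, y in zip(p, q))

k = (0.0, 0.0, 0.0, 1.0)
u = (0.0, 1/math.sqrt(2), 1/math.sqrt(2), 0.0)   # normalised i + j
minus_one = (-1.0, 0.0, 0.0, 0.0)

assert close(qmul(k, k), minus_one)              # k^2 = -1: a complex structure
assert close(qmul(u, u), minus_one)              # u^2 = -1: another one
ku, uk = qmul(k, u), qmul(u, k)
assert close(ku, tuple(-x for x in uk))          # ku = -uk: they anti-commute
```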
There is an extra subtlety, though, in that in order to represent these representations we need a square root of 5, or equivalently, modulo 3, a square root of −1. Hence we need to work over the field of order 9 if we want to distinguish 3a from 3b and 2a from 2b. On the other hand, if we want to work over the field of order 3, we can simply add these rows together, and amalgamate the last two columns, and work with the table over the prime field. Thus we have the same structure as in, that is effectively three Weyl spinors glued on top of each other. The only difference here is that there is a larger symmetry group to play with. One curious feature of this larger symmetry group is that there is an embedding of SL(2, 5) in SL(2, 9), where the field of order 9 consists of the elements 0, ±1, ±i, ±1 ± i satisfying the rules 1 + 1 = −1 and i² = −1. Hence, over this field, the 2-dimensional representations behave as though they were complex conjugates of each other. In other words, at the discrete level we can treat 2a and 2b very much like Weyl spinors, swapped by the field automorphism that negates i, as a finite analogue of complex conjugation. But this 'complex conjugation' only exists at the discrete level, and does not lift to complex conjugation at the continuous level. Over the field of order 3, the combined representation 4d represents the finite analogue of a Weyl or Dirac spinor, and the constituents of its tensor square are 4d ⊗ 4d ∼ 1 + 4c + 6a + 4c + 1.

3.2. Reduction modulo 2. On reduction modulo 2 we lose the distinction between fermionic and bosonic representations, and have just four irreducibles Again, over the field of order 2 we have to combine s and t, and and, to get Here, 4e is what remains of the Dirac spinor, and 4f is what remains of spacetime. There is not much information left in this table, but for what it is worth, we have 4e ⊗ 4f ∼ 1 + 1 + 1 + 1 + 4e + 4e + 4e In all cases, 4f splits off as a direct summand, but 4e and 1 can glue together in quite complicated ways.
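The arithmetic of the field of order 9 described above can be sketched concretely: elements are a + bi with a, b taken modulo 3 and i² = −1, and the Frobenius map x ↦ x³ is the field automorphism that negates i, the finite stand-in for complex conjugation. A minimal check:

```python
# GF(9) as pairs (a, b) meaning a + b*i, with a, b in Z/3 and i^2 = -1.

def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def mul(x, y):
    a, b = x
    c, d = y
    return ((a*c - b*d) % 3, (a*d + b*c) % 3)

def frob(x):
    return mul(x, mul(x, x))   # the Frobenius map x -> x^3

elements = [(a, b) for a in range(3) for b in range(3)]
i = (0, 1)

assert mul(i, i) == (2, 0)            # i^2 = -1, since -1 = 2 mod 3
assert add((1, 0), (1, 0)) == (2, 0)  # 1 + 1 = -1
assert frob(i) == (0, 2)              # Frobenius negates i

# Frobenius is a field automorphism in characteristic 3:
assert all(frob(mul(x, y)) == mul(frob(x), frob(y))
           and frob(add(x, y)) == add(frob(x), frob(y))
           for x in elements for y in elements)
```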
In other words, the mathematics may allow us to glue some scalars, such as mass and charge, onto the spinors, without affecting the spacetime representation.

3.3. Reduction modulo 5. On reduction modulo 5, the spacetime representation 4a breaks up as 1 + 3, and therefore blurs the distinction between Minkowski spacetime and Euclidean spacetime. Similarly, 6 breaks up as 2 + 4, and the distinction between left-handed 2a (and 3a) and right-handed 2b (and 3b) disappears. The Brauer character table is as follows: The representation 5 is projective, so splits off into a block of its own. The other four PIMs have the following structures: Possibly the most significant part of the reduction modulo 5 is that it permits us to model a quantised version of Lorentz transformations, in signature (1, 3) or (3, 1), in place of Euclidean spacetime in 4a. Lifting back to the real representations, this corresponds to a 'mixing' between 4a and both 1 + 3a and 1 + 3b. We may then compare Here we see a subtle distinction between 3a + 3b and 3a + 3a (or 3b + 3b) for the Euclidean and Minkowski versions of the electromagnetic field. In the Minkowski case, both 3a + 3a and 3b + 3b can be regarded as complex 3-spaces, on which the real quadratic form can be extended either to a complex quadratic form or to an Hermitian form. In the former case, the group SO(3, C) ≅ SO(3, 1) acts, together with a scalar U(1), while in the latter case the group is U(3). In particular, we could use 3a + 3a for electromagnetism and Maxwell's equations, and 3b + 3b for the strong force.

4. Towards interpretation

4.1. Spacetime. For the purposes of attempting to construct a physical model from the group algebra, we should not take too much notice of the Lorentz group itself, but should look instead at the bigger picture. But I do not want to interpret the extension of Spin(3, 1) to Spin(5, 1) as an extension of spacetime from 3 + 1 dimensions to 5 + 1 dimensions, since that is unphysical.
Instead I want to rearrange the three 3-dimensional representations in such a way that spacetime can still be interpreted in 3 + 1 dimensions. A Lorentzian signature is available in either 1 + 3a or 1 + 3b, and a Euclidean signature in both of these and also in 4a. On the other hand, the group algebra does not provide any method for mixing a 'time' coordinate in 1 with 'space' coordinates in 3a or 3b, so that spacetime is more naturally associated with the representation 4a, and the corresponding algebra of 4×4 real matrices, which contains SL(4, R) and therefore SO(3, 3). Classical physics and relativity do not mention spinors, so must be expressible entirely in terms of the faithless part of the group algebra. General relativity, for example, is constructed using rank 2 tensors on a (Euclidean or) Minkowski spacetime. It is straightforward to calculate all the rank 2 tensors for all the available combinations of spacetime representations, as follows: Both the first and the fifth of these are equivalent to the modified Dirac algebra 4b ⊗ 4b, which suggests that one or both of the representations 4a ⊗ 4a and may be able to provide some suggestions for how to implement a quantisation of general relativity, albeit with some inevitable changes to the interpretation.

4.2. Labellings. By this stage we have identified two possible different types of 'spinors' in 2a + 2b and 4b, and three possible different types of 'vectors' in 1 + 3a, 1 + 3b and 4a. The most likely interpretations are that 4b contains Dirac spinors, while 2a + 2b contains some kind of 'isospinors'. Similarly, 4a is most likely to be interpreted as spacetime, and/or its dual 4-momentum, while 1 + 3a and 1 + 3b are more likely to be interpreted as force fields of some kind. Nevertheless, there may be other possibilities, and we should not pre-judge the issues. All these characters have the property that the value on i is 0.
Hence the tensor product of any vector or spinor with any character that is 0 on the elements of order 3 and 5 gives rise to a multiple of the relevant half of the regular representation. It is straightforward to calculate that there are exactly two such characters, namely 3a + 3b + 4a + 5 = Λ²(3a + 3b) = 3a + 3b + 3a ⊗ 3b, and 1 + 4a + 5 + 5. Either of these labellings can therefore be used to label the entire group algebra as spinors or vectors of various types for various types of particles. The first option is the finite equivalent of the adjoint representation of the gauge group SL(2, H), so seems suitable for a labelling of force mediators. This suggests using the other labelling for matter, although we might prefer to follow the Georgi-Glashow labelling with the equivalent of 5 + 10, which gives the first labelling again. The representation 3a + 3b + 4a + 5 is equivalent to the monomial representation of dimension 15, and has many expressions in terms of smaller representations, which may represent more primitive concepts: The representation 1 + 4a + 5 + 5, equivalent to the permutation representation on 15 letters, also has a number of possible derivations: Note incidentally that by losing the Lorentz group we have lost the standard model distinction between fermions and bosons. Hence in this more general case, some of the force mediators might be classified as fermions by the standard model. Clearly, however, massive fermions cannot mediate forces, so this could only possibly apply to neutrinos. Indeed, the distinction between the two halves of the group algebra appears rather to be a distinction between massive particles in the faithful part, and massless particles in the faithless part. So let us use this as the defining property, and sort out the fermion/boson distinction later.

4.3. Labellings for vectors. It seems obvious that we must identify 3a + 3b with the electromagnetic field, and hence with the photon.
Thus we have a 3-dimensional space for the momentum, in each of two helicities. The standard model identifies 3a with 3b, which splits 4a + 5 into 1 + 3 + 5, hence identifying 3 + 5 with gluons, and effectively throwing away the scalar. The scalar could be thought of as a graviton, perhaps, but the description as 3a ⊗ 3b strongly suggests that we should identify this space with three generations of neutrino momenta. In other words, it forces us to decompose a gluon into a pair of neutrinos. If this is correct, then it implies that there is an invariant splitting of the neutrinos into two types, which might be interpreted as neutrinos and antineutrinos. However, in this discrete model there are only half as many neutrino degrees of freedom as in the standard model, so that the concepts of momentum, generation, and neutrino/antineutrino are mixed together in a complicated fashion that depends on the individual observer. This could explain the phenomenon of neutrino oscillation, at least at a conceptual level. To see this in more detail, we have to tensor with the observer's spacetime, either in 4a or 1 + 3a or 1 + 3b. Then we have This seems to suggest that the particles in 4a change the energy in some way, while those in 5 do not. The former, then, are participating in the weak interaction to change the observer's measurement of the mass, while the latter are participating in the strong force and/or gravity, with no change of mass. The second and third equations show that the chirality of the weak interaction, that is the question of whether it is 1 + 3a or 1 + 3b that is mixed with 4a on the right-hand side, may not be intrinsic, but may instead be determined by the observer's choice of 1 + 3a or 1 + 3b to parametrise spacetime on the left-hand side. In other words, the chirality of the weak interaction may be equal to the chirality of the observer's motion relative to an inertial frame.
While this conjecture is inconsistent with the conventional understanding of the standard model, it is not inconsistent with experiment, since the experiment has only been done in one particular chirality of the motion of the experiment.

4.4. Labellings for spinors. The total dimension of the faithful part of the real group algebra is 60, but since it consists of quaternionic representations, we should really consider them as such, say with the quaternions acting by left-multiplication, and the finite group acting on the right. Then we have 15 dimensions of quaternions, splitting as an algebra into 1 + 1 + 4 + 9, and as a representation into 1 + 1 + 2 + 2 + 3 + 3 + 3. Each quaternion can represent a Weyl spinor, so that we have enough degrees of freedom for one generation of standard model fermions. In fact, of course, the standard model does not use the quaternion structure, instead choosing a particular identification of H with C². By varying the complex structure, therefore, we can in effect obtain three times as many degrees of freedom, and hence have exactly the right number for a three-generation model. I have suggested two possible labellings for two possible spinors, as follows: In the labelling of 2a with 1 + 4a + 5 + 5, we can possibly see 2a as (massless) neutrinos, with 6 giving mass to 3 generations of the other particles, with electrons in the form so that we can take out a generation symmetry in 3b and reinstate a Dirac spinor in 2b + 2a. Moreover, we see an intriguing relationship between 4a acting on the left-handed Weyl spinor 2a, and 1 + 3a acting on the right-handed Weyl spinor 2b. However, it seems more likely that, as already suggested, these representations are better interpreted as isospin representations. Whichever way we look at it, this may have something interesting to tell us about the mixing of electromagnetism with the weak force. It may also have something to say about the mixing with the strong force, and the three generations.
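The claim above, that varying the complex structure on H triples the available degrees of freedom, rests on the fact that right multiplication by i, j and k gives three mutually anti-commuting complex structures on the quaternions, each commuting with every left multiplication. A quick numerical check (my own, not from the paper):

```python
import itertools
import random

def qmul(p, q):
    # quaternion product of 4-tuples (a, b, c, d) = a + bi + cj + dk
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def close(p, q):
    return all(abs(x - y) < 1e-12 for x, y in zip(p, q))

random.seed(0)
units = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]   # i, j, k
samples = [tuple(random.uniform(-1, 1) for _ in range(4)) for _ in range(10)]
p = (0.3, -0.2, 0.5, 0.7)                             # an arbitrary left multiplier

for a in units:
    for q in samples:
        # right multiplication by i, j or k squares to -1 ...
        assert close(qmul(qmul(q, a), a), tuple(-x for x in q))
        # ... and commutes with every left multiplication (associativity)
        assert close(qmul(p, qmul(q, a)), qmul(qmul(p, q), a))

for a, b in itertools.combinations(units, 2):
    for q in samples:
        # the three right-multiplication complex structures anti-commute
        assert close(qmul(qmul(q, a), b),
                     tuple(-x for x in qmul(qmul(q, b), a)))
```

So the finite group, acting on one side, leaves the choice of complex structure on the other side completely free.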
Similarly for the quarks, except that there now appears to be a 'colour' symmetry in 4b that is mixed up with 2a or 2b. If this is a reasonable interpretation, then this structure enforces colour confinement via which has an extra copy of 2b that can be used to define the charge. Now we can see a potential new labelling of the 15 fermions, completely different from the standard model: Only the left-handed spinors get generation labels, and only the right-handed spinors get colour labels, with only 2, rather than 3, degrees of freedom. Of course, this allocation may not be correct, but it gives a rough idea of how it might be possible to classify fermions in a scheme similar to the Georgi-Glashow scheme, but with less profligacy of unobservable variables, so that all three generations can be incorporated without increasing the size of the model. It may be objected that the neutrinos have no generation label in this scheme. This is consistent with the experimental fact that the generation of a neutrino depends on the observer. That is to say, in this model the generation label on a neutrino comes from its interaction with another particle, rather than from its intrinsic properties. Another possibility is to use 3a + 3b + 4a + 5 as a labelling for spinors of type 4b. This gives a splitting of the 15 quaternions into 6 + 6 + 8 + 10 complex numbers, which might be useful for modelling fermions in the form of 6 leptons, 6 quarks, 8 spin-1/2 baryons (the baryon octet) and 10 spin-3/2 baryons (the baryon decuplet). However, this does not seem to match the spin-1/2 representations 2a and 2b and the spin-3/2 representation 4b consistently to the given particles.

5. Electro-weak aspects of the standard model

5.1. The Dirac equation. In the proposed model, the Dirac equation is a tensor equation, which translates into the standard model as a matrix equation.
It must arise from a tensor product of a 4-dimensional faithless representation, and a 4-dimensional faithful representation, followed by a projection back onto the latter. In the standard model it is the Clifford algebra that provides this projection, but here it is the representation theory of the finite group that does this job. The options for the representations are 4a, 1 + 3a and 1 + 3b for the former, and 4b and 2a + 2b for the latter. These six cases all have a projection of the required kind, but only in the first two cases is the projection uniquely defined. Hence the first two are the only cases in which we could realistically hope to obtain a version of the Dirac equation, which confirms our choice of 4a as the spacetime representation. Ignoring an overall scale factor in each case, this gives us a gauge group SL(2, H) = Spin(5, 1). Since the Dirac equation requires a Lorentz gauge in the form of the group Spin(3, 1), only the first option is available to us. We therefore have to choose the Dirac matrices inside SL(2, H). There is a wide range of options, but note that we do not have a free choice within SL(4, C), as might be thought. In particular, we cannot use the complexification, and must make a careful choice between the γμ and the iγμ to ensure that we only have 15 real dimensions, and not 16 complex dimensions. These choices were exhaustively studied in for all possibilities for the signature of the gauge group. For our purposes, with this signature, the most suitable option is γ0, γ1, γ2, γ3, iγ5. This requires us to modify the Dirac equation to use iγ5 for the mass term, rather than the scalar i, but this change has essentially no effect on anything else. A similar modification is proposed in.
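The algebraic content of the choice γ0, γ1, γ2, γ3, iγ5 can be checked in any basis, since the Clifford relations are basis-independent. The sketch below uses the standard chiral (Weyl) basis purely for illustration (the paper's own quaternionic basis differs), and verifies that {γμ, γν} = 2ημν and that the five matrices γ0, γ1, γ2, γ3, iγ5 mutually anti-commute and square to +1, −1, −1, −1, −1, i.e. generate a Clifford algebra of signature (1, 4):

```python
# Minimal complex-matrix helpers (pure Python, no numpy).
def mm(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

def eq(A, B):
    return all(abs(x - y) < 1e-12 for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def block(A, B, C, D):
    # assemble [[A, B], [C, D]] from 2x2 blocks into a 4x4 matrix
    return [ra + rb for ra, rb in zip(A, B)] + [rc + rd for rc, rd in zip(C, D)]

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# chiral basis: g0 = [[0, I], [I, 0]], gk = [[0, sk], [-sk, 0]]
g = [block(Z2, I2, I2, Z2)] + [block(Z2, s, smul(-1, s), Z2) for s in (sx, sy, sz)]
g5 = smul(1j, mm(mm(g[0], g[1]), mm(g[2], g[3])))
ig5 = smul(1j, g5)
Id = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
eta = [1, -1, -1, -1]

# {g_mu, g_nu} = 2 eta_mu_nu
for mu in range(4):
    for nu in range(4):
        anti = add(mm(g[mu], g[nu]), mm(g[nu], g[mu]))
        assert eq(anti, smul(2 * eta[mu] if mu == nu else 0, Id))

# g0, g1, g2, g3, i*g5 mutually anti-commute; (i*g5)^2 = -1
five = g + [ig5]
assert eq(mm(ig5, ig5), smul(-1, Id))
for a in range(5):
    for b in range(a):
        assert eq(add(mm(five[a], five[b]), mm(five[b], five[a])), smul(0, Id))
```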
It is easiest to choose the four rotations as generators, which means they must be anti-Hermitian, and the following is a suitable choice: (Note that the standard Bjorken-Drell convention is not a suitable choice, since it does not respect the underlying quaternionic structure.) It can then be easily checked that γ0 is Hermitian, and therefore represents one of the boosts. The other four boosts are iγ0γ5, γ0γ1, γ0γ2 and γ0γ3, so that the gauge group mixes γ0 with the other boosts into a vector representation of SO(5). All of this is consistent with the original proposal to modify the Dirac algebra to form the representation 4b ⊗ 4b. The Lie algebra of SO(5) maps onto 3a + 3b + 4a, extended by a copy of the vector representation 5 to an image of the Lie algebra of SO(5, 1). This Lie algebra, moreover, contains exactly the four 'force' representations 3a, 3b, 4a and 5, and might therefore be the Lie algebra of the gauge group of a completely unified quantum theory of everything. It contains 5 energy terms, sufficient for kinetic energy and four types of potential energy for the four forces. On the other hand, this algebra is neither compact nor unitary, so this theory is not a Yang-Mills theory. The reason for this is that it contains the 'Lorentz gauge group' SL(2, C), which is not compact. Moreover, it does not satisfy the Coleman-Mandula theorem, and therefore cannot be a quantum field theory in the usual sense. Hence the theory can only make sense as a discrete theory, and cannot be translated into the standard language of quantum field theory.

5.2. Chiral Dirac matrices. As we saw in Section 5.1, the Dirac matrices iγ5, γ1, γ2, γ3 represent a basis for the representation 4a, and they can therefore be translated into matrices acting on 2a + 2b that swap the chiral components 2a and 2b. Alternatively, we can choose γ5, iγ1, iγ2, iγ3, which now represent boosts, so must be represented by Hermitian matrices.
It is reasonable to choose these as the matrices g, i, j, k given in Section 1.2. But if we want to interpret iγ5 as a mass term, it may be better to use h instead of g for this purpose. Mathematically, it makes no difference, so it is just a question of which is more convenient for purposes of interpretation. In this case, γ0 acts as a scalar on each representation, and in order to preserve the chiral components, we must restrict the Lie group generators to even products of the other matrices. Without γ0, the even products of the other Dirac matrices give us a copy of Spin(4), such that γ0 generates a copy of R that adjusts the scale factor between 2a and 2b. It can be readily checked that the splitting of Spin(4) into SU(2)_L × SU(2)_R is defined by the splitting: This splitting therefore translates to a splitting defined by projections with 1 ± γ0, and therefore into a splitting between positive and negative energy. The standard model uses projections with 1 ± γ5 instead, but this leads to a difficult problem of interpretation, since it is not obvious what these projections mean. They are clearly physically very different, but it is not clear why. Swapping γ0 and γ5 in this way, as the group algebra requires, separates positive and negative energy very clearly, and provides an obvious reason why they behave completely differently. It should also be noted that there is no action of the Lorentz group SL(2, C) on the representation 2a + 2b, although there is an action of SO(3, 1) on 2a ⊗ 2b = 4a. The standard model does not distinguish between SO(3, 1) and SL(2, C) here, and is therefore forced to complexify the Dirac algebra in order to allow SL(2, C) to act on the equivalent of 2a + 2b. Physical chirality of spacetime arises in this model from the splitting of the anti-symmetric square of the spacetime representation 4a into two chiral pieces as 3a + 3b. Hence the model directly and accurately models physical chirality on a Euclidean spacetime that is fundamental to quantum reality.
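The contrast between the two kinds of projection can be made concrete. In the Dirac basis, γ0 is diagonal, diag(1, 1, −1, −1), and the following minimal check (illustrative only; not the paper's quaternionic basis) confirms that the projections with (1 ± γ0)/2 split the spinor space into two complementary two-dimensional halves, the positive- and negative-energy components at rest:

```python
g0 = [1, 1, -1, -1]                     # gamma_0 as a diagonal matrix
P_plus  = [(1 + x) / 2 for x in g0]     # (1 + gamma_0)/2
P_minus = [(1 - x) / 2 for x in g0]     # (1 - gamma_0)/2

assert all(p * p == p for p in P_plus + P_minus)         # idempotent
assert all(p * q == 0 for p, q in zip(P_plus, P_minus))  # orthogonal
assert all(p + q == 1 for p, q in zip(P_plus, P_minus))  # sum to identity
# each projector has trace 2: two components of each energy sign
assert sum(P_plus) == sum(P_minus) == 2
```

The standard model's projections with (1 ± γ5)/2 have exactly the same formal properties, which is why the two splittings are so easily conflated; the difference lies entirely in which matrix is diagonalised.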
It does not necessarily model physical chirality on a Minkowski spacetime, or on a macroscopic scale that is appropriate to any particular observer. Nor does it directly model any change of chirality as between different observers, although it does provide a gauge group SL(4, R) with which to transform between different observers' choices of SO(3, 1). Indeed, the interpretation of a fundamental discrete chirality in terms of a macroscopic continuous chirality runs into all the usual problems, especially the measurement problem, discussion of which is beyond the scope of this paper.

5.3. Electroweak mixing. As I have already pointed out, although there are maps between the compact parts of the groups that act on 2a + 2b + 4b and on 3a + 3b + 4a (+5), these maps do not extend to the non-compact parts. There is no copy of SO(3, C) ≅ SO(3, 1) in the group algebra that acts on 3a + 3b. But there is a copy of Hence, by complexifying the Dirac algebra one can identify SO with SO, factor out the central involution from both groups, and compute the correct answers with the resulting formalism. This is in effect what the standard model does, and this formalism is indeed capable of producing the same answers to the calculations that the group algebra model produces. But in the group algebra model, the adjoint representation 3a + 3b of SO(4) is different from the adjoint representation of SO(3, 3), which has its compact part in 3a + 3b and its boosts in 4a + 5. In particular, there is a mixing angle between adjoint SU(2)_L in 3a and spin SU(2) in 3a + 3b, known as the electroweak mixing angle. This angle clearly depends on the energy, which changes the imposed complex structure on the rest of 3a + 4a. Nevertheless, with the exception of the running of the mixing angle, the standard model does an excellent job of describing what happens in electroweak interactions, as studied in the laboratory.
Now extending from spin SU(2) to SL(2, C) = Spin(3, 1) acting on 4b, we extend from Spin(4) to Spin(4, 1), whose adjoint representation consists of 3a + 3b together with a 4-space inside 4a + 5. Therefore we also have a mixing between 4a and 5 defined by the embedding of our choice of SO(4, 1) in SO(5, 1). The only reasonable interpretation of this symmetry-breaking is that SO defines our concept of charge, regarded as fixed, while allowing mass to vary as it does in the weak interaction, so that SO then defines our concept of mass. For the purposes of electroweak unification we only need SO(4, 1), with a fixed definition of charge, and a variable mass. Then we have an action of SO(4, 1) as symmetries of the Dirac equation, as a quantum version of Einstein's equation E = mc². But the finite group treats the energy as a scalar, and therefore strictly enforces not only the principle of conservation of energy, but also the principle of invariance of energy. In other words, the model proposes that there should be a definition of energy that does not depend on the observer. But if energy does not depend on the observer, then mass must depend on the observer. Since the gauge group is SO(4, 1), this is not actually a physical distinction at all, it is purely a choice of gauge. One can choose a Lorentz gauge, or one can choose an SO(4) gauge. For a finite group model, we have to choose a compact gauge group, namely SO(4). This contains both SU(2)_L and SU(2)_R as subgroups, the former implementing the change of mass seen experimentally in weak interactions. Moreover, we can now see roughly how the values of mass that are used in the standard theory may depend, at least in some instances, on the observer's choice of Lorentz gauge. They depend on the electroweak mixing between 3a and 3b, that is between left-handed and right-handed angular momentum. Since the group algebra model is independent of scale, this mixing scales up to astronomical scales.
Masses of elementary particles are then related to the mixing of angular momentum on a Solar System scale. In other words, they are related to rotation, acceleration and the gravitational field. But I emphasise again that the model does not predict that particle masses change physically when the gravitational field changes. All that changes is the appropriateness or otherwise of particular gauges in particular circumstances.

6. Other aspects of the standard model

6.1. Helicity and chirality. Chirality in the standard model is expressed by splitting the Dirac spinor representation 4b into two Weyl spinors. Any such splitting in the finite model can only be achieved by breaking the symmetry, and restricting to a subgroup. As we saw in Section 2.4, there are three types of maximal subgroups, which split the Dirac spinor into two in three distinct ways. There are therefore three distinct concepts of 'chirality', which we must match up to the standard model concepts of helicity and chirality. First, there is the restriction to the subgroup SL(2, 3) of index 5, that is the binary tetrahedral group. In this case, the Dirac spinor restricts to a sum of two complex 2-spaces. As representations, these are complex conjugates of each other, so that there is a natural interpretation as left-handed and right-handed Weyl spinors, acted on by the subgroup SL(2, C) of SL(2, H). The complex structure here is defined in the character of the representation by a cube root of unity, for example (−1 + i + j + k)/2, and an imaginary quaternion perpendicular to it, say (i − j)/√2, that effects complex conjugation. However, it is not possible to write the finite group representations without introducing some additional irrationalities, such as √3. This type of chirality corresponds to the chirality of SL(2, C) in the standard model, which we might call Minkowski chirality or M-chirality.
The complex scalar multiplication group U(1) that acts as the gauge group of electromagnetism in the standard model is the centraliser of SL(2, C) in SL(2, H), but does not correspond to any subgroup of the finite group. Second, there is the restriction to the subgroup of index 6, that is a dicyclic group of order 20, generated for example by j and t. In this case, the Dirac spinor restricts to a sum of two quaternionic 1-spaces, which are equivalent to the restrictions of 2a and 2b. These are acted on by a copy of Spin(4), so that this chirality is a Euclidean chirality, or E-chirality. In the standard model, the complex structure of the Dirac algebra ensures that M-chirality and E-chirality are identified with each other, or combined into a single concept of EM-chirality. This might be thought of as 'electromagnetic chirality', and is usually called helicity. Third, there is the restriction to the subgroup of index 10, that is a dicyclic group of order 12, generated for example by k and w. In this case the Dirac spinor restricts to a sum of two complex 1-spaces and a quaternionic 1-space. The ambient Lie group here is therefore U(1) × SU(2), so that this type of chirality might be called weak chirality, or W-chirality. In the action of the finite group, k maps into both U(1) and SU(2), so creates a 'mixing' between U(1) and SU(2). Note, however, that k itself cannot be directly identified with a generator for U(1)_em, since k acts by right-multiplication and U(1)_em acts by left-multiplication. The word chirality is usually reserved for the concept of W-chirality. The finite model makes the distinctions rather more clearly than the standard model does, and associates each of the three types of chirality with a particular type of symmetry-breaking. In so doing, it associates each of the forces with a particular subgroup (up to conjugacy) of the binary icosahedral group.
Thus electromagnetism is associated with SL(2, C) and therefore with the binary tetrahedral group, the weak force with U(1) × SU(2) and therefore with the dicyclic group of order 12, leaving Spin(4) and the dicyclic group of order 20 for the strong force and/or quantum gravity. As we have seen, the representation 4b is most naturally written with respect to a basis that exhibits the W-chirality. The standard model of electroweak unification attempts to write W-chirality and M-chirality with respect to the same basis, which is essentially impossible. The mathematical structure of this representation does not lend itself to a nice description of either M-chirality or E-chirality.

6.2. The strong force. To obtain the remaining forces, we have to extend from the SO(4, 1) gauge group to SO(5, 1). This adds a copy of the representation 5 to what is already there. Moreover, it provides a compact gauge group SO(5) acting on 5 in its vector representation, and on 3a + 3b + 4a in its adjoint representation. In other words, we can interpret 5 as a set of five 'colours' for the strong force, or we can extend to six colours in 1 + 5, with the finite group acting as permutations of the colours, and therefore enforcing colour confinement by throwing away the scalar (energy) component. What the standard model does instead is to take 3 colours and 3 anticolours, thereby replacing 1 + 5 either by 3a + 3b or 6. If the gauge group for the strong force is SU(3), we must have 6. For compatibility with the group algebra model, we have to compare: From this we see that any choice of SU(3) breaks the symmetry again. Such symmetry-breaking cannot be obtained by restricting to a subgroup of the finite group, since no proper subgroup has a 3-dimensional irreducible unitary representation. So there is no plausible copy of adjoint SU(3) with which to model gluons, although there are representations such as 3b + 5 that might be a suitable substitute.
Therefore the proposed model is inconsistent with quantum chromodynamics (QCD) as a theory of the strong force. That does not mean it is inconsistent with experiment. Nor does it mean that it is necessarily inconsistent with lattice QCD. By working on a lattice, one reduces the symmetry to a finite group. If the lattice that is chosen is an ordinary cubic lattice then the symmetry is reduced to the subgroup SL(2, 3) of SL(2, 5). Then the representations restrict as follows: These two are still not the same, so there still appears to be some inconsistency between the proposed model and lattice QCD. However, the difference may be subtle, and may be difficult to distinguish experimentally. Since this discussion takes us beyond the standard model, I postpone it to Section 7.

6.3. Gell-Mann matrices. In order to obtain the finite analogue of the Gell-Mann matrices we have to work with the representation 3a ⊗ 3b = 4a + 5. This representation can be written in terms of 3×3 matrices, with 3a acting by right-multiplication by the matrices already given, and 3b acting by left-multiplication by the transposed matrices. Calculation then shows that 4a is spanned by the matrices while 5 is spanned by the matrices Then the Gell-Mann matrices can be obtained as suitable complex linear combinations of these, using the relations The basis chosen here clearly breaks the symmetry of 4a to 1 + 3, and of 5 to 1 + 1 + 3, but there are more symmetrical spanning sets that exhibit the structures as deleted permutation representations on 5 and 6 points respectively. Thus in 4a we can choose five matrices adding to zero, and permuted by the group: Here the relations 2σ + 1 = √5 and 2τ + 1 = −√5 may be useful for understanding the structure. The five matrices displayed here correspond to the five quaternions 1, (−1 + √5(i + j + k))/4, (−1 + √5(i − j − k))/4, It follows from this that the basis elements for 4a given in correspond to the quaternions 1, i, j, k in some order, which we can choose arbitrarily.
Similarly, in 5 the following six matrices are permuted by the group: Note, incidentally, that these matrices have rank 1 rather than 2. Hence they can be factorised as the product of a row vector (in 3a) and a column vector (in 3b), but not uniquely, since scalars can be transferred from one vector to the other. Now it is easy to imagine these six matrices coming in three pairs, obtained by changing the sign on the off-diagonal entries, and it is easy to imagine this pairing to be obtained by complex conjugation. One can then convert the representation 5 into a 3-dimensional unitary representation of SU. But this imposes a structure that is incompatible with the finite symmetry group, and creates a structure of three colours and three anticolours that span a 5-space, not a 6-space. In other words, the finite model has colour confinement built into it naturally. At the same time, we are able to match the permutation representation 1 + 5 to the monomial representations 3a + 3b and 6, in order to investigate how this apparent breaking of symmetry arises from the underlying algebraic structure. There is a third permutation representation worth considering here: 4a + 5 is the deleted permutation representation on 10 points, which can be taken to be the following matrices: These matrices also have rank 1, so can be factorised as a product of a row vector and a column vector. In all of these representations, there is a √ 5 that is capable of holding the same discrete quantum numbers that are assigned to √ −1 in the standard model. Of course the algebra is quite different, so that translating from one to the other cannot be done without a great deal of 'mixing' in order to transfer the quantum numbers, or other variables, from one model to the other. 6.4. Mixing. In the finite model these 3 × 3 matrices do not generate a Lie algebra or a Lie group, as they do in the standard model, but simply form a representation of the finite group.
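The rank-1 factorisation mentioned here, and its non-uniqueness up to a transferred scalar, is elementary linear algebra; a minimal numpy sketch (my illustration, with arbitrary example vectors):

```python
import numpy as np

col = np.array([[1.0], [2.0], [3.0]])   # a column vector (playing the role of 3b)
row = np.array([[4.0, 5.0, 6.0]])       # a row vector (playing the role of 3a)
M = col @ row                            # their product is a rank-1 3x3 matrix
assert np.linalg.matrix_rank(M) == 1

# The factorisation is not unique: a scalar can be moved from one factor
# to the other without changing the product.
s = 2.5
M2 = (col * s) @ (row / s)
assert np.allclose(M, M2)
print("rank-1 factorisation verified")
```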
The 4a component then appears not only in the strong force, but also in the electro-weak force, in such a way that the mixing defines four mixing angles, associated to a suitable basis of the representation. These are presumably the mixing angles in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, consisting of an overall 'CP-violating phase' associated to the identity matrix, together with the Cabibbo angle and two other generation-mixing angles. There are, of course, five other parameters here that correspond to the representation 5. Four of these presumably appear in the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix in an analogous way, while the fifth must be the electro-weak mixing angle. A precise allocation of these mixing angles to the matrices in the finite group algebra must await further development of the model. It may be that the CKM and PMNS matrices represent maps between 4a and 5, rather than vectors within them, or perhaps I have allocated them the wrong way round. 7. Beyond the standard model 7.1. A proposal. It has become clear from the preceding discussion that a clear distinction between 2a + 2b and 4b needs to be made. The former is acted on by Spin, the latter by Spin. The Lorentz group can be obtained as a subgroup Spin of Spin. Therefore 2a + 2b is a Lorentz-invariant spinor, while 4b is a Lorentz-covariant spinor. In theory, the standard model describes Lorentz-invariant spinors as 'isospinors', which are discrete internal variables. But in practice, the experimentally discrete electron generation symmetry is conflated with the experimentally continuous neutrino generation symmetry. This is a clear sign that Lorentz-covariant symmetries have been conflated with Lorentz-invariant symmetries. I therefore suggest that we take 4b as the Lorentz-covariant spinor, so that we can extend from Spin to Spin to take account of the different Lorentz groups chosen by different observers, who each have their own preferred definition of inertial frame.
This will allow us to distinguish between a particle physics inertial frame attached to the Earth, and a gravitational inertial frame which is not. It will also allow us to distinguish between a Solar System inertial frame, appropriate for Newtonian gravity and general relativity, and a galactic inertial frame, appropriate for a more general theory of gravity. On this basis we can treat the gauge groups of the theory as consisting of Hence we should be able to recover the standard model readily enough as a submodel of the proposed new model, although the discussion in the previous section suggests there may be a difficulty with implementing QCD. Moreover, we have a group SU R which can act as a generation symmetry group, in order to extend the standard model to three generations of fermions. In addition, we obtain a gauge group which might possibly be useful for a quantum theory of gravity, by restricting to the compact subgroup Spin of Spin. Finally, the compact subgroup Sp of SL(3, H) is 21-dimensional, and contains some version of the gauge group SU of the strong force. I propose that it might generalise the 20-dimensional Riemann Curvature Tensor, by allowing the invariant scalar (mass) to become covariant instead. Now to model the forces, we need to map from the gauge groups acting on the spinors to corresponding groups acting on the faithless part of the algebra. This puts the weak force in 3a and the generation symmetry in 3b, as well as putting gravity in 5. But there is no similar invariant action of SU or U, so that in these cases the translation from gauge group to bosons cannot proceed in the standard Yang-Mills fashion. Nevertheless, I propose that all four fundamental forces can be modelled in some way inside 3a + 3b + 4a + 5. In the standard model they mix together in various complicated ways that are covariant rather than invariant, since they are experimentally known to vary with the energy.
In the group algebra model, however, it is possible to split into four forces in an invariant way. Clearly, therefore, this is not exactly the same as the standard model splitting, but, if it works, it should produce a much simpler model in the end. Mediators for invariant forces must presumably be massless, and they must be bosonic as far as the Lorentz group is concerned. Of course, a Lorentz-invariant spinor is, counter-intuitively, bosonic in this sense. In particular, elements of 2a+2b are physically bosons, although mathematically fermionic in this model. In other words, neutrinos are technically bosons. This is a consequence of our separation of the two concepts of Lorentz-invariant and Lorentz-covariant spinors. Therefore, neutrinos are available as mediators for both quantum gravity and the invariant version of the weak force. Of course, this means that the Yang-Mills paradigm of quantum field theory does not apply, but that was always going to be the case for a discrete model, so cannot be taken as a criticism. We can now interpret 3a + 3b as photons and 3a ⊗ 3b = 4a + 5 as neutrinos, for a complete set of mediators. This suggestion may change our perspective on whether to regard neutrinos as Lorentz-covariant or Lorentz-invariant entities. It is not quite as obvious as it might seem that the covariant symmetries belong to the neutrino, and the invariant symmetries to the electron, since the experimental evidence relates mainly to a mixing of the two. The mathematics here suggests that it may be useful to convert from the standard model Lorentz-covariant (and therefore massive) neutrinos to Lorentz-invariant (and therefore massless) neutrinos. 7.2. Hamiltonian formulation. In order to formulate a Hamiltonian, we need to identify a particular representation to hold potential and kinetic energy. The obvious choice is 4a, of which there are four copies in the group algebra.
Now we can separate the four copies by tensoring 4a with each of 3a, 3b, 4a and 5 in turn, and projecting onto the unique copy of 4a in each case. The tensor product here corresponds to multiplication or division by an infinitesimal, so that the whole operation can be converted into integration or differentiation if desired. At the macroscopic level, we have four copies of 4a in the Hamiltonian, and they can be mixed together in arbitrary ways, with a GL(4, R) gauge group. That is the group of general covariance, which says we can split the Hamiltonian any way we like, and we will always get the same answer. The splitting of the Hamiltonian that is adopted in the standard model is therefore completely arbitrary, and the so-called 'constants' are really artefacts of our decision to separate gravity from the other forces, and therefore they are artefacts of the particular gravitational environment we live in. The finite group suggests a completely different way to split the Hamiltonian, in such a way that we separate four quantised forces rather than three, and therefore have a quantum gravity that consists of half of the strong force, together with one more dimension that is new. We then obtain the Hamiltonians from consideration of the tensor products The first two lines contain the Hamiltonian for quantum electrodynamics, with the clear action of the Dirac matrices in 4a swapping the left-handed 3a with the right-handed 3b. The third line contains the modified Dirac algebra, with a clear distinction between 5 in the 5 representation, and the scalars in 1. While these three lines in the standard model are used only for electro-weak unification, it is clear at this point that they contain half of the strong force as well. The other half of the strong force is in the last line, and differs from the third line in having 3a+3b (three colours and three anti-colours) in place of 1 (no colour), thereby enforcing colour confinement.
The usual interpretations of 3a+ 3b as the electromagnetic field, and as photons, regarded as spin 1 particles, are still viable in this scenario, in which 3a and 3b are the spin 1 (adjoint) representations of SU L and SU R respectively. But 4a is actually a spin 1/2 representation of both SU L and SU R. Hence an interpretation as spin 1 mediators is not viable in this case. The only reasonable interpretation of the group algebra model is then that the representation 4a consists of neutrinos and/or antineutrinos. The representation 5, on the other hand, is acted on by Spin rather than Spin. If we mix the two together by embedding Spin in Spin, we split 5 into 1 + 4, and must interpret these pieces as spin 0 and spin 1/2, rather than spin 2. If, however, we look at the action of Spin, we see a splitting 1 + 1 + 3, interpreted as two spin 0 and three spin 1 particles, but then presumably mixed together into a set of five spin 1 gluons. 7.3. Some implications. Another way of looking at this proposal is to think of the representation 4a + 5 as consisting of gluons. This interpretation is certainly more conventional, but it requires there to be 9 gluons rather than 8. The ninth gluon could then be interpreted as a graviton. But it is not well-defined. A better interpretation might be to consider 3a and 3b to be the momenta of neutrinos and antineutrinos respectively, so that 3a + 3b describes a photon obtained from the annihilation of a neutrino and an antineutrino. Then 3a ⊗ 3b must describe an interaction in which a nuclear process emits a neutrino and/or an antineutrino. In the standard model, the weak force accounts for 4 such processes, one of which (beta decay) emits an antineutrino, while the other 3 (change of electron generation) emit both. The other 5 are then strong force processes, in which the combination of neutrino and antineutrino is interpreted as a gluon. 
This creates an overlap between the 3 generation-changing processes of the weak force, and the other three gluons, and is responsible for the weak-strong mixing within the standard model. It also creates an overlap between beta decay and the proposed ninth gluon, or graviton, and hence creates a mixing between the weak force and gravity, that gives mass to the intermediate vector bosons. The corresponding part of the standard model is a set of four massive bosons, consisting of the three Z and W bosons plus the Higgs boson, forming a basis for another copy of 4a. A further consequence of the separation of invariant from covariant spinors is that the particle-antiparticle distinction becomes a distinction between the two halves of the invariant spinor, rather than the covariant spinor. Hence the proposed model does not permit an interpretation of an antiparticle as being a particle travelling backwards in time. In this model, there is no CPT-symmetry, and a particle-antiparticle pair is better interpreted as a C-doublet rather than a PT-doublet. It follows that this model contains an intrinsic (invariant) distinction between particles and antiparticles, such that there is no particle-antiparticle symmetry. In other words it explains the particle-antiparticle asymmetry in the universe. Yet another consequence is the discreteness of photon polarisation, since the photon lies in the representation 3a + 3b, and the two components represent the two polarisations, that is, the two helicities in the standard terminology. But by using SO rather than SO, we convert the helicity into a chirality, and a Lorentz-covariant angular momentum into a Lorentz-invariant momentum with an additional sign for the chirality. Linear polarisation then arises from interactions with matter, which sets up a correlation between the internal symmetries in 3a + 3b and the environmental symmetries in 4b.
The measurement problem arises from the identification of 2a + 2b with 4b, which causes an identification of 3a + 3b with the adjoint representation of the Lorentz group, and therefore an identification of an observer-dependent property with an observer-independent property. In other words, the measurement problem would seem to be caused, in effect, by the confusion between SO and SO that pervades the whole Dirac algebra, and the resulting confusion between internal properties and environmental properties. 7.4. Quantum gravity. As has already been said, I take it as fundamental that the symmetry group of Einstein's equation is the group SO. At the quantum level it is necessary to restrict to a compact group, that is SO, which implies that energy is treated as a scalar. In relativity, on the other hand, it is usual to treat (rest) mass as a scalar, and therefore to work with the group SO. The way that these groups are combined in the usual formalism is to complexify the Clifford algebra, so that both signatures can be treated simultaneously. However, this blurs the distinction between mass and energy in a rather unfortunate way. It seems to me, therefore, that it may be better to maintain a clear distinction between mass and energy, and work with 5-momentum rather than 4-momentum. Since 5-momentum is a generalisation of both energy and mass, it would seem to be a natural choice for the fundamental concept on which any realistic quantum theory of gravity must ultimately be based. The gravitational force between two bodies defined by their mutual 5-momenta must, by Newton's third law, be described by an anti-symmetric tensor. Since Newton's law of universal gravitation depends on the product of the masses, this tensor has rank 2. It is therefore a tensor with 10 degrees of freedom, the same number as there are in both the stress-energy tensor and the Ricci tensor in general relativity. 
The difference, however, is that because general relativity only works with 4-momentum, not 5-momentum, the two tensors are defined instead as symmetric rank 2 tensors on a Minkowski 3 + 1 spacetime, or its dual. While such a tensor can fulfil many of the same functions as the anti-symmetric tensor on 5-momentum, it is not the same tensor. Moreover, since a symmetric tensor does not comply with Newton's third law, the basic premise of general relativity implies that gravity is not a force in the Newtonian sense. That is, of course, the usual modern interpretation. But if it isn't regarded as a force, then it is hard to see how it can be quantised in any reasonable way. In particular, the spin 2 representations that appear in the Ricci tensor restricted to SO, and in the Riemann curvature tensor restricted to SO, cannot in this scheme represent quantised gravitons. The gravitons must live in the antisymmetric tensor, which breaks up as 3+3+4 for SO. Here, 3+3 represents spin 1 particles, namely photons, and 4 represents spin 1/2 particles, namely neutrinos. This contrasts with the Ricci tensor, which breaks up as 1 + 4 + 5 for SO, and which may well represent physical particles, although it is hard to see how these could be interpreted as gravitons. Now we are ready to consider how to do the quantisation. For this purpose, we need to use a finite group, and I propose that Alt is the most suitable group, for all the many reasons I have already given. The permutation representation on 5 letters breaks up as 1 + 4a, on which there is an invariant Lorentzian metric. However, this does not permit any mixing of the scalar, whether that be interpreted as energy or mass, with the rest of the representation. A better option, therefore, may be to consider using the irreducible representation 5, that is the monomial representation, that puts complex scalars of order 3 on each of the five letters.
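The dimension count behind these claims can be made explicit (a summary sketch of the source's numbers; the identification of the pieces with photons and neutrinos is the text's own):

```latex
\dim \Lambda^2(\mathbb{R}^5) = \binom{5}{2} = 10 = 3 + 3 + 4,
\qquad
\dim S^2(\mathbb{R}^4) = \tfrac{4 \cdot 5}{2} = 10 = 1 + 4 + 5 .
```

So the antisymmetric tensor on 5-momentum and the symmetric Ricci and stress-energy tensors on 4-momentum have the same 10 degrees of freedom, but they decompose differently: 3 + 3 (spin 1) plus 4 (spin 1/2) in the antisymmetric case, versus the 1 + 4 + 5 splitting of the Ricci tensor quoted above.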
This has the effect that the mass term comes as a triplet, for three generations, and so do the momentum (or massless neutrino) terms. On the basis of these hypotheses, we conclude that quantum gravity must be described by the tensor Λ²(5) = 3a + 3b + 4a. In particular, the change from the permutation representation to the monomial representation does not change the analysis of the gravitational force. But it does change the analysis of mass and energy, and incorporates the 3-generation structure of matter directly into the definition of (gravitational) mass. Hence it provides a modification of Newtonian gravity that is a consequence of the existence of three generations. This seems to imply a failure of the equivalence principle, which will be discussed further in Section 9. 8.1. A new quantum process? The standard model contains 6 degrees of freedom for the photon, plus 8 degrees of freedom for gluons, making a total of 14 for the massless force mediators. By allowing the neutrinos into the picture, I extend this to 15. In other words, the model predicts an extra quantum process that is mass-neutral, and that does not exist in the standard model. I have already made such a prediction elsewhere, namely the (hypothetical) process e⁻ + μ⁻ + τ⁻ + 3p ↔ ν + 5n. Certainly the total masses on the two sides are equal to within experimental uncertainty, as is the total charge. It is not clear whether a single neutrino is sufficient or whether perhaps three are required, but this makes no material difference to the argument, and does not change any of the conclusions. If this process takes place right-to-left, then the μ and τ will decay to electrons with the release of a vast amount of energy in the form of heat and neutrinos and antineutrinos (but no photons, so the process is invisible). In order to get the neutrons close enough together for this reaction to take place, they must be inside the nucleus of an atom.
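Reading the hypothetical process as e⁻ + μ⁻ + τ⁻ + 3p ↔ ν + 5n (the μ and τ are identified from the subsequent decay to electrons described here), the claimed mass balance can be checked directly. The MeV values below are standard PDG masses, my inputs rather than the source's:

```python
# Standard particle masses in MeV (my inputs; neutrino mass taken as ~0)
m_e, m_mu, m_tau = 0.51100, 105.658, 1776.86
m_p, m_n = 938.272, 939.565

left = m_e + m_mu + m_tau + 3 * m_p    # e- + mu- + tau- + 3 protons
right = 5 * m_n                         # 5 neutrons (+ an ~massless neutrino)

print(left, right, left - right)
# The two sides agree to about 0.02 MeV out of ~4698 MeV, i.e. a few parts
# per million, consistent with the claim of equality within uncertainty.
```

The charge also balances (−3 + 3 = 0 on the left, 0 on the right), and taking three neutrinos instead of one would balance lepton number as well, which may be why the text leaves that choice open.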
And in order for the μ and τ to get far enough away to avoid recombination, there must either be very large tidal forces on the atomic nucleus, or the atom must be highly ionised, or both. If this process is combined with an inverse process, then we obtain This removes the need for the neutrino, but now the mass on the right hand side is slightly greater than that on the left hand side, so the process needs an energy input to get going. But then it can take place spontaneously in a very hot, completely ionised beryllium atom, rotating very fast in a strong gravitational field, due to the presence of a strong magnetic field. Such conditions exist in the solar corona, so that this process provides a plausible mechanism for solar coronal heating. At the same time, it burns up two whole nucleons, and tries to create a nucleus of nothing but protons, which of course flies apart into hydrogen nuclei, or perhaps it can be combined with an inverse process to leave some helium behind. In the opposite direction, the process creates nucleons out of leptons, but needs a far greater energy input, and therefore much more extreme conditions, such as centres of stars, or the Big Bang. But it is a process of nucleosynthesis that is unknown to the standard model, and if it takes place in such extreme conditions then it might explain why the standard model predictions of metal abundances in the universe do not appear to be consistent with observations. In particular, the process could take place with the three electrons and three protons in a lithium atom, under conditions of extreme pressure and extreme heat, by first converting the electrons to the higher generations, and then converting the whole lot into neutrons. Thus the original 7 nucleons in lithium-7 have been converted into 9, and after some beta decay to create electrons and protons, one ends up with beryllium-9. In particular, this process might be involved in the formation of neutron stars.
Moreover, since it takes out such a huge amount of energy, it results in rapid cooling. 8.2. Reduction to general relativity. The proposal here for quantum gravity is to base it on the representation 5, rather than 4a as in general relativity. Effectively, then, we extend the field-strength tensor of general relativity to the larger tensor This suggests that the stress-energy tensor should similarly be extended, that is, by adding an extra copy of 5. Therefore we extend the 10 Einstein equations to 15. These equations come from the equivalence of representations The extra 5 equations appear to involve 5 fundamental gravitational masses, so that mass itself is a 5-vector in this model, rather than a scalar. Rather than taking only the neutron, proton and electron as fundamental, and equating the masses of neutron and proton more or less to 1 and the electron to 0, for a scalar mass, we seem to need to take these three masses independently, augmented by the other two generations of electron. The extra five equations then relate these five masses to properties of the spin 2 part of the gravitational action. Let us now consider the Riemann Curvature Tensor. As it stands, this tensor corresponds to But this tensor does not take account of the fermionic nature of the electron, proton and neutron. It may be worthwhile to replace the 'bosonic' monomial representation 3a + 3b by the corresponding 'fermionic' representation 6, thus: The effect of this proposed change is to replace two copies of 1 + 5, which might represent the particles ν + 5n, by 3a + 3b, which might represent e⁻ + μ⁻ + τ⁻ + 3p. Since this does not change the overall mass, it has no effect on Newtonian gravity, but it does have an effect on general relativity. Most notably, it is no longer possible to take out a scalar from the 21-dimensional tensor in order to leave a 20-dimensional curvature tensor.
This scalar now plays the role of breaking the 3-generation symmetry, and picking out the first generation electron as special. But the group algebra requires this symmetry to be unbroken. In other words, this modification of the Riemann Curvature Tensor is required in order to take account of the fact that there are three generations of electron. This proposed modification of the Riemann Curvature Tensor corresponds to the adjoint representation of the gauge group Sp, which has not been used at all so far. It therefore corresponds to a group of coordinate changes that appears to describe changes between different observers' experiences of matter. Moreover, it contains a copy of SU, suitable for describing our experiences of the internal structure of the proton and the neutron. We have now split up the entire faithless part of the group algebra into a field strength tensor Λ²(5) ≅ S²(4b), two equivalent representations S²(5) and Λ²(6) sharing a scalar, whose equivalence contains and extends the Einstein equations, and S²(6), which enhances the Riemann Curvature Tensor as a description of the 'shape of spacetime' as defined by the observer's assumptions about the matter within it, plus one extra dimension to define the local mass scale. 8.3. In search of the equations. If the analysis in Section 8.2 is correct, then it follows that the fundamental mass ratios must be related to properties of the rotation of the Earth. Clearly this cannot imply that there is any physical change in mass if the rotational and/or gravitational parameters change. What it must mean is that our historical choice of gauges for the theory has been determined by our particular place in the universe. The required properties of the Earth's motion can be roughly separated into components defined by its rotation on its own axis, its revolution around the Sun, and the revolution of the Moon around the Earth.
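The bookkeeping behind 'the entire faithless part of the group algebra' can be checked from the representation theory of the binary icosahedral group 2I: its irreducible representations have dimensions 1, 2, 2, 3, 3, 4, 4, 5, 6, whose squares sum to |2I| = 120, splitting 60 + 60 between the representations that factor through Alt(5) and the genuinely double-valued ones. This is a standard fact; the check below is my own.

```python
# Irreducible representation dimensions of the binary icosahedral group 2I.
faithless = [1, 3, 3, 4, 5]   # reps factoring through Alt(5): 1, 3a, 3b, 4a, 5
faithful  = [2, 2, 4, 6]      # double-valued (spinorial) reps: 2a, 2b, 4b, 6

dims = faithless + faithful
assert sum(d * d for d in dims) == 120            # |2I| = 120
print(sum(d * d for d in faithless))              # 60: the 'faithless' part
print(sum(d * d for d in faithful))               # 60: the spinorial part
```

As a consistency check on the splitting described in the text: 10 (field strength) + 15 + 15 − 1 (two 15-dimensional tensors sharing a scalar) + 21 (enhanced curvature tensor) = 60, the dimension of the faithless part.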
Of course, these components are not completely independent of each other, so it will be impossible to write down any exact equations at this stage. But we should be able to find some equations that express the five fundamental masses in terms of rotational and gravitational parameters, to an accuracy of better than 1%. If we neglect the gravitational effects of the Moon for the moment, there are just two obvious dimensionless parameters available, which we might as well take to be the number of days in a year, say 365.24, and the tilt of the Earth's axis, say 23.44 degrees. With these parameters, we want to calculate the mass ratios of three particles, presumably e, p and n. Experimentally, we have Hence both formulae are accurate to within about 0.01%, which is much more accurate than we could reasonably have hoped. Even the difference between the neutron and proton masses is postdicted to an accuracy of better than 1%. Of course, this does not mean that these mass ratios change if the tilt of Earth's axis changes, or the number of days in a year changes. That would be absurd, and is in any case contradicted by experiment. What it means is that the way that the theories have developed, and the way that various choices of coordinates have been made, reveal a dependence of sorts on the environment we live in. What it means is that the structure of the model that we have developed depends on what we feel is important to measure. What it means is that certain parameters that can be treated as constants in the standard model do not necessarily have to be treated as constants. A more general model in which they are treated as variables may then reveal things about the universe that are not revealed by the standard model. 8.4. Pions and kaons. At this stage we have two more gravitational parameters, but only one more mass ratio, so perhaps I have not made the right choice of fundamental mass ratios. Here I consider an alternative.
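For reference, the experimental mass ratios in question are m_p/m_e ≈ 1836.153 and m_n/m_e ≈ 1838.684 (CODATA values; my inserted numbers). Whatever the elided formulae are, a claimed accuracy of 0.01% means matching these ratios to within about 0.18:

```python
# CODATA-style masses in MeV (my input values)
m_e, m_p, m_n = 0.51099895, 938.27209, 939.56542

r_p = m_p / m_e   # ~1836.153
r_n = m_n / m_e   # ~1838.684
print(r_p, r_n, r_n - r_p)   # the neutron-proton ratio difference is ~2.531

# 0.01% of these ratios is about 0.18 -- the tolerance within which the
# text's (elided) formulae are claimed to reproduce them.
tol = 1e-4 * r_p
print(tol)
```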
The change of signature from SU to SU imposed by the structure of the group algebra has profound consequences for the interpretation of the strong force. Instead of the 8 massless gluons that are assumed to lie in the adjoint representation of SU, we now appear to have 8 gauge bosons which between them have four distinct masses. These must include the three pions, that mediate the strong force between nucleons, and presumably also the kaons. This gives us 4 masses, but seemingly only 7 particles. However, it is clear that the four boosts must represent the four charged particles, so that the rotations represent the neutral particles, forming a group U, and therefore splitting 1 + 3 into mass eigenstates. In other words, the model counts three neutral kaons, in the adjoint representation of SU R, rather than two, in two separate 2-dimensional representations of SU L, as in the standard model. Hence the group algebra model is incompatible with the standard model at this point. However, it is not incompatible with experiment, which very clearly detects three distinct neutral kaons with the same mass. Moreover, the model provides a physical explanation for kaon oscillations between the three eigenstates, in terms of the action of SO R, which represents the magnetic and/or gravitational field. That is, a kaon changes state as a result of an interaction with a magnetic field associated with quantum gravity. In practical terms, therefore, it is an interaction with an antineutrino. These antineutrinos combine in pairs (one in and one out) to create the gravitational field in 3b. In other words, the source of the antineutrinos that cause kaon oscillations is the gravitational field itself. Hence the 'mixing angle' between kaon states is simply the angle between the directions of the gravitational field at the locations of the two measurements. 
The original experiment that detected these oscillations was carried out over a horizontal distance of 57 feet, which corresponds to an angle of approximately 2.73 × 10⁻⁶ radians, assuming a mean radius of the Earth of around 6367 km. The lifetimes of the relevant two kaon eigenstates differ by a factor of approximately 570, which means that the model postdicts that approximately 0.16% of detected decays will be two pion decays. The experiment found 45 ± 9 in a sample of 22700, that is 0.20% ± 0.04%, so that my postdiction is consistent with the experimental results, to within about one standard deviation. Admittedly, it is only a postdiction at this stage, but it can be converted into a genuine prediction that the effect depends on the geometry of the experiment relative to the gravitational field. Hence I predict that different results will be obtained for different experimental geometries. This can be tested. Given that the standard model makes predictions about kaon decays that are not consistent with experiment, I suggest that my alternative proposals should be tested. Now it must also be taken into account that in order to obtain the subgroup SU of Spin we have to break the symmetry of the finite group, from the binary icosahedral group SL to the binary tetrahedral group SL. This breaks the symmetry of the representation 5 into 2 + 3, in such a way that the 3 is equivalent to the restrictions of both 3a and 3b. Hence the 3-dimensional representation is now playing three roles simultaneously, in the weak force, the strong force and gravity. Going back to the unbroken symmetry, we have both pion and kaon masses represented in the representation 5, and hence we can hope to find mass ratios corresponding to the number of days in a month, and the inclination of the Moon's orbit. For this purpose, a lunar month seems most appropriate, at 29.53 solar days, but we should not expect very great accuracy from any particular way of counting the number of days in a month.
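The arithmetic in this paragraph can be reproduced directly; the experiment referred to is the 1964 Christenson-Cronin-Fitch-Turlay measurement, and the unit conversions below are mine:

```python
# Angle subtended at the Earth's centre by the 57-foot beam line
feet = 57
distance_m = feet * 0.3048            # 17.37 m
R_earth_m = 6367e3                    # mean Earth radius used in the text
angle = distance_m / R_earth_m
print(angle)                          # ~2.73e-6 radians, as stated

# The quoted experimental two-pion decay fraction
fraction = 45 / 22700
sigma = 9 / 22700
print(fraction, sigma)                # ~0.198% +/- 0.040%

# Distance of the postdicted 0.16% from the measured value, in standard deviations
print(abs(fraction - 0.0016) / sigma)  # ~1, i.e. about one standard deviation
```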
The inclination of the Moon's orbit is around 5.14 degrees on average, but varies between 4.99 and 5.30 degrees on a cycle of length approximately 343 days, so again we should not expect great accuracy. where the appearance of cos 2 arises from the fact that 5 is a spin 2 representation. Of course, these conjectures are speculative, and may not be very convincing until a more complete model is built. But they are accurate to 0.05%, which would be quite remarkable if these were pure coincidences. 9. The equivalence principle 9.1. Gravitational and inertial mass. The strange formulae for certain mass ratios in the previous section can only hold if the strong equivalence principle fails. Otherwise, precise measurements of inertial masses over the past 50 years directly contradict the implied variability of gravitational mass ratios. Experimental evidence therefore requires the inertial mass of the particles under consideration to be fixed, and requires the gravitational mass of bulk matter, and therefore of atoms, to be fixed, within the limits of experimental uncertainty. The standard model permits the calculation of inertial masses of atoms, at least in some cases, in terms of the inertial mass of electrons, protons and neutrons, together with other parameters of the theory. These inertial masses can then be calibrated against gravitational masses. Direct measurement of gravitational mass is difficult to perform accurately, and depends on an accurate value for Newton's gravitational constant G. This constant is notorious for being difficult to measure, and the CODATA 2018 value has a relative standard uncertainty of 22 ppm, which may be over-optimistic in view of the inconsistencies between different experiments. It is certainly not possible to measure the gravitational masses of individual electrons and protons to anything like this accuracy, so that a direct test of the proposed formulae for gravitational mass ratios is impossible.
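The 22 ppm figure corresponds to the CODATA 2018 recommendation G = 6.67430(15) × 10⁻¹¹ m³ kg⁻¹ s⁻²; the relative uncertainty is easily verified:

```python
G = 6.67430e-11        # CODATA 2018 recommended value, m^3 kg^-1 s^-2
u_G = 0.00015e-11      # standard uncertainty
rel_ppm = u_G / G * 1e6
print(rel_ppm)         # ~22.5 ppm, the '22 ppm' quoted in the text
```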
But indirect tests may be possible, particularly if historical measurements or historical theories have in practice dealt with a unified practical 'mass' which is a mixture of a theoretical 'inertial mass' and 'gravitational mass'. In order to investigate this possibility, let us define inertial, gravitational and experimental mass ratios for the proton and electron in terms of three quantities: 365.256363, the number of solar days in a sidereal year; the Earth's axial tilt or obliquity; and a mixing angle that may depend on both the type of experiment and the model that is used to interpret it. The historical record makes it clear that since the consolidation of the standard model in the 1970s, the mixing angle has been 0. The question then is whether a different value of the mixing angle was used before the standard model. Classical Newtonian mass is defined by 4a alone, and calibrated by weighing objects in the Earth's gravitational field. It is therefore reasonable to call this gravitational mass. The discovery of electricity, and the development of the theory of electromagnetism in the 19th century, led to an electromagnetic field in 3a + 3b, in such a way that the combination of charge and mass appears in 3a ⊗ 3b = 4a + 5. At this stage a calibration of 5 against 4a is implicitly required in order to maintain a single concept of mass, rather than separating out two components. Let us therefore label the 5-component 'inertial mass', while being careful to understand that the term 'inertial mass' as used in other contexts may well be a mixture of the two components distinguished here. The classical (pre-QED) analysis therefore uses a mass value for each particle that is a compromise between a theoretical gravitational mass in 4a and a theoretical inertial mass in 5.
On the other hand, the QED analysis uses the Dirac spinor in 4b to produce a mass value in the square of the spinor. By projecting out the unwanted terms, this representation can be identified with the classical one, so that the two can be calibrated against each other. The full standard model introduces 2a + 2b also into the Dirac spinor, so that the gravitational mass can be recovered in 2a ⊗ 2b = 4a. This provides enough information to separate the two different types of mass in 4a and 5. The standard model can therefore work entirely with the 5-mass, defined by a further projection from QED, and now defined as inertial mass, or simply mass. This pushes the calibration of the (gravitational) 4a-mass against the (inertial) 5-mass into the electro-weak mixing angle that defines the relationship between the spinors of types 2a + 2b and 4b. From this point onwards, therefore, the standard model can, and does, ignore gravity completely. Nevertheless, the icosahedral model shows how it may be possible to recover a gravitational mass, distinct from inertial mass, from the unified electro-weak theory, in such a way that the weak interaction becomes one ingredient in a quantum theory of gravity.

9.4. Parameters of the Lorentz gauge. The discrete model offers two distinct real versions of the Dirac algebra, corresponding to a single complex version in the standard model. Therefore the 16 different real 2-spaces each have their own complex structure, defined by some mixing angle between the real and complex parts. Six of these effectively identify the two symmetry groups, that is Spin acting on 4b and Spin acting on 2a + 2b, while the other 10 are parameters that must appear somewhere else in the standard model. These ten parameters can be mapped to two scalars and two vectors under the action of SO on adjoint SL(2, H), so that the two vectors can be identified with the Cabibbo-Kobayashi-Maskawa (CKM) matrix and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix.
Each matrix holds four real parameters of 4a in a 3 × 3 matrix representing 3a ⊗ 3b. The two scalars probably represent the electroweak mixing angle and the fine structure constant, although the former can also be interpreted as a mass ratio, so might appear elsewhere, in which case we might replace it with the strong coupling constant. Of course, all these parameters depend on a particular choice of identification of SO with SO, which can only be constant on a subgroup SO, so that they all vary with the energy scale, as experiment confirms. In addition, the complex scalar in the Dirac algebra is used for the mass term, so that each of the other 15 dimensions of the algebra gets its own mass value from this mixing. While these naturally split into 3 + 3 + 4 + 5, the symmetry-breaking splits 4 as 1 + 3 and 5 as 1 + 1 + 3 complex dimensions, so that they can be identified with 3 + 3 + 3 + 3 masses for the elementary fermions, and 1 + 1 + 1 for the Z, W and Higgs bosons, to give the 15 fundamental masses of the standard model. Here we see again that the masses of the elementary particles might be a compromise between inertial and gravitational masses, and hence gauge-dependent. It is known experimentally that the Z/W mass ratio varies according to the energy. For quarks in general, and for neutrinos and the Higgs boson, the masses are so difficult to measure accurately that there is no real evidence either way. That leaves just three charged lepton masses, for which the experimental evidence is that they are genuinely constant. This is consistent with a choice of constant SO.

10. Speculations

10.1. The Koide formula. The empirical Koide formula relates the masses of the three generations of electron as follows:

(m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)² = 2/3.

Although there is no generally accepted reason why this formula should hold, it has been predictive of more accurate values of the mass, and it has not been experimentally falsified yet.
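As a numerical check (mine, not part of the original argument), the Koide quotient evaluated with present-day PDG charged-lepton masses comes out within about one part in 10⁵ of 2/3:

```python
import math

# PDG charged-lepton masses in MeV/c^2 (assumed reference values, not from the text).
m_e, m_mu, m_tau = 0.51099895, 105.6583745, 1776.86

# Koide quotient: (sum of masses) / (sum of square roots of masses)^2.
Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
```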
Writing x, y, z for the square roots of the masses, the formula can be re-arranged into the form

x² + y² + z² = 4(xy + xz + yz).

In the group algebra model, masses for all three generations lie in the representation 5. Moreover, the generations are labelled by the representation 3b. Since S²(3b) = 1 + 5, we might expect to find the square roots of the masses in 3b. If we therefore take coordinates x, y, z for 3b, with the permutation (x, y, z) acting as a generation symmetry for electrons, then we have coordinates x², y², z², xy, xz, yz for 1 + 5, in which x² + y² + z² spans the scalar. The element of order 3 fixes also the vector xy + xz + yz. Hence there is some multiple of xy + xz + yz that is equal to the sum of the masses of the three generations, that is x² + y² + z². Why this multiple should be 4, I do not know, but I suspect it comes somehow from the relationship between 5 and 1 + 4a. On the other hand, it may come from the relationship between the permutation representation 1 + 5 and either 3a + 3b or 6, both of which are monomial representations on 6 points. Now there is an experimental problem: while both the Koide formula and my formula (relating the three electron masses to the proton and neutron masses) are consistent with experiment, they are not consistent with each other (as pointed out to me by Marni Sheppeard in April 2015). They both offer predictions of the mass of the tau particle to at least 8 significant figures, of which only the first 4 agree. In terms of the model, the representation 5 contains not only the three generations of electron, but two more particles that I have identified as the proton and the neutron. By reducing the representation modulo 3 (see Section 3.1), we can separate the mass from the charge into 1 + 4a and obtain my formula. But to get the Koide formula, we have to split the representation into leptons and baryons, as 3 + 2.
There is no mathematical operation that achieves this, other than breaking the symmetry by restricting to a subgroup. This has the effect of ignoring the difference in mass between the proton and the neutron. Hence the Koide formula is only approximately correct. The fact that it is such an extraordinarily good approximation is entirely due to the fact that the proton and neutron masses are so nearly equal. If the above conjecture regarding the origin of this mass difference is accepted, then we could say that the (historical and philosophical, not physical) reason for its accuracy is that there are so many days in the year!

10.2. Variable gravity. The change that I have proposed making to the Riemann Curvature Tensor effectively mixes the three generations of electron, and therefore has the effect that the ratio of gravitational to inertial mass for the electron is not constant, but depends on the local values of the tensor. If we take the inertial mass to be fixed, as it is in the standard model, then this means that the gravitational mass of an electron can vary according to the local conditions. This obviously has a very small effect on the actual gravitational force on ordinary matter, since there is such a small range of conceivable values for the electron mass. Nevertheless, these effects may be large enough to be detected by the right experiments. Even if we consider the whole range from 0 up to the difference between the neutron and proton masses, the effect could never be bigger than 0.1%. The actual range that might be achievable in experiments may be considerably smaller than this. There are in fact at least four independent experimental anomalies that might in principle be explained as an effect of this kind. These are: inconsistent measurements of Newton's gravitational constant G; galaxy rotation curves inconsistent with Newtonian gravity; the flyby anomaly; and the Pioneer anomaly.
Many proposals have been made to explain these anomalies, and some are regarded as adequately explained already. But if the current proposal can explain them all simultaneously, then it may be considered more satisfactory than having separate ad hoc explanations for each one. Consider first the measurements of G. Here we might expect to find different values of G for different materials, based essentially on the proportion of the total mass that can be attributed to electrons. For typical heavy metals this proportion does not vary much from one element to another, but for lighter elements, including iron, it is significantly larger. Thus we might expect slightly different results in the two cases, since both lighter and heavier elements have been used in actual experiments. The extremal difference amounts to roughly 3.6 electrons per iron atom, which is approximately 40ppm. Now if we look at the whole range of potential gravitational masses, this translates to a difference anywhere from 0 to 100ppm. If both masses in the experiment are affected, this could double to 200ppm. This range is sufficiently large to cover the reported anomalies, and suggests that it is worth taking a more detailed look at the individual experiments, to see if there are indeed correlations of the kind I suggest between the materials used in the experiment and the values of G that are reported in the results. If so, one can then try to see if the magnitudes of the anomalies are consistent with an explanation of the kind proposed. It may indeed be possible to interpret such a difference (if it is confirmed experimentally) as a magnetic effect of gravity. That is, the proposed new quantum process can happen in the nucleus of a metal atom as the result of an interaction with a very low energy neutrino, which is absorbed and re-emitted without any change in energy, but with a minute change in the direction of the momentum. 
If this is a valid interpretation, then it may also allow us to interpret an atom undergoing β decay, with the emission of an anti-neutrino, as a magnetic monopole. Indeed, one might therefore regard the neutron itself as a magnetic monopole, and its decay as the evidence for it. Next consider galaxy rotation curves. Here the standard ΛCDM (Λ cold dark matter) model invokes 'dark matter' as a hypothetical constituent of galaxies, but no such matter has ever been detected in the laboratory. Alternative theories such as MOND (modified Newtonian dynamics) are empirical laws, without any physical explanation, but reportedly do a better job of describing what is actually observed. There are various different ways of looking at MOND-type models, many of which are equivalent to a failure of the equivalence principle in some form, either as a variable gravitational-to-inertial mass ratio, or a variable gravitational 'constant' G. My model can be interpreted in either of these ways, but is actually more subtle than either of them. Since in this model the electron gravitational mass (but not its inertial mass) depends on an angle between two different rotations, it is the rotation of the galaxy itself that is key to the understanding of the effects normally attributed to dark matter. Now it is possible to use these ideas to estimate the critical acceleration below which the MOND effects become noticeable. For this, we need to appreciate that our own rotation within the Milky Way affects our own understanding of the local gravitational fields that we measure. Since this rotation is effectively ignored in our modelling of gravity within the Solar System, our models of gravity cannot be more accurate than this. In other words, below a critical acceleration roughly equal to our own acceleration towards the centre of the Milky Way, both Newtonian gravity and general relativity break down. This is indeed what is observed.
If this is really true, it means that the entire basis of our theories of gravity is misconceived, and that mass is an effect of gravity, not the cause. It is rotation that is the cause of gravity, not the other way around. This provides a whole new paradigm for gravity, for which we seek new experimental evidence. Now the flyby anomaly arises when a spacecraft flies close to the Earth in order to pick up speed from the motion of the Earth around the Sun. In a number of instances, the increase in speed is observed to be greater than that predicted by the theory. But in some instances, no anomalous increase in speed was detected, and in one case the anomaly had the opposite sign. An empirical law describing the effect was obtained, which was dependent on the effective rotation of the spacecraft in the North-South direction. Now rotations are described by the representation 3a or 3b, so that the two rotations of the Earth and the spacecraft are perpendicular vectors in (say) 3a, and the gravitational effect of these two rotations lies in S²(3a) = 1 + 5. The empirical law involves a product of sines of I and O, the effective latitudes of the inbound and outbound trajectories; the fact that the formula is a product of sines is what puts it into S²(3a). Finally, let us consider the Pioneer anomaly. This was an effect that was detected in the two Pioneer spacecraft, once they had finished their manoeuvres in the outer Solar System and were left to coast on out of the Solar System altogether. An anomalous acceleration towards the Sun, of the same magnitude as the critical acceleration mentioned above, was detected. While it is nowadays generally accepted that the anomaly can be explained by greater experimental uncertainties and systematic biases than were originally built into the analysis, there is also the possibility that the effect was real, and that the experiment actually did detect the rotation of the Solar System around the centre of the galaxy.
No discussion of variable gravity can be considered complete without mention of dark energy or the cosmological constant. Recent experiments do indeed cast doubt on the existence of this concept, although it is an essential part of standard cosmology. In the group algebra model, two scalars in 1 + 5 are replaced by vectors in 3a + 3b. One of them represents dark matter, and has been conclusively dealt with in the preceding discussion. The other one represents dark energy, which likewise does not exist in the proposed model. It is replaced, again, by another rotation, presumably on an even larger scale.

10.3. Muons. The model under discussion here is relevant to the calculation of g − 2 for the muon. There are two competing calculations, one using ordinary quantum chromodynamics (QCD), the other using lattice QCD. The former is in tension with experiment, while the latter is consistent with experiment. Now the proposed model essentially contains lattice QCD, after breaking the symmetry down to SL(2, 3) so that the cubic lattice in 3-space is preserved. This has the effect that 3a and 3b become identical, so that a complex structure can be put onto 3a + 3b, and a symmetry group SU(3) imposed on it. However, the continuous group SU(3) is not a subgroup of the group algebra, and does not preserve the essential quantum structure of the elementary particles. Hence, using this (unphysical) symmetry group is liable to produce the wrong answer. Therefore my model is consistent with both lattice QCD and experiment, while continuous QCD disagrees with all three independent methodologies. As a result it would be reasonable to conclude that QCD is definitively falsified. The error can moreover be located to the imposition of an inconsistent complex structure onto 3a + 3b. The calculations then proceed in what should be 3a ⊗ 3b = 4a + 5, but after symmetry breaking is 3 ⊗ 3 = 1 + 3 + 2 + 3.
This is the same symmetry-breaking that pertains in lattice QCD, but in continuous QCD, the representation 3 + 2 + 3 forms an irreducible adjoint representation of SU(3). These symmetries, however, are inconsistent with the 4a + 5 structure, and are therefore unphysical. The magnitude of the error can be estimated from the fact that the distinction between 3a and 3b is also of crucial importance for gravity, as described in the previous section. Since the muons in the experiment are rotating in a 15-meter diameter ring, and since rotation causes gravity, as we have seen above, there is a mixing of gravity with QCD in this experiment. The effect is much the same as in the flyby anomaly, and in the CP-violation of neutral kaon decays, so that there is a factor of sin θ, where θ ≈ 2.36 × 10⁻⁶ radians. In other words, the ratio between the two competing values of muon g − 2 is approximately 0.99999764.

Conclusion

In this paper I have examined the hypothesis that there are close connections between the representation theory of the binary icosahedral group, on one hand, and various models that attempt to go beyond the standard model of particle physics, on the other. Many such connections have been found, but the question remains: is there a deeper meaning to these connections, or not? One or two general principles have emerged from this investigation, the first of which is that, as a general rule, symmetric tensors describe matter and structures, while anti-symmetric tensors describe forces. The second is a correspondence between permutation representations, in particular 1 + 4a and 1 + 5, and monomial representations, here 5 and 3a + 3b respectively. The monomial representation 5 introduces a triplet symmetry to the fundamental concept of mass, and hence provides a place to model the three generations. The monomial representation 3a + 3b introduces a doublet symmetry to the fundamental particles, extending their number from 6 to 12.
The forces then do not depend on whether we use the permutation representation or the monomial representation, since Λ²(1 + 4a) = 3a + 3b + 4a = Λ²(5). Matter in the two cases appears to be described by S²(4a) = 1 + 4a + 5 and S²(5) = 1 + 4a + 5 + 5. It then appears that there is not as much difference as might have been thought, between the 4a case, on which the theory of relativity can be built, and the 5 case, on which the standard model of particle physics can be built. A third principle is the reduction of representations modulo 2 (which again relates 1 + 5 to 3a + 3b), modulo 3 (which relates 1 + 4a to 5) and modulo 5 (which relates 1 + 3a and 1 + 3b to 4a). One of the main themes of this paper has been the similarities and differences between Λ²(1 + 5) and S²(5). For many purposes they seem to be interchangeable, but they are different representations and therefore have different physical interpretations. I have suggested the former for boson labels, and the latter for fermion labels. Supersymmetry can perhaps be thought of as an apparent (but mathematically inconsistent, and therefore not real) correspondence between the two. General relativity is described by the field strength tensor in Λ²(4a) and the stress-energy tensor in S²(4a), but appears to lack the force term in 4a, so that gravity is interpreted not as a force, but as curvature in spacetime (itself described by the 4a representation), via the Riemann Curvature Tensor in S²(Λ²(4a)). This model therefore suggests that it may be possible to restore gravity to the status of a force, by extending Λ²(4a) to Λ²(1 + 4a) = Λ²(5). Furthermore, by extending again to Λ²(1 + 5), we obtain the same force tensor as in particle physics. In other words, the incompatibility of general relativity with particle physics seems to have disappeared, at least at the level of the finite symmetries.
Of course, this is not yet a unified theory, and there is no guarantee that the Lie groups can be made to match up as closely as the finite group representations. But it does suggest a new place to try to build the foundations for such a theory. Moreover, even in its current very rudimentary form, the model makes predictions about kaon decays that are different from the standard model predictions, some of which are known to contradict experiment. Therefore the model offers a revised description of kaons that can be tested experimentally against the standard model description. Finally, I would like to point out that my model correctly postdicts six apparently fundamental dimensionless 'constants', all to an accuracy of 0.05% or better, by relating them to a gravitational gauge group. As far as quantum gravity and cosmology are concerned, the model suggests that a modification of general relativity is required in order to incorporate three generations of electrons into the description of the matter in the universe, and the effective gravitational force that arises as a consequence. I have shown that dark matter can be effectively modelled as the differences between the three generations, in such a way that it does not consist of particles, and can never be detected as particles. I have shown, moreover, the importance of taking into account rotations in the modelling of gravity, and therefore of mass, as well as the other fundamental forces. Such rotations appear to explain many things that cannot be explained in any other way.
import numpy as np

# Note: validate_matrix, _get_background_mat and _normalize_matrix are
# module-level helpers defined elsewhere in the library.
def _information_mat_to_probability_mat(info_df, background=None):
    """Convert an information matrix to a probability matrix."""
    info_df = validate_matrix(info_df, matrix_type='information')
    bg_df = _get_background_mat(info_df, background)
    # Rows with zero total information carry no signal; fall back to background.
    zero_indices = np.isclose(info_df.sum(axis=1), 0.0)
    info_df.loc[zero_indices, :] = bg_df.loc[zero_indices, :]
    prob_df = _normalize_matrix(info_df)
    prob_df = validate_matrix(prob_df, matrix_type='probability')
    return prob_df
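The helpers it calls are defined elsewhere in the library; the core transformation can be sketched self-containedly as follows (the helper behaviour — uniform background, row normalization — is an assumption, not taken from the source):

```python
import numpy as np

def info_to_prob(info_mat, background=None):
    """Sketch: renormalize each row of an information matrix to a probability
    row; rows summing to zero fall back to a background distribution
    (uniform by default)."""
    info_mat = np.asarray(info_mat, dtype=float)
    n_cols = info_mat.shape[1]
    bg = np.full(n_cols, 1.0 / n_cols) if background is None else np.asarray(background, dtype=float)
    out = info_mat.copy()
    zero_rows = np.isclose(out.sum(axis=1), 0.0)
    out[zero_rows] = bg                          # replace uninformative rows
    return out / out.sum(axis=1, keepdims=True)  # each row now sums to 1

# Example: one informative row, one all-zero row (columns A, C, G, T).
probs = info_to_prob([[2.0, 0.0, 2.0, 0.0],
                      [0.0, 0.0, 0.0, 0.0]])
```

The informative row becomes [0.5, 0, 0.5, 0], and the zero row falls back to the uniform background [0.25, 0.25, 0.25, 0.25].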
As cases of hunger-related deaths emerged in parts of the arid regions, the government reported on Monday that it had disbursed up to Sh1.4 billion for drought response in February, March and April 2019.
“There is no cause for alarm as the situation is not as bad as it was two years ago,” he said when he apprised the nation of the status of the drought that has left at least four people dead.
“The below-average short rains have slightly increased the food insecure population from 655,800 in August 2018 to the current 1,111,500, with the top 12 counties having a total of 865,300 food-insecure people,” the NDMA report states.
“The rest of the pastoral and marginal agricultural areas received less than 90 percent normal rains. Spatial distribution was poor across the country," the report says.
Farmers are drowning in maize, and have for months been begging the government to buy the stock. |
#include "PhysBody3D.h"
#include "glmath.h"
#include "Bullet/include/btBulletDynamicsCommon.h"
// =================================================
PhysBody3D::PhysBody3D(btRigidBody* body) : body(body)
{}
// ---------------------------------------------------------
PhysBody3D::~PhysBody3D()
{
delete body;
}
// ---------------------------------------------------------
void PhysBody3D::Push(float x, float y, float z)
{
body->applyCentralImpulse(btVector3(x, y, z));
}
// ---------------------------------------------------------
void PhysBody3D::GetTransform(float* matrix) const
{
if(body != NULL && matrix != NULL)
{
body->getWorldTransform().getOpenGLMatrix(matrix);
}
}
// ---------------------------------------------------------
void PhysBody3D::SetTransform(const float* matrix) const
{
if(body != NULL && matrix != NULL)
{
btTransform t;
t.setFromOpenGLMatrix(matrix);
body->setWorldTransform(t);
}
}
// ---------------------------------------------------------
void PhysBody3D::SetPos(float x, float y, float z)
{
btTransform t = body->getWorldTransform();
t.setOrigin(btVector3(x, y, z));
body->setWorldTransform(t);
}
void PhysBody3D::SetAsSensor(bool is_sensor)
{
if (this->is_sensor != is_sensor)
{
this->is_sensor = is_sensor;
if (is_sensor == true)
body->setCollisionFlags(body->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE);
else
body->setCollisionFlags(body->getCollisionFlags() & ~btCollisionObject::CF_NO_CONTACT_RESPONSE);
}
} |
package dataturks.security;
import bonsai.exceptions.AuthException;
import bonsai.sa.EventsLogger;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.RandomStringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class InternalLoginAuth {
private static final Logger LOG = LoggerFactory.getLogger(InternalLoginAuth.class);
public static final int UID_LENGTH = 28;
public static final int TOKEN_LENGTH = 64;
private static final char[] possibleCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789".toCharArray();
private static InternalLoginAuth instance = new InternalLoginAuth();
private Map<String, List<String>> tokenCache = new ConcurrentHashMap<>();
private InternalLoginAuth() {
}
private static InternalLoginAuth getInstance() {
return instance;
}
public static String generateUserId() {
String randomStr = RandomStringUtils.random( UID_LENGTH, 0, possibleCharacters.length-1, false, false, possibleCharacters, new SecureRandom() );
return randomStr;
}
public static String generateRandomUserToken() {
String randomStr = RandomStringUtils.random( TOKEN_LENGTH, 0, possibleCharacters.length-1, false, false, possibleCharacters, new SecureRandom() );
return randomStr;
}
// Returns the hex-encoded (unsalted) SHA-256 digest of the password.
public static String encryptedPassword(String password) {
return DigestUtils.sha256Hex(password);
}
public static void validateDataturksTokenElseThrowException(String userId, String token) {
if (!isDataturksUserValid(userId, token)) {
LOG.error("Dataturks validation failed for user " + userId);
EventsLogger.logErrorEvent("d_TokenValidationFailed");
throw new AuthException();
}
}
public static void addToken(String userId, String token) {
// computeIfAbsent makes the check-then-create step atomic, and a
// CopyOnWriteArrayList keeps reads safe while other threads append.
getInstance().tokenCache
.computeIfAbsent(userId, k -> new java.util.concurrent.CopyOnWriteArrayList<>())
.add(token);
}
private static boolean isDataturksUserValid(String userId, String token) {
if (getInstance().tokenCache.containsKey(userId)) {
return getInstance().tokenCache.get(userId).indexOf(token) > -1 ;
}
return false;
}
}
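For reference, `DigestUtils.sha256Hex` above returns the lowercase hex encoding of the SHA-256 digest; the same transformation in Python, using only the standard library:

```python
import hashlib

def sha256_hex(password: str) -> str:
    """Lowercase hex SHA-256 digest, matching commons-codec's sha256Hex."""
    return hashlib.sha256(password.encode('utf-8')).hexdigest()

# Well-known SHA-256 test vector for the input "abc".
digest = sha256_hex('abc')
```

Worth noting as a design point: a single unsalted hash is generally considered too weak for password storage; a salted, deliberately slow KDF is the usual recommendation.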
|
A Study of the Antecedents of Trust in Social Media Posts Access to information and information sharing are important motivations for people to join Social Networking Sites (SNS). Social media posts are one of the primary information sources shared on such sites. In this study, we propose factors that represent the antecedents of trust in SNS posts. We expect to find that familiarity with the sender of the post, disposition to trust, source credibility, and endorsement are the antecedents to creating trust. In addition, we suggest that involvement in the topic moderates the effect of source credibility, disposition to trust, and endorsement on trust in the post.
<reponame>optimizely/docker-readthedocs
from readthedocs.settings.docker import *
from readthedocs.settings.sqlite import *
import os
environ = os.environ
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': environ['DB_ENV_DB_NAME'],
'USER': environ['DB_ENV_DB_USER'],
'PASSWORD': environ['DB_ENV_DB_PASS'],
'HOST': 'db',
'PORT': 5432,
}
}
SITE_ROOT = '/app'
ES_HOSTS = ['elasticsearch:9200']
REDIS = {
'host': 'redis',
'port': 6379,
'db': 0,
}
BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
DEBUG = True
CELERY_ALWAYS_EAGER = False
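The settings above read required values straight from `os.environ`, so a missing variable surfaces as a bare `KeyError`. A small helper of this sort (illustrative only, not part of the readthedocs settings; the variable name below is made up) fails with a clearer message:

```python
import os

def require_env(name):
    """Fetch a required environment variable, failing with a clear message."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError("required environment variable %s is not set" % name)
    return value

# Illustrative variable, set here just for the example.
os.environ['EXAMPLE_DB_NAME'] = 'docs'
db_name = require_env('EXAMPLE_DB_NAME')
```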
|
TURLOCK, Calif. — You know the kid who has almost broken the Internet in the past couple of days with a viral video of his spot-on impression of Michael Jackson dancing? Well now he even has the attention of the Jackson family.
Brett Nichols, 17, a junior at Pitman High School in California, became an Internet sensation literally overnight (the video was only posted to YouTube yesterday and already has over 6,000,000 views).
TMZ is now reporting that the Jackson estate offered an invitation for Brett and his family (mom, dad and sister) to all attend a performance of Michael Jackson ONE in Las Vegas.
The Jackson estate contacted the principal of Pitman High School, who in turn told Brett’s family. As one would expect, TMZ reports that they were extremely excited. |
OP VIII 2 Green space exposure is associated with slower cognitive decline in older adults: a 10-year follow-up of the Whitehall II cohort Background/aim Cognitive functioning is one of the most important indicators of healthy ageing. Evidence on beneficial associations of green spaces with cognitive function at older age is scarce and limited to cross-sectional studies. This study aimed to investigate the association between long-term green space exposure and cognitive decline. Methods This longitudinal study was based on three follow-ups (10 years) of 6506 participants (45-68 years old) from the Whitehall II cohort, UK. Residential surrounding greenness was obtained across buffers of 500 and 1000 metres around the residential address at each follow-up, using the satellite-derived Normalised Difference Vegetation Index (NDVI). A battery of four cognitive tests was applied at each follow-up to characterise reasoning, short-term memory, and verbal fluency. The cognitive scores were standardised and summarised in a global cognition z-score. Linear mixed effects models that included an interaction between age and greenness were used to estimate the impact of greenness exposure on trajectories of cognitive decline. Results An interquartile range increase in NDVI was associated with a difference in the global cognition z-score of 0.020 (95% confidence interval (CI): 0.003 to 0.037, p=0.02) over 10 years. For study participants aged 55.7 years, this difference was equivalent to a 4.6% slower decline over 10 years. Similar positive associations were also observed for reasoning (0.022, 95% CI: 0.007 to 0.038) and verbal fluency (0.021, 95% CI: 0.002 to 0.040), but not for short-term memory (−0.003, 95% CI: −0.029 to 0.022). We observed some suggestions of stronger associations among women and participants with secondary school education. Conclusion Higher residential surrounding greenness was associated with slower cognitive decline.
Further research is needed to confirm our findings and provide information on the specific characteristics of green spaces that can maximise healthy cognitive ageing. |
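The global cognition z-score described in the abstract is, in essence, each test standardized and then combined; a minimal sketch of that construction (the simple averaging rule is my assumption — the abstract does not spell out the weighting):

```python
import numpy as np

def global_cognition_z(scores):
    """scores: (n_participants, n_tests) raw cognitive test scores.
    Standardize each test to mean 0 / SD 1, then average across tests."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    return z.mean(axis=1)

# Three hypothetical participants, two hypothetical tests.
g = global_cognition_z([[10.0, 3.0], [20.0, 5.0], [30.0, 7.0]])
```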
What do First Nations really think about Trans Mountain?
Ask Greenpeace, and they’ll tell you First Nations are eco-warriors bravely protecting the ocean from rapacious pipeline-crazed plutocrats. Ask the Fraser Institute, and they’ll say First Nations are enthusiastic, hard-hatted oilmen who are tired of the “environmentalist propaganda” saying otherwise.
The reality is somewhat more complex. The 1,147-km Trans Mountain pipeline expansion would affect more than 100 First Nations, each with their own unique economy, motivations and feelings about bitumen.
Below, some context for the current state of affairs between oil pipelines and Western Canada’s various First Peoples.
This is one of the more surprising developments of the last few weeks. Allan Adam, chief of Alberta’s Athabasca Chipewyan First Nation, has spent years as the world’s most visible opponent of oil sands development. From Neil Young to Leonardo DiCaprio to Jane Fonda, if a celebrity is in Fort McMurray to badmouth the oil sands, chances are they came at the invitation of Chief Adam. Then, last week, Adam expressed his support for any pipeline that could be built with an Indigenous ownership stake. “Let’s move on and let’s start building a pipeline and start moving the oil that’s here already,” he told CBC.
Joe Dion is the head of the Frog Lake Energy Resources Corporation, an oil and gas exploration firm wholly owned by the people of Alberta’s Frog Lake First Nation. He’s very pro-pipeline and is actively working to get more oil infrastructure in First Nations portfolios. But in a 2016 interview with the BBC, he expressed his support for the ongoing anti-pipeline protests at the United States’ Standing Rock Indian Reservation, saying he understood Sioux concerns that the project could threaten the Missouri River. “Water is life. Without water we can’t live. I stand with them,” said Dion. Anyway, the preceding two points should make it pretty clear by now that there is no universal “Indigenous opinion” on oil and gas projects.
Over the past six years, Kinder Morgan has approached 133 First Nations and Indigenous groups in both B.C. and Alberta. The company would not have been seeking consent or partnerships with all 133, some of which are well beyond the pipeline route and the route of all future tankers. Nevertheless, only 43 have inked mutual benefit agreements with the company, 33 of which are in B.C. As a rule, Trans Mountain does not reveal the names of the First Nations with whom they’ve signed deals. However, it’s possible to get a sense of who the 33 might be by looking at a list of 26 B.C. Indigenous groups who sent letters to the National Energy Board supporting the project. Of that list, the majority are quite small (usually with 100 to 200 members) and they represent a roughly 50-50 mix of inland and coastal territory. Among the most significant backers is the Kamloops-area Tk’emlúps te Secwépemc. Notably, however, Kinder Morgan was not able to get the backing of many of B.C.’s major Indigenous power players, particularly the Lower Mainland nations of Squamish and Tsleil-Waututh, who are now actively leading campaigns against the project.
The specific language used by Trans Mountain is that where the project will cross First Nations reserve lands, “we have received their expressed consent.” Of course, this only covers reserve lands, not traditional territory that might form part of a future treaty (with large swaths of B.C. not covered by treaty, most nations do not have a final agreement with the Crown). However, the company has claimed that they’ve obtained support from 80 per cent of First Nations “within proximity” to the pipeline right-of-way. The company’s mutual benefit agreements are confidential, but the details can be revealed by a First Nation if they wish. This was the case with the Whispering Pines First Nation, a band with about 100 members near Clinton, B.C. In 2015, then-chief Mike Lebourdais told local media that their agreement was worth between $10 million and $20 million over 20 years.
Alberta and B.C. are currently toe-to-toe in one of the messiest regional standoffs in the history of Confederation. Miniature versions of the Pipeline War have been playing out among First Nations for years. In 2016, Haida clans stripped three hereditary chiefs of their titles for allegedly having approached Northern Gateway as rogue representatives of the community. Pipelines were the central issue at the 2016 annual gathering of the Assembly of First Nations in Gatineau, Que. In front of a room packed with some of the most anti-pipeline political leaders in the country, Jim Boucher, chief of the Northern Alberta Fort McKay First Nation, said “we’re pro-oilsands; if it weren’t for the oil my people would be in poverty right now.” In B.C.’s tiny Peters First Nation, a potential windfall of pipeline money opened up decades’ worth of familial divisions. Controversy over whether or not to back the pipeline can also be seen in several close community votes on whether to accept an agreement with Kinder Morgan. The nine bands within the Chilliwack-area Ts’elxwéyeqw Tribe rejected their agreement by 55.5 per cent to 44.5 per cent, with just 301 members of the bands turning out to vote. The Lower Nicola Indian Band approved an agreement, but the ‘yes’ side only came in at 59 per cent of the vote.
According to numbers from Natural Resources Canada, Kinder Morgan has committed $300 million to Indigenous benefit agreements. One thing the company isn’t offering, however, is equity. This isn’t for lack of trying, according to Kinder Morgan Canada president Ian Anderson. “I worked for a long time quietly to try and assemble support for (Indigenous equity ownership) on this project and it didn’t come to fruition,” Anderson said last year in Calgary. But the now-cancelled Northern Gateway pipeline did devise a plan for Indigenous ownership by offering 10-per-cent equity stakes to First Nations along the proposed route — 60 per cent of whom took them up on the offer. Joint First Nations ownership is already a major factor in the forestry and mining sector, and is likely going to be critical in future oil and gas projects. “We’re asking, ‘What’s in it for us?’ We’re not going to accept big companies extracting the wealth and leaving us with a big environmental mess. We want real equity in these projects,” Stephen Buffalo, president and chief executive of the Indian Resource Council, told CBC last year.
If you were a Brit reading the Guardian this week, you could have heard about how Prime Minister Justin Trudeau is “prolonging an old colonial pillage” by backing the Trans Mountain pipeline. Squamish Chief Ian Campbell said last year that the whole project is nothing but a “colonial tactic” to loot resources. The c-word has also been fired at environmental activists. Ken Brown, a former chief of B.C.’s Klahoose First Nation, has accused environmentalists of “eco-colonialism” in their attempts to convince First Nations to shut down resource development. Ernie Crey is chief of the Cheam First Nation, which has signed a mutual benefit agreement with Kinder Morgan. This week he accused environmentalists of “redwashing” their agenda. “We have a vigorous environmental movement in B.C. and they have learned that they can use aboriginal communities to advance their agenda,” he said.
While B.C. First Nations have been lukewarm on oil pipelines coming from Alberta, on LNG there is palpable Indigenous disappointment that there isn’t enough development. “First Nations leaders are trying to deal with their constituents’ frustration because of the delays or cancellation of these projects,” according to a report by the B.C. government and the First Nations LNG Alliance. There is some Indigenous opposition to LNG, but one weakness of oil pipelines is that most of the economic benefits will accrue to Alberta, where the oil is produced. B.C. can only get so much from being a trans-shipment point, and most jobs will be gone when construction is complete. LNG, by contrast, is an entirely made-in-B.C. enterprise, meaning long-term jobs and revenues from the production sites themselves. The Lax Kw’alaams Band, for instance, stood to gain $2 billion from the Petronas-led Pacific NorthWest LNG project. Huu-ay-aht First Nations on Vancouver Island are looking at $250 million for a co-management deal for the proposed Steelhead LNG project. For communities worried about marine safety, LNG also performs better in a spill than diluted bitumen. If an LNG tanker ruptures, any spilled product would simply lose the “liquid” in its name and turn back into gas. That’s still bad for the environment, but it’s not the kind of bad where dead seals wash up on the beach for a few weeks.
First Nations do not have a veto power over resource projects, even if a project crosses their traditional territory. The Constitution guarantees that First Nations be consulted and accommodated — and projects have indeed been cancelled when a company was found to be phoning in that consultation. But if a company has done all their consulting due diligence, they can go forward on a project even if they don’t have universal First Nations consent. A sense of “they’re going to do it anyway” is why some First Nations were motivated to ink deals with Kinder Morgan despite not being tremendously excited about the pipeline. “We came to the determination, as a group, that (the project) was going to go ahead anyway … if we opposed it, we would have no way of addressing spills, because we would be disqualified from funding from Trans Mountain,” Robert Joseph, chief councillor of Vancouver Island’s Ditidaht First Nation, told the Times Colonist in 2016.
Consider the case of Martin Louie. When Enbridge’s Northern Gateway pipeline was still on the table, Louie was one of the project’s most visible opponents. A hereditary chief of the Nadleh Whut’en First Nation in north-central British Columbia, he sent complaints to the United Nations, petitioned Parliament and announced that Enbridge was banned from the lands of the Yinka Dene Alliance, of which the Nadleh Whut’en was a part. As Louie told the Financial Post earlier this year, however, he wasn’t anti-pipeline — he was mainly opposed to Enbridge’s “ridiculous” offer of only $70,000 per year to his community. The Vancouver-area Musqueam First Nation also isn’t a signatory with Kinder Morgan. Nevertheless, on Wednesday the Musqueam government issued a statement expressing support with those who had. “Musqueam maintains the right to speak on behalf of our territory and respects the views of other First Nations who are impacted by this proposal,” it read.
It’s called the Eagle Spirit Energy Pipeline, it’s a $16-billion First Nations-led oil and gas pipeline from Alberta to the coast, and the project is being spearheaded by Calvin Helin, a Tsimshian businessman and activist for Indigenous self-reliance. If Trans Mountain joins Energy East and Northern Gateway in the graveyard of never-built Canadian pipelines, expect Eagle Spirit to start getting more press. Of course, even Indigenous-championed pipelines can run into roadblocks. In December, an ambitious plan to build a natural gas pipeline through the Northwest Territories officially fizzled out, despite being 33 per cent owned by local Indigenous peoples. The pipeline’s death knell, ultimately, was falling prices for natural gas. |
Making electronic prescribing alerts more effective: scenario-based experimental study in junior doctors OBJECTIVE Expert authorities recommend clinical decision support systems to reduce prescribing error rates, yet large numbers of insignificant on-screen alerts presented in modal dialog boxes persistently interrupt clinicians, limiting the effectiveness of these systems. This study compared the impact of modal and non-modal electronic (e-) prescribing alerts on prescribing error rates, to help inform the design of clinical decision support systems. DESIGN A randomized study of 24 junior doctors each performing 30 simulated prescribing tasks in random order with a prototype e-prescribing system. Using a within-participant design, doctors were randomized to be shown one of three types of e-prescribing alert (modal, non-modal, no alert) during each prescribing task. MEASUREMENTS The main outcome measure was prescribing error rate. Structured interviews were performed to elicit participants' preferences for the prescribing alerts and their views on clinical decision support systems. RESULTS Participants exposed to modal alerts were 11.6 times less likely to make a prescribing error than those not shown an alert (OR 11.56, 95% CI 6.00 to 22.26). Those shown a non-modal alert were 3.2 times less likely to make a prescribing error (OR 3.18, 95% CI 1.91 to 5.30) than those not shown an alert. The error rate with non-modal alerts was 3.6 times higher than with modal alerts (95% CI 1.88 to 7.04). CONCLUSIONS Both kinds of e-prescribing alerts significantly reduced prescribing error rates, but modal alerts were over three times more effective than non-modal alerts. This study provides new evidence about the relative effects of modal and non-modal alerts on prescribing outcomes. |
Existence of ATP-evoked ATP release system in smooth muscles. Effects of stable ATP analogs such as alpha,beta-methylene ATP (alpha,beta-mATP) and beta,gamma-methylene ATP (beta,gamma-mATP) on ATP release and contractile response were evaluated in the vas deferens and ileal longitudinal muscles of guinea pig. In these smooth muscles, administration of alpha,beta-mATP (10, 30 or 100 microM) produced an ATP release accompanied by a transient contraction, but alpha,beta-methylene ADP (30 or 100 microM) and adenosine (30 microM) failed to elicit either the ATP release or the contraction. However, the peak responses of ATP release and contraction to alpha,beta-mATP (100 microM) in the vas deferens appeared around 2 min and 2.62 sec, respectively, after the injection of the drug. Beta,gamma-mATP (10 or 100 microM) caused an ATP release from the vas deferens. The ATP release as well as the contraction evoked by alpha,beta-mATP or beta,gamma-mATP were effectively inhibited by 300 microM suramin, a P2 purinoceptor antagonist. By contrast, ATP release and contractile responses to norepinephrine in the vas deferens and to bethanechol in the ileum were virtually unaffected by this antagonist. Veratridine and ouabain (30 or 100 microM) caused marked acetylcholine release from the ileum and norepinephrine release from the vas deferens, respectively. However, alpha,beta-mATP, even at a high concentration of 100 microM, did not elicit any release of acetylcholine or norepinephrine. These findings suggest that alpha,beta-mATP, and probably beta,gamma-mATP, evoke ATP release mainly from smooth muscular rather than neuronal sites by activating suramin-sensitive P2x receptors, implying that an "ATP-evoked ATP release system" exists. |
def update_weight(self, weight):
    """Replace the stored weight with the new value."""
    self.weight = weight
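The setter above appears without its surrounding class; a minimal hypothetical class (the `Item` name and its constructor are assumptions, not from the source) shows how it might be used:

```python
class Item:
    """Hypothetical container for a weighted item (illustrative only)."""

    def __init__(self, weight=0.0):
        self.weight = weight

    def update_weight(self, weight):
        # Replace the stored weight with the new value.
        self.weight = weight


box = Item(weight=1.5)
box.update_weight(2.75)
print(box.weight)  # → 2.75
```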
LOS ANGELES — Viewing parties for CNN’s telecast of the first Democratic presidential debate of the 2016 election cycle are planned today in downtown Los Angeles, the mid-Wilshire district, the Fairfax district and Pasadena.
The watch parties will be held at:
• Bugatta Supper Club, 7174 Melrose Ave., Los Angeles
• Busby’s East, 5364 Wilshire Blvd., Los Angeles
• Gamz Sports & Karaoke Bar, 500 Spring St., downtown Los Angeles
• Marash Armenian Center, 463 Martelo Ave., Pasadena
• Regent Theater, 448 S. Main St., downtown Los Angeles
The five-candidate debate at the Wynn Las Vegas is set to start at 5:30 p.m. and may run up to 2 1/2 hours, according to CNN.
• PHOTOS: Democrats hold first presidential debate in Las Vegas
The debate will be moderated by “Anderson Cooper 360” anchor Anderson Cooper. CNN chief political correspondent Dana Bash and Juan Carlos Lopez of CNN en Espanol will join Cooper in questioning the candidates.
Viewers may submit questions via Facebook or Instagram. “CNN Tonight” anchor Don Lemon will ask questions from viewers.
The debate will be shown on CNN, CNN International and CNN en Espanol.
CNN will also offer a live stream of the debate on CNN.com’s home page and across mobile platforms. All users will be able to watch CNN live online and on their mobile devices without logging in.
“I don’t think this is a debate where you’ll have candidates attack each other,” Cooper said in an interview on CNN’s “Reliable Sources” on Sunday.
“(Vermont Sen.) Bernie Sanders has been very clear. He’s not going to go after (former Secretary of State) Hillary Clinton by name. He’s not going to criticize her. And I see no reason Hillary Clinton would do that with any of the candidates.”
The other debaters will be former Rhode Island Gov. Lincoln Chafee, former Maryland Gov. Martin O’Malley and former Virginia Sen. Jim Webb. Candidates were required to meet a 1 percent threshold to be invited.
“It’s important to give fair questions to everybody across the board,” Cooper said.
“I think it’s just as interesting to kind of learn about some of these candidates who the American public doesn’t really know much about as it is to hear from some of the candidates you do.”
The debate “is an opportunity for Governor O’Malley to introduce himself and make his case,” his press secretary Haley Morris told City News Service. “No other candidate has put forward a more bold or progressive policy vision than Governor O’Malley and no other candidate has the track record of 15 years of executive leadership getting progressive results.”
There was no immediate response to emails sent to the other four campaigns seeking comment. |
package com.syd.mystudydemo.training_eventbus;
import android.os.Bundle;
import android.widget.TextView;
import com.syd.mystudydemo.R;
import com.syd.mystudydemo.activity.base.BaseActivity;
import org.greenrobot.eventbus.EventBus;
import org.greenrobot.eventbus.Subscribe;
import org.greenrobot.eventbus.ThreadMode;
import butterknife.BindView;
import butterknife.ButterKnife;
/**
* Created by sydMobile on 2018/4/3.
*/
public class ActivityReceiver extends BaseActivity {

    @BindView(R.id.tv_content)
    TextView tvContent;
    @BindView(R.id.tv_publish)
    TextView tvPublish;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_eventbus_main);
        ButterKnife.bind(this);
        init();
    }

    @Override
    protected void initDef() {
        super.initDef();
        // Register this Activity as an EventBus subscriber.
        EventBus.getDefault().register(this);
    }

    @Subscribe(threadMode = ThreadMode.MAIN)
    public void onEvent11(EntityEventObjectDemo eventObjectDemo) {
        tvContent.setText(eventObjectDemo.getName() + " " + eventObjectDemo.getAdvantage());
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // Unregister to match the register() call in initDef() and avoid leaking this Activity.
        EventBus.getDefault().unregister(this);
    }
}
|
# chapter-3-ClassicalML/Classical ML.py
# Databricks notebook source
# MAGIC %md
# MAGIC # Azure Databricks Healthcare Demo
# MAGIC ## Chapter 3: Classical ML & Patient Length of Stay
# COMMAND ----------
# MAGIC %md
# MAGIC 
# COMMAND ----------
# MAGIC %md
# MAGIC ## Introduction
# MAGIC
# MAGIC With SQL Analytics, we were able to identify several contributing factors to help us better forecast patient length of stay. With an improved understanding of length of stay, Contoso Hospital can optimize bed usage and maximize treatment availability.
# MAGIC
# MAGIC Let's see how classical machine learning techniques can help us use our new data and insight to predict future bed availability.
# COMMAND ----------
# MAGIC %md
# MAGIC ## Part A: Single model training
# MAGIC
# MAGIC We'll use PySpark ML to start with and manually train a model.
# MAGIC
# MAGIC We'll also use Databricks' MLflow support to manage this end-to-end flow. It tracks our training experiments, manages models, and serves models as REST endpoints.
# COMMAND ----------
# DBTITLE 1,Fetch combined data from Delta Lake
train_df = spark.sql("SELECT * FROM lengthofstay")
display(train_df)
# COMMAND ----------
# DBTITLE 1,Encode data for training
import pyspark
from pyspark.ml.feature import StringIndexer, VectorAssembler, OneHotEncoder, VectorIndexer
categoricalColumns = ["gender","rcount","facid"]
stages = [] # stages in our Pipeline
for categoricalCol in categoricalColumns:
    # Category Indexing with StringIndexer
    stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol + "Index")
    # Use OneHotEncoder to convert categorical variables into binary SparseVectors
    encoder = OneHotEncoder(inputCols=[stringIndexer.getOutputCol()], outputCols=[categoricalCol + "classVec"])
    # Add stages. These are not run here, but will run all at once later on.
    stages += [stringIndexer, encoder]
# Transform all features into a vector using VectorAssembler
numericCols = ['eid', 'dialysisrenalendstage', 'asthma', 'irondef', 'pneum', 'substancedependence', 'psychologicaldisordermajor', 'depress', 'psychother', 'fibrosisandother', 'malnutrition', 'hemo', 'hematocrit', 'neutrophils', 'sodium', 'glucose', 'bloodureanitro', 'creatinine', 'bmi', 'pulse', 'respiration', 'secondarydiagnosisnonicd9']
assemblerInputs = [c + "classVec" for c in categoricalColumns] + numericCols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="rawFeatures")
stages += [assembler]
# vectorIndexer identifies categorical features and indexes them, and creates a new column "features".
vectorIndexer = VectorIndexer(inputCol="rawFeatures", outputCol="features", maxCategories=6)
stages += [vectorIndexer]
display(train_df)
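Under the hood, StringIndexer maps each category to an integer and OneHotEncoder turns that index into a binary indicator vector. A minimal pure-Python sketch of the indicator idea (illustrative only, no Spark required; note that Spark's OneHotEncoder drops the last category by default, which this sketch does not):

```python
def one_hot(value, categories):
    """Return a binary indicator list for `value` over the known `categories`."""
    return [1 if value == c else 0 for c in categories]

genders = ["F", "M"]
print(one_hot("M", genders))  # → [0, 1]
print(one_hot("F", genders))  # → [1, 0]
```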
# COMMAND ----------
# DBTITLE 1,Set MLflow tracking URI to local
import mlflow
mlflow.set_tracking_uri("databricks")
# COMMAND ----------
# DBTITLE 1,Train model with PySpark & MLflow
from pyspark.ml.regression import GBTRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml import Pipeline
import numpy as np
import mlflow
import mlflow.spark
(trainingData, testData) = train_df.randomSplit([0.7, 0.3], seed=100)
# The next step is to define the model training stage of the pipeline.
# The following command defines a GBTRegressor model.
gbt = GBTRegressor(labelCol="lengthofstay")
# Define an evaluation metric. The CrossValidator compares the true labels with predicted values for each combination of parameters, and calculates this value to determine the best model.
evaluator = RegressionEvaluator(metricName="rmse", labelCol=gbt.getLabelCol(), predictionCol=gbt.getPredictionCol())
singleModelstages = stages + [gbt]
singleModelpipeline = Pipeline().setStages(singleModelstages)
with mlflow.start_run(run_name='los-experiment-spark') as run:
    modelPipeline = singleModelpipeline.fit(trainingData)
    # The evaluator is configured for RMSE, so log the metric under that name.
    mlflow.log_metric('rmse', evaluator.evaluate(modelPipeline.transform(testData)))
    # Log and register the model.
    mlflow.spark.log_model(modelPipeline, artifact_path="spark-model", registered_model_name="LengthOfStaySparkModel")
# COMMAND ----------
# DBTITLE 1,Evaluate model
predictions = modelPipeline.transform(testData)
rmse = evaluator.evaluate(predictions)
print("RMSE on our test set: %g" % rmse)
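RMSE, the metric the evaluator reports, is the square root of the mean of the squared residuals. A pure-Python sketch of the computation:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length sequences."""
    residuals = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(residuals) / len(residuals))

# Residuals are 1, 0, and -2, so RMSE = sqrt((1 + 0 + 4) / 3) ≈ 1.291
print(rmse([3, 5, 2], [2, 5, 4]))
```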
# COMMAND ----------
# DBTITLE 1,Wait for model to be registered
import time
from mlflow.tracking.client import MlflowClient
from mlflow.entities.model_registry.model_version_status import ModelVersionStatus
# Wait until the model is ready
def wait_until_ready(model_name, model_version):
    client = MlflowClient()
    for _ in range(1000):
        model_version_details = client.get_model_version(
            name=model_name,
            version=model_version,
        )
        status = ModelVersionStatus.from_string(model_version_details.status)
        if status == ModelVersionStatus.READY:
            print("Model status: %s" % ModelVersionStatus.to_string(status))
            break
        time.sleep(1)
client = MlflowClient()
model_version_infos = client.search_model_versions("name = '%s'" % "LengthOfStaySparkModel")
latest_model_version = max([model_version_info.version for model_version_info in model_version_infos], key=int)
wait_until_ready("LengthOfStaySparkModel", latest_model_version)
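`wait_until_ready` above is a bounded polling loop: check a condition, break when it holds, otherwise sleep and retry. The same pattern can be written generically (a sketch, not MLflow-specific; all names are assumed):

```python
import time

def poll_until(predicate, attempts=1000, interval=0.0):
    """Call `predicate` until it returns True or `attempts` are exhausted."""
    for _ in range(attempts):
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example: a condition that becomes true on its third check.
state = {"checks": 0}
def ready():
    state["checks"] += 1
    return state["checks"] >= 3

print(poll_until(ready))  # → True
```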
# COMMAND ----------
# DBTITLE 1,Promote model to production
output = client.transition_model_version_stage(
    name="LengthOfStaySparkModel",
    version=int(latest_model_version),
    stage="Production",
)
print(output.current_stage, " -> ", output.status)
# COMMAND ----------
# DBTITLE 1,Prepare to query production model
import os
os.environ["DATABRICKS_TOKEN"] = "<KEY>"
#
#
# Copy the below code from the Models tab after Model Serving has been enabled.
#
#
import os
import requests
import pandas as pd
def score_model(dataset: pd.DataFrame):
    url = 'https://adb-6739797150782991.11.azuredatabricks.net/model/LengthOfStaySparkModel/Production/invocations'
    headers = {'Authorization': f'Bearer {os.environ.get("DATABRICKS_TOKEN")}'}
    data_json = dataset.to_dict(orient='split')
    response = requests.request(method='POST', headers=headers, url=url, json=data_json)
    if response.status_code != 200:
        raise Exception(f'Request failed with status {response.status_code}, {response.text}')
    return response.json()
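The serving endpoint expects the `orient='split'` JSON layout, which separates column names, row index, and row data. A small sketch of what `to_dict(orient='split')` produces (assuming pandas is installed; the column names here are illustrative, not the full feature set):

```python
import pandas as pd

df = pd.DataFrame({"glucose": [5.5, 7.2], "bmi": [22.4, 30.1]})
payload = df.to_dict(orient="split")
print(payload["columns"])  # → ['glucose', 'bmi']
print(payload["data"])     # → [[5.5, 22.4], [7.2, 30.1]]
```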
# COMMAND ----------
# DBTITLE 1,Wait until model is ready
import requests
url = 'https://adb-6739797150782991.11.azuredatabricks.net/model/LengthOfStaySparkModel/Production/invocations'
# Wait until the model serving is ready
def wait_until_url_ready(url):
    headers = {'Authorization': f'Bearer {os.environ.get("DATABRICKS_TOKEN")}'}
    for _ in range(1000):
        r = requests.request(method='POST', headers=headers, url=url)
        if r.status_code == 415:
            print("Model serving: READY")
            break
        time.sleep(1)
wait_until_url_ready(url)
# COMMAND ----------
# DBTITLE 1,Real-time model scoring
# Model serving is designed for low-latency predictions on smaller batches of data
num_predictions = 5
served_predictions = score_model(testData.toPandas()[:num_predictions])
model_evaluations = list(modelPipeline.transform(testData.limit(num_predictions)).select('prediction').toPandas()['prediction'])
# Compare the results from the deployed model and the trained model
pd.DataFrame({
    "Model Prediction": model_evaluations,
    "Served Model Prediction": served_predictions,
})
# COMMAND ----------
# DBTITLE 1,Batch model scoring
from pyspark.sql.functions import struct
import mlflow.pyfunc
testData.write.format("delta").mode("overwrite").option("overwriteSchema", "true").save("/mnt/delta/lengthofstaybatch")
apply_model_udf = mlflow.pyfunc.spark_udf(spark, f"models:/LengthOfStaySparkModel/production")
# Read the "new data" from Delta
new_data = spark.read.format("delta").load("/mnt/delta/lengthofstaybatch")
# Apply the model to the new data
udf_inputs = struct(*(testData.columns))
new_data = new_data.withColumn(
    "prediction",
    apply_model_udf(udf_inputs)
)
display(new_data.select("eid","lengthofstay","prediction"))
# COMMAND ----------
# DBTITLE 1,Write predictions to Delta Lake
new_data.write.format("delta").mode("overwrite").option("overwriteSchema", "true").save("/mnt/delta/predictions")
spark.sql("DROP TABLE IF EXISTS predictions")
spark.sql("CREATE TABLE predictions USING DELTA LOCATION '/mnt/delta/predictions/'")
new_data = spark.sql("SELECT * FROM predictions")
display(new_data)
# COMMAND ----------
# MAGIC %md
# MAGIC ## Part B: Multiple model training
# MAGIC
# MAGIC Now that we have a model in production, let's see how we can improve its performance through hyperparameter tuning.
# MAGIC
# MAGIC We'll use MLflow and PySpark to explore a grid of hyperparameters to train an optimal gradient-boosted decision tree model.
# COMMAND ----------
# DBTITLE 1,Define GBTRegressor with hyperparameter training stages
from pyspark.ml.regression import GBTRegressor
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml import Pipeline
import numpy as np
# The next step is to define the model training stage of the pipeline.
# The following command defines a GBTRegressor model.
gbt = GBTRegressor(labelCol="lengthofstay")
# Define a grid of hyperparameters to test:
# - maxDepth: maximum depth of each decision tree
# - maxIter: iterations, or the total number of trees
paramGrid = ParamGridBuilder()\
    .addGrid(gbt.maxDepth, [2, 5])\
    .addGrid(gbt.maxIter, [10, 200])\
    .build()
# Define an evaluation metric. The CrossValidator compares the true labels with predicted values for each combination of parameters, and calculates this value to determine the best model.
evaluator = RegressionEvaluator(metricName="rmse", labelCol=gbt.getLabelCol(), predictionCol=gbt.getPredictionCol())
# Declare the CrossValidator, which performs the model tuning.
cv = CrossValidator(estimator=gbt, evaluator=evaluator, estimatorParamMaps=paramGrid)
hyperparamModelstages = stages + [cv]
hyperparamModelpipeline = Pipeline().setStages(hyperparamModelstages)
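ParamGridBuilder expands the added grids into their cartesian product, so two values of maxDepth times two values of maxIter yield four candidate parameter settings, each of which the CrossValidator then fits and scores. A pure-Python sketch of that expansion (the dict names are illustrative):

```python
from itertools import product

grid = {"maxDepth": [2, 5], "maxIter": [10, 200]}
# Cartesian product of the value lists, re-paired with their parameter names.
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(combos))  # → 4
print(combos[0])    # → {'maxDepth': 2, 'maxIter': 10}
```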
# COMMAND ----------
# DBTITLE 1,Train model with PySpark & MLflow
import mlflow
import mlflow.spark
with mlflow.start_run(run_name='los-experiment-hyperparam'):
    modelPipeline = hyperparamModelpipeline.fit(trainingData)
    test_metric = evaluator.evaluate(modelPipeline.transform(testData))
    mlflow.log_metric('test_' + evaluator.getMetricName(), test_metric)
    # Log the best model found by the CrossValidator.
    mlflow.spark.log_model(modelPipeline, artifact_path="spark-model", registered_model_name="LengthOfStaySparkModel")
# COMMAND ----------
# DBTITLE 1,Evaluate model
predictions = modelPipeline.transform(testData)
# Define an evaluation metric. The CrossValidator compares the true labels with predicted values for each combination of parameters, and calculates this value to determine the best model.
evaluator = RegressionEvaluator(metricName="rmse", labelCol=gbt.getLabelCol(), predictionCol=gbt.getPredictionCol())
rmse = evaluator.evaluate(predictions)
print("RMSE on our test set: %g" % rmse)
# COMMAND ----------
# DBTITLE 1,Wait for model to be registered
from mlflow.tracking import MlflowClient
client = MlflowClient()
model_version_infos = client.search_model_versions("name = '%s'" % "LengthOfStaySparkModel")
latest_model_version = max([model_version_info.version for model_version_info in model_version_infos], key=int)
wait_until_ready("LengthOfStaySparkModel", latest_model_version)
# COMMAND ----------
# DBTITLE 1,Promote new model to staging
output = client.transition_model_version_stage(
    name="LengthOfStaySparkModel",
    version=int(latest_model_version),
    stage="Staging",
)
print(output.current_stage, " -> ", output.status)
# COMMAND ----------
# DBTITLE 1,Batch model scoring
from pyspark.sql.functions import struct
import mlflow.pyfunc
testData.write.format("delta").mode("overwrite").option("overwriteSchema", "true").save("/mnt/delta/lengthofstaybatch")
apply_model_udf = mlflow.pyfunc.spark_udf(spark, f"models:/LengthOfStaySparkModel/staging")
# Read the "new data" from Delta
new_data = spark.read.format("delta").load("/mnt/delta/lengthofstaybatch")
# Apply the model to the new data
udf_inputs = struct(*(testData.columns))
new_data = new_data.withColumn(
    "prediction",
    apply_model_udf(udf_inputs)
)
display(new_data.select("eid","lengthofstay","prediction"))
# COMMAND ----------
# MAGIC %md
# MAGIC ## Part C: Integration
# MAGIC
# MAGIC MLflow also provides first-class support for Azure ML. By updating the tracking URI, the experiment information captured by MLflow can be recorded as a native Azure ML experiment.
# MAGIC
# MAGIC Let's build a sklearn model and see how Azure ML can help us manage the end-to-end experiment lifecycle.
# COMMAND ----------
# DBTITLE 1,Integrate MLflow with Azure ML
import mlflow
import mlflow.azureml
import azureml.mlflow
import azureml.core
from azureml.core.authentication import InteractiveLoginAuthentication
from azureml.core import Workspace
try:
    interactive_auth = InteractiveLoginAuthentication(tenant_id="<KEY>")
    # Get instance of the Workspace and write it to config file
    ws = Workspace(
        subscription_id = '<KEY>',
        resource_group = 'azure-databricks-demo',
        workspace_name = 'databricks-demo-ml-ws',
        auth = interactive_auth)
    # Writes workspace config file
    ws.write_config()
    print('Library configuration succeeded')
except Exception as e:
    print(e)
    print('Workspace not found')
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
# COMMAND ----------
# DBTITLE 1,Train model with sklearn & MLflow
import mlflow
import mlflow.sklearn
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn import ensemble
X = train_df.toPandas()
y = X['lengthofstay']
X = X.loc[:, X.columns != 'lengthofstay']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
params = {'n_estimators': 500,
'max_depth': 4,
'min_samples_split': 5,
'learning_rate': 0.1,
'loss': 'ls'}
column_transformer = ColumnTransformer([('onehot', OneHotEncoder(handle_unknown='ignore'), ["gender", "rcount", "facid"])])
clf = Pipeline(steps=[('preprocessor', column_transformer),
                      ('regressor', ensemble.GradientBoostingRegressor(**params))])
experimentName = "patient-los-sklearn"
mlflow.set_experiment(experimentName)
with mlflow.start_run():
    # Train the model using the training set
    clf.fit(X_train, y_train)
    # Make predictions using the test set
    preds = clf.predict(X_test)
    # squared=False makes mean_squared_error return the root mean squared error
    mlflow.log_metric('Root mean squared error', mean_squared_error(y_test, preds, squared=False))
    mlflow.sklearn.log_model(clf, artifact_path="model", registered_model_name="LengthOfStaySklearnModel")
# COMMAND ----------
# DBTITLE 1,Evaluate model
print("RMSE on our test set: %g" % mean_squared_error(y_test, preds, squared=False))
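Note that `squared=False` makes scikit-learn's `mean_squared_error` return the root mean squared error directly — equivalent to taking the square root of the plain MSE. A quick standard-library check of that equivalence (the sample values are hypothetical, not from the actual test set):

```python
import math

def mse(y_true, y_pred):
    # Mean of squared residuals.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # RMSE is just the square root of the MSE.
    return math.sqrt(mse(y_true, y_pred))

# Hypothetical predictions: residuals are -0.5, 0.5, 0.0.
y_true = [3.0, 5.0, 4.0]
y_pred = [2.5, 5.5, 4.0]
error = rmse(y_true, y_pred)  # sqrt((0.25 + 0.25 + 0.0) / 3)
```

RMSE is reported in the same units as the target (days of stay here), which makes it easier to interpret than raw MSE.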
# COMMAND ----------
# MAGIC %md
# MAGIC With Azure ML, we gain access to hosting options like Azure Kubernetes Service for our model. Let's see how we can provision a k8s cluster and deploy our model, all from Databricks.
# COMMAND ----------
# DBTITLE 1,Create AKS cluster
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.exceptions import ComputeTargetException
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
aks_name = 'aks-mlflow'
try:
    aks_target = AksCompute(ws, aks_name)
except ComputeTargetException:
    # Create the cluster
    aks_target = ComputeTarget.create(workspace=ws,
                                      name=aks_name,
                                      provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)
    print(aks_target.provisioning_state)
    print(aks_target.provisioning_errors)
# COMMAND ----------
# DBTITLE 1,Deploy AKS service
# Webservice creation using single command
from azureml.core.webservice import AksWebservice, Webservice
from azureml.exceptions import WebserviceException
try:
    webservice = AksWebservice(ws, 'los-aks')
    print(webservice.state)
except WebserviceException:
    # Set the web service configuration (default here, with App Insights enabled)
    aks_config = AksWebservice.deploy_configuration(enable_app_insights=True, compute_target_name=aks_name)
    (webservice, model) = mlflow.azureml.deploy(
        model_uri='azureml://experiments/patient-los-sklearn/runs/c8d244ca-5166-4e77-bafd-71d365e920eb/artifacts/model',
        workspace=ws,
        model_name='LengthOfStaySklearnModel',
        service_name='los-aks',
        deployment_config=aks_config,
        tags=None, mlflow_home=None, synchronous=True)
    webservice.wait_for_deployment()
# COMMAND ----------
# DBTITLE 1,Score against the service
import requests
import json
from pyspark.sql import Row
num_predictions = 5  # number of sample rows to send to the service
test_df = X_test[:num_predictions]
# `sample_input` is a JSON-serialized pandas DataFrame with the `split` orientation
sample_input = {
"columns": test_df.columns.tolist(),
"data": test_df.values.tolist()
}
headers = {'Content-Type':'application/json'}
# Authenticate against the service.
if webservice.auth_enabled:
    headers['Authorization'] = 'Bearer ' + webservice.get_keys()[0]
response = requests.post(
url=webservice.scoring_uri, data=json.dumps(sample_input),
headers=headers)
response_json = json.loads(response.text)
rdd1 = sc.parallelize(response_json)
row_rdd = rdd1.map(lambda x: Row(x))
res = sqlContext.createDataFrame(row_rdd, ['predictions'])
display(res)
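The request body above follows pandas' `split` orientation: a `columns` list plus a `data` list of rows. A self-contained sketch of building that payload and decoding a response using only the standard library (the feature names and response values are hypothetical):

```python
import json

# Hypothetical feature columns and two sample rows to score.
columns = ["rcount", "gender", "facid"]
rows = [[2, "F", "A"], [0, "M", "B"]]

# Serialize in the `split` orientation expected by the MLflow scoring endpoint.
payload = json.dumps({"columns": columns, "data": rows})

# The service returns a JSON array of predictions; simulate decoding one here.
fake_response_text = "[3.1, 2.4]"
predictions = json.loads(fake_response_text)
```

Keeping the column order identical to training is important: the `split` orientation carries no per-value field names, so the server reassembles the DataFrame purely by position.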
# COMMAND ----------
# MAGIC %md
# MAGIC ## Part D: looking ahead
# MAGIC
# MAGIC With classical machine learning, we've been able to quickly build models to help Contoso Hospital better forecast patient bed availability. This is a reactive view though. How can we further help Contoso Hospital by *reducing* patient stay time?
# MAGIC
# MAGIC One category worth investigating is patients diagnosed with pneumonia. By supporting physicians with a model that accelerates diagnosis, we can accelerate treatment and reduce the length of stay. Let's start by looking at our data sets.
# COMMAND ----------
# DBTITLE 1,Pneumonia patients & x-ray availability
from pyspark.sql import functions as F
prof_diag_smartd = spark.sql("SELECT * FROM lengthofstay_c")
xray_pneum = prof_diag_smartd.groupBy("pneum").agg(F.sum('xrayexamination'))
display(xray_pneum)
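The aggregation above groups patients by their pneumonia diagnosis flag and sums the x-ray examination flag within each group. The same group-by-sum can be sketched in plain Python (the records below are hypothetical):

```python
from collections import defaultdict

# Hypothetical patient records: pneumonia flag and x-ray examination flag.
records = [
    {"pneum": 1, "xrayexamination": 1},
    {"pneum": 1, "xrayexamination": 0},
    {"pneum": 0, "xrayexamination": 1},
    {"pneum": 0, "xrayexamination": 1},
]

# Equivalent of groupBy("pneum").agg(F.sum("xrayexamination")).
xray_by_pneum = defaultdict(int)
for r in records:
    xray_by_pneum[r["pneum"]] += r["xrayexamination"]
```

Because the flags are 0/1, summing them counts how many patients in each diagnosis group received an x-ray — the signal we need before training an imaging model.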
# COMMAND ----------
# DBTITLE 1,Check mounted x-ray directory
dbutils.fs.ls("dbfs:/mnt/xray/train/")
# COMMAND ----------
# DBTITLE 1,Display pneumonia x-ray images
image_df = spark.read.format("image").load("dbfs:/mnt/xray/train/PNEUMONIA/")
display(image_df)
# COMMAND ----------
# MAGIC %md
# MAGIC ## Clean Up
# MAGIC
# MAGIC Uncomment and run the following cell to clean up after completing the notebook.
# COMMAND ----------
# DBTITLE 1,Archive and delete all models
#from mlflow.tracking.client import MlflowClient
#client = MlflowClient()
#for j in ['LengthOfStaySklearnModel', 'LengthOfStaySparkModel']:
#    model_version_infos = client.search_model_versions("name = '%s'" % j)
#    for i in [model_version_info.version for model_version_info in model_version_infos]:
#        try:
#            client.transition_model_version_stage(
#                name=j,
#                version=i,
#                stage="Archived",
#            )
#            print('Archived...')
#        except Exception:
#            print('Already archived...')
#    # Delete the model
#    try:
#        client.delete_registered_model(name=j)
#        print('Deleted...')
#    except Exception:
#        print('Already deleted...')
import DateTimeFormatter, { PaddedToken } from '../../src/formats/DateTimeFormatter';
import { Token } from '../../src/Formatter';
const patterns = {
'yy': [ new Token('y', 2) ],
"yy'foo'": [ new Token('y', 2), 'foo' ],
'yy[gg]': 'Optional patterns are not supported',
'ppppppppy': [ new PaddedToken(8, ' ', 'y', 1) ],
'YYYY-MM-dd': [ new Token('Y', 4), '-', new Token('M', 2), '-', new Token('d', 2) ],
'YYYY.MM.dd HH:mm:ss': [ new Token('Y', 4), '.', new Token('M', 2), '.', new Token('d', 2), ' ', new Token('H', 2), ':', new Token('m', 2), ':', new Token('s', 2) ],
};
const dtf = new DateTimeFormatter();
describe('DateTimeFormatter.parsePattern', () => {
  for (const [pattern, expected] of Object.entries(patterns)) {
    test(pattern, () => {
      if (Array.isArray(expected)) {
        const result = dtf.tokenize(pattern);
        expect(result).toBeDefined();
        expect(result.length).toEqual(expected.length);
        for (let i = 0; i < expected.length; i++) {
          expect(result[i]).toEqual(expected[i]);
          expect(typeof result[i]).toEqual(typeof expected[i]);
        }
      } else {
        expect(() => {
          dtf.tokenize(pattern);
        }).toThrow(expected);
      }
    });
  }
});